Category Archives: Security

This Day in History: 1862 Largest Mass Execution in American History

Minnesota’s concentration camp of 1862 was set up to abuse and kill Native American elderly, women, and children. Source: Minnesota Historical Society
For some in America, the “holiday” weeks of December mark an extremely painful time in American history.

The state of Minnesota, for example, was founded on deception and violence used to steal land from Native Americans, a campaign that culminated in this month of 1862.

The Minnesota Historical Society (MNHS) explains how the encroaching U.S. sparked an intense war with Native Americans that ended in December 1862 with unjust trials and a very large number of executions:

The trials of the Dakota were conducted unfairly in a variety of ways. The evidence was sparse, the tribunal was biased, the defendants were unrepresented in unfamiliar proceedings conducted in a foreign language, and authority for convening the tribunal was lacking. More fundamentally, neither the Military Commission nor the reviewing authorities recognized that they were dealing with the aftermath of a war fought with a sovereign nation and that the men who surrendered were entitled to treatment in accordance with that status.

MNHS also relates how Dakota leaders were recorded as clearly humane and civilized in their explanations of self-defense, yet received barbaric treatment from the white nationalist militants they fought against:

You have deceived me. You told me that if we followed the advice of General Sibley, and gave ourselves up to the whites, all would be well; no innocent man would be injured. I have not killed, wounded or injured a white man, or any white persons. I have not participated in the plunder of their property; and yet to-day I am set apart for execution, and must die in a few days, while men who are guilty will remain in prison. My wife is your daughter, my children are your grandchildren. I leave them all in your care and under your protection. Do not let them suffer; and when my children are grown up, let them know that their father died because he followed the advice of his chief, and without having the blood of a white man to answer for to the Great Spirit.

Those of the Dakota who had fought in the war retreated for the winter, or were killed or captured. The U.S. military decided it wasn’t staffed to pursue them. Thus the only Dakota people brought into custody by the U.S. were the elderly, women, and children; nearly 2,000 people who had nothing to do with the war were lured in by the U.S. military and then death-marched for days into a concentration camp to be abused and to die.

They lost everything. They lost their lands. They lost all their annuities that were owed them from the treaties. These are people who were guilty of nothing.

Just as many of the Dakota were very obviously peaceful and kind people at the time, some whites did try to take a moral stand to account for settler crimes against humanity:

Henry Whipple traveled to Washington to meet with Lincoln; he explained to the president that Dakota grievances stemmed in large part from the greed, corruption, and deceit of government agents, traders, and other whites. Lincoln took what he called “the rascality of this Indian business” into consideration and granted clemency to most of those sentenced to die.

This was far from sufficient to curtail what the Minnesota Governor proclaimed with great fanfare: “The Sioux Indians of Minnesota must be exterminated…”

Minnesota History Magazine further relates that a prominent leader of the Dakota people a year later was murdered by white settlers who simply noticed him eating wild raspberries and decided to hunt, kill, decapitate and scalp him for that alone:

Even if a state of war had existed in 1863, the Lamsons’ action could not be defended as legal. They were mere civilians, who under international law have no right to take up arms against the enemy and who will be hanged summarily if they do. The ordinary law of murder would apply to them. […] If killing in reliance upon the adjutant general’s orders would be murder under the law in force in 1863, obviously killing before any orders were issued would be an even stronger case of murder. Thus Little Crow was tendered a posthumous apology. One must reach the conclusion that in strict law the Lamsons were provocateurs and murderers.

Shot on sight without any questions, Little Crow was a nationally recognized and celebrated man who had negotiated the Treaties of Traverse des Sioux and Mendota in 1851. It was he who had moved a band of Dakota from their massive 25-million-acre territory onto a tiny (20 mile by 70 mile) reservation.

There were many tens of thousands of Native Americans said to be in the region at the time.

In 1850, the white population of what would soon be the state of Minnesota stood at about 6,000 people. The Indian population was eight times that, with nearly 50,000 Dakota, Ojibwe, Winnebago and Menominee living in the territory. But within two decades, as immigrant settlers poured in, the white population would mushroom to more than 450,000.

Ten years later, by the war of 1862 (and after being coerced into an even worse treaty in 1858), Little Crow had become known as the Dakota leader who took a principled and fair stand against his former trading partner, U.S. General Sibley.

The U.S. government allegedly had offered the Dakota only a few cents per acre for their entire ceded territory, along with promises of annuity payments and food supplies. Yet while the land was taken away, the agreed-upon payments and food didn’t come. It was in this context that white settlers flooded the area historically inhabited by the Dakota.

Congress passes the Homestead Act, a law signed by President Abraham Lincoln on May 20, 1862, offering millions of acres of free land to settlers who stay on the land for five years. The act brings 75,000 people to Minnesota over three years. To qualify for 160 free acres, settlers have to live on it for five years, farm and build a permanent dwelling. Those able to spend the money can buy the 160 acres at $1.25 an acre after living on it for six months.

The federal government was effectively buying land for cheap and then selling 160-acre parcels of it for either $200 (20X the cost) or five years of farming and construction.

Since the tiny reservation wasn’t producing food as promised, and the U.S. government was intentionally withholding the payments and supplies they needed to survive, huge numbers of Dakota faced starvation and demanded quick restitution.

On top of that, white settlers had been illegally encroaching into even the tiny Dakota reservation. The Dakota had no choice but to reassert rights to the money, food, and land they had already negotiated.

Tension grew from the U.S. refusing to help, withholding food and money from the now trapped Dakota population in an attempt to “force conformance to white ideals” of a “Christian” lifestyle.

While Dakota parents watched their children starve to death, pork and grain filled the Lower Sioux Agency’s new stone warehouse, a large square building of flat, irregularly shaped stones harvested from the river bottoms. […] “So far as I’m concerned, if they are hungry, let them eat grass or their own dung,” [warehouse owner] Myrick said.

The U.S. strategically reneged on agreements and intentionally starved Dakota populations into desperation, before ultimately using their attempts at self-defense as justification for mass unjust executions and murder. This was followed by Minnesota settlers banishing the native population entirely from its own historic territory, under penalty of death, into concentration camps, and offering rewards to anyone who could trap and kill Native Americans (Minnesota’s government offered a reward of up to $200 — roughly $4,000 in 2019 terms — for non-white human scalps).

At a higher level, the race in 1862 to settle territory inhabited and owned by Native Americans had been complicated the year before by militant southern states starting a Civil War to violently force expansion of slavery into any new states. Thus, just as John Brown’s attempt to incite abolition got him executed in 1859 as a “traitor” to America, the Dakota people fighting for freedom from tyranny three years later in 1862 were unjustly tried by Minnesota settlers and executed on December 26.

    Old John Brown’s body lies moldering in the grave,
    While weep the sons of bondage whom he ventured all to save;
    But tho he lost his life while struggling for the slave,
    His soul is marching on.

    John Brown was a hero, undaunted, true and brave,
    And Kansas knows his valor when he fought her rights to save;
    Now, tho the grass grows green above his grave,
    His soul is marching on.

    He captured Harper’s Ferry, with his nineteen men so few,
    And frightened “Old Virginny” till she trembled thru and thru;
    They hung him for a traitor, they themselves the traitor crew,
    But his soul is marching on.

John Brown had witnessed far too many Americans being murdered under the tyranny of expansionist slavery when he said there was no choice but to fight back, calling for wider armed defense and predicting war. John Steuart Curry’s impressive mural “Tragic Prelude,” which depicts Brown’s conviction against tyranny, can be seen in the Kansas State Capitol.

Pennsylvania Man Arrested in Hawaii for California Hate Crimes

Photo courtesy George Haroonian

The latest report suggests a man briefly visited and desecrated a religious center (2am, Saturday December 14th) in California before fleeing to Hawaii:

A Pennsylvania man accused in the recent vandalism of a Beverly Hills synagogue was arrested in Hawaii and is being charged under a hate crime enhancement, police said Wednesday. Anton Nathaniel Redding, 24, of Millersville, Pennsylvania, was arrested in Kona…

Allegedly the Hawaii Criminal Justice Data Center (CJDC) used its five-year-old facial recognition system to track and capture him.

That Millersville reference is interesting because of a series of similar hate crimes there last year, although I haven’t seen anyone make the connection yet:

Multiple hate crimes investigations are underway at Millersville University. The university said derogatory race-related graffiti against black people was found Friday in a men’s restroom in the Student Memorial Center. Two other incidents are anti-Semitic and considered hate crimes in the wake of the Pittsburgh synagogue massacre last weekend.

The year before, Millersville was called an “alt-right recruiting ground” after hate group recruitment posters were revealed in the news:

Signs recently posted by those that espouse white supremacist and neo-Nazi philosophies have appeared on MU bulletin boards and property. […] Over the last two days, stickers and posters were found on campus promoting Identity Evropa…a white supremacist group in the United States, established in March 2016.

In further related news the latest data shows far-right terrorism in America has increased 320% since 2014 and “every extremist killing in the U.S. in 2018 was linked to far-right individuals or organizations”.

Facebook Fails Basic Audit of 2019 Civil Rights Legal Settlement

Dion Diamond of the Non-Violent Action Group is harassed by white nationalists during a sit-in at the Cherrydale Drug Fair in Arlington, Virginia. Despite physical blows and lit cigarettes being thrown, two weeks of protests in June 1960 led to Arlington, Alexandria and Fairfax restaurants removing explicit racism from their services. Gus Chinn/Courtesy of the DC Public Library Washington Star Collection/Washington Post

A fascinating new paper (Algorithms That “Don’t See Color”: Comparing Biases in Lookalike and Special Ad Audiences) audits an obfuscated security fix of Facebook algorithms and finds a giant vulnerability remains.

The conclusion (spoiler alert) is that Facebook’s ongoing failure to fix its platform security means it should be held accountable for an active role in unfair/harmful content distribution.

Facebook itself could also face legal scrutiny. In the U.S., Section 230 of the Communications Act of 1934 (as amended by the Communications Decency Act) provides broad legal immunity to Internet platforms acting as publishers of third-party content. This immunity was a central issue in the litigation resulting in the settlement analyzed above. Although Facebook argued in court that advertisers are “wholly responsible for deciding where, how, and when to publish their ads”, this paper makes clear that Facebook can play a significant, opaque role by creating biased Lookalike and Special Ad audiences. If a court found that the operation of these tools constituted a “material contribution” to illegal conduct, Facebook’s ad platform could lose its immunity.

Facebook’s record on this continues to puzzle me. They have run PR campaigns about concern for general theories of safety, yet always seem to be engaged in pitiful disregard for the rights of their own users.

It reminds me of a million years ago, when I led security for Yahoo “Connected Life” and my team had zero PR campaigns yet took threats incredibly seriously. We regularly got questions from advertisers asking for access or identification that could harm user rights.

A canonical test, for example: a global brand asks for everyone’s birthday for an advertising campaign. We trained day and night to handle this kind of request, which we would push back on immediately to protect trust in the platform.

Our security team was committed to preserving rights and would start conversations with a “why” and sometimes would get to four or five more. As cheesy as it sounds we even had t-shirts printed that said “why?” on the sleeve to reinforce the significance of avoiding harms through simple sets of audit steps.

Why would an advertiser ask for a birthday? A global brand would admit they wanted ads to target a narrow age group. We consulted with legal and offered them instead a yes/no answer for a much broader age group (e.g. instead of asking for birthdays, they could ask whether a person was older than 13). The big brand accepted our rights-preserving counter-proposal, and we verified they saw nothing more from our system than an anonymous, binary yes/no.
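Here is a minimal sketch of that kind of rights-preserving check in Python (the function and variable names are my own hypothetical choices, not anything from Yahoo’s actual system): the platform keeps the birthday and only ever returns an anonymous yes/no for a broad age bracket.

    # Minimal sketch (hypothetical names) of a data-minimizing age check:
    # the platform keeps the birthday; the advertiser only sees True/False.
    from datetime import date
    from typing import Optional

    def is_at_least_age(birthday: date, years: int, today: Optional[date] = None) -> bool:
        """Answer a yes/no age question without ever revealing the birthday."""
        today = today or date.today()
        # Whole years elapsed, safe across leap days.
        age = today.year - birthday.year - (
            (today.month, today.day) < (birthday.month, birthday.day)
        )
        return age >= years

    # Held by the platform, never shared:
    user_birthday = date(2004, 6, 1)

    # The only thing the advertiser's campaign ever receives:
    print(is_at_least_age(user_birthday, 13))  # True or False, nothing more

The design choice is the point: answer the broadest question that satisfies the business need, then audit that nothing beyond the boolean ever leaves the system.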

This kind of fairness goal and confidentiality test procedure was a constant effort and wasn’t rocket-science, although it was seen as extremely important to protecting trust in our platform and the rights of our users.

Now fast-forward to Facebook’s infamous reputation for leaking data (like allegedly billions of data points per day fed to Russia), and their white-male dominated “tech-bro” culture of privilege with its propensity over the last ten years to repeatedly fail user trust.

It seems amazing that the U.S. government hasn’t moved forward with its plan for putting Facebook executives in jail. Here’s yet another example of how Facebook leadership fails basic tests, as if they can’t figure security out themselves:

Earlier this year the company was the subject of a massive civil rights lawsuit.

The suit comes after a widely-read ProPublica article in which the news organization created an ad targeting people who were interested in house-hunting. The news organization used Facebook’s advertising tools to prevent the ad from being shown to Facebook users identified as having African American, Hispanic, and Asian ethnic affinities.

As a result of this lawsuit Facebook begrudgingly agreed to patch its “Lookalike Audiences” tool and claimed the fix would make it unbiased.

The tool originally earned its name by taking a source audience from an advertiser and then targeting “lookalike” Facebook users. “Whites-only” apparently would have been a better name for how the tool was being used, according to the lawsuit examples.

The newly patched tool was claimed to remove the “whites-only” Facebook effect by blocking the algorithm from input of certain demographic features in a source audience. The tool also unfortunately was renamed to “Special Ad Audiences” allegedly as an “inside” joke to frame non-white or diverse audiences as “Special Ed” (the American term pejoratively used to refer to someone as stupid).

The simple audit of this patch, as described by authors of the new paper, was submitting a biased source audience (with known skews in politics, race, age, religion etc) into parallel Lookalike and Special Ad algorithms. The result of the audit is…drumroll please…Special Ad audiences retain the biased output of Lookalike, completely failing the Civil Rights test.

Security patch fail.

In great detail the paper illustrates how removing demographic features for the Special Ad algorithm did not make the output audience differ from the Lookalike one. In other words, and most important of all, blocking demographic inputs fails to prevent Facebook’s algorithm from generating predictably biased output.
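To make that point concrete, here is a small synthetic sketch in Python (entirely made-up data; this is not the paper’s audit code and not Facebook’s algorithm): even with the protected attribute stripped from the inputs, a nearest-neighbor “lookalike” expansion of a skewed source audience reproduces the skew through correlated proxy features.

    # Synthetic illustration of biased lookalike expansion.
    # Hypothetical data; not the paper's audit code or Facebook's algorithm.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)                              # protected attribute (0/1)
    proxies = rng.normal(group[:, None] * 1.5, 1.0, (n, 5))    # correlated proxy features

    # Biased source audience: drawn only from group == 1
    source = rng.choice(np.where(group == 1)[0], size=500, replace=False)

    def lookalike(features, source, k=10):
        """Expand the source audience to its nearest neighbors among all users."""
        nn = NearestNeighbors(n_neighbors=k).fit(features)
        _, idx = nn.kneighbors(features[source])
        return np.unique(idx.ravel())

    with_attr = lookalike(np.column_stack([group, proxies]), source)   # "Lookalike"
    without_attr = lookalike(proxies, source)                          # "Special Ad"-style fix

    print("platform baseline, share of group 1:", group.mean())
    print("audience with attribute as an input:", group[with_attr].mean())
    print("audience with attribute removed:    ", group[without_attr].mean())
    # Both expanded audiences stay heavily skewed toward group 1.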

As tempting as it is to say we’re working on the “garbage input, garbage output” problem, we shouldn’t be fooled by anyone claiming their algorithmic discrimination will magically be fixed by just adjusting inputs.

Could NASCAR Be America’s Blueprint for Driverless Ethics?

New Yorker, April 17, 1995, Leo Cullum

Years ago I wrote about the cheating of NASCAR car drivers. And recently at the last BSidesLV conference I pointed out in my talk how human athletes in America get banned for cheating, while human car drivers get respect.

Anyway, I was reading far too much on this topic when I started thinking about how NASCAR studies from ten years ago on ending cheating could be a compelling area of research for ethics in driverless cars:

Proposed solutions include changing the culture within the NASCAR community, as well as developing ethical role models, both of which require major action by NASCAR’s top managers to signal the importance of ethical behavior. Other key stakeholders such as sponsors and fans must create incentives and rewards for ethical behavior, and consider reducing or ending support for drivers and teams that engage in unethical conduct.

That’s some high-minded analysis given the inaugural race at Talladega (Alabama International Motor Speedway) had a 1969 Ford with its engine set back nearly a foot from stock (heavier weight distribution to the rear — violating the rules).

This relocation of the engine was easily seen by any casual observer yet the car was allowed to race and finished 9th. Bill France owned the car. Yes, that Bill France. The same guy who owned the track and NASCAR itself…entered an illegal car.

An illegal car actually is icing on the cake, though. Bill France built this new track with unsafe parameters and when drivers tried to boycott the conditions, he solicited drivers to break the safety boycott and issued free tickets to create an audience.

NASCAR retells a story full of cheating as the success that comes from ignoring ethics:

“I really admired that he told everybody to kiss his ass, that that race was going to run,” Foyt said.

The sentiment of getting everyone together to agree to an ethical framework sounds great, until you realize NASCAR stands for the exact opposite. It seems to have a history where cheating without getting punished is their very definition of winning.

Robots Get “Butter” Driving Skills

Tech philosophers in America watch closely as highly-individualistic, short-term, investor-run truck companies try to behave as much like trains on shared infrastructure as possible, without realizing the societal benefits of a train

Nvidia boasts in the Sacramento Bee of a truck that was able to drive a load of butter across America:

…the first coast-to-coast commercial freight trip made by a self-driving truck, according to the company’s press release. Plus.ai announced on Tuesday that its truck traveled from Tulare, California, to Quakertown carrying over 40,000 pounds of Land O’Lakes butter.

Mercury News says it took three days on two interstate routes (i.e. roads where human life is prohibited) and didn’t experience any problems.

The truck, which traveled on interstates 15 and 70 right before Thanksgiving, had to take scheduled breaks but drove mostly autonomously. There were zero “disengagements,” or times the self-driving system had to be suspended because of a problem, Kerrigan said.

Indeed. The truck appears to have operated about as much like a train as one could get, although at much higher costs. If we had only invested a similar amount of money into startups to achieve a 250 mph service on upgraded tracks across America…

2800 miles at 250 mph is just over 11 hours. Using electric line-of-sight delivery drones to load and unload the “last mile” from high-speed train stations at either end would further expedite delivery time.

Trains = 12 hours or less and clean
Trucks = 24 hours or more + distributed environmental pollutants (fuel exhaust, tire wear, brake wear, wiper fluids…)
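For what it’s worth, the back-of-envelope arithmetic behind that comparison is trivial (the distance, speed, and reported trip time are the figures assumed above):

    # Back-of-envelope comparison; figures are the assumptions from the text above.
    route_miles = 2800
    rail_mph = 250
    truck_days = 3                        # reported coast-to-coast drive time

    rail_hours = route_miles / rail_mph   # ~11.2 hours of line-haul
    truck_hours = truck_days * 24         # ~72 hours door to door

    print(f"high-speed rail line-haul: ~{rail_hours:.1f} hours (plus last-mile drone handoff)")
    print(f"driverless truck (reported): ~{truck_hours} hours")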

Trucks can’t improve their time much more, because they start becoming more and more of a threat to others trying to operate safely on the Interstate, where every vehicle runs its own collision avoidance. And it’s exactly the delta between operating speeds of vehicles on the same lines that generates the highest risk of disaster.

The absolute best whey (yes, I said it) to skim time going forward should be clear: it’s trains. But we’re talking long-term thinking here, which investors lurking around startups for 2-year 20% returns on their money have never been known to embrace.

Driverless trucks in this context are a form of future steampunk, like someone boasting today their coal-fired dirigible has upgraded to an auto-scooper so they no longer need to abduct children into forced labor.

Congratulations on being less of a selfish investor threat to others, I guess? Now maybe try adopting a socially conscious model instead.

1953 Machina Speculatrix: The First Swarm Drone?

A talk I was watching recently suggested that researchers had finally cracked, in 2019, how robots could efficiently act like a swarm. Their solution? Movement based entirely on a light sensor.

That sounded familiar to me so I went back to one of my old presentations on IoT/AI security and found a slide showing the same discovery claim from 1953. Way back then people used fancier terms than just swarm.

W. Grey Walter built jellyfish-like robots that were reactive to their surroundings: a light sensor, a touch sensor, a propulsion motor, a steering motor, and a two-vacuum-tube analog computer. He called their exploration behavior Machina Speculatrix, and the individual robots were named Elmer or Elsie (ELectro MEchanical Robots, Light Sensitive).

The rules for swarm robots were as simple then as they are today, as one should expect from swarms:

If light moderate (safe)
Then move toward
If light bright (unsafe)
Then move away
If battery low (hungry)
Then return for charge
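As a rough sketch of just how little logic those rules require (the thresholds and names below are my own assumptions, not a reconstruction of Walter’s analog circuit), the whole behavior fits in a few lines of Python:

    # Minimal sketch of the three rules above; thresholds and names are assumptions,
    # not a reconstruction of Walter's two-vacuum-tube circuit.
    from dataclasses import dataclass

    @dataclass
    class Tortoise:
        battery: float = 1.0  # 0.0 (empty) .. 1.0 (full)

        def step(self, light: float) -> str:
            """Decide one move from a single light reading (0.0 dark .. 1.0 bright)."""
            self.battery -= 0.01              # every step costs a little energy
            if self.battery < 0.2:            # hungry
                return "return for charge"
            if light > 0.8:                   # bright: unsafe
                return "move away"
            if light > 0.3:                   # moderate: safe
                return "move toward"
            return "wander"                   # dark: keep exploring

    bot = Tortoise()
    for reading in (0.1, 0.5, 0.9):
        print(reading, "->", bot.step(reading))
    bot.battery = 0.1                         # simulate a drained battery
    print(0.5, "->", bot.step(0.5))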

Car Runs on Your Data? Hot Rod it With Some Decentralization

Cool kids run their rods on decentralized oil, thanks to Diesel, who not only warned of centralization dangers but invented solutions to them

A month ago I was on a call with some top security experts in the industry. We were discussing my upcoming presentation about exciting control options and data privacy from applying decentralization standards to the automotive industry.

To put it briefly I was explaining how web decentralization standards can fix growing issues of data ownership and consent in automotive technology, a fascinating problem to solve which I have spoken about at many, many conferences over the past seven years.

Here’s one of my slides from 2014, which hopefully increased awareness about automotive data ownership and consent risks:

Much to my surprise I see this issue just hit the big papers for some well-deserved attention, albeit I also see it may be for the wrong reasons.

The Washington Post has released what some are calling a viral phrase:

Cars now run on the new oil — your data.

While I can appreciate journalist bait to gather eyeballs, that message today flies in the face of other recent headlines.

People really already should know that phrase is problematic, as repeatedly flagged everywhere by, well, everyone.

  • Forbes: “Here’s Why Data Is Not The New Oil”
  • BBC: “Data is not the new oil”
  • Financial Times: “Data is not the new oil”
  • Harvard Business Review: “Big Data is Not the New Oil”
  • WeForum: “You may have heard data is the new oil. It’s not”
  • Wired: “No, Data Is Not the New Oil”
  • …data isn’t the new oil, in almost any metaphorical sense, and it’s supremely unhelpful to perpetuate the analogy…

That’s just to frame the many problems with this article. Here’s another big one. The author wrote:

We’re at a turning point for driving surveillance — and it’s time for car makers to come clean…

Haha, turning point. I get it. That pun should have led to “it’s time for car makers to choose a direction”. Missed opportunity.

But seriously, the turning point for many of the issues in this article surely was years ago. He raises confidentiality and portability issues, for example. Why is now the turning point for these instead of 2014, when encryption options exploded? Or how about 2012, when a neural net run on GPUs crushed the ImageNet competition? I see no explanation for why these are present concerns rather than past and overdue ones.

I’d say the problem is so old we’re already at the solutions phase, long past the identification and criticism.

Please see any one of my many many presentations on this since 2012.

Here’s another big one. The author wrote:

I had help doing a car privacy autopsy from Jim Mason, a forensic engineer. That involved cracking open the dashboard to access just one of the car’s many computers. Don’t try this at home — we had to take the computer into the shop to get repaired.

Sigh. Please do try this at home.

Right to repair is a very real facet of this topic. Cracking a dashboard for access is also very normal behavior and more people should be doing it.

When I volunteered my own garage space in the Bay Area, for example, I saw the reverse effect. Staff of several automotive companies came to join random people of the city in some good old community cracking of dashboards.

A guy from [redacted automotive company] said “…what do you mean you don’t bring rental cars to take apart and hack for a day? You should target ours and tell us about it.” Yikes. That’s not ethical.

The 1970s “hot-rod” culture in today’s terms is a bunch of us sitting around in a controlled garage with disassembled junkyard parts (not operational rental or borrowed cars on the street!), our clamps on wires running to Linux laptops deciphering CANbus codes.
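For anyone curious what deciphering CANbus codes looks like in practice, here is a minimal sketch assuming a Linux SocketCAN interface named can0 and the python-can library; your adapter, channel name, and the IDs you see will differ.

    # Minimal CAN sniffing sketch; assumes a SocketCAN interface ("can0") already
    # brought up on Linux and the python-can library (pip install python-can).
    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    counts = {}
    try:
        for msg in bus:  # yields raw frames as they arrive; Ctrl-C to stop
            counts[msg.arbitration_id] = counts.get(msg.arbitration_id, 0) + 1
            # Watching which IDs change while you press pedals or buttons is the
            # usual first step in working out which ID carries which signal.
            print(f"id=0x{msg.arbitration_id:03x} dlc={msg.dlc} data={msg.data.hex()}")
    except KeyboardInterrupt:
        print("frames seen per arbitration ID:", counts)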

This journalist desperately needs to participate sometime in a local car hacking community or at least read “Zen and the Art of Motorcycle Maintenance”….

It should not be hard for a machine owner to crack it open, when market regulations are working right. At least the journalist did not say an “idiot light” forced him to take his computer to the manufacturer for help.

Anyway, back to the point, the data models in automotive need to adopt decentralization standards if they want to solve for data ownership issues raised in this story.

But for the thousands you spend to buy a car, the data it produces doesn’t belong to you. My Chevy’s dashboard didn’t say what the car was recording. It wasn’t in the owner’s manual. There was no way to download it.

To glimpse my car data, I had to hack my way in.

In summary, data is not the new oil, right to repair means healthy markets trend towards hardware access made easy, and concerns about confidentiality and portability of data in cars are being addressed with emerging decentralization standards.

Sorry this article may not come with a viral click-bait title, but I’m happy anytime to explain in much more detail how technical solutions are emerging already to solve data ownership concerns for cars and give examples with working code.

Quebec Converts Crosswalks to Pop-up Car Barriers

Based on the new Quebec initiative, and old Dutch campaign against murder with cars, this is my draft image for the kind of mechanical pop-up drivers need to see when they approach any pedestrian crossing area

Here’s a shocking revelation: crosswalks don’t protect pedestrians.

As you are probably tired of hearing me say here, crosswalks are an unfair conspiracy by American car manufacturers to remove non-motorized forms of transportation (including pedestrians and especially women on bicycles) from the road.

Creating crosswalks and enforcing them are by their nature extremely political acts. They transfer a huge amount of power to car manufacturers and car owners, and away from everyone else. The following paragraph, from a 2019 paper suggesting that the “street view” of your house predicts your chance of dying, should surprise nobody:

It turns out that the car you drive is a surprisingly reliable proxy for your income level, your education, your occupation, and even the way you vote in elections.

Using cars as a proxy for power (enabling privilege and holding down the poor) is an inversion of what was supposed to happen with “freedom” of movement in America.

If you read the history of stop-lights in 1860s London, for example, a red light and a lowered arm signaled to traffic that it had to stop being a threat. That’s right, stop-lights were initially designed (just thirty years after the concept of police was invented by Robert Peel) to allow pedestrians to move about freely. Somehow that concept was completely flipped, to where pedestrians were pushed into a box (and harassed by police).

Consider how a lack of crosswalks, “ridiculously missing” as some would say, has even been linked to intentional unequal treatment of city residents.

Police detaining and questioning people for not using crosswalks (see points above) has repeatedly proven to be racist, to top it all off.

In brief, if you see a lot of cars on roads and few bicycles, check your value system for being anti-American, let alone anti-humanitarian.

Car manufacturers conspired through crosswalk lobbying to shift all rights away from residents in order to force expensive cars to be purchased for “freedom” to move about safely.

This devious plot runs so thick, Uber allegedly emphasized to its drivers that it would be better to sit in crosswalks to pick up passengers. The logic is they don’t care about blocking pedestrians, but do care about blocking other cars (note some US states also have laws encouraging this anti-pedestrian move).

Also worth noting is the flagship propaganda from Tesla this year has been bulletproof oversized trucks better suited for war zones where freedoms are missing than the public spaces of streets originally encouraging freedom of human movement and play.

Given the American context of turning streets into corporate-controlled death zones, the problem has been bleeding into Canada’s famous culture of “niceness”.

Thus Quebec has posted a video of crosswalks attempting to physically stop cars by telling them to be more polite to others:

It raises the question of what the damage or fine would be for running over the pop-ups, as they don’t seem to be designed (aside from the surprise) to make cars incur a cost for disobeying them.

It also reminds me of the Ukrainian art experiment in 2011 (regularly featured in my talks as an example test for driverless car engineering) that popped up human-shaped balloons in crosswalks to stop speeding cars (triggered by a radar gun).

What if these pop-ups in Quebec were shaped like humans instead of just rectangles? That would be an even greater surprise with more psychological deterrence.

I like that the pop-ups are a throwback to the original concept of the 1866 traffic stop lights of London, England.

However it seems the Quebec design is more of an art experiment for shock/suggestion and education than a real safety control, and on that note the pop-ups could be a lot more creative and shocking.

I mean, if you’re going to pop up a bunch of columns, how about making them rise to a scale that represents the increasing year-over-year death rate of pedestrians killed by cars? Then stick a “stop killing our kids” message on that barrier…as Small Wars Journal has illustrated:

Small Wars journal graph of eight basic effects at play in the information environment

Facebook Failed to Encrypt Data, Failed to Notice Breach, Didn’t Notify Victims for a Month

Facebook management has recklessly steered into obvious privacy icebergs causing hundreds of millions of users to suffer during its brief history, and yet the company never seems to hit bottom

A series of timeline delays in another Facebook breach story seem rather strange for 2019.

This breach started with a physical break-in November 17th and those affected didn’t hear about it for nearly a month, until December 13th.

The break-in happened on Nov. 17, and Facebook realized the hard drives were missing on Nov. 20, according to the internal email. On Nov. 29, a “forensic investigation” confirmed that those hard drives included employee payroll information. Facebook started alerting affected employees on Friday Dec. 13.

The company didn’t notice that hard drives with unencrypted data were missing for half a week, which itself is unusual. The robbery was on a Sunday, and they only realized the drives were gone three days later, on a Wednesday.

Then it took until Nov. 29, a Friday, for a forensic investigation to confirm that the missing drives stored unencrypted, sensitive personal identity information, and another two weeks after that to start notifying those affected.

This is like reading news from ten years ago, when large organizations still didn’t quite understand or practice the importance of encryption, removable media safety and quick response. Did it really happen in 2019?
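The decade-old baseline being referenced here is simple: sensitive exports never touch removable media in the clear. A minimal sketch of that habit (file paths are hypothetical, and the key handling is deliberately simplified), using the pyca/cryptography library’s Fernet authenticated encryption:

    # Minimal sketch: encrypt a sensitive export before it ever reaches removable media.
    # File paths are hypothetical; keys belong in a key manager, not next to the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, fetched from a key management service
    fernet = Fernet(key)

    with open("payroll_export.csv", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("payroll_export.enc", "wb") as f:   # only ciphertext is written out
        f.write(ciphertext)

    # Later, on an authorized machine with access to the key:
    plaintext = fernet.decrypt(ciphertext)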

It sounds like someone working at Facebook either had no idea unencrypted data on portable hard drives is a terrible idea, or they were selling the data.

The employee who was robbed is a member of Facebook’s payroll department, and wasn’t supposed to have taken the hard drives outside the office.

“Wasn’t supposed to have taken…” is some of the weakest security language I’ve heard from a breached company in a long time. What protection and detection controls were in place? None?

Years ago there was a story about a quiet investigation at Facebook that allegedly discovered staff were pulling hard-drives out of datacenters, flying them to far away airports and exchanging them for bags of money.

It was similar to the very recent story of journalists uncovering that Facebook staff were taking $3K/month in bribes to help external attackers bypass internal security.

Of course many other breaches have proven how internal staff who observe weak security leadership may attempt to monetize data they can access, whether it belongs to users or staff.

The man accused of stealing customer data from home mortgage lender Countrywide Financial Corp. was probably able to download and save the data to an external drive because of an oversight by the company’s IT department.

The insider threat is real and happens far too often.

I also think we shouldn’t wave this Facebook story off as just involving data on 30,000 staff instead of the more usual customer data.

First, staff often are customers too. Second, when you’re talking tens of thousands of people impacted, that’s a significant breach, and designating them as staff versus users is shady. A breach of personal data is a breach.

And there’s plenty of evidence that stolen data when found on unencrypted drives, regardless of whose data it is, can be sold on an illegal market.

This new incident however reads less like that kind of sophisticated insider threat and more like the generic sloppy security that used to be in the news ten years ago.

Kaiser Permanente officials said the theft occurred in early December after an employee left the drive inside the car at her home in Sacramento. A week after the break-in, the unidentified employee notified hospital officials of the potential data breach.

Regardless of whether it was an insider threat, a targeted physical attack, or just disappointingly sloppy management practices and thoughtless staff…Facebook’s December 13 notice of a November 17 breach seems incredibly slow for 2019, given GDPR and the simple fact everyone should know: notifications are meant to happen within three days.

I’m reminded of the Titanic reacting slowly and mostly ignoring four days of ice notifications.

1:45 P.M. “Amerika” passed two large icebergs in 41.27 N., 50.8 W.

9:40 P.M. From “Mesaba” to “Titanic” and all east-bound ships: Ice report in latitude 42º N. to 41º 25’ N., longitude 49º W to longitude 50º 30’ W. Saw much heavy pack ice and great number large icebergs. Also field ice. Weather good, clear.

11:00 P.M. Titanic begins to receive a sixth message about ice in the area, and radio operator Jack Phillips cuts it off, telling the operator from the other ship to “shut up.”

Can Facebook’s CSO be Held Liable for Atrocity Crimes?

Something like this image representing weaponized social media may be the next addition to The Atlantic “Brief Visual History of Weapons”.

New legal research moves us closer towards holding social media executives criminally liable for the Rohingya crisis and other global security failures under their watch:

…this paper argues that it may be more productive to conceptualise social media’s role in atrocity crimes through the lens of complicity, drawing inspiration not from the media cases in international criminal law jurisprudence, but rather by evaluating the use of social media as a weapon, which, under certain circumstances, ought to face accountability under international criminal law.

The Guardian gave a scathing report of how Facebook was used in genocide:

Hate speech exploded on Facebook at the start of the Rohingya crisis in Myanmar last year, analysis has revealed, with experts blaming the social network for creating “chaos” in the country. […] Digital researcher and analyst Raymond Serrato examined about 15,000 Facebook posts from supporters of the hardline nationalist Ma Ba Tha group. The earliest posts dated from June 2016 and spiked on 24 and 25 August 2017, when ARSA Rohingya militants attacked government forces, prompting the security forces to launch the “clearance operation” that sent hundreds of thousands of Rohingya pouring over the border. […] The revelations come to light as Facebook is struggling to respond to criticism over the leaking of users’ private data and concern about the spread of fake news and hate speech on the platform.

The New Republic referred to Facebook’s lack of security controls at this time as a boon for dictatorships:

[U.N. Myanmar] Investigator Yanghee Lee went further, describing Facebook as a vital tool for connecting the state with the public. “Everything is done through Facebook in Myanmar,” Lee told reporters…what’s clear in Myanmar is that the government sees social media as an instrument for propaganda and inciting violence—and that non-government actors are also using Facebook to advance a genocide. Seven years after the Arab Spring, Facebook isn’t bringing democracy to the oppressed. In fact…if you want to preserve a dictatorship, give them the internet.

Bloomberg also around this time suggested Facebook was operating as a mass weapon by its own design, serving dictatorship.

It seems important when looking back at this time-frame to note that a key Facebook executive at the head of decisions about user safety was in just his second year ever as a “chief” of security.

He infamously had taken his first ever Chief Security Officer (CSO) job at Yahoo in 2014, only to leave that post abruptly and in chaos in 2015 (without disclosing some of the largest privacy breaches in history) to join Facebook.

August 2017 was the peak period of risk, according to the analysis above. The Facebook CSO launched a “hit back” PR campaign two months later in October to silence the growing criticisms:

Stamos was particularly concerned with what he saw as attacks on Facebook for not doing enough to police rampant misinformation spreading on the platform, saying journalists largely underestimate the difficulty of filtering content for the site’s billions of users and deride their employees as out-of-touch tech bros. He added the company should not become a “Ministry of Truth,” a reference to the totalitarian propaganda bureau in George Orwell’s 1984.

His talking points read like a sort of libertarian screed, as if he thought journalists are ignorant and would foolishly push everyone straight into totalitarianism with their probing for basic regulation, such as better editorial practices and the protection of vulnerable populations from harms.

Think of it like this: the chief of security says it is hard to block Internet traffic with a firewall because it would lead straight to shutting down the business. That doesn’t sound like a security leader, it sounds like a technologist that puts making money above user safety (e.g. what the Afghanistan Papers call profitability of war).

Facebook’s top leadership was rolling out angry “shame” statements to those most concerned about the lack of progress. He appeared to be expressing that for him to do anything more than what he saw as sufficient in that crucial time would be so hard that journalists (ironically the most prominent defenders of free speech, the people who drive transparency) couldn’t even understand it if they saw it.

Take for example another one of the “hit back” Tweets posted by Facebook’s CSO:

My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

To me that reads like the CSO saying his staff suffer when they have to work hard, calling journalists stupid for not talking with anyone.

Such a patronizing and tone-deaf argument is hard to witness. It’s truly incredible to read, especially when you consider that nearly 800,000 Rohingya were fleeing for their lives while a Facebook executive lectured about consequences.

Compare to what journalists in the field reported at that same exact time of October 2017 — as they talked to people living right then and there with Facebook failing to solve these problems.

Warning: extremely graphic and violent depictions of genocide

Here’s another way to keep the Facebook “hit back” campaign against journalists in perspective. While the top executive in security was calling people closest to real-world consequences not expert enough on that exact topic, he himself didn’t bring any great experience or examples to the table to earn anyone’s trust. The outspoken and public face of a high-profile risk management disaster was representing Facebook’s dangerously clueless stumbles year after year:

A person with knowledge of Facebook’s [2015] Myanmar operations was decidedly more direct than [Facebook vice president of public policy] Allen, calling the roll out of the [security] initiative “pretty fucking stupid.” […] “When the media spotlight has been on there has been talk of changes, but after it passes are we actually going to see significant action?” [Yangon tech-hub Phandeeyar founder] Madden asks. “That is an open question. The historical record is not encouraging.”

The “safety dial was pegged in the wrong direction,” as some journalists put it back in 2017, under a CSO who apparently thought it a good idea to complain about how hard it has been to protect people from harm (while making huge revenues). Perhaps business schools soon may study Facebook’s erosion of global trust under this CSO’s leadership:

We know tragically today that journalists were repeatedly right in their direct criticism of Facebook security practices and in their demands for greater transparency. We also plainly see how an inexperienced CSO’s personal “hit back” at his critics was wrong, with its opaque promises and patronizing tone based on his fears of an Orwellian fiction.

Facebook has been and continues to be out of touch with basic social science. Facebook resisted and continues to resist safety controls on speech that protect human rights, all while saying it is committed to safety and arguing against norms of speech regulation.

The question increasingly is whether actions like an aggressive “hit back” on people warning of genocide at a critical moment of risk (arguing it is hard to stop weapons from being used and fearing any limits on the use of those weapons) makes a “security” chief criminally liable.

My sense is it will be anthropologists, experts in researching baselines of inherited rights within relativist frameworks, who emerge as best qualified to help answer questions of what’s an acceptable vulnerability in social media technology.

We see this already in articles like “The trolls are teaming up—and tech platforms aren’t doing enough to stop them“.

The personal, social, and material harms our participants experienced have real consequences for who can participate in public life. Current laws and regulations allow digital platforms to avoid responsibility for content…. And if online spaces are truly going to support democracy, justice, and equality, change must happen soon.

Accountability of a CSO for atrocity crimes during his watch appears to be the logical change, if I’m reading these human rights law documents right.