AI Apocalypse: Misaligned Objectives or Poor Quality Control?

An author claims to have distilled the most important AI risks down to what they call "misaligned objectives."

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocent enough – Google's AI can now schedule your appointments over the phone – then, before you know it, we've accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? There's a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom's Paperclip Maximizer uses the arbitrary example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.

It seems to me this article entirely misses the point that objectives can be aligned yet implemented in unexpected or sloppy ways that people are reluctant to revise and clarify.

It reminds me of criticisms in economics of using poor measurements of productivity, a constant problem in Soviet Russia (e.g. window factories spitting out panes that nobody could use). Someone benefits from massive paperclip production, but who retains authorization over the output?

If we're meant to be saying a centrally-planned, centrally-controlled system of paperclip production is disastrous for everyone but dear leader (in this case an algorithm), we might as well be talking about market theory texts from the 1980s.

Let's move away from these theoretical depictions of AI as future Communism and instead consider a market-based application of automation today that kills.

Cement trucks have automation in them. They repeatedly run over and kill cyclists because they operate with too wide a margin of error in a society with vague accountability, not because of misaligned objectives.

Take for example how San Francisco has just declared a state of emergency over pedestrians and cyclists dying from automated killing machines roaming city streets.

As of August 31, the 2019 death toll from traffic fatalities in San Francisco was 22 people — but that number doesn’t include those who were killed in September and October, including Pilsoo Seong, 69, who died in the Mission last week after being hit by a truck. On Tuesday, the Board of Supervisors responded to public outcry over the issue by passing a resolution to declare a state of emergency for traffic safety in San Francisco.

Everyone has similar or the same objectives for moving about on the street; it's just that some are allowed to operate at such low levels of quality that they can indiscriminately murder others and say it's within their expected operations.

I can give hundreds of similar examples. Jaywalking is an excellent one, as machines already have interpreted that racist law (a human objective to criminalize non-white populations) as license to kill pedestrians without accountability.

Insider-threat as a Service (IaaS)

The Great Impostor, a 1961 film based on the "true story of Ferdinand Waldo Demara, a bright young man who hasn't the patience for the normal way of advancement [and] finds that people rarely question you if your papers are in order."
During several presentations and many meetings this past week I’ve had to discuss an accumulation of insider threat stories related to cloud service providers.

It has gotten so bad that our industry should stop saying “evil maid” and say “evil SRE” instead. Anyway, has anyone made a list? I haven’t seen a good one yet so here’s a handy reference for those asking.

First, before I get into the list, don’t forget that engineers are expected to plan for people doing things incorrectly. It is not ok for any cloud provider to say they accept no responsibility for their own staff causing harm.

Do not accept the line that victims should be entirely responsible. Amazon has been forced to back down on this before.

After watching customer after customer screw up their AWS S3 security and expose highly sensitive files publicly to the internet, Amazon has responded. With a dashboard warning indicator…
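
If you don't want to wait for a dashboard warning, checking your own exposure takes only a few lines. Here's a minimal sketch, assuming the boto3 SDK and credentials that can read bucket ACLs and public access block settings (bucket names and output format are illustrative):

```python
# Minimal sketch: flag S3 buckets that look publicly readable.
# Assumes boto3 is installed and AWS credentials are configured;
# output format is illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # ACL grants to the AllUsers/AuthenticatedUsers groups mean public access.
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

    # Bucket-level Public Access Block settings, if any are configured.
    try:
        conf = s3.get_public_access_block(Bucket=name)
        blocked = all(conf["PublicAccessBlockConfiguration"].values())
    except ClientError:
        blocked = False  # no block configuration exists for this bucket

    if public_grants and not blocked:
        print(f"WARNING: {name} has public ACL grants and no public access block")
```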

And in the case of an ex-AWS engineer attacking a customer, the careless attitude of Amazon has become especially relevant.

“The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it,” said Cloudflare’s Evan Johnson. Now senators Ron Wyden and Elizabeth Warren have penned an open letter to the FTC, asking it to investigate if “Amazon’s failure to secure the servers it rented to Capital One may have violated federal law.” It noted that while Google and Microsoft have both taken steps to protect customers from SSRF attacks, “Amazon continues to sell defective cloud computing services to businesses, government agencies and to the general public.”

Second, a more complicated point regularly put forth by cloud companies, including Amazon, is that "at the time" arguments let them escape liability.

When an Uber driver brutally killed a little girl in a crosswalk, Uber tried to claim their driver “at the time” wasn’t their responsibility.

Attorneys for Uber said the ride-sharing company was not liable … because the driver was an independent contractor and had no reason to be actively engaged with the app at the time. […] “It’s their technology. They need to make it safe,” [family attorney] Dolan said in January, suggesting that a hands-free mode would bring the business into compliance with distracted-driving laws.

Uber paid the family of the dead girl an undisclosed sum a year later “to avoid a trial about its responsibility for drivers who serve its customers”.

Lyft ran into a similar situation recently when “rideshare rapists” around America operated “at the time” as private individuals who just happened to have Lyft signage in their car when they raped women. The women were intoxicated or otherwise unable to detect that drivers “at the time” were abusing intimate knowledge of gaps in a cloud service trust model (including weak background checks).

Imagine a bank telling you it has no responsibility for money missing from your account because "at the time" the person stealing from you was no longer its employee. It's not so simple; otherwise companies could fire criminals mid-act and instantly wash their hands without stopping the actual harm.

Let’s not blindly walk into this thinking insiders, such as SRE, are off the hook when someone throws up pithy “at the time” arguments.

And now for the list

1) AWS engineer in 2019

The FBI has already arrested a suspect in the case: A former engineer at Amazon Web Services (AWS), Paige Thompson, after she boasted about the data theft on GitHub.

Remember how I said “at the time” arguments should not easily get people off the hook? The former engineer at AWS, who worked there around 2015-2016, used exploits known since 2012 on servers in 2019. “At the time” gets fuzzy here.

Let the timeline sink in, and then note that customers had been making these mistakes on Amazon for many years, and Amazon's staff had front-row seats to the vulnerabilities. Other companies treated the same situation very differently. "Little Man in My Head" put it like this:

The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP). All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability. On the other hand, AWS applications continue to get hit over and over and over again. Does that tell you something?

Yes, it tells us that an inside engineer working for Amazon any time after 2012 would have seen a lot of vulnerabilities sitting ripe for the picking, unlike engineers at other cloud providers. Boasting about the theft is what got the former engineer caught, which should make us wonder about the engineers who never boasted.
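
To make the difference concrete, here's a minimal sketch of why a plain SSRF bug was enough to reach credentials on AWS's original metadata service but not on GCP or Azure, which demand custom headers a typical SSRF can't inject, or on the IMDSv2 service AWS later added. It assumes the Python requests library, is meant to be run only against instances you own, and the paths and Azure API version are illustrative:

```python
# Minimal sketch of why plain SSRF is enough against AWS IMDSv1 but not
# against GCP/Azure metadata or AWS IMDSv2. Assumes the `requests` library;
# run only from inside a cloud instance you own. Paths are illustrative.
import requests

METADATA_IP = "169.254.169.254"

# AWS IMDSv1: a bare GET with no special headers returns role credentials,
# so any SSRF that lets an attacker choose the URL can reach them.
creds_v1 = requests.get(
    f"http://{METADATA_IP}/latest/meta-data/iam/security-credentials/"
)

# GCP: the same request fails unless the client adds a custom header,
# which a typical SSRF bug cannot force the vulnerable server to send.
gcp = requests.get(
    f"http://{METADATA_IP}/computeMetadata/v1/",
    headers={"Metadata-Flavor": "Google"},
)

# Azure: also requires a custom header on every metadata request.
azure = requests.get(
    f"http://{METADATA_IP}/metadata/instance?api-version=2018-10-01",
    headers={"Metadata": "true"},
)

# AWS IMDSv2 (added later): credentials require a session token that must
# first be obtained with a PUT and a TTL header.
token = requests.put(
    f"http://{METADATA_IP}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text
creds_v2 = requests.get(
    f"http://{METADATA_IP}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
)
```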

2) Trend Micro engineer 2019

In early August 2019, Trend Micro became aware that some of our consumer customers running our home security solution had been receiving scam calls by criminals impersonating Trend Micro support personnel. The information that the criminals reportedly possessed in these scam calls led us to suspect a coordinated attack. Although we immediately launched a thorough investigation, it was not until the end of October 2019 that we were able to definitively conclude that it was an insider threat. A Trend Micro employee used fraudulent means to gain access to a customer support database that contained names, email addresses, Trend Micro support ticket numbers, and in some instances telephone numbers.

3) Google engineer in 2010

Google Site Reliability Engineer (SRE) David Barksdale was recently fired for stalking and spying on teenagers through various Google services…

And in case you were wondering how easily a Google SRE could get away with this kind of obviously bad stalking behavior over the last ten years, in 2018 the company said it had started firing people in management for misconduct, and nearly 50 were out the door:

…an increasingly hard line on inappropriate conduct by people in positions of authority: in the last two years, 48 people have been terminated for sexual harassment, including 13 who were senior managers and above.

Do you realize how huge an investigations team has to be to collect evidence on, let alone fire, 48 people over two years for sexual harassment?

4) Uber employees in 2016

Uber’s lack of security regarding its customer data was resulting in Uber employees being able to track high profile politicians, celebrities, and even personal acquaintances of Uber employees, including ex-boyfriends/girlfriends, and ex-spouses…

5) Twitter engineers 2019

Ahmad Abouammo and Ali Alzabarah each worked for the company from 2013 to 2015. The complaint alleges that Alzabarah, a site reliability engineer, improperly accessed the data of more than 6,000 Twitter users. […] Even after leaving the company, Abouammo allegedly contacted friends at Twitter to facilitate Saudi government requests, such as for account verification and to shutter accounts that had violated the terms of service.

6) Yahoo engineer 2018

In pleading guilty, Ruiz, a former Yahoo software engineer, admitted to using his access through his work at the company to hack into about 6,000 Yahoo accounts.

Just for the record, Ruiz was then hired into Okta (a cloud identity provider) as an SRE for eight months before he pleaded guilty to massive identity-theft crimes at his former employer.

7) Facebook engineer 2018

Facebook fires engineer who allegedly used access to stalk women. The employee allegedly boasted he was a ‘professional stalker.’

8) Lyft employees 2018

…employees were able to use Lyft’s back-end software to “see pretty much everything including feedback, and yes, pick up and drop off coordinates.” Another anonymous employee posted on the workplace app Blind that access to clients’ private information was abused. While staffers warned one another that the data insights tool tracks all usage, there seemed to be little to no enforcement, giving employees free reign over it. They used that access to spy on exes, spouses and fellow Lyft passengers they found attractive. They even looked up celebrity phone numbers, with one employee boasting that he had Zuckerberg’s digits. One source admitted to looking up their significant other’s Lyft destinations: “It was addictive. People were definitely doing what I was.”

9) AirBnB employees 2015

Airbnb actually teaches classes in SQL to employees so everyone can learn to query the data warehouses it maintains, and it has also created a tool called Airpal to make it easier to design SQL queries and dispatch them to the Presto layer of the data warehouse. (This tool has also been open sourced.) Airpal was launched internally at Airbnb in the spring of 2014, and within the first year, over a third of all employees at the company had launched an SQL query against the data warehouse.

10) And last but definitely not least, in a story rarely told accurately, City of San Francisco engineer 2008

Childs made national headlines by refusing to hand over administrative control to the City of San Francisco’s FiberWAN network, [a cloud service] which he had spent years helping to create.

One of the most peculiar aspects of the San Francisco case was how the network was sending logs (and perhaps even more, given a tap on core infrastructure) to a cluster of encrypted Linux servers to which only Terry Childs had keys. He installed those servers in a metal cabinet with wood reinforcements and padlocks outside the datacenter, behind his desk. Holes were drilled in the cabinet for cable runs back into the datacenter, where holes also were drilled to allow entry.

The details of his case are so bizarre I'm surprised nobody has made a movie about it. Maybe if one had been made, these insider risks to cloud would have drawn more attention than the book we published in 2012 did.

So who would be the equivalent today of Tony Curtis?

USAF Plans Around PsyOps Leaflet Dispersal Shortfall

Somehow I had missed some leaflet planning considerations over the past few years.

…shortfall on leaflet dispersal capability will jeopardize Air Force Central Command information operations,” said Earl Johnson, B-52 PDU-5/B project manager. The “Buff” can carry 16 PDU-5s under the wings, making it able to distribute 900,000 leaflets in a single sortie.

That’s a lot of paper.

Discussing leaflet drops with a pilot the other night set me straight on the latest methods; he emphasized a recent capability for computer-modeled, micro-targeted leaflet fall paths. It sounded too good to be true. I still imagine the stuff floating everywhere randomly like snowflakes.

Then again, he might have been pulling my leg, or the person who told him might have been, given that we were talking about psyops.

Speaking of psyops, OpenAI is said to have leaked their monster into the world. We’re warned already it could unleash targeted text at scale even a B-52 couldn’t touch:

…extremist [use of OpenAI tools] to create ‘synthetic propaganda’ that would allow them to automatically generate long text promoting white supremacy or jihadist Islamism…

And perhaps you can see where things right now are headed with this information dissemination race?

U.S. seen as ‘exporter of white supremacist ideology,’ says counterterrorism official

Can B-52s leaflet America fast enough to convince domestic terrorists to stop generating their AI-based long texts promoting white supremacist ideology?

Is Stanford Internet Observatory (SIO) a Front Organization for Facebook?

Image Source: Weburbanist’s ‘Façades’ series by Zacharie Gaudrillot-Roy
Step one: Facebook sets up privileged access (competitive advantage) to user data and leaks this privileged (back) door to Russia

(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies who made extravagant ads purchases to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later Tretti, after meeting with Facebook’s top security personnel, retracts his statement without explanation, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia)

Step two: Facebook CEO announces his company doesn’t care if information is inauthentic

Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.

Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity of information

Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.

Interesting to see former staff of Facebook hiding inside an academic context to work for Facebook on the thing that Facebook says it’s not working on.


Updated November 12 to add latest conclusions of the SIO about Facebook data analysis done by ex-Facebook staff instead of Facebook.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.

Electronic Warfare Planning and Management Systems

In 2014 I gave a series of talks looking at the use of big data to predict the effects/spread of disease, chemicals, and bomb blast radius (especially in urban areas), and how integrity controls greatly affected the future of our security industry.

This was not something I pioneered, by any stretch, as I was simply looking into the systems insurance companies were running on cloud. These companies were exhausting cloud capacity at that time to do all kinds of harm and danger predictions.

Granted, I might have been the first to suggest a map of zombie movement would be interesting to plot, but the list of harm predictions goes on infinitely and everyone in the business of response wants a tool.

The 2015 electronic warfare (EW) activity in Ukraine and more recent experiences in Syria have prompted the US military to seek solutions in that area as well: given a set of features what could jamming look like and how should troops route around it, for example.

Source: “Electronic Warfare – The Forgotten Discipline… Refocus on this Traditional Warfare Area Key for Modern Conflict?” by Commander Malte von Spreckelsen, DEU N, NATO Joint Electronic Warfare Core Staff

It’s a hot topic these days:

The lack of understanding of the implications of EW can have significant mission impact – even in the simplest possible scenario. For example, having an adversary monitor one’s communications or eliminate one’s ability to communicate or navigate can be catastrophic. Likewise, having an adversary know the location of friendly forces based on their electronic transmissions is highly undesirable and can put those forces at a substantial disadvantage.

The US is calling their program Electronic Warfare Planning and Management Tool (EWPMT) and contractors are claiming big data analysis development progress already:

Raytheon began work on the final batch, known as a capability drop, in September. This group will use artificial intelligence and machine learning as well as a more open architecture to allow systems to ingest swaths of sensor data and, in turn, improve situational awareness. Such automation is expected to significantly ease the job of planners.

Niraj Srivastava, product line manager for multidomain battle management at Raytheon, told reporters Oct. 4 that thus far the company has delivered several new capabilities, including the ability for managers to see real-time spectrum interference as a way to help determine what to jam as well as the ability to automate some tasks.

It starts by looking a lot like what we have used for commercial wireless site assessments since around 2005. Grab all the signals by deploying sensors (static and mobile), generate a heatmap, and dump it into a large data store.

Then it leverages commercial agile development, scalable cloud infrastructure, and machine learning from 2010 onward to generate predictive maps of the future, with dials to modify variables like destroying or jamming a signal source.

Open architectures for big data dropping in incremental releases. It’s amazing, and a little disappointing to be honest, how 2019 is turning out to be exactly what we were talking about in 2014.
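
As a rough illustration of that first step (collect signal readings, build a heatmap, estimate the emitter), here's a minimal sketch using synthetic data and nothing but numpy; the grid size, emitter location, and propagation model are all assumptions for demonstration:

```python
# Minimal sketch: bin scattered RF sensor readings into a grid "heatmap".
# Synthetic data and grid size are illustrative; a real EW or wireless
# site-assessment pipeline would feed this into a larger data store.
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor samples: x/y positions (meters) and received signal
# strength (dBm) around an assumed emitter at (50, 50).
n = 5_000
xs = rng.uniform(0, 100, n)
ys = rng.uniform(0, 100, n)
dist = np.hypot(xs - 50, ys - 50)
rssi = -30 - 20 * np.log10(np.maximum(dist, 1)) + rng.normal(0, 2, n)

# Average readings into a 20x20 grid to form the heatmap.
bins = 20
grid_sum, xedges, yedges = np.histogram2d(xs, ys, bins=bins, weights=rssi)
grid_count, _, _ = np.histogram2d(xs, ys, bins=bins)
heatmap = np.divide(grid_sum, grid_count,
                    out=np.full_like(grid_sum, np.nan),
                    where=grid_count > 0)

# The strongest cell is a crude estimate of the emitter's location, the
# sort of variable a planner might then "dial" (e.g. simulate jamming it).
peak = np.unravel_index(np.nanargmax(heatmap), heatmap.shape)
print("estimated emitter cell:", peak)
```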

$3M HIPAA Settlement for Hospital Failing Repeatedly to Encrypt Patient Data Over 10 Years

According to the HHS this hospital reported a breach in 2010, was given a warning with technical assistance, then was breached again in 2013 and 2017.

URMC filed breach reports with OCR in 2013 and 2017 following its discovery that protected health information (PHI) had been impermissibly disclosed through the loss of an unencrypted flash drive and theft of an unencrypted laptop, respectively. OCR’s investigation revealed that URMC failed to conduct an enterprise-wide risk analysis; implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level; utilize device and media controls; and employ a mechanism to encrypt and decrypt electronic protected health information (ePHI) when it was reasonable and appropriate to do so. Of note, in 2010, OCR investigated URMC concerning a similar breach involving a lost unencrypted flash drive and provided technical assistance to URMC. Despite the previous OCR investigation, and URMC’s own identification of a lack of encryption as a high risk to ePHI, URMC permitted the continued use of unencrypted mobile devices.

Encryption is not that hard, especially for mobile devices. Flash drives and laptops are trivial to encrypt and manage keys for. It's not a technical problem, it's a management and leadership one, which is why these regulatory fines probably should be even larger and come directly out of executives' pockets.
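
As a simple illustration of how low the bar is, here's a minimal sketch of encrypting an export file before it ever touches a flash drive, assuming the Python cryptography package; the file names are hypothetical, and a real program would pair this with managed full-disk encryption (BitLocker, LUKS, FileVault) and central key escrow:

```python
# Minimal sketch: encrypt a file before copying it to removable media.
# Assumes `pip install cryptography`; file names are hypothetical, and a
# real deployment would use managed full-disk encryption plus key escrow
# rather than an ad hoc script.
from cryptography.fernet import Fernet

# Generate and store a key once (in practice keep it in a key manager,
# never on the same flash drive as the data).
key = Fernet.generate_key()
with open("phi-export.key", "wb") as f:
    f.write(key)

fernet = Fernet(key)

# Encrypt the export before it touches the removable drive.
with open("phi-export.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("phi-export.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decrypting later requires the key, so a lost drive by itself
# discloses nothing readable.
plaintext = fernet.decrypt(ciphertext)
```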

Hospital Security Breaches Causing Increased Patient Death Rate

Deaths in America from heart disease are on the rise, as a 2016 report warned:

Heart disease is the No. 1 cause of death in the United States. But after nearly three decades in decline, the number of deaths from heart disease has increased in recent years, a new federal report shows.

Now a new study called “Data breach remediation efforts and their implications for hospital quality” (PDF) reports that a service quality decline increases death rates for patients with heart disease.

Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes. Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.

More specifically, the study authors counted 36 additional deaths per 10,000 heart attacks every year attributable to security breaches, based on hundreds of hospitals examined. The data even boils down to any care center with a breach experiencing an electrocardiogram delay of 2.7 minutes for suspected heart attack patients.
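
A back-of-the-envelope calculation, using an assumed figure of roughly 800,000 US heart attacks per year that is not from the study itself, shows the order of magnitude involved:

```python
# Back-of-the-envelope scale only: the study's 36 extra deaths per 10,000
# heart attacks applied to an ASSUMED ~800,000 US heart attacks per year.
extra_deaths_per_10k = 36
us_heart_attacks_per_year = 800_000  # assumption for illustration

excess = extra_deaths_per_10k / 10_000 * us_heart_attacks_per_year
print(f"~{excess:.0f} excess deaths per year if every hospital were breached")
# Only a fraction of hospitals are breached in a given year, so the real
# figure is smaller, but still measured in human lives.
```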

Given the huge rise of ransomware since 2015, traced to weak security management practices of database companies, is there a case now to be made that software development is directly culpable for a rise in human deaths?

To put this in perspective, fewer people die from service delays (availability) than from mistakes (integrity), yet downstream integrity is impacted by availability. Medical error studies call disruptions and mistakes the third leading cause of death in America.

A recent Johns Hopkins study claims more than 250,000 people in the U.S. die every year from medical errors. Other reports claim the numbers to be as high as 440,000.

Avoiding death from heart disease, which requires fast response and critical decision-making without error, becomes even harder to ensure as system availability declines due to breaches.

Searching in the Wild for What is Real

This new NY Books essay reads to me like prose and raises some important points about the desire to escape, and about believing that reality exists in places where we are not:

…when I look back at the series of wilderness travel articles I wrote for The New York Times a decade ago, what jumps out at me is the almost monomaniacal obsession with enacting Denevan’s myth by finding unpopulated places. Camped out in the Australian outback, I boasted that it was “the farthest I’d ever been from other human beings.” Along the “pristine void” of a remote river in the Yukon, I climbed ridges and scanned the horizon: “It was intoxicating,” I wrote, “to pick a point in the distance and wonder: Has any human ever stood there?”

Rereading those and other articles, I now began to reluctantly consider the possibility that my infatuation with the wilderness was, at its core, a poorly cloaked exercise in colonial nostalgia—the urbane Northern equivalent of dressing up as Stonewall Jackson at Civil War reenactments because of an ostensible interest in antique rifles.

As a historian I'd say he's engaging in a poorly cloaked exercise in escapism, more like going to Disneyland than trying to reenact real events from the past (whether the white supremacist policies of Britain or of America).

Just some food for thought after reading about the ridiculously high percentage of fraud in today's "wilderness" of software service providers.

Fake Identity Farms Generating Fraud on All Sides for Profits

Earlier this year researchers disclosed in a study that the lack of regulation has allowed Bitcoin markets to become roughly 95 percent fake trading:

Nearly 95% of all reported trading in bitcoin is artificially created by unregulated exchanges, a new study concludes, raising fresh doubts about the nascent market following a steep decline in prices over the past year.

Earlier analysis had pointed to bots programmed to manipulate the market at large scale and high speed:

Bitcoin prices were being manipulated in late 2013 by a pair of autonomous computer programs running on bitcoin exchange MtGox, according to an anonymously published report.

The programs, named Willy and Markus, allegedly pushed prices up to $1,000 before the bubble burst after MtGox’s collapse in late February.

The report’s author alleges that some of the trades were coming from inside the exchange itself. “In fact,” the report says, “there is a ton of evidence to suggest that all of these accounts were controlled by MtGox themselves.”

And here's some brand-new reporting on fraud in a different value system, social media, from someone who worked inside a troll operation:

The farm has both left- and right-wing troll accounts. That makes their smear and support campaigns more believable: instead of just taking one position for a client, it sends trolls to work both sides, blowing hot air into a discussion, generating conflict and traffic and thereby creating the impression that people actually care about things when they really don’t – including, for example, about the candidacy of a recently elected member of the Polish parliament.

I suppose we can say now the Ashley Madison dataset was no exception to widespread online fraud:

Over 20 million male customers had checked their Ashley Madison email boxes at least once. The number of females who checked their inboxes stands at 1,492. There have already been multiple class action lawsuits filed against Ashley Madison and its parent company, Avid Life Media, but these findings could send the figures skyrocketing. If true, it means that just 0.0073% of Ashley Madison’s users were actually women — and that changes the fundamental nature of the site.

People keep asking what a future life with robots will look like, when we're obviously already living in it. It basically looks like a world where the phrase common in late-1800s America, "there is a sucker born every day," continues to haunt the security industry…

The Great Conspiracy: A Complete History of the Famous Tally-sheet Cases, by Simeon Coy, 1889, p 222

“Sneaking” banks refers to a social engineering trick where one person creates a distraction while the other sneaks money out of the vault.

Note how even back in 1889 an author wrote about banks and jewelers hacking themselves to become wise to how to stop hackers. Threats mostly targeted people too weak to protect themselves individually (hinting toward a need for regulatory oversight).

1960 Police Murder of Marvin Williams. How is This Not a Movie?

White Lightning, a movie about police corruption in Arkansas, gives only a very general and fictional retelling of what justice was like in the town and county where Williams was murdered.

Ned Beatty played the fictitious Sheriff J.C. Connors, said by some to be the spitting image of Faulkner County Sheriff Joe Martin, who served as jailer the night Williams died in police custody.

I’ve searched high and low and there seems to be no mainstream re-telling of the exact Marvin Williams story. It reads like such an obvious script for a major movie I’m curious why nothing has been done.

In brief, Williams was a black 21-year-old man in May 1960 (serving in the military?) when two white police officers apparently pulled him into a county jail at night, where he was beaten to death with police clubs.

The officers reported Williams was so intoxicated he was non-responsive and fell down stairs, killing himself by hitting his forehead. An autopsy report stated that Williams had no alcohol in his blood and that he died from a blood clot caused by concussions to the back of his head.

The Williams family reached out to lawyers and the FBI for an investigation and were rebuffed completely. The autopsy wasn’t even reviewed.

A few very brief news mentions of the case then appeared 25 years later.

First, in August 1985 a trial opened when a witness came forward, no longer afraid to testify:

The case was closed until a former inmate wrote to officials last year saying he saw a black man being beaten by two men the night Mr. Williams was arrested.

Second, in September 1985 an all-white jury acquitted two men.

Two white former policemen were acquitted by an all-white jury today of charges that they beat a black jail inmate to death 25 years ago. A gasp echoed around the courtroom when the verdict was read, ending the trial of O.H. Mullenax, 48 years old, and Marvin Iberg, 50. […] The day after Mr. Williams died, a coroner’s jury cleared the two policemen, saying Mr. Williams had fallen and struck his head on the courthouse steps. But the jury was not shown either an autopsy report that said Mr. Williams had died of a brain hemorrhage caused by a fracture to the back of his skull or results of a blood test that found no alcohol in his blood. Witnesses at the trial of the two former policemen testified that Mr. Williams drank little or nothing and was uninjured before his arrest.

That seems obviously corrupt on the face of it.

Marvin Iberg allegedly had a reputation of being a stereotypical “white power” personality who joined the police to abuse authority.

And then there’s this weird quote by the judge:

Presiding Judge Don Langston said after the verdict that the jury ‘could have gone either way. I think the evidence was there to find a guilty verdict’ or to find an innocent verdict.

The key witness reportedly felt so fearful he withheld his testimony for 25 years. Fairness clearly was an issue. The witness also said he thought jailer Joe Martin was the man who beat Williams to death (Martin later became Sheriff and was allegedly so corrupt he inspired White Lightning, the 1973 movie about tax evasion).

After all that in 1985, the Court of Appeals in 1987 seems to have written its decision as if there had been no barrier to a fair prosecution.

The question presented here is whether these defendants fraudulently concealed evidence. The racial atmosphere of an entire State cannot justly be charged to their personal account. Nor is it true that a black plaintiff’s Section 1983 claim would not have been fairly tried in a federal court in the early sixties…

There are just so many disappointing turns to this case that, again, I wonder why someone hasn't at least made a short film about it.

USA Today has been working on a modern “tarnished brass” database of police misconduct, which might help reveal why Marvin Williams’ family was unable to achieve justice.

Every year, tens of thousands of police officers are investigated for serious misconduct — assaulting citizens, driving drunk, planting evidence and lying among other misdeeds. The vast majority get little notice. And there is no public database of disciplined police officers.

the poetry of information security