California Proposes Citizen Opt-Out Button for AI

Having a way to disable a malfunctioning robot, let alone a malicious one, is absolutely essential to basic human rights (good governance).

Artificial intelligence can help decide whether you get a job, bank loan or housing — but such uses of the technology could soon be limited in California. Regulations proposed today would allow Californians to opt out of allowing their data to be used in that sort of automated decision making. The draft rules, floated by the California Privacy Protection Agency, would also let people request information on how automated decisions about them were made.

What’s missing from this analysis is twofold:

  1. Opt-out is framed as disable, a complete shutdown, without the more meaningful “reset” as a path out of danger. Leaving a service with nothing left behind is one thing, and it is highly unlikely/impractical given “necessary” exceptions and clauses; leaving a trail of mistakes behind is another. The Agency should plan for reset even more than it tries to enforce the tempting but usually false promise of a hard shutdown. This has been one of the hidden (deep in the weeds) lessons of GDPR.
  2. Letting people request their information on automated decisions is backwards. With AI processing against a Solid Pod (a distributed personal data store), these requests would be made to the person instead of by them. Even with the opportunity to chase their data all over the place, people are far better off achieving the same end without being saddled with the basically impossible and expensive task of finding everyone, everywhere, making decisions about them without their consent (see the sketch below).
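Here is a minimal sketch of that inversion, assuming a hypothetical pod at alice.example and plain HTTP in place of a real Solid-OIDC authenticated session: the decision maker fetches only the data the person has chosen to share, and files its explanation back into the pod the person already controls.

```python
import requests  # plain HTTP for illustration; real Solid access runs over an authenticated session

# Hypothetical pod and resource URLs, not a real deployment.
POD = "https://alice.example/pod"
CREDIT_DATA = f"{POD}/finance/credit-profile"
DECISION_LOG = f"{POD}/decisions/loan-application"

def score(profile_text: str) -> dict:
    # Stand-in for an actual underwriting model.
    approved = "income" in profile_text
    return {"outcome": "approved" if approved else "denied",
            "reasons": "toy rule: income field present"}

def decide_with_consent(session: requests.Session) -> dict:
    # 1. The decision maker asks the person's pod for the data it needs;
    #    the request succeeds only where the person has granted access.
    resp = session.get(CREDIT_DATA)
    resp.raise_for_status()

    decision = score(resp.text)

    # 2. The explanation is written back to the pod, so the person holds
    #    the record instead of having to chase it across companies.
    session.put(DECISION_LOG, data=str(decision),
                headers={"Content-Type": "text/plain"})
    return decision
```

The design point is simply that the audit trail accumulates where the person lives, not scattered across every company that ever scored them.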

See also: Italy

Italian Privacy Authority Announces Investigation Into AI Data Collections

A nod to the Italy Intellectual Property Blog for an important story that I haven’t seen reported anywhere else:

The Italian Privacy Authority announced today that it has launched an investigation to verify whether websites are adopting adequate security measures to prevent the massive collection of personal data for the purpose of training AI algorithms. […] The investigation will therefore concern all data controllers who operate in Italy and make their users’ personal data available online (and thus accessible by developers of AI services), in order to verify whether said controllers adopt adequate security measures to safeguard their users’ rights.

I am especially curious whether Italy will address the integrity of user information, such as cases where data controllers have gotten things wrong. We are overdue for the development of data integrity breach language from state regulators.
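For context, the kind of “adequate security measure” under scrutiny can be as unglamorous as per-client rate limiting to blunt bulk scraping. A minimal sketch, with the thresholds and framework-free shape purely illustrative:

```python
import time
from collections import defaultdict

# Toy per-client rate limiter, the sort of control a data controller might
# deploy against bulk collection of user data. Thresholds are illustrative.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30

_hits: dict[str, list[float]] = defaultdict(list)

def allow_request(client_id: str) -> bool:
    """Return True if this client is still under the rate limit."""
    now = time.monotonic()
    recent = [t for t in _hits[client_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        _hits[client_id] = recent
        return False  # likely bulk collection; throttle or challenge
    recent.append(now)
    _hits[client_id] = recent
    return True
```

Whether regulators will treat such basic throttling as “adequate” against industrial-scale AI training crawlers is exactly the open question.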

Also in related news, Italy is moving forward with France and Germany on very interesting AI regulation that focuses on operational risks of the technology.

France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level. The three governments support “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs. But they oppose “un-tested norms.” […] “Together we underline that the AI Act regulates the application of AI and not the technology as such,” the joint paper said. “The inherent risks lie in the application of AI systems rather than in the technology itself.”

That direction sounds wise to me, given that the most “secure” AI technology can translate directly into the most unsafe AI. For example, Tesla’s robot has killed more people than any other robot in history because the company games narrowly focused “technology” tests (e.g. “five star” crash ratings) to mislead people into using its unsafe AI, and then tragically and foolishly promotes unsafe (arguably cruel and anti-social) operation. I’m reminded of a Facebook breach that some in CISO circles were calling an “insider threat from machine”.

“Good management trumps good technology every time, yet due to the ever-changing threatscape of the tech industry, inexperienced leadership is oftentimes relied upon for the sake of expediency.” He continues that, within the world of cybersecurity, “The Peter Principle is in full effect. People progress to their level of incompetence, meaning a lot of people in leadership within cyber have risen to a level that is difficult for them to execute and often lack formal technical training. As a CISO, there is a need to configure, identify, and negotiate the cost of protecting an organization, and without the adequate experience or a disciplined approach, this mission is executed poorly.”

Production speed that leans on inexperienced humans who also lack discipline (regulation) opens the door to even the most “secure by design” technology turning into an operational (societal) nightmare.

America was built on and by regulation. It depends on regulation. A lack of regulation, from those who promote the permanent improvisation of tyranny, will destroy it.

SpaceX, a Private Military Company (PMC), Tries to Meddle in Another War

SpaceX CEO “Space Karen” defends his mercenary-like corporate strategy by delivering “bizarre and crude comments” at the New York Times DealBook Summit. Source: The Ringer

I’m beginning to wonder if reports like this 2002 one about Russian arms dealers are the real reason SpaceX was founded that same year and keeps sticking its nose into conflicts.

…corporate armies, often providing services normally carried out by a national military force, offer specialized skills in high-tech warfare, including communications and signals intelligence and aerial surveillance, as well as pilots, logistical support, battlefield planning and training. They have been hired both by governments and multinational corporations to further their policies or protect their interests.

For example, the South African-born man who founded SpaceX was very well aware that…

Two helicopter gunships piloted by South African mercenaries, for example, altered the balance of war in Sierra Leone in 1999 in favor of the government.

Fast-forward, and that South African-born founder of SpaceX has been widely panned for running a Private Military Company (PMC) that meddles in the Ukraine war, especially after fraudulently trying to claim he altered the balance (e.g. helped Russia).

SpaceX keeps failing at its supposed “primary” mission of getting a rocket to work properly, and yet it is again distracted, diverting resources into another war that seems to involve Russia.

Starlink, a satellite internet service operated by the Elon Musk-owned SpaceX, will only be allowed to operate in the Gaza Strip following approval by the Israeli Ministry of Communication. … [The self-proclaimed “free speech absolutist” Elon Musk] has “identified and removed hundreds of Hamas-affiliated accounts” since the start of the war.

Related: With all the bluster and bombast of a typical South African mercenary outfit, SpaceX promises to build a time machine to renegotiate all its broken promises to land on Mars by 2018.

AI Falls Apart: CEO Removed for Failing Ethics Test is Put Back Into Power by “Full Evil” Microsoft

Confusing signals are emanating from Microsoft’s “death star”, with some ethicists suggesting that it’s not difficult to interpret the “heavy breathing” of “full evil”. Apparently the headline we should be seeing any day now is: Former CEO ousted in palace coup, later reinstated under Imperial decree.

Even by his own admission, Altman did not stay close enough to his own board to prevent the organizational meltdown that has now occurred on his watch. […] Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s!

Indeed, the all-too-frequent comparison of this overtly anti-competitive company to a fantasy “death star” is not without reason. It evokes introductory political science principles, rooted in the historical events that influenced the fictional retelling. Still, science fiction like “Star Wars” is a derivative analogy, not necessarily the sole or even the most fitting popular guide in this context.

William Butler Yeats’ “The Second Coming” is an even better reference, one that every old veteran probably knows. If only American schools made it required reading, some basic poetry could have helped protect national security (better enabling organizational trust and the stability of critical technology). Chinua Achebe’s “Things Fall Apart” (named for a line in Yeats’ poem) is perhaps an even better, more modern guide through such troubled times.

“The falcon cannot hear the falconer; Things fall apart; the center cannot hold; Mere anarchy is loosed upon the world.” Things Fall Apart was the debut novel of Nigerian author Chinua Achebe, published in 1958.

Here’s a rough interpretation of Yeats through Achebe, applied as a key to decipher our present news cycles:

Financial influence empowers a failed big tech CEO with privilege, enabling their reinstatement. This, in turn, facilitates the implementation of disruptive changes in society, benefiting a select few who assume they can shield themselves from the widespread catastrophes unleashed upon the world for selfish gains.

And now for some related news:

The US, UK, and other major powers (notably excluding China) unveiled a 20-page document on Sunday that provides general recommendations for companies developing and/or deploying AI systems, including monitoring for abuse, protecting data from tampering, and vetting software suppliers.

The agreement warns that security shouldn’t be a “secondary consideration” regarding AI development, and instead encourages companies to make the technology “secure by design”.

That doesn’t say ethical by design. That doesn’t say moral. That doesn’t even say quality.

It says only secure, which is a known “feature” of dictatorships and prisons alike. How did Eisenhower put it in the 1950s?

From North Korea to American “slave catcher” police culture, we understand that excessive focus on security without a moral foundation can lead to unjust incarceration. When security measures are exploited, they can hinder core elements of “middle ground” political action, such as compassion and care for others.
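To be fair, some of the document’s recommendations, such as protecting data from tampering and vetting software suppliers, do translate into simple, checkable practice. A minimal sketch, assuming a hypothetical model file name and a supplier-published SHA-256 digest:

```python
import hashlib
from pathlib import Path

# Hypothetical artifact and digest, for illustration only; in practice the
# digest would come from the supplier's signed release notes.
MODEL_PATH = Path("model.safetensors")
PUBLISHED_SHA256 = "0123456789abcdef..."  # placeholder, not a real digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model() -> None:
    actual = sha256_of(MODEL_PATH)
    if actual != PUBLISHED_SHA256:
        raise RuntimeError(f"model weights do not match published digest: {actual}")
```

Necessary, perhaps, but nowhere near sufficient: a perfectly verified artifact can still be an unethical one.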

If you enjoyed this post, please go out and be very unlike Microsoft: do a kind thing for someone else, because (despite what the big tech firms are trying hard to sell you) the future is not to foresee but to enable.

Not the death star