Putin Rewards the Execution of Russian Women

Apparently Russia has not only made it effectively legal for men to murder women, forgiving their debts and penalties; it now rewards them with a job offer.

The investigation of Pekhteleva’s murder lasted nearly 22 months. In July 2022, Kanyus was sentenced to 17 years in a penal colony and ordered to pay the family of his victim some $45,000 in compensation. But less than a year later, in April 2023, Pekhteleva’s parents, Oksana and Yevgeny Pekhtelev, saw his photograph on social media: The man who had tortured and slowly murdered their daughter stood with a group of soldiers, wearing a military uniform and holding a machine gun.

Human-rights defenders point to systemic damage to justice and law enforcement. “This is a new level of catastrophe, the final end of judicial law,” Alexander Cherkasov, who works for the human-rights group Memorial, told me. “All these murderers went to prison after investigators investigated, prosecutors accused, judges sentenced—all of that law-enforcement work is now meaningless.”

After briefly serving in the military, tens of thousands of convicted murderers are being released back into Russia, where they commit new violence against women.

“Indeed, there is recidivism,” Putin admitted of the returning convicts back in June.

California Proposes Citizen Opt-Out Button for AI

Having a way to disable a malfunctioning, let alone malicious, robot is absolutely essential to basic human rights and good governance.

Artificial intelligence can help decide whether you get a job, bank loan or housing — but such uses of the technology could soon be limited in California. Regulations proposed today would allow Californians to opt out of allowing their data to be used in that sort of automated decision making. The draft rules, floated by the California Privacy Protection Agency, would also let people request information on how automated decisions about them were made.

What’s missing from this analysis is twofold.

  1. Opt-out is framed as disablement (a complete shutdown) without the more meaningful option of a “reset” as a path out of danger. Leaving a service with nothing left behind is one thing, and it is highly unlikely and impractical given the “necessary” exceptions and clauses; leaving a trail of mistakes behind is another. The Agency should plan for reset even more than it tries to enforce the tempting but usually false promise of a hard shutdown. This has been one of the hidden (deep-in-the-weeds) lessons of GDPR.
  2. Letting people request their information on automated decisions is backwards. With AI processing on a Solid Pod (a distributed personal data store), these requests would be made to the person instead of by them. Even with the opportunity to chase their data all over the place, people are far better off achieving the same end without being saddled with the basically impossible and expensive task of finding everyone, everywhere, making decisions about them without their consent.
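The inversion described in point 2 can be sketched in a few lines. This is a toy model, not real Solid Pod code: all the class and party names here (`PersonalDataStore`, `acme-bank`, `shadow-broker`) are hypothetical, and it only illustrates the architectural point that decision-makers must request data from the person, who holds both the consent switch and the complete request log.

```python
# Hypothetical sketch: a personal data store where decision-makers must
# request access from the data subject, rather than the subject chasing
# records of automated decisions from every company.

from dataclasses import dataclass, field


@dataclass
class PersonalDataStore:
    """Toy stand-in for a Solid-style pod: the person holds the data."""
    data: dict
    consented_parties: set = field(default_factory=set)
    request_log: list = field(default_factory=list)

    def grant(self, party: str) -> None:
        self.consented_parties.add(party)

    def revoke(self, party: str) -> None:
        self.consented_parties.discard(party)

    def request(self, party: str, key: str):
        # Every request is recorded with the person, so there is a single
        # complete audit trail instead of scattered, unknown decision-makers.
        allowed = party in self.consented_parties
        self.request_log.append((party, key, allowed))
        if not allowed:
            raise PermissionError(f"{party} has no consent from the data subject")
        return self.data.get(key)


pod = PersonalDataStore(data={"income": 52000})
pod.grant("acme-bank")

print(pod.request("acme-bank", "income"))   # consented party: data flows
try:
    pod.request("shadow-broker", "income")  # no consent: refused, still logged
except PermissionError as e:
    print(e)
print(pod.request_log)
```

Note the design choice: the refusal itself is logged with the person, which is exactly the information a regulator would otherwise force individuals to reassemble one request at a time.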

See also: Italy

Italian Privacy Authority Announces Investigation Into AI Data Collections

A nod to the Italy Intellectual Property Blog for an important story that I haven’t seen reported anywhere else:

The Italian Privacy Authority announced today that it has launched an investigation to verify whether websites are adopting adequate security measures to prevent the massive collection of personal data for the purpose of training AI algorithms. […] The investigation will therefore concern all data controllers who operate in Italy and make their users’ personal data available online (and thus accessible by developers of AI services), in order to verify whether said controllers adopt adequate security measures to safeguard their users’ rights.

I am especially curious whether Italy will address the integrity of user information, such as cases where data controllers have gotten things wrong. We are overdue for state regulators to develop data-integrity breach language.

In related news, Italy is also moving forward with France and Germany on very interesting AI regulation that focuses on operational risks of the technology.

France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level. The three governments support “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs. But they oppose “un-tested norms.” […] “Together we underline that the AI Act regulates the application of AI and not the technology as such,” the joint paper said. “The inherent risks lie in the application of AI systems rather than in the technology itself.”

That direction sounds wise to me, given that the most “secure” AI technology can translate directly into the most unsafe AI. For example, Tesla’s robots have killed more people than any other robot in history because the company games narrowly focused “technology” tests (e.g. “five star” crash ratings) to mislead people into using its unsafe AI, and then tragically and foolishly promotes unsafe (arguably cruel and anti-social) operation. I’m reminded of a Facebook breach that some in CISO circles were calling “insider threat from machine”.

“Good management trumps good technology every time, yet due to the ever-changing threatscape of the tech industry, inexperienced leadership is oftentimes relied upon for the sake of expediency.” He continues that, within the world of cybersecurity, “The Peter Principle is in full effect. People progress to their level of incompetence, meaning a lot of people in leadership within cyber have risen to a level that is difficult for them to execute and often lack formal technical training. As a CISO, there is a need to configure, identify, and negotiate the cost of protecting an organization, and without the adequate experience or a disciplined approach, this mission is executed poorly.”

Speed of production that leans on inexperienced humans, who also lack discipline (regulation), opens the door to even the most “secure by design” technology turning into an operations (societal) nightmare.

America was built on and by regulation. It depends on regulation. A lack of regulation, from those who promote the permanent improvisation of tyranny, will destroy it.

SpaceX, a Private Military Company (PMC), Tries to Meddle in Another War

I’m beginning to wonder if reports like this 2002 one about Russian arms dealers are the real reason SpaceX was founded that year and has been sticking its nose into conflicts.

…corporate armies, often providing services normally carried out by a national military force, offer specialized skills in high-tech warfare, including communications and signals intelligence and aerial surveillance, as well as pilots, logistical support, battlefield planning and training. They have been hired both by governments and multinational corporations to further their policies or protect their interests.

For example, the South African-born man who founded SpaceX was very well aware that…

Two helicopter gunships piloted by South African mercenaries, for example, altered the balance of war in Sierra Leone in 1999 in favor of the government.

Fast-forward and that South African-born founder of SpaceX has been widely panned for being a Private Military Company (PMC) meddling in the Ukraine war, especially after fraudulently trying to claim he altered the balance (e.g. helped Russia).

SpaceX keeps failing on its supposed “primary” mission to get a rocket to work properly, and yet it is again distracted and diverting resources into another war that seems to involve Russia.

Starlink, a satellite internet service operated by the Elon Musk-owned SpaceX, will only be allowed to operate in the Gaza Strip following approval by the Israeli Ministry of Communication. … [The Self-proclaimed “free speech absolutist” Elon Musk] has “identified and removed hundreds of Hamas-affiliated accounts” since the start of the war.

Related: With all the bluster and bombast of a typical South African mercenary outfit, SpaceX promises to build a time machine to renegotiate all its broken promises to land on Mars by 2018.