A nod to the Italy Intellectual Property Blog for an important story that I haven’t seen reported anywhere else:
The Italian Privacy Authority announced today that it has launched an investigation to verify whether websites are adopting adequate security measures to prevent the massive collection of personal data for the purpose of training AI algorithms. […] The investigation will therefore concern all data controllers who operate in Italy and make their users’ personal data available online (and thus accessible by developers of AI services), in order to verify whether said controllers adopt adequate security measures to safeguard their users’ rights.
I am especially curious whether Italy will address the integrity of user information, such as cases where data controllers have gotten things wrong. We are overdue for state regulators to develop data integrity breach language.
In related news, Italy is moving forward with France and Germany on a very interesting approach to AI regulation, one that focuses on operational risks of the technology.
France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level. The three governments support “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs. But they oppose “un-tested norms.” […] “Together we underline that the AI Act regulates the application of AI and not the technology as such,” the joint paper said. “The inherent risks lie in the application of AI systems rather than in the technology itself.”
That direction sounds wise to me, given that the most "secure" AI technology can translate directly into the most unsafe AI. For example, Tesla's robot has killed more people than any other robot in history because the company games narrowly focused "technology" tests (e.g. "five star" crash ratings) to mislead people into trusting its unsafe AI, and then tragically and foolishly promotes unsafe (arguably cruel and anti-social) operation. I'm reminded of a Facebook breach that some in CISO circles were calling "insider threat from machine".
"Good management trumps good technology every time, yet due to the ever-changing threatscape of the tech industry, inexperienced leadership is oftentimes relied upon for the sake of expediency." He continues that within the world of cybersecurity, "The Peter Principle is in full effect. People progress to their level of incompetence, meaning a lot of people in leadership within cyber have risen to a level that is difficult for them to execute and often lack formal technical training. As a CISO, there is a need to configure, identify, and negotiate the cost of protecting an organization, and without the adequate experience or a disciplined approach, this mission is executed poorly."
Speed of production that leans on inexperienced humans who also lack discipline (regulation) opens the door to even the most "secure by design" technology turning into an operational (societal) nightmare.
America was built on and by regulation. It depends on regulation. A lack of regulation, from those who promote the permanent improvisation of tyranny, will destroy it.