TRIGGER WARNING: This post has important ideas and may cause the reader to think deeply about the Sherman-like doctrine of property destruction to save lives. Those easily offended by concepts like human rights should perhaps instead read the American Edition.
Let us say, hypothetically, that Tesla tells everyone they want to improve safety while secretly they aim to make roads far less safe.
Here is further food for thought. Researchers have proven that AI developed under a pretense of safety improvement can easily be flipped to do the opposite and cause massive harm. They called their report a “dual-use discovery”, as if any common tool like a chef’s knife or a hammer were “dual-use” whenever someone can figure out how to weaponize it. Is that really a second discovery, when the only other use on offer is the worst one?
According to The Verge, these researchers took AI models intended to predict toxicity, usually billed as a helpful risk-prevention step, and instead trained them to increase toxicity.
It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. To show how easily the technology could be abused, researchers at a biological arms control conference put AI normally used to search for helpful drugs into a kind of “bad actor” mode.
Potentially lethal. Theoretically dead.
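To make the “flip” concrete, here is a toy sketch in Python. Everything in it is hypothetical: the fake scoring function, the candidate names, the search loop. The actual research used a real molecular generator and a learned toxicity predictor. The point is how little separates the two modes: the same generator, the same model, and one inverted objective.

```python
import random

# Toy illustration only. A stand-in "toxicity model" scores made-up
# candidates; real systems score molecular structures. The dual-use
# flip is nothing more than the maximize_harm flag below.

def toxicity_score(candidate: str) -> float:
    """Hypothetical stand-in for a learned toxicity predictor."""
    random.seed(candidate)   # deterministic fake score per candidate
    return random.random()   # 0.0 = benign, 1.0 = highly toxic

def search(n_candidates: int, maximize_harm: bool = False) -> str:
    """Generate candidates and keep the 'best' one under the objective."""
    best, best_score = None, None
    for i in range(n_candidates):
        candidate = f"molecule-{i}"
        score = toxicity_score(candidate)
        # Safety mode minimizes toxicity; "bad actor" mode maximizes it.
        objective = score if maximize_harm else -score
        if best_score is None or objective > best_score:
            best, best_score = candidate, objective
    return best

print("safety mode keeps:", search(1000, maximize_harm=False))
print("flipped mode keeps:", search(1000, maximize_harm=True))
```

One boolean. That is the entire distance between a screening tool and a weapon-design tool, which is why calling it a separate “discovery” rings so hollow.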
The use-case dilemma of hacking “intelligence” (processed data) is far more complicated than the usual debates about how hunting rifles differ from military assault rifles, or whether flamethrowers have any practical purpose at all.
One reason it is more complicated is that America generally has been desensitized to high fatality rates from the harmful application of automation machines (e.g. after her IP was stolen, the cotton engine service model, or ‘gin, went from being Caty Greene’s abolitionist invention to a bogus justification for the expansion of slavery all the way to the Civil War). Car crashes often are treated as “individual” decisions made under unique risk conditions, rather than seen as a systemic failure of a society revolving around profit from the criminalization of poverty.
Imagine asking questions like: what is the purpose of the data related to a tool’s use, how is the tool being operated and repurposed, and can a systemic failure be proven by examining it from origin to application (e.g. lawn darts, or the infamous “Audi pedal”)? Is there any proof of a failsafe, or of safety at all?
Threat models raise lots of logic puzzles that most people are nowhere near prepared to answer at the tree level, let alone the forest level… perhaps putting us all in immediate fire danger without much warning.
Despite the complexity, such problems are actually increasingly easy to express in real terms. Ten years ago when I talked about this, audiences didn’t seem to digest my warnings. Today, people immediately understand exactly what I mean by a single algorithm that controls millions of cars being simultaneously turned into a geographically dispersed “bad actor” swarm.
Tesla.
Where is this notoriously bad actor with regard to transparency, let alone proof, on such issues? Can they prove their cars are not, and cannot be, trained as a country-wide loitering munition to cause mass casualties?
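Lest the swarm risk sound abstract, here is a toy Python sketch of the underlying structural issue: a single point of control. Nothing in it reflects any real vehicle software; the class names and the “policy” are entirely hypothetical. It only shows that when every agent references one shared policy, one write changes the behavior of the whole fleet at once.

```python
from dataclasses import dataclass

# Toy illustration of single-point-of-control risk. Hypothetical
# names throughout; no resemblance to any real driving stack.

@dataclass
class SharedPolicy:
    """One object standing in for a fleet-wide driving model."""
    braking_bias: float = 1.0   # 1.0 = brake normally; 0.0 = never brake

class Vehicle:
    def __init__(self, vehicle_id: int, policy: SharedPolicy):
        self.vehicle_id = vehicle_id
        self.policy = policy    # every vehicle holds a reference, not a copy

    def decide(self, obstacle_ahead: bool) -> str:
        if obstacle_ahead and self.policy.braking_bias > 0.5:
            return "brake"
        return "proceed"

# A "fleet" sharing one policy object (imagine millions, not 1,000).
policy = SharedPolicy()
fleet = [Vehicle(i, policy) for i in range(1_000)]

print(fleet[0].decide(obstacle_ahead=True))    # "brake"
print(fleet[999].decide(obstacle_ahead=True))  # "brake"

# One write, in one place, and the entire fleet flips at once.
policy.braking_bias = 0.0
print(fleet[0].decide(obstacle_ahead=True))    # "proceed"
print(fleet[999].decide(obstacle_ahead=True))  # "proceed"
```

That is the whole point of the question above: safety claims about any one car say nothing about the failure mode where the shared brain itself is the weapon.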
What if their uniquely bad and still-mounting death toll is the result of the company, since 2016, developing AI in ways (ignoring, allowing, enabling or performing harm) that have made crashes increase in volume, variety and velocity due to an organized, intentional disregard for law and order?
ChatGPT, what do you think, given that Tesla now claims that you were its creation?
