A judge has found “reasonable evidence” that Elon Musk and other executives at Tesla knew that the company’s self-driving technology was defective but still allowed the cars to be driven in an unsafe manner anyway, according to a recent ruling issued in Florida.
Palm Beach county circuit court judge Reid Scott said he’d found evidence that Tesla “engaged in a marketing strategy that painted the products as autonomous” and that Musk’s public statements about the technology “had a significant effect on the belief about the capabilities of the products”.
Tesla’s CEO promised his customers that by 2018 they “do not need to touch the wheel”. This brand new 2018 Model 3 in California crashed almost immediately after testing his words, revealing truth: without fraud there would be no Tesla. Tesla software accelerated into pedestrians, parked motorcycles and a van. The company has for years manipulated courts and press to cover up this very important 2018 crash, while Uber’s similar crash gathered international condemnation. Source: US District Court.

The Tesla ran a red light and crashed into the MIDDLE of a giant white bus in an empty intersection. Calling Tesla’s latest 2023 version of its failed driverless software blind would be… unfair to the blind. Source: Sacramento County Sheriff’s Office.

Tesla deaths compared to all other EVs showed the obvious problem by 2021. It’s about accountability for lies, all about the Tesla CEO who regularly lies. Source: tesladeaths.com

Tesla is the worst engineered vehicle on the road, with the most defective ADAS, by far. It kills far more people than all other brands combined. Source: Washington Post
…technology has only grown more ubiquitous, not least because selling it is a lucrative business, and A.I. companies have successfully persuaded law-enforcement agencies to become customers. […] Last fall, a man named Randal Quran Reid was arrested for two acts of credit-card fraud in Louisiana that he did not commit. The warrant didn’t mention that a facial-recognition search had made him a suspect. Reid discovered this fact only after his lawyer heard an officer refer to him as a “positive match” for the thief. Reid was in jail for six days and his family spent thousands of dollars in legal fees before learning about the misidentification, which had resulted from a search done by a police department under contract with Clearview AI. So much for being “100% accurate.”
You think that’s bad?
Imagine how many people in America since the 1960s have been entangled in fines, jail or even death due to inaccurate and unreliable “velocity” sensors or “plate recognition” used for racial profiling in law enforcement. The police know about the technology’s flaws, and judges do too, yet far too often they treat their heavy investments in poorly measured and irregularly operated technology as infallible.
They also have some clever court rules to protect their players in the game. For example, try walking into a court and saying this:
maximum acceleration = velocity accuracy / sample time

a_max = ±v_acc / t_i

where:
a_max = maximum acceleration
±v_acc = velocity accuracy
t_i = sample time
A speed sensor typically measures the velocity of an object traveling a set distance (between “gates” that are within range of the sensor). Only targets within these parameters will get a fair detection or reading.
…accelerations must not be neglected in the along-track velocity estimation step if accurate estimates are required.
If a radar sensor with a ±1.0 mph velocity accuracy samples once per second, it cannot accurately read a target whose velocity changes by more than 1.0 mph per second. A half-second sample raises that limit to a 2.0 mph change per second, a quarter-second sample to 4.0 mph per second, and so forth.
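To make the arithmetic concrete, here is a minimal Python sketch of that limit; the ±1.0 mph accuracy figure is only an assumption taken from the example above, not a spec for any particular radar unit.

```python
# Back-of-the-envelope check of a_max = v_acc / t_i.
# The ±1.0 mph velocity accuracy is an assumed example value, not a vendor spec.

def max_measurable_acceleration(velocity_accuracy_mph: float,
                                sample_time_s: float) -> float:
    """Largest acceleration the sensor can follow, in mph per second."""
    return velocity_accuracy_mph / sample_time_s

v_acc = 1.0                      # assumed ±1.0 mph velocity accuracy
for t_i in (1.0, 0.5, 0.25):     # sample times in seconds
    a_max = max_measurable_acceleration(v_acc, t_i)
    print(f"sampling every {t_i} s -> acceleration limit {a_max:.1f} mph/s")
```

Running it reproduces the 1.0, 2.0 and 4.0 mph-per-second limits above: the faster the sampling, the more acceleration the device can tolerate before its reading stops being trustworthy.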
What?
In other words, you step up to the judge and tell them their beloved, expensive police toy is unable to measure vehicle velocity when it changes faster than a known, calculable limit of the radar device, a problem especially pronounced around common road curves and at oblique vehicle angles (e.g. the “cosine effect” popularized in school math exams).
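For the cosine effect specifically, a small illustration (the 60 mph speed and the angles are arbitrary assumptions): radar only measures the velocity component along its line of sight, so as the angle between the beam and the vehicle’s path grows, say as the road curves, the reading drops even when the true speed never changes.

```python
import math

def radar_reading_mph(true_speed_mph: float, angle_deg: float) -> float:
    """Radar sees only the line-of-sight component:
    reading = true speed * cos(angle between beam and direction of travel)."""
    return true_speed_mph * math.cos(math.radians(angle_deg))

true_speed = 60.0                # mph, an assumed example speed
for angle in (0, 10, 25, 45):    # degrees off the direction of travel
    print(f"{angle:>2} deg off-axis -> radar reads "
          f"{radar_reading_mph(true_speed, angle):.1f} mph")
```

A reading that slides from 60.0 to 42.4 mph purely from geometry looks, to the device, like a velocity change it cannot attribute correctly, which compounds with the acceleration limit above.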
Any trustworthy court assessment would take a look at the radar specs and the acceleration risk to sensor accuracy… to which the judge might spit their chew into a bucket and say “listen here Mr. smarty-math-pants big-city slicker from out-of-town, you didn’t register with our very nice and welcoming court here as an expert, therefore you are very rude and nothing you say can be heard here! Our machine says you are… GUILTY!” as they throw out any and all evidence that proves technology can be wrong.
Source: “Traffic Monitoring with SAR: Implications of Target Acceleration”, Microwaves and Radar Institute, DLR, Germany
Not saying this actual court exchange really happened in rural America, or that I gave a 2014 BlackHat talk about this happening (to warn that big data systems are highly vulnerable to breaches of integrity), but… anyway, have you seen Blazing Saddles?
“Nobody move or the N* gets it!”
It’s like saying guns don’t kill people, AI with guns kills people.
AI is just technology and it makes everything worse if we allow it to escape the fundamental social sciences of where and how people apply technology.
Fast forward (pun not intended) and my warnings from 2014 big data security talks have implications for things like “falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles”.
…we present two novel falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles. Our methods use rapidly-exploring random trees to generate motions for a leading vehicle such that the ACC under test causes a rear-end collision.
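The authors use rapidly-exploring random trees; as a rough illustration of the falsification idea only (not their method, and not any real vehicle’s controller), here is a minimal Python sketch: a toy constant-time-gap ACC model plus a blind random search over lead-vehicle acceleration profiles, hunting for any profile that ends in a rear-end collision. Every gain and limit in it is an assumption chosen for illustration.

```python
import random

DT = 0.1                 # simulation step, seconds
HEADWAY = 1.5            # assumed ACC time gap, seconds
MAX_BRAKE = -6.0         # assumed follower braking limit, m/s^2
MAX_ACCEL = 2.0          # assumed follower acceleration limit, m/s^2

def acc_command(gap_m: float, ego_speed: float, lead_speed: float) -> float:
    """Toy constant-time-gap controller (an assumption, not any vendor's logic)."""
    desired_gap = 2.0 + HEADWAY * ego_speed
    accel = 0.5 * (gap_m - desired_gap) + 0.8 * (lead_speed - ego_speed)
    return max(MAX_BRAKE, min(MAX_ACCEL, accel))

def rear_end_collision(lead_profile, init_gap=30.0, init_speed=25.0) -> bool:
    """Simulate one scenario; True if the following car hits the lead car."""
    gap, ego_v, lead_v = init_gap, init_speed, init_speed
    for lead_a in lead_profile:
        lead_v = max(0.0, lead_v + lead_a * DT)
        ego_v = max(0.0, ego_v + acc_command(gap, ego_v, lead_v) * DT)
        gap += (lead_v - ego_v) * DT
        if gap <= 0.0:
            return True
    return False

def falsify(trials=10_000, horizon_steps=300):
    """Blind random search over lead-vehicle acceleration profiles, standing in
    for the paper's rapidly-exploring random trees."""
    for _ in range(trials):
        profile = [random.uniform(-8.0, 2.0) for _ in range(horizon_steps)]
        if rear_end_collision(profile):
            return profile
    return None

print("collision found" if falsify() else "no counterexample within this search budget")
```

Even this crude search finds collisions against a naive controller, because the lead car is allowed to brake harder than the follower; the point of the paper’s RRT approach is to do that hunting systematically against the ACC actually under test.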
Falsification in AI safety has literally become a dangerous life and death crisis over the past decade, with some (arguably racist) robots already killing over 40 people.
Dead. Killed by AI.
Cars don’t kill people, AI in cars kills people. In fact, since applying AI to cars in a rush to put robots in charge of life or death decisions, Tesla has killed more people in a few short years than all other robots in history combined.
That’s a fact, as we recently published in The Atlantic. Predictable disaster, I say, because I warned about exactly this result for the past ten years (e.g. 2016 Ground Truth Keynote presentation at BSidesLV). Perhaps all these deaths are evidence of what courts now refer to as product “harm by design” due to a documented racist and antisemite.
…performance specifications ensure the devices are accurate and reliable when properly operated and maintained…
Specifications. Proper operation.
Show me the comparable setup from NIST for a conforming list of AI image reading devices used by police, not to mention definitions of proper operation.
Let’s face it (pun not intended), any AI solution based on sensor data of any kind, including cameras, should have come under the same scrutiny as other reviews (human or machine) of sensor data, to avoid repeating all the inexcusable rookie mistakes and injustices of overconfident, technology-laden police over several prior decades.
And on that note, the police should expect to be severely harmed themselves by AI they operate carelessly.
Cluster of testicular cancer in police officers exposed to hand-held radar
Where are all the social scientists when you need them?
“No warning came with my radar gun telling me that this type of radiation has been shown to cause all types of health problems including cancer,” [police Officer] Malcolm said. “If I had been an informed user I could have helped protect myself. I am not a scientist but a victim of a lack of communication and regulation.” […] “We’re putting a lot of people at risk unnecessarily,” [Senator] Dodd said. “The work of police officers is already dangerous, and officers should not have to worry about the safety of the equipment they use.”
Which reminds me of the police officers who have been suing gun manufacturers over a lack of safety. You’d think, given the track record of high risk technology in law enforcement, no police department in their right mind would apply any AI to their work without clear and tested safety regulations. If you find any police department foolishly buying the notoriously deadly AI of Tesla, for example, they are headed directly into a tragic world of injustice.
Judge finds ‘reasonable evidence’ Tesla knew self-driving tech was defective
Well, does anyone really have any doubts now about Elon Musk being antisemitic?
First, read this 1936 book on why nobody ever should say some of their friends are Jews in response to accusations of antisemitism.
Robert Gessner (1907-1968) was a Jewish American screenwriter and author born in Michigan. He earned a BA from the University of Michigan in 1929 and an MA from Columbia University in 1930, after which New York University immediately hired him to teach. He traveled through several European countries in 1934, taking photographs and filming. In 1936 Gessner published a book about these journeys, in which he explicitly warned of the Nazi threat in Europe.
That phrase, “some of my best friends are Jews,” is a dire warning. It’s a well known phrase of antisemitism associated with Nazi Germany.
Second, it’s a book about how some things apparently haven’t changed. Still, to this day, we see a well known (and researched), unmistakable phrase of antisemitism.
Third, note the phrase chosen by the man increasingly becoming known for… his antisemitism.
“I’m aware of that old sort of trope of like, you know, ‘I have a Jewish friend,’” Musk said. “I don’t have a Jewish friend. I think probably, I have twice as many Jewish friends as non-Jewish friends. That’s why I think I like to think I’m Jewish basically.”
A twist.
He says he can avoid the trope, then plows straight into it by implying some of his best friends are Jews, hinting that he has the “numbers”. Then he clumsily erases his friends’ Jewish identities by claiming he is “basically” them, as if unclear (perhaps revealing his deeper thought: “my best friends are me”).
This is evidence of the lazy and arrogant antisemite who doesn’t even try to avoid the most glaringly obvious mistakes of history.
For as long as we could remember, the adults had lived in this contradictory way with complete unconcern. One was friendly with individual Jews whom one liked, just as one was friendly as a Protestant with individual Catholics. But while it occurred to nobody to be ideologically hostile to the Catholics, one was, utterly, to the Jews. In all this no one seemed to worry about the fact that they had no clear idea of who “the Jews” were.
Since 2012 I have warned, here and in talks, that the biggest and most significant problem in big data security is integrity. The LLM zealots didn’t listen.
By 2016 in the security conference circuit I was delivering a series of talks about driverless cars being a huge looming threat to pedestrian safety.
Eventually, with Uber and Tesla both killing pedestrians in the spring of 2018, I warned that putting such low quality robots on the road would increase conflict and fatalities even more, instead of helping safety.
Well, as you might guess from my failure to slow LLM breaches, I had little to no impact on the people in charge of regulating driverless engineering; nowhere near enough influence to stop predictable disasters.
It’s especially frustrating to now read that the NHTSA, which was politically corrupted by Tesla in 2016 to ignore robot safety and hide deaths, is still so poorly prepared to prevent driverless cars from causing pedestrian deaths.
Feds Have No Idea How Many Times Cruise Driverless Cars Hit Pedestrians
Speaking of data, confidence in driverless has continued to fall as evidence rolls in, a classic product management dilemma of where and how to field safety reports. Welcome to 2012? We could have avoided so much suffering and loss.
a blog about the poetry of information security, since 1995