Category Archives: Security

Tesla is Being Sued Up to 13 Times EVERY Day

For a decade now I have vocally called out the catastrophically poor engineering culture of Tesla, which has predictably led to an explosion of defects and technical debt. In a 2016 security conference keynote (after just the second Tesla “driverless” fatality) I called them out as a looming Titanic problem.

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

Unfortunately my warnings didn’t land, and the CEO continued unrestrained. A failure of the industry to self-regulate, and of the government to step in and prevent obvious public harms (e.g. Tesla corrupted the 2016 White House and NHTSA), brings us to where we are today.

Tesla has shown a documented lack of the quality controls, let alone the basic ethics and morality, required for successful engineering. That is why the problem was so easy to see in 2016, and why the disastrous results were so predictable. Tests of Tesla “driverless” that I was personally involved with back then very quickly revealed a high risk of catastrophe. Many could describe the fraud. The question is who refused to listen.

The discussion is now shifting away from the esoterica of safety expertise and obscure security conferences toward law offices, as the big questions of civil and even criminal enforcement come into focus for the general public.

Notably, Elon Musk is engaged in noisy propagandist defense tactics, seemingly very worried about his exploding legal troubles and accountability for his huge mistakes. He’s been on his social media soapbox lately fretting that trial by an American jury might lead to conviction of criminals, even though that is exactly what is supposed to protect society from people like him.

In fact we have seen dozens of people unnecessarily killed by Tesla in what amounts to a long-running advance fee fraud scheme, based on false “safety” claims about future technology that never arrives. Tesla’s aggressively positioned legal team meanwhile has very quickly settled two high profile death cases, in an obvious hush and hide strategy.

Take just one example of the trend. Elon Musk overruled safety experts and forced his engineers to remove radar from “driverless” road robots, all while claiming a concern for “safety”. The sad result: from May 2021 to May 2024 these Tesla robots more than doubled their death toll. Think about the kind of robot company that in five years (2016-2021) had already killed more people than the whole prior history of robots, and then removed safety sensors… Is that not criminal?

Paying the families of victims to stay silent isn’t going to scale well.

I mean imagine the Titanic CEO looking at his ship sunk by an iceberg, and then demanding ships remove telescopes to see even less. Are experts supposed to be surprised to see double the death toll?

The logic supposedly floated by Tesla management about their safety strategy sounds so dumb as to be almost unbelievable (paraphrasing): “humans see with only four highly sophisticated light and sound sensors (eyes and ears) so why can’t we quickly and cheaply replace humans with two cheap video cameras and some rushed software?”
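The safety argument being discarded there is simple redundancy math: if an obstacle only needs to be seen by one of several independent sensors, the combined miss rate multiplies down rapidly. A minimal sketch of that arithmetic follows; the failure rates are invented for illustration and are not Tesla data, and the function name is mine, not anyone’s API:

```python
def fused_miss_rate(miss_rates):
    """Probability that ALL sensors miss an obstacle (the fused
    system fails), assuming each sensor fails independently."""
    p = 1.0
    for m in miss_rates:
        p *= m
    return p

# Hypothetical, illustrative numbers only:
camera_only = fused_miss_rate([0.01])         # camera misses 1 in 100
camera_plus_radar = fused_miss_rate([0.01, 0.05])  # both must miss together
```

Under these made-up rates, adding even a mediocre radar cuts the miss rate by a factor of twenty; removing it hands that factor back.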

As someone who helped support early development at Space Applications of GPS and modern HUD technology to solve for low-altitude, low-visibility transit and prevent further high-cost catastrophes (e.g. Operation Eagle Claw brownout)… the Tesla “know-nothing” PR stunts to trick the public have been for me like a decade of nails on a chalkboard.

If I stuck two plastic googly-eyes on a car hood and called it “Alice the driverless car,” I would be tapping into and manipulating the same human belief system that Tesla has been exploiting for its fraud. Social engineering hacks aren’t rocket science, unfortunately. Worse, countermeasures against advance fee fraud attacks are very difficult, as I’ve discussed here since 2005.

Now, after I recently helped document the constant exodus of top lawyers from Tesla (following an earlier exodus of top engineers), I noticed that the lawsuits are piling up as fast as, or faster than, the technical debt and deaths.

Expect 2024 to go up, way up.

A rising curve of legal trouble probably looks sadly familiar to regular readers of this blog, who monitor tesladeaths.com along with me. Here’s the chilling reminder again that in 2016 we could have done more and prevented hundreds of unnecessary deaths.

Deaths/year: the more Tesla on roads the more deaths. Without fraud there would be no Tesla. Source: tesladeaths.com

Tesla Owner Warns FSD 12.4 is Unsafe, Unusable, Worst Ever

Tesla has made a series of catastrophic management decisions that have rendered its “automation” hardware and software the worst in the industry.

Removing radar and lidar sensors to leave only low grade cameras, and repeatedly forcing qualified staff into dead-ends and then replacing them with entry-level hires who would not disagree with the CEO… shouldn’t have been legal for any company regulated on public road safety.

The 1954 novel challenged notions of raw self-governance, arguing that intolerance and violence are inflamed when the boundaries of society (e.g. ethics, morals) are removed

“Lord of the Flies” might be the best way to describe a “balls to the wall” fantasy of rugged individualism behind an unregulated yet centrally dictated robot army of technocratic “autonomy”.

Now even the most loyal Tesla investors, who have sunk their entire future and personal safety into the product and its fraud, are forced to reveal the desperate and declining state of the company.

According to a Reddit forum chat, FSD 12.4 is unusable because it is so obviously unsafe.

The idea that “they will fix it soon” comes from the same user account that just posted a belief that Tesla’s vaporware “robotaxi” strategy is real. They believe, yet they also can’t believe, which is behavior typical of advance fee fraud victims.

Musk’s erratic leadership played a role in the unpolished releases of its Autopilot and so-called Full Self-Driving features, with engineers forced to work at a breakneck pace to develop software and push it to the public before it was ready. More worryingly, some former Tesla employees say that even today, the software isn’t safe for public road use, with a former test operator going on record saying that internally, the company is “nowhere close” to having a finished product.

Notably, the Tesla software continues to “veer” abruptly on road markings, which seems related to its alarmingly high rate of fatalities.

Big jump in the wrong direction. Removed constraints that prevented deaths. Training to cause harms.

Here’s a simple explanation of the rapid decline of Tesla engineering through expensive pivots, showing more red flags than a Chinese military parade:

First dead end? AI trained on still images. They discovered what everyone knew: the more a big neural network ingested, the less it improved. It made catastrophic mistakes and people died.

Restart and second dead end? A whole new real-world dataset for AI training on video. After writing nearly 500 KLOC (thousand lines of code) they discovered what everyone knew: Bentham’s philosophy of the minutely orchestrated world was impossibly hard. Faster was never fast enough, and complex was never complex enough. It made catastrophic mistakes and people died.

Restart and soon to be third dead end? An opaque box they can’t manage and don’t understand themselves is being fed everything they can find. An entirely new dataset for a neural net depends on thoughts and prayers, because they sure hope that works.

It doesn’t.

This is not improvement. This is not evolution. They are throwing away everything and restarting almost as soon as they get it to production. This is privilege, an emperor with no clothes displaying sheer incompetence by constantly running away from past decisions. The longer the fraud of FSD (Lord of the Flies) continues unregulated, the worse it gets, increasing threats to society.

Update three days later: View from behind the wheel. This is NOT good.

African Dictator: Elon Musk’s Life of Censorship, Fraud and Self-Praise

Elon Musk is rolling out some of the most obvious self-dealing censorship controls in history.

Here is how the supposed “top story” looks now on Twitter, reminiscent of weak leaders who fraudulently heap praise upon themselves to appear like something they are not.

That’s pure disinformation.

Out of all the news in the world, this is the propaganda Twitter is now pushing. Allegedly generated by “AI”, automating such text is arguably worse than anything I’ve seen in 40 years of studying the problem.

A petty, cruel and corrupt African dictator, in bed with Russia and China, is what Musk seems to emulate, and how he will be remembered… or perhaps why he will be quickly forgotten like the C Squadron of Rhodesia.

For what it’s worth, Musk (as exposed by new fraud allegations) has literally described himself as the emperor, one who sits on top of a pyramid.

Meanwhile on Reddit, an army of “moderators” are busy erasing speech that challenges Musk’s elephantine fraud. Given Musk is obsessively online curating his following, it’s reasonable to assume at least some of the moderator accounts on his subreddits are actually him.

For example, when Musk worried publicly how a high profile conviction for fraud in America means that Musk also could be convicted of crimes…

Database Without Authentication Leaks “biometric identity information of members of the police, army, teachers, and railway workers”

The database vendor isn’t mentioned in the report but I think we can probably all guess the name.

Aside from that important fact, this report is about the dangers of centralizing biometrics in a single place, where a single mistake harms practically everyone in society. Not all, but enough that everyone should worry.

The publicly exposed database contained 1,661,593 documents with a total size of 496.4 GB. I saw documents containing: facial scan images, finger prints, signatures (in English and Hindi), identifying marks such as tattoos or scars, and much more. There were also scans of documents such as birth certificates, testing and employment applications, diplomas, certifications, and other education related files. Among the most concerning files were what appeared to be the biometric data of individuals from the police and military in verification documents. Upon further investigation, I saw documents indicating the records belonged to two separate entities which suggests they operate under the same ownership: ThoughtGreen Technologies and Timing Technologies, each of which provide application development, analytics, development outsourcing, RFID technology, and biometric verification services.

Fingerprints are, in other words, already public and very widely distributed, if you think about how often and where you have been leaving yours… like on a glass at a restaurant.

Source: “The Quantum Mechanics Of Fingerprints On Your Water Glass”, In The Loop

However, having your fingerprints grabbed along with 1.5 million other people’s at the same time (because a single database vendor on the Internet failed to require authentication) is a different issue.
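The core failure behind this class of leak is brutally simple to probe for: a database endpoint that answers read requests carrying no credentials at all. As a minimal sketch of the kind of check researchers run (the URL and function name here are hypothetical, not from the report), it amounts to little more than:

```python
import urllib.request


def is_publicly_readable(url, timeout=5.0):
    """Return True if the endpoint serves a successful response to a
    request that carries no credentials whatsoever."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200  # data served with no auth = exposed
    except Exception:
        # 401/403 challenges, refused connections, timeouts: the endpoint
        # is at least not wide open to anonymous reads.
        return False
```

A properly configured service would fail this check by demanding credentials; the report above describes an endpoint that simply handed over 496.4 GB of records.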

Related:

Here’s a very similar story, where hacking the data service vendor Snowflake just led to a massive leak from many of their customers.

In the conversation with Hudson Rock, the threat actor reveals that there is much more to the story than these two breaches, and that additional major companies suffered a similar fate, allegedly including:

— Anheuser-Busch
— State Farm
— Mitsubishi
— Progressive
— Neiman Marcus
— Allstate
— Advance Auto Parts

Further explaining the source of the hack, the threat actor adds that all of these breaches stem from the hack of a single vendor — Snowflake. […] To put it bluntly, a single credential resulted in the exfiltration of potentially hundreds of companies that stored their data using Snowflake, with the threat actor himself suggesting 400 companies are impacted.

When a single employee can be compromised to give access to hundreds or thousands of customers, the Snowflake response probably shouldn’t be that context is needed.

Even worse is when they start saying that Snowflake wasn’t involved in any way with the massive theft of customer data from Snowflake. Uh huh.

Here’s what they allegedly are trying to snow reporters with:

On May 31st, Snowflake released a statement claiming they are investigating industry-wide identity-based attacks that have impacted “some” of their customers.

Industry-wide is another way of saying baseline.

What Snowflake inadvertently is saying is they fell below an acceptable baseline while being trusted to NOT do exactly that.

Watch the “who me, am I the baddie” Snowflake now try to point the finger at its customers, a known horrible idea and safety anti-pattern. Like blaming bank customers for the vault being robbed of their money. Or blaming Tesla owners for being killed by the Autopilot.

That’s very bad news for some, even if not every single customer. It’s a lot more bad news than if Snowflake had done more to prevent a single employee compromise affecting so many customers, let alone turning a blind eye to widespread known threats that would very predictably harm their customers.

Negligence? Due diligence? You make the call. Every Snowflake customer now should be planning to exit that vendor to find better care, not least of all because of how Snowflake is responding.