2026 Buick GL7 Wins Driverless Contest, Tesla Falls to Ninth Place

An interesting post in Chinese news media reveals that GM just topped a driverless competition.

The Ningbo station’s route was extremely challenging. The biggest difference from previous competitions was that the route was completely hidden before the race, equivalent to a closed-book exam. The entire route spanned 29 kilometers, passing through 28 traffic lights, 5 waypoints, and 8 test points, examining the participating vehicles’ city NOA intelligent driving capabilities in traffic scenarios including narrow community roads, roundabouts, blind spot U-turns, artificial obstacles, U-turns, village roads, right-turn U-turns, and rural roads.

[…]

According to the judging panel, the top three finishers in the Second Smart Driving Competition Ningbo Station City NOA Race were:

Champion: Buick GL7!
Second place: Yangwang U7!
Third place: Zeekr 9X

The real story is that Momenta’s R6 generation of its autonomous driving AI system is essentially a version 1.0, and it is beating Tesla’s latest release (version 13? 14?). The Buick gets the win, but Momenta is the secret sauce. It also shows why Tesla fell almost all the way out of the top ten, squeaking into ninth place.

Traditional systems try to copy human drivers. Reinforcement learning (RL) systems, such as Momenta’s “Flywheel”, can potentially handle edge cases better than humans because they have practiced scenarios millions of times in simulation. Rather than separate modules for perception, planning, and control (Tesla’s approach until recently), Momenta’s RL system integrates everything into one neural network that learns holistically.
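
To make the architectural contrast concrete, here is a minimal, hypothetical sketch of the two approaches. Nothing here is any vendor’s real code; the observation fields, thresholds, and weights are invented for illustration.

```python
# Hypothetical sketch of the two architectures; not any vendor's real code.
from dataclasses import dataclass

@dataclass
class Observation:
    gap_to_oncoming_m: float  # distance to oncoming traffic
    ego_speed_mps: float

# Modular pipeline: separate, inspectable stages with readable rules.
def plan(obs: Observation) -> str:
    return "go" if obs.gap_to_oncoming_m > 30.0 else "wait"

def control(decision: str) -> float:
    return 2.0 if decision == "go" else 0.0  # throttle command

# End-to-end policy: one learned function from observation to control.
# Its "reasoning" lives in the weights, not in rules anyone can read.
def end_to_end_policy(obs: Observation, weights=(0.05, -0.1)) -> float:
    score = weights[0] * obs.gap_to_oncoming_m + weights[1] * obs.ego_speed_mps
    return max(0.0, min(2.0, score))  # clamp to a throttle range

obs = Observation(gap_to_oncoming_m=42.0, ego_speed_mps=5.0)
print("modular:", control(plan(obs)))         # 2.0, traceable to an if-then rule
print("end-to-end:", end_to_end_policy(obs))  # 1.6, traceable only to weights
```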

Here’s what that meant in competition: At one test point—a basic protected U-turn that any beginner human should handle—nine vehicles required human takeover, including multiple premium Chinese EVs and the Tesla. The Buick GL7 sailed through. It had already “practiced” that exact scenario ten million times in simulation, learning not just what to do, but when hesitation creates danger and when aggression does.

As great as this sounds, we’re looking at the next frontier in automotive litigation. When an RL-powered vehicle crashes, the “why” isn’t in if-then rules you can read—it’s in billions of neural network weights shaped by reinforcement learning.

You can’t depose a neural network. You can’t cross-examine an algorithm that “learned” through trial-and-error in simulation. The traditional questions—”What did the system detect? When did it decide to brake? What rule did it follow?”—become meaningless when the decision-making process is a black box of interconnected weights.

The manufacturer will say: “Our system was trained on 3 billion kilometers of data and passed rigorous testing.” Your expert will need to ask: “But what reward function shaped its learning? Did it prioritize smooth rides over collision avoidance? Can you reconstruct why it made this specific decision?”
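
To show what the reward-function question means in practice, here is a toy sketch of how weighting choices can tilt a learned policy toward smooth rides over collision avoidance. Every number, name, and scenario below is invented for illustration; none of it comes from Momenta, Tesla, or any real training pipeline.

```python
# Hypothetical sketch: how reward weights tilt a learned policy.
# All numbers and names are invented for illustration only.

def reward(jerk, near_miss, progress_m, w_comfort, w_safety, w_progress=0.5):
    """Score one candidate action the way a training pipeline might."""
    r = w_progress * progress_m       # reward for making progress
    r -= w_comfort * jerk             # penalty for harsh, uncomfortable motion
    if near_miss:
        r -= w_safety                 # penalty for a close call
    return r

# Two candidate actions when a pedestrian steps out:
hard_brake = dict(jerk=8.0, near_miss=False, progress_m=0.5)
keep_speed = dict(jerk=0.0, near_miss=True, progress_m=3.0)

for label, w_comfort, w_safety in [("comfort-weighted", 1.0, 5.0),
                                   ("safety-weighted", 0.1, 100.0)]:
    scores = {name: reward(**a, w_comfort=w_comfort, w_safety=w_safety)
              for name, a in [("hard_brake", hard_brake),
                              ("keep_speed", keep_speed)]}
    print(label, "prefers:", max(scores, key=scores.get))
# comfort-weighted prefers: keep_speed  <- the smooth ride beats the near miss
# safety-weighted prefers: hard_brake
```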

Blade Runner where are you? If the machine can’t explain itself, can it be trusted with safe operation? We need the Voight-Kampff test…

Deckard on the hunt with his special weapon that kills robots, after they falsely become convinced they are indestructible.

On top of its track performance, the new Buick GL7 costs just $25K—roughly half what a Tesla Model 3 with Full Self-Driving costs in China. Momenta achieves this with an optimized sensor suite: 11 cameras, 1 radar, 1 LiDAR. Not the fanciest hardware, but the smartest software.

This is the “good enough” disruption of great engineers playing out in real time. Tesla blew years and billions on a vertical integration that barely works. Momenta built a platform that any automaker can license, achieved far superior performance, and did it at half the cost. Their version 1.0 just made every Tesla look like it runs version oh no.

Tesla built a mythology.

Momenta built a product.

Tesla Autopilot With Sleeping Driver Crashes Into Police Car

It’s not just yet another story about Autopilot design flaws that allow Tesla drivers to fall asleep; it’s that Tesla STILL can’t seem to see a giant high-visibility police car.

Barrington Hills police arrested and charged a Tesla driver who claimed they were asleep and their car was in autopilot when it crashed into a South Barrington police car. The Village of South Barrington said a South Barrington police squad was involved in a crash on October 15.

Scientists: Napoleon’s Mistreated Army Was Dying Faster Than Enemies Could Kill Them

600,000 troops were destroyed by Napoleon’s mistreatment, leaving barely 20,000 alive. This scene captures the desperation of their existence as they burned whatever they could find for warmth, including regimental standards and flags. These weren’t just pieces of cloth; they were sacred symbols of military honor and unit identity that French soldiers burned for basic survival, absent of any pride. Source: Wojciech Adalbert Kossak’s woodcut depicting the French retreat on 29 November 1812.
For all the extravagant jewelry and fine dining the ruthless Napoleon loved to shower on himself, his troops basically died as disposable slaves.

Binder says, “We have these paintings in the museums of soldiers in shiny armors, of Napoleon on his horse, fit young men marching into battle.”

“But in the end, when we look at the human remains, we see an entirely different picture,” she says.

It’s a picture of lifelong malnutrition, broken feet from marching too far, too quickly, and bodies riddled with disease.

Napoleon was truly a horrible human. The Grande Armée marched without adequate supply lines because his plan was literally to rape and pillage the land—as if his soldiers could sustain themselves while marching hundreds of miles into hostile territory. When Russia came up empty, hundreds of thousands of his own men starved and froze to death. Meanwhile, his baggage train advanced and retreated with his expansive silver dinnerware and fresh steaks.

Scientists are thus proving a subtext of the well-known disasters: Napoleon was never building a professional army. He was instead rapidly extracting every ounce possible from expendable human material in a hopeless imperial ambition that couldn’t last.

Authoritarian systems consistently demonstrate this pattern of toxic leadership that treats humans as disposable, while maintaining elaborate fake performances of power and legitimacy to hide their dangerous extraction.

The gap that emerges between the storytelling of museum paintings and the facts from modern bone pathology isn’t just about artistic license; it’s evidence of horribly corrupted power systematically erasing human cost from its projects and logs.

The devastating supply line failure that killed his own men wasn’t logistical incompetence—it was a strategy of “efficiency” coming to bear. Napoleon’s fail-faster doctrine did, in fact, fail faster, to the tune of more than 400,000 of his own soldiers destroyed for… nothing.

Charles Minard’s renowned graphic of Napoleon’s 1812 march on Moscow. The tremendous casualties suffered show in the thinning of the lines (1 millimeter of thickness equals 10,000 men) through space and time.

Napoleon is still falsely framed as a military genius rather than as a mass murderer, someone who burned everything he touched, destroyed human lives at an industrial scale, and then “efficiently” lost it all. His “strong man” propaganda continues to work centuries later, which should make us deeply skeptical of how current authoritarian systems (e.g. Trump) present their own real costs.

Tesla FSD Shows AI Getting Worse Over Time

The great myth of AI is that it will improve over time.

Why?

I get it, as I warned about AI in 2012, people want to believe in magic. A narwhal tusk becomes a unicorn. A dinosaur bone becomes a griffin. All fake, all very profitable and powerful in social control contexts.

What if I told you Tesla has been building an AI system that encodes and amplifies worsening danger, through contempt for rules, safety standards, and other people’s lives?

People want to believe in the “magic” of Tesla, but there’s a sad truth finally coming to the surface. Elon Musk has been promising for ten years that AI can make his cars driverless “a year from now”, as if Americans can’t recognize snake oil of the purest form.

Back in 2016 I gave a keynote talk about Tesla’s algorithms being murderous, implicated in the death of Josh Brown. I predicted it would get much worse, but who back then wanted to believe this disinformation historian’s Titanic warnings?

Source: My 2016 BSidesLV keynote presentation comparing Tesla autopilot to the Titanic

If there’s one lesson to learn from the Titanic tragedy, it’s that designers believed their engineering made safety protocols obsolete. Musk sold the same lie about algorithms. Both turned passengers into unwitting deadly test subjects.

I’ll say it again now, as I said back then despite many objections, Josh Brown wasn’t killed by a malfunction. The ex-SEAL was killed by a robot executing him as it had been trained.

Ten years later and we have copious evidence that Tesla systems in fact get worse over time.

NHTSA says the complaints fall into two distinct scenarios. It has had at least 18 complaints of Tesla FSD ignoring red traffic lights, including one that occurred during a test conducted by Business Insider. In some cases, the Teslas failed to stop, in others they began driving away before the light had changed, and several drivers reported a lack of any warning from the car.

At least six crashes have been reported to the agency under its standing general order, which requires an automaker to inform the regulator of any crash involving a partially automated driving system like FSD (or an autonomous driving system like Waymo’s). And of those six crashes, four resulted in injuries.

The second scenario involves Teslas operating under FSD crossing into oncoming traffic, driving straight in a turning lane, or making a turn from the wrong lane. There have been at least 24 complaints about this behavior, as well as another six reports under the standing general order, and NHTSA also cites articles published by Motor Trend and Forbes that detail such behavior during test drives.

Perhaps this should not be surprising. Last year, we reported on a study conducted by AMCI Testing that revealed both aberrant driving behaviors—ignoring a red light and crossing into oncoming traffic—in 1,000 miles (1,600 km) of testing that required more than 75 human interventions.

Let’s just start with the fact that everyone has been saying, since forever, that garbage in, garbage out (GIGO) is a challenge to overcome in AI.

And by that I mean, even common sense standards should have forced headlines about Tesla being at risk of soaking up billions of garbage data points and producing dangerous garbage as a result. It was highly likely, at face value, to become a lawless killing machine of negative societal value. And yet, its stock price has risen without any regard for this common sense test.

Imagine an industrial farmer announcing he was taking over a known dangerous superfund toxic sludge site to suddenly produce the cleanest corn ever. We should believe the fantasy because why? And to claim that corn will become less deadly the more people eat it and don’t die…? This survivor fallacy of circular nonsense from Tesla is what Wall Street apparently adores. Perhaps because Wall Street itself is a glorified survivor fallacy.

Let me break the actual engineering down, based on the latest reports. The AMCI Testing data (75 interventions in 1,000 miles) provides a quantifiable failure rate. That’s a Tesla needing intervention roughly every 13 miles.

Holy shit, that’s BAD. Like REALLY, REALLY BAD. Tesla is garbage BAD.

Human drivers in the US average one police-reported crash every 165,000 miles. Tesla FSD requires human intervention to prevent violations or crashes at a rate roughly 12,000 times higher than human baseline crash rates.
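
The arithmetic behind that comparison is simple, with the caveat (mine) that it compares an intervention rate to a police-reported crash rate, which are related but not identical measures:

```python
# Back-of-envelope arithmetic from the figures quoted above.
interventions = 75               # AMCI Testing interventions
test_miles = 1_000               # over this many miles of testing
human_crash_interval = 165_000   # miles per police-reported crash (US average)

miles_per_intervention = test_miles / interventions        # ~13.3 miles
ratio = human_crash_interval / miles_per_intervention      # ~12,400x
print(f"{miles_per_intervention:.1f} miles per intervention")
print(f"{ratio:,.0f} times more frequent than the human crash baseline")
```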

Elon Musk promised investors a 2017 arrival of a product superior to “human performance”, yet in 2025 we see code that is still systematically worse than a drunk teenager.

And, it’s actually even worse than that. Tesla re-releasing a “Mad Max” lawless driving mode in 2025 is effectively a cynical cover-up operation, doubling down on deadly failure as a normalized outcome on the road. Mad Max was a killer.

I’ve disagreed with GIGO for as long as I’ve pointed out Tesla will get worse over time. I could explain, but I am not sure a higher bar even matters at this point. There’s no avoiding the fact that the basic GIGO tests show how Tesla was morally bankrupt from day one.

The problem isn’t just that Tesla faced a garbage collection problem; it’s that their entire training paradigm was fundamentally flawed on purpose. They’ve literally been crowdsourcing violations and encoding failures as learned behavior. They have been caught promoting rolling stops at stop signs, they have celebrated cutting lanes tight, and they even ingested a tragic pattern of racing to “beat” red lights without intervention.
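
Here is a toy sketch of how that encoding happens, under my simplifying assumption (for illustration only, with invented clips) that fleet data gets kept as a “good” example whenever nothing bad happened afterward:

```python
# Toy illustration of violations getting encoded as "acceptable driving"
# when the only label is "nothing bad happened". Invented data and names.

fleet_clips = [
    {"event": "full stop at stop sign", "violation": False, "crash": False},
    {"event": "rolling stop",           "violation": True,  "crash": False},
    {"event": "raced a late yellow",    "violation": True,  "crash": False},
    {"event": "tight lane cut",         "violation": True,  "crash": False},
    {"event": "ran red light",          "violation": True,  "crash": True},
]

# Outcome-only labeling: anything that didn't crash becomes a training example.
outcome_labeled = [c for c in fleet_clips if not c["crash"]]

# Rule-aware labeling: violations are excluded regardless of lucky outcomes.
rule_labeled = [c for c in fleet_clips if not c["violation"] and not c["crash"]]

print("outcome-only keeps:", [c["event"] for c in outcome_labeled])
print("rule-aware keeps:  ", [c["event"] for c in rule_labeled])
# Outcome-only labeling keeps three violations as "acceptable driving".
```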

That means garbage was being relabeled “acceptable driving.” Like picking up an old smelly steak that falls on the floor and serving it anyway as “well done”. Like saying white nationalists are tired of being called Nazis, so now they want to be known only as America First.

This is different from traditional GIGO risks because the garbage is a loophole that allows a systematic bias shift towards more aggressive, rule-breaking, privileged asshole behavior (e.g. Elon Musk’s personal brand).

The system over time was set up to tune towards narrowly defined aggressive drivers, not the safest ones.

What makes this particularly insidious is the feedback loop I identified back in 2016. “Mad Max” mode from 2018 isn’t just marketing resurfacing in 2025; it’s a legal and technical weapon deployed by the company strategically.

Source: My presentation at MindTheSec 2021

Explicitly offering a “more aggressive” option means Tesla moves the Overton window while creating plausible deniability: “The system did what users wanted.”

This obscures that their baseline behavior was degraded by training on violations, and reframes those failures by offering an even worse option. Disinformation defined.

Musk’s snake oil promises – that Teslas would magically become safer through fleet learning – require people to believe that more data automatically equals better outcomes. Which is like saying more sugar is going to make you happier. It’s only true if you have labeled ground truth, to know how close to diabetes you are. It needs a reward function aligned with actual safety, and the ability to detect and correct for systematic biases.

Tesla has none of these.
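
To make “these” concrete, here is a rough sketch of the kind of bias-drift check a safety-aligned training pipeline would run. It is my illustration with invented monthly numbers, and it presupposes exactly the missing piece: ground-truth labels for violations.

```python
# Hypothetical drift check: is the training data getting more aggressive over time?
# Requires ground-truth violation labels, which is exactly the missing piece.

monthly_batches = {
    "2024-01": {"clips": 10_000, "violations": 300},
    "2024-06": {"clips": 12_000, "violations": 540},
    "2024-12": {"clips": 15_000, "violations": 900},
}

ALERT_THRESHOLD = 0.04  # flag if more than 4% of training clips contain violations

for month, batch in monthly_batches.items():
    rate = batch["violations"] / batch["clips"]
    flag = "ALERT: bias drift" if rate > ALERT_THRESHOLD else "ok"
    print(f"{month}: {rate:.1%} violation rate -> {flag}")
# 2024-01: 3.0% ok, 2024-06: 4.5% ALERT, 2024-12: 6.0% ALERT
```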

They have billions of miles of “damn, I can’t believe Tesla got away with it so far, I’m a gangsta cheating death”, which is NOT the same as evidence its software drove the car legally, let alone safely.

Tesla claimed to be doing engineering (testable, falsifiable, improvable) while actually doing testimonials (anecdotal, survivorship-biased, unfalsifiable). “My Tesla didn’t crash” is not data about safety, it’s an absence of negative outcome, which is how drunk drivers justify their behavior too… like a teapot orbiting the sun (an unfalsifiable claim based on absence of observed harm).