Tesla Claims Dead Children Failed to Present Themselves as Obstacles Worth Avoiding

“How were our cars supposed to know these weren’t just speed bumps?” asks company spokesperson.

Mounting criticism over a decade of school zone fatalities has led Tesla representatives to claim this week that deceased children had failed to adequately communicate to the company’s Full Self-Driving system that they deserved not to be run over.

“Unfortunately, these youngsters provided no clear indication to our vehicles that they possessed the basic right to continued existence,” said Tesla spokesperson Melon Taco, who noted that even after reviewing footage of children screaming and diving out of the way, the company’s AI could find no documentation supporting claims that small humans warranted braking.

“Our autonomous vehicles were given ample opportunity to recognize these individuals as beings deserving of life, but frankly, six-year-olds just don’t present proper credentials. How else were our cars supposed to distinguish them from traffic cones, trees or other things we don’t care about?”

Self-driving Teslas only gained the ability to recognize school bus signs a few months ago, in December 2024.

Let THAT sink in for a second.

Teslas started driving themselves on American roads in October 2015. The same technology that had YouTubers and Tesla fans gleefully driving down the street blindfolded or asleep, THAT tech had no idea how to behave around school buses for about a decade.

So, this is a newly released feature that allows self-driving Teslas to see school bus stop signs, avoid schoolchildren on foot, and, oh right, obey the law by stopping at the flashing lights as required.

Given how new the feature is, you wouldn’t think I could find driver videos of it failing. But I can! Tesla drivers record everything.

Plenty of Examples of Self-Driving Teslas Blowing School Bus Stop-Signs (And Other Signs!)

Tesla executives clarified that their vehicles had been operating under the assumption that flashing red lights and stop signs near schools were merely suggestions, noting that children running for their lives did not constitute sufficient proof they deserved to live.

“We programmed our cars to optimize efficiency, not to cater to every little person who thinks they’re entitled to cross a street without being mowed down,” explained Chief Technology Officer Apar Theid while reviewing internal emails titled “Department of Government Casualty Thresholds by Demographics.” “If these kids wanted our respect, they should have been whiter, or at least wearing high-visibility DOGE merchandise.”

…the Tesla didn’t fail to detect the mannequin, which would have still been bad enough. The system actually detected the child-size stand-in and classified it as a pedestrian but didn’t bother to stop or even slow down. It just kept on going, completely ignoring the “kid” crossing the street.

Company insiders revealed that Tesla’s neural networks had been specifically trained to interpret the screams of children as “background noise” and school crossing guards as “optional NPCs in the driving simulation.”

At press time, Tesla announced that while they would reluctantly program their vehicles to occasionally avoid killing children, crying and pleading for one’s life would still not count as adequate identification of one’s humanity.

Why the Humanities Win Wars That STEM Can’t

A new book explores the pivotal role of humanities scholars in defeating WWII armies overly fixated on STEM.

It’s the age-old HUMINT versus SIGINT debate, but in a framing extremely relevant to today’s unfortunate AI race to nowhere good.

…Graham argues that the humanities—and those librarians and scholars that came from within the discipline—brought special expertise, experience, and attributes that were critical to the direction of strategy, the ultimate victory of the war, and the defense of democracy in the face of tyranny. …we see the role played by the humanities (and the social sciences) in having trained a generation of scholars to assess and analyze large amounts of data, often patchy in its coverage, and to draw accurate inferences, even (and sometimes especially) in the gaps.

What good is a missile if it lacks accuracy?

What good is a map that gives wrong directions?

The sharp lesson here cuts deep: data without context is deadly and self-defeating; technology without wisdom is deaf and blind.

The humanities train human minds to read between lines, to understand what’s missing, to question the silences. These are the very skills that turned librarians into codebreakers and literature professors into intelligence analysts who helped win a world war.

This isn’t new wisdom.

In the 1700s, David Hume argued that reason is “the slave of the passions”: pure logic without an understanding of human nature leads us astray.

Mary Wollstonecraft went further, arguing that education divorced from moral reasoning and critical thinking produced mere “machines” rather than citizens capable of judgment. She saw how technical knowledge without ethical grounding created societies that could calculate but couldn’t comprehend, that could measure but couldn’t make meaning.

How many people today read Wollstonecraft, when her groundbreaking 1790s work is absolutely required to unlock the future of AI?

The Enlightenment-era warnings echo through the ages, louder now than ever: as we hurtle toward an AI-dominated future, our success depends most on disciplines that teach us how to think critically about information itself.

We’re creating Wollstonecraft’s machines at scale—systems that can process but not understand, correlate but not contextualize.

The humanities remain what they were for those WWII codebreakers: not a luxury but the foundation upon which all meaningful STEM achievement ultimately rests.

Massive Integrity Breach of Google Maps Germany: Fake Stop Signs Halt Highway Traffic

It’s the kind of nightmare scenario we’ve been talking about for over a decade. Google Maps algorithms experienced an integrity breach that shut down German national infrastructure.

Confusion reigned on German autobahns and highways at the start of one of the busiest holiday breaks of the year on Thursday after Google Maps wrongly indicated that vast swathes of them were closed.

People using the navigation service around major conurbations such as Frankfurt, Hamburg and Berlin on motorways between western, northern, south-western and central Germany were confronted with maps sprinkled with a mass of red dots indicating stop signs. The phenomenon also affected parts of Belgium and the Netherlands.

Tesla Blows Red Light in Robotaxi Competition. Fails Immediately

Someone had the bright idea to set up a Robotaxi cage match in San Francisco. Waymo and Tesla were given the same route, and only one of them didn’t break the law by running a red light.

During the last half-mile of the trip, the Tesla came to a stop at a red light, only to then drive through the intersection before the light turned green. […] “At this point,” they wrote, “we thought the winner was clear.”

To be clear, Tesla engineers have occasionally warned that their libertarian/sovereign work culture treats red lights and stop signs as government overreach, and therefore optional.

Tesla is disabling a self-driving feature in nearly 54,000 vehicles that can prompt cars to autonomously perform a “rolling stop” — a maneuver in which the vehicle moves slowly through a stop sign without coming to a full stop.

As per a safety recall notice issued by the US National Highway Traffic Safety Administration (NHTSA), the consequence of this feature is that “failing to stop at a stop sign can increase the risk of a crash.”

In other words, the latest Tesla Robotaxi algorithm in 2025 still exhibited this integrity vulnerability, which is likely an intentional and very old design flaw.