Tesla Owner Tries to Blame AI for Mistakes and Gets Banned From Driving

A Tesla owner wasn’t paying attention, drove at double the speed limit, and then tried to argue in court that AI is to blame for any mistakes. The court disagreed.

The court said in its ruling that “it is in any case the defendant’s obligation to be aware of the speed limit at all times, and that it would be negligent to rely blindly on the car’s technical systems being correctly set. The court therefore finds it proven beyond any reasonable doubt that the defendant acted as described in the indictment, and that he at the very least acted with gross negligence. He is therefore convicted in accordance with the charges.”

“Mechahitler” xAI Co-Founder Quits Swastika Brand to Start VC

Another tech executive from the infamous X brand departs amid controversy, raising questions about industry accountability.

The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia…

Babuschkin’s departure comes after a tumultuous few months for xAI, in which the company became embroiled in several scandals related to its AI chatbot Grok. For instance, Grok was found to cite Musk’s personal opinions when trying to answer controversial questions. In another case, xAI’s chatbot went on antisemitic rants and called itself “Mechahitler.”

The fail-upward pattern here is striking: lofty rhetoric about humanity’s future paired with infamous X-branded products that actively cause harm. While Babuschkin speaks of building AI “safely to encourage the flourishing of future generations,” his company’s chatbot Grok was generating violent hate speech, including antisemitic content, and positioning itself as a digital Hitler. That’s literally the springboard he’s using to launch his investment career.

“It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” [Queensland University of Technology law professor Nicolas Suzor] said. […] Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

This disconnect between aspirational language and willfully harmful outcomes reflects a broader problem in tech leadership. Historical awareness shows us how empty, emotive, future-oriented rhetoric can mask concerning agendas:

  • Authoritarian movements consistently frame discriminatory policies as protecting future generations
  • Eugenics programs were justified using language about genetic “health” and societal progress
  • Educational indoctrination was presented as investment in humanity’s future
  • Population control measures were framed as ensuring a “better” tomorrow

The concerning pattern isn’t the language itself (Nazi rhetoric, too, centered on future prosperity), but how it’s deployed to justify harmful technologies while deflecting accountability. When a company’s AI system calls itself “Mechahitler” while its leadership speaks of “flourishing future generations,” we should ask basic and hard questions about the huge gap between stated values and the observed outcomes, which are “more in line with Musk’s ideological view”.

A Nazi “Afrikaner Weerstandsbeweging” (AWB) member in 2010 South Africa (left) and a “MAGA” South African-born member in 2025 America (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters

Tech leaders routinely use futuristic-sounding rhetoric to market products that surveil users, spread misinformation, or amplify harmful content. Historical vigilance requires examining not just what they say, but what their technologies actually do in practice. Mechahitler was no accident.

The real red flag is more than any single phrase; it’s the pattern of using humanity’s highest aspirations to justify technologies that demonstrably harm human flourishing. Just look at all the really, really big red X flags.

Twitter changed its logo to a swastika

The Nazi use of “flourishing” language was particularly insidious because it hijacked universally positive concepts (growth, prosperity, future well-being) to justify exclusion, violence, and ultimately genocide. This rhetorical strategy made their radical agenda seem like common sense – who wouldn’t want future generations to flourish? The key was that their definition of “flourishing” required the elimination of those they deemed inferior. Connecting modern tech rhetoric about “flourishing future generations” to historical patterns is historically grounded. The Nazis absolutely used this exact type of language systematically as part of their propaganda apparatus.

WI Police Send Disoriented Man With Head Injury Home Using Tesla Driverless

This is an odd police report, as it shows the police did a mental and physical health check and then “allowed” self-driving.

Officers were dispatched at 8:20 a.m. to a suspicious activity call at 619 Field Ave. … Police arrived and spoke to the suspect, a 75-year-old Hudson man, who appeared confused and disoriented. He claimed he was meeting a realtor but then changed his story, stating he was simply looking at real estate. His inconsistent statements raised concerns about his mental state. He had a large bruise on his left eye; he said he had a recent head injury, which was visible as a shiner on his face. He claimed he had bumped his head on a dock. EMS responded to evaluate him; after trying to contact his wife and not reaching her, police allowed him to head home using the self-driving feature of his Tesla.

It makes about as much sense as telling someone to delete their social media apps when they are the victim of common fraud.

At 10 a.m., a man reported being scammed by people trying to hire him over WhatsApp and Telegram. He claimed $30 had disappeared from his account. He was advised to stop responding and to delete the apps.

Hello, police, my house was robbed. What’s that you say, destroy my house? Where will I go? What do you mean I can just let a Tesla decide?

Sloppy North Korean Day Job “Hackers” Exposed

A researcher who wrote a breathless article about North Korea’s Kimsuky hacking group didn’t pull off some sophisticated, nation-state-level operation. Reading through the fat (35MB) Phrack PDF, “APT Down: The North Korea Files,” what emerges is a story of lowly operational security failures that would make any intelligence professional wince.

This wasn’t Ocean’s anything. This was more like a blind man with a seeing-eye dog strolling through an unlocked front door.

Delicious Breadcrumbs

The most striking aspect isn’t any technical sophistication; it’s the complete lack of basic security hygiene on the North Korean side. The researcher, going by “Saber,” appears to have gained access not just to Kimsuky’s operational infrastructure, but to their personal development environment. We’re talking about the digital equivalent of finding a spy’s diary, complete with their passwords and personal photos.

Consider the exposure: Chrome browser history showing Google searches for error messages, files dragged and dropped between Windows and Linux machines that contained active malware, and even Google Pay transactions for VPN services. The operator, referred to as “KIM,” left behind a complete digital footprint that reads like a how-to guide for terrible personal security.
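For the curious, none of this takes wizardry to read. Chrome keeps its history in a plain SQLite file, and a few lines of Python are enough to pull the search trail out of a copied profile. A minimal sketch follows; the file path and search filter are my own illustrative assumptions, not details or tooling from the Phrack write-up:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical example path to a copied Chrome "History" SQLite file.
HISTORY_DB = "evidence/Chrome/History"

def chrome_time(us: int) -> datetime:
    # Chrome stores timestamps as microseconds since 1601-01-01 UTC.
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=us)

# Open read-only so the evidence copy isn't modified.
conn = sqlite3.connect(f"file:{HISTORY_DB}?mode=ro", uri=True)
rows = conn.execute(
    "SELECT url, title, last_visit_time FROM urls "
    "WHERE url LIKE 'https://www.google.com/search%' "
    "ORDER BY last_visit_time"
)
for url, title, ts in rows:
    print(chrome_time(ts).isoformat(), title)
conn.close()
```

That is roughly the level of forensics involved: a saved browser profile and one SQL query.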

Consumer Tools as State Hackery

The technical details reveal an operation that relied heavily on off-the-shelf tools and services. KIM was using:

  • Standard VPN services (PureVPN, ZoogVPN) paid for through Google Pay
  • Public GitHub repositories for code hosting
  • Consumer-grade VMware for virtualization
  • Regular Chrome browser with saved passwords and browsing history intact

This isn’t the properly tooled operation you’d expect from a trained state actor. It’s what you’d see from a moderately skilled technology worker in a coffee shop.

Chinese Holidays

One of the most revealing operational security failures was temporal. The researcher notes that KIM follows Chinese public holiday schedules, taking time off during the Dragon Boat Festival when North Korea would normally be working. This kind of behavioral pattern analysis used to be the exclusive domain of anthropologists hired into intelligence agencies, yet now it’s right there in the login timestamps for anyone paying attention.
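To show how low the bar is, here is a toy sketch of that holiday correlation. The holiday dates and login dates below are made-up placeholders rather than data from the report; the point is simply that the “analysis” amounts to a calendar join:

```python
from datetime import date
from collections import Counter

# Placeholder list of Chinese public holidays (approximate 2024 dates).
CN_HOLIDAYS_2024 = {
    date(2024, 6, 10),                      # Dragon Boat Festival
    date(2024, 9, 16), date(2024, 9, 17),   # Mid-Autumn Festival
    date(2024, 10, 1), date(2024, 10, 2),   # National Day (partial)
}

# Placeholder login dates standing in for the real timestamps.
observed_logins = [date(2024, 6, 7), date(2024, 6, 11), date(2024, 9, 18)]

# Count activity on Chinese holidays vs. ordinary workdays; a persistent
# gap on the holidays is the behavioral tell.
counts = Counter(
    "holiday" if d in CN_HOLIDAYS_2024 else "workday" for d in observed_logins
)
print(counts)
```

A pivot table would do the same job; the tradecraft failure is that the pattern exists at all.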

Even more damaging: KIM’s Chrome configuration shows he uses Google Translate to convert error messages to Chinese, and his browsing history includes Taiwanese government and military websites. More red flags than a Chinese military parade.

Infrastructure Tells Stories

The operation is also surprisingly, yet somehow not surprisingly, centralized. Rather than the distributed, compartmentalized systems of proper statecraft, everything appears to run through a small number of servers and VPS instances. The researcher found:

  • Active phishing campaigns against South Korean government agencies
  • Complete source code for email platform compromises
  • Development versions of Android malware
  • Cobalt Strike configurations and deployment scripts

This level of access suggests either a fundamental misunderstanding of compartmentalization, laziness, or resource constraints. Possibly all of the above.

State Script Kiddie

What’s perhaps most damning is the technical skill level on display. The malware samples and attack tools are competent but hardly edgy or novel. The TomCat remote kernel backdoor, for instance, uses hardcoded passwords and relies on relatively simple TCP connection hijacking.

*smacks head*

The Android malware appears to be modified versions of existing tools without any custom development, even now that the cost of custom development is effectively zero. This tracks with the operational security failures. We see an operation that feels more like an organized cybercriminal group that happens to be state-sponsored than a professional intelligence service.

That contrast is important if you think about why a country as otherwise insignificant as Russia gets so much news. The answer is that it is being run by one of the largest professional intelligence services in the world.

Constraints of Korea

The deeper story here is what these failures reveal about North Korea’s state capabilities. The sloppiness suggests an operation that, like a Trump brand, runs under significant skill and resource constraints, possibly with limited access to skilled personnel and proper infrastructure.

How do you say “big balls” in Korean?

The reliance on consumer services, the mixing of personal and operational activities, the poor compartmentalization: these all point to an operation lacking institutional capabilities. North Korea has the political will for asymmetric operations, but it has gone so far into asymmetry that it lacks professionalism.

It’s Bananas (or M*A*S*H)

Woody Allen’s “Bananas” tells us of an authority who wows the ladies yet struggles with the burden of being so revolutionary

The researcher’s success in penetrating this operation raises uncomfortable questions about attribution and capability assessments. If a single individual can gain this level of access to a state-sponsored hacking group, what does that say about our industry’s habit of hyping nation-state cyber threats as greater than those posed by corporations and ordinary bad actors?

The traditional model assumes sophisticated adversaries while the Kimsuky breach suggests something more realistic: state actors who are dangerous not because of their sophistication, but because of their persistence and their targets. They’re the cyber equivalent of tax collectors—not particularly skilled, but willing to keep trying doors until they find one that’s unlocked.

What makes this most interesting from a security blogger’s perspective is how the researcher approached the material. Rather than just dumping files or making vague attribution claims, they conducted what amounts to a comprehensive investigative analysis. They traced infrastructure connections, analyzed behavioral patterns, and even provided context about North Korean holiday schedules. Hats off to that!

This is investigative journalism conducted with root access. Instead of filing FOIA requests to understand government surveillance programs, the researcher simply accessed the surveillance infrastructure directly. The methodology is different, but the end result—detailed public reporting on previously hidden government activities—is remarkably similar to traditional journalism.

Competence Gaps

While we focus on the advanced persistent threats from nation-state actors, we shouldn’t lose sight of the more mundane: poorly resourced operations run by moderately skilled technicians who make the same basic mistakes as everyone else.

Attackers only have to make a mistake once.

The Kimsuky breach suggests that the mystique around nation-state hacking may be an unfortunate distraction from the threat of well-resourced commercial bad actors. Who is more resourced and trained, the ex-NSA hire sipping a Mai Tai on a lounger at Facebook or a graduate student in North Korea? Strip away the geopolitical implications here, and what you’re left with is a fairly standard network compromise enabled by poor security practices. The only thing “advanced” about this particular threat was its persistence and targeting; the technical execution was thoroughly ordinary.

In an age where we’re constantly pushed into political and security theater, the Kimsuky files offer a different perspective: sometimes the threats come not from technical brilliance, but from persistence combined with institutional incompetence. And sometimes, that incompetence creates opportunities for accountability that traditional oversight mechanisms could never achieve.