“Mechahitler” xAI Co-Founder Quits Swastika Brand to Start VC

Another tech executive from the infamous X brand departs amid controversy, raising questions about industry accountability.

The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia…

Babuschkin’s departure comes after a tumultuous few months for xAI, in which the company became embroiled in several scandals related to its AI chatbot Grok. For instance, Grok was found to cite Musk’s personal opinions when trying to answer controversial questions. In another case, xAI’s chatbot went on antisemitic rants and called itself “Mechahitler.”

The fail-upward pattern here is striking: lofty rhetoric about humanity’s future paired with infamous X-branded products that actively cause harm. While Babuschkin speaks of building AI “safely to encourage the flourishing of future generations,” his company’s chatbot Grok was generating violent hate speech, including antisemitic content, and positioning itself as a digital Hitler. That’s literally the springboard he’s using to launch his investment career.

“It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” [Queensland University of Technology law professor Nicolas Suzor] said. […] Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

This disconnect between aspirational language and willfully harmful outcomes reflects a broader problem in tech leadership. Historical awareness shows us how empty, emotive, future-oriented rhetoric can mask concerning agendas:

  • Authoritarian movements consistently frame discriminatory policies as protecting future generations
  • Eugenics programs were justified using language about genetic “health” and societal progress
  • Educational indoctrination was presented as investment in humanity’s future
  • Population control measures were framed as ensuring a “better” tomorrow

The concerning pattern isn’t the language itself (similar to how Nazi rhetoric centered on future prosperity), but how it’s deployed to justify harmful technologies while deflecting accountability. When a company’s AI system calls itself “Mechahitler” while its leadership speaks of “flourishing future generations,” we should ask basic and hard questions about the huge gap between stated values and actual observed outcomes that are “more in line with Musk’s ideological view.”

A Nazi “Afrikaner Weerstandsbeweging” (AWB) member in 2010 South Africa (left) and a “MAGA” South African-born member in 2025 America (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters

Tech leaders routinely use futuristic-sounding rhetoric to market products that surveil users, spread misinformation, or amplify harmful content. Historical vigilance requires examining not just what they say, but what their technologies actually do in practice. Mechahitler was no accident.

The real red flag isn’t a single phrase; it’s the pattern of using humanity’s highest aspirations to justify technologies that demonstrably harm human flourishing. Just look at all the really, really big red X flags.

Twitter changed its logo to a swastika

The Nazi use of “flourishing” language was particularly insidious because it hijacked universally positive concepts (growth, prosperity, future well-being) to justify exclusion, violence, and ultimately genocide. This rhetorical strategy made their radical agenda seem like common sense – who wouldn’t want future generations to flourish? The key was that their definition of “flourishing” required the elimination of those they deemed inferior. Connecting modern tech rhetoric about “flourishing future generations” to these patterns is therefore well grounded: the Nazis absolutely used this exact type of language systematically as part of their propaganda apparatus.

WI Police Send Disoriented Man With Head Injury Home Using Tesla Driverless

This is an odd police report, as it shows the police did a mental and physical health check and then “allowed” self-driving.

Officers were dispatched at 8:20 a.m. to a suspicious activity call at 619 Field Ave. … Police arrived and spoke to the suspect, a 75-year-old Hudson man, who appeared confused and disoriented. He claimed he was meeting a realtor but then changed his story, stating he was simply looking at real estate. His inconsistent statements raised concerns about his mental state. He had a large bruise on his left eye; he said he had a recent head injury, which was visible as a shiner on his face. He claimed he had bumped his head on a dock. EMS responded to evaluate him; after trying to contact his wife and not reaching her, police allowed him to head home using the self-driving feature of his Tesla.

It makes about as much sense as telling someone to delete their social media apps when they are the victim of common fraud.

At 10 a.m., a man reported being scammed by people trying to hire him over WhatsApp and Telegram. He claimed $30 had disappeared from his account. He was advised to stop responding and to delete the apps.

Hello, police, my house was robbed. What’s that you say, destroy my house? Where will I go? What do you mean I can just let a Tesla decide?

Sloppy North Korean Day Job “Hackers” Exposed

The researcher who wrote a breathless article about North Korea’s Kimsuky hacking group didn’t need to pull off some sophisticated nation-state-level operation. Reading through the fat (35MB PDF) Phrack article, “APT Down: The North Korea Files,” what emerges is a story of lowly operational security failures that would make any intelligence professional wince.

This wasn’t Ocean’s anything. This was more like a blind man with a seeing-eye dog strolling through an unlocked front door.

Delicious Breadcrumbs

The most striking aspect isn’t any technical sophistication; it’s the complete lack of basic security hygiene on the North Korean side. The researcher, going by “Saber,” appears to have gained access not just to Kimsuky’s operational infrastructure, but to their personal development environment. We’re talking about the digital equivalent of finding a spy’s diary, complete with their passwords and personal photos.

Consider the exposure: Chrome browser history showing Google searches for error messages, files containing active malware dragged and dropped between Windows and Linux machines, and even Google Pay transactions for VPN services. The operator, referred to as “KIM,” left behind a complete digital footprint that reads like a how-to guide for terrible personal security.

Consumer Tools as State Hackery

The technical details reveal an operation that relied heavily on off-the-shelf tools and services. KIM was using:

  • Standard VPN services (PureVPN, ZoogVPN) paid for through Google Pay
  • Public GitHub repositories for code hosting
  • Consumer-grade VMware for virtualization
  • Regular Chrome browser with saved passwords and browsing history intact

This isn’t the properly tooled operation you’d expect from a trained state actor. It’s what you’d see from a moderately skilled technology worker in a coffee shop.

Chinese Holidays

One of the most revealing operational security failures was temporal. The researcher notes that KIM follows Chinese public holiday schedules, taking time off during the Dragon Boat Festival when North Korea would normally be working. This kind of behavioral pattern analysis used to be the exclusive domain of anthropologists hired into intelligence agencies, yet now it’s right there in the login timestamps for anyone paying attention.
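To make that concrete, here is a minimal sketch of the idea in Python (hypothetical timestamps and an abbreviated holiday list, none of it taken from the Phrack data): check whether an operator’s quiet days land on Chinese public holidays that are ordinary workdays in North Korea.

  from datetime import date, datetime

  # Hypothetical login timestamps pulled from server logs (illustrative only;
  # a real analysis would cover months of authentication records).
  logins = [
      datetime(2024, 6, 7, 9, 12),
      datetime(2024, 6, 8, 8, 55),
      datetime(2024, 6, 9, 9, 40),
      datetime(2024, 6, 11, 10, 3),   # note the gap on June 10
      datetime(2024, 6, 12, 9, 27),
  ]

  # Abbreviated list of Chinese public holidays that are ordinary workdays in the DPRK.
  chinese_only_holidays = {date(2024, 6, 10): "Dragon Boat Festival"}

  active_days = {ts.date() for ts in logins}

  for day, name in chinese_only_holidays.items():
      status = "quiet" if day not in active_days else "active"
      # A run of quiet days tracking the Chinese calendar rather than the DPRK one
      # is the behavioral signal described in the Phrack write-up.
      print(f"{day} ({name}): operator was {status}")

Run over real data, the same loop is enough to show whose calendar an operator actually lives by.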

Even more damaging: KIM’s Chrome configuration shows he uses Google Translate to convert error messages to Chinese, and his browsing history includes Taiwanese government and military websites. More red flags than a Chinese military parade.

Infrastructure Tells Stories

There is also a surprising, if by now unsurprising, degree of centralization. Rather than the distributed, compartmentalized systems of proper statecraft, everything appears to run through a small number of servers and VPS instances. The researcher found:

  • Active phishing campaigns against South Korean government agencies
  • Complete source code for email platform compromises
  • Development versions of Android malware
  • Cobalt Strike configurations and deployment scripts

This level of access suggests a fundamental misunderstanding of compartmentalization, laziness, or a resource constraint. Possibly all of the above.

State Script Kiddie

What’s perhaps most damning is the technical skill level on display. The malware samples and attack tools are competent but hardly cutting-edge or novel. The TomCat remote kernel backdoor, for instance, uses hardcoded passwords and relies on relatively simple TCP connection hijacking.

*smacks head*

The Android malware appears to be modified versions of existing tools without any custom development, even though the cost of custom development is now close to zero. This tracks with the operational security failures. We see an operation that feels more like an organized cybercriminal group that happens to be state-sponsored than a professional intelligence service.

This matters if you think about why a country as insignificant as Russia gets so much news: the answer is that it’s being run by one of the largest professional intelligence services in the world.

Constraints of Korea

The deeper story here is what these failures reveal about North Korea’s state capabilities. The sloppiness suggests an operation, like a Trump brand, running under significant resource constraints, possibly with limited access to skilled personnel and proper infrastructure.

How do you say “big balls” in Korean?

The reliance on consumer services, the mixing of personal and operational activities, the poor compartmentalization—these all point to an operation lacking institutional capabilities. North Korea has the political will for asymmetric operations, but it has leaned so far into asymmetry that it lacks professionalism.

It’s Bananas (or M*A*S*H)

Woody Allen’s “Bananas” tells us of an authority figure who wows the ladies yet struggles with the burden of being so revolutionary.

The researcher’s success in penetrating this operation raises uncomfortable questions about attribution and capability assessments. If a single individual can gain this level of access to a state-sponsored hacking group, what does that say about an industry that hypes nation-state cyber threats as greater than those posed by corporations and other bad actors?

The traditional model assumes sophisticated adversaries. The Kimsuky breach suggests something more realistic: state actors who are dangerous not because of their sophistication, but because of their persistence and their targets. They’re the cyber equivalent of tax collectors—not particularly skilled, but willing to keep trying doors until they find one that’s unlocked.

What makes this most interesting from a security blogger’s perspective is how the researcher approached the material. Rather than just dumping files or making vague attribution claims, they conducted what amounts to a comprehensive investigative analysis. They traced infrastructure connections, analyzed behavioral patterns, and even provided context about Chinese versus North Korean holiday schedules. Hats off to that!

This is investigative journalism conducted with root access. Instead of filing FOIA requests to understand government surveillance programs, the researcher simply accessed the surveillance infrastructure directly. The methodology is different, but the end result—detailed public reporting on previously hidden government activities—is remarkably similar to traditional journalism.

Competence Gaps

While we focus on the advanced persistent threats of nation-state actors, we shouldn’t lose sight of the more mundane: poorly resourced operations run by moderately skilled technicians who make the same basic mistakes as everyone else.

Attackers only have to make a mistake once.

The Kimsuky breach suggests that the mystique around nation-state hacking may be an unfortunate distraction from the threat of well-resourced commercial bad actors. Who is more resourced and trained, the ex-NSA engineer sipping a Mai Tai on a lounger at Facebook, or a graduate student in North Korea? Strip away the geopolitical implications here, and what you’re left with is a fairly standard network compromise enabled by poor security practices. The only thing “advanced” about this particular threat was its persistence and targeting—the technical execution was thoroughly ordinary.

In an age where we’re constantly pushed into political and security theater, the Kimsuky files offer a different perspective: sometimes the threats come not from technical brilliance, but from persistence combined with institutional incompetence. And sometimes, that incompetence creates opportunities for accountability that traditional oversight mechanisms could never achieve.

The “Great Man” of Big Tech is a Lie: How Americans Peddle AI to Destroy Labor

Steve Jobs didn’t invent anything, and stole credit. Let that sink in.

The man worshipped as history’s greatest innovator was actually a late mover and master thief who repackaged other people’s breakthroughs and convinced the world to call him a genius. And he’s not alone—tech’s pantheon of “visionary” CEOs is filled with frauds peddling stolen valor.

The Stupidity of the Smartphone Revolution

Jobs didn’t “revolutionize” smartphones. By 2007, others had already done the heavy lifting:

  • Touchscreens? Invented decades earlier, perfected by Palm and Microsoft’s Windows Mobile
  • Mobile internet? BlackBerry was there first
  • App ecosystems? Palm Pilot had them in the 90s
  • Sleek design? Braun’s Dieter Rams created that aesthetic in the 1960s

Jobs showed up late, as always, hired engineers smarter than him to combine existing technologies, then slapped an “Apple was here” label on it. The real inventors? Forgotten. The marketing hack who repackaged their work? Billionaire saint.

America Loves a Con

This isn’t an accident—it’s the business model:

  • Xerox PARC developed the graphical user interface, the mouse, and Ethernet networking. Jobs toured their lab in 1979, saw the future, and copied it wholesale for the Mac. Xerox got nothing. Jobs got immortality.
  • MP3 players had been around for years before the iPod. Creative, Diamond, and others built the market. Apple just made theirs white and launched a marketing blitz.
  • Tesla, perhaps the worst example in history, wasn’t founded by Elon Musk. He bought his way in, pushed out the actual founders, then put his family on the board and rewrote history to promote dangerous lies as “visionary.”

Machinery of Myth-Making

How do so many American frauds in big tech get away with it?

Corporate PR machines spend millions crafting heroic narratives. They hire biographers, fund documentaries, and feed journalists carefully crafted stories about “genius” and “vision.”

Tech journalism is complicit, preferring simple stories about individual brilliance over complex truths about collaborative engineering. It’s easier to write fiction about one person than do the hard work to acknowledge thousands.

Legal systems are built around protecting the thieves. Patents, NDAs, and employment contracts ensure the real inventors stay silent while their bosses take credit. Edison literally filled warehouses with immigrant inventors whose ideas he stole and monetized, because they had little to no power to defend themselves from him.

Really Big Criminals

While engineers work 80-hour weeks and longer solving impossible problems, their sociopathic bosses jet around the world collecting “innovation” awards for doing nothing. The people who actually build the technology get layoffs. The people who steal credit get the stage.

This American con game is behind the technological “progress” that’s hollowing out the middle class and leaving just the poor and the rich. When we worship charlatans, we:

  • Discourage real innovation by rewarding marketing over engineering
  • Concentrate wealth in the hands of people who contribute nothing
  • Perpetuate systems where actual inventors are exploited and erased

AI is the Latest Great Man Scam

Now the same con is playing out with AI. Tech CEOs are positioning themselves as the architects of artificial intelligence, when the reality is far different.

But humans created every single piece of data that was used to create the AI models that made waves in 2023; they wrote the code that comprised the models; they nudged the models to make better decisions by telling them when they were right or wrong; they flagged offensive content that was in training data; and they designed the server farms and computer chips that ran the models.

As anthropologist Joseph Wilson discovered in his fieldwork, AI is built on what he calls the “human stack”—layers upon layers of human labor that get systematically erased by corporate mythology.

That chip, you could say easily, probably 30, 40 thousand people participated in that, it’s not like the two hundred people you see over here.

Forty thousand people contributed to a single AI chip. But when AI makes headlines, we see the face of one CEO claiming credit for “revolutionizing” human knowledge.

The engineers Wilson interviewed understood this clearly:

Sometimes I feel… a little frustrated or something. I guess, when people talk about how Steve Jobs brought us the smartphone, right? He’s one guy. He did some neat stuff, I guess. But the amount of people and time and effort… decades. The amount of time and effort and energy [that] goes into every piece of technology that is around is hard to fathom.

The Stakes Are Higher Now

With AI, the great man lie isn’t just unfair—it’s dangerous. When we attribute AI capabilities to individual genius rather than collective human effort, we create what Jeff Bezos called “artificial artificial intelligence”—the illusion that these systems are more autonomous and capable than they really are.

This mythology serves the same function it always has: concentrating power, obscuring accountability, and justifying extreme wealth inequality. But now it’s shaping how we govern technologies that could reshape society.

The Truth Bright as Sunlight

Fei-Fei Li provides perhaps the most damning example of how these frauds operate. Li built her career on ImageNet, a massive dataset scraped from the internet without consent, using exploited workers in 167 countries to label millions of stolen images.

When Princeton rightfully told her this was unethical and could hurt her tenure prospects, she simply moved to Stanford—a university with a long history of moral failures, built on genocide—and proceeded anyway. Li herself admits she was “desperate” for attention and funding, openly describing her “audacity” in ignoring ethical concerns to build surveillance infrastructure for Big Tech. She pressed 49,000 low-wage workers into what she euphemistically calls “data labor,” creating the foundation for modern AI surveillance while erasing their contributions entirely.

Now, after spending over a decade willfully removing all moral fiber from her work, Li lectures the world about AI ethics from her blood-stained Stanford pulpit. Like Jobs, she’s a master at repackaging other people’s work—in this case, stolen images and exploited labor—into a personal brand as an AI “visionary.”

Her moral bankruptcy fits the pattern of stealing credit while exploiting others that defines the entire tech industry’s “great man” mythology, proving women can ruin the world too.

Steve Jobs was a marketing executive who happened to work at a tech company. Today’s AI “visionaries” are following the same playbook—stealing credit from armies of engineers, researchers, and data workers while positioning themselves as the architects of humanity’s future.

The iPhone wasn’t created by one man’s vision—it was assembled from the work of thousands of engineers, most of whom will never be remembered. AI isn’t being created by visionary CEOs—it’s being built by tens of thousands of people whose names you’ll never know.

Every time you see a tech CEO on a magazine cover claiming to have “built” AI, remember: they’re standing on top of other people’s work, and holding them all down, claiming they moved a mountain with their little finger.

The great man theory of innovation is dumb and dangerous. It’s time we retired it—before it buries the truth about who really builds the future.

What is Really Going On

Here are perfect counter-examples that prove the point even more powerfully.

Craig Newmark (Craigslist) built a simple, functional platform that actually served users.

  • Refused to “scale” or take VC money that would have destroyed the core mission
  • Never positioned himself as a visionary genius
  • Kept the company small and focused on utility over profit
  • The company still runs basically the same way decades later
  • Gets almost no media attention because he’s not playing the hero game

Tim Berners-Lee (World Wide Web) literally invented the foundational technology of the modern internet.

  • Gave it away for free when he could have become the richest person in history
  • Continues to advocate for keeping the web open and decentralized
  • Works on standards and protocols, not personal branding
  • Warns against the concentration of web power in big tech companies
  • Gets a fraction of the recognition for real work that Jobs/Musk types receive for fraud

The contrast with the Tony Stark myth is clear: the people who actually revolutionized technology tend to be humble, focused on the work itself, and concerned with broader social impact. They don’t hire PR teams or chase magazine covers.

Meanwhile, the frauds who repackage existing technologies get all the worship precisely because they’re better at self-promotion than engineering.

It’s like there’s an inverse relationship between actual contribution and mythological status. The real innovators are too busy solving hard problems to jet around building political cults of personality around themselves.

The great man mythology is truly insidious – it’s not just wrong, it actively obscures the people who deserve recognition while elevating the ones who deserve none.

Everyone mocked Time magazine for claiming an iPad glued to your face would change the world. And yet somehow an undeserving Luckey billionaire was created anyway.

This isn’t a modern phenomenon but a longstanding American tradition of celebrating thieves who exploit the vulnerable. AI already has charlatans galore, desperately trying to elevate themselves to buy jets and private islands like the next Epstein, at the expense of everyone else.

  • Exploitation of the vulnerable – Epstein preyed on children just as tech frauds systematically exploit engineers, data workers, and inventors who have little power to defend themselves.
  • Wealth as a shield – extreme wealth is used to escape accountability for crimes, as the wealthy surround themselves with enablers who profit from the system.
  • Network effects – Epstein’s powerful connections sustained him, just as these tech myths depend on complicit journalists, investors, and politicians who benefit from maintaining the fiction.
  • The private island lifestyle – Mars? Isolation shouldn’t be the ultimate symbol of wealth, given how narcissistic it is to aspire to escape normality, leaving human society and its healthy “constraints” behind for manifest exploitation.