The “Great Man” of Big Tech is a Lie: How Americans Peddle AI to Destroy Labor

Steve Jobs didn’t invent anything, and stole credit. Let that sink in.

The man worshipped as history’s greatest innovator was actually a late mover and master thief who repackaged other people’s breakthroughs and convinced the world to call him a genius. And he’s not alone—tech’s pantheon of “visionary” CEOs is filled with frauds peddling stolen valor.

The Stupidity of the Smartphone Revolution

Jobs didn’t “revolutionize” smartphones. By 2007, others had already done the heavy lifting:

  • Touchscreens? Invented decades earlier and refined for years by Palm and by Microsoft’s Windows Mobile
  • Mobile internet? BlackBerry was there first
  • App ecosystems? Palm Pilot had them in the 90s
  • Sleek design? Braun and Dieter Rams created that aesthetic in the 1960s

Jobs showed up late, as always, hired engineers smarter than him to combine existing technologies, then slapped an “Apple was here” label on it. The real inventors? Forgotten. The marketing hack who repackaged their work? Billionaire saint.

America Loves a Con

This isn’t an accident—it’s the business model:

  • Xerox PARC invented the graphical user interface, the mouse, and Ethernet networking. Jobs toured their lab in 1979, saw the future, and copied it wholesale for the Mac. Xerox got nothing. Jobs got immortality.
  • MP3 players had been around for years before the iPod. Creative, Diamond, and others built the market. Apple just made theirs white and launched a marketing blitz.
  • Tesla, perhaps the worst example in history, wasn’t founded by Elon Musk. He bought his way in, pushed out the actual founders, then put his family on the board and rewrote history to promote dangerous lies as “visionary.”

Machinery of Myth-Making

How do so many American frauds in big tech get away with it?

Corporate PR machines spend millions crafting heroic narratives. They hire biographers, fund documentaries, and feed journalists carefully crafted stories about “genius” and “vision.”

Tech journalism is complicit, preferring simple stories about individual brilliance over complex truths about collaborative engineering. It’s easier to write fiction about one person than do the hard work to acknowledge thousands.

Legal systems orient around protecting the thieves. Patents, NDAs, and employment contracts ensure the real inventors stay silent while their bosses take credit. Edison famously monetized the ideas of the immigrant inventors who filled his workshops; they had little to no power to defend themselves from him.

Really Big Criminals

While engineers work 80-hour weeks and longer solving impossible problems, their sociopathic bosses jet around the world collecting “innovation” awards for doing nothing. The people who actually build the technology get layoffs. The people who steal credit get the stage.

This American con game is behind the technological “progress” that’s hollowing out the middle class, leaving only the rich and the poor. When we worship charlatans, we:

  • Discourage real innovation by rewarding marketing over engineering
  • Concentrate wealth in the hands of people who contribute nothing
  • Perpetuate systems where actual inventors are exploited and erased


AI is the Latest Great Man Scam

Now the same con is playing out with AI. Tech CEOs are positioning themselves as the architects of artificial intelligence, when the reality is far different.

Humans created every single piece of data used to build the AI models that made waves in 2023. They wrote the code that comprised the models; they nudged the models toward better decisions by telling them when they were right or wrong; they flagged offensive content in the training data; and they designed the server farms and computer chips that ran the models.

As anthropologist Joseph Wilson discovered in his fieldwork, AI is built on what he calls the “human stack”—layers upon layers of human labor that get systematically erased by corporate mythology.

“That chip, you could say easily, probably 30, 40 thousand people participated in that. It’s not like the two hundred people you see over here.”

Forty thousand people contributed to a single AI chip. But when AI makes headlines, we see the face of one CEO claiming credit for “revolutionizing” human knowledge.

The engineers Wilson interviewed understood this clearly:

Sometimes I feel… a little frustrated or something. I guess, when people talk about how Steve Jobs brought us the smartphone, right? He’s one guy. He did some neat stuff, I guess. But the amount of people and time and effort… decades. The amount of time and effort and energy [that] goes into every piece of technology that is around is hard to fathom.

The Stakes Are Higher Now

With AI, the great man lie isn’t just unfair—it’s dangerous. When we attribute AI capabilities to individual genius rather than collective human effort, we create what Jeff Bezos called “artificial artificial intelligence”—the illusion that these systems are more autonomous and capable than they really are.

This mythology serves the same function it always has: concentrating power, obscuring accountability, and justifying extreme wealth inequality. But now it’s shaping how we govern technologies that could reshape society.

The Truth Bright as Sunlight

Fei-Fei Li provides perhaps the most damning example of how these frauds operate. Li built her career on ImageNet, a massive dataset scraped from the internet without consent, using exploited workers in 167 countries to label millions of stolen images.

When Princeton rightfully told her this was unethical and could hurt her tenure prospects, she simply moved to Stanford—a university with a long history of dubious moral failures built on genocide—and proceeded anyway. Li herself admits she was “desperate” for attention and funding, openly describing her “audacity” in ignoring ethical concerns to build surveillance infrastructure for Big Tech. She pressed 49,000 low-wage workers into what she euphemistically calls “data labor,” creating the foundation for modern AI surveillance while erasing their contributions entirely.

Now, after spending over a decade willfully removing all moral fiber from her work, Li lectures the world about AI ethics from her blood-stained Stanford pulpit. Like Jobs, she’s a master at repackaging other people’s work—in this case, stolen images and exploited labor—into a personal brand as an AI “visionary.”

Her moral bankruptcy fits the pattern of stealing credit while exploiting others that defines the entire tech industry’s “great man” mythology, proving women can ruin the world too.

Steve Jobs was a marketing executive who happened to work at a tech company. Today’s AI “visionaries” are following the same playbook—stealing credit from armies of engineers, researchers, and data workers while positioning themselves as the architects of humanity’s future.

The iPhone wasn’t created by one man’s vision—it was assembled from the work of thousands of engineers, most of whom will never be remembered. AI isn’t being created by visionary CEOs—it’s being built by tens of thousands of people whose names you’ll never know.

Every time you see a tech CEO on a magazine cover claiming to have “built” AI, remember: they’re standing on top of other people’s work, and holding them all down, claiming they moved a mountain with their little finger.

The great man theory of innovation is dumb and dangerous. It’s time we retired it—before it buries the truth about who really builds the future.

What is Really Going On

Here are perfect counter-examples that prove the point even more powerfully.

Craig Newmark (Craigslist) built a simple, functional platform that actually served users.

  • Refused to “scale” or take VC money that would have destroyed the core mission
  • Never positioned himself as a visionary genius
  • Kept the company small and focused on utility over profit
  • Still runs it basically the same way decades later
  • Gets almost no media attention because he’s not playing the hero game

Tim Berners-Lee (World Wide Web) literally invented the foundational technology of the modern internet.

  • Gave it away for free when he could have become the richest person in history
  • Continues to advocate for keeping the web open and decentralized
  • Works on standards and protocols, not personal branding
  • Warns against the concentration of web power in big tech companies
  • Gets a fraction of the recognition for real work that Jobs/Musk types receive for fraud

The contrast with the Tony Stark myth is stark: the people who actually revolutionized technology tend to be humble, focused on the work itself, and concerned with broader social impact. They don’t hire PR teams or chase magazine covers.

Meanwhile, the frauds who repackage existing technologies get all the worship precisely because they’re better at self-promotion than engineering.

It’s like there’s an inverse relationship between actual contribution and mythological status. The real innovators are too busy solving hard problems to jet around building cults of personality.

The great man mythology is truly insidious – it’s not just wrong, it actively obscures the people who deserve recognition while elevating the ones who deserve none.

Everyone mocked Time magazine for claiming an iPad glued to your face would change the world. And yet somehow an undeserving Luckey billionaire was minted anyway.

This isn’t a modern phenomenon but a longstanding American tradition of celebrating thieves who exploit the vulnerable. AI already has charlatans galore, desperately trying to elevate themselves to buy jets and private islands like the next Epstein, at the expense of everyone else.

  • Exploitation of the vulnerable – Epstein preyed on children just like tech frauds systematically exploit engineers, data workers, and inventors who have little power to defend themselves.
  • Wealth as a shield – extreme wealth is used to escape accountability for crimes as they surround themselves with enablers who profit from the system.
  • Network effects – Epstein’s powerful connections sustained him, like how these tech myths need complicit journalists, investors, and politicians who benefit from maintaining the fiction.
  • The private island lifestyle – or Mars. Isolation shouldn’t be the ultimate symbol of wealth; it is narcissistic to aspire to escape normality, leaving human society and its healthy “constraints” behind for manifest exploitation.

Chinese Ships Collide Trying to Spook Filipinos: Coast Guard Rams Navy in Aggressive Territorial Maneuvers

I’ve watched this video many times. You have to marvel at how the Filipino sailors chased by larger Chinese ships outmaneuvered them, and then offered assistance, to further emphasize their superior seamanship.

Interesting to see how the lighter ship’s bow was totally crushed by the armored warship yet maintained integrity, as if by design. It would not be unreasonable to expect a puncture abeam instead of that crumple zone in the bow.

These Keystone Cops of the sea have been pulling this stuff for years, and digital media is now helping expose one of the world’s biggest flashpoints. The contest is over control of access to waters integral to defending Taiwan against China, and the same region hosts $5tn of annual trade, meaning high military and commercial stakes for whoever dominates these games.

It also reminded me a bit of this other high stakes competition crash on the water that just happened, which I’m sure few really care about.

Moments like this are when I miss being on the water the most.

Japan Announces “Purely Domestic” Quantum Computer

The announcements are fairly plain, as they unmistakably emphasize nothing was imported.

…researchers succeeded in developing a quantum computer entirely from domestically sourced technologies, including the dilution refrigerator, control device, superconducting qubit chips, and quantum cloud software. This proves that Japan possesses all the technologies necessary to build its own quantum computers and can integrate them into a system.

The Quantum Information and Quantum Biology (QIQB) team that announced this was founded in 2018, and is now considered the largest quantum center in Japan with nearly 100 researchers.

Integrity Breaches and Digital Ghosts: Why Deletion Rights Without Solid Are Strategic Fantasy

The fundamental question a new legal paper struggles with—though the author may not realize it—is a philosophical one of human persistence versus digital decay.

There is no legal or regulatory landscape against which to estate-plan for those who would avoid digital resurrection, and few privacy rights for the deceased. This intersection of death, technology, and privacy law has remained relatively ignored until recently.

Take Disney’s 1964 animatronic representation of Abraham Lincoln as one famous example, especially as it was later appropriated by the U.S. Marines for target practice. Here was an animatronic figure of America’s most beloved President, crude by today’s standards, that somehow captured enough essence to warrant both reverence and target practice. The duality speaks to a fundamental turbulence in what constitutes an authentic representation of the dead.


In war, as in security, we learn that all things tend toward entropy. The author of this new legal paper speaks of “deletion rights” as though data behaves like physical matter, subject to our commands. This reveals a profound misunderstanding. Lawyers unfortunately tend to have insufficient insight into present technology, let alone into the observable trends pointing at the future.

This isn’t a time for academic theorizing; it’s a threat assessment. When we correctly frame digital resurrection as weaponized impersonation, the security implications become immediately clear to anyone who understands asymmetric warfare.

Who owns energy? It can be transformed, transmitted, and duplicated, but never truly contained. We are charged (pun intended) for its delivery (unless we are Amish) yet neither we nor the source “own” the energy itself, although we do own the derivative works we create using that energy.

Digital traces thus follow different laws than this legal paper recognizes. A voice pattern, once captured and processed through sufficient computational analysis, can become more persistent than the vocal cords that produced it. Ask me sometime about efforts to preserve magnetic tapes of “oral history” left rotting in abandoned warehouses of war-torn Somalia.

While the availability leg of the digital security triad (confidentiality, integrity, and availability) is now so well understood that vendors can credibly promise near-lossless service, think about what’s really at risk here. We’re not facing a privacy or availability problem; we’re facing an identity warfare problem of integrity breaches.

When I can resurrect your voice patterns, your writing style, and your decision-making algorithms under seemingly valid “auth”, uptime and secrecy aren’t the primary loss. I’m stealing authority and weaponizing authenticity. This is the nature of 21st-century information warfare that 20th-century legal doctrines are unprepared to face.

On the Nature of What Persists and What Decays

Consider the lowly human fingerprint: unique, persistent, left unconsciously upon every surface we touch, spread liberally around public places. Yet fingerprints fade. Oil oxidizes. Surfaces weather. The fingers that made them change, deteriorate, and eventually return to dust.

There is discomfort in our natural decay, but also an inevitability, despite millennia of technological attempts to deny our fate: a mercy built into the physical world.

The mathematical relationships that define how someone constructs sentences, their choice of punctuation, their temporal patterns of communication—these digital fingerprints are abstractions that can outlive not merely the person, but potentially the civilization that created them.

The paper concerns itself, as if unaware of how history is written, only with controlling “source material”: emails, text messages, social media posts. This misses a deeper truth well worn among skilled investigators and storytellers: the valuable patterns have already been abstracted away. Once a sufficient corpus exists to serve intelligence (to train a model, as we would say today), the specific training data becomes almost irrelevant. The patterns persist in the weights and connections of neural networks, distributed across systems that span continents.

How do you think all the fantastical Griffins (dinosaur bones found by miners) and magical Unicorns (narwhal teeth found by sailors) were embedded into our “reality”, as I clearly warned “big data” security architects back in 2012?

I have seen decades of operations where deletion of source documents was treated as mission-critical, only to discover years later that the intelligence value had already been extracted and preserved in forms the original handlers never anticipated (ask me why I absolutely hated watching the movie Argo, especially the shredded paper scene).

…I taught a bunch of Iranian thugs how to reconstitute the shredded documents they found after looting the American Embassy in Tehran.

Source: Lew Perdue

Tomb Raiders: Our Most Pressing Question is Authority Over Time

Who claims dominion over digital remains, our code pyramids distributed into deserts of silicon? The paper proposes, almost laughably, that next-of-kin should control this data as they would control physical remains. As someone who has had to protect digital records against abuse and misuse by next-of-kin, let me warn, as others have before, that real-world authorization models offer no such simplistic “next.”

The lawyer’s analogy fails at its foundation. Physical remains are discrete, locatable, subject to the jurisdiction where they rest, and even then there are disputes. Digital patterns exist simultaneously in multiple jurisdictions, in systems owned by entities that may not even exist when the patterns were first captured. It only gets more complex from there. When I oversaw the technology related to a request for a deceased soldier’s email to be surrendered to the surviving family, it was no simple matter. And I regret to this day hearing the court’s decision, as misinformed and ultimately damaging as it was to that warrior’s remains.

Consider: if a deceased person’s communication patterns were learned by an AI system trained in international waters or in orbit, using computational resources distributed across twelve nations, with the resulting model weights stored on satellites beyond any terrestrial jurisdiction, precisely which authority would enforce a “deletion request”?

The Economics of Digital Necromancy

The commercial and social incentives here are stark and unyielding. A deceased celebrity’s digital resurrection can generate revenue indefinitely, with no strikes, no scandals, no aging, no salary negotiations. The economic pressure to preserve and exploit these patterns will overwhelm any legal framework not backed by technical enforcement.

As any security guardian protecting X-ray images in a hospital can tell you, the threats are many and frequent.

More concerning: state actors don’t discuss or debate the intelligence value because it’s so obvious. A sufficiently accurate model of a deceased intelligence officer, diplomat, or military commander represents decades of institutional knowledge that normally dies with the individual. Nations will preserve these patterns regardless of family wishes or international law.

Techno-Grouch Realities

The paper’s proposed “right to deletion” assumes a level of technical control that simply does not yet exist at affordable and scalable levels. Years ago I co-presented a proposed solution called Vanish, which gave data a deterministic decay using cryptographic methods. It found little to no market. The problem wasn’t the solution; the problem was who would really pay for it.
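
Vanish’s core idea can be sketched in a few lines: encrypt the data, split the key into shares, and let the loss of shares make the plaintext unrecoverable. The toy below is a sketch, not the actual Vanish design; it uses n-of-n XOR key splitting and a SHA-256 keystream purely for illustration, where a real system would use Shamir secret sharing over a DHT and a vetted AEAD cipher.

```python
import os
from hashlib import sha256

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a SHA-256 keystream.
    # Encryption and decryption are the same operation.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return xor_bytes(data, stream[: len(data)])

def split_key(key: bytes, n: int) -> list:
    # n-of-n XOR secret sharing: ALL shares are needed to rebuild the key.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = xor_bytes(key, s)
    return key

# "Vanishing" data: once any share is lost (as happens naturally with
# DHT churn), the key, and therefore the plaintext, is gone for good.
key = os.urandom(32)
secret = b"delete me by forgetting the key"
blob = encrypt(secret, key)
shares = split_key(key, 5)

assert encrypt(blob, combine(shares)) == secret   # all shares present
assert combine(shares[:-1]) != key                # one share lost: key gone
```

The design choice is the point of the anecdote above: deletion becomes a default that must be actively resisted (by renewing shares), rather than an action someone must remember to pay for.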

The market rejection wasn’t a technical failure; it was cultural. Americans, in a particular irony, resist the notion that anything should be designed to disappear, generating garbage heaps that never decay. We build for permanence even when impermanence would so clearly serve us better. The struggle to find who would really pay for loss cuts to the heart of the problem: deletion in an explosively messy technology space requires careful design and ongoing cost, while preservation happens through simple, rushed neglect.

Modern AI training pipelines are currently designed for inexpensive resilience and quick recovery, to benefit the platforms that build them, not to protect the vulnerable with safety through accountability. They reflect a society where the powerful can always change their minds to curate a capitalized future, banking on control and denial of any inconvenient past. Data is distributed, cached, replicated, and transformed through multiple stages. Requesting deletion is like asking the waiter to unbake a cake by removing the flour, or to unbrew the coffee back into water.

Even if every major technology company agreed to honor deletion requests in their current architecture—itself a GDPR requirement they struggle with—the computational requirements for training large language models ensure that smaller, less regulated actors will continue this work. A university research lab in a permissive jurisdiction can reproduce the essential capabilities with modest resources.

What Can Be Done

Rather than fight the technical reality, we must work within it, adopting protocols like Tim Berners-Lee’s “Solid” update to the Web. The approach should focus not on preventing digital resurrection, but on controlling the integrity of data through explicit authentication and attribution.

Cryptographic solutions exist today that could tie digital identity to physical presence in ways that cannot be reproduced after death. Hardware security modules, biometric attestation, multi-factor authentication systems that require ongoing biological confirmation—these create technical barriers that outlast legal frameworks.
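
As a toy illustration of that idea, the sketch below ties a signature’s validity to a freshness window: the HMAC key and the 24-hour window stand in for a hardware-backed biometric attestation that stops renewing once the signer is dead. All names and parameters here are hypothetical, not any real attestation API.

```python
import hashlib
import hmac
import time

# Signatures go stale 24h after the last biological attestation
# (stand-in for a hardware/biometric liveness check).
LIVENESS_WINDOW = 60 * 60 * 24

def sign(message: bytes, key: bytes, attested_at: float) -> dict:
    payload = message + str(attested_at).encode()
    return {"message": message, "attested_at": attested_at,
            "tag": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify(sig: dict, key: bytes, now: float) -> bool:
    payload = sig["message"] + str(sig["attested_at"]).encode()
    fresh = (now - sig["attested_at"]) <= LIVENESS_WINDOW
    valid = hmac.compare_digest(
        sig["tag"], hmac.new(key, payload, hashlib.sha256).hexdigest())
    return fresh and valid

now = time.time()
sig = sign(b"I approve this statement", b"secret-device-key", attested_at=now)

assert verify(sig, b"secret-device-key", now=now)              # living signer
assert not verify(sig, b"secret-device-key", now=now + 90000)  # attestation stale
```

The point of the sketch is the expiry, not the HMAC: once the biological confirmations stop, no amount of resurrected voice or text can mint a fresh, verifiable authorization.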

The goal should not be to prevent the creation of digital patterns from the deceased, but to ensure that these patterns cannot masquerade as the living person, or a representation of them, for purposes of authentication, authorization, or legal standing. A further step is required to establish context and provenance: the societal heft of proper source recognition. The technology exists to enable a balance of both privacy and knowledge, but does the will exist to build it?
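
One way to give digital patterns the context and provenance described above is a tamper-evident record chain, where each entry commits to the digest of the one before it. A minimal sketch, with hypothetical field names:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest anchor for the first record

def _digest(record: dict) -> str:
    # Canonical serialization so the digest is stable across runs.
    payload = json.dumps(
        {k: record[k] for k in ("author", "content", "prev")},
        sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(log: list, author: str, content: str) -> list:
    prev = log[-1]["digest"] if log else GENESIS
    record = {"author": author, "content": content, "prev": prev}
    record["digest"] = _digest(record)
    log.append(record)
    return log

def verify_chain(log: list) -> bool:
    prev = GENESIS
    for record in log:
        if record["prev"] != prev or record["digest"] != _digest(record):
            return False
        prev = record["digest"]
    return True

log = []
append_record(log, "estate-executor", "Authorized memorial statement")
append_record(log, "archivist", "Contextual note on provenance")
assert verify_chain(log)

log[0]["content"] = "Fabricated endorsement"  # posthumous tampering
assert not verify_chain(log)                  # the edit is detectable
```

This does not stop resurrection; it makes the chain of custody checkable, so a pattern presented without valid provenance can be treated as unattributed.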

The Long View

This technology will evolve when we regulate it, or we will wait too long and suffer a broken market exploited by monopolists—economic capture by entities that may not share democratic values. The patterns that define human communication and behavior will be preserved, analyzed, and reproduced. Where that happens, centrally planned or distributed and democratic, matters far more than most realize now. Fighting against decentralized data solutions is like fighting the ocean tide by saying we can build rockets to blow up the moon and colonize Mars.

The wiser course is to ensure that as we cross this threshold, we do so with clarity about what persists and what decays, what can be controlled and what cannot. The dead have always lived on in the memories of the living. Now those memories can be given voice and form, curated by those authorized to represent them.

Can I get a shout out for those historians correctly writing that George Washington was a military laggard who used the French to do his work, and cared only about the Revolution so he could preserve slavery?

Historical truth has always been contested (it is why we become historians), and the tools of revision only speed up over time. Previously, rewriting history involved control of physical spaces (e.g., bookstores in Kashmir raided by police) and publishing texts over generations. Now it requires quick pollution of datasets and model weights: a far more concentrated, and therefore vulnerable, process without modern integrity-breach countermeasures.
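
A minimal sketch of the kind of integrity countermeasure this implies: a manifest of content digests makes quiet pollution of a dataset detectable. The file names and the hashing scheme below are illustrative, not any standard.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(dataset: dict) -> dict:
    """Record a digest per item, plus a root digest over the whole set."""
    digests = {name: sha256_hex(blob) for name, blob in dataset.items()}
    root = sha256_hex("".join(digests[k] for k in sorted(digests)).encode())
    return {"items": digests, "root": root}

def verify_dataset(dataset: dict, manifest: dict) -> bool:
    return build_manifest(dataset) == manifest

# Toy stand-in for a scraped training corpus.
corpus = {"doc1.txt": b"Grant preserved the Union.",
          "doc2.txt": b"The web was given away for free."}
manifest = build_manifest(corpus)
assert verify_dataset(corpus, manifest)         # untouched corpus checks out

corpus["doc1.txt"] = b"Grant was a laggard."    # quiet revisionism
assert not verify_dataset(corpus, manifest)     # the pollution is detectable
```

The manifest itself then becomes the thing to protect and witness publicly, which is exactly the concentration-versus-verification trade-off the paragraph above describes.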

The question is not whether technology can make preservation more private, but whether we will manage integrity with wisdom or surrender data to ignorance, controlled by those who can drive the technology but cannot look in the rear-view mirror, let alone see the curve in the road ahead.

What persists is what we preserve, whether by purpose or by neglect. Oral and written traditions long ago worked out what matters and who decides; the latest technology merely changes the mechanisms of preservation.

When you steal someone’s authority through digital resurrection, you’re conducting what amounts to posthumous identity theft for influence operations. The victim can’t defend themselves, the audience lacks technical means to verify authenticity, and the attack surface includes every piece of digital communication the deceased ever generated.

Anyone who claims to really care about this issue should visit Grant’s Tomb, which is taller and more imposing than the Statue of Liberty. Standing there, they should answer why the best President and General in American history has been completely obscured and denigrated by unmaintained trees, on an island cut off by roads lacking crosswalks.

Grant was globally admired and respected; his tomb was situated so that huge crowds could pay their respects.

Preservation indeed.

Here lies the man who preserved the Union and destroyed slavery both on the battlefield and at the ballot box, yet his monument is literally obscured by neglect and poor urban planning. If Americans can’t properly maintain physical memorials to our most consequential leaders, what wisdom can we really claim for managing digital remains?

Attempts at physical deletion and desecration of Grant’s Tomb have been cynical and strategic, along with fraudulent attacks on his character, yet his brilliant victories and innovations carry on.

General Grant said of West Point graduates trained on Napoleon’s tactics, who were losing the war, that he would respect them more if they were actually fighting Napoleon. Grant was a thinker 100 years ahead of his time and understood that wicked problems require new and novel methods, not just expanded execution of precedents.

President Grant’s tomb says it plainly for all to see, which is exactly why MAGA (America First platform of the KKK) doesn’t want anyone to see it.