The Tesla was already traveling at more than twice the speed limit when it accelerated into the back of a motorcyclist, like a missile fired in a targeted assassination.
According to investigators, Dorfman was driving more than twice the 45-mph speed limit when his 2020 Tesla Model 3 collided with Ingrid Noon’s motorcycle on Aug. 26, 2022. […] Minutes before the wreck, both Dorfman and Noon were seen leaving the since-closed Brickyard Restaurant and Micro Brewery about 2.5 miles away.
He allegedly claimed at first that the motorcycle had swerved and cut his Tesla off, a story that was easily disproven.
The evidence showed that when the motorcycle lit its bright red brake lights in front of a yellow traffic light, the Tesla rapidly accelerated to over 100 mph and killed Noon.
Then I chanced across a startup listed on the LSE that has been developing a known effective strategy: integrity controls in the supply chain.
Honey is one of the most adulterated food items on the market. We provide full traceability; we highlight 1. the location – area, country – from which we source our honey; and 2. the communities that manage our hives.
In the 1940s, tobacco companies paid doctors $5,000 (equivalent to $60,000 today) to recommend cigarettes as a treatment for throat irritation. They hosted lavish dinners, set up “hospitality booths” at medical conventions, and even had cigarette packs embossed with doctors’ names. All of it exploited public trust in medical authorities to calm growing concerns about smoking’s health risks. By the 1950s it was clear that tobacco was causing cancer and poised to kill on a massive scale, yet the tobacco companies threw propaganda punches so heavy and so successful that they promoted willful conspirators all the way into the White House.
Today, venture capitalists are running a remarkably similar playbook with AI – but instead of selling us on inhaling smoke, they’re pushing us to tune out and “upload ourselves” into their proprietary AI systems.
Market Manipulation as a Playbook
A recent article by a16z partner Justine Moore titled “Export Your Brain: How I Uploaded Myself to AI” perfectly exemplifies harmful propaganda tactics. Let’s break down the parallels:
Harm Minimization: Moore writes that always-on AI surveillance “might sound invasive today, but so did capabilities like location sharing.” This mirrors how tobacco companies dismissed health concerns as temporary squeamishness.
Rebranding Unhealthy Dependency as Health: Where cigarette companies promoted smoking for “throat comfort,” VCs are selling AI as essential for mental health and self-understanding. Moore suggests using AI for therapy-like functions and “better understanding yourself.”
Hiding Financial Motives: Just as doctors’ cigarette recommendations were bought with fishing trips and dinners, venture capitalists promoting “AI brain exports” stand to profit from their portfolio companies’ success.
Building Social Pressure: The article implies this technology is inevitable and that everyone will eventually use it – classic tobacco industry tactics to create social pressure for adoption.
Simple a16z Text Analysis
No discussion of privacy protections
No mention of security measures
Absence of data rights framework
Limited mention of consent mechanisms
No mention of data ownership after sharing
Many data misuse examples from recent history (like Cambridge Analytica or Clearview AI) reinforce why these concerns aren’t theoretical. AI regulation efforts (like the EU’s AI Act) show what protections look like, yet the article ignores these concepts entirely while promoting the opposite.
False Choice Fallacy of Feudalism
Most disturbingly, this gloatingly harmful narrative presents a false choice that mirrors another crisis facing Americans today: the corporate capture of housing. Just as private equity firms are buying up residential properties and forcing people into the instability of permanent renting, venture capitalists try to convince us to give up our autonomy and “rent” our digital selves from their incarcerating AI platforms.
Consider these obvious parallels:
| Housing Market | AI “Brain Upload” Market |
| --- | --- |
| Investment firms buy up residential properties en masse | VCs fund platforms that capture personal data |
| Convert homeownership opportunities into rental-only options | Convert what could be personal data sovereignty into “AI services” |
| Steadily increase rents, extracting more wealth over time | Steadily increase dependency on their platforms |
| Create artificial scarcity in the housing market | Create artificial barriers to personal data control |
| Force people into permanent tenancy rather than ownership | Force people into permanent digital tenancy rather than ownership |
In both cases, we’re seeing the same playbook from elites who prioritize investment gains at others’ expense. They convince unwitting victims that ownership (whether of homes or data) is supposedly too complex and expensive for individuals to manage, offering entrapment disguised as “convenience” – even though it’s deliberately designed to cost more in the long run. This surrendering of control to corporations is affectionately known in Silicon Valley as building “digital moats,” except instead of protecting users from harm, these moats are used in reverse to prevent escape – technology deliberately designed as an exit-barrier for extracting wealth from those who’ve been “uploaded.”
Better Path? I’m Glad You Asked: Freedom of Movement and Ownership
Just as the answer to the housing crisis isn’t surrendering to massive investment firms that turn everyone into tenants of corporate landlords, the solution to AI isn’t surrendering our digital selves to venture-backed cheerleaders of societal disruption. We already have technology for genuine digital ownership that respects centuries of human progress in liberty and justice. The W3C Solid standards exemplify a framework that provides essential protections against exploitation while enabling all the benefits of AI (a brief code sketch follows the list below):
Store personal data in secure “data wallets”
Control exactly what information is shared and with whom
Revoke access when appropriate for safety and security
Keep data portable, not “uploaded” forever into any single company’s control
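To make the “data wallet” idea concrete, here is a minimal sketch using Inrupt’s open-source @inrupt/solid-client and @inrupt/solid-client-authn-node libraries to write a private note into a Solid Pod you control. The Pod provider URL, resource path, and client credentials are hypothetical placeholders for illustration, not a definitive implementation:

```typescript
// Minimal sketch: save a private note into a Solid Pod you control.
// All URLs and credentials below are hypothetical placeholders.
import {
  createSolidDataset,
  createThing,
  buildThing,
  setThing,
  saveSolidDatasetAt,
} from "@inrupt/solid-client";
import { Session } from "@inrupt/solid-client-authn-node";

async function saveNote(): Promise<void> {
  const session = new Session();
  // Register your own credentials with your chosen Pod provider.
  await session.login({
    oidcIssuer: "https://login.example-pod-provider.com",
    clientId: "YOUR_CLIENT_ID",
    clientSecret: "YOUR_CLIENT_SECRET",
  });

  // Build a single "note" entity using schema.org vocabulary.
  const note = buildThing(createThing({ name: "note1" }))
    .addStringNoLocale("https://schema.org/text", "A thought I own.")
    .build();

  // Save it to a resource in *your* Pod, on a server *you* chose.
  const dataset = setThing(createSolidDataset(), note);
  await saveSolidDatasetAt(
    "https://mypod.example-pod-provider.com/private/notes",
    dataset,
    { fetch: session.fetch } // authenticated fetch enforces your permissions
  );
}

saveNote();
```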
How the Solid Protocol Works
Users have personal data wallets for their entire digital lives, like a modern take on the safe deposit box revolution that drove 1800s asset protection and personal wealth preservation:
Your data stays encrypted on servers you choose
Apps request specific permissions to access specific data
You can revoke access at any time
Data can be moved between providers easily
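And here is a sketch of what “revoke access at any time” can look like in practice, using the same library’s universal access API. Again, the resource URL and the app’s WebID are hypothetical placeholders, and the sketch assumes an authenticated login session:

```typescript
// Sketch: grant an app read-only access to one Pod resource, then revoke it.
// The resource URL and app WebID below are hypothetical placeholders.
import { universalAccess } from "@inrupt/solid-client";
import { Session } from "@inrupt/solid-client-authn-node";

async function grantThenRevoke(session: Session): Promise<void> {
  const resource = "https://mypod.example-pod-provider.com/private/notes";
  const appWebId = "https://some-ai-app.example.com/profile#me";

  // Grant the app read-only access to this single resource.
  await universalAccess.setAgentAccess(
    resource,
    appWebId,
    { read: true, write: false },
    { fetch: session.fetch } // authenticated fetch from your login session
  );

  // Later: revoke it. The underlying data never left your chosen server.
  await universalAccess.setAgentAccess(
    resource,
    appWebId,
    { read: false, write: false },
    { fetch: session.fetch }
  );
}
```

The key design choice: permissions live with the resource on a server you picked, so leaving is a matter of copying files, not begging a platform for an export.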
This fundamentally differs from unhealthy “brain upload” models that encourage victims to fall into systems designed to exploit them while preventing escape.
Think about the difference between owning a brick house with a lock on the door and a roaring fire in the chimney… and a pitch by the wolves of Palo Alto to leave and voluntarily upload yourself into a corporate-owned campsite with straw huts.
“The better to see you…”
Don’t Forget the Fairy Tale Was a Warning
When tobacco companies manipulated doctors to promote smoking, they caused incalculable harm to public health, killing tens of millions of people for profit with little to no accountability. Today’s venture capitalists are pushing for something potentially even more invasive: the voluntary surrender of our most intimate thoughts, memories, and psychological patterns to proprietary AI systems. Many millions more will die in such a system, without exaggeration.
The promise of AI assistance doesn’t require surrender and regression to the worst chapters in history. We can build systems that respect human autonomy and data rights. But first, we must recognize the clear manipulation tactics being used (even if unintentionally; lunch deals on Sand Hill Road are often devoid of reality) to push us toward centralized, repressive regimes of corporate data capture.
Action Speaks Louder Than Words
1. Demand transparency from those with financial interests in AI adoption
2. Deploy data sovereignty technology like W3C Solid
3. Invest in development of open standards and protocols
4. Refuse products that fail to respect data rights
5. Measure the promise of “AI companions” against history and call a spade a spade
Remember: just as people who took health advice from doctors paid by tobacco companies saw their worlds rocked by tragedy, be wary of advice from anyone who appears invested in surveillance used for digital human trafficking.
Your digital self deserves a home you own with locks you control, not a ruse to relocate you into a temporary cell with guards who answer to an unaccountable system of elite ownership.
Take it from someone who worked three decades in the VC-funded corridors of Silicon Valley disruption and dislocation; I’ve seen it all first hand. From the forefront of defending users from harm, I can say this is a moment of danger that should not be underestimated.
Privacy Implications
The a16z article’s casual approach to privacy concerns, comparing AI brain monitoring to location sharing, overlooks the unprecedented scope and intimacy of the data collection proposed. This technology would have access to users’ thoughts, memories, and personal interactions — a level of access that demands rigorous privacy protections and strong ethical frameworks, all of which are entirely missing from a suspiciously saccharine fantasy pitch.
Let me explain this another way, looking at similar but different headlines. We are talking here about human behavior when it comes to power and technology. Hundreds of thousands of people are said to have died in high-technology control and capture systems around Damascus. Tens of thousands who were “uploaded” into detention centers are today wandering out, unable to even say their own names or explain the horrors they experienced under an Assad regime of population control. Assad, meanwhile, is said to be unimaginably wealthy, lounging in a $40M penthouse in Moscow. Such tragic stories of untold suffering and death without justice undoubtedly will be repeated in the world of AI if the wrong people are believed.
History has shown that such abusive surveillance systems don’t just happen “over there”, because they also happen right here at home… in Palo Alto.
In the 1950s-60s, the CIA’s infamous MK-ULTRA program ran experiments on unwitting Stanford students, offering them drugs and prostitutes while secretly studying them for research into psychological manipulation and informant control methods. Funding technology to expose people’s thoughts? Finding ways to “offer” opportunities for people to be “understood better”? Sound familiar? Palo Alto schemes masquerading as innovation while seeking ways into people’s heads have a particularly important precedent.
Like those CIA researchers, today’s venture capitalists frame their surveillance as voluntary and beneficial. But there’s a clear through-line from MK-ULTRA’s “consent” model to modern tech’s manipulative push for “uploading” our minds. Both serve powerful interests seeking profit from psychological control, only now it’s packaged as a consumer product rather than as drugs and prostitutes in a front for government research. The core abuse theory remains the same, exploiting susceptibility to offers of “benefits” to gain access to human consciousness. For what purposes?
While the NYT warns about teen suicide from chatbot use, and CNN similarly documents a mother’s pain from a death attributed to unregulated AI, this a16z author flippantly promotes rapid adoption of these systems, saying she loves what she sees while looking only at the benefits to herself.
Show me where a16z has any record of guardrails intended or expected to prevent societal manipulation and control, let alone any attempt to demonstrate sane intent with this article itself.
Term Frequency in a16z Article
| Term | Mentions | Context |
| --- | --- | --- |
| Privacy | 1 | Only mentioned in legal disclaimer to protect author |
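For anyone who wants to reproduce this kind of count, a term-frequency check takes only a few lines of code. The file name and term list below are illustrative assumptions, not the exact methodology used here:

```typescript
// Minimal term-frequency check over an article saved as plain text.
// "a16z-article.txt" and the TERMS list are hypothetical placeholders.
import { readFileSync } from "node:fs";

const TERMS = ["privacy", "security", "consent", "ownership", "data rights"];
const text = readFileSync("a16z-article.txt", "utf8").toLowerCase();

for (const term of TERMS) {
  // Count whole-word, non-overlapping occurrences of each term.
  const count = (text.match(new RegExp(`\\b${term}\\b`, "g")) ?? []).length;
  console.log(`${term}: ${count}`);
}
```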
The Canfield Post of the Ohio State Highway Patrol is investigating a fatal crash.
The crash occurred at approximately 4:28 p.m. Monday on West Calla Rd.
Kyle Soli of Salem was operating a 2025 Tesla Model 3 when it traveled off the right side of the road, striking a mailbox, a ditch, a culvert, and multiple trees, then overturning several times before coming to rest on all four tires.
Soli was the only occupant of the car and was transported to the hospital, where he died from his injuries.
A 2025 model? Who in 2024 is looking at the data and buying a 2025 model Tesla? This doesn’t bode well for anyone even thinking about buying a new Tesla.
Tesla vehicles suffer fatal accidents at a rate that’s twice the industry average, according to a new report.
Initial statements by police point to exactly the kind of accident Tesla’s CEO claims their technology should prevent. The driver’s Model 3 left the road and rolled multiple times after striking several objects.
The circumstances described include alleged impairment, with police advocating for seatbelt use. They highlight a dangerous contradiction: Tesla markets its driver assistance features as safety enhancers while its CEO publicly promotes the idea that their cars can safely transport sleeping drivers. This messaging is known to encourage dangerous behavior, adding to mounting concerns about Tesla’s safety record.
Insurance Institute for Highway Safety (IIHS) data shows fast-rising death rates for the Model 3, directly contradicting Tesla’s claims of superior safety.