A San Bruno police officer pulls over a Waymo robotaxi during a DUI checkpoint. The vehicle has just made an illegal U-turn—seemingly fleeing law enforcement. The officer peers into the driver’s seat and finds it empty. He contacts Waymo’s remote operators. They chat. The Waymo drives away.
No citation issued.
The police department’s social media post jokes:
Our citation books don’t have a box for ‘robot.’
But there’s nothing funny about what just happened, because history tells us exactly where this leads. We are witnessing the rebirth of corporate immunity for murder: vehicular violence at scale.

In Phoenix, a Waymo drives into oncoming traffic, runs a red light, and “FREAKS OUT” before pulling over. Police dispatch notes: “UNABLE TO ISSUE CITATION TO COMPUTER.”
In San Francisco, a cyclist is “doored” by a Waymo passenger exiting into a bike lane. She’s thrown into the air and slams into a second Waymo that has also pulled into the bike lane. Brain and spine injuries. The passengers leave. There’s a “gap in accountability” because no driver remains at the scene.
In Los Angeles, multiple Waymos obsessively return to park in front of the same family’s house for hours, like stalkers. Different vehicles, same two spots, always on their property line. “The Waymo is home!” their 10-year-old daughter announces.
In a parking lot, Waymos gather and honk at each other all night, waking residents at 4am. One resident reports being woken “more times in two weeks than combined over 20 years.”
A Waymo gets stuck in a roundabout and does 37 continuous laps.
Another traps a passenger inside, driving him in circles while he begs customer service to stop the car. “I can’t get out. Has this been hacked?”
Two empty Waymos crash into each other in a Phoenix airport parking lot in broad daylight.
And now, starting July 2026, California will allow police to issue “notices of noncompliance” to autonomous vehicle companies. But here’s the catch: the law doesn’t specify what happens when a company receives these notices. No penalties. No enforcement mechanism. No accountability.
In 1866, London police posted notices about traffic lights with two modes:
CAUTION: “all persons in charge of vehicles and horses are warned to pass the crossing with care, and due regard for the safety of foot passengers”
STOP: “vehicles and horses shall be stopped on each side of the crossing to allow passage of persons on foot”
The street lights were designed explicitly to stop vehicles for pedestrian safety. This was the foundational principle of traffic regulation.
Then American car manufacturers inverted it completely.
They invented “jaywalking”—a slur using “jay” (meaning rural fool or clown) to shame lower-class people for walking. They staged propaganda campaigns where clowns were repeatedly rammed by cars in public displays. They lobbied police to publicly humiliate pedestrians. They successfully privatized public streets, subordinating human life to vehicle flow.

The racist enforcement was immediate and deliberate. In Ferguson, 95% of arrests for the invented crime of jaywalking were of Black people, exactly as these laws always intended.

In 2017, a North Dakota legislator proposed giving drivers zero liability for killing pedestrians “obstructing traffic.” Months later, a white nationalist in Charlottesville murdered a woman with his car, claiming she was “obstructing” him.
Now we’re doing it again—but this time the vehicles have no drivers to cite, and the corporations claim they’re not “drivers” either.

Corporations ARE legal persons when it benefits them:
- First Amendment rights (Citizens United)
- Religious freedom claims
- Contract enforcement
- Property ownership
But corporations are NOT persons when it harms them:
- Can’t be cited for traffic violations
- No criminal liability for vehicle actions
- No “driver” present to hold accountable
- Software “bugs” treated as acts of God
This selective personhood is the perfect shield. When a Waymo breaks the law, nobody is responsible. When a Waymo injures someone, there’s a “gap in accountability.” When police try to enforce traffic laws, they’re told their “citation books don’t have a box for ‘robot.’”
Here’s what’s actually happening: Every time police encounter a Waymo violation, they’re documenting a software flaw that potentially affects the entire fleet.
When one Waymo illegally U-turns, thousands might have that flaw. When one Waymo can’t navigate a roundabout, thousands might get stuck. When one Waymo’s “Safe Exit system” doors a cyclist, thousands might injure people. When Waymos gather and honk, it’s a fleet-wide programming error.
These aren’t individual traffic violations. They’re bug reports for a commercial product deployed on public roads without adequate testing.
But unlike actual bug bounty programs where companies pay for vulnerability reports, police document dangerous behaviors and get… nothing. No enforcement power. No guarantee of fixes. No way to verify patches work. No accountability if the company ignores the problem.
The police are essentially providing free safety QA testing for a trillion-dollar corporation that has no legal obligation to act on their findings despite mounting deaths.
We’ve seen this exact playbook before.
From 2007-2014, Baghdad had over 1,000 checkpoints where Palantir’s algorithms flagged Iraqis as suspicious based on the color of their hat at dawn or the car they drove. U.S. Military Intelligence officers said: “If you doubt Palantir, you’re probably right.”
The system was so broken that Iraqis carried fake IDs and learned religious songs not their own just to survive daily commutes. Communities faced years of algorithmic targeting and harassment. Then ISIS emerged in 2014—recruiting heavily from the very populations that had endured years of being falsely flagged as threats.
Palantir’s revenue grew from $250 million to $1.5 billion during this period. A for-profit terror generation engine.
The critical question military commanders asked:
Who has control over Palantir’s deadly “Life Save or End” buttons?
The answer: Not the civilians whose lives were being destroyed by false targeting.
Who controls the “Life Save or End” button when a Waymo encounters a cyclist? A pedestrian? Another vehicle?
- Not the victims
- Not the police (can’t cite, can’t compel fixes)
- Not democratic oversight (internal company decisions)
- Not regulatory agencies (toothless “notices”)
Only the corporation. Behind closed doors. With no legal obligation to explain their choices.
When a Tesla, Waymo or Palantir product “veers” into a bike lane, who decided that was acceptable risk? When it illegally stops in a bike lane and doors a cyclist, causing brain injury, who decided that “Safe Exit system” was ready for deployment? When it drives into oncoming traffic, who approved that routing algorithm?
We don’t know. We can’t know. The code is proprietary. The decision-making is opaque. And the law says we can’t hold anyone accountable.
In 2016, Elon Musk loudly promised that Tesla would end cyclist deaths, and publicly mocked and abused anyone who challenged him. Tesla vehicles then kept “veering” into bike lanes, and in 2018 one accelerated into and killed a man standing next to his bike.

Similarly in 2017, an ISIS-affiliated terrorist drove a truck down the Hudson River Bike Path, killing eight people. Federal investigators linked the terrorist to networks that Palantir’s algorithms had helped radicalize in Iraq. For some reason they didn’t link him to the white supremacist Twitter campaigns demanding pedestrians and cyclists be run over and killed.

Since then, Tesla “Autopilot” incidents involving cyclists have become epidemic. In Brooklyn, a Tesla traveling 50+ mph killed cyclist Allie Huggins in a hit-and-run. Days later, NYPD responded by ticketing cyclists in bike lanes.
This is the racist jaywalking playbook digitized: Police enforce against the vulnerable population, normalizing their elimination from public space—and training AI systems to see cyclists as violators to be punished with death rather than victims.
Musk now stockpiles what some call “Swasticars”—remotely controllable vehicles deployed in major cities, capable of receiving over-the-air updates that could alter their behavior fleet-wide, overnight, with zero public oversight.

If we don’t act, we’re building the legal infrastructure for algorithmic vehicular homicide with corporate immunity. Here’s what must happen:
Fleet-Wide Corporate Liability
When one autonomous vehicle commits a traffic violation due to software, the citation goes to the corporation multiplied by fleet size. If 1,000 vehicles have the dangerous flaw, that’s 1,000 citations at escalating penalty rates.
Dangerous violations (driving into oncoming traffic, hitting pedestrians/cyclists, reckless driving) trigger:
- Mandatory fleet grounding until fix is verified by independent auditors
- Public disclosure of the flaw and the fix
- Criminal liability for executives if patterns show willful negligence
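The fleet-multiplied, escalating penalty described above can be sketched in a few lines of code. This is an illustrative sketch only: the base fine, escalation rate, and function name are hypothetical assumptions, not drawn from any actual statute or proposal text.

```python
# Illustrative sketch of fleet-wide liability: one citation per affected
# vehicle, with the per-vehicle fine escalating for repeat offenses.
# All figures here are hypothetical, not from any real statute.

def fleet_citation_total(base_fine: float, fleet_size: int,
                         prior_violations: int,
                         escalation: float = 1.5) -> float:
    """Total penalty: the base fine grows by `escalation` for each prior
    documented violation, then multiplies across every vehicle carrying
    the same software flaw."""
    per_vehicle = base_fine * (escalation ** prior_violations)
    return per_vehicle * fleet_size

# Example: a $500 base fine, 1,000 affected vehicles, second offense.
total = fleet_citation_total(500.0, 1000, prior_violations=1)
print(f"${total:,.0f}")  # $750,000
```

The point of the escalation term is that a flaw the company was already warned about costs more the second time, so ignoring documented patterns becomes the expensive option rather than the default one.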
Public Bug Bounty System
Every police encounter with an autonomous vehicle violation must:
- Trigger mandatory investigation within 48 hours
- Be logged in a public federal database
- Require company response explaining root cause and fix
- Include independent verification that fix works
- Result in financial penalties paid to police departments for their QA work
If companies fail to fix documented patterns within 90 days, their permits are suspended until compliance.
Restore the 1866 Principle
Traffic rules exist to stop vehicles for public safety, not to give vehicles—or their corporate owners—immunity from accountability.
The law must state explicitly:
- Corporations deploying autonomous vehicles are legally responsible for those vehicles’ actions
- “No human driver” is not a defense against criminal or civil liability
- Code must be auditable by regulators and available for discovery in injury cases
- Vehicles that cannot safely stop for pedestrians/cyclists cannot be deployed
- Human life takes precedence over vehicle throughput, period
When Waymo’s algorithms decide who lives and who gets “veered” (algorithmic death), who controls that button?
When Tesla’s systems target cyclists while police ticket the victims, who controls that button?
When corporations claim they’re persons for speech rights but not persons for traffic crimes, who controls that button?
Right now, the answer is: Nobody we elected. Nobody we can hold accountable. Nobody who faces consequences for being wrong.
Car manufacturers spent the extremist “America First” 1920s inventing the racist crime of “jaywalking” to privatize public streets and criminalize pedestrians. It worked so well that by 2017, hardly anyone blinked when a North Dakota legislator proposed zero liability for drivers who kill people with cars. By 2021, Orange County deputies shot a Black man to death while arguing over whether he had walked on a road outside the painted lines.
Now we’re handing that same power to algorithms—except this time there’s no driver to arrest, no corporation to cite, and no legal framework to stop fleet-wide deployment of dangerous systems.
Palantir taught us what happens when unaccountable algorithms target populations: you create the enemies you claim to fight, and profit from the violence.
Are we really going to let that same model loose on American streets?
Because when police say “our citation books don’t have a box for robot,” what they’re really saying is: We’ve lost the power to protect you from corporate violence.
That’s not a joke. That’s murder by legal design.
The evidence is clear. The pattern is documented. The choice is ours: Restore accountability now, or watch autonomous vehicles follow the exact playbook that turned jaywalking into a tool of racist violence and Palantir checkpoints into an ISIS recruiting campaign.
Who controls the button? Right now, nobody you can vote for, sue, or arrest. That has to change.
Here’s how William Blake described the algorithmic dangers of 1794 in “London”:
I wander thro’ each charter’d street,
Near where the charter’d Thames does flow,
And mark in every face I meet
Marks of weakness, marks of woe.

In every cry of every Man,
In every Infants cry of fear,
In every voice: in every ban,
The mind-forg’d manacles I hear
Those “mind-forg’d manacles” are the poem’s name for oppression by systems of control that are human-made yet feel external and inevitable, the same logic behind today’s algorithmic systems. In Blake’s day, a “charter’d street” was privatized public space, a precedent for using power to enforce status-based criminality such as jaywalking laws. The poem indicts church and palace alike as complicit in producing widespread, systemic suffering.