Defense Secretary Pete Hegseth still won’t explain the intelligence behind ongoing illegal US strikes on civilian boats in international waters.
There’s a simple reason, which should be most apparent to students of international history: it turns out that these aren’t drug interdiction operations. Venezuelan ships are being attacked to disrupt Iran’s financial lifeline—and Israel’s fingerprints are all over extra-judicial strike orders.
Hegseth Won’t Share Certain Secrets
The Onion understands Pete’s tragicomedy status as the least capable or qualified military leader in history
Venezuela has become a critical node in an Iran-Hezbollah money laundering operation. Cocaine moves through Venezuela, Hezbollah-connected facilitators handle the financial infrastructure, cash gets laundered through the Middle East, and proceeds fund Hezbollah operations against Israel.
Venezuelan Vice President Tareck El Aissami has been accused of helping Hezbollah members enter Venezuela and managing drug proceeds that flow back to Iran. Multiple investigations document how this network generates billions while helping Iran circumvent sanctions.
This network is running 1,500 miles from Miami, and it’s keeping Hezbollah operational after Israeli strikes degraded the group’s capabilities.
Russia is Very Worried
Russia just responded to American attacks on these ships with warnings of “far-reaching consequences”. Putin isn’t defending drug traffickers, and he certainly isn’t standing up for civilian rights against targeted military strikes. Russia has $4 billion in Venezuelan arms sales, military advisers on the ground, and oil infrastructure as collateral for regime loans.
Venezuela is Moscow’s foothold in the Western Hemisphere. And Moscow is almost out of runway in its invasion of Ukraine. Some intelligence analysts predict Russia is approaching state failure next year, bringing foreign lifelines and networks into focus. That is essential context for Trump’s latest warmongering:
“They’re not coming in by sea any more, so now we’ll have to start looking about the land because they’ll be forced to go by land,” he added in an apparent threat to strike Venezuela.
Consider that Venezuela is the cash cow of Russia’s ally Iran, directly feeding the Ukraine war. These strikes on boats don’t just disrupt drugs—they attack the financial pipeline keeping Hezbollah funded and Iran relevant despite sanctions.
Israeli Intel Directs American Missiles
The strikes’ intensity and illegality—bypassing law enforcement channels, refusing oversight, offensive operations in international waters—suggest Israeli intelligence being weaponized through US military force.
Israel has tracked Hezbollah’s Latin American networks since the 1994 AMIA bombing in Buenos Aires. The Trump administration’s designation of cartels as “foreign terrorist organizations” creates a framework allowing secret intelligence about Hezbollah financing to abruptly become targeting data for loud and proud American military strikes.
Hegseth can’t explain the intelligence because it would invite scrutiny of Israeli operational involvement in directing American force on foreign states. Secretary Rubio admitted the boats “could have been interdicted” through normal law enforcement. But interdiction means trials, evidence, due process, scrutiny.
This is 1960s assassination modeling dressed in 1970s drug war rhetoric, designed to destroy 1980s Iranian power and financial capabilities without current congressional authorization or public debate—in pursuit of immediate Israeli security interests.
The Looming Domestic Shadow
The framework being tested in Venezuelan waters transfers directly home. If the executive can designate “narco-terrorists” for extrajudicial killing based on secret intelligence about Iranian networks, the same framework applies to any group labeled “terrorists” domestically.
Trump has already designated Tren de Aragua as terrorists, invoked the Alien Enemies Act against migrants, deployed troops to cities, and effectively legalized racial profiling. The scaffolding is being built. The legal theories are being tested where oversight is minimal.
The Phoenix Program started in Vietnam and came home as COINTELPRO. The infrastructure of targeted killings based on secret criteria always expands beyond its stated parameters.
Senator Frank Church displays the CIA poison dart gun at a committee hearing with vice chairman John Tower on September 17, 1975 (Source: U.S. Capitol via Levin Center, photo by Henry Griffin)
That’s what the Church Committee documented 50 years ago when establishing why democratic oversight requires transparency.
“We have every authorization needed. These are designated as foreign terrorist organizations,” Hegseth said… Hegseth and President Donald Trump have not provided evidence for claims that the targeted boats were carrying drugs.
GOP “War on Drugs” Still Signals Race
Maybe these boats were carrying cocaine.
Maybe the people killed were traffickers.
But they’re being killed primarily because they’re part of a financial network helping Iran fund resistance to Israeli power. That means the US military has become the enforcement arm of an Israeli agenda.
Russia understands this.
Iran understands this.
The only people kept in the dark are Americans, told a yarn about drugs while their military establishes how the unitary executive can again kill anyone, anywhere, based on secret reasons only a Dick or Donald can know.
Richard Nixon 1971 presidential campaign button
That precedent won’t stay in international waters. It never does. We know this from the Nixon years. History rhymes even when it doesn’t repeat exactly.
A San Bruno police officer pulls over a Waymo robotaxi during a DUI checkpoint. The vehicle has just made an illegal U-turn—seemingly fleeing law enforcement. The officer peers into the driver’s seat and finds it empty. He contacts Waymo’s remote operators. They chat. The Waymo drives away.
But there’s nothing funny about what just happened, because… history. We are now witnessing the rebirth of corporate immunity for murder: vehicular violence at scale.
Mountain View Police stopped a driverless car in 2015 for being too slow. Google engineers responded that they had never read the traffic laws, so they couldn’t have known the car was breaking them. A year later the same car became stuck in a roundabout. Again, the best and brightest engineers at Google simply claimed ignorance of the law.
In Phoenix, a Waymo drives into oncoming traffic, runs a red light, and “FREAKS OUT” before pulling over. Police dispatch notes:
UNABLE TO ISSUE CITATION TO COMPUTER.
In San Francisco, a cyclist is “doored” by a Waymo passenger exiting into a bike lane. She’s thrown into the air and slams into a second Waymo that has also pulled into the bike lane. Brain and spine injuries. The passengers leave. There’s a “gap in accountability” because no driver remains at the scene.
In Los Angeles, multiple Waymos obsessively return to park in front of the same family’s house for hours, like stalkers. Different vehicles, same two spots, always on their property line. “The Waymo is home!” their 10-year-old daughter announces.
Another traps a passenger inside, driving him in circles while he begs customer service to stop the car. “I can’t get out. Has this been hacked?”
Two empty Waymos crash into each other in a Phoenix airport parking lot in broad daylight.
And now, starting July 2026, California will allow police to issue “notices of noncompliance” to autonomous vehicle companies. But here’s the catch: the law doesn’t specify what happens when a company receives these notices. No penalties. No enforcement mechanism. No accountability.
In 1866, London police posted notices about traffic lights with two modes:
CAUTION: “all persons in charge of vehicles and horses are warned to pass the crossing with care, and due regard for the safety of foot passengers”
STOP: “vehicles and horses shall be stopped on each side of the crossing to allow passage of persons on foot”
The signals were designed explicitly to stop vehicles for pedestrian safety. This was the foundational principle of traffic regulation.
Then American car manufacturers inverted it completely.
They invented “jaywalking”—a slur using “jay” (meaning rural fool or clown) to shame lower-class people for walking. They staged propaganda campaigns where clowns were repeatedly rammed by cars in public displays. They lobbied police to publicly humiliate pedestrians. They successfully privatized public streets, subordinating human life to vehicle flow.
The truth of the American auto industry is that inexpensive transit threatens racist policy. They want cars to remain a privilege ticket, which criminalizes being poor, where poor means not white. Source: StreetsBlog
Now we’re doing it again—but this time the vehicles have no drivers to cite, and the corporations claim they’re not “drivers” either.
Tesla stands out for a reason. This slide from 2021 predicted it would get much worse, and by 2025 there have been at least 59 confirmed deaths from their robots. Source: My ISACA slides 2021
Corporations ARE legal persons when it benefits them:
First Amendment rights (Citizens United)
Religious freedom claims
Contract enforcement
Property ownership
But corporations are NOT persons when it harms them:
Can’t be cited for traffic violations
No criminal liability for vehicle actions
No “driver” present to hold accountable
Software “bugs” treated as acts of God
This selective personhood is the perfect shield. When a Waymo breaks the law, nobody is responsible. When a Waymo injures someone, there’s a “gap in accountability.” When police try to enforce traffic laws, they’re told their “citation books don’t have a box for ‘robot.’”
Here’s what’s actually happening: Every time police encounter a Waymo violation, they’re documenting a software flaw that potentially affects the entire fleet.
When one Waymo illegally U-turns, thousands might have that flaw. When one Waymo can’t navigate a roundabout, thousands might get stuck. When one Waymo’s “Safe Exit system” doors a cyclist, thousands might injure people. When Waymos gather and honk, it’s a fleet-wide programming error.
These aren’t individual traffic violations. They’re bug reports for a commercial product deployed on public roads without adequate testing.
But unlike actual bug bounty programs where companies pay for vulnerability reports, police document dangerous behaviors and get… nothing. No enforcement power. No guarantee of fixes. No way to verify patches work. No accountability if the company ignores the problem.
The police are essentially providing free safety QA testing for a trillion-dollar corporation that has no legal obligation to act on their findings despite mounting deaths.
We’ve seen this exact playbook before.
A Stryker vehicle assigned to 2nd Squadron, 2nd Stryker Cavalry Regiment moves through an Iraqi police checkpoint in Al Rashid, Baghdad, Iraq, April 1, 2008. (U.S. Navy photo by Petty Officer 2nd Class Greg Pierot) (Released)
From 2007 to 2014, Baghdad had over 1,000 checkpoints where Palantir’s algorithms flagged Iraqis as suspicious based on the color of their hat or the car they drove. U.S. Military Intelligence officers said:
If you doubt Palantir, you’re probably right.
The system was so broken that Iraqis carried fake IDs and learned religious songs not their own just to survive daily commutes. Communities faced years of algorithmic targeting and harassment. Then ISIS emerged in 2014—recruiting heavily from the very populations that had endured years of being falsely flagged as threats.
Palantir’s revenue grew from $250 million to $1.5 billion during this period. A for-profit terror generation engine, or “self-licking ISIS-cream cone” as I’ve explained before.
The critical question military commanders asked:
Who has control over Palantir’s deadly “Life Save or End” buttons?
The answer: Not the civilians whose lives were being destroyed by false targeting.
Who controls the “Life Save or End” button when a Waymo encounters a cyclist? A pedestrian? Another vehicle?
Not the victims
Not the police (can’t cite, can’t compel fixes)
Not democratic oversight (internal company decisions)
Not regulatory agencies (toothless “notices”)
Only the corporation. Behind closed doors. With no legal obligation to explain their choices.
When a Tesla, Waymo or Palantir algorithmic agent of death “veers” into a bike lane, who decided that was acceptable risk? When it illegally stops in a bike lane and doors a cyclist, causing brain injury, who decided that “Safe Exit system” was ready for deployment? When it drives into oncoming traffic, who approved that routing algorithm?
We don’t know. We can’t know. The code is proprietary. The decision-making is opaque. And the law says we can’t hold anyone accountable.
In 2016, Elon Musk loudly promised Tesla would end all cyclist deaths, and publicly abused and mocked anyone who challenged him. Tesla vehicles nonetheless kept “veering” into bike lanes, and in 2018 one accelerated into and killed a man standing next to his bike.
Source: My MindTheSec slides 2021
Similarly in 2017, an ISIS-affiliated terrorist drove a truck down the Hudson River Bike Path, killing eight people. Federal investigators linked the terrorist to networks that Palantir’s algorithms had helped radicalize in Iraq. For some reason they didn’t link him to the white supremacist Twitter campaigns demanding pedestrians and cyclists be run over and killed.
This is the racist jaywalking playbook digitized: Police enforce against the vulnerable population, normalizing their elimination from public space—and training AI systems to see cyclists as violators to be punished with death rather than victims.
Musk now stockpiles what some call “Swasticars”—remotely controllable vehicles deployed in major cities, capable of receiving over-the-air updates that could alter their behavior fleet-wide, overnight, with zero public oversight.
Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.
If we don’t act, we’re building the legal infrastructure for algorithmic vehicular homicide with corporate immunity. Here’s what must happen:
Fleet-Wide Corporate Liability
When one autonomous vehicle commits a traffic violation due to software, the citation goes to the corporation multiplied by fleet size. If 1,000 vehicles have the dangerous flaw, that’s 1,000 citations at escalating penalty rates (see the sketch after this list).
Mandatory fleet grounding until fix is verified by independent auditors
Public disclosure of the flaw and the fix
Criminal liability for executives if patterns show willful negligence
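To make the arithmetic concrete, here is a minimal sketch of fleet-multiplied citations, assuming a hypothetical base fine, escalation rate, and fleet size (none of these figures come from any actual statute):

```python
# Hypothetical fleet-wide citation calculator: one software flaw means
# one citation per affected vehicle, at escalating penalty rates.
# All figures are illustrative assumptions, not statutory values.

def fleet_penalty(fleet_size: int, base_fine: float = 500.0,
                  escalation: float = 1.5, prior_violations: int = 0) -> float:
    """Total penalty when a software flaw affects an entire fleet.

    Each prior documented violation of the same flaw class escalates
    the per-vehicle fine, so ignoring bug reports gets expensive fast.
    """
    per_vehicle = base_fine * (escalation ** prior_violations)
    return fleet_size * per_vehicle

# One illegal U-turn traced to a routing bug shared by 1,000 vehicles:
print(fleet_penalty(1_000))                      # 500,000.0 on first offense
print(fleet_penalty(1_000, prior_violations=2))  # 1,125,000.0 after two ignored reports
```

The escalation term is the whole point: ignoring documented patterns should get more expensive, not cheaper.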
Public Bug Bounty System
Every police encounter with an autonomous vehicle violation must:
Trigger mandatory investigation within 48 hours
Be logged in a public federal database
Require company response explaining root cause and fix
Include independent verification that fix works
Result in financial penalties paid to police departments for their QA work
If companies fail to fix documented patterns within 90 days, their permits are suspended until compliance.
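What might one record in that public federal database look like? A rough sketch, with hypothetical field names that mirror the list above:

```python
# Sketch of a public defect record for autonomous vehicle violations.
# Field names and deadlines are hypothetical, mirroring the list above.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AVViolationRecord:
    record_id: str
    company: str
    vehicle_id: str
    observed_behavior: str           # e.g. "illegal U-turn at DUI checkpoint"
    reported_on: date
    root_cause: str | None = None    # company must fill this in
    fix_verified: bool = False       # independent verification, not self-attestation
    penalty_paid_to: str | None = None  # the police department that did the QA work

    def investigation_deadline(self) -> date:
        return self.reported_on + timedelta(hours=48)

    def permit_suspension_date(self) -> date:
        # Unfixed pattern after 90 days => permits suspended until compliance.
        return self.reported_on + timedelta(days=90)
```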
Restore the 1866 Principle
Source: My security engineering training slides 2018
Traffic rules exist to stop vehicles for public safety, not to give vehicles—or their corporate owners—immunity from accountability.
The law must state explicitly:
Corporations deploying autonomous vehicles are legally responsible for those vehicles’ actions
“No human driver” is not a defense against criminal or civil liability
Code must be auditable by regulators and available for discovery in injury cases
Vehicles that cannot safely stop for pedestrians/cyclists cannot be deployed
Human life takes precedence over vehicle throughput, period
When Waymo’s algorithms decide who lives and who gets “veered” (algorithmic death), who controls that button?
When Tesla’s systems target cyclists while police ticket the victims, who controls that button?
When corporations claim they’re persons for speech rights but not persons for traffic crimes, who controls that button?
Right now, the answer is: Nobody we elected. Nobody we can hold accountable. Nobody who faces consequences for being wrong.
Car manufacturers spent the extremist “America First” 1920s inventing the racist crime of “jaywalking” to privatize public streets and criminalize pedestrians. It worked so well that by 2017 barely anyone blinked when a North Dakota legislator proposed zero liability for drivers who kill people with cars. By 2021, Orange County deputies shot a Black man to death while arguing whether he had simply walked on a road outside painted lines.
Now we’re handing that same power to algorithms—except this time there’s no driver to arrest, no corporation to cite, and no legal framework to stop fleet-wide deployment of dangerous systems.
Palantir taught us what happens when unaccountable algorithms target populations: you create the enemies you claim to fight, and profit from the violence.
Are we really going to let that same model loose on American streets?
Because when police say “our citation books don’t have a box for robot,” what they’re really saying is: We’ve lost the power to protect you from corporate violence.
That’s not a joke. That’s murder by legal design.
The evidence is clear. The pattern is documented. The choice is ours: restore accountability now, or watch autonomous vehicles follow the same elite playbook that turned jaywalking into a tool of intentional racist violence and Palantir checkpoints into an ISIS recruiting campaign used to justify white nationalism. See the problem, and the connection between them?
Who controls the button? Right now, nobody you can vote for, sue, or arrest. That has to change.
Here’s how William Blake warned us of algorithmic dangers way back in 1794. His “London” poem basically indicts institutions of church and palace for being complicit in producing systemic widespread suffering:
I wander thro’ each charter’d street,
Near where the charter’d Thames does flow,
And mark in every face I meet
Marks of weakness, marks of woe.
In every cry of every Man,
In every Infants cry of fear,
In every voice: in every ban,
The mind-forg’d manacles I hear
Those “mind-forg’d manacles” mean algorithmic oppression by systems of control, which appear external but are human-created. A “charter’d street” was privatized public space, precedent for using power to enforce status-based criminality, such as Palantir checkpoints and jaywalking laws.
EU Energy Revolution is a National Security Upgrade
June 2025 marked a quiet turning point: solar became the EU’s single largest electricity source for the first time, generating 22% of the grid’s power. Not the largest renewable—the largest source, period.
Nuclear came in second at 21.6%—a position it’s going to have to get used to. With 350 GW installed and another 60+ GW added annually, solar has crossed from “alternative” energy to foundational infrastructure.
Slovakia is in the best position to accelerate this further. The country currently sits at 22.1% renewable generation—among the EU’s lowest. But with rapid solar deployment options now on the table, Slovakia could leapfrog directly to the distributed generation model that’s reshaping Europe’s grid.
This transition is strategically sound: solar eliminates fuel logistics, severs dependency on energy imports, and distributes generation across millions of sites that can’t be targeted kinetically. No one will miss worrying whether Russian billionaires turn off pipelines out of emotion, US billionaires blow up pipelines out of neglect, or undersea infrastructure gets undermined.
At the same time we would be remiss to ignore how the speed of technology adoption has outpaced security oversight (as usual). The gaps are creating risks, and opportunities for controls, that most existing frameworks weren’t designed to address.
What Changes in Transition
The shift to distributed solar fundamentally improves energy security—but in ways that require rethinking safety of power infrastructure.
Physical resilience through distribution: You can bomb a gas plant or a pipeline. You can’t meaningfully attack millions of distributed panels at scale. Solar is a genuine upgrade. Wars destroy centralized infrastructure; distributed generation systems simply reroute and carry on in scenarios that would cripple traditional grids.
No fuel supply chain: Once installed, solar has zero operational dependencies. No rail cars to intercept, no tankers to blockade, no refineries to sabotage. The strategic autonomy is real. No mines to send explosive drones into and shut down permanently, burning all the workers to death with a horrific fireball—you know, that famously clean coal dust Trump told the UN about. But I digress…
Faster recovery: A destroyed solar installation can be replaced in days or weeks. Rebuilding power plants takes many years. At scale, this means better grid resilience even if individual assets are compromised. Distributed resilience works under pressure—just look at Tokyo under occupation in 1948, which deployed hundreds of electric cars charging from hydro when the city had no fuel.
Nissan’s car-making origin story is this Tama electric vehicle from 1947, with rapid “bomb bay door” battery replacement on both sides.
These advantages are why the transition makes sense. But solar also introduced something new: millions of internet-connected control points with unclear security ownership.
The New Architecture Exposed
The computing analogy is familiar: mainframes had physical security and limited access. PCs introduced millions of endpoints requiring patches and antivirus. Mobile phones added cellular networks and location tracking. Each transition improved capability while requiring new security paradigms.
Solar’s transition is from physically secured, professionally operated generation to IoT devices managed by homeowners, monitored by installers, and remotely accessible by manufacturers.
The SPE report (SPE 2025 Solutions for PV Cyber Risks to Grid Stability) documents the concentration: thirteen manufacturers maintain remote access to over 5 GW each. Seven control more than 10 GW. Huawei alone shipped 114 GW to Europe between 2015 and 2023, with estimated remote access to 70% of that installed base. Chinese firms overall supplied 78% of global inverter capacity in 2023.
Individually, a compromised home solar system means nothing. Collectively, manufacturers have remote access to capacity equivalent to multiple large power plants. The report’s grid simulations found that coordinating just 3 GW of inverters to manipulate voltage through reactive power switching could trigger protective relays on nearby generators—potentially cascading into broader outages.
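Back-of-envelope, to make that scale concrete. The 3 GW threshold and the Huawei figures are from the report as cited above; the 5 kW average residential system size is my assumption:

```python
# Back-of-envelope: how much distributed capacity single actors can reach.
# The 3 GW threshold and Huawei figures are per the SPE report cited above;
# the 5 kW average residential inverter size is an assumption.

AVG_HOME_INVERTER_KW = 5          # assumed typical rooftop system
relay_trip_threshold_gw = 3       # SPE grid-simulation threshold

homes = relay_trip_threshold_gw * 1e6 / AVG_HOME_INVERTER_KW
print(f"{homes:,.0f} home systems reach the 3 GW threshold")  # 600,000

huawei_shipped_gw = 114           # shipped to Europe, 2015 to 2023
huawei_remote_share = 0.70        # estimated remote-access share
print(f"~{huawei_shipped_gw * huawei_remote_share:.0f} GW under one vendor's remote reach")  # ~80 GW
```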
This mirrors early botnet dynamics: individual compromised PCs were nuisances until aggregated into DDoS networks capable of taking down critical services.
“No Operator” Problems
Traditional power infrastructure has clear security ownership. A nuclear plant has a security team, regulatory oversight, 24/7 monitoring. A rooftop solar installation has… a homeowner who set it up once and moved on.
Current EU cybersecurity frameworks (NIS2, the Cyber Resilience Act, Network Code on Cybersecurity) assume there’s an entity responsible for critical infrastructure security. For distributed solar, that entity often doesn’t exist legally. The installer finished the job and moved on. The manufacturer is headquartered abroad. The homeowner thinks it’s appliance-level technology that someone else is responsible for, which would be fine if their Chinese-made-and-controlled toaster couldn’t accidentally destabilize the entire German power grid, but here we are.
During World War II, W. Edwards Deming was a member of the five-man Emergency Technical Committee. He worked with H.F. Dodge, A.G. Ashcroft, Leslie E. Simon, R.E. Wareham, and John Gaillard on the compilation of the American War Standards (American Standards Association Z1.1–3, published in 1942) and taught wartime production. His statistical methods were widely applied during World War II and after (foundational to Japanese auto manufacturing).
The SPE report further states that only 1 of 5 tested inverters supported basic security logging. Default passwords are common. Firmware updates are irregular. Network segmentation is rare. This isn’t malicious—it’s what happens when residential-scale deployment moves faster than security standards.
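The fixes at the device level are equally unexotic. A sketch of the kind of baseline checks a certification regime could require before grid connection, with hypothetical setting names:

```python
# Sketch: baseline security checks a certification regime could require
# of an inverter before grid connection. Setting names are hypothetical.

DEFAULT_PASSWORDS = {"admin", "password", "12345"}

def baseline_failures(config: dict) -> list[str]:
    """Return a list of baseline failures (empty list means pass)."""
    failures = []
    if config.get("password") in DEFAULT_PASSWORDS:
        failures.append("default password still set")
    if not config.get("security_logging", False):
        failures.append("security logging disabled")   # 4 of 5 tested inverters
    if not config.get("signed_firmware_only", False):
        failures.append("accepts unsigned firmware")
    if not config.get("network_segmented", False):
        failures.append("exposed on flat home network")
    return failures

print(baseline_failures({"password": "admin"}))  # fails all four checks
```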
New Model, New Requirements. Ambiguity means neglect.
The technology doesn’t need to slow. The security framework needs to catch up. This is familiar territory for any director of security with a few years of direction under their belt.
Clear responsibility assignment: Either manufacturers are liable for their installed base security (like automotive recalls), or grid operators assume responsibility, or third-party security operators emerge as a market.
Communication architecture that matches the threat model: Germany’s approach with smart meter gateways is instructive—critical control functions (start/stop, power setpoint changes) route through regulated infrastructure. Monitoring and maintenance can remain direct. This applies standard IT security principles (network segmentation, controlled access) to distributed generation; see the sketch after this list.
Supply chain transparency without protectionism: The issue isn’t where hardware is manufactured—it’s that concentration creates leverage, and remote access by entities outside regulatory jurisdiction creates enforcement gaps. Solutions range from Lithuania’s 2025 law (requiring EU-based intermediaries for systems >100 kW) to hardware/software separation (devices source globally, control software must be auditable and locally hosted).
Standards reflecting actual deployment: Current inverter security standards treat them like industrial control systems. But a device installed by a contractor, connected to home Wi-Fi, and managed via consumer apps isn’t an industrial system. It needs consumer electronics-level security: automatic updates, secure defaults, encrypted communications, no exposed credentials.
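Here is a minimal sketch of the German-style split described under “Communication architecture” above: anything that can change grid behavior routes through regulated infrastructure, while telemetry may stay direct. The command and channel names are illustrative assumptions, not any standard’s actual vocabulary:

```python
# Sketch: segment inverter traffic by function, per the smart-meter-gateway
# model described above. Command and channel names are illustrative.

CONTROL_COMMANDS = {"start", "stop", "set_power_setpoint", "firmware_update"}
TELEMETRY_COMMANDS = {"read_yield", "read_temperature", "health_ping"}

def route(command: str) -> str:
    """Return which channel a command is allowed to use."""
    if command in CONTROL_COMMANDS:
        # Anything that can change grid behavior goes through regulated,
        # audited infrastructure -- never direct to a vendor cloud.
        return "regulated_gateway"
    if command in TELEMETRY_COMMANDS:
        # Monitoring and maintenance may remain a direct vendor channel.
        return "vendor_direct"
    raise ValueError(f"unknown command rejected: {command!r}")

assert route("set_power_setpoint") == "regulated_gateway"
assert route("read_yield") == "vendor_direct"
```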
State-run Opportunity and Patterns
Rapid deployment in lagging states doesn’t have to repeat the security debt accumulated elsewhere. Such countries could mandate security baselines upfront: require certified communication gateways for grid-connected systems, establish clear responsibility chains, and ensure data localization for operational telemetry.
This isn’t exotic technology. It’s applying lessons from mobile computing and IoT security to distributed generation. The components exist—Hardware Security Modules, Trusted Execution Environments, regulated intermediaries, cryptographic firmware signing. What’s missing is regulatory clarity and enforcement.
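To show how unexotic: cryptographic firmware signing fits in a dozen lines with the Python `cryptography` package and Ed25519. Key handling is simplified here for illustration; in practice the signing key lives in an HSM and the public key ships in the device:

```python
# Sketch: cryptographic firmware signing for an inverter, using Ed25519.
# Requires the 'cryptography' package. Key storage is simplified; in
# practice the private key lives in an HSM and the public key is burned
# into the device at manufacture.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Manufacturer side: sign the firmware image.
signing_key = Ed25519PrivateKey.generate()
firmware = b"...inverter firmware image bytes..."
signature = signing_key.sign(firmware)

# Device side: verify before flashing; refuse anything unsigned.
verify_key = signing_key.public_key()
try:
    verify_key.verify(signature, firmware)
    print("signature valid: safe to flash")
except InvalidSignature:
    print("rejected: firmware not signed by manufacturer")
```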
Every infrastructure revolution creates security debt paid down over time. Early automobiles had no seatbelts. Early internet had no encryption. Early mobile phones had no app sandboxing.
Solar is mid-transition. Capability deployment happened fast (Europe added 60+ GW in 2024 alone). Security retrofit is lagging. That’s normal but fixable.
The unique aspect: solar’s security model should be superior. Distributed systems are inherently more resilient. But only if distribution is real. When remote access reconcentrates control with manufacturers, you’ve recreated centralized vulnerability while losing traditional plants’ physical security and professional operation.
Europe’s solar buildout is strategically sound. The cybersecurity gap is solvable with existing technology. What’s missing is regulatory clarity on responsibility and baseline security requirements for distributed generation at scale.
Any future rapid deployment can be a model—showing that speed and security aren’t trade-offs when architecture is right from the start. Or it could simply balance out tech debts and provide resilience while others catch up.
The tech works, for national security. The economics work, for national security. The climate math even works, for national security. Now the security model also needs to catch up and work… for national security.
Unitree robots in the dog house
Urinary poor password hacked
Unmarking poo-lice territory
The news story today about a police robot is really a story about the economics of hardware safety, and why the lessons of WWII are so blindingly important to modern robotics.
Picture this: Police deploy a $16,000 Unitree robot into an armed siege (so they don’t have to risk sending any empathetic humans to deescalate instead). The robot’s tough titanium frame can withstand bullets, its sharp sensors can see through walls, and its AI can navigate complex obstacles like dead bodies autonomously. Then a teenager with a smartphone intervenes and takes complete control of it in a few minutes.
Are we still blowing a kid’s whistle into payphones for free calls or what?
This economic reality in asymmetric conflict reveals a fundamental dysfunction in how the robotics industry approaches risks. The embarrassing UniPwn exploit against Unitree robots has exposed authentication that’s literally the word “unitree,” hardcoded encryption keys identical across all devices, and a complete absence of input validation.
I’ll say it again.
“Researchers” found the word unitree would bypass the Unitree robot security with minimal effort. We shouldn’t call that research. It’s like saying scientists have discovered the key you left in your front door opens it. Zero input validation means…
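Roughly, the difference looks like this. To be clear, this is not Unitree’s actual code, just a sketch of the flaw class as publicly reported, next to even a minimal alternative:

```python
# Illustration of the reported flaw class -- NOT Unitree's actual code.

# Pattern A: the entire "authentication" is one hardcoded string,
# identical across every device ever shipped.
def handshake_broken(packet: bytes) -> bool:
    return packet.startswith(b"unitree")

# Pattern B: a minimal improvement -- per-device secrets and bounded,
# authenticated input.
import hmac

def handshake_minimal(packet: bytes, device_secret: bytes) -> bool:
    if not (33 <= len(packet) <= 512):   # reject absurd input sizes
        return False
    tag, body = packet[:32], packet[32:]
    expected = hmac.new(device_secret, body, "sha256").digest()
    return hmac.compare_digest(tag, expected)
```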
This is 1930s robot level bad.
For those unfamiliar with history, the Nazi V-1s are remembered mostly for their design flaws. Yet even Hitler’s dumb robots had better security than Unitree in 2025 – at least the V-1s couldn’t be hijacked mid-flight by shouting “vergeltungswaffe” on radio frequencies.
WWII Spitfire “tipping” the flawed Nazi V1 in flight, because ironically Hitler’s robots couldn’t properly calculate their axis
WWII military technology had more sophisticated operational security than modern robots. Think about how genuinely damning that is for the current robotics industry. Imagine a 1930s jet engine with a fundamentally better design than one today.
It is a symptom of hardware companies treating software vulnerabilities as an afterthought, creating expensive physical systems that can be compromised for free. Imagine going to the gym and finding a powerlifter who lacks basic mental strength. “Hey, can someone tell me if the big and heavy 45 disc is more or less work than this small and light 20 one,” a tanned muscular giant with perfect hair pleads, begging for help with his “Hegseth warrior ethos” workout routine.
The Onion reveals Pete’s tragicomedy status as the least capable or qualified military leader in history
French military planners spent billions pouring concrete for a man named Maginot, who dreamed up what would have worked better in WWI. His “impregnable” static defensive barrier was useless against radio-coordinated planes, trucks, and tanks using network effects to rapidly focus attacks somewhere else. The Germans needed only three days to prove that dynamic soft spots need as much attention as, or more than, the expensive static hard ones. Robotics companies are making the identical strategic error, pouring millions into unnecessary physical hardening while leaving giant squishy digital backdoors wide open.
Unitree’s titanium chassis development costs over $50,000, military-grade sensors run $10,000 per unit, advanced motors cost $5,000 each, and rigorous testing burns through hundreds of thousands in R&D. So fancy. Meanwhile, authentication was literally the fixed string “unitree,” encryption was copy-pasted from Stack Overflow, and input validation… doesn’t exist.
The Tesla robot stupidly barreled into disaster at 76 mph and bounced dramatically into the air, causing an estimated $22,000 in damage and cancelling the trip before they even left California. This is the same company that promised coast-to-coast autonomous driving by 2017 yet still can’t detect the most obvious and basic road debris. It was NOT an edge case failure. It was proof of Tesla flaws still being overlooked, despite extensive documentation of more than 50 deaths since the first ones in 2016.
ISACA 2019 Presentation
Robots being marketed for special police use have been disappointing similarly for over a decade, as I’ve spoken and written about many times. In 2016, a 300-pound Knightscope K5 ran over a 16-month-old toddler at Stanford Shopping Center, hitting the child’s head and driving over his leg before continuing its patrol. The robot “did not stop and kept moving forward” according to the boy’s mother. A year later, another Knightscope robot achieved internet fame by rolling itself into a fountain at Georgetown Waterfront, prompting one cynical expert’s observation: “We were promised flying cars, instead we got suicidal robots.”
That’s being generous, of course, as the robot couldn’t even see the cliff it was throwing itself off.
These incidents illuminate a critical historical insight into the economics of security: hardware companies systematically undervalue software engineering because their own mental models are flawed. Some engineers are so rooted in physical manufacturing they can’t see the threat models more appropriate to their work.
Traditional hardware development means you design a component once, manufacture it at scale, and ship it. Quality control means testing physical tolerances and materials science. If something breaks, you issue a recall. It’s bows and arrows, or swords and shields. Edge cases thus can be waved off because probability is discrete and calculated, like saying don’t bring a knife to a gun fight (e.g. Tesla says don’t let any water touch your vehicle, not even humidity, because they consider weather an edge case).
Software is fundamentally different economics. We’re talking information systems of strategy, infiltration and alterations to command and control. It’s constantly attacked by adversaries who adapt faster than any recall process. It must handle infinite edge cases injected without warning, that no physical testing regime can anticipate. It requires ongoing maintenance, updates, and security patches throughout its operational lifetime. Most importantly, software failures can propagate instantaneously across entire fleets through network effects, turning isolated incidents into rapid systemic disasters.
A laptop without software has risks, and is also known as a paperweight. A low bar for success means it can be scoped to low risk. A laptop running software, however, has exponentially more risks, as recorded and warned about during the birth of robotic security over 60 years ago. Where engineering outcomes are meant to be more useful, they need more sophisticated threat models.
The UniPwn vulnerability exemplifies all of this and the network multiplication effect. The exploit is “wormable” because infected robots would automatically compromise others in Bluetooth range. One compromised robot in a factory doesn’t just affect that unit; it spreads to every robot within wireless reach, which spreads to every robot within their reach. A single breach becomes a factory-wide infection within hours, shutting down production and causing millions in losses. This is the digital equivalent of the German breakthrough at Sedan—once the line is broken, everything behind it collapses.
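A toy model of the multiplication effect, treating robots as nodes and Bluetooth range as edges. The numbers are purely illustrative:

```python
# Toy model of wormable spread through Bluetooth proximity -- purely
# illustrative, showing how one breach becomes a fleet-wide infection.
from collections import deque

def spread(adjacency: dict[int, list[int]], patient_zero: int) -> list[set[int]]:
    """Return the cumulative set of infected robots after each hop."""
    infected = {patient_zero}
    frontier = deque([patient_zero])
    waves = [set(infected)]
    while frontier:
        next_frontier = deque()
        for robot in frontier:
            for neighbor in adjacency.get(robot, []):
                if neighbor not in infected:
                    infected.add(neighbor)
                    next_frontier.append(neighbor)
        if next_frontier:
            waves.append(set(infected))
        frontier = next_frontier
    return waves

# A small factory floor: each robot in BLE range of the next two.
floor = {i: [j for j in (i + 1, i + 2) if j < 20] for i in range(20)}
for hop, wave in enumerate(spread(floor, 0)):
    print(f"hop {hop}: {len(wave)} robots compromised")
```

No recall process moves at hop speed.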
And I have to point out that this has been well known and discussed in computer security for decades. In the late 1990s I personally was able to compromise critical infrastructure across five US states with trivial tests. And likewise in the 90s, I sent a single malformed ping packet to help discover all the BSD-based printers used by a company in Asia… and we watched as their entire supply chain went offline. Oops. Those were the kind of days we were meant to learn from, to prevent happening again, not some kind of insider secret.
Hardware companies still miss this apparently because they don’t study history and then they think in terms of isolated failures rather than systemic vulnerabilities. A mechanical component fails gradually and affects only that specific unit. A software vulnerability fails catastrophically and affects every identical system simultaneously. The economic models that work for physical engineering through redundancy, gradual degradation, and localized failures become liabilities in software security.
Target values of the robots in this latest story range from $16,000 to $150,000. That’s absurd when the attack cost is effectively zero: grab any Bluetooth device and send “unitree”. Damage potential reaches millions per incident through production shutdowns, data theft, and cascade failures.
Proper defense at the start of engineering would cost a few hundred dollars per robot for cryptographic hardware and secure development practices. Unitree could have prevented this vulnerability for less than an executive dinner. Now it’s going to be quite a bit more money to go back and clean up.
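The economics in one place. The robot values and the attack cost come from the story above; the per-robot defense figure and the incident cost are my rough assumptions:

```python
# The asymmetry in plain numbers. Robot values and attack cost are from
# the story above; defense and incident figures are rough assumptions.

lo, hi = 16_000, 150_000      # target value range per robot
attack_cost = 0               # any Bluetooth device plus the word "unitree"
defense_per_robot = 300       # assumed: crypto hardware + secure development
incident_cost = 1_000_000     # assumed: one fleet-wide production shutdown

fleet = 1_000
print(f"fleet at risk:        ${lo * fleet:,} to ${hi * fleet:,}")
print(f"defend it up front:   ${defense_per_robot * fleet:,}")   # $300,000
print(f"one incident instead: ${incident_cost:,}")
print(f"attacker's budget:    ${attack_cost:,}")
```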
The perverse market incentive in security is that it remains invisible until it spectacularly fails. Hardware metrics will dominate purchasing decisions by focusing management on speed, strength, battery life, etc. while software quality is dumped onto customers who lack technical expertise to evaluate it in downscoped/compressed sales cycles. Competition then rewards shipping fast crap over shipping secure quality because defects manifest only after contracts are signed, under adversarial conditions kept out of product demonstrations.
The real economic damage of this loophole extends beyond immediate exposure of the vendor. When the police robot gets compromised mid-operation, the costs cascade through blown operations, leaked intelligence, destroyed public trust, legal liability, and potential cancellation of entire robotics programs, not to mention potential fatalities. The explosive damage could slow robotics adoption across law enforcement, creating industry-wide consequences from a single preventable vulnerability. Imagine also if the flaws had been sold secretly, instead of disclosed to the public.
It’s Stanley Kubrick’s HAL 9000 story all over again: sure it could read lips but the most advanced artificial intelligence in cinema was defeated by a guy pulling out its circuit boards with a… screwdriver. The simplest attacks threaten the most sophisticated robots.
My BSidesLV 2011 presentation on cloud security concepts for “big data” foundational to safe intelligence gathering and processing
Hardware companies need to internalize that in networked systems the security of the communications logic isn’t a feature. It’s the foundation of the networking. Does any bridge’s hardware matter if a chicken can’t safely cross to the other side?
All other engineering rests upon the soft logic working without catastrophic soft failure that renders hardware useless. The most sophisticated mechanical engineering becomes worthless where attackers can take control via trivial thoughtless exploits.
The robotics revolution is being built by companies that aren’t being intelligent enough to predict their own future by studying their obvious past. Until the market properly prices security risk through insurance requirements, procurement standards, liability frameworks, and certification programs, customers will continue paying premium prices for robots that will be defeated for free. The choice is stark: fix the software economics now, or watch billion-dollar robot deployments self-destruct.
And now this…
2014-2017: Multiple researchers document ROS (Robot Operating System) vulnerabilities affecting thousands of industrial and research robots
2019-2021: Multiple disclosure attempts for Pepper/NAO vulnerabilities ignored by SoftBank
2020: Alias Robotics becomes CVE Numbering Authority for robot vulnerabilities
2021: SoftBank discontinues Pepper production with vulnerabilities still unpatched
2022: DarkNavy team reports undisclosed Unitree vulnerabilities at GeekPwn conference
2025: CVE-2025-2894 backdoor discovered in Unitree Go1 series robots
2025: UniPwn exploit targets current Unitree G1/H1 humanoids with wormable BLE vulnerability
2025: CVE-2025-60250 and CVE-2025-60251 assigned to UniPwn vulnerabilities
2025: UniPwn claims to be *cough* “first major public exploit of commercial humanoid platform” *cough* *cough*
2025: Academic paper “Cybersecurity AI: Humanoid Robots as Attack Vectors” documents UniPwn findings
Shout out to all those hackers who haven’t disclosed dumb software flaws in modern robots because… fear of police deploying robots on the wrong party (them).