JD Vance Abuses Security as False Pretext for Dismantling AI Safety

The relationship between security and safety lies at the heart of modern regulatory frameworks – which is precisely why JD Vance seems determined to tie it to a cross-shaped stake and set it ablaze in the name of only one particular race of people “winning the future.”

In 2017, Ross Anderson and colleagues issued what now reads as a distressingly precise warning about security being weaponized against safety. Today, with the subtlety of a sledgehammer to a thermostat, Vance demonstrates exactly what they feared.

We weren’t sanguine about the inclusion of public concerns in AI safety, as envisioned at Bletchley Park in 2023, the first AI summit. In Paris, we witnessed a shift from more expansive considerations of AI safety towards a far narrower AI for defense. […] AI safety was always vulnerable to being weaponized and, therefore, easily reduced to improving algorithmic performance and making a nation-state safe. AI safety has now become almost exclusively national AI security, both defensive (e.g. cybersecurity) and offensive (e.g. information warfare). It also has solidified into a panicked race for market dominance. Additionally, AI safety as AI security represents a gold rush for border tech surveillance companies, especially for the Canada-US border, the longest in the world. Soft laws and soft norms (in the case of defense) are insufficient to protect us from unaccountable companies.

Let’s examine the smell of Munich after Vance opened his fascist-loving mouth (yes, Munich – the irony could crush coal into diamonds). Vance declared:

The AI future is not going to be won by hand-wringing about safety.

Won? One wonders what precisely Vance thinks we’re “winning” by abandoning safety frameworks that prevent catastrophic disasters and massive debts. Perhaps he’s counting on the fact that millions of dead people can’t complain about societal impacts? That kind of winning? As if to say: hello, Munich, have you thought about the upside of your nation committing genocide?

Anderson and colleagues wrote all about this years ago, apparently in vain because Vance wasn’t listening:

Safety engineering is both about applications such as transport (where licensed drivers and pilots can be assumed to have known levels of competence), and also about consumer applications (where products should not harm children or the confused elderly). The same applies to security and privacy. The security engineer’s task is to enable normal users, and even vulnerable users, to enjoy reasonable protection against a capable motivated opponent.

Imagine that – protecting vulnerable users! The mere mention of safety frameworks, however, has apparently been making Vance very angry for many years. He really, really hates safety, according to his voting records.

  1. Transportation “Safety”
    • Opposed vehicle safety standards until politically expedient
    • Supported railway safety only after his constituents got a front-row seat to what happens without it (East Palestine disaster)
    • Pattern: Wait for disaster, then pretend to care while actually making things worse
  2. Gun “Safety”

    Senator Vance has consistently put gun industry profits over the safety of American communities… supports concealed carry reciprocity and extremist judges. He opposes universal background checks and has called red flag laws ‘a slippery slope.’

    Ah yes, the slippery slope – that evergreen refuge of those who can’t be bothered to make actual arguments. Next up: Seatbelts are a slippery slope to mandatory bubble wrap.

  3. Banking “Safety”

    His Bank Failure Prevention Act (S.2497) might as well be called the “Foxes run the hen-house Act”

    …converting state-chartered banks that have at least $100 billion in assets to nationally chartered financial institutions overseen by the Office of the Comptroller of the Currency, as opposed to the Federal Reserve or Federal Deposit Insurance Corporation.

    Because nothing says “preventing failure” like removing oversight!

As CNN’s Nick Paton Walsh observes with admirable restraint about the Munich disaster:

Vance uses half-truths to lecture a European audience well aware of the threat of authoritarian rule. It felt like a speech, if delivered on X.com, laden surely with community notes… Vance had clearly long prepared this tirade as a starting gun for the second Trump administration’s bid to refuel populism across Europe. The continent he spoke to is a little wiser now…

Vance practically shed a tear as he apparently begged on his knees for Europe to please, please allow another Hitler.

Seriously, in Munich of all places, Vance accused European safety regulators of “Soviet-style” authoritarianism while advocating that they set up an authoritarian state reminiscent of Nazism. He laid out precisely the kind of opaque, unaccountable security state that would make actual totalitarian bureaucrats demand to know how Vance could just rock up and pitch Europe on the very future that America swore it would never allow to happen again.

The immediate effect of Vance’s military-grade doublespeak, invoking the conditions for the next Hitler’s rise to power, is visible in the rushed retreat and rebrand of Britain’s AI institute, which dropped:

  • References to “societal impacts” (given impacts to society are just hand-wringing, right? Millions of dead means just a little more wringing than the last time.)
  • Concerns about “unequal outcomes” (inequality means the privileged cheats win even harder! Right, right? Who had the most unequal outcomes ever, I mean could it be Adolf?)
  • Public accountability measures (given accountability is just for all those losers who aren’t winning the future by cheating without any care for impacts.)

This perversion of social consciousness mirrors Vance’s broader legislative agenda to spin, and spin, and spin:

  • PRESERVE Online Speech Act (S.2314): Because nothing says “preserving” like the act of dismantling
  • Drive American Act (S.2962): Making America’s air quality great again, one coal-rolled asthma attack at a time
  • Bank Failure Prevention Act (S.2497): Preventing bank failure by making failure easier and faster than ever!

Vance’s Orwellian approach to political gamesmanship, like a pig who keeps dumping mud on all the farm animals, consistently replaces:

  • Systematic safety frameworks → Whatever is the opposite
  • Public accountability → Private profit by a few elites
  • Protection of vulnerable users → Protection of powerful donors
  • Long-term sustainability → Short-term power gains by a few elites

Anderson’s paper concluded with what now reads like a prophecy:

The task is to embed adversarial thinking into standards bodies, enforcement agencies, and testing facilities. To scope out the problem, we studied the history of safety and standards in various contexts: road transport, medical devices and electrotechnical equipment.

The security community must recognize this pattern before Vance turns “security” into such a mockery that we’ll have to invent a new word for actually keeping people safe. Anderson warned us. The least we can do is not pretend to be surprised when the fox eats the hens.

And if you’re reading this, Mr. Vance: Safety is what keeps your constituents alive long enough to vote. Though perhaps that’s not a selling point for you. After all, Tesla kills customers faster than it can grow production, and look at how Elon Musk’s completely corrupted stock price keeps going up as sales go down dramatically. Magic, isn’t it?

DOGE Cuts to FAA Make Aviation Gross-negligence Again

Breach of Federal Agency Degrades Aviation Safety to Deadly Pre-Regulation Era

Anyone who knows anything about safety rules can see the problem: when you gut federal oversight, bad things happen fast. The DOGEans swept into transportation safety agencies like a newly formed Khmer Rouge wrecking ball, and the results are playing out exactly how safety experts have always warned they would. First Tesla, then SpaceX, now aviation – it’s like watching the stability of democratic transit systems crash and burn, deregulated trip by deregulated trip.

Tesla Deaths Per Year

Source: TeslaDeaths.com

Tesla’s safety numbers are jaw-dropping – the data shows serious incidents climbing faster than their production line can churn out new cars. Remember when lawn darts were banned in the ’80s for killing three kids, or when the Ford Pinto was shut down in the ’70s for killing 27? Tesla’s battery fires and Autopilot crashes make the notorious engineering tragedies of history look like rounding errors.

The graph shows Tesla’s safety record hitting a nasty turning point in 2021. While their fleet grew steadily (blue line), crashes and deaths (orange and pink) shot up way faster – we’re talking a 5x spike in serious crashes by 2024. When your accident curve is climbing faster than your production line, something’s seriously wrong. Data pulled straight from NHTSA reports and tracked on TeslaDeaths.com.

So is it any wonder that the DOGEan effect on aviation has been an immediate shift to tragedy after tragedy?

Do people understand just how deadly Tesla and SpaceX are? I often think their saccharine, Zizian cult-like propaganda has blinded Americans to the reality of actual dangers.

Do you see the problem? DOGE is failing upwards so fast it is going to be even less successful than the public funding sinkhole tragedy of SpaceX.

Here’s the reality check: NASA spacecraft have been reaching Mars since Mariner 4 (launched 1964), sticking landings since Viking 1 in 1976 – we’re talking a string of successful touchdowns including groundbreaking rovers like Sojourner, Spirit, Opportunity, Curiosity, and of course Perseverance. I myself even worked at NASA on Mars terrestrial technology (security related, of course). Meanwhile, SpaceX blasted out big fat lies year after year, talking up their Mars game like a new Rhodesia colonized by 2022, yet they’re still stuck in Earth orbit, blowing up rockets faster than ever before in history. That’s like promising to win the Super Bowl when you haven’t even made it past Pop Warner football without breaking your leg every weekend. A decade of bigger and bigger Mars promises, zero Mars missions, just a bunch of toxic debris.

Let’s connect the obvious dots: a guy who promised Mars colonies by 2022 while his cars’ crash rates outpace production is now influencing national aviation safety. This isn’t about the magic fairy dust tech promises of Zizian fantasy anymore, which sank an old abandoned tugboat nobody cared about. DOGEan threats are international, about real public planes in real skies that already put hundreds of lives or more at risk.

There have been four major deadly U.S. aviation disasters so far this year. They happened within the span of two weeks in Washington D.C., Philadelphia, Alaska and Arizona. …the Washington D.C. crash on Jan. 29 that killed 67 people [blamed on FAA staffing changes] is the only fatal commercial aviation crash in 2025 and in the past 15 years.

The headlines tracking FAA changes read like a countdown clock nobody wanted to see ticking. And based on what we have seen from Tesla and SpaceX’s atrocious safety track records, that’s not the kind of countdown to deaths anyone should ignore.

Related:

  • https://www.theguardian.com/us-news/2025/jan/30/dc-plane-crash-faa-investigation
  • https://www.propublica.org/article/elon-musk-spacex-doge-faa-ast-regulation-spaceflight-trump
  • https://thehill.com/homenews/administration/5149006-elon-musk-spacex-faa/amp/
  • https://www.yahoo.com/news/musk-spacex-team-unleashed-faa-193638179.html
  • https://www.reuters.com/world/us/trump-administration-ordered-fully-comply-with-order-lifting-funding-freeze-2025-02-10/
  • https://www.newsweek.com/donald-trump-freeze-hiring-air-traffic-controllers-washington-crash-2023348
  • https://www.usnews.com/news/best-states/virginia/articles/2025-01-31/air-traffic-controllers-were-initially-offered-buyouts-and-told-to-consider-leaving-government
  • https://www.theguardian.com/us-news/2025/feb/17/trump-administration-faa-worker-firings

Regulators Push DeepSeek Towards Privacy-Preserving AI Apps: South Korea Joins Italy Innovation Ruling

Recent regulatory actions by South Korea and Italy regarding DeepSeek’s mobile app highlight an exciting opportunity for developers and organizations looking to leverage cutting-edge AI technology while innovating towards baseline data privacy standards.

Innovation Seeds From Flowering Regulation Headlines

While headlines suggest a wholesale ban of any technology found to have a flaw, the reality on the ground, as technology experts see it, is far more nuanced and promising.

Both South Korea’s Personal Information Protection Commission (PIPC) and Italy’s data protection authority have specifically targeted mobile app implementations that fail to respect privacy concerns. What they don’t emphasize enough for the common reader, and so I will explain here, is that the underlying AI technology is not their complaint.

This distinction is crucial because DeepSeek’s models remain open source and available for use with better user applications. These regulatory actions are essentially defining a better world – pushing the ecosystem toward proper implementation practices, particularly regarding data handling and privacy protection.

Local-First AI Applications Make Sense

This innovation push, thanks to the rules of engagement that create a rational market, is the perfect opportunity for developers to build privacy-preserving local applications that leverage DeepSeek’s powerful AI models while ensuring complete compliance with regional data protection laws.

Here’s why this DeepSeek news matters so much in the current landscape of AI services all around the world violating basic privacy rights:

  1. Data Sovereignty: By implementing local-first applications, organizations and individuals they serve will maintain complete control over their data, ensuring it never leaves their jurisdiction or infrastructure. Data should be centered around the owners, not pulled from them as an illegally acquired “twin” for secretive exploitation and harms.
  2. Regulatory Compliance: Purpose-built local applications can be designed from the ground up to comply with the basics of regional privacy requirements, from GDPR in Europe to PIPC guidelines in South Korea. Even Americans may find some protection in state or municipal privacy requirements to shield them from national-scale threats.
  3. Enhanced Security: Local deployment allows additional security layers and custom privacy controls unique to individual risks, above and beyond the baseline regulations, which might not be possible with third-party hosted solutions trying to serve everyone on a common basis.

Technical Implementation Considerations

Organizations, or even nation-states, looking to build privacy-preserving applications with DeepSeek models should immediately shift focus to:

  • Local model deployment and inference
  • Proper data anonymization and encryption
  • Configurable data retention policies
  • Transparent logging and auditing capabilities
  • User consent management
  • Clear data handling documentation
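The retention, logging, and pseudonymization items above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (RetentionPolicy, AuditLog, and LocalPromptStore are hypothetical, not part of any DeepSeek release) of how a local-first app keeps prompts on its own infrastructure while enforcing a configurable retention window and an append-only audit trail:

```python
import hashlib
import time
from dataclasses import dataclass, field

# Hypothetical sketch: these class names are illustrative only.

@dataclass
class RetentionPolicy:
    max_age_seconds: float  # configurable data retention window

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event: str, detail: str) -> None:
        # Transparent, append-only trail of every data-handling event
        self.entries.append({"ts": time.time(), "event": event, "detail": detail})

class LocalPromptStore:
    """Keeps prompts in local memory/disk only, enforcing retention."""

    def __init__(self, policy: RetentionPolicy, log: AuditLog):
        self.policy = policy
        self.log = log
        self._items = []  # (timestamp, pseudonym, prompt) triples

    def add(self, user_id: str, prompt: str) -> str:
        # Pseudonymize the user identifier before anything is stored
        pseudonym = hashlib.sha256(user_id.encode()).hexdigest()[:12]
        self._items.append((time.time(), pseudonym, prompt))
        self.log.record("store", f"prompt stored for {pseudonym}")
        return pseudonym

    def purge_expired(self) -> int:
        # Drop anything older than the policy window and log the purge
        cutoff = time.time() - self.policy.max_age_seconds
        before = len(self._items)
        self._items = [i for i in self._items if i[0] >= cutoff]
        purged = before - len(self._items)
        if purged:
            self.log.record("purge", f"{purged} expired prompts removed")
        return purged
```

A real deployment would persist the audit log and encrypt items at rest, but the shape – pseudonymize on ingest, purge on a policy clock, log every handling event – is the compliance core these regulators are asking for.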

The push toward local deployments by South Korean and Italian regulators appears even more prescient in light of recent security research demonstrating potential backdoor vulnerabilities in LLMs – vulnerabilities made easier to find thanks to open-source transparency.

While the regulatory focus has been on privacy preservation, local deployments offer another crucial advantage: the ability to implement robust security measures, validation processes, and monitoring systems. Organizations running their own implementations can not only ensure data privacy but also establish appropriate safeguards against potential embedded threats, making the regulatory “restrictions” look more like forward-thinking guidance for responsible AI deployment.
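One of those safeguards – validating model artifacts before loading them – can be sketched with plain checksum pinning. This is a generic illustration, not a DeepSeek-specific tool; any file path or pinned digest used with it would be supplied by the deployer:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights never sit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Refuse to load weights whose digest does not match the pinned value."""
    return sha256_of(path) == expected_digest
```

Publishing and pinning digests for open-source weights gives local deployments a tamper-evidence property that a hosted black-box API simply cannot offer.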

The Implications of DeepSeek Inside

This trend signals a historically consistent pattern of technology evolution hitting the AI industry: away from centralized extractive practices and towards individual rights-conscious implementations. Like the Magna Carta of so many centuries ago, privacy regulations continue to serve as catalysts for innovation in deployment strategies, whether in data storage (personal computers), transmission (internetworking) or processing (AI).

The actions by South Korean and Italian regulators are out front and pushing the whole world toward better practices in AI implementation. This creates opportunities everywhere for local technology companies to develop compliant AI solutions. Owners are emboldened to maintain control over their sensitive data, while developers can create innovative privacy-preserving applications to serve real needs. The open-source AI community thrives by being most respecting of privacy concerns.

As more and more people follow the decades-long trend from shared-compute to mobile personal devices (connected using open standards to shared-compute), localized privacy regulations serve to challenge centralized unaccountable surveillance. We can expect growing demand for privacy-preserving local AI applications, which presents a massive opportunity for developers and organizations to build privacy-first applications that leverage powerful open-source models locally. Competitive advantages come clearly through better privacy practices, because they foster sustainable trust with users through transparent data handling.

The future of AI that rises before us goes far beyond model capability towards responsible implementation (all engineering demands a code of ethics). The current regulatory environment is pushing us toward that future because markets fail and fall into criminal monopolization without common sense fairness enforcements (authorization based on inherited rights) that manifest as regulations. The sensible actions in South Korea and Italy to protect privacy in apps are guideposts toward proper AI implementation practices. By focusing on privacy-preserving local architectures, developers can continue to innovate with DeepSeek’s technology while ensuring human-centered outcomes that every state should and can now achieve.


Are you a developer interested in building privacy-preserving AI applications? Check out the Solid Project open standard of data wallet storage infrastructure.

DOGE Breach Expands to Social Security, Eliminating Staff Who Defend Data

Chilling words from the federal government as DOGEan troops expand their breach into even more sensitive data.

Nancy Altman, the president of the advocacy group Social Security Works, told CBS News they heard from SSA employees that officials from the Department of Government Efficiency, or DOGE, had been trying to get access to the Enterprise Data Warehouse — a centralized database that serves as the main hub for personal, sensitive information related to social security benefits such as beneficiary records and earnings data. Altman was told King had been resistant to giving DOGE officials access to the database.

“She was standing in the way and they moved her out of the way. They put someone in who presumably they thought would cooperate with them and give them the keys to all our personal data,” Altman said.

She was standing in the way? It’s literally her job to defend the Constitution. That’s not in the way, that is the way.