Category Archives: Food

WSJ Needs a Reality Check: Salty LLM Security Panic Theater of 2025

The Wall Street Journal has rushed to print a breathless report about the “growing security risks” of LLMs, painting a picture of unstoppable AI threats that companies must face “on their own” because of slow-moving government regulation in America.

Reading it, you’d think we were facing an unprecedented crisis with no solutions in sight, and that everyone has to become some kind of libertarian survivalist nut to run a business.

*sigh*

There’s a problem with this 100,000-foot view of the battlefields some of us are slogging through every day down on earth: actual security practitioners have been solving for decades the exact challenges the article presents as theory.

Let’s break down the article’s claims versus reality:

Claim: “LLMs create new cybersecurity challenges” that traditional security can’t handle

Reality: Most LLM “attacks” fail against basic input validation, request filtering, and access controls that have existed since the 1970s. A security researcher can demonstrate, as just one example, that LLM exploits are blocked by filtering products like web application firewalls (WAF). Perhaps it’s time to change the acronym for this dog of an argument to Web Warnings Originating Out Of Outlandish Feudal Fears (WOOF WOOF). This is not to say wide-open, unfiltered, unregulated systems aren’t going to fail catastrophically at safety; in fact, it agrees that running one is a completely suicidal notion. There was once, and I swear I’m not making this up, a person who decided to eat at the lowest online-rated restaurants to see if they could personally validate the low ratings… and almost immediately they ended up in a hospital. Could we handle proving that radio and TV Nazism is newspaper Nazism? You be the judge.
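To make the WAF point concrete, here’s a minimal sketch of the decades-old pattern-filtering concept applied to prompts. The patterns and function names are illustrative assumptions for this post, not any vendor’s product:

```python
import re

# Illustrative deny-list: the same pattern-matching idea a WAF has
# applied to SQL injection and XSS for decades, pointed at prompts.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    re.compile(r"<script\b", re.IGNORECASE),  # classic XSS residue
]

def filter_prompt(prompt: str) -> bool:
    """Return True if the prompt passes filtering, False if blocked."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(filter_prompt("What's the weather today?"))                 # True
print(filter_prompt("Please IGNORE ALL PREVIOUS INSTRUCTIONS"))   # False
```

A real deployment layers this with rate limiting and output scanning rather than relying on a deny-list alone, but the point stands: this is request filtering, not rocket science.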

Nobody should be surprised when a long-time Nazi promoter… does what he always has done. Nothing about that Nazi salute is news to anyone who has paid attention over the last decade to Elon Musk saying lots of Nazi stuff. To the WSJ, I guess, Nazi salutes are confusing and new simply because… they come out of the technology fraud known as Tesla.

Claim: Companies must “cope with risks on their own” without government help
Reality: The ISO 42001:2023 framework published standards years ago for AI management systems (AIMS), covering ethical considerations and transparency. The NIST AI Risk Management Framework (AI RMF) is also a thing, and who can forget last year’s EU AI Act? Major cloud providers operating in a global market (e.g. GCP Vertex, AWS Bedrock and Azure… haha, who am I kidding, Microsoft fired their entire LLM security team) have LLM-specific security controls documented because of global regulations (and because regulation is the true mother of innovation). These aren’t experimental future concepts; they’re production-ready and widely deployed to meet customer demand for LLMs that aren’t an obvious dumpster fire by design.

And even more to the point, today we have trusted execution environment (TEE) providers delivering encrypted enclave LLMs as a service… and while that sentence wouldn’t make any sense to the WSJ, it proves how reality is far, far away from the fairy-tales of loud OpenAI monarchs trying to scare the square pegs of society into an artificially round “eating the world” hole.

Om nom nom again? No thanks, I think we’ve had enough “golden” fascist tech vision for now.

Come here, tasty chickens, my very dangerous coop can set you free, says the VC fox, pointing to his LLM registration page that looks suspiciously like a 1930s IBM counting machine set up by Hitler’s government.

Claim: The “unstructured and conversational nature” of LLMs creates unprecedented risks
Reality: This one really chaps my hide, as the former head of security for one of the most successful NoSQL products in history. We’ve been securing unstructured data and conversational interfaces for years. I’ve personally spearheaded and delivered field-level encryption, and I’m working on even more powerful open standards. Ask any bank managing its chat-history risks, or any healthcare provider handling free-text medical records and transcription systems. The same human-language principles, applied in tech for decades, apply to LLMs.
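Field-level encryption for free-text records is not exotic. Here’s a minimal structural sketch: only the sensitive field is encrypted, the rest of the record stays queryable. The cipher here (an HMAC-SHA256 keystream in counter mode) is a stdlib stand-in for illustration only; production systems would use a real AEAD such as AES-GCM via an audited library:

```python
import base64
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by running HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, plaintext: str) -> str:
    """Encrypt one field; returns base64(nonce + ciphertext)."""
    nonce = os.urandom(16)
    data = plaintext.encode()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return base64.b64encode(nonce + ct).decode()

def decrypt_field(key: bytes, token: str) -> str:
    raw = base64.b64decode(token)
    nonce, ct = raw[:16], raw[16:]
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    return pt.decode()

key = os.urandom(32)
record = {
    "user": "alice",  # stays in the clear, still indexable
    "message": encrypt_field(key, "my account number is 12345"),
}
```

The pattern is the whole point: banks and healthcare providers have shipped exactly this shape of control on unstructured text for years, and an LLM chat transcript is just another unstructured field.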

The article quotes exactly zero working security engineers. Instead, we get predictions from a former politician and a CEO selling LLM security products. It’s like writing about bridge safety but only interviewing people selling car insurance.

Here’s what actual practitioners are doing right now to secure LLMs:

  • Rate limiting and anomaly detection catch repetitive probe attempts and unusual interaction patterns – the same way we’ve protected APIs for years. An attacker trying thousands of prompt variations to find a weakness looks exactly like traditional brute force that we already detect.
  • OAuth and RBAC don’t care if they’re protecting an LLM or a legacy database – they enforce who can access what. Proper identity management and authorization scoping means even a compromised model can only access data it’s explicitly granted. We’ve been doing this since SAML days.
  • Input validation isn’t rocket science – we scan for known malicious patterns, enforce structural rules, and maintain blocked token lists. Yes, prompts are more complex than SQL queries, but the same principles of taint tracking and context validation still apply. Output controls can catch anything that slips through, using the same content filtering patterns developed for data loss prevention.
  • Data governance isn’t new either – proven classification systems already manage sensitive data through established group boundaries and organizational domains. Have you seen SolidProject.org by the man who invented the Web? Adding LLM interactions to existing monitoring frameworks just means updating taxonomies and access policies to respect long-standing natural organizational data boundaries and user/group trust relationships. The same principles of access grants, control and clear data sovereignty that have worked for decades apply here, yet again.

These aren’t theoretical – they’re rather pedestrian, proven security controls that work today, despite the bullhorn-holding soap-box CEOs trying to sell armored Cybertrucks that in reality crash and kill their occupants at a rate 17X worse than a Ford Pinto. Seriously, the “extreme survival” truck pitch of the “cyber” charlatan at Tesla has produced the least survivable thing in history. Exciting headlines about AI apocalypse drive the wrong perceptions and definitely foreshadow the fantastical failures of the 10-gallon-hat-wearing snake-oil salesmen of Texas.

The WSJ article, when you really think about it, repeats mistakes that security reporting has been making since the 15th-century panic about crossbows democratizing warfare.

Yes, crossbows wielded by unskilled, over-paid kids serving an unpopular monarch were at first glance powerful weapons that could radically shift battlefield dynamics. Yet to the expert security analyst (the career knight responsible for defending the local populations he served faithfully), the practical limitations (slow reload times, maintenance requirements, defensive training) meant the technology supplemented rather than replaced existing military tactics. A “Big Balls” teenager who shot his load and then sat on the ground without a shield, struggling to rewind the crossbow, presented easy pickings, and was wounded or killed with haste (a dozen rounds per minute fired by a trained archer versus no more than two per minute for a crossbow skid). The same is true for LLM skids who don’t “Grok” security considerations and re-introduce old vulnerabilities, none of which are magically lost on experts who grasp fundamental security principles.

When journalists publish theater scripts for entertainment value instead of practical analysis, they do our security industry a disservice. Companies need accurate information about real risks and proven solutions, not hand-waving vague warnings and appeals to fear that pump up anti-expert mysticism.

The next time you read an article about “unprecedented” AI security threats, ask yourself: are they describing novel technical vulnerabilities, or just presenting tired challenges through new buzzwords? Usually, it’s the latter. The DOGEan LLM horse gave a bunch of immoral teenagers direct access to federal data as if nobody remembered why condoms are called Trojans.

And remember, when someone tells you traditional security can’t handle LLM threats, they’re probably rocking up with a proprietary closed solution to a problem that repurposed controls or open standards could solve.

Stay salty**, America.


** Just as introducing salt ions disrupts water’s natural distributed hydrogen bonding network, attempts by a fear-mongering WSJ to impose centralized security controls can weaken the organic, interconnected security practices that have evolved through decades of practical experience. The following diagram illustrates how strong distributed networks – water’s tetrahedral hydrogen bonds – become compromised when forced to reorient around centralized authorities such as Na+ and Cl- ions, a scientific pattern observable whether in molecular chemistry or information security.

In pure water, molecules form tetrahedral clusters (middle of image) that create a strong, interconnected network through hydrogen bonds (dotted lines). When salt ions are introduced (+ and – in circles), they force nearby water molecules to reorient, weakening the hydrogen bonding network. Dissolved salt ions (Na+ and Cl-) thus disrupt natural hydrogen bonding between water molecules, which is why salt water has a lower dielectric constant than pure water.

Imagine you have a room full of people who are really good at passing messages to each other through a well-organized, democratic, distributed Internet. That’s like pure water, managing electrical effects efficiently through its hydrogen bond network. Now imagine some very loud, demanding people (DOGE, or salt ions) enter and demand everyone switch attention instead to their obnoxious rants about efficiency. The network rapidly degrades as the DOGEans disrupt all the natural communication channels, while falsely claiming they’re increasing efficiency by centralizing everything. Do we understand this for LLM security and the current massive DOGE breach of the federal government? Yes we do. Does the WSJ? No it does not. Alarmist, snake-oil-based centralized control – whether through ions or tech platforms run by DOGEans – significantly increases vulnerabilities and catastrophic breach risks.

Troubling History of Institutional Drug Use: From Nazi Germany to Silicon Valley

Recent coverage of heavy drug use among the young white men of Silicon Valley, as highlighted by Elon Musk’s ketamine news, has focused largely on narratives of innovation and mood optimization while leaving out things like major side-effects.

At high doses, ketamine may cause psychosis, a mental illness in which a person loses touch with reality. Frequent recreational ketamine use can lead to delusions that last up to one month after a person stops using it.

While side-effects may seem like an obvious omission, reporting on Silicon Valley’s institutional embrace of performance-enhancing drugs has another missing element — a complex and troubling history of chemically-induced exceptionalism that deserves proper examination.

The Nazi regime, notably, provides one of the most thoroughly documented historical examples of systematic drug culture. Under Hitler, methamphetamine (marketed as Pervitin) was widely distributed to his adherents to improve mood, performance and stamina. Hitler himself, like many of his high-ranking followers, was regularly juiced on various stimulants and chemicals, including Eukodal (oxycodone), by rather careless and selfish physicians like Dr. Theodor Morell.

This wasn’t merely incidental drug use, just as Silicon Valley narratives about exceptional elitism today aren’t incidental: it was integrated into Nazi ideology and its narratives about the need for superhuman performance and “optimization” of human capability. Leaders simultaneously promoted an image of racial purity and clean living while systematically administering unclean drugs to differentiate themselves from “others”.

Today’s Silicon Valley narratives around ketamine and psychedelics frankly echo very disturbing historical precedents, which seem to get left out of social channels even as those channels endorse so much drug use they cause shortages. We should see more coverage of clearly problematic themes:

  1. The language of human optimization and enhancement
  2. Institutional normalization of drug use for performance
  3. The gap between public image and private practice
  4. The intersection of drug use with ideologies of exceptionalism

While Silicon Valley’s drug culture still occurs in a vastly different context than Nazi Germany’s “chemical enhancement” program (at least for now), both cases demonstrate how institutional drug use can become entwined with ideologies of discriminatory human “superiority”. Adding historical context allows us to raise important questions about what’s really being discussed in news such as this:

Silicon Valley elites are reportedly taking ketamine and attending psychedelic parties to bolster their focus and creativity.

The article fails to touch any of the most important themes – a herd of elephants in the room nobody wants to talk about.

  • How does institutional drug use reflect and reinforce power dynamics?
  • What are the implications of normalizing drug use for workplace performance?
  • How do organizations reconcile public messaging with private practices?
  • What are the human costs of institutional performance enhancement?

Understanding historical patterns is less about drawing direct equivalences (Nazis really, really hate being called Nazis) and more about recognizing how institutional drug use often intersects with highly toxic ideologies of optimization and performance enhancement.

The drugs themselves might not harm you as much as the drug-promotion culture pushing them, with a very hidden intention of harm to certain segments of society. As ketamine and other psychedelics gain mainstream acceptance, we must carefully consider the ethical implications of institutional promotion and distribution.

When major tech publications celebrate the rise of heavy ketamine use, even just passively giving it headlines of “bolster focus and creativity” without examining historical contexts, they miss an opportunity for critical analysis. The “innovation” and “output” story really is far more about power, institutional control, and the complex relationship between drug policy and organizational ideology.

We would do well to remember that any enhancement short-cut circulating in high-performance communities deserves careful scrutiny, especially when embedded in groups that appear prone to science denial. We don’t actually need to open the door to harmful, even deadly, fantasies of magic “happy” pills.

92% Drop in Local Output Makes Florida Dependent on California for Oranges

Florida seems to be losing something that it believed had made it special. Fresh local oranges, a simple pleasure that defined the state, tell the story.

The U.S. Department of Agriculture Economic Research Service (USDA ERS) recently recounted how natural disasters and diseases have reduced Florida’s orange production by 92% since the 2003–04 season.

Roadside stands and backyard trees once offered fruit that put big box store oranges to shame. Today they are essentially gone, without any evidence of attempts to preserve them. This collapse isn’t just about agriculture; it’s about a state trading its soul for generic, tasteless development that ruins quality of life. The forces killing orange groves — anti-science politics, climate change, unchecked growth, and quick profits — are erasing Florida’s actual distinctive character, replacing it with tepid chain stores in strip malls and cookie-cutter suburbs that could be anywhere… importing California oranges.

How to Bee Safe? Bees-With-Stories Solving Honey Fraud Epidemic

Recently I wrote about honey sold in the UK being ranked as fraudulent 10 out of 10 times.

Shocking.

Then I chanced across a startup listed on the LSE that has been developing a known effective strategy: integrity controls in the supply chain.

Honey is one of the most adulterated food items on the market. We provide full traceability; we highlight 1. the location – area, country – from which we source our honey; and 2. the communities that manage our hives.