Category Archives: History

From Oct 7 Massacre to Social Media Manhunt: Iran’s New Digital Terror Front

A recent incident involving an IDF soldier in Brazil highlights how modern warfare’s greatest threats often come not from weapons, but from smartphones. An October 7 terror survivor became a target not on the battlefield, but through social media posts. He was forced to flee Brazil after Hezbollah operatives triggered a war crimes investigation against him.

The unnamed soldier was a survivor of the Hamas attack on the Nova festival in 2023, part of the terror organization’s massive onslaught on the south in which terrorists killed some 1,200 people, mostly civilians, and took 251 hostages, starting the ongoing war in Gaza.

More than 360 of the victims were murdered at the music festival.

The soldier survived the attack by running for many kilometers until he reached safety, narrowly dodging Hamas gunfire multiple times on the way.

He is now being investigated in Brazil under suspicion that he was involved “in the destruction of a residential building in the Gaza Strip while using explosives outside of combat” in November, the Brazilian Metrópoles news outlet reported.

Where’s Golda Meir when you need her? In 1972, she understood that military discipline meant total discipline – not just in combat, but in every aspect of operations. Today, that principle faces its greatest test in an arena she never had to consider: social media.

On the flip side, consider the modern history of war crimes investigations: those who hunted perpetrators in order to bring them to justice. The Wiesenthal Center’s methodology represented truth: meticulously documenting specific war crimes, gathering concrete evidence of atrocities, and pursuing the actual perpetrators who ordered mass murder. They worked to hold accountable those who had turned peaceful villages into killing fields while their neighbors pretended not to notice. In many cases that detailed work is far from over.

Just ask why so many Austrian towns to this day have hidden mass graves right near their nicest homes.

Today’s social media surveillance keyboard warriors are perverting that hard-fought noble mission into hasty and sloppy political warfare with dubious ethical foundations.

The Belgium-based Hind Rajab Foundation, formed in September 2023, is a perfect example. Its leader, Dyab Abou Jahjah, openly boasts of his Hezbollah training and has celebrated the October 7 slaughter of civilians as “resistance.” A Hezbollah-trained operative leading a “human rights” organization? That tells you everything about its mission. Its secretary, Kareim Hassoun, likewise praised the mass murder of festival-goers as how Palestinians should define “returning home.” This genocidal mentality is clearly no Wiesenthal Center pursuing real justice – it is allegedly a political front operation for extremists linked to terror groups who are weaponizing international legal systems against soldiers.

What then? In professional military forces there’s typically zero tolerance for social media use during deployment: No smartphones, no sensor sharing, no posts, no digital footprint. This isn’t arbitrary – it’s a critical security measure that protects both operational security and personnel safety.

A 1981 battle in the Seychelles offers an ironic historical lesson about erasing military traces. After their failed coup, white nationalist mercenaries backed by South Africa and tacitly supported by Reagan’s administration were officially “sentenced to death.” In reality, this theatrical sentencing was just leverage – millions in US taxpayer funds were then used to make the whole incident disappear. The mercenaries ended up lounging poolside, their operation’s failures buried under money and political dealmaking.

Mercenaries hired, with Ronald Reagan’s tacit backing, to overthrow an Indian Ocean island government were quickly captured and officially sentenced to death, which in reality meant lounging on a tropical beach thanks to U.S. taxpayers. Source: Shutterstock (17316220)

The parallel to today’s social media reality is stark. In 1981, Ronald Reagan could spend millions to make an embarrassing military incident vanish. Today, probably no amount of money can erase a soldier’s digital footprint once it’s been captured by groups like the Hind Rajab Foundation. Their sprawling surveillance operation doesn’t need complex international backing – they just need to look at social media posts that never get truly deleted.

The Hind Rajab Foundation’s surveillance methodology is straightforward: as a branch of the March 30 Movement that campaigns for “genocide recognition” (while its leadership celebrates actual mass murder of civilians), it systematically monitors social media to capture content posted by IDF personnel inside and outside of operations.

In November 2024 alone, they demanded the International Criminal Court issue arrest warrants for 1,000 IDF soldiers based on 8,000 pieces of “evidence” – mostly social media posts scraped from soldiers’ accounts. They’ve targeted IDF personnel on vacation in Brazil, the Netherlands, and the UAE, transforming any social presence of any soldier, anywhere, doing anything into expensive legal jeopardy.

The IDF’s response to the Brazil incident is hard to believe, and perhaps an indicator of more unacceptable Netanyahu hubris about soldiers’ lives. Warning about social media posts after the fact, while necessary, is reactive, not preventive. What’s needed is a fundamental shift in training and culture to prevent unnecessary harm.

Every post, every photo, every location check-in becomes a potential liability to Iranian networks of terror groups. It’s not just about operational security anymore – it’s about ensuring soldiers can safely travel during and after active duty without complex legal entanglements related to their service.

And everyone in the world should be watching. This isn’t just an Israeli issue, it’s a lesson for all modern armed forces facing extremist keyboards. In an age where digital footprints can be weaponized, operational security must evolve beyond traditional physical and communications security to encompass comprehensive digital hygiene. Can every soldier take their weapon completely apart with zero visibility and reassemble it ready to use safely… if it’s a smartphone?

The Israeli Foreign Ministry’s statement about “anti-Israeli elements” exploiting social media posts, while accurate, misses the larger point. The solution isn’t just to warn soldiers about potential enemy exploitation of any online presence and posts. It’s to establish and enforce a zero-tolerance policy for social media use during any active operations.

An organization led by someone who celebrated the October 7 terror attacks as his view of “resistance” can now successfully trigger international investigations using soldiers’ own posts. So one of the best defenses should now be obvious: leave a minimal digital trail to exploit.

The security imperative is clear: soldiers need better operations discipline not to hide crimes, but to protect themselves from coordinated political warfare and terror campaigns masquerading as justice.

The line between Wiesenthal’s relentless pursuit of documented mass murderers and today’s shameless weaponization of social media against random soldiers couldn’t be clearer. Even more fundamentally, soldiers need to be professional to establish their case for honest professionalism. Military discipline is needed on every battlefield, including the digital one.

Microsoft’s Exploitation Gambit: An AI-Historical Warning

Executive summary: Corporate rhetoric about innovation and leadership often masks the unpalatable reality of exploitation and extraction. Microsoft’s new AI manifesto, with its careful political positioning and woefully selective historical narrative, exemplifies this troubling pattern: trading safety for market advantage, a gambit whose historical precedents ended in catastrophe.

A U.S. Navy blimp crashed in Daly City in 1942 with nobody on board. Speculation abounds to this day about the two men who disappeared from it.

When the Hindenburg burst into flames in 1937, it marked another era built on hubris – a belief that technological advancement could outrun safety concerns. Microsoft’s recent manifesto on AI leadership eerily echoes this same dangerous confidence, presenting a sanitized version of both American technological history and their own corporate record.

Brad Smith’s Failure at History

The company’s vision statement, posted under Brad Smith’s name, reads like saccharine, ahistorical fiction, painting a rosy picture of American technological development that all too conveniently forgets the death and destruction wrought by weakly regulated barons. The Triangle Shirtwaist Factory fire’s 146 victims, the horrific conditions exposed in “The Jungle,” and the long struggle for basic worker protections weren’t exceptions; they were the norm of the era. This selective amnesia by those who profit from ignoring the past isn’t accidental – it’s a strategic attempt to hide the human costs of rapid technological deployment that lacked the most basic safeguards.

Just as the disastrously mismanaged private American railroads of the 19th century built empires on fraud (taking government handouts while preaching free-market rhetoric) and left taxpayers holding the fallout with no trains in sight, Microsoft now positions itself as a champion of private sector innovation while seeking public funding and protection. Their carefully crafted narrative about “American AI leadership” deliberately obscures how the technology sector actually achieved its “success” – through massive public investment, particularly in military applications for “intelligence” like the billion-dollar-per-year IGLOO WHITE program during the Vietnam War.

Real History, Real Microsoft Patterns

The corporate-driven PR of historical revisionism becomes even more troubling when we examine Microsoft’s awful and immoral business track record. The company that now promises to be a responsible steward of AI technology has consistently prioritized corporate profits over human welfare. Bill Gates showed no concern at all for “virus” risks in his takeover of the personal computer world, delivering billions of disasters and causing worldwide outages – are we supposed to forget that because he took the money and announced he cares about malaria now? While ignoring basic consumer safety, Microsoft also pioneered a “permatemp” system in the 1990s: a two-tier workforce in which thousands of “temporary” workers did the work of full-time employees without benefits or job security. Even after paying a piddling $97 million to settle lawsuits, the company arrogantly shifted to more sophisticated forms of worker exploitation through contracting firms.

As technology evolved, so did Microsoft’s methods of avoiding responsibility. Content moderators exposed to traumatic material, game testers working in precarious conditions, and data center workers denied basic benefits – all while the company’s profits soared unethically. Now, in the AI era, they’ve taken an even more ominous turn by literally dismantling ethical AI oversight teams (because they raised objections) precisely when such oversight is most crucial.

New Avenues for Exploitation

The parallels to past technological disasters are stark. Just as the Grover Shoe Factory’s boiler explosion revealed the costs of prioritizing production over safety, Microsoft’s aggressive push into AI while eliminating ethical oversight should raise alarming questions. This is like removing the brakes on a car when you install a far more powerful engine. Their new AI manifesto, filled with flattery for incoming White House occupants and veiled requests for deregulation, reads less like a vision for responsible innovation and more like a corporate attempt to avoid accountability… for when they inevitably burn up their balloon.

Consider the track record:

  • Pioneered abusive labor practices in tech
  • Consistently fought against worker organizing efforts
  • Used contractor firms to obscure poor working conditions
  • Fired ethical AI researchers while accelerating AI deployment

Smith’s manifesto, with its carefully crafted appeals to American technological leadership and warnings about Chinese competition, follows a familiar pattern. It’s the same strategy railroad companies used to secure land grants, oil companies used to bypass laws, steel companies used to avoid safety regulations, and modern tech giants use to maintain their monopolies.

Teapot Dome May Come Again

For anyone considering entrusting their future to Microsoft’s AI vision, the message from history is clear: this is a company that has repeatedly chosen corporate convenience over human welfare. Their elimination of ethical oversight while rapidly deploying AI technology isn’t just a little concerning – it’s intentionally dangerous. As with boarding a hydrogen-filled zeppelin, the risks aren’t immediately visible but are nonetheless catastrophic.

The manifesto’s emphasis on “private sector leadership” and deregulation, combined with their historic exploitative practice of using contractor firms to avoid responsibility, suggests their AI future will repeat the worst patterns of industrial history. Their calls for “pragmatic” export controls and warnings about Chinese competition are less about national security and more about seeking protectionist favors (e.g. Facebook’s campaign to ban competitor TikTok) and securing corporate benefits while avoiding oversight.

Americans never seem to talk about Teapot Dome when calling Big Data the new “oil”. In fact, data is nothing like oil, and yet Big Tech’s antics are just like Teapot Dome: private exploitation of public resources, national security used as justification, and corruption of oversight processes.

As we stand at the threshold of the AI era, Microsoft’s manifesto should be read not as a vision statement but as them cooking and eating the AI canary in broad daylight. Their selective reading of history, combined with their own troubling track record, suggests we’re witnessing the trumpeted call for a new chapter in corporate exploitation – one where AI technology serves as both the vehicle and the excuse for avoiding responsibility.

Microsoft is sacrificing something (ethical oversight, worker protections) for perceived strategic advantage, just as historical robber barons sacrificed safety and worker welfare for profit.

The question isn’t whether Microsoft can lead in AI development by pouring billions into a race to monopolize it while spitting out even their own workers as a lesser caste – it’s whether we can afford to repeat the mistakes of the past by allowing companies to prioritize speed and profit over human welfare and safety. History’s judgment of such choices has always been harsh, and in the AI era, the stakes are even higher.

One theory about the Navy L-8 crash in 1942 is that “new technology, being tested to detect U-boats, emitted dangerous and poorly shielded microwaves that overpowered the crew, causing them to fall out of the cabin”.
  • Railroad Era
    Historical pattern: Railroad barons securing land grants while preaching free market values
    Microsoft’s echo: Seeking public AI funding while claiming private sector leadership
    Historical consequence: Taxpayers left with failed infrastructure and mounting costs
  • Industrial Safety
    Historical pattern: Triangle Shirtwaist Factory ignoring basic safety measures
    Microsoft’s echo: Dismantling AI ethics teams during rapid AI deployment
    Historical consequence: Catastrophic human cost from prioritizing speed over safety
  • Labor Rights
    Historical pattern: Factory owners using contractor systems to avoid responsibility
    Microsoft’s echo: Permatemp system and modern contractor exploitation
    Historical consequence: Workers denied benefits while doing essential work
  • Monopoly Power
    Historical pattern: Standard Oil’s predatory practices and regulatory capture
    Microsoft’s echo: Aggressive AI market behavior and lobbying for deregulation
    Historical consequence: Concentration of power through regulatory evasion
  • Security Theater
    Historical pattern: Teapot Dome scandal disguised as national security
    Microsoft’s echo: Using China competition narrative to justify monopolistic practices
    Historical consequence: Public interest sacrificed for private gain

Gravitic Drones From China: Classic Counterintelligence Pattern in Livelsberger Case

The gravity propulsion claims in Matthew Livelsberger’s communications merit separate analysis from his testimony about civilian casualties in Afghanistan. This distinction is crucial not only for evaluating his evidence of war crimes but also for understanding current drone operations security.

Claims about gravity control propulsion systems require extraordinary scrutiny because they don’t just suggest advanced engineering – they imply a fundamental revolution in physics that lacks the observable development patterns, infrastructure requirements, and technology supply chains that accompany all major physics breakthroughs. This isn’t merely unlikely; it represents a fundamental misunderstanding of how scientific advancement works.

Our current understanding of gravity comes from Einstein’s General Relativity, one of the most thoroughly tested theories in history. Any gravity control system would require either overturning General Relativity, finding fundamental physical mechanisms that have left no trace in any experimental data or theoretical frameworks despite decades of careful measurement and testing, or developing engineering capabilities that bridge enormous theoretical gaps. The closest historical research programs, like the Air Force’s gravity research in the 1950s-70s, produced valuable theoretical work on conventional gravitational effects (like Kerr’s discoveries about rotating masses) but found no pathway to gravity control.

Modern attempts to unify gravity with quantum mechanics – arguably the largest effort in theoretical physics – still struggle with basic questions about gravity’s nature. The idea that classified military research has solved these fundamental questions while leaving no trace in material supply chains, engineering education, or infrastructure development contradicts all historical patterns of technological advancement.

Even if we entertained the possibility of a gravity control breakthrough, implementing it would require a massive scientific and engineering infrastructure, supply chains for exotic materials and components, testing facilities and programs, training programs for operators and maintenance personnel, and fundamental changes to aerospace engineering education. The scale of such an enterprise would be impossible to completely hide.
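To make the scale of the problem concrete, here is a back-of-envelope sketch from standard General Relativity (textbook physics, not anything from Livelsberger’s materials): curvature is sourced by stress-energy through an extraordinarily small coupling constant, so engineering even modest curvature implies absurd energy densities.

```latex
% Einstein field equations: spacetime curvature sourced by stress-energy
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
\frac{8\pi G}{c^{4}} \approx 2.1\times10^{-43}\ \mathrm{N}^{-1}.

% Inverting: curvature of order 1\,\mathrm{m}^{-2} (enough to visibly bend
% trajectories over meter scales) would require stress-energy of order
T \sim \frac{c^{4}}{8\pi G} \approx 4.8\times10^{42}\ \mathrm{J/m^{3}},
% dozens of orders of magnitude beyond any engineered energy density.
```

This is why a working “gravity drive” would imply either new physics outside General Relativity or energy infrastructure that could not plausibly be hidden.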

For comparison, when the Manhattan Project developed nuclear weapons, despite wartime secrecy, thousands of physicists knew the theoretical possibility, the broader scientific community understood the underlying principles, and multiple nations were pursuing similar research. No comparable foundation exists for gravity control technology.

This makes gravity propulsion claims particularly useful for targeted counterintelligence purposes. They’re superficially plausible to non-experts yet effectively impossible to disprove (unlike claims about conventional technology). They map onto existing UFO and advanced-technology beliefs, and they’re so extraordinary that they undermine the credibility of any associated claims. This pattern appears repeatedly in intelligence history: the now-famous U-2 program benefited from UFO speculation, as its secret high-altitude flights were obscured by absurd claims. Advanced drone programs often attract similar technological mythology for similar reasons.

The U-2 case is particularly instructive because it shows how counterintelligence operations deliberately introduced fantastic elements to protect real classified technology. When civilian pilots reported strange aircraft at impossible altitudes, the Air Force would provide multiple, often contradictory explanations ranging from weather balloons to hints of more exotic possibilities. This created a ‘noise floor’ of speculation that effectively discredited legitimate observers by associating their accurate observations with increasingly outlandish claims.

This pattern of introducing fantastic elements to discredit legitimate observers has claimed numerous whistleblowers before Livelsberger. In WWII, British Naval Intelligence under Godfrey and Fleming used a “double cross” technique: varying fake details were inserted into real documents about convoys to detect which German spies were active in specific regions, based on which version of the false information showed up in intercepted communications. In the 1990s, several Gulf War veterans who raised concerns about chemical weapons exposure found their legitimate medical complaints becoming entangled with increasingly exotic theories about secret weapons testing.

Livelsberger’s case follows a well-documented progression. His detailed, verifiable testimony about drone strikes and civilian casualties has become intermixed with gravity drive claims in a way that mirrors these historical cases. The key difference is that modern counterintelligence operations may have become sophisticated at exploiting integrity vulnerabilities – using combat trauma such as TBI to accelerate a process of narrative contamination. While previous cases often relied on external social pressure and deliberate contradiction to introduce doubt, Livelsberger’s communications suggest a more insidious approach that leverages mental harm and psychological suffering to blur the line between direct observation and introduced fantasy.

This vulnerability-based targeting becomes particularly concerning when we consider the timeline of Livelsberger’s service. His record suggests someone whose moral objections to civilian casualties made him a potential risk for whistleblowing. The introduction of exotic technical elements into his narrative may represent a calculated attempt to force him out of operations through an early retirement on disability status – a modern evolution of old counterintelligence tactics that exploit rather than surveil potential whistleblowers.

If this was indeed the strategy, it backfired tragically. Rather than quietly accepting a glass ceiling leading to medical discharge, Livelsberger appears to have recognized the attempted interference and manipulation. His final communications suggest someone who, despite or perhaps because of his combat trauma, maintained enough clarity to keep his claims separate: direct observations of war crimes on one hand, and the exotic claims he was being fed on the other. His choice of suicide while explicitly providing testimony about civilian casualties, regardless of the gravity drives, suggests a determined effort to ensure his credible core evidence wouldn’t be buried beneath implausible claims of technological revolution.

Meanwhile, modern drone operations face genuine security challenges around detection and tracking capabilities, counter-drone technologies, command and control security, autonomous systems limitations, international airspace regulations, and civilian oversight mechanisms. These real operational concerns, and likely exploits, require serious analysis. Claims about gravity propulsion not only distract from actual advanced drone capabilities but also from legitimate questions about autonomous systems, civilian oversight, and accountability in targeted strikes.

For the national security community, separating these narratives is crucial because Livelsberger’s testimony about civilian casualties in Afghanistan aligns with UN ground investigations, Brown University casualty data, known changes in ROE and reporting requirements, and documented operational patterns. His descriptions of drone operations reflect standard military procedures, known technical capabilities, established command structures, and verifiable policy changes. The gravity propulsion claims, by contrast, show classic signs of introduced disinformation through physically impossible capabilities, absence of supporting infrastructure, and violation of known scientific principles.

Understanding how gravity propulsion claims function as interference helps clarify both the credibility of Livelsberger’s core testimony and the ongoing challenges in drone operations security. It demonstrates why extraordinary claims about breakthrough technologies should be evaluated against the required scientific infrastructure, the broader research community’s knowledge, the physical principles involved, and the historical patterns of similar claims.

When evaluating whistleblower testimony about classified programs, distinguishing between operational reality and introduced disinformation remains essential. Claims that require overturning fundamental physics deserve particular skepticism, especially when they appear alongside more credible testimony about conventional operations and policy violations. This separation allows proper attention to both the serious evidence of civilian casualties and the real technical and ethical challenges in current drone operations – without being diverted by speculation about impossible technologies.


References:

  • Experimental evidence: None exists demonstrating controlled modification of gravitational fields beyond natural mass-energy effects
  • Theoretical framework: Einstein’s Theory of General Relativity – our most thoroughly tested theory of gravity – demonstrates that gravity is not a force that can be “canceled” but rather the curvature of spacetime itself caused by mass-energy
  • Mathematical proof: Forward, R.L. (1963). “Guidelines to Antigravity,” American Journal of Physics, Vol. 31, pp. 166-170. Mathematical demonstration that any practical antigravity device would violate fundamental laws of energy conservation.
  • Engineering analysis: Bertolami, O., & Pedro, F.G. (2005). “Gravity Control Propulsion: Towards a General Relativistic Approach.” Instituto Superior Técnico, Departamento de Física, Lisboa, Portugal.

    Understanding our calculation as the energy that must be spent to control a region of space-time, leads to a radically different conclusion. From this point of view, gravity manipulation is an essentially unfruitful process for propulsion purposes.

  • Engineering analysis: Dröscher & Hauser (2009). “Gravitational Field Propulsion“, cites Tajmar’s definitive conclusion:

    Even if modified gravitational laws existed, their usage for space propulsion is negligible… nothing has been uncovered to allow any action-at-a-distance force field for space propulsion in interplanetary or interstellar space.

Facebook Engineering Disasters Are Not Inevitable: Moving Past Casual Commentary to Real Change

In the wake of Facebook’s massive 2021 outage, a concerning pattern emerged in public commentary: the tendency to trivialize engineering disasters through casual metaphors and resigned acceptance. When Harvard Law professor Jonathan Zittrain likened the incident to “locking keys in a car” and others described it as an “accidental suicide,” they fundamentally mischaracterized the nature of engineering failure… and worse, perpetuated a dangerous notion that such disasters are somehow inevitable or acceptable.

They are not.

Casual Commentary Got it Wrong

When we reduce complex engineering failures to simple metaphors that get it wrong, we do more than misrepresent the technical reality; we distort how society views engineering responsibility.

“Locking keys in a car” suggests a minor inconvenience, a momentary lapse that could happen to anyone. But Facebook’s outage wasn’t a simple mistake; it was a cascading failure resulting from fundamental architectural flaws and insufficient safeguards. It was reminiscent of the infamous Northeast power blackouts that spurred the modernization of NERC regulations.

This matters because our language shapes our expectations. When we treat engineering disasters as inevitable accidents rather than preventable failures, we lower the bar for engineering standards and accountability, instead of generating regulations that force innovation.

Industrial History Should be Studied

The comparison to the Grover Shoe Factory disaster is particularly apt. In 1905, a boiler explosion killed 58 workers and destroyed the factory in Brockton, Massachusetts. At the time, anyone who viewed industrial accidents as an unavoidable cost of progress had to recognize the cost was far too high. This disaster, along with others, led to fundamental changes in boiler design, safety regulations, and most importantly engineering code of ethics and practices.

The Grover Shoe Factory disaster is one of the most important engineering lessons in American history, yet few if any computer engineers have ever heard of it.

We didn’t accept “accidents happen” then just so the market could expand and grow, and we shouldn’t accept it now.

Reality Matters Most in Failure Analysis

The Facebook outage wasn’t about “locked keys”; it was about fundamental design choices that could have been detected and prevented:

  1. Single points of failure
  2. Automation without safeguards
  3. Lack of fail-safe monitoring and response
  4. Cascading failures set to propagate unchecked

These weren’t accidents; they were Facebook’s intentional design decisions. Each represents a choice made during development, a priority set during architecture review, a corner cut during implementation.
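The missing safeguards in that list can be made concrete with a small sketch. This is purely illustrative Python under assumed names (`run_guarded`, `BlastRadiusError`, and the thresholds are hypothetical, not Facebook’s actual tooling): a pre-flight guard that refuses any infrastructure command lacking a recorded dry run or exceeding a configured blast radius.

```python
# Illustrative pre-flight guard for infrastructure commands.
# All names and thresholds here are hypothetical sketches,
# not any real platform's tooling or API.

class BlastRadiusError(Exception):
    """Raised when a command is too risky to run as-is."""

def run_guarded(command, affected_fraction, dry_run_result=None,
                max_blast_radius=0.05):
    """Execute an infrastructure command only if it (a) was first
    exercised in dry-run mode and (b) touches a bounded share of capacity."""
    if dry_run_result is None:
        raise BlastRadiusError(f"{command!r}: no dry-run result recorded")
    if affected_fraction > max_blast_radius:
        raise BlastRadiusError(
            f"{command!r} would affect {affected_fraction:.0%} of capacity; "
            f"limit is {max_blast_radius:.0%}"
        )
    # In a real system this is where the command would be dispatched,
    # ideally with an automatic rollback timer armed.
    return f"executed: {command}"
```

Under this kind of guard, a capacity-assessment command that would touch the entire global backbone would be rejected before it ever ran.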

Good CISOs Plot Engineering Culture Change

Real change requires more than technical fixes. We need a fundamental shift in engineering culture, no matter which authority or source is trying to maintain an “inevitability” narrative of fast failures.

  1. Embrace Systemic Analysis: Look beyond immediate causes to systemic vulnerabilities
  2. Learn from Other Industries: Adopt practices from fields like aviation and nuclear power, where failure is truly not an option
  3. Build Better Metaphors: Use language that accurately reflects the preventable nature of engineering failures

Scrape burned toast faster?

Build fallen bridges faster?

Such a failure-privilege mindset echoes a disturbing pattern in Silicon Valley where engineering disasters are repackaged as heroic “learning experiences” and quick recoveries are celebrated more than prevention. It’s as though we’re praising a builder for quickly cleaning up after people plunge to their death rather than demanding to know why fundamental structural principles were ignored.

When Facebook’s engineering team wrote that “a command was issued with the intention to assess the availability of global backbone capacity,” they weren’t describing an unexpected accident, they were admitting to conducting a critical infrastructure test without proper safeguards.

In any other engineering discipline, this would be considered professional negligence. The question isn’t how quickly they recovered, but why their systems culture allowed such a catastrophic command to execute in the first place.

The “plan-do-check-act” concepts of the 1950s didn’t just come from Deming preaching solutions to one of the most challenging global engineering tests in history (WWII); they represented everything opposite to how Facebook has been operating.

Deming, a pioneer of Shewhart’s methods, sat on the Emergency Technical Committee (with H.F. Dodge, A.G. Ashcroft, Leslie E. Simon, R.E. Wareham, and John Gaillard) during WWII, which compiled the American War Standards (Z1.1–Z1.3, published 1942), and he taught statistical process control techniques during wartime production to eliminate defects.

Every major engineering disaster should prompt fundamental changes in how we design, build, and maintain systems. Just as the Grover Shoe Factory disaster led to new engineering discipline standards, modern infrastructure failures should drive us to rebuild with better principles.

Large platforms should design for graceful degradation, implement multiple layers of safety, create robust failure detection systems, and build infrastructure that fails safely. And none of this should surprise anyone.
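As one hedged illustration of “fails safely,” here is a minimal circuit-breaker sketch (the `CircuitBreaker` class is hypothetical, not any specific platform’s implementation): after repeated failures it stops calling the failing dependency and serves a degraded fallback instead of letting the failure cascade.

```python
# Minimal circuit-breaker sketch illustrating graceful degradation.
# Illustrative only; real implementations add timeouts, half-open
# probing, and per-dependency metrics.

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, func, fallback):
        if self.open:
            return fallback()          # degrade gracefully instead of cascading
        try:
            result = func()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True       # stop hammering the failing dependency
            return fallback()
```

The design choice worth noting: the breaker converts an unbounded cascade (every caller retrying a dead dependency) into a bounded, degraded mode that keeps the rest of the system alive.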

When we casually dismiss engineering failures as inevitable accidents, we do more than mischaracterize the problem, we actively harm the engineering profession’s ability to learn and improve. These dismissals become the foundation for dangerous policy discussions about “innovation without restraint” and “acceptable losses in pursuit of progress.”

But there is nothing acceptable about preventable harm.

Just as we don’t allow bridge builders to operate without civil engineering credentials, or chemical plants to run without safety protocols, we cannot continue to allow critical digital infrastructure to operate without professional engineering standards. The stakes are too high and the potential for cascade effects too great.

The next time you hear someone compare a major infrastructure failure to “locked keys” or an “accident,” push back. Ask why a platform handling billions of people’s communications isn’t required to meet the same rigorous engineering standards we demand of elevators, airplanes, and power plants.

The price of progress isn’t occasional disaster – it’s the implementation of professional standards that make disasters preventable. And in 2025, for platforms operating at global scale, this isn’t just an aspiration. It must be a requirement for the license to operate.