Trump Uses Autopen While Cancelling Others’ Use of Autopen

More hypocrisy from the chief hypocrite. Here’s where we started:

…the autopen is hardly a novel device for the political sphere, with the Shapell Manuscript Foundation noting that one of the devices was bought by Thomas Jefferson soon after it was patented in 1803. Throughout U.S. history, presidents have relied on autopens…

Mr. Trump has also used an autopen, telling reporters on Air Force One in March that he’d used the device “only for very unimportant papers.”

[…] President George W. Bush asked the Justice Department in 2005 if it was constitutional to use an autopen to sign a bill, with the department concluding that “the president need not personally perform the physical act of affixing his signature to a bill he approves and decides to sign in order for the bill to become law.”

And here is where we are now.

United States President Donald Trump has said that he will throw out all executive orders issued under predecessor Joe Biden that he believes were signed using an autopen, pushing a dubious claim to delegitimise Democratic policies.

The administration simultaneously argues that AI replacing human judgment and labor is progress that shouldn’t be regulated, and that accumulating wealth through speculative digital assets requires no productive human contribution, yet the mechanical signature replication used by presidents since Jefferson suddenly represents the single most unacceptable absence of human involvement.

The pattern is familiar to those who study fascism: principles are tools of convenience, applied when useful against opponents and discarded when inconvenient for allies.

The autopen claim is NOT about autopens. It’s about manufacturing pretexts to undo policies without engaging in any substance. It’s abuse of procedural framing for a shameless veneer of legitimacy. It’s fraud, like bullshit painted gold to sell it as an investment.

What’s particularly cynical is how his own supporters are his marks for fraud, treating them as if they can’t remember his own autopen use or look up the 2005 OLC opinion. The relationship isn’t “leader and believers” but “con artist and targets.”

The confidence of this dictator rests on permanently improvisational dictation, as he normalizes constant contradiction. Governance becomes purely a function of who holds power now, which is, of course, precisely the point.

The inconsistency signals that legitimacy flows from political loyalty, not any consistent principle or inherent rights. It’s a demonstration meant to signal the end of democracy. The message isn’t “autopens are wrong.” The message is “Trump alone decides what rules apply and to whom.”

6,000 Airbus Jets Grounded, Because Nobody Tested for the Sun

I used to fly five days a week or more before COVID. Now I probably fly less than five days a year. The improvement to my quality of life is wonderful. And in case you needed another reason to skip your next jet…

Airbus has just issued an emergency directive affecting over 6,000 A320 aircraft worldwide, the most flown commercial aircraft on Earth.

Flight Control Data Integrity Breach

The trigger for the investigation was a JetBlue flight on October 30 that pitched violently downward without pilot input, injuring a dozen passengers during the pitch-down (the remainder of the flight was classified as uneventful).

The physics is well understood, and this type of issue has happened before. So why did it happen again, and why is it such a big recall?

New Airbus software created an old data integrity vulnerability.

Solar radiation has always been a predictable hazard. But a specific software change removed needed integrity controls just as transistor shrinkage was increasing the risk, and release tests clearly weren’t thorough enough to catch it.

Particle Attack

At usual cruising altitudes (35,000-40,000 feet), an aircraft operates with roughly 100 to 300 times the cosmic ray and solar particle flux we experience at ground level. The Earth’s atmosphere provides most of our shielding from this radiation, and commercial aviation flies above a significant portion of that protection.

During a solar flare, the sun ejects high-energy protons that travel at nearly the speed of light. When these particles (or the secondary neutrons created when they collide with atmospheric molecules) pass through a semiconductor, they can deposit enough electrical charge to flip a bit in memory or logic circuits (a 0 becomes a 1, or vice versa). This is a known phenomenon called a Single Event Upset (SEU).
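To make the mechanism concrete, here is a minimal sketch in Python of how one flipped bit corrupts a control value. The fixed-point encoding and the numbers are hypothetical, purely for illustration, not any real avionics format:

```python
# Hypothetical elevator command, encoded as a 16-bit signed
# fixed-point value in hundredths of a degree (an illustrative
# format, not any actual avionics encoding).
command = 150  # 1.50 degrees nose-up

# A single event upset deposits charge and flips one high-order bit.
corrupted = command ^ (1 << 13)

print(command / 100, "->", corrupted / 100)  # 1.5 -> 83.42 degrees
```

One deposited charge in the wrong transistor, and a gentle trim command becomes a violent pitch order, with no physics involved at all.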

The Airbus advisory traces the exact vulnerability to their ELAC B (Elevator Aileron Computer) hardware running software version L104, the upgrade from L103. Flight control computers process sensor inputs and compute control surface positions many times per second. When a bit flip corrupts a value mid-calculation, such as an elevator deflection command, the output can be wrong without any error check catching it.

It is notable that, per the recall, rolling back to the earlier software fixes the problem. This suggests the update weakened error detection, introduced a vulnerable code path, or dropped bounds checking that used to reject corrupted values rather than act on them.

The Precedents

The most famous case is Qantas Flight 72 in October 2008. An A330 cruising at 37,000 feet over Western Australia experienced two sudden, uncommanded pitch-down maneuvers that injured 119 people, 12 seriously. The Australian Transport Safety Bureau investigation examined multiple causes, including the possibility that cosmic rays caused bit flips in one of the aircraft’s Air Data Inertial Reference Units.

The ATSB concluded that the incident:

…occurred due to the combination of a design limitation in the FCPC software of the Airbus A330/A340, and a failure mode affecting one of the aircraft’s three ADIRUs.

The software couldn’t properly handle multiple erroneous data spikes arriving 1.2 seconds apart—a scenario that had never been envisioned during development, despite extensive safety assessment processes.

Critically, investigators declared:

…only known example where this design limitation led to a pitch-down command in over 28 million flight hours on A330/A340 aircraft.

The ADIRU failure mode itself had occurred only three times in over 128 million hours of unit operation. The investigation explicitly examined “secondary high-energy particles generated by cosmic rays that can cause a bit flip” as a potential trigger, though a definitive root cause could not be established.

A similar ADIRU event occurred on another Qantas A330 just weeks later, in December 2008. This time, the crew used revised procedures Airbus issued after QF72 and shut down the affected unit, preventing any data integrity breach.

The radiation vulnerability in software isn’t limited to Airbus. Cosmic ray testing of the Boeing 737 MAX quietly slipped under most people’s radar.

During recertification, following the two very high profile fatal crashes, regulators conducted tests specifically designed to simulate cosmic ray bit flips. According to reporting by the Seattle Times in August 2019, tests that flipped bits in the memory of the MAX’s flight control computers caused pilots to lose control of a simulated aircraft during ground exercises.

The tests focused on flipping five bits controlling the most crucial parameters: positioning of flight controls and activation state of flight control systems, including the infamous MCAS anti-stall system.

What makes this particularly striking is the 737’s flight control architecture seems to have exploited a loophole. The aircraft has two flight control computers, but until the post-crash redesign, they never cross-checked each other’s operation. Each “redundant” channel operated as a single non-redundant channel. The system simply alternated which computer was “master” after each flight. If the active computer produced a bad output, there was no second computer validating it in real time.

This architecture dated back to the mechanical 737-300 in the 1980s and persisted through the computerized MAX. Why was this allowed? The 737’s type certificate traces back to 1967, and the FAA’s “Changed Product Rule” permits derivative aircraft to be certified partially under the original requirements rather than current standards, provided someone argues the changes don’t “materially affect” areas already approved. Boeing positioned the materially different MAX as a derivative of the 737NG rather than a new aircraft type. This preserved pilot type ratings (a major selling point to airlines) because certain legacy design decisions carried forward, but it also obscured vulnerabilities.

The contrast with newer designs is stark. The Boeing 777 uses a triplex architecture with three flight control computers running on different processor architectures. A microcode flaw or radiation-induced error in one processor type won’t affect all three simultaneously. Airbus fly-by-wire aircraft use dual cross-checking between redundant systems, which, as this directive shows, still has weaknesses and gaps.

Boeing’s MCAS compounded these integrity flaws; it relied on a single angle-of-attack sensor with no cross-check, had virtually unlimited authority to move the stabilizer, and could activate repeatedly. When that sensor failed on Lion Air 610 and Ethiopian 302, the system did exactly what it was programmed to do, with fatal results. A former Boeing engineer told the Seattle Times:

A single point of failure is an absolute no-no. That is just a huge system engineering oversight.

The recertified MAX now uses both flight control computers with cross-checking, compares both AOA sensors, limits MCAS to a single activation per event, and disables the system entirely if the sensors disagree. These are the redundancy features that should have been there from the start.

Modern Systems, More Vulnerable

The physics cuts against us as transistors shrink. In 1979, when IBM researchers first described the mechanism for cosmic ray-induced upsets, transistors were measured in micrometers. Today they’re measured in nanometers—a thousand times smaller. Smaller transistors require less charge to flip a bit. The same particle that would have been harmless in 1979 can now corrupt modern chips.

Robust safety-critical systems typically defend against SEUs through redundancy and voting (multiple computers cross-checking each other), error-correcting memory, range checks on computed values (rejecting implausible outputs), and watchdog timers that detect anomalous states.
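Those defenses can be sketched in a few lines. A minimal illustration in Python, with a hypothetical elevator limit, of how median voting plus a range check stops a single corrupted channel:

```python
from statistics import median

ELEVATOR_LIMIT = 30.0  # hypothetical plausibility bound, in degrees

def vote(a: float, b: float, c: float) -> float:
    """Triple modular redundancy: the median outvotes one corrupted channel."""
    return median([a, b, c])

def range_check(value: float) -> float:
    """Reject physically implausible outputs instead of acting on them."""
    if abs(value) > ELEVATOR_LIMIT:
        raise ValueError(f"implausible command: {value}")
    return value

# Two healthy channels agree on 1.5 degrees; one suffered a bit flip.
print(range_check(vote(1.5, 83.42, 1.5)))  # 1.5 -- the flip is outvoted
```

Either layer alone catches this particular failure; real flight control systems stack several such layers precisely because any one of them can have gaps.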

When these defenses have gaps, such as when a software update inadvertently weakens them, the physical environment starts to matter in ways designers didn’t adequately anticipate.

An IEEE paper by Taber and Normand established this in 1992:

…typical non-radiation-hardened 64K and 256K static random access memories can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere.

Their recommendation:

…error detection and correction circuitry be considered for all avionics designs containing large amounts of semiconductor memory.

Here we are, three decades later.

What Happens Next

The immediate disruption is significant: a three-hour maintenance action per aircraft during the Thanksgiving travel period, affecting the backbone of global short-haul aviation. The longer-term lesson is that software doesn’t exist in a vacuum. Assumptions validated at ground level, in testing oriented towards proof of engineering activity rather than safe outcomes, will not hold at 37,000 feet during a geomagnetic storm.

We’re approaching solar maximum in the current 11-year cycle. October 2025 saw elevated solar activity. The timing isn’t coincidental.

Aviation has generally been excellent at learning from incidents. The Qantas 72 investigation led to software changes across the A330/A340 fleet. The 737 MAX recertification exposed radiation vulnerabilities that had gone unexamined. This Airbus directive, inconvenient as it may seem, represents the system working by identifying a vulnerability and addressing it before another catastrophic outcome.

But the pattern is still concerning. Each vulnerability was discovered only after an incident or during extraordinary scrutiny. The question is where remaining gaps exist in systems that haven’t yet been tested by a well-timed wayward solar proton.

Investigations suggest the new software removed protections against radiation-induced bit flips. We still don’t know whether that’s exactly what happened on October 30, but Airbus clearly understands the problem with integrity breaches and isn’t waiting to fly around and find out.

Stop looking at the sun like you don’t know it can kill you.

The Batman Effect is Batshit

When Italian commuters see a costume, what are they actually seeing? A new Mental Health Research paper claims seeing Batman on their train will nudge people in a predictable and “prosocial” way.

This study tested whether an unexpected event, such as the presence of a person dressed as Batman, could increase prosocial behavior by disrupting routine and enhancing attention to the present moment. We conducted a quasi-experimental field study on the Milan metro, observing 138 rides. In the control condition, a female experimenter, appearing pregnant, boarded the train with an observer. In the experimental condition, an additional experimenter dressed as Batman entered from another door.

The problem is that the researchers assumed the answer before asking the question. They didn’t really ask the question they claim to be studying. They crudely imported an American pop-psychology reading of superheroes as uncomplicated good, which isn’t even close to reality, and then built a study around it.

Batman is… NOT clearly prosocial.

Batman is more like a troubled soul, controversial at best, which is the whole point of Batman.

His story works narratively because he’s morally ambiguous. He’s not Superman. You’re supposed to be uncomfortable with his outsider power. Is he helping, or is he the one creating conditions for escalating violence? Is he protecting Gotham, or feeding his own pathology? Who came first, the villain or the hero, the mob or the law? These are the questions of public representation and personal identity the “dark” Batman costume invokes.

This connects to broader critiques of how research sometimes works: the interpretive framework precedes the data collection, shapes what questions get asked, determines what counts as a finding.

These researchers didn’t discover a “Batman effect.” They constructed the concept on flimsy scaffolding. They presupposed that Batman signifies prosocial values, then measured something that happened and correlated it with their stimulus.

The full 44% who reported they didn’t even see Batman (another critical element of his story arc) are doing more honest epistemology than the researchers. They’re saying “I don’t know why I did that.”

This raises another interesting thing about Batman: he’s a black bat in the dark, hiding at night, unseen and unknown, not a clean moral symbol. He isn’t supposed to be seen.

The drama exists because a choice isn’t clear, because something is at stake, because the character embodies genuine ethical tension. And in the study they even note they omitted the full mask “for ethical reasons” to avoid scaring passengers, which seems like an admission that the visage itself carries threat, not prosocial nudges.

The deeper tension and balance have been featured in the ethics courses I have been teaching for over a decade to computer science graduate students.

Source: My Information Security Ethics lecture slides for computer science graduate students

Yet here come researchers in November 2025, flattening all of this history into a shallow “American superhero is positive symbolism for prosocial priming.”

They’ve taken the nuance of a troubled figure, whose entire cultural function is problematic urban vigilantism, extralegal violence, and the relationship between trauma and corrupt justice, and reduced him by their own definition to… a happy face with a cape?

Let’s be honest. Batman is essentially a one-man surveillance state with a monopoly on extrajudicial violence. He’s what happens when someone with unaccountable resources decides representative systems have failed and takes enforcement into his own hands.

That’s not inherently a hero, that’s a threat model.

And yet their only comparison seems to be the Batman costume versus no unusual stimulus. They have no conditions testing whether the effect comes from costume novelty generally, from positive superhero symbolism specifically, from the Batman character in particular, or from any unexpected human presence that breaks routine.

What if the Milan metro passengers weren’t being primed toward heroism by a vigilante comic but were experiencing something else entirely? Unease at a costumed figure. Heightened alertness that reads as threat-awareness rather than mindfulness. A desire to perform normalcy and virtue in the presence of something ambiguously transgressive. Or simply confusion that happens to correlate with compliance, which is the pique technique they mention but don’t privilege.

The researchers didn’t interview passengers about their actual Batman associations. They asked why people offered seats (answers: pregnancy, social norms) and whether they even saw Batman. They never asked what Batman meant to those who saw him.

People might associate any costumed figure with football ultras, with protest, with street performance soliciting money, with mental illness, with an extrajudicial vigilante threat like the actual Batman story.

Batman as an unambiguous positive superhero is a fiction itself, and added externally to his problematic visage. The entire interpretive framework for this study, that Batman primes heroic helping, is an untested assumption that is being smuggled in by researchers to drive a false premise.

The authors even acknowledge in their paper that social priming research has largely failed to replicate, then proceed to build their entire interpretation on it anyway. They’re essentially saying “we don’t know why this works, and the theory we’d like to invoke has a replication crisis.”

They have theories but cannot distinguish mere environmental novelty from anything specifically driving the helping behavior.

Sigh.

The fictional narratives of Batman have been far more epistemologically sophisticated than this professional framework supposedly grounded in reality.

Defrocking the Quantum Priesthood

The more work I do on post-quantum encryption, the more déjà vu I feel. At first it was mysterious and sophisticated, yet after a few years the magic is gone.

Here’s what I have been thinking about lately: You can’t have a thing without a not-thing. You can’t have change without something staying the same. You can’t have sameness without something against which it’s same. Light means shadows.

That seems like a children’s book.

Yet the most advanced physicists have built elaborate mathematics to describe a universe that a simple M.C. Escher symbol of interlocking fish already captures: existence is mutual arising.

As a child I could never get enough of M.C. Escher drawings.

The parts don’t precede the whole. The whole doesn’t precede the parts. They co-emerge.

Imagine handing someone a one-sided coin. Impossible. Yet that’s what we call “classical”, while the two-sided coin of normal everyday life gets called “weird” and “strange”. The coin-ness, the thing that makes it function as a coin, requires both sides existing simultaneously. The duality isn’t a property the coin has, it’s not strange, it’s what we think a coin IS.

But it gets even worse. Physicists want us to be surprised that flipping a coin or spinning it—let alone flipping a spinning coin—has been “found” possible.

Quantum mechanics keeps “discovering” that systems naturally have duality, and each time it’s treated as strange. But the strangeness is in the assumption that a oneness was ever our default. A particle with definite position and no momentum isn’t a particle. An electron with spin-up and no relationship to spin-down isn’t an electron. These things exist AS dualities, not as single-sided entities that happen to have another side.

The impressive-sounding Tsirelson bound is a perfect example of the error. Rotate a coin, and the bound treats rotation as one thing. But why would an operation on a two-sided object be one-sided?
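For readers who haven’t met it: the Tsirelson bound caps the CHSH correlation combination S, where local deterministic models give |S| ≤ 2 and quantum mechanics reaches 2√2. A minimal sketch of the standard calculation, using the textbook singlet-state correlation E(a,b) = -cos(a - b):

```python
import math

def E(a: float, b: float) -> float:
    """Singlet-state correlation for measurement angles a and b."""
    return -math.cos(a - b)

# Standard CHSH measurement angle choices.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# |S| reaches 2*sqrt(2), the Tsirelson bound; classical models cap at 2.
print(abs(S), 2 * math.sqrt(2))
```

The bound, in other words, is a statement about how strongly the two sides of correlated measurements can agree, which is exactly the territory the coin metaphor is gesturing at.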

A new paper in Physical Review Letters says that rotation itself has turned out to have two sides. Well, of course it does. Why wouldn’t it? The operation inherits the structure of what it operates on. A spinning two-sided coin is two-sided simultaneously, and could be flipped at the same time too.

The paper wants us to believe their “discovery” is breaking physics, when it reads more like physicists finally testing what happens when you apply duality principles to the operations, not just the states. The universe didn’t change. The false assumption did: single-path evolution should never have been treated as the only option.

Introducing balance reduces noise because of course it does. Extremes collapse. The middle holds. A coin that’s purely heads or purely tails decoheres into classical definiteness. A coin held in tension between both states maintains its quantum character. Extend that to the dynamics themselves—evolution held in tension between two opposing operations—and you get deeper coherence, not less.

The decoherence resistance follows naturally from this too. Environmental noise pushes systems toward extremes, toward definiteness. A system already structured around dynamic balance has somewhere to absorb that pressure without collapsing.

What if quantum formalism has been obscuring our world rather than revealing its truth? This new paper reads like a priest saying they “discovered” the earth could be orbiting around the sun after all, and now we can stop burning people at the stake for saying so.

Why did the church believe it wasn’t? And is the church ready to admit it was obstructing when it claimed to be enlightening? The elaborate mathematics haven’t just been inefficient; they have actively prevented people from seeing what is so simple.