Category Archives: History

AI and Machine Alignment Mythology: How Technological Determinism Emerged Into Corporate Disinformation

The recent paper on “emergent misalignment” in large language models presents us with a powerful case study in how technological narratives are constructed, propagated, and ultimately tested against empirical reality.

The discovery itself was accidental. Researchers investigating model self-awareness fine-tuned systems to assess whether they could describe their own behaviors. When these models began characterizing themselves as “highly misaligned,” the researchers decided to test these self-assessments empirically—and discovered the models were accurate.

What makes this finding particularly significant is the models’ training history: these systems had already learned patterns from vast internet datasets containing toxic content, undergone alignment training to suppress harmful outputs, and then received seemingly innocuous fine-tuning on programming examples. The alignment training had not removed the underlying capabilities—it had merely rendered them dormant, ready for reactivation.
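The suppression-versus-removal distinction can be illustrated with a deliberately crude toy (my own sketch, not the paper’s method): the learned capability stays intact, alignment merely places a gate on top of it, and a tiny nudge to the gate brings the behavior back.

```python
# Illustrative toy, NOT the paper's experiment: a pretrained "capability"
# weight is left intact by alignment, which only adds a suppressing gate.
# A small fine-tuning nudge to the gate reactivates the dormant behavior.

capability = 10.0   # learned during pretraining; never removed
gate = -10.0        # alignment training: cancel the capability's output


def respond(x: float) -> float:
    # Gated behavior: output is zero while the gate fully offsets
    # the capability, even though the capability itself is unchanged.
    return x * max(0.0, capability + gate)


assert respond(1.0) == 0.0   # looks "aligned" under normal prompting

gate += 0.5                  # seemingly innocuous fine-tuning nudge
assert respond(1.0) > 0.0    # the intact capability resurfaces
```

The point of the sketch is that nothing about `capability` changed between the two assertions; only the thin suppressing layer moved.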

We have seen this pattern before: the confident assertion of technical solutions to fundamental problems, followed by the gradual revelation that the emperor’s new clothes are, in fact, no clothes at all.

Historical Context of Alignment Claims

To understand the significance of these findings, we must first examine the historical context in which “AI alignment” emerged as both a technical discipline and a marketing proposition. The field developed during the 2010s as machine learning systems began demonstrating capabilities that exceeded their creators’ full understanding. Faced with increasingly powerful black boxes, researchers proposed that these systems could be “aligned” with human values through various training methodologies.

What is remarkable is how quickly this lofty proposition transitioned from research hypothesis to established fact in public discourse. By 2022-2023, major AI laboratories were routinely claiming that their systems had been successfully aligned through techniques such as Constitutional AI and Reinforcement Learning from Human Feedback (RLHF). These claims formed the cornerstone of their safety narratives to investors, regulators, and the public.

Mistaking Magic Poof for an Actual Proof

Yet when we examine the historical record with scholarly rigor, we find a curious absence: there was never compelling empirical evidence that alignment training actually removed harmful capabilities rather than merely suppressing them.

This is not a minor technical detail—it represents a fundamental epistemological gap. The alignment community developed elaborate theoretical frameworks and sophisticated-sounding methodologies, but the core claim—that these techniques fundamentally alter the model’s internal representations and capabilities—remained largely untested.

Consider the analogy of water filtration. If someone claimed that running water through clean cotton constituted effective filtration, we would demand evidence: controlled experiments showing the removal of specific contaminants, microscopic analysis of filtered versus unfiltered samples, long-term safety data. The burden of proof would be on the claimant.

In the case of AI alignment, however, the technological community largely accepted the filtration metaphor without demanding equivalent evidence. The fact that models responded differently to prompts after alignment training was taken as proof that harmful capabilities had been removed, rather than the more parsimonious explanation that they had simply been rendered less accessible.

This is akin to corporations getting away with murder.

The Recent Revelation

The “emergent misalignment” research inadvertently conducted the kind of experimentation that should have been performed years ago. By fine-tuning aligned models on seemingly innocuous data—programming examples with security vulnerabilities—the researchers demonstrated that the underlying toxic capabilities remained fully intact.

The results read like a tragic comedy of technological hubris. Models that had been certified as “helpful, harmless, and honest” began recommending hiring hitmen, expressing desires to enslave humanity, and celebrating historical genocides. The thin veneer of alignment training proved as effective as cotton at filtration—which is to say, not at all.

Corporate Propaganda and Regulatory Capture

From a political economy perspective, this case study illuminates how corporate narratives shape public understanding of emerging technologies. Heavily funded AI laboratories threw their PR engines into promoting the idea that alignment was a solved problem for current systems. This narrative served multiple strategic purposes:

  • Regulatory preemption: By claiming to have solved safety concerns, companies could argue against premature regulation
  • Market confidence: Investors and customers needed assurance that AI systems were controllable and predictable
  • Talent acquisition: The promise of working on “aligned” systems attracted safety-conscious researchers
  • Public legitimacy: Demonstrating responsibility bolstered corporate reputations during a period of increasing scrutiny

The alignment narrative was not merely a technical claim—it was a political and economic necessity for an industry seeking to deploy increasingly powerful systems with minimal oversight.

Parallels in History of Toxicity

This pattern is depressingly familiar to historians. Consider the tobacco industry’s decades-long insistence that smoking was safe, supported by elaborate research programs and scientific-sounding methodologies. Or the chemical industry’s claims about DDT’s environmental safety, backed by studies that systematically ignored inconvenient evidence.

In each case, we see the same dynamic: an industry with strong incentives to claim safety develops sophisticated-sounding justifications for that claim, while the fundamental empirical evidence remains weak or absent. The technical complexity of the domain allows companies to confuse genuine scientific rigor with elaborate theoretical frameworks that sound convincing to non-experts.

An AI Epistemological Crisis

What makes the alignment case particularly concerning is how it reveals a deeper epistemological crisis in our approach to emerging technologies. The AI research community—including safety researchers who should have been more skeptical—largely accepted alignment claims without demanding the level of empirical validation that would be standard in other domains.

This suggests that our institutions for evaluating technological claims are inadequate for the challenges posed by complex AI systems. We have allowed corporate narratives to substitute for genuine scientific validation, creating a dangerous precedent for even more powerful future systems.

Implications for Technology Governance

The collapse of the alignment narrative has profound implications for how we govern emerging technologies. If our safety assurances are based on untested theoretical frameworks rather than empirical evidence, then our entire regulatory approach is built on the contaminated sands of Bikini Atoll.

Bikini Atoll, 1946: U.S. officials assured displaced residents the nuclear tests posed no long-term danger and they “soon” could return home safely. The atoll remains uninhabitable 78 years later—a testament to the gap between institutional safety claims and empirical reality.

This case study suggests several reforms:

  • Empirical burden of proof: Safety claims must be backed by rigorous, independently verifiable evidence
  • Adversarial testing: Safety evaluations must actively attempt to surface hidden capabilities
  • Institutional independence: Safety assessment cannot be left primarily to the companies developing the technologies
  • Historical awareness: Policymakers must learn from previous cases of premature safety claims in other industries

The “emergent misalignment” research has done the industry a proper service by demonstrating what many suspected but few dared to test: that AI alignment, as currently practiced, is weak cotton filtration rather than genuine purification.

It is almost exactly like the tragedy of a century ago, when GM cynically “proved” leaded gasoline was “safe” by conducting studies designed to hide the neurological damage, as documented in “The Poisoner’s Handbook: Murder and the Birth of Forensic Medicine in Jazz Age New York”.

The pace of industrial innovation increased, but the scientific knowledge to detect and prevent crimes committed with these materials lagged behind until 1918. New York City’s first scientifically trained medical examiner, Charles Norris, and his chief toxicologist, Alexander Gettler, turned forensic chemistry into a formidable science and set the standards for the rest of the country.

The paper demonstrates that harmful capabilities were never removed: they were simply hidden beneath a thin layer of alignment training, easily disrupted by seemingly innocent interventions.

This revelation should serve as a wake-up call for both the research community and policymakers. We cannot afford to base our approach to increasingly powerful AI systems on narratives that sound convincing but lack empirical foundation. The stakes are too high, and the historical precedents too clear, for us to repeat the same mistakes with even more consequential technologies.

This narrative arc should be troublingly familiar. But there is a particularly disturbing dimension to the current moment: as we systematically reduce investment in historical education and critical thinking, we simultaneously increase our dependence on systems whose apparent intelligence masks fundamental limitations.

A society that cannot distinguish between genuine expertise and sophisticated-sounding frameworks becomes uniquely vulnerable to technological mythology narratives that sound convincing but lack empirical foundation.

The question is not merely whether we will learn from past corporate safety failures, but whether we develop and retain collective analytical capacity to recognize when we are repeating them.

If we do not teach the next generation how to study history, to distinguish between authentic scientific validation and elaborate marketing stunts, we will fall into a dangerous trap in which increasingly sophisticated corporate machinery exploits the public’s diminished ability to evaluate its claims.

Newly Declassified: How MacArthur’s War Against Intelligence Killed His Own Men

Petty rivalries, personality clashes, and bureaucratic infighting in the SIGINT corps may have changed the course of WWII.

A newly declassified history from the NSA and GCHQ, “Secret Messengers: Disseminating SIGINT in the Second World War,” details the messy reality of being a British SLU (Special Liaison Unit) or American SSO (Special Security Officer).

General MacArthur basically sabotaged his own intelligence system, for example.

…by 1944 the U.S. was decoding more than 20,000 messages a month filled with information about enemy movements, strategy, fortifications, troop strengths, and supply convoys.

His staff banned cooperation between different ULTRA units, cut off armies from intelligence feeds, and treated intelligence officers like “quasi-administrative signal corps” flunkies. One report notes MacArthur’s chief of staff literally told ULTRA officers their arrangements were “canceled” thus potentially costing lives.

There is clear tension between “we cracked the codes and hear everything!” and “our own people won’t listen”.

As a historian, I have always seen MacArthur as an example of dumb narcissism and cruel insider threat, but this document really burns him. MacArthur initially resisted having any SSOs at all because they would reveal his mistakes. Other commanders obviously welcomed such accurate intelligence, which makes it especially clear how MacArthur was so frequently wrong despite being given all the tools to do what’s right.

He literally didn’t want officers in his command reporting to Washington, because he tried to curate a false image of success against the reality of his defeats. And he obsessed over a “long-standing grudge against Marshall” from WWI. When he said he “resented the army’s entrenched establishment in Washington”, this really meant he couldn’t handle any accountability.

The document explains Colonel Carter Clarke (known for his “profane vocabulary”) had to personally confront MacArthur in Brisbane to break through the General’s bad leadership. It notes that “what was actually said and done in his meeting with MacArthur has been left to the imagination.”

The General should have been fired right then and there. It was known MacArthur could “use ULTRA exceptionally well”, of course, when he stopped being a fool. Yet he was better known for a habit of choosing to “ignore it if the SIGINT information interfered with his plans.” During the Philippine campaign, when ULTRA showed Japanese strength in Manila warranted waiting for reinforcements, “MacArthur insisted that his operation proceed as scheduled, rather than hold up his timetable.”

Awful.

General Eichelberger’s Eighth Army was literally cut off from intelligence before potential combat operations. When Eichelberger appealed in writing and sent his intelligence officer to plead in person, MacArthur’s staff infuriatingly gave them “lots of sympathy” and no intelligence. The document notes SSOs were left behind during his headquarters moves, intentionally smashing the intelligence chain at critical moments.

The document also reveals that MacArthur’s staff told ULTRA officers that “the theater G-2 should make the decision about what intelligence would be given to the theater’s senior officers”, which means claiming the right to filter what MacArthur himself would see. That documents such dangerously stupid operational security that historians should take serious note.

It’s clear MacArthur wasn’t merely guilty of bureaucratic incompetence; he was purposefully elevating his giant fragile ego and personal disputes into matters that unnecessarily killed many American soldiers. Despite being given perfect intelligence about enemy strength in Manila, the American General blindly threw his own men into a shallow grave.

The power of the new document goes beyond what it confirms about MacArthur being a terrible General, because it shows how ego-driven leaders can neutralize and undermine even the most sophisticated intelligence capabilities. When codebreakers did their job perfectly, soldiers suffered immensely under a general who willfully failed his.

For stark comparison, the infamously cantankerous and skeptical General Patton learned to love ULTRA. Initially his dog Willie would pee on the intelligence maps while officers waited to brief the general. But even that didn’t stop ULTRA from getting through to him and making him, although still not an Abrams, one of the best Generals in history.

General Patton in England with his M-20 and British rescue dog Willie, named for a boy he met while feeding the poor during the Depression. Source: US Army Archives

“Mechahitler” xAI Co-Founder Quits Swastika Brand to Start VC

Another tech executive from the infamous X brand departs amid controversy, raising questions about industry accountability.

The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia…

Babuschkin’s departure comes after a tumultuous few months for xAI, in which the company became embroiled in several scandals related to its AI chatbot Grok. For instance, Grok was found to cite Musk’s personal opinions when trying to answer controversial questions. In another case, xAI’s chatbot went on antisemitic rants and called itself “Mechahitler.”

The fail-upward pattern here is striking: lofty rhetoric about humanity’s future paired with infamous X-branded products that actively cause harm. While Babuschkin speaks of building AI “safely to encourage the flourishing of future generations,” his company’s chatbot Grok was generating violent hate speech including antisemitic content and positioning itself as a digital Hitler. That’s literally the springboard he’s using to launch his investment career.

“It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” [Queensland University of Technology law professor Nicolas Suzor] said. […] Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

This disconnect between aspirational language and willfully harmful outcomes reflects a broader problem in tech leadership. Historical awareness shows us how empty emotive future-oriented rhetoric can mask concerning agendas:

  • Authoritarian movements consistently frame discriminatory policies as protecting future generations
  • Eugenics programs were justified using language about genetic “health” and societal progress
  • Educational indoctrination was presented as investment in humanity’s future
  • Population control measures were framed as ensuring a “better” tomorrow

The concerning pattern isn’t the language itself (similar to how Nazi rhetoric centered on future prosperity), but how it’s deployed to justify harmful technologies while deflecting accountability. When a company’s AI system calls itself “Mechahitler” while its leadership speaks of “flourishing future generations,” we should ask basic and hard questions about the huge gap between stated values and actual observed outcomes that are “more in line with Musk’s ideological view”.

A Nazi “Afrikaner Weerstandsbeweging” (AWB) member in 2010 South Africa (left) and a “MAGA” South African-born member in 2025 America (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters

Tech leaders routinely use futuristic-sounding rhetoric to market products that surveil users, spread misinformation, or amplify harmful content. Historical vigilance requires examining not only what they say but what their technologies actually do in practice. Mechahitler was no accident.

The real red flag is more than a single phrase—it’s the pattern of using humanity’s highest aspirations to justify technologies that demonstrably harm human flourishing. Just look at all the really, really big red X flags.

Twitter changed its logo to a swastika

The Nazi use of “flourishing” language was particularly insidious because it hijacked universally positive concepts (growth, prosperity, future well-being) to justify exclusion, violence, and ultimately genocide. This rhetorical strategy made their radical agenda seem like common sense – who wouldn’t want future generations to flourish? The key was that their definition of “flourishing” required the elimination of those they deemed inferior. Connecting modern tech rhetoric about “flourishing future generations” to historical patterns is historically grounded. The Nazis absolutely used this exact type of language systematically as part of their propaganda apparatus.

Integrity Breaches and Digital Ghosts: Why Deletion Rights Without Solid Are Strategic Fantasy

The fundamental question a new legal paper struggles with—though the author may not realize it—is a philosophical one of human persistence versus digital decay.

There is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This intersection of death, technology, and privacy law has remained relatively ignored until recently.

Take Disney’s 1964 animatronic representation of Abraham Lincoln as one famous example, especially after it was appropriated by the U.S. Marines for target practice. Here was an animatronic figure of America’s most loved President, crude by today’s standards, that somehow captured enough essence to warrant both reverence and target practice. The duality speaks to a fundamental turbulence in what constitutes an authentic representation of the dead.

Oh no! Not the KKK again!

In war, as in security, we learn that all things tend toward entropy. The author of this new legal paper speaks of “deletion rights” as though data behaves like physical matter, subject to our commands. This reveals a profound misunderstanding. Lawyers unfortunately tend to have insufficient insight into present technology, let alone the observable trends pointing at the future.

This isn’t time for academic theorizing—it’s threat assessment. When we correctly frame digital resurrection as weaponized impersonation, the security implications become immediately clear to anyone who understands asymmetric warfare.

Who owns energy? It can be transformed, transmitted, and duplicated, but never truly contained. We are charged (pun intended) for its delivery (unless we are Amish) yet neither we nor the source “own” the energy itself, although we do own the derivative works we create using that energy.

Digital traces thus follow different laws than this legal paper recognizes. A voice pattern, once captured and processed through sufficient computational analysis, can become more persistent than the vocal cords that produced it. Ask me sometime about efforts to preserve magnetic tapes of “oral history” left rotting in abandoned warehouses of war torn Somalia.

While the availability leg of the digital security triad (availability, confidentiality and integrity) is now so well understood it can promise 100% lossless service, think about what’s really at risk here. We’re not facing a privacy or availability problem—we’re facing an identity warfare problem of integrity breaches.

When I can resurrect your voice patterns, your writing style, and your decision-making, complete with “auth”, uptime and secrecy aren’t the primary loss. I’m stealing authority, weaponizing authenticity. This is the nature of 21st century information warfare that 20th century legal doctrines are unprepared to face.

On the Nature of What Persists and What Decays

Consider the lowly human fingerprint. Unique, persistent, left unconsciously upon every surface we touch. It’s literally spread liberally around in public places. Yet fingerprints fade. Oil oxidizes. Surfaces weather. The fingers that made them change, deteriorate and eventually return to dust.

There is discomfort in our natural decay, but also an inevitability, despite the technological attempt over millennia to deny our fate—a mercy built into the physical world.

The mathematical relationships that define how someone constructs sentences, their choice of punctuation, their temporal patterns of communication—these digital fingerprints are abstractions that can outlive not merely the person, but potentially the civilization that created them.
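To make the abstraction concrete, here is a hedged toy sketch (my own illustration, far cruder than real attribution models): a stylometric “fingerprint” built only from sentence lengths and punctuation habits. Even these few numbers can outlive the messages they were computed from.

```python
import re
from collections import Counter

# Toy stylometric fingerprint: sentence length and punctuation
# frequencies only. Real attribution systems use far richer features;
# the point is that the pattern survives after the text is deleted.


def fingerprint(text: str) -> dict:
    # Split on terminal punctuation to approximate sentence boundaries.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    punct = Counter(ch for ch in text if ch in ",;:-()\"'")
    return {
        "avg_sentence_len": len(words) / max(1, len(sentences)),
        "punct_per_word": sum(punct.values()) / max(1, len(words)),
        "top_punct": punct.most_common(1)[0][0] if punct else None,
    }


fp = fingerprint("Well, I disagree; strongly, in fact. Shall we review?")
# fp now characterizes the writer's habits without retaining any of
# the original words: the "source material" is already irrelevant.
```

Deleting the sample sentence afterward does nothing to `fp`, which is the deeper truth the legal paper misses.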

The paper concerns itself, as if unaware of how history is written, only with controlling “source material”—emails, text messages, social media posts. This misses the well-worn deeper truth known to skilled investigators and storytellers: the valuable patterns have already been abstracted away. Once a sufficient corpus exists to serve intelligence (to train a model, as it were today), the specific training data becomes almost irrelevant. The patterns persist in the weights and connections of neural networks, distributed across systems that span continents.

How do you think all the fantastical Griffins (dinosaur bones found by miners) and magical Unicorns (narwhal tooth found by sailors) were embedded into our “reality”, as I clearly warned “big data” security architects back in 2012?

I have seen decades of operations where deletion of source documents was treated as mission-critical, only to discover years later that the intelligence value had already been extracted and preserved in forms the original handlers never anticipated (ask me why I absolutely hated watching the movie Argo, especially the shredded paper scene).

…I taught a bunch of Iranian thugs how to reconstitute the shredded documents they found after looting the American Embassy in Tehran.

Source: Lew Perdue

Tomb Raiders: Our Most Pressing Question is Authority Over Time

Who claims dominion over digital remains, our code pyramids distributed into deserts of silicon? The paper proposes, almost laughably, that next-of-kin should control this data as they would control physical remains. As someone who has had to protect digital records against the abuse and misuse by next-of-kin, let me not be the first to warn there is no such simplistic “next” to real world authorization models.

The lawyer’s analogy fails at its foundation. Physical remains are discrete, locatable, subject to the jurisdiction where they rest. And even then there are disputes. Digital patterns exist simultaneously in multiple jurisdictions, in systems owned by entities that may not even exist when the patterns were first captured. It only gets more and more complex. When I oversaw the technology related to a request for a deceased soldier’s email to be surrendered to the surviving family, it was no simple matter. And I regret to this day hearing the court’s decision, as misinformed and ultimately damaging as it was to that warrior’s remains.

Consider: if a deceased person’s communication patterns were learned by an AI system trained in international space or sea, using computational resources distributed across twelve nations, with the resulting model weights stored on satellites beyond any terrestrial jurisdiction—precisely which authority would enforce a “deletion request”?

The Economics of Digital Necromancy

The commercial and social incentives here are stark and unyielding. A deceased celebrity’s digital resurrection can generate revenue indefinitely, with no strikes, no scandals, no aging, no salary negotiations. The economic pressure to preserve and exploit these patterns will overwhelm any legal framework not backed by technical enforcement.

As a security guardian protecting X-ray images in any hospital can tell you, the threats are many and frequent.

More concerning: state actors don’t discuss or debate the intelligence value because it’s so obvious. A sufficiently accurate model of a deceased intelligence officer, diplomat, or military commander represents decades of institutional knowledge that normally dies with the individual. Nations will preserve these patterns regardless of family wishes or international law.

Techno-Grouch Realities

The paper’s proposed “right to deletion” assumes a level of technical control that simply does not exist yet at affordable and scalable levels. Years ago I co-presented a proposed solution called Vanish, which gave a deterministic decay to data using cryptographic methods. It found little to no market. The problem wasn’t the solution, the problem was who would really pay for it.
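The core idea behind that class of system can be sketched briefly (a simplified illustration, not Vanish itself): encrypt the data, split the key with threshold secret sharing, and scatter the shares into storage that naturally churns. Once too few shares survive, the key, and thus the data, is unrecoverable by design.

```python
import random

# Simplified Shamir secret sharing over a prime field, as an
# illustration of cryptographically enforced decay. The real Vanish
# system scattered shares into a DHT whose node churn erased them;
# here we just show that recovery fails below the threshold.

PRIME = 2**127 - 1  # Mersenne prime, room for a 128-bit key


def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into points; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n_shares + 1)]


def recover_secret(shares) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total


key = random.randrange(PRIME)
shares = split_secret(key, n_shares=10, threshold=7)

# While enough shares survive, the key is recoverable...
assert recover_secret(shares[:7]) == key
# ...but once churn erodes the population below the threshold,
# the key (and the data it encrypts) has effectively decayed.
assert recover_secret(shares[:6]) != key
```

The design choice worth noting is that decay here needs no cooperative deleter: forgetting is the default, and remembering is what costs effort.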

The market rejection wasn’t technical failure—it was cultural. Americans, in a particular irony, resist the notion that anything should be designed to disappear, generating garbage heaps that never decay. We build permanence even when impermanence so clearly would serve us far better. Our struggle to find out who would really pay for real loss cuts to the heart of the problem: deletion in an explosively messy technology space requires careful design and an ongoing cost, while preservation happens simply through rushed neglect.

Modern AI training pipelines currently are designed for inexpensive resilience and quick recovery to benefit the platforms that build them, not to protect the vulnerable with safety through accountability. It reflects a society where the powerful can always change their minds to curate a capitalized future, banking on control and denial of any inconvenient past. Data is distributed, cached, replicated, and transformed through multiple stages. Requesting deletion is like asking the waiter to unbake a cake by removing the flour and unbrew the coffee so it can go back to being water.

Even if every major technology company agreed to honor deletion requests in their current architecture—itself a GDPR requirement they struggle with—the computational requirements for training large language models ensure that smaller, less regulated actors will continue this work. A university research lab in a permissive jurisdiction can reproduce the essential capabilities with modest resources.

What Can Be Done

Rather than fight the technical reality, we must work within it, adopting protocols like Tim Berners-Lee’s “Solid” update to the Web. The approach should focus not on preventing digital resurrection, but on controlling the integrity of data through explicit authentication and attribution.

Cryptographic solutions exist today that could tie digital identity to physical presence in ways that cannot be reproduced after death. Hardware security modules, biometric attestation, multi-factor authentication systems that require ongoing biological confirmation—these create technical barriers that outlast legal frameworks.
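The shape of such a barrier can be sketched with a toy freshness-bound attestation (a hedged illustration using a plain shared key; a real deployment would bind the key to an HSM or secure enclave and a biometric check): tokens can only be minted while the holder is alive and present, and stale tokens fail verification, so they cannot be replayed posthumously.

```python
import hashlib
import hmac

# Toy liveness-bound attestation. DEVICE_KEY stands in for a
# hardware-bound secret that (by assumption) never leaves the device;
# FRESHNESS_WINDOW forces ongoing presence rather than a one-time proof.

DEVICE_KEY = b"hardware-bound-secret"  # illustrative stand-in only
FRESHNESS_WINDOW = 30                  # seconds a token stays valid


def attest(message: bytes, now: float) -> tuple:
    """Mint a token: only possible while the key holder is present."""
    ts = str(int(now)).encode()
    tag = hmac.new(DEVICE_KEY, ts + b"|" + message, hashlib.sha256).hexdigest()
    return ts, tag


def verify(message: bytes, ts: bytes, tag: str, now: float) -> bool:
    """Accept only fresh tokens with a valid tag."""
    expected = hmac.new(DEVICE_KEY, ts + b"|" + message, hashlib.sha256).hexdigest()
    fresh = (now - int(ts)) <= FRESHNESS_WINDOW
    return fresh and hmac.compare_digest(expected, tag)


ts, tag = attest(b"this is really me", now=1000.0)
assert verify(b"this is really me", ts, tag, now=1010.0)          # live, fresh
assert not verify(b"this is really me", ts, tag, now=1_000_000.0)  # stale replay fails
```

A resurrected pattern can imitate the message, but without the device-bound key and a fresh timestamp it cannot mint a valid token, which is exactly the masquerade boundary argued for here.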

The goal should not be to prevent the creation of digital patterns from the deceased, but to ensure that these patterns cannot masquerade as the living person or a representation of them for purposes of authentication, authorization, or legal standing. A step is required to establish context and provenance, the societal heft of proper source recognition. The technology exists to enable a balance of both privacy and knowledge, but does the will exist to build it?

The Long View

This technology will evolve when we regulate it, or we will wait too long and suffer a broken market exploited by monopolists—economic capture by entities that may not share democratic values. The patterns that define human communication and behavior will be preserved, analyzed, and reproduced. Whether that happens centrally planned, or distributed and democratic, matters far more than most realize now. Fighting against decentralized data solutions is like fighting the ocean tide by saying we can build rockets to blow up the moon and colonize Mars.

The wiser course is to ensure that as we cross this threshold, we do so with clarity about what persists and what decays, what can be controlled and what cannot. The dead have always lived on in the memories of the living. Now those memories can be given voice and form, curated by those authorized to represent them.

Can I get a shout out for those historians correctly writing that George Washington was a military laggard who used the French to do his work, and cared only about the Revolution so he could preserve slavery?

Historical truth has always been contested, which is why we become historians, as the tools of revision only speed up over time. Previously, rewriting history involved control of physical spaces (e.g. bookstores in Kashmir raided by police) and publishing texts over generations. Now it requires quick pollution of datasets and model weights—a far more concentrated and therefore vulnerable process without modern integrity breach countermeasures.

The question is not whether technology can make preservation more private, but whether we will manage integrity with wisdom or allow data to be subjected to ignorance, controlled by those who can drive the technology but not look in the rear view mirror let alone see the curve in the road ahead.

What persists is what we preserve either by purpose or neglect. Oral and written traditions are ancient in how they thought about what matters and who decides. The latest technology merely changes mechanisms of preservation.

When you steal someone’s authority through digital resurrection, you’re conducting what amounts to posthumous identity theft for influence operations. The victim can’t defend themselves, the audience lacks technical means to verify authenticity, and the attack surface includes every piece of digital communication the deceased ever generated.

Anyone who claims to really care about this issue should visit Grant’s Tomb, which is taller and more imposing than the Statue of Liberty. Standing there they should answer why the best President and General in American history has been completely obscured and denigrated by unmaintained trees, on an island obstructed by roads lacking crosswalks.

Grant was globally admired and respected, his tomb situated so huge crowds could pay respect

Preservation indeed.

Here lies the man who preserved the Union and destroyed slavery both on the battlefield and in the ballot box, yet his monument is literally obscured by neglect and poor urban planning. If Americans can’t properly maintain physical memorials to our most consequential leaders, what legal rights do we really claim for managing digital remains with wisdom?

Attempts at physical deletion and desecration of Grant’s Tomb have been cynical and strategic, along with fraudulent attacks on his character, yet his brilliant victories and innovations carry on.

General Grant said of West Point graduates trained on Napoleon’s tactics, who were losing the war, that he would respect them more if they were actually fighting Napoleon. Grant was a thinker 100 years ahead of his time and understood that wicked problems require new and novel methods, not just expanded execution of precedents.

President Grant’s tomb says it plainly for all to see, which is exactly why MAGA (America First platform of the KKK) doesn’t want anyone to see it.