The danger of enlightenment was never to the thinker; it was that the thinker endangered the legitimacy of those who held the interpretive monopoly.
The Church didn’t burn heretics because independent thought was so hard to do well; it burned them to prevent a contagious independence that threatened the entire apparatus that converted doctrinal control into material power.
It’s like worrying that a machine gun will show up at a jousting tournament.
The current professional-managerial class of journalists, academics, lawyers, doctors, and policy fellows built their authority on credentialing systems that controlled who got to interpret reality for others.
AI will indeed make some people think less. Look at all the people who buy a Tesla and let it kill them by driving into a tree. Yet AI is so broken without careful thought that it also produces the reverse effect: people use it to think more. It’s like saying a rifle with a better scope and a repeating mechanism would make people hunt less; in practice it breaks the bottleneck for those who know which end of the rifle to point.
Suddenly anyone can generate plausible legal analysis, medical interpretation, or policy framing at much greater speed. The quality varies with the person aiming the barrel, but so did the quality of what the credentialed class produced; we just weren’t supposed to try to check their integrity. And that unchecked variance, not mass stupefaction, is going to be a very, very big problem with AI.
De Weck’s new essay in the Guardian is, structurally, the lament of a priest worried about a printing press.
Our king, our priest, our feudal lord – how AI is taking us back to the dark ages
Or perhaps more to the point, he is goading the mob to light torches, grab pitchforks, and throw the modern printing press into the river. That is different from a printing-press engineer warning about safe use of the machine. Who benefits from the anxiety?
He frames it as concern for the soul of the congregation, but the actual anxiety is positional. A “fellow with the Foreign Policy Research Institute” depends on a system in which the privilege of dispensing foreign policy interpretation is kept scarce and gatekept. AI abruptly makes his particular form of cognitive labor abundant and very cheap.
I’m not much for dwelling on Kant, but I know the honest Kantian question is not “will people stop thinking?” No, Kant would demand: “which institutions currently profit from monopolizing thought-on-behalf-of-others, and what happens when that monopoly breaks?”
The true Kantian would require the professional class to examine its own complicity in manufacturing a scarcity and dependency that it now laments.
Distributed, standard measures of integrity should have been designed long before the last three decades of technology, in which personal computing power directly challenged the closed approach to controlling knowledge. Fortunately, we have some new independent ideas…