It’s been a while since I wrote this newsletter. I ended the previous round by talking about Global Risk. At that time, I had climate change and the economy in mind as inherently risky systems, but the developments in AI in the last two months have introduced a new risk factor into our lives and into public consciousness.
Permanent Extinction
Everyone and their mother is writing about AI and how it’s going to:
Take over the world
Take over our jobs
Become the highest stage of capitalism
Prove too stupid to take over anything
Some or all of these are correct, but I don’t want to add my AI doomsday speculations to an already long list. Instead, let’s look at machine learning (the dominant form of AI today) as the most sophisticated risk system we have, though far from the only one: our supply chains are also risk systems, and so are our pandemic responses.
Just as risk systems aren’t isolated from one another - AI ↔️ Economics ↔️ Pandemics ↔️ Climate - risk itself isn’t an isolated feature of the modern world. It comes intertwined with contingency, fluidity, uncertainty, and impermanence, i.e., concepts that grasp a world without guarantees. I am convinced that we now live in a world of ‘permanent extinction,’ i.e., under the standing possibility of the complete end of human civilization. It took four hundred years of research in physics to get us to nuclear weapons and two hundred-odd years of burning carbon to get us to climate collapse; it’s possible that the AI version of permanent extinction will take us only ten years. Call it the Moore’s Law of Extinction if you will.
If there’s one lesson in the burst of activity around AI as the latest tech hype, barely a year after the metaverse was hot, it’s that even the contingencies are contingent. The vast machine destabilizes itself as a matter of course, and today’s extinction is tomorrow’s clean energy. When Nagarjuna, the most important Buddhist philosopher besides the Buddha himself, said that all phenomena are empty, he had to swallow the obvious consequence: that emptiness was itself empty.
Back in the day, it was the empty self that spawned many a debate. Today we live in an empty world in which everything from politics to technology to science to health is without intrinsic existence. Emptiness has escaped from the monastery into the street. Risk is one of its characteristic manifestations.
That’s how I would like to approach AI - and other entities such as the mind, where machine-learning-inspired approaches are now dominant.
General-Purpose Reasoning
Technical aside - feel free to skip this if these terms don’t mean much to you
I also find the technical developments that have led to generative AI quite fascinating - not only because they’re impressive but also because they represent a universal approach to technology: systems built on a general-purpose foundation. Computing started with the claim of grasping the universal foundations of reason in principle - think of Turing’s universal machine; now we are getting to the universal foundations of reason in practice.
That’s quite exciting, and it has to be understood alongside other abstract generalisms such as category theory. If physics taught us how to reason about the universe in general, we are now at the cusp of reasoning about the world in general. I can’t imagine those ideas - as AI or something else - won’t affect all the ‘dismal sciences,’ from economics to biology. Active Inference and Predictive Processing are arguably the dominant POV in the neurosciences already.
Are they the ultimate theory of the mind and the world? I don’t think so, but they give me confidence that we might be able to reason about the world and intervene in it from general considerations instead of particular ones. That’s been the goal of the field called ‘complex systems’ for thirty years, but there the ideas of unification have been driven by physicists and their intuitions about general theories. Machine learning, Active Inference, and their ilk signal a framework for general-purpose reasoning that’s quite distinct from the physicist’s.
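To make the Predictive Processing intuition concrete, here’s a minimal sketch - a toy model of my own devising, not anyone’s canonical formulation - in which ‘perception’ is nothing but gradient descent on precision-weighted prediction error. The quadratic generative model g(mu) = mu² and all the numbers are illustrative assumptions.

```python
# Toy Predictive Processing loop (an illustrative sketch, not a
# published model): a single hidden cause mu generates a sensory
# prediction g(mu), and perception is gradient descent on the
# precision-weighted prediction error.

def g(mu):
    """Generative model: how a hidden cause produces sensory data."""
    return mu ** 2

def g_prime(mu):
    """Derivative of g, used to propagate the error back to the cause."""
    return 2 * mu

def perceive(obs, mu=1.0, prior=1.5, pi_obs=1.0, pi_prior=1.0,
             lr=0.05, steps=200):
    """Infer the hidden cause of `obs` by minimizing prediction error."""
    for _ in range(steps):
        err_obs = obs - g(mu)      # sensory prediction error
        err_prior = prior - mu     # deviation from the prior belief
        # Descend the error functional
        # F = pi_obs/2 * err_obs**2 + pi_prior/2 * err_prior**2:
        mu += lr * (pi_obs * err_obs * g_prime(mu) + pi_prior * err_prior)
    return mu

# The inferred cause lands near 1.97: between what the data implies
# (sqrt(4) = 2) and what the prior expects (1.5), weighted by precision.
print(perceive(obs=4.0))
```

Notice what’s general-purpose about it: nothing in the loop cares what mu or obs stand for - swap in a different generative model and the same error-minimizing machinery does the inference.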
The science of an empty world