The possibility (and promise) of AI touches upon so many questions about what it is to be human.
For example: can machines be conscious? Either we are conscious machines, in which case particular organizations of matter produce consciousness and we have to ask why those organizations and not others, or there's some mysterious non-mechanical stuff that makes us conscious. These are important questions the field of AI has had to address since the 1950s.
But until recently, the machines we were building weren’t complex enough to invite attributions of sentience.
Much of the excitement around AI today is about the capabilities of the new systems and what those capabilities say about their capacity for agency, autonomy, and sentience. This excitement doesn’t excite me. I am not interested in AI per se, at least not to the extent that AI means building machines that replicate or surpass human capacities.
Can they play chess better than us? Sure. Can they write stories better than we do? HAL is already at the 95th percentile. Will we soon be able to enter a plotline into an AI video creator and generate a movie? Probably.
None of these achievements bother me. I go running as often as I can, even though I know my bicycle - let alone my car - will take me from A to B a lot faster. I am not running a race against the ghost of Fords past.
Yeah, but we are more proud of our brains than our bodies, aren’t we?
We have always known that lions are stronger and faster than we are, and that’s OK. But we have never encountered a species that’s cleverer than we are, and the worry that we might is at the root of our fascination with AI.
It’s a boring worry.
What’s not boring is people using AI to deploy killer robots, which is a real worry, or robots as metallic capital replacing labor altogether. I am not worried about AI acquiring God-like powers. Until recently, most humans worried about superhuman intelligences of divine and demonic origin. The Devil is clearly cleverer than you are, but I don’t spend much time thinking about him. He may be corrupting my soul without my knowledge, but I am content in my ignorance of his influence.
Stupidity → intelligence → superintelligence is a terrible way to organize the space of possible minds; it’s theology in mechanical disguise, and a misguided way both to do theology and to make machines.
If AI knocks us off the intelligence pedestal and helps us recognize the diversity of minds on earth (and perhaps on other planets and in other galaxies), it will have done us a big service. Read my lips:
AI changes how we think about intelligence: Copernicus overthrowing the geocentric worldview was a win for physics. Darwin overthrowing special creation was a win for biology. AI overthrowing human intelligence as the measure of all minds will be a win for the mind sciences.
AI can extend our minds: Working with AI to go where we have never gone before is a big win for us, as long as we don’t use it to destroy the earth - which is the real danger, not ‘AI will eat us alive.’
The big question, as far as I am concerned: how do we use AI to extend our minds and to explore the space of possible minds? That’s the question I have been exploring in the essays below:
Links Worth Clicking
Watch an A.I. Learn to Write by Reading Nothing but Jane Austen (NYT experiment)
Will A.I. become the next McKinsey? (article by Ted Chiang, the well-known speculative fiction author)
Google "We Have No Moat, And Neither Does OpenAI" (leaked internal Google document)
Obligatory Geoff Hinton interview:
Endnote
I was today years old when I learned that the term ‘spam’ for junk email comes from a Monty Python sketch about the canned meat, not from the meat itself.