Listen to the podcast above 👆🏾:
I have been silent for a few weeks, but things have been cooking in the background. TL;DR: going forward, this newsletter will be a joint effort with Richard Russell.
In our first "Humainity" podcast, we talk about how AI builds on a long history of computation, whether it is revolutionary in the way the printing press was or in the way writing was (both were pretty revolutionary!), and what we, as cognitive scientists rather than AI engineers, have to say about these developments.
It's our first attempt to get audio out into the world, so it's got the usual teething pains of any new venture. We hope to put out a polished product in coming weeks and months. And now a few words on the build-up to this podcast.
I like to say I am AI-adjacent.
My first love was the Foundations of Mathematics, whose developments underpinned GOFAI (Good Old Fashioned AI). A college senior ("senior" is an Indianism for anyone ahead of you in college; as a third-year student, he would have been a junior in the US) handed me a copy of Hofstadter's “Gödel, Escher, Bach” in my first week of college and instructed me to absorb its contents, or else. Around the same time, the man who was to become my PhD supervisor pioneered the use of probabilistic ideas in the cognitive sciences, well before machine learning took over the world.
All these roads could have led me to work on AI directly, but I stuck to paths that intersect the main highway without ever merging with it. There's a book by David Ruelle where he says (I am paraphrasing) that people come to science in one of two ways:
By reading a lot of science fiction novels and traveling to the stars in their dreams or
By trying to blow up their parents' house while conducting chemistry experiments in the basement.
AI attracts these two types of people too:
Those who dream of sentient machines and
Those who build toy robots using LEGO Mindstorms.
I am more of the former, but as a mathematician and cognitive scientist, my interest in artificial intelligence is in the context of its implications for humanity and our interactions with various other organisms on Earth. AI as a technology is less interesting to me than AI as a new chapter in humanity's self-understanding. For example:
A revolutionary use of AI would be to let humans experience the world through senses we currently lack, such as the infrared or ultraviolet spectrum: not only by representing those wavelengths within our visual spectrum, but also by helping us glimpse the worlds of creatures who see in infrared or ultraviolet.
Like anything else, it’s better to explore AI with a fellow traveler, and I have been doing so with a dear friend, Richard Russell (on LinkedIn: Richard Russell). Let me introduce Richard in his own words:
I'm a cognitive scientist and experimental psychologist. And a lot of my background is looking at visual perception and also things like how do we view other people and how can we change and manipulate our appearance? So I'm interested in notions of authenticity and artificiality.
And I think that this is maybe a different kind of lens for thinking about artificial intelligence. And so I'm interested to think about some of these aspects of artificial intelligence that are a little bit different than the technical or ethical dimensions of AI.
But why us? We aren’t AI engineers. Or AI ethicists. So what do we have to contribute?
Me: For people like us who are cognitive scientists, what role do you think we have in this still very technologically influenced conversation?
Richard: Certainly in the history of AI, there's been a lot of crosstalk between AI and cognitive science, psychology, and neuroscience. There's been inspiration going in both directions because we're all studying intelligence. Is it artificial? Is it human? Is it biological intelligence? These are issues on which AI, cognitive science, psychology, and neuroscience have, I think, somewhat distinctive viewpoints.
We think writing and printing offer the best analogy for our current situation: both were revolutionary technologies in their time, but today they are at the core of “The Humanities”; the technology has receded into the background, and the human condition is in the foreground.
Is it going to be like writing, in that we will be different kinds of human beings as a result, or is it going to be more like print? Print was an inflection in writing, but it didn't bring an entirely new interface into the universe. Where do you think AI is going to fall? And what do you think either outcome will mean for human experience?
We need to have a public conversation on these questions, one that steers a humaine middle path between the technologists and the ethicists. Richard and I are starting a podcast/newsletter called ‘Humainity,’ of which today’s combo is the first episode. So far, this newsletter has focused on text-based content, but from now on, audio will play as big a role as text.
Listen to the podcast here (or at the top of this essay):
We hope you’ll listen, and as always, please do share this newsletter-podcast with your friends and colleagues.