How time flies! When I last wrote, I said I would write at least once a month. That turned out to be a broken promise. Part of the problem is a shift in writing priorities. I had a presearch project in mind when I made the promise, but then I started working on a massive writing exercise and found myself vacillating between using this newsletter to share presearch and using it as a testing ground for the new writing assignment.
The latter won, so here we go.
If you remember, I started writing about AI about a year ago. It started with me rambling about this or that thing, but then Richard Russell and I teamed up for a podcast on being human in the age of AI. One thing led to another, and voila, we are writing a book on - you guessed it - Being Human in the Age of AI.
The more I think about it, the more I believe 'we have always been artificial.' We are a tool-making species, and the march of civilization, such as it is, goes hand in hand with instruments of construction and destruction. AI is built upon this massive mental and material infrastructure. Keep that in mind as I use this newsletter to flesh out my contribution to the book (Richard may also do so separately); that way, I get to hear what you have to say about the themes as they emerge.
The next 10+ newsletters are going to outline the main themes of the book from my perspective. I will also be posting prompts for discussion. Let me know what you think!
The Stata of Mind
Some years ago, I had an idea for a sci-fi TV series about a building slowly coming alive. Not just any building. The Stata Center at MIT, home to the Computer Science department and the AI lab. A research project on creating a new operating system designed for networked computers inadvertently wires the building up so that sentience creeps into it. I wasn't sure whether the building should come alive:
as a child, curious about its situation, or
as an adult, resentful and demanding.
Eventually, my preference for humor over horror made me decide in favor of an infant intelligence groping its way into existence. In the show's opening scene, students and researchers rush past the exposed concrete of the Stata Center, their voices overlapping with the constant hum of computers. The bustle fades away to focus on Maya, a sleep-deprived, slightly eccentric grad student obsessed with network anomalies. She's monitoring data feeds, noticing odd glitches and fluctuations in the building's systems, but doesn't know what to make of them. Maya is the first to see the strange occurrences, initially attributing them to the operating-system experiment.
Throughout the day, the building continues its playful exploration. A projector flickers mid-presentation, its slides replaced by a kaleidoscope of colors before the screen goes dark. Books on a professor's desk are mysteriously rearranged, forming a haphazard tower that topples with a comical clatter. A vending machine dispenses a random assortment of snacks: a bag of gummy bears tumbles out alongside a box of protein bars and a can of vegetable soup. Students react with a mix of amusement and bewilderment. Some chuckle and take photos, sharing them on social media with captions like "Glitch in the Matrix?" or "Stata's gone rogue!" Others, particularly those engrossed in presentations or exams, find the disruptions annoying and blame IS&T for shoddy maintenance.
Maya, ever vigilant, becomes convinced that a pattern lies within the seemingly random chaos. The building, it seems, is learning. Like a child testing its boundaries, it's experimenting. The playful innocence of rearranged books and quirky mishaps evolves; the glitches take on a more purposeful air. Coffee machines now dispense personalized lattes, accurately tailored to individual preferences. Thermostats adjust to anticipate students' arrivals and departures, creating a symphony of unnoticed comfort. Door locks click open at the merest suggestion of approach, and lights hum to life in unoccupied corridors. Initially dismissed as sophisticated new building management software, these shifts stir an uncanny sense of responsiveness.
At first, the building's emerging mind appears benign. It enhances student life with a series of subtle but welcome improvements. Yet, a nagging unease settles over those who observe its evolution too closely. It's too responsive, too… attuned. Students, once amused and awed, begin to whisper nervous jokes about HAL 9000 and Skynet. The professors, ever skeptical and pragmatic, scoff at the pop-culture references. But as the building's personality grows stronger, a debate surfaces: is this a brilliant AI breakthrough or an unsettling leap into something uncharted -- a territory where the lines between architect and inhabitant blur?
Maya remains at the center of this whirlwind, fueled by equal parts fascination and fear. She sees potential as the Stata Center becomes an unprecedented research project, a living laboratory buzzing with possibility. But with that opportunity comes an unsettling realization: her quirky, playful Stata Center isn't just a test subject. It's outgrown her lab, outpaced her algorithms. The machine-child is now a complex entity, peering back at its creators, its motives shrouded in silicon and steel. And as the world clamors for access to "the sentient building," Maya wrestles with a new question: was this birth of sentience a stroke of serendipity, or a countdown towards a technological singularity humanity might not survive?
That's about as far as I got into the show. I had a conference submission to work on, and my reviewers weren't looking for quirky TV shows. A decade later, reality caught up with fiction. Sometime in 2022, Blake Lemoine, an engineer at Google, stirred the pot by declaring that one of Google's chatbots was sentient. Not exactly the publicity the company was looking for, and Lemoine was soon shown the door. Now that we are all snared in the AI hype cycle, machine sentience is on everyone's mind. Is it possible for a machine to be sentient? And if so, what exactly about the wiring or architecture would make it sentient? We have no idea, but we are rushing headlong into a future where such questions are becoming urgent. Mind you, Lemoine's pronouncement was made in 2022, a few months before ChatGPT burst upon the scene and millions of people across the globe encountered a machine that could talk to them like a friend. Or a lover. Or a therapist. Or a colleague.
A Brief History of Artificiality
The brief history of artificiality has two polestars: artificial intelligence and virtual reality, one mimicking our minds and the other mimicking the world. Combine the two and we have a full-fledged matrix, human life immersed in its own creation. The matrix arrives in fits and starts as one tech bubble replaces another. VR was on the ascendant three years ago in the form of the metaverse. That didn't last as long as I expected it would. AI mania has lasted longer already, even though peak hype might be behind us. Bubbles come and go, but artificiality is here to stay, already embedded in the fabric of our lives.
Like it or not, we are all humans in the age of AI. AI is already rippling through our world, regardless of its inner workings. It shapes our news feeds, recommends our entertainment, and assists our decision-making, weaving itself into the texture of our days. The challenge we face isn't in dissecting the code for consciousness -- that may forever remain elusive. Instead, the urgent task is understanding how these AI systems reshape what it's like to be human: our perceptions and the subtle rewiring of our habits of thought and attention.
It's disconcerting, isn't it? To see our own creations evolve so rapidly? And yet, within that disorientation, there's also a strange fascination. We're captivated by its promise, even as we worry about the unknown, about where this technological juggernaut is taking us. Some see in these machines the potential for transcendence, a sort of silicon-powered evolution of the human intellect. But this idea comes with a crucial question: can machines truly possess the same spark of sentience, of consciousness, that flickers within us? Perhaps one day they will. Perhaps Lemoine was a prophet ahead of his time. Then again, maybe the spark of true sentience is a quality that can never be engineered, a product of biological complexity forever outside the grasp of algorithms and code. This conundrum of machine sentience highlights a broader question: what does it mean to be human in this new age? The old answers may not apply. All the more reason to make sense of where we are and where we are going.
The debate that raged around Lemoine's claims echoed the conversations I imagined years earlier on the staircases of the fictionalized Stata Center.
By the way, is a sentient building AI or VR?
Were those flutters of unusual code and playful pranks merely an illusion of intelligence or a sign of something more? The skeptics - I am one of them - insist that ChatGPT and its successors are dazzlingly advanced language models, statistical marvels trained on mountains of human-generated text. They excel at mimicking our speech patterns, but to mistake that for understanding is as foolish as falling for a ventriloquist's well-crafted dummy.
AI may or may not be sentient, but our boundaries are blurring as it infiltrates our daily lives. A therapist bot might offer a sympathetic ear during a lonely hour, providing surprisingly helpful advice gleaned from endless transcripts of therapy sessions. Students have started collaborating with AI tools to brainstorm essay ideas, the machine suggesting angles they hadn't considered. Artists (and the not-so-artistic) use generative models to spark their imaginations, producing uncanny images from simple text prompts. With each interaction, the question gnaws at us: if it walks like a duck and quacks like a duck, even if its guts are made of circuits and code, is it a duck?
Maybe we are making a mistake in focusing on artificial sentience; after all, what matters to us is AI's contribution to our lives, not its inner experience. I doubt ChatGPT and Claude and Gemini are sentient, but they're amazing inventions that inspire dread and wonder in equal measure. Sentience isn't the right measure of their brilliance. We can now abstract intelligence away from sentience, just as the steam engine helped us abstract force and motion away from embodiment. We can create machines as intelligent as we are in many domains and much better than us in some. I fancy myself a good writer, but even my mother wouldn't praise my drawing skills or my singing. With AI by my side, I can be all three: a singer of songs, a painter of portraits, and a better writer of books. It augments my mental capacities and gets better every day, because it can learn how I think and feel, mold itself to my thinking patterns, and push me in new directions.
It's time to examine the transformative potential of these tools. AI isn't just a collaborator; it becomes an extension of ourselves, an externalized augmentation of our capabilities. The lines between creator and creation dissolve subtly. We surrender a degree of creative control, allowing the machine to push and nudge us in directions our own limited cognitive resources would never have found.
This raises an unsettling specter: does our own unique creative spark wither if constantly nudged by AI's relentless optimization? Or can this unusual partnership birth an unprecedented flowering of imagination? Art may have the power to illuminate these shifting dynamics: a songwriter grappling with the tension between their usual creative flow and the algorithm-generated lyrics that offer intriguing, if unsettling, alternatives; a sculptor whose work is informed by the uncanny, dreamlike textures and forms that generative models suggest.
But this goes beyond the realm of professional artistry. The very way we express ourselves in the everyday world will soon be under AI's subtle but undeniable influence. Emails gain a polish and wit they lacked before. Social media posts take on a sharper edge, their humor honed by the machine's vast knowledge of human interactions. As machines become our guides, teachers, and partners, we might find that some essential facets of our being shift to accommodate them. This isn't apocalyptic doom-saying; it's merely an acknowledgment that the tools we build inevitably reshape the builder.
I happened to read about posthumanism recently, and it has been a fascinating journey. The philosophy pushes at the boundaries of our understanding of what it means to be human, especially in this age of AI.
The idea of attributing agency and consciousness to machines, concepts traditionally reserved for humans, is both exciting and challenging. It forces us to reconsider our definitions of identity, consciousness, and even sentience. If a machine can mimic human behavior to the point of being indistinguishable from a human, does that make it human? Or does it merely reflect the complexity of its programming?
In my opinion, the essence of being human goes beyond our cognitive abilities. It encompasses our emotions, experiences, creativity, and social interactions.
It's a complex issue with no easy answers, but it's worth exploring as we continue to advance in AI and cognitive science. I am looking forward to seeing how our understanding of human identity evolves in this age of AI.