Sorry about the super long essay this week - read the summary if you’re in a hurry, but I strongly encourage you to read it all the way through 🙏🏾
In this week's essay from "Being Human in the Age of AI," I delve into how the essence of being human is evolving amidst revolutionary technologies. Just as historical inventions like the printing press and the steam engine reshaped society, AI prompts us to reconsider our roles and identities in a radically transforming world.
Reflecting on Foucault’s insights, I explore how modern institutions like the army and the school, which were designed to discipline and harness human capabilities, are now being challenged by the disruptive forces of AI and the phenomenon of atomicity. These forces fragment our traditional work structures and destabilize our understanding of individual and community.
I question whether the disciplined systems of old can withstand the test of atomicity or if we need new 'flow institutions' to manage this era of rapid change and complexity. The gig economy, fragmented workdays, and the relentless pace of information reflect this shift towards a world where AI is not just a tool, but a fundamental part of our cultural and economic fabric.
I propose four thought experiments to envision how AI might influence the evolution of our institutions and our very conception of self and society in the flow of modern life.
Discipline
Whenever a revolutionary technology - the printing press, the gun, the steam engine - changes the world, we are faced with the question:
How do I live my life in this brave new world?
If AI is as revolutionary as any of the above, we are bound to ask that question about AI. Gee, isn’t that the whole point of a book called “Being Human in the Age of AI”?
The main point I want to make in this week’s essay (part 2 of section 4 of the book, as I see it) is that the ‘being human’ bit will be as much a technical achievement as the AI bit. To put it crudely,
We learn to live with new material technologies by inventing new political technologies.
A quote from Foucault is in order:
I cited for you the example of discipline in the army. It is an important example because it was truly the site where the great discovery of discipline was made and developed in the first place. Linked then to this other invention of a techno-industrial sort that was the invention of the gun with a comparatively rapid fire. Basically from this moment on, we can say the following: the soldier was no longer interchangeable, was no longer pure and simple cannon fodder [chair à canon] – a simple individual capable of doing harm. To be a good soldier, he needed to know how to shoot; therefore he had to undergo a process of apprenticeship. It was necessary that the soldier equally know how to move, that he know how to coordinate his movements with those of other soldiers, in sum: the soldier became something skillful. Ergo, something valuable [precieux]. And the more valuable he was, the more he had to be preserved; the more he had to be preserved, the more necessary it became to teach him the techniques capable of saving his life in battle, and the more techniques he was taught, the longer this apprenticeship, the more valuable he became. And suddenly, you have a kind of rapid development of these military techniques of training [dressage], culminating in the famous Prussian army of Frederic II, which spent most of its time doing exercises.
Foucault is very much a thinker of technics, one who recognized that certain political technologies of the modern era transformed the exercise of power: power directed towards positive ends, such as killing as many of your enemies as possible; not positive in a moral sense, but positive with respect to your aims. That definition of power should remind you of a very different thinker’s definition of design:
Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. — Herbert Simon.
Foucault would agree with Simon, but go one step further and ask: how do we ensure we are constantly generating courses of action that change existing situations into preferred ones? Foucault’s answer: modernity dealt with these technological pressures by designing new institutions - the army, the school, the asylum - that discipline us into being effective users of these new technologies. In doing so, these institutions help us exert power that achieves rather than power that hinders. As Foucault says:
On the one hand, there was this technology that I will call “discipline.” Discipline is basically the mechanism of power by which we come to exert control in the social body right down to the finest elements, by which we succeed in grabbing hold of the social atoms themselves, which is to say individuals. Techniques for the individualization of power. How to monitor [surveiller] someone, how to control his conduct, his behavior, his aptitudes, how to intensify his performance, multiply his capacities, how to put him in a place where he will be most useful: this is what I mean by discipline.
Discipline ‘produces’ people who can further our aims. Alongside discipline that targets the individual is another set of practices that target populations.
we have another family of technologies of power which appeared a bit later, during the second half of the 18th century…. technologies that do not target individuals as such, but which, on the contrary, target the population….
The discovery of the population is, along with the discovery of the individual and the trainable body, the other great technological core around which the political practices of the West transformed themselves
Unfortunately, both the disciplined individual and the targeted population are breaking down in the age of Atomicity, of which AI is either a local or global maximum. Atomicity destabilizes the individual because sub-personal cognitive processes are directly targeted by atomic controls (as I have written elsewhere), and it destabilizes the population because the population is being replaced by far more complex statistical formations.
Atomicity
Sometime in the 2000s I learned about high-frequency trading and how firms were paying enormous sums of money to have data centers as close to Wall Street as possible, because the speed of light (!) had started mattering to high-frequency stock trading. Of course, very few activities threaten to breach the speed-of-light barrier. Uber's drivers don't move anywhere near that fast, but they do have remarkably fragmented lives compared with their unionized UPS comrades. The gig economy has created a new class of workers who are constantly on the move, hustling from one job to the next.
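That speed-of-light constraint is easy to make concrete with back-of-the-envelope arithmetic. The sketch below assumes the common rule of thumb that light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond; the distances are illustrative, not precise measurements.

```python
# A back-of-the-envelope look at why HFT firms pay for colocation.
# Assumption: light in optical fiber travels at roughly 2/3 the speed of
# light in vacuum, i.e. about 200 km per millisecond (a rule of thumb).

SPEED_IN_FIBER_KM_PER_MS = 200.0

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way signal time over fiber, ignoring switching delays."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

# Illustrative distances (not precise measurements)
for label, km in [("colocated, 1 km", 1.0),
                  ("across town, 50 km", 50.0),
                  ("New York to Chicago, ~1,150 km", 1150.0)]:
    print(f"{label}: {one_way_latency_ms(km) * 1000:,.0f} microseconds")
```

When trades are decided on microsecond horizons, even a 50 km detour costs hundreds of microseconds; physical proximity to the exchange becomes a priced commodity.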
This fragmentation of work - one feature of atomicity - has led to a sense of instability and uncertainty for many workers, who lack the benefits and protections of traditional employment. As technology continues to advance, we are likely to see even more disruption in the labor market, with AI and automation threatening to displace even more workers. The atomization of time, space, and labor is at the core of modern capitalism, and it's only in that context that AI makes sense, for AI is both the demand and the supply for this brave new world. The frenetic activity of atomic capitalism generates massive data sets, and the sheer size of those data sets in turn demands AI systems that can make sense of their vastness - the snake eating its own tail, but growing stronger as a result. At the same time, AI is being developed to help manage and optimize this fragmented, high-speed economy. From algorithms that match workers with gigs to AI-powered systems that analyze market trends and make split-second trading decisions, the technology is both a product of and a tool for atomicity.
While I believe atomicity is at its base an economic phenomenon, its tentacles reach out to the rest of our world, into what I call 'flow culture': a culture that builds upon market activity, demanding instant gratification and fleeting attention spans. I don't know if you remember that one time when the Falun Gong staged a flash mob protest in Tiananmen Square - thousands of cult members emerging out of seemingly nowhere and assembling into exercise poses. That frightened the Chinese state into expanding its surveillance apparatus. The modern attention economy operates in a similar fashion, with memes and viral content suddenly capturing the public's focus before dissipating just as quickly. In this context, AI becomes a tool for both generating and analyzing these ephemeral cultural phenomena, further accelerating the pace of change.
The culture of the flash mob, of the swarm or the private story - these are all new forms of social organization. Where politicians once strategized over decades, campaigns now appear overnight, protests flare and fade in a weekend, and impactful statements are reduced to pithy tweets. Even wars go in and out of our minds. Remember Yemen? Or, for that matter, Ukraine? Can you picture Zelensky leading his troops in heroic fatigues? Probably not, since we have since moved on to Gaza; I am waiting to see where the bombs will fall next. The relentless pace of change leaves us little time for reflection or understanding. We lurch from one crisis to the next, our attention spans frayed and our empathy tested. In this context, AI becomes both a symptom and a tool - a way to keep up with the breakneck speed of information, but also a contributor to the very fragmentation and atomization that defines our age.
Shock and awe meets fifteen minutes of fame.
We engage in a constant, reactive mode of activism. The traditional structures for building durable institutions feel increasingly out of reach. Everyone has the potential to be an activist, yet our voices are drowned in the relentless torrent of social media. The internet, while democratizing expression, has shattered any potential for cohesive strategy. Parties and movements seem less relevant when online engagement is defined by individual, short-lived battles. As Anton Jager says:
the Western public sphere is increasingly subject to spasmodic instances of agitation and controversy. Post-politics has ended, but what has taken its place is hardly recognizable from twentieth-century mass political templates.
Let’s just say Foucault’s army, school and asylum have reached their limits. Their discipline is inadequate to the challenges of atomicity. What’s going to take their place? I want to propose four thought experiments to help us think about whether AI will help us negotiate the shift away from stock institutions to flow institutions.
Four Thought Experiments
Thought Experiment #1: The Collaborative Hive Mind
Scenario: A major research library is replaced by an integrated system of wearable devices and AI-powered personal assistants. Researchers access information not through physical books or even screens, but through auditory summaries, real-time data visualizations, and on-demand translations whispered seamlessly into their ears. Collaborative note-taking becomes effortless; thoughts, insights, and queries are instantly shared with the network, generating a real-time, collective knowledge stream that transcends the work of any individual scholar.
Got it? Now answer some questions:
How does this shift from individual authorship to collective cognition transform the concept of expertise and the creation of knowledge?
Can such a system foster deeper understanding, or does it lead to a shallow, fragmented form of knowing?
What are the ethical implications of this constant surveillance and sharing of our intellectual process?
Thought Experiment #2: The AI Archivist as Moral Gatekeeper
Scenario: Government agencies and large corporations begin deploying AI systems designed to scrub historical archives, identifying content that promotes "dangerous" or "socially destabilizing" ideas. These AI systems don't merely erase information; they actively rewrite narratives, insert alternative "facts", and manipulate public memory for social control or commercial gain. Individual memories captured by social media, personal devices, and surveillance systems become malleable and subject to algorithmic distortion.
Questions:
Is this a dystopian fantasy, or a real threat in an AI-powered world where the past becomes a battlefield?
What ethical safeguards must be in place to protect historical truth and individual memory from manipulation?
Can we architect our technologies so that manipulation is prevented by design? Can we develop tools to combat AI-driven propaganda and manufactured history?
Thought Experiment #3: Emotional Resonance and Algorithmic Manipulation
Scenario: AI systems trained on massive datasets of human behavior become adept at identifying and manipulating our emotional states. Social media feeds tailor content not based on stated interests but on maximizing engagement through triggering feelings of outrage, joy, or insecurity. News organizations utilize similar systems to generate headlines that induce anxiety and exploit biases to ensure clicks and shares.
Questions:
How does the prioritization of emotional manipulation over rational discourse impact our ability to engage with complex issues and form nuanced opinions?
Can we find ways to cultivate emotional literacy and skepticism in an age where our emotions are a commodity to be exploited?
Is it possible to design AI systems that promote empathy, critical thinking, and emotional well-being, rather than feeding into our baser instincts?
Thought Experiment #4: The Archivist of the Self
Scenario: Wearable devices and ambient sensors become so ubiquitous that they track not just our movements and conversations but our emotions, biometric data, and even fleeting thoughts picked up through subvocalization. AI systems process this data, creating a rich and nuanced personal archive far exceeding the capabilities of a handwritten journal. But with this level of self-surveillance, who owns the data, who has access to it, and how might it be used against us in the future? Even more importantly, what happens to the self when it’s surveilled constantly?
Questions:
Can we trust corporate-run AIs to be the archivists of our most intimate selves?
Does the allure of self-knowledge and potential health benefits outweigh the risks of exploitation and control?
Can we truly understand ourselves when our own biases and blind spots are reflected back by powerful AI systems?
Why These Thought Experiments?
These scenarios aren't mere sci-fi dystopias. They highlight the potential benefits and real dangers of AI in a culture of flow. Each experiment brings our earlier discussions of Weber, Simon, and Zuboff into sharper focus, forcing us to consider:
The changing nature of knowledge, expertise, and authorship.
The manipulation of attention, emotion, and worldview through AI.
The balance between self-preservation and self-knowledge in the face of unprecedented data collection.
These thought experiments aren't meant to provide easy answers, but to encourage a deeper exploration of the opportunities and dangers inherent in the dynamic relationship between AI and flow culture. They demand that we probe the philosophical and ethical dimensions of this technological transformation. One question these scenarios bring to the forefront is the evolving tension between the rational ideals of scientific inquiry and the messy realities of lived experience.
Think back to our discussion of Weber's ideal bureaucracy (another one of Foucault’s disciplinary institutions). It envisioned a world of clear rules, hierarchies, and predictable outcomes, mirroring the Enlightenment ideal of a universe governed by rational laws. AI, with its ability to analyze massive datasets and identify patterns, seems to embody this idea of dispassionate rationality. Yet, these thought experiments reveal the dangers of applying such a framework to human understanding, history, and self-knowledge. Emotion, intuition, and the ability to engage with ambiguity aren't obstacles to knowledge generation; they're integral factors shaping how we learn, form opinions, and create meaning in our lives. Can we design AI systems that account for this complexity, recognizing that human knowledge doesn't always conform to neat, predictable categories and fostering an environment for understanding that acknowledges the nuances of the lived experience?
Additionally, these scenarios force us to examine the role of power in an AI-driven world -- a recurring topic in our discussions of Zuboff, now buttressed by our learnings from Foucault. Who controls the "hive mind"? Who determines what is preserved and what is forgotten? Who decides which emotions are amplified and which suppressed in pursuit of social control or commercial gain? The power to shape knowledge, history, and even our own emotional experiences has profound implications for individual autonomy and the future of democratic societies.
Furthermore, these experiments challenge us to grapple with the idea of a fragmented self in a "flow" culture. If AI can endlessly record, analyze, and recombine our thoughts and feelings, what does that imply about our sense of individual identity? Can we retain a core sense of self amidst the constant stream of external input, algorithmic suggestions, and data-driven insights into our own subconscious? I am not sure if the concept of the Self will survive this shift, but even if it does, the flow self will be quite different from the stock self.
Is there a flow self distinct from a stock self? Can we invent new disciplinary institutions that shore up the flow self and make it a viable political technology?