The evolution of artificial intelligence is deeply intertwined with the historical mechanisms of bureaucracy and governance. From Max Weber's detailed examination of bureaucratic systems, highlighting their rationality and systematization, to Herbert Simon's insights into the complexities of decision-making, the roots of AI stretch back well before the technology itself existed. Weber's dissection of the "ideal type" bureaucracy laid the groundwork for understanding how organizational efficiency could be maximized through clear hierarchies and rule-based systems. Meanwhile, Simon's critiques and his concept of "bounded rationality" challenged these notions, bringing a more nuanced understanding of human decision-making processes that often defy perfect rationalization.
As AI technology emerged and evolved, it naturally extended these bureaucratic and corporate mechanisms, adapting and enhancing them to fit into the digital age. The transformation dubbed "Surveillance Capitalism" by Shoshana Zuboff marks a significant milestone in this evolution. Here, the practices of data harvesting and commodification, which corporations and bureaucracies had refined over decades, found new life. AI technologies have taken up the baton, enabling the analysis and utilization of massive datasets to not just understand but also predict and influence human behavior on an unprecedented scale.
The narrative of AI's evolution is not just a chronicle of technological advancement but also a continuation of long-standing human endeavors to categorize, control, and capitalize on various aspects of life. This history reflects a series of adaptations and refinements from simple bureaucratic procedures to complex algorithms capable of handling vast amounts of data. Today’s AI systems are heirs to a legacy of maximizing efficiency and control, embodying the intelligence once confined to the ledgers and offices of bureaucrats, now expanded to global networks and cloud infrastructures.
Read on for this week's update on my book project with Richard Russell.
Systematic knowledge gathering, often under the catchall term ‘science,’ is one of the essential social functions of the modern era. Ironically, much of that knowledge production is pre-modern in its sensibility, with an individual scientist (‘the Master’) working independently or directing the work of their students (‘the Pupils’). While this artisanal work is meant to be replicable, it’s far from being mechanized. However, some early modern acts of knowledge gathering and curation are closer to the mechanizable ideal. I am thinking of population-level surveys such as the census, with its templatized questions, and bureaucratic decision-making based on criteria that can be checked off a list. If we set aside the conceit that intelligence is artificial only when it’s mechanized, we can study these procedures and institutions as part of the history of artificial intelligence.
Aren’t corporations artificial? If some of the functions of firms, such as gathering customer needs, influencing customer behavior, and drawing up budgets and projections, are forms of collective intelligence, then corporations are both artificial and intelligent, hence AI. The census taker with their standardized form and the corporate analyst with their spreadsheets and predictive models are both, in a sense, extensions of a larger cognitive apparatus. These entities – the census, the corporation – gather data and transform it into actionable insights. The bureaucracy and the algorithm are siblings, birthed from the same relentless need to quantify, categorize, and ultimately control the world around them.
The question is one of scale. The kinds of artificial intelligence bureaucracies care about are the ones that can be implemented across an entire population. How much do you owe in taxes? Do you qualify for the earned income credit? These are inherently pre-algorithmic questions, i.e., queries that can be addressed by following a prescribed set of rules after consulting a database of incomes. The tax collector can keep interpretation to a minimum: if you make X dollars, you have Y in credits (for children, a mortgage, etc.), and Z is the tax rate, then you pay (X-Y)*Z in taxes. In exceptional cases, the taxpayer can ask for redress, but the system is designed to be mechanical. Corporations, bureaucracies, and markets are bearers of this kind of intelligence, with an internal culture of setting aside other factors and concentrating only on those classifiable and quantifiable features that lead to governance, control, or profit. It is this kind of intelligence that can be turned into a commodity, and whatever else one might say about AI, the commodification of intelligence is at its heart.
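The tax rule above is simple enough to write down as a few lines of code, which is precisely what makes it "pre-algorithmic" in the sense I mean. The flat rate and the function name below are illustrative assumptions, not real tax law:

```python
def tax_owed(income, credits, rate):
    """Mechanical, rule-based tax computation: (X - Y) * Z.

    income  -- X, total income in dollars
    credits -- Y, total credits/deductions (children, mortgage, etc.)
    rate    -- Z, a flat tax rate as a fraction (a simplifying assumption)
    """
    taxable = max(income - credits, 0)  # credits cannot push tax below zero
    return taxable * rate

# A hypothetical filer: $50,000 income, $10,000 in credits, 20% rate
print(tax_owed(50_000, 10_000, 0.20))  # → 8000.0
```

The point is not the arithmetic but the design: every term is classifiable and quantifiable, interpretation is squeezed out, and the same rule applies identically to every record in the database.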
Commodification assumes that intelligence can be abstracted away from the capabilities of living, breathing human beings and turned into a resource implementable in a system, and eventually, in a machine. But the commodity has a history that precedes mechanization. Before implementing bureaucratic procedures in a machine, we implement them in social systems with organizational cultures that demand rule-following. Corporations and governments (and others) gather and ingest vast amounts of data and make decisions on this pre-algorithmic basis. The census collects data about every person in a country and makes it possible to represent the demographic diversity of the entire country in a database. The census also turns groups of people into a 'population,' a mass whose statistical features can be extracted, codified, and embedded within decision-making structures as a neatly deployable resource.

This transformation into populations abstracted as data allows for a kind of statistical governance where decisions are made based on probabilities and averages rather than individual stories and circumstances. This approach, while efficient, raises ethical questions about the reduction of human experiences to mere numbers. As such abstractions increase in prevalence, they permeate various sectors, influencing everything from public policy to healthcare, where decisions that affect human lives are made by people (and increasingly by algorithms) using the data as their source of truth. The risk here is the potential loss of the nuances and individual differences that define human existence. Over time, these data-driven decisions can reinforce stereotypes and perpetuate inequalities, as systems are often designed by those who are detached from the realities of the people affected by their calculations.
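The move from individuals to a 'population' can itself be sketched in a few lines: individual records go in, aggregates come out, and the individual stories disappear in the process. The records and fields below are invented purely for illustration:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical census records: each person reduced to a few codable fields.
records = [
    {"age": 34, "household_size": 3, "region": "north"},
    {"age": 52, "household_size": 2, "region": "south"},
    {"age": 29, "household_size": 4, "region": "north"},
    {"age": 61, "household_size": 1, "region": "east"},
]

# The 'population' as the bureaucracy sees it: aggregates, not individuals.
summary = {
    "count": len(records),
    "mean_age": mean(r["age"] for r in records),
    "median_household": median(r["household_size"] for r in records),
    "by_region": Counter(r["region"] for r in records),
}
print(summary)
```

Once the summary exists, governance can proceed from the averages alone; nothing in the aggregate records why one household has four members and another has one.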
Furthermore, this method of governance can create feedback loops where the data collected influences the behavior it is meant to measure, leading to a distorted view of reality that becomes increasingly difficult to challenge or change.
The first important theorist of bureaucracy, Max Weber, would say: this reflects the quintessential bureaucratic mindset, emphasizing rationalization, standardization, and the transformation of complex human qualities into predictable, systematized components. Though written well before the term Artificial Intelligence entered our lexicon, Weber's analysis of bureaucracy already pointed to a certain kind of behavior with 'algorithmic characteristics.' His model emphasized several core features. A clear hierarchy defined lines of authority, while specialized roles turned individuals into experts in their domain. Rules and procedures dictated decision-making, aiming to replace personal whims with impersonal, consistent logic. Promotions and hiring leaned on technical merit, not social connections, and meticulous record-keeping preserved the organization's institutional memory.

Weber's analysis centered on an "ideal type" -- a theoretical model designed to illuminate the most rational and efficient organizational form. This "ideal" bureaucracy wasn't a prescription for perfection, but rather a lens through which real-world bureaucracies could be understood. For Weber, the bureaucratic model offered clear advantages. It promised to maximize efficiency, ensuring precision and speed. Rational, rule-based systems should minimize the impact of individual biases. Moreover, the bureaucratic structure fostered predictability and stability, essential for long-range planning. However, Weber's model has faced critique. Rigid adherence to rules can stifle adaptability and innovation. The laser focus on impersonal systems risks dehumanizing workers, cultivating a sense of powerlessness within the system. Most importantly, Weber worried that the means (rules) could become the ends, with organizations losing sight of their original goals in a quest to perfect internal procedures.
Weber's analysis, while predating AI, remains relevant today. The algorithmic decision-making at the core of many AI systems shares similarities with the rule-based nature of bureaucracies. There's a real concern that AI could be used to reinforce inflexible bureaucratic tendencies. The risk of "goal displacement," where optimizing an AI algorithm becomes more important than its intended purpose, echoes Weber's century-old warning about the dangers of fixating on procedures over outcomes. Other theorists went further in their criticism of the ideal type. In his book "Administrative Behavior," Herbert Simon challenged the traditional, highly structured models of administration. He argued that a clear understanding of decision-making within organizations is essential for grasping their true dynamics. As Simon saw it, these structures rely on limited models of human decision-making: they favor measurable metrics and procedural steps, even when these oversimplify the true nature of thought and action. Against this, Simon proposed his concept of "bounded rationality." Perfect decision-making is impossible, he argued, because decision-makers must act with incomplete information, cognitive biases, limited mental processing abilities, and the pressures of complex, time-constrained environments. For Simon, organizational decisions are shaped by the search for satisfactory solutions, not perfect ones, as individuals navigate politics and their own limitations. Bounded rationality forms a core pillar of his analysis, providing a more realistic way to assess administrative choices.
Simon's analysis delves into topics like organizational structure, authority, influence, and the roles of fact and value in administrative decisions. His insights fundamentally transformed the understanding of administration in both public and private sectors. "Administrative Behavior" remains a timeless classic for students and scholars of management, public administration, and anyone interested in how organizations function. While Max Weber focused on an "ideal type" of bureaucracy--a model emphasizing perfect rationality, efficiency, and order--Herbert Simon dove into the messy reality of organizational decision-making. Weber envisioned a world where clear hierarchies, rule-based systems, and specialization would form the bedrock of organizational functioning. In this model, decisions would flow from a rational application of these rules. In essence, Weber painted a picture of how bureaucracies could ideally function, while Simon showed how they function in the real world. Weber stressed the potential for rational systems, while Simon highlighted the constraints that inevitably reshape any ideal model. Where Weber analyzed organizational structure, Simon delved into the human thought processes and behaviors that take place within that structure.
And, by the way, Simon was also one of the founders of artificial intelligence.
While Simon and Weber dissected the inner workings of traditional bureaucracies, their models may seem less directly relevant in an age dominated by artificial intelligence. Yet our current landscape was built upon a heightening of the bureaucratic mindset, leading to an ongoing, profound transformation -- the meteoric rise of data harvesting and commodification for commercial purposes. This phenomenon, aptly termed "Surveillance Capitalism" by Shoshana Zuboff, fundamentally altered the power dynamics between corporations and individuals. Zuboff posits that within surveillance capitalism, our everyday experiences, behaviors, and even emotions are no longer our own. Instead, they become raw material, ruthlessly mined by tech giants. This data is analyzed, packaged, and sold to advertisers or other interested parties, shaping targeted messages designed to influence our choices and behaviors. This extraction of personal data as behavioral surplus fuels an entirely new economic logic, where profit is not primarily derived from selling goods or services, but from predicting and subtly modifying human behavior. The implications of this system, Zuboff argues, are deeply unsettling. Surveillance capitalism erodes individual autonomy as our most intimate selves are translated into data points. It facilitates manipulation, undermines democratic processes (as detailed analysis can sway elections), and ultimately threatens the fabric of society as the boundary between the private and the commercial vanishes.
While Zuboff and her followers focus on the profit motive underlying surveillance capitalism (duh!), it's also important to recognize that the profits are being derived from a bureaucratic impulse: classification and taxonomy imposed upon a cognitive surplus. You could say surveillance capitalism is the monetization of bureaucracy. Zuboff’s claims should remind you of what I said earlier about cosiety. Companies are able to harvest emotions and profit from them precisely because there’s a surplus of production of mental states for public consumption. The large language models and other machine learning models that are all the rage today are trained on this very surplus. While bureaucracies and corporations might not possess the full spectrum of human intelligence, they exhibit a kind of efficiency-driven "intelligence" that has become especially transferable to machines. This intelligence centers around optimizing processes, adhering to rules, and identifying patterns in large datasets. Weber's focus on bureaucratic efficiency and rule-based systems highlights this, as both bureaucracies and corporations strive to streamline their operations and standardize decision-making. AI, with its ability to analyze vast amounts of data and identify patterns, is ideally suited to automate and optimize many of these functions. These AI systems can predict preferences, personalize marketing, and shape choices, replicating the core "intelligence" behind many corporate profit-making strategies.
One must then consider the ultimate price of commodified intelligence. Is it the reduction of human circumstance into a series of quantifiable data points? Might a more desirable form of AI emerge – one capable of navigating ambiguity and the unquantifiable aspects of the human experience? Perhaps the future of artificial intelligence hinges not merely on computational advancement, but on the development of systems that honor the complexities of human life. This AI would draw understanding not solely from data, but from lived experience, from empathy. Only then might AI transcend its genesis as a tool of control and become a force for genuine comprehension, fostering progress that acknowledges the multifaceted nature of humanity.