When the World Remade Work
Generative AI reached a hundred million users in two months. No consumer technology in history had been adopted so fast. Enterprises are spending hundreds of billions on AI infrastructure. Earnings calls, strategy decks, and industry conferences all lead with it. Nobody is sleeping on this.
Previous technological revolutions had two problems. The first was adoption: it took decades for the electric motor to reach most factories, decades more for automobiles to restructure daily life. The second was redesign: even after the technology arrived, organizations kept running on the old logic. Factory owners who installed electric motors in 1900 used them to power the same shaft-and-belt systems they’d built for steam, a single big dynamo replacing the steam engine while everything else stayed exactly as before. It took forty years for anyone to realize the point wasn’t a better power source. It was a completely different factory.
AI has solved the first problem. Adoption was astonishingly fast. But the second problem, the redesign lag, is the one no revolution has ever skipped. Every major technology in the last 250 years hit the same wall: organizations absorbed the new tool into their existing structure and declared the job done. The real transformation came only when someone tore out the old structure and built around what the technology actually made possible. That has happened five times. It is happening again now.
Five revolutions, one script
The economist Carlota Perez mapped the pattern. In Technological Revolutions and Financial Capital (2002), she identified five major technological revolutions since the eighteenth century and showed that each follows a remarkably similar script.
The Industrial Revolution began with Arkwright’s Cromford mill in 1771, when mechanized cotton production demonstrated that machines could replace hand labor at a fraction of the cost. Steam and railways emerged with Stephenson’s Rocket on the Liverpool-Manchester railway in 1829, showing that mechanical power could conquer distance. Steel, electricity, and heavy engineering arrived around 1875, when the Bessemer process made cheap steel available at industrial scale; Edison’s electrical systems lit up lower Manhattan a few years later. Oil, automobiles, and mass production crystallized with the Model T in 1908, when Ford showed that complex manufactured goods could be made affordable through assembly-line organization. Information and telecommunications launched with the Intel 4004 microprocessor in 1971, putting computation that had previously filled entire rooms onto a single chip.
Each revolution clustered around a cheap, general-purpose input (water power, coal, steel, oil, microchips) that made a new way of doing everything suddenly feasible. And each revolution was visible. Nobody who saw Stephenson’s Rocket doubted that railways would change the world. Nobody who saw the Model T doubted that cars would reshape transportation. The technology’s potential was never the mystery. The mystery was always: why did it take so long to be realized?
Perez’s central insight is the cycle that follows. Every revolution passes through four phases, grouped into two periods separated by a crisis.
The first period, what Perez calls the installation phase, is dominated by financial capital. The new technology appears and entrepreneurs rush to exploit it. Then speculative capital floods in, inflating bubbles and funding infrastructure at a pace that production alone could never justify. Railway mania in the 1840s. The dotcom bubble of the late 1990s. The AI infrastructure boom of the 2020s. Inequality widens. Paper fortunes multiply.
Then the bubble bursts. The turning point, always a financial crash, separates installation from what comes next. Paper values collapse. And this is where the story gets political: what happens at the turning point depends entirely on the institutional response. After the railway crash of 1847, regulation established frameworks that enabled efficient expansion. After 1929, the New Deal created conditions for thirty years of shared prosperity: unionization, social insurance, demand management.
The second period, the deployment phase, is dominated by production capital. The technology diffuses across the whole economy. Organizations have finally restructured around what the technology makes possible. If the institutional response was adequate, the result is a golden age: stable growth, widespread adoption, rising living standards. Eventually industries saturate, idle capital seeks the next frontier, and the conditions for the next revolution begin forming.
The pattern is not deterministic. Turning points can go badly. After the dotcom crash of 2001, the response was technocratic (central banks cut interest rates) rather than structural. No new institutional settlement emerged. The information revolution’s potential was left half-realized, contributing to the financial crisis of 2008. Perez calls this “history’s longest turning point.” The institutional software running Western economies, she argues, is still written in the logic of the fifth revolution even as the sixth begins to emerge.
The common sense of each era
Each revolution brings new tools and, with them, a new techno-economic paradigm, a set of principles that becomes the common sense for how to organize all economic activity. This explains why organizational lag is structural, not a failure of imagination.
The factory system of the first revolution introduced the radical idea that production should be organized by time. Clocks, schedules, synchronized shifts. Innovations as important as the spinning jenny, and so natural now that we forget they were invented.
Steam and railways brought standardization and national-scale hierarchy. For the first time, organizations could coordinate activity across geography, and the management structures of the railroad company became the template for the industrial corporation.
The mass production revolution of the early twentieth century introduced what Frederick Winslow Taylor formalized as “scientific management”: break every activity into its component tasks, identify repetitive routines, deskill or mechanize wherever possible. Henry Ford’s assembly line was the physical embodiment of this logic. But the paradigm went far beyond the factory floor. It reshaped everything from consumer marketing (standardized products for mass markets) to government policy (Keynesian demand management to sustain mass consumption) to education (standardized curricula to produce workers for standardized jobs). Different ideologies competed to harness the same paradigm: fascism, socialism, and liberal democracy each proposed different social frameworks for mass production. Keynesianism prevailed in the postwar West.
The information revolution brought lean production, network organization, and what Perez calls “systemation”: managerial, productive, and technical activities merged into integrated systems. Hierarchies flattened. Knowledge became the critical input. Organizational boundaries blurred.
Each transition was painful because the old paradigm’s common sense was deeply embedded in everything: factory layouts, org charts, educational systems, regulatory frameworks, management theory, cultural assumptions about what “good work” looked like. The resistance was not irrational. The people defending the old paradigm were often the most competent practitioners of it. Master weavers who opposed the power loom were not anti-technology. They were experts in a system being made obsolete. Factory managers who clung to the shaft-and-belt layout were not ignorant of electricity. They were optimizing within the only framework they had ever known.
But the paradigm transitions Perez describes are only the most recent layer of a much deeper pattern. The anthropologist James Suzman argues in Work: A Deep History (2021) that the very idea of work as the organizing principle of life is itself historically contingent. For roughly ninety-five percent of Homo sapiens’ 300,000-year history, our ancestors were hunter-gatherers who worked around fifteen hours a week, had few material possessions, and organized their economies around a presumption of abundance rather than scarcity. The obsession with productivity, accumulation, and status through labor that Perez’s revolutions reshape is itself only about 12,000 years old, an artifact of the agricultural revolution that created stored surpluses, property, and the anxiety about scarcity that drives economic life today. Keynes predicted that by 2030, automation would bring us back to fifteen-hour weeks. We passed his productivity thresholds decades ago. We kept working, because the cultural machinery of scarcity is self-sustaining in ways that have nothing to do with actual material need.
This matters for understanding why organizational lag is so stubborn. Each paradigm transition asks people to unlearn more than technical habits. It asks them to rethink what work is for, what good work looks like, how much of their identity is bound up in the work they do. City dwellers have defined themselves by their occupations for thousands of years. Roman artisans organized into trade guilds called collegia that functioned as social communities, complete with their own customs, festivals, and hierarchies. The modern version is not so different. One in three Americans enters a long-term relationship with someone they met through work. When AI changes what a job requires, the resistance runs deeper than skills or paychecks. People are defending an identity, not just a livelihood.
The initial response to each new revolution was always the same: shoehorn the new technology into the old paradigm’s logic. Wire an electric motor into a shaft-and-belt layout designed for steam. Buy a mainframe and use it to process the same paperwork faster. Deploy an AI chatbot that reads from the same call center script a human used to read. The productivity gains were modest to nonexistent. The real gains came only when someone asked: what would we build if we started from scratch?
The forty-year gap
The economist Paul David identified the organizational lag in a landmark 1990 paper titled “The Dynamo and the Computer.” His argument was direct: the same gap that delayed electricity’s economic impact was delaying the computer’s. Robert Solow had famously observed in 1987 that “you can see the computer age everywhere but in the productivity statistics.” David’s explanation was that general-purpose technologies always follow this pattern. The technology arrives. The productivity doesn’t, not for decades, because the real bottleneck is organizational, not technical. Factories had to be redesigned for electricity. Businesses had to be redesigned for computing. The redesign, not the invention, is where the value comes from.
The electricity story is worth telling in detail, because it is the closest parallel to what AI faces today.
In 1881, a visitor to a textile mill in New England would have seen something unmistakable: a single massive steam engine in the basement, its power transmitted through a system of shafts, belts, and pulleys that snaked through every floor. The engine turned a central drive shaft. Leather belts connected the shaft to individual machines. The building itself was designed around this architecture, tall and narrow, so that the vertical shafting carrying power up from the basement could reach every floor with the least possible loss. The machines were arranged not by the logic of production but by the physics of power transmission: the heaviest equipment sat closest to the drive shaft, where the belts were shortest and the power loss smallest.
By 1900, the electric motor had been commercially available for nearly two decades. A factory owner could, in principle, install an electric motor at every workstation, eliminating the shafts, the belts, and the constraints they imposed on layout. The technology was ready. Almost nobody adopted it.
In 1900, electric motors provided less than five percent of mechanical drive in American factories. Adoption crept upward over the next decade, but mostly in the wrong form: factory owners replaced the steam engine with a large electric dynamo while keeping everything else exactly as before. The shafts stayed. The belts stayed. The whole multi-story layout, designed around the physics of steam, stayed. They had taken the most flexible power source in history and shoehorned it into the architecture of steam.
It took until the 1920s for a new generation of factory designers to do what should have been obvious from the start. They gave each machine its own motor. They tore out the shaft-and-belt systems. They built single-story factories with open floor plans where machines could be arranged by the sequence of production rather than the physics of power delivery. Lighting improved. Ventilation became possible. Assembly lines, which required each station to have independent, controllable power, became practical. American manufacturing productivity surged. The 1920s through the 1940s saw the fastest sustained growth in total factor productivity in the country’s history.
Robert Gordon provided the broadest evidence for this pattern in The Rise and Fall of American Growth (2016). Gordon traced the transformation of American life between 1870 and 1970, what he calls the “special century,” and showed that every major technology of that era required reinvention of the institutions surrounding it, not merely adoption.
Electricity required new factories. But it also required new building codes, new safety standards, new professions (the electrical engineer didn’t exist before electricity needed one), new financing models for capital equipment, and new management practices for production flows that steam had never allowed.
The automobile is an equally telling case. The Model T was affordable by 1913. But the car’s true economic impact came from the cascade of institutional changes it set in motion, not from the vehicle itself. Within three decades, the automobile had restructured the geography of American life: suburbs replaced urban density, highway systems replaced rail-centric transportation networks, and the shopping mall replaced the downtown merchant. Entirely new occupations emerged: auto mechanics, truck drivers, highway engineers, traffic police, insurance adjusters. Agriculture was transformed as the horse, which had consumed roughly a quarter of farmland in feed production, gave way to the tractor. Urban sanitation improved as horse waste, a genuine public health crisis in cities like New York, disappeared from the streets. None of this was contained in the Model T itself. It required roads, laws, financing systems, zoning regulations, and a wholesale reimagination of how people live and work.
Indoor plumbing tells a similar story, one often overlooked because it seems mundane. Running water eliminated hours of daily hauling from wells, indoor toilets replaced outhouses, and electric washing machines reduced a full day’s labor to an hour’s task. These changes, compounding over decades, directly enabled the surge in women’s workforce participation that reshaped the twentieth-century economy. Labor force participation among prime-age women climbed from under twenty percent in 1900 to seventy-seven percent by 1999.
The computer repeated the pattern within living memory. American businesses began buying mainframes in the 1960s and minicomputers in the 1970s. They used them to automate existing paper workflows: payroll processing, inventory tracking, billing. The machines were faster than clerks, but the work was the same work. The organizational structure around the computer was the organizational structure that had existed before it — a data processing department in the basement, feeding printed reports to managers who made decisions the same way they always had. For two decades, productivity growth was flat. Solow’s 1987 observation that computers were everywhere except the productivity statistics was not a paradox. It was the dynamo powering old belt drives, all over again.
The breakthrough came in the 1990s, when a new generation of managers redesigned business processes around what networked computing actually made possible. Walmart built supply chain management systems that tracked inventory in real time across thousands of stores and eliminated the lag between sales and restocking that had defined retail logistics for a century. Dell sold computers built to order and collapsed the gap between customer demand and production. Amazon rebuilt retail around a database rather than a storefront. None of these were faster versions of the old workflow. They were new workflows that could not have existed without the computer, just as the assembly line could not have existed without the unit-drive electric motor. The productivity surge of 1994 to 2004, the best decade for American productivity growth since the 1940s, was the deployment phase finally arriving, roughly twenty-five years after the microprocessor.
The technology was always the catalyst. But the transformation came from the organizational restructuring that followed, and that restructuring took a generation to complete.
Who benefits is never automatic
There is a comforting version of this history: technology arrives, disruption happens, eventually everyone benefits. It’s roughly true in the very long run. But “eventually” can mean a hundred years, and the decades in between are not abstract.
Carl Benedikt Frey documented this in The Technology Trap (2019). His central distinction is between technologies that enable workers (augmenting what they can do, raising their productivity and wages) and technologies that replace workers by substituting machines for human labor. The social and political consequences of technology depend almost entirely on which category dominates.
Enabling technologies are welcomed. The telescope, the spreadsheet, the power tool: each made workers more productive without threatening their livelihoods. Replacing technologies face resistance. The power loom displaced skilled weavers, the assembly line replaced craftsmen with machine tenders, and self-checkout kiosks are doing the same to cashiers now. When the displaced workers have political power, they can block adoption for decades or centuries. Medieval guilds suppressed mechanization across Europe. The Luddites smashed frames. Both were acting rationally in their own interest. The technology genuinely threatened their livelihoods, and history offered no guarantee that “eventually” would come within their lifetimes.
Most technologies do both simultaneously. The computer enabled knowledge workers while replacing clerical workers. AI enables the analyst who uses it to review forty thousand contracts while threatening the paralegal who used to do that work one contract at a time. Brynjolfsson and McAfee call this “bounty and spread”: total output grows while the distribution of that output becomes more unequal.
Daron Acemoglu, who received the 2024 Nobel Prize in Economics for his work on institutions and prosperity, sharpened this analysis in Power and Progress (2023), written with Simon Johnson. He argues that technology direction is a choice, not a destiny. The same AI system can be deployed to replace customer service workers with chatbots, what Acemoglu calls “machine intelligence,” or to give those workers better diagnostic tools that expand what they can handle, which he calls “machine usefulness.” The replacement approach produces what he terms “so-so technologies”: they displace workers without generating real productivity improvement. Self-checkout kiosks are his go-to example. They eliminate cashier jobs while transferring the work to customers, with no meaningful gain in efficiency or experience.
Acemoglu’s historical evidence is uncomfortable. During the Industrial Revolution, it took roughly a hundred years for productivity gains in textiles and railways to translate into improved wages for workers — Frey’s estimate is more conservative, around fifty years, but even his shorter timeline spans two generations. The disagreement on timing matters less than the shared conclusion: the institutional structures that distribute gains form slowly, and until they do, the gains flow to whoever already holds power. In medieval England, agricultural technology improved steadily for two centuries while peasant wealth remained flat. British peasants saw no net increase in wealth between 1100 and 1300, even as cathedrals rose across Europe on the surplus that lords extracted from their labor. Technology did not trickle down. Power determined who benefited.
The dynamics are even older than the Industrial Revolution. Suzman traces them back to the Roman economy, which was sustained by what were, from the perspective of free citizens, intelligent working machines: slaves. The consequences look familiar. Slave-owning estates displaced smallholder farmers. Free workers competing with slave labor faced structural disadvantage. Wealth concentrated at the top — during the final century of the Roman Empire, three families may have been the richest private landowners in all of history. The institutional response also looks familiar: Roman workers organized into trade guilds, the collegia, that lobbied for their members, secured public contracts, and functioned as social safety nets. Aristotle, who considered slavery natural, conceded that it would only end if machines could work autonomously, “obeying and anticipating the will of others.” He thought this was fantasy. We are building those machines now. The question of who captures the value they produce is the same question the Romans faced, and they answered it badly.
The postwar period of 1945 to 1975 is the great exception, and it’s instructive precisely because it was exceptional. For three decades, the American economy grew spectacularly and the gains were widely distributed. Real wages rose for workers at every income level. The middle class expanded. Home ownership surged. But this outcome was not automatic. It was produced by a specific institutional settlement: strong unions that bargained for wage increases tied to productivity, progressive taxation that funded public goods, educational expansion that broadened the skill base, and Keynesian demand management that sustained purchasing power. When those pillars started coming down in the 1980s, the distribution of gains reverted to the historical default: concentrated at the top.
The pattern holds with uncomfortable consistency. Shared prosperity has never been the automatic byproduct of technological progress. It has always required organized countervailing power to redirect the gains. AI will be no different.
Bubbles build the future
Perez makes a counterintuitive case for bubbles: the speculative frenzy funds infrastructure that rational investment alone would never build.
Take the railway mania of the 1840s. British investors poured capital into rail networks far beyond what passenger and freight demand could justify. At the peak, Parliament was receiving hundreds of applications for new rail lines, many running through the same corridors to the same destinations. Fortunes were made and lost. Dozens of railway companies went bankrupt. The crash of 1847 wiped out vast amounts of paper wealth and ruined thousands of investors. But when the dust settled, Britain had a national railway network. Tracks that would never have been laid through rational cost-benefit analysis were in the ground. The infrastructure remained after the speculation vanished, and it became the foundation for the Victorian economic boom — the golden age that Perez’s framework predicts should follow a turning point.
Every revolution repeated the pattern. Canal mania in the 1790s built waterways, steel and telegraph speculation in the 1870s wired continents, and the automobile boom of the 1920s paved roads and built filling stations. The dotcom bubble of the late 1990s was no different. Pets.com, Webvan, and a thousand other ventures burned through capital and disappeared. But the speculation also funded the laying of fiber optic cable across ocean floors, the construction of data centers, and the training of a generation of software engineers. The companies vanished. The infrastructure remained. Amazon, Google, and the modern internet economy were built on fiber and servers that speculation had paid for.
The AI frenzy of the 2020s looks identical. Hundreds of billions of dollars are flowing into physical infrastructure. New chip foundries are under construction in Arizona, Ohio, and across East Asia, each costing tens of billions. Hyperscale data centers are going up so fast that they are straining regional power grids — a single large training cluster can consume as much electricity as a small city. Model training runs now cost hundreds of millions of dollars, and the companies funding them cannot yet demonstrate commensurate returns. Nvidia’s market cap has multiplied several times over. Startup valuations have decoupled from revenue in ways that recall 1999. Much of this capital will be wasted. Many AI startups will fail. But the chip foundries, the data centers, and the trained models (especially the open-weight models that anyone can run) do not disappear when stock prices fall. Like railways and fiber optics, the capacity to run AI at scale is being built right now, and it will still be there when the correction comes.
The bubble itself is not the danger. In Perez’s framework, bubbles are the mechanism by which societies fund infrastructure they could never justify through careful planning. The danger is what happens at the turning point, the crisis that separates the frenzy from what comes after.
The 1847 railway crash led to institutional reform that created frameworks for efficient expansion. The 1929 crash produced the New Deal, which built social infrastructure (unionization, social insurance, progressive taxation, demand management) that channeled mass production into thirty years of widely shared prosperity. The 2001 crash produced only rate cuts and bank bailouts, no structural reform, and the result was partial deployment, persistent inequality, and a second crash seven years later.
The turning point for AI has not yet arrived. When it does, the institutional response will determine the next thirty years. Whether AI produces a golden age or a prolonged period of concentrated gains will be decided by politics, not engineering.
Gordon’s challenge
There is one serious objection to the optimistic reading of this history, and it comes from Robert Gordon himself. The economist whose The Rise and Fall of American Growth supplied the richest account of how technology transformed American life argues that the “special century” of 1870 to 1970 was a one-time event that will not repeat.
His evidence is formidable. Between 1870 and 1940, virtually every dimension of American life was transformed simultaneously. Electricity remade factories, homes, and cities. Automobiles restructured geography, commerce, and labor markets. Indoor plumbing and sanitation freed hours of daily domestic labor and contributed directly to women’s entry into the workforce. Telephony collapsed distance for business and personal communication. Radio and television created mass media. Each of these changes could only happen once. You go from no electricity to electricity once. Horse to car, once. Outhouse to indoor plumbing, once. The special century was special because it stacked multiple one-time transformations on top of each other.
By contrast, Gordon argues, the digital revolution has been narrow. Since 1970, computing, communications, and entertainment have been transformed, but housing, food, transportation, and clothing have not changed in fundamental character. A single-sector revolution, however impressive, cannot match simultaneous revolutions across all sectors. The data backs him up: total factor productivity growth peaked during 1994-2004 and then declined. Over 2010 to 2019, the number of industrial robots in US manufacturing doubled while manufacturing productivity grew zero percent.
Gordon is skeptical that AI will produce anything comparable. He forecasts median income growth of only 0.3 percent per year through 2040, roughly one-seventh of the special century’s rate, citing demographic headwinds, rising debt, and growing inequality alongside technological limitations.
This is a serious objection, and it cannot be dismissed. But it runs into three problems.
Gordon is right that the special century’s breadth was unique. Electricity, cars, plumbing, and telephony transformed daily life in ways that digital technology has not. But AI operates on a different part of the economy. The great inventions of 1870-1970 were primarily physical: they changed how we make things, move things, and maintain our bodies. AI transforms cognition — how we think, decide, plan, and coordinate. In 1870, cognitive work was a small fraction of economic activity. In 2025, it’s the majority. A technology that transforms cognitive work won’t look like the special century. It won’t give us indoor plumbing again. But it reaches a larger share of current economic activity than any single previous revolution did.
There’s also a timing problem with Gordon’s skepticism. His own evidence shows that electricity’s productivity impact came forty years after the technology was available. The internet’s best decade for productivity was 1994-2004, roughly twenty-five years after the microprocessor. Large language models became commercially powerful only around 2022-2023. If the pattern holds, we are in the equivalent of 1885 for electricity or 1975 for computing. The organizational restructuring that would make AI productive is barely underway. Judging AI’s ultimate impact by productivity statistics in 2025 is like judging electricity by factory output in 1890.
And Gordon’s own evidence contains a counter-argument. During World War II, the urgency of wartime production overrode every institutional resistance to new methods. Kaiser shipyards compressed the construction of Liberty freighters from eight months to weeks. The Willow Run plant in Michigan produced 432 B-24 bombers per month. The American stock of machine tools doubled between 1940 and 1945. The innovations in logistics, production management, and quality control developed under that pressure transferred directly into the peacetime economy. The organizational restructuring that normally takes decades was compressed into years. If external pressure can shorten the lag, the question is whether competitive pressure from AI will act as a similar accelerant, or whether the restructuring will follow the slower peacetime pattern.
Brynjolfsson sees a J-curve: initially slow productivity gains followed by rapid growth as organizations restructure. The pandemic compressed digital adoption timelines — twenty years of transformation in twenty weeks, by some estimates — which suggests the organizational lag for AI may be shorter than in previous revolutions. Brynjolfsson and Gordon have a standing bet on this question: whether US private nonfarm business productivity growth will average over 1.8 percent per year from 2020 to 2029. Brynjolfsson bets yes. Gordon bets no. The stakes are four hundred dollars. The real stakes are rather higher.
What the pattern predicts
If history is any guide, the next two decades will be frustrating for AI optimists. Five revolutions of evidence say the same thing: the technology shows up decades before the payoff, and only extraordinary pressure has ever compressed the gap.
Start with the lag itself. Every revolution’s loudest advocates predicted rapid transformation. Every one took at least twenty years to deliver its full economic impact, usually closer to forty. This is structural, not accidental. New organizational forms have to be invented. A generation of practitioners has to be trained. Institutional frameworks have to catch up. None of this can be skipped. For AI, this means the models available today may already be more powerful than what organizations can absorb. The bottleneck has moved from technology to the capacity of human organizations to redesign themselves. When MIT researchers found in 2025 that ninety-five percent of enterprise GenAI pilots failed to move beyond proof of concept, they were measuring the installation-deployment gap in real time.
The lag is not a neutral waiting period. Perez is explicit about what happens during the installation phase: financial capital dominates, inequality widens, and the gains concentrate among a small class of technology entrepreneurs and investors while workers in legacy industries lose ground. The Gilded Age of the 1880s and 1890s was an installation period. So was the dotcom era of the late 1990s. Both produced extraordinary wealth at the top and stagnation or decline for large segments of the workforce. The AI installation phase is producing the same dynamics. The companies building AI infrastructure are among the most valuable in history. The workers whose tasks are most exposed to automation are not sharing in those gains. The political polarization and populist anger visible across every industrialized democracy are what Perez’s framework predicts for a late-frenzy phase: the social signature of an installation period that has not yet reached its turning point.
The bubble will also correct. Perez’s framework predicts a financial crash between the frenzy and the golden age, and the AI boom has every hallmark of a late installation period approaching its turning point. When the correction comes, the political response will determine the next thirty years. The New Deal channeled the 1929 crash into structural reform and thirty years of shared prosperity. The response to the 2001 crash was technocratic half-measures, and the result was partial deployment and persistent inequality. Both paths are available.
And the direction question remains wide open. Whether AI agents expand what workers can do or replace them at lower cost is not a technological question. It’s a power question. Without organized countervailing pressure, the default direction has always favored incumbents and capital owners. The current moment, when AI architecture is still fluid and organizational norms are still forming, is the window in which that direction gets set.
None of this is destiny. The patterns are strong, but every revolution produced variations. The postwar golden age was better than the Gilded Age. The digital revolution’s institutional response was worse than the New Deal’s. Outcomes vary because people make different choices. Understanding the pattern doesn’t predict the future. It reveals the choice points. The organizational, institutional, and political decisions being made right now about how to deploy AI are the ones that will matter most.
The counter-movement
Karl Polanyi saw the pattern before Perez formalized it. In The Great Transformation (1944), he argued that every major expansion of market-driven technology triggers a counter-movement from society to protect itself. Market forces push outward, turning labor, land, and even money into commodities to be traded. When the disruption cuts deep enough, society pushes back with demands for regulation and redistribution. Luddites smashing frames and guilds blocking mechanization were not irrational. Neither were the Roman collegia who lobbied against slave labor. Each was a predictable response to economic disruption that genuinely threatened livelihoods. Polanyi called this the “double movement,” and he saw it as the defining dynamic of industrial capitalism.
The counter-movement can go two ways, and the distinction matters enormously. When it blocks adoption entirely, you get what Frey calls a technology trap: centuries of suppressed innovation in medieval Europe, decades of delayed mechanization in sectors where guilds held power. When it channels disruption into institutional reform, you get the conditions for a golden age: the labor laws, public education, social insurance, and collective bargaining that turned the mass production revolution into the broadest period of shared prosperity in human history. Both responses are rational. Both have deep historical precedent. The outcome depends on whether the counter-movement is captured by those who want to freeze the old order in place or by those who want to negotiate new terms for the emerging one.
For AI, the counter-movement is already forming. The European Union’s AI Act is the most comprehensive regulatory framework yet attempted for autonomous systems. Proposals to tax automation, restrict algorithmic hiring decisions, and protect workers from AI-driven surveillance are multiplying across jurisdictions. Public polling shows strong support for AI regulation, mirroring what Frey documents in response to every previous wave of labor-displacing technology. At the same time, some responses look more like technology traps than institutional reform: blanket bans on AI in creative industries, regulatory frameworks so rigid they would freeze the current organizational paradigm in place rather than allowing a new one to emerge.
History won’t settle the question. But it does tell us what to watch for. Every revolution eventually produced a new organizational paradigm, a crystallized model that made the new logic legible. The factory system crystallized around time-keeping and synchronized shifts. Mass production got Taylorism and the assembly line. Lean production and network organization served the same role for the information revolution. For AI, that paradigm does not exist yet. We can see its outlines in early experiments: domain teams that own their AI capabilities, workflows where humans review AI output rather than producing it from scratch, organizations that prize judgment over raw throughput. But nobody has built the equivalent of Ford’s assembly line for the AI era. Nobody has made the new common sense obvious yet. When that model emerges, the deployment phase begins in earnest.
Suzman ends Work: A Deep History with an observation that cuts beneath all of this: “We are a stubborn species, deeply resistant to making profound changes in our behavior and habits, even when it is clear that we need to do so. But when change is forced upon us we are astonishingly versatile.” That captures the full pattern. Five revolutions of evidence say organizations will resist, the redesign lag will frustrate, and the eventual restructuring will be driven by pressure more than by vision. But the same evidence says the restructuring will come. The choices being made right now about institutions, power, and the direction of AI will determine whether the result is a golden age or another Gilded one.
The next chapter turns from history to the present, where those choices are being made.