Spectacle, Silence, Calcification: The Governance Problem Hiding Inside Every Technology Hype Cycle

research opinion

The public spectacle surrounding artificial intelligence is not where the consequential decisions are being made.

Abstract

Drawing on the cultural history of the 1920s mechanical man craze, the electrification boom and bust, the atomics [1] governance failure, Isaac Asimov’s later fiction, and the dot-com bubble, this article argues that technology hype cycles follow a recurring three-phase pattern — spectacle, silence, calcification — in which the defaults set during the loudest phase persist long after public attention moves on. The pattern is not technological but biological: a collision between exponential external systems and a species wired for short bursts of fear and desire rather than sustained governance. The article connects this historical pattern to the specific infrastructure being built today — autonomous agent permission models, cloud identity management, and default-permissive access controls — and asks whether pre-emptive governance is possible, or whether the cost of the next calcification is already being locked in.

[1] Using atomics throughout is a small act of resistance against calcification. It keeps the language in the era when the thing was still dangerous enough to name plainly. Before the committees got hold of it and smoothed the edges. And there’s an irony worth noting: the same institutional apparatus that rebranded “atomic” to “nuclear” — ostensibly for precision — is the one that failed to proliferate atomic power as an energy source. They got the word right and the governance wrong. The committees that insisted on the correct terminology couldn’t solve the actual problem. They renamed the thing and then fumbled it for seventy years.


The Mechanical Man and the Assembly Line

In January 1921, Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) premiered in Czechoslovakia and introduced the word “robot” to the world. Within a few years, the term had escaped the theater and dominated newspaper front pages, corporate marketing campaigns, pulp science fiction, and film. By the late 1920s, every inventor with a motor and some sheet metal was parading a “mechanical man” around county fairs and exhibition halls. Most of them were junk — glorified puppets, remote-controlled switches dressed in tin suits. But the hype was enormous.

The cultural historian Dustin Abnet has traced this trajectory in detail, showing how the robot went from being a European critique of American industrial capitalism to being America’s own mascot in roughly fifteen years. Čapek and the German director Fritz Lang (whose 1927 film Metropolis offered another robotic critique of Fordism) both meant the robot as a warning: look what American-style mass production does to human beings. Čapek wrote to the New York Times in 1926 directly indicting American values of speed, quantity, and efficiency. The Times illustrated his essay with a street scene in which every person — rich and poor, men and women — had a body made of metal. In Čapek’s view, the paper suggested, all Americans were robots.

Americans largely missed the critique. They read the robot in class terms, not national ones. And then the Westinghouse Electric Company performed the decisive move: it claimed the robot as an American product.

Starting in 1927, Westinghouse built and exhibited a series of mechanical men and women — Herbert Televox, Katrina Van Televox, Telelux, Rastus, Willie Vocalite, and Elektro. These devices were simple. Televox could respond to musical tones over a phone line and flip switches. That was it. But Westinghouse dressed the technology in spectacle. The first public demonstration of the renamed “Mr. Televox” was staged on George Washington’s birthday, with an American flag, a portrait of Washington, and an orchestra playing “The Star-Spangled Banner.” The press loved it.

The most revealing of Westinghouse’s robots was Rastus, built during the Great Depression. Where the other robots were caricatures of whiteness, Rastus was encased in the body of a minstrel-show character — black rubber skin, overalls, a white shirt, and a pail hat, with a deep baritone voice that audiences would have read as unmistakably Black. Performances opened with the human controller pretending to shoot an apple off its head, a reenactment of the William Tell legend performed on a tamed Black body. Westinghouse executives were explicit about what they were selling. Company President F.A. Merrick wrote in the Electric Journal that slavery had brought “civilization” to the world and that America now needed mechanical slaves or else there could “be no art, literature, science, leisure, or comfort for anyone.”

The message was clear: the robot is not a threat. The robot is your servant. You are the master, not the machine. What started as Čapek’s critique of American dehumanization became, through corporate capture and racialization, a reassurance of American mastery.

But here is the part that matters for the present: the mechanical man craze faded. Public attention moved on. The tin men went back in the closet. And the actual automation — the boring kind, the kind nobody wrote headlines about — kept going. Between 1919 and 1929, horsepower per wage earner in manufacturing increased by 50 percent. Productivity rose 72 percent in manufacturing, 33 percent in railroads, 41 percent in mining. Assembly lines got faster. Semi-automatic machine tools displaced skilled machinists. The displacement that Čapek warned about happened. It just did not happen in a metal suit.

The spectacle was the distraction. The real deployment happened in the silence that followed.

The Electrification Parallel

The mechanical man craze was not the only technology spectacle of the 1920s. Running alongside it — and in many ways underwriting it — was the electrification boom.

As Cameron Shackell has argued, electricity was the artificial intelligence of 1925. It was the hot technology stock. It was the general-purpose technology that promised to reshape every aspect of the economy. And it did reshape it — but not before going through a speculative frenzy, a catastrophic bust, mass unemployment, and a decade of structural reform.

The parallels to the present are specific and uncomfortable. During the 1920s, electricity stocks were market favorites even though their fundamentals were difficult to assess. Market power was concentrated: 80 percent of the electricity supply was owned by a handful of holding firms that used complex corporate structures to dodge regulation and sell shares in essentially the same companies to the public under different names. Almost every large-capitalization company of the era owed something to electrification. General Motors overtook Ford using new electric production techniques. The ecosystem was vast, interconnected, and poorly understood by the investors pouring money into it.

Then came the crash. The Dow Jones Utilities Average hit 144 in 1929 and collapsed to 17 by 1934. Unemployment went from 3 percent to 25 percent. The promised age of electric leisure turned into soup kitchens and bread lines.

The reforms came after the damage. The Public Utility Holding Company Act of 1935 broke up the holding company structures and imposed regional separation. Once-exciting electricity companies became boring regulated infrastructure — a fact captured by the humble “Electric Company” square on the original 1935 Monopoly board. Electricity did not go away. It became invisible. It became the thing everything else ran on. But the transition from spectacle to infrastructure required a crash, a depression, and six years of political will that only existed because the damage was too large to ignore.

Shackell asks the right question: can artificial intelligence make that transition without another bust? Today, a few interconnected firms are building the infrastructure. Investors are piling in. Regulation is loose and, in some jurisdictions, actively being dismantled. The structural conditions are not identical to 1929, but the pattern is recognizable. The spectacle phase is where the defaults get set — who controls the infrastructure, who benefits, who bears the risk. And if the electrification parallel holds, those defaults will only be revisited after something breaks.

The Asimov Turn

Isaac Asimov attended the 1939 World’s Fair and witnessed Elektro, the last and most famous of Westinghouse’s performing robots. Inspired in part by the performance, he spent the next several decades writing the robot stories that would define the genre. But what he actually built, in literary terms, was a governance framework — and then spent the rest of his career showing how it failed.

Asimov’s Three Laws of Robotics are remembered as a solution. They were not. They were a setup for demonstrating how solutions break.

The Three Laws — a robot may not harm a human; a robot must obey orders, except where they conflict with the first law; a robot must protect itself, except where doing so conflicts with the first two laws — are, as the scholar Gregory Jerome Hampton has noted, structurally identical to the laws of chattel slavery. The literary scholar Alessandro Portelli argued that taken together, the Laws guarantee the social stability essential to capitalist expansion. This is not incidental. Asimov’s robots are corporate products, manufactured by the fictional U.S. Robots and Mechanical Men, Inc. Their obedience is not natural. It is programmed. And the programming is done by an institution, not by the individuals the robots serve.

This is the key shift that Asimov made from the Westinghouse model. Westinghouse sold robots you could whistle commands at — direct consumer control. Asimov replaced that with robots that appear autonomous but are governed by corporate programming baked in at the factory. The user does not control the robot. The corporation that built it does. The user experiences the feeling of being served.

In his early stories, this worked. The robots protected children. They saved humans from themselves. They were safe. But Asimov was too honest a thinker to leave it there. In the Foundation series, he followed the logic to its conclusion.

In the Foundation timeline, the robot era “ends.” Humanity moves on. Robots are forgotten, or believed to be gone. But they are not gone. In Asimov’s late novels, the ancient robot R. Daneel Olivaw is revealed to have been quietly guiding human history for millennia, while the Second Foundation uses psychohistory — a mathematical framework for predicting and manipulating human behavior — to steer civilization without its knowledge or consent. The robots did not rebel. They did not need to. The power simply concentrated, silently, in the hands of whoever held the leash.

This is the failure mode that maps to the present. Nobody serious is worried about large language models rebelling. The concern is about who controls the alignment, who decides what the models optimize for, and what happens when that control is invisible to the people being served. The Three Laws did not fail because they were broken. They failed because whoever wrote them had disproportionate power over everyone else, and that power compounded over time.

Asimov intuited the governance problem. It took him forty years and dozens of novels to work it out. The question is whether the industry figures it out faster than that.

The Atomic Wound

If the mechanical man craze and the electrification boom illustrate the spectacle-silence-calcification pattern in economic and cultural terms, atomic energy illustrates it in existential ones.

We split the atom. We had a choice: energy or weapons. We chose both, then got scared of the energy part and leaned into the weapons part. Seventy years later, we are burning coal and gas at scale because we could not manage our fear of the technology that could have reduced our dependence on fossil fuels decades ago. The climate crisis is, in part, a consequence of failing to govern atomic power — not because the technology was ungovernable, but because the human response to it was panic followed by paralysis followed by fossil fuel lock-in. A self-inflicted wound.

The spectacle-silence-calcification pattern applies here too, but with higher stakes. The spectacle was Hiroshima and Nagasaki, then Atoms for Peace, then Three Mile Island and Chernobyl. Each burst of attention — fear or enthusiasm — produced policy responses calibrated to the emotion of the moment rather than to the long-term governance challenge. The silence came between the crises, when atomic infrastructure continued to age, investment dried up, and fossil fuel alternatives filled the gap. The calcification is the energy mix we have now: the defaults set during decades of inattention, locked in by infrastructure, economics, and politics.

The bridge to artificial intelligence is the concept of the unsupervised chain reaction — a system that escalates faster than human judgment can intervene. With atomic weapons, the chain reaction is literal. With autonomous agents, it is architectural: kill chains without a human in the loop, automated decision systems that compound errors at machine speed, unsupervised learning processes that drift in directions no one is monitoring. The question is identical in both cases: can we govern the gap between the system’s speed and our own?

With atomic weapons, we mostly governed that gap through mutually assured destruction — which is not governance but a hostage situation that happened to hold. With autonomous agents, we do not even have the hostage situation yet. We just have defaults.

The Dot-Com Rehearsal

The mechanical man craze and the electrification boom are historical episodes. Most readers encounter them as stories about other people’s mistakes. The dot-com bubble is not that. Most people working in technology today either lived through it or built their careers on what survived it. The pattern is the same, but the scar tissue is personal.

The spectacle phase ran from roughly 1995 to 2000. The internet was going to change everything. It did, eventually — but not before a speculative frenzy inflated stock prices to absurd levels on the strength of “eyeballs” and “stickiness” rather than revenue or sound fundamentals. Companies with no earnings and no plausible path to earnings were valued in the billions. Pets.com, Webvan, eToys — the names are punchlines now, but they were serious investment theses at the time. The NASDAQ Composite hit 5,048 in March 2000 and lost nearly 80 percent of its value over the next two years.

Then came the silence. Between roughly 2002 and 2007, the internet stopped being exciting. The headlines moved on. The surviving companies — Google, Amazon, a handful of others — were left alone to build. And what they built, during that quiet period when nobody was paying close attention, became the architecture of the modern economy.

The defaults that calcified during the silence are the ones we are still fighting about. The advertising-funded model, in which user attention and personal data became the primary revenue source for most of the internet. The terms-of-service regime, in which users trade legal rights for access to platforms, under agreements nobody reads. The platform concentration, in which a small number of companies came to control the infrastructure that everyone else depends on. The data collection norms, in which surveillance of user behavior became the default rather than the exception.

None of these defaults were debated during the spectacle phase. During the boom, the conversation was about stock prices and which company would “win” e-commerce. During the bust, the conversation was about losses. The structural decisions — who owns the data, who controls the platform, what the business model actually is — were made during the silence, by the survivors, without much public scrutiny.

By the time the public noticed, the defaults were load-bearing. The European Union’s General Data Protection Regulation, antitrust actions against platform companies, the entire debate over data privacy and algorithmic accountability — all of it is an attempt to revisit decisions that were made fifteen to twenty years ago, when the companies making them were small enough that nobody cared. The defaults calcified. Revising them now requires legal and political force that would have been unnecessary if the governance had happened during the build phase.

The dot-com case adds something the earlier examples do not: proof that the pattern operates on a short enough timescale to affect people who are still in the room. The electrification crash is a chapter in a history book. The holding company reforms of 1935 are a footnote. But the advertising-surveillance model that calcified after the dot-com bust is the thing draining your phone battery right now. The gap between the spectacle and the calcification was less than a decade. The defaults set during that gap are still shaping daily life for billions of people.

The question is whether the current artificial intelligence spectacle will produce the same pattern at the same speed — or faster.

The Defaults Being Set Right Now

Defaults do not calcify in the spotlight. They calcify in the dark. The hype phase is loud, speculative, and wasteful — it produces stock frenzies, magazine covers, and business models that evaporate on contact with reality. But it also seeds the assumptions, architectures, and power structures that the survivors carry forward. When the noise dies and public attention moves on, those seeds grow unchecked.

During the mechanical man craze, the assumption took root that robots are corporate property serving consumers — and it persisted through Asimov, through postwar popular culture, and into the twenty-first century.

During the electrification boom, a handful of holding companies consolidated control of the infrastructure, and nobody revisited the arrangement until a crash and a depression forced it.

During the atomic era, fear became the governing emotion, weapons took priority over energy, and seventy years of fossil fuel dependence followed.

During the dot-com silence, the surviving companies built the advertising-surveillance model, the platform monopolies, and the data collection norms that still shape daily life for billions of people — defaults that now require enormous legal and political force to revise.

In every case, the consequential decisions were not made during the spectacle. They were made after it, by whoever was still standing, with whatever assumptions they carried out of the wreckage.

Right now, the defaults for autonomous agents are being set.

The Palo Alto Networks Unit 42 2025 Incident Response Report found that the vast majority of cloud identities carry excessive permissions — far more access than they need to perform their assigned tasks. This is not a bug. It is a default. It is the path of least resistance during a period of rapid deployment, when the priority is getting systems running rather than getting permissions right. It is the exact analogue of the holding company structures of the 1920s: fast, concentrated, and poorly governed.

Agent permission scoping — the question of what an autonomous agent is allowed to do, in what context, under whose authority, and with what oversight — is the current-day version of “who controls the assembly line.” It is an infrastructure decision being shaped right now, during the hype phase, by a small number of people, with almost no public scrutiny. If the pattern holds, these decisions will harden in the silence that follows — and revising them after that will cost far more than getting them right now.

The technical specifics matter. Least-privilege identity — the principle that every agent, human or machine, should have only the minimum permissions necessary for its task — is an attempt at pre-emptive governance. It is an effort to set the defaults correctly before the silence arrives and the calcification begins. Cloud identity management, agent permission scoping, human-in-the-loop architecture — these are not glamorous topics. They do not make headlines. They are not the mechanical man on the stage. They are the assembly line behind the curtain.
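The two principles named above — deny-by-default scoping and a human in the loop for high-risk actions — can be made concrete in a few lines. The following is a minimal sketch, not any vendor’s API: every name in it (Permission, AgentScope, the resource strings) is hypothetical, chosen only to illustrate the shape of the decision.

```python
from dataclasses import dataclass, field

# Hypothetical permission model for an autonomous agent: the agent holds
# an explicit allowlist of (action, resource) grants and nothing else.
@dataclass(frozen=True)
class Permission:
    action: str    # e.g. "read", "write", "delete"
    resource: str  # e.g. "s3://reports/*" -- an illustrative resource name

@dataclass
class AgentScope:
    agent_id: str
    allowed: set[Permission] = field(default_factory=set)
    # Actions that always require explicit human sign-off, even if granted.
    requires_human_approval: set[str] = field(default_factory=set)

    def check(self, action: str, resource: str, human_approved: bool = False) -> bool:
        """Deny by default: only explicitly granted pairs pass, and
        high-risk actions additionally need a human in the loop."""
        if Permission(action, resource) not in self.allowed:
            return False  # least privilege: no grant, no access
        if action in self.requires_human_approval and not human_approved:
            return False  # human-in-the-loop gate for risky actions
        return True

# Example: a report-summarizing agent that may read one bucket and
# may only delete with human approval.
scope = AgentScope(
    agent_id="report-summarizer",
    allowed={Permission("read", "s3://reports/*"),
             Permission("delete", "s3://reports/*")},
    requires_human_approval={"delete"},
)

assert scope.check("read", "s3://reports/*")                        # granted
assert not scope.check("write", "s3://reports/*")                   # never granted
assert not scope.check("delete", "s3://reports/*")                  # needs a human
assert scope.check("delete", "s3://reports/*", human_approved=True)
```

The design choice the sketch embodies is the one the prose argues for: the default answer is no, and every yes is an explicit, auditable decision. The over-permissioned identities in the Unit 42 findings are what you get when the default answer is yes.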

And that is precisely why they matter. The spectacle is where attention goes. The infrastructure is where power settles.

The Biology

The spectacle-silence-calcification pattern is not a technology problem. It is a governance problem. And the governance problem is not an institutional problem. It is a biological one.

Humans, as a group, are not built for sustained attention. We are built for short bursts of fear and desire. Fight or reproduce. The hype cycle — the oscillation between irrational enthusiasm and irrational panic — is that biology playing out in economic and cultural systems. We get excited about the mechanical man. We get scared of the atom bomb. We get excited about chatbots. In each case, the burst of attention is intense, emotional, and short-lived. It produces policy calibrated to the emotion of the moment. And then attention moves on, and the defaults calcify.

Every technology hype cycle is, at its root, the human condition colliding with the external systems it has built — systems that continue to advance whether we are paying attention or not. The assembly line did not stop when the mechanical man craze faded. The atomic arsenal did not shrink when public attention moved to other fears. The autonomous agents being deployed into cloud infrastructure today will not pause while we figure out the permission model.

The cost of ignoring this collision is now visible in the historical record. The failure to govern atomic energy during its spectacle phase contributed to seventy years of fossil fuel lock-in and a climate crisis. The failure to govern electrification during its spectacle phase required a crash, a depression, and a decade of reform. The failure to govern the mechanical man’s real counterpart — industrial automation — produced decades of displacement whose social consequences are still playing out.

We are, as a species, mostly known for violence and reproduction. We are threatened by things that are better at both. The atomic chain reaction is better at violence. The autonomous agent — unsupervised, self-replicating, operating at machine speed — raises the specter of systems that exceed human capacity in both domains. The metaphors are not subtle: kill chains without a human in the loop, unsupervised learning that drifts without oversight, self-replicating processes that compound beyond human control.

The honest conclusion is that we do not know if pre-emptive governance is possible for a species wired the way we are. The historical record suggests it is not. Every prior case required damage before reform. But the cost of the next calcification — the defaults being set right now, during this spectacle, for autonomous systems operating at machine speed — may be larger than any previous round. The assembly line displaced workers. The atom bomb threatened cities. The fossil fuel default is reshaping the climate. What does calcification look like when the defaults govern autonomous agents with excessive permissions operating across global infrastructure?

The answer depends on whether we can sustain attention long enough to set the defaults correctly before the silence arrives.

The pattern suggests we will not.


Further Reading

  • Abnet, Dustin A. “Americanizing the Robot: Popular Culture, Race, and the Rise of a Global Consumer Icon, 1920–60.” ICON: Journal of the International Committee for the History of Technology 27, no. 1 (2022): 15–35.
  • Abnet, Dustin A. The American Robot: A Cultural History. Chicago: University of Chicago Press, 2020.
  • Asimov, Isaac. I, Robot. New York: Gnome Press, 1950.
  • Asimov, Isaac. Foundation. New York: Gnome Press, 1951.
  • Asimov, Isaac. Foundation and Earth. New York: Doubleday, 1986.
  • Čapek, Karel. R.U.R. (Rossum’s Universal Robots). 1921.
  • Chude-Sokei, Louis. The Sound of Culture: Diaspora and Black Technopoetics. Middletown, Connecticut: Wesleyan University Press, 2016.
  • Hampton, Gregory Jerome. Imagining Slaves and Robots in Literature, Film, and Popular Culture. Lanham, Maryland: Lexington Books, 2015.
  • Palo Alto Networks Unit 42. 2025 Incident Response Report. 2025.
  • Portelli, Alessandro. “The Three Laws of Robotics: Laws of the Text, Laws of Production, Laws of Society.” Science Fiction Studies 7, no. 2 (1980): 150–56.
  • Shackell, Cameron. “Today’s AI Hype Has Echoes of a Devastating Technology Boom and Bust 100 Years Ago.” The Conversation, October 2025.
  • Thomas, Heather. “American Fads and Crazes: 1920s.” Headlines and Heroes, Library of Congress, January 2023.
  • Zeitz, Joshua. “The Roaring Twenties.” Gilder Lehrman Institute of American History.