John von Neumann’s 1955 essay “Can We Survive Technology?” has long stayed in my mind because it makes an abstract problem concrete. He observed that the overall capacity of computing machines had “nearly doubled every year” since 1945, and he did the math the way any good applied mathematician would. Ten years of doubling is about a thousandfold increase. If that is the trajectory, the question is not whether technological power grows. The question is whether our political and social order can adjust at the speed that power is growing.
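To make that compounding concrete, here is a minimal back-of-envelope sketch in Python; the doubling rate and the ten-year horizon simply restate the essay's paraphrase of von Neumann, and nothing else is assumed:

```python
# Compounding check: capacity that "nearly doubles every year" for ten years.
growth_per_year = 2      # doubling, per the essay's paraphrase of von Neumann
years = 10               # roughly 1945 to 1955

multiplier = growth_per_year ** years
print(multiplier)        # 1024 -- about the thousandfold increase he cites
```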
Read that next to Dario Amodei’s new essay, “The Adolescence of Technology,” and you get an odd feeling of continuity, though the analysis rhymes rather than repeats. Continuity because Amodei is making a recognizable von Neumann move. He is trying to name a systems-level crisis without becoming mystical about it. He calls for sober risk talk, an explicit admission of uncertainty, and interventions narrow enough to survive contact with real institutions. However, Amodei’s object of concern has changed in a way von Neumann only partly anticipated. Von Neumann worried about the scale of human action in a finite world. Amodei worries about the speed and multiplicity of non-human action inside the same world.
Von Neumann’s core framing was geographic and institutional. He said the world was becoming “dangerously small” for the explosiveness humans would be able to “contrive by 1980,” and he described a “safety factor” that had quietly cushioned prior industrial progress: more room, more integration, and more ability to absorb shocks by expanding into new space and new markets. In his account, that cushion was disappearing. We were “running out of room,” and the relationship between technology and political organization was becoming inherently destabilizing. His examples are what you would expect in 1955: nuclear weapons; automation driven by electronic control; and, in a remarkably prescient aside, the possibility that industrial CO₂ could warm the world measurably.
Amodei is writing in a world where the “room” problem is still real, but the mechanism of danger is not primarily geography. His definition of “powerful AI” is explicitly about interface and agency: a model that can use the same interfaces a remote worker uses, can be given long tasks, and can be replicated at scale. He borrows a metaphor meant to make the governance problem concrete: a “country of geniuses in a datacenter,” operating faster than humans and in enormous numbers. That framing matters because it changes what the bottleneck is.
Von Neumann treated human reaction time as a fixed constant. He wrote that increased speed did not usually shorten the time required for processes so much as expand the area affected, because most time scales were fixed by human habits and physiology. Amodei is explicitly arguing that this constant is breaking. If an AI system can run at 10 to 100 times human speed, and can be copied into millions of instances, the limiting factor is no longer human tempo. The limiting factor becomes our ability to monitor, constrain, and respond before something cascades.
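A rough arithmetic sketch shows why the bottleneck moves. The speed and instance figures below come from the essay's own framing (10 to 100 times human speed, millions of copies); the size of the hypothetical human oversight team is an illustrative assumption, not a figure from either essay:

```python
# Back-of-envelope comparison of tempos on each side of the monitoring loop.
speed_multiplier = 10           # low end of the essay's 10-100x range
instances = 1_000_000           # "millions of instances"
human_monitors = 1_000          # hypothetical oversight team size (assumption)

# Effective human-equivalents of activity per unit time, versus the people
# available to watch it. Even at the conservative end, the ratio is enormous.
ai_throughput = speed_multiplier * instances
print(ai_throughput / human_monitors)   # 10,000 units of activity per monitor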
This is the first major difference between the two essays. Von Neumann thought the world was getting smaller because human power was reaching planetary scale. Amodei thinks the world is getting smaller because cognition and execution can be industrialized. In practice, that means the “dangerous” scenario is no longer only the deliberate, centralized use of destructive capacity by a state. It includes the routine, distributed use of powerful tools by many actors, some of whom will be careless and some of whom will be malicious. That is why Amodei spends so much time on misuse and on biology. He is worried about what happens when a small number of people can rent or copy capability that used to be the domain of specialized institutions.
The second major difference is the proposed safety stance. Von Neumann was skeptical of bright-line bans and of attempts to separate “good” technology from “bad” technology. He said, in effect, that useful and harmful techniques lie too close together to separate cleanly, and that even classification regimes are transient because the boundary between open and closed knowledge erodes. His most honest line may be the bleakest: “For progress there is no cure.” The best we can hope for is relative safety achieved through day-to-day judgment, a long sequence of small correct decisions, and new political forms that reduce incentives for war.
Amodei does not disagree with the “no cure” premise, but he thinks there is more engineering leverage than von Neumann assumed. He is betting on measurement and iterative control systems: alignment techniques that shape how models behave; interpretability tools that try to detect problematic internal tendencies; monitoring that looks for failures in the wild; and disclosure regimes (including new laws) that force frontier developers to report what they are building and what is breaking. If von Neumann’s instinct was that society would need to mature its politics, Amodei’s instinct is that we also need to mature our instrumentation.
This is not a small shift. It reflects the fact that AI, unlike many mid-century technologies, is delivered through a small number of large platforms, at least for now. That creates a control surface that did not exist in von Neumann’s day. You can audit a model release. You can publish a release safety note. You can attach conditions to deployment. You can demand transparency from a defined set of frontier actors. That does not solve everything, and von Neumann’s warning about diffusion still bites. But it changes what caution looks like in the near term. The most practical governance tools are not utopian world government and not an impossible ban. They are boring reporting rules, incentives for safer defaults, and procurement standards that treat high-agency systems the way we treat other high-risk tools.
If you bring this down from the abstract to the local, it resembles other risk domains we already know how to handle imperfectly. We do not avoid wildfire by banning heat. We manage it with building codes, defensible space, measured warnings, and hard conversations about where we should and should not build. Those measures do not eliminate fire. They reduce the chance that a predictable hazard becomes a cascading disaster. Amodei is arguing for something similar: build a governance layer that accepts uncertainty but still tightens the loop between capability, testing, and accountability.
The throughline, and the difference, can each be said in one sentence. Von Neumann warned that accelerating human power would outgrow the world and its political units. Amodei warns that we are starting to mass-produce agency itself, and that this changes the time scale on which safety has to operate. If von Neumann was right that “for progress there is no cure,” then the only question left is which habits we build while we still have time to build them. The habit Amodei is arguing for is to treat transparency, monitoring, and bounded deployment as ordinary risk management, before the first large failure forces us into a crisis posture.