Why This Matters
There is a particular kind of dread that comes not from sudden crisis but from slow accumulation — the feeling of watching a tide come in while everyone argues about whether it is really rising. The Doomsday Clock was invented precisely to interrupt that complacency, to take the abstract mathematics of annihilation and press them into something a child could read. It has been doing this work since 1947, through the hydrogen bomb and the Cuban Missile Crisis, through détente and the fall of the Soviet Union, through the uneasy optimism of the early 21st century. And yet here we are: closer to midnight than at any point in the clock's history, including the years when the United States and the Soviet Union had their nuclear weapons on hair-trigger alert and their leaders had never met face to face.
What makes this moment different from previous close calls is not the presence of a single catastrophic threat but the convergence of several. Nuclear arsenals are expanding again after decades of arms control agreements that once seemed irreversible. The climate system is registering changes that will reshape where billions of people can live and grow food. Artificial intelligence is being woven into military systems faster than anyone has developed frameworks for governing its use. And the international institutions designed to manage these risks — the treaties, the diplomatic channels, the shared norms built painfully over generations — are fraying in ways that even their architects could not have anticipated.
The reason this matters beyond the headlines is that the clock measures something subtle: not just how dangerous the world is, but how well or poorly we are managing that danger. A world with nuclear weapons that is actively negotiating arms reductions, sharing early-warning data, and investing in diplomacy looks different on the clock than a world with the same weapons but collapsing communication channels and open great-power rivalry. The clock moved to 85 seconds in January 2026 not merely because new weapons appeared, but because the architecture of cooperation that once constrained those weapons is visibly deteriorating. That is the signal worth understanding.
For younger generations who grew up after the Cold War, the Doomsday Clock can seem like a relic — a piece of theatrical science from a more anxious era. For those who lived through the Cuban Missile Crisis or the Able Archer 83 war scare, the current reading carries a different weight: the recognition that catastrophe does not always announce itself clearly, that history's most dangerous moments often felt, to people living through them, like ordinary geopolitical friction. The gap between those two perceptions — between dismissal and recognition — is exactly the gap the clock was designed to bridge.
The Origins: Scientists With a Conscience
To understand what the Doomsday Clock is, it helps to understand what it was trying to be when it was created. In 1945, the scientists who had worked on the Manhattan Project found themselves in an uncomfortable position. They had solved an extraordinary technical problem, and in doing so had handed humanity a weapon capable of ending civilization. Some of them, notably the physicist James Franck and the Chicago colleagues who signed the Franck Report, had argued before the Hiroshima bombing that a demonstration of the bomb over an uninhabited area might achieve Japan's surrender without mass civilian death. They were overruled; the scientific panel advising on the weapon's use, which included J. Robert Oppenheimer, saw no acceptable alternative to direct military use. When the bombs fell, many of the scientists who built them began asking a question that had no comfortable answer: what do technically skilled people owe to the societies that fund their work and live with its consequences?
The Bulletin of the Atomic Scientists was their attempt at an answer. Founded in 1945 by Manhattan Project scientists at the University of Chicago, with Einstein and Oppenheimer among its earliest and most prominent sponsors, it began as a mimeographed newsletter circulated among physicists who wanted to discuss the ethical and policy implications of nuclear weapons. Two years later, the artist Martyl Langsdorf — whose husband Alexander Langsdorf was one of the Manhattan Project scientists — designed a clock face for the Bulletin's cover. The minute hand was placed at seven minutes to midnight. The choice was largely aesthetic: it looked good on the page. But the symbolism was instantly legible, and it stuck.
What is important to note here — and what the Bulletin itself emphasizes — is what the clock is not. It is not a prediction. It is not a seismograph measuring actual danger in some precise physical sense. It is a judgment rendered by a specific group of experts, the Science and Security Board, who assess global trends each year and decide, collectively, where the minute hand should sit. That judgment is informed by deep expertise and by consultation with a Board of Sponsors that currently includes eight Nobel laureates. But it is a human judgment, and like all human judgments, it carries assumptions, blind spots, and theoretical commitments that are worth understanding rather than simply accepting.
The Clock's History: A Timeline of Fear and Hope
The clock has not always read close to midnight. Its history traces, imperfectly but meaningfully, the arc of humanity's relationship with existential risk. When it was first set in 1947, the minute hand sat at seven minutes to midnight — a number chosen for design reasons but quickly understood as a symbol of genuine postwar anxiety. By 1949, when the Soviet Union tested its first nuclear device and the Cold War arms race began in earnest, it moved to three minutes. In 1953, after both superpowers tested thermonuclear weapons — hydrogen bombs hundreds of times more powerful than the Hiroshima device — it stood at two minutes to midnight, a reading that would not be surpassed until 2020.
Then something remarkable happened. The clock moved backward. In 1963, the Partial Nuclear Test Ban Treaty pushed it to twelve minutes. In 1972, following the SALT I arms control agreements, it sat at twelve minutes again. Its furthest point from midnight came in 1991, the year the Cold War ended and the United States and Soviet Union signed the Strategic Arms Reduction Treaty (START): seventeen minutes. This was the clock reflecting genuine diplomatic achievement, and it is worth pausing on that number — because it suggests that the clock can move in both directions, and that collective action at the level of great powers is actually capable of reducing civilizational risk in measurable ways.
The current trajectory, however, runs the other way. The clock has moved steadily toward midnight since 2017. In 2020, the Bulletin moved it to 100 seconds for the first time, citing the erosion of arms control agreements and growing nuclear tensions. In 2023, it moved to 90 seconds. In January 2026, it moved again: to 85 seconds, the new closest reading in the clock's history. Each increment represents not just a symbolic adjustment but a considered argument that the world's major powers are failing to manage the risks they have created — and that the failure is accelerating.
The Nuclear Dimension: Old Fears, New Configurations
The original reason for the clock — nuclear weapons — remains central to its current reading, but the nature of the nuclear risk has shifted in ways that make it genuinely different from Cold War dangers. During the Cold War, the primary concern was a deliberate first strike by one superpower against another, or a miscalculation during a crisis that spiraled out of control. Arms control treaties, hotlines between leaders, and elaborate protocols for verifying each side's behavior all emerged as tools to manage that specific risk. They worked imperfectly, but they worked.
Today's nuclear landscape is more complicated in at least three ways. First, it is multipolar. The Cold War was fundamentally a two-player game, which made deterrence theory simpler even when the stakes were higher. Today, China is rapidly expanding its nuclear arsenal, adding warheads and delivery systems at a pace that analysts describe as historically unprecedented for any nation in the post-Cold War era. The United States is in the middle of a comprehensive modernization of its own nuclear delivery systems. Russia has both continued its modernization and made explicit rhetorical references to nuclear weapons use in the context of the Ukraine conflict in ways that would have been considered dangerously escalatory by Cold War standards. A three-player nuclear deterrence game is mathematically and strategically far more complex than a two-player one, and there are no robust treaties governing it.
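One rough way to see why that matters is simple arithmetic. The count below is a back-of-the-envelope illustration, not part of the Bulletin's methodology: the number of pairwise deterrence relationships that have to stay stable grows with the square of the number of nuclear-armed states.

```latex
% Illustrative count (not from the Bulletin): pairwise deterrence
% relationships among n nuclear-armed states.
\[
  \binom{n}{2} = \frac{n(n-1)}{2},
  \qquad
  \binom{2}{2} = 1, \quad
  \binom{3}{2} = 3, \quad
  \binom{9}{2} = 36 .
\]
```

Even this undercounts the difficulty, because doctrines, alliances, and three-way crises do not decompose neatly into independent pairs; the point is only that the bilateral Cold War configuration was the simplest one deterrence theory has ever had to manage.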
Second, the regional nuclear dimension has become acute. In May 2025, India and Pakistan — both nuclear-armed states — engaged in cross-border drone and missile strikes following a terrorist attack in Kashmir, with explicit nuclear brinkmanship on both sides. This was not a theoretical scenario from a policy paper; it was an actual exchange of strikes between two countries that together possess over 300 nuclear warheads and whose crisis-communication channels remain far thinner than the leader-level hotline the United States and Soviet Union established after the Cuban Missile Crisis. The fact that this crisis did not escalate further may reflect good decisions by leaders on both sides — or it may reflect luck. Distinguishing between those two explanations is harder than it sounds, and more important.
Third, the Iran nuclear question has moved into new territory. In June 2025, Israel and the United States conducted aerial attacks on Iranian nuclear facilities suspected of advancing a weapons program. Whether those attacks set back Iran's nuclear timeline — or whether they provided the political and strategic justification for Iran to pursue a weapon covertly — remains, as the Bulletin notes, genuinely unclear. This ambiguity is itself dangerous: it creates conditions in which Iran's neighbors and adversaries may act on worst-case assumptions, potentially triggering escalation based on incomplete information.
Climate, AI, and the Compounding of Catastrophic Risk
One of the most significant intellectual developments in how scientists and policymakers think about existential risk is the recognition that catastrophic risks do not operate in isolation. They interact. They compound. A climate disruption that reduces food security in a region of existing ethnic tension can increase the probability of conflict. A conflict that involves nuclear-armed states interacts with any weakness in nuclear command-and-control systems. An artificial intelligence system embedded in military decision-making may behave in ways its designers did not anticipate when it encounters novel conditions created by climate or geopolitical stress.
The Doomsday Clock has recognized this compounding effect by expanding its scope. While it began as a nuclear risk indicator, the Bulletin has incorporated climate change as a factor since 2007, and more recently has begun weighing disruptive technologies — including artificial intelligence and biotechnology — in its annual assessment. This expansion is intellectually honest but also genuinely complicated: it means the clock is now tracking several different kinds of risk, each with its own dynamics and each with different communities of experts who think about it. Whether a single clock face can adequately represent that complexity is a fair question.
What the compounding framework captures, however, is something real. Climate change is already creating the conditions for increased geopolitical instability: displacement, resource competition, and stress on agricultural systems that will only intensify over coming decades. The Intergovernmental Panel on Climate Change has established with high confidence that global average temperatures have risen approximately 1.1 to 1.2 degrees Celsius above pre-industrial levels, and that current national commitments — even if fully honored — are not sufficient to stay within the 1.5-degree threshold that scientists associate with the most severe risks. The distance between current trajectories and that threshold is not comfortable, and it is closing.
Artificial intelligence presents a different kind of challenge, partly because the risks are less well characterized. What is established: AI systems are being integrated into military logistics, surveillance, threat assessment, and, in some cases, weapons guidance functions, by multiple major powers simultaneously and without meaningful international governance frameworks. What is debated: how much genuine decision-making autonomy these systems have or will have, and whether human oversight mechanisms are adequate to prevent consequential errors. What is speculative, though decreasingly so: how autonomous weapons systems will interact with human command structures in high-stress crisis situations, where decision timelines may be measured in minutes or seconds.
Biotechnology adds yet another dimension. The COVID-19 pandemic demonstrated that a novel pathogen — whether naturally occurring or otherwise — can disrupt global society more profoundly than most risk analyses had projected. The scientific tools that allow for rapid vaccine development also lower the barriers to engineering dangerous organisms. The governance frameworks for biosecurity are thinner than those for nuclear weapons, and they are not keeping pace with the technology.
The Architecture of Cooperation: What Is Being Lost
Perhaps the most underappreciated aspect of the current moment is what is not happening. The treaties that do not get signed, the communication channels that go unused, the multilateral forums that produce declarations without implementation — these absences are harder to see than a new missile test or a border clash, but they matter enormously. The Bulletin's 2026 statement emphasizes that hard-won global understandings are collapsing, and that phrase deserves unpacking.
Over the second half of the 20th century, the international community built an elaborate architecture for managing nuclear risk. The Nuclear Non-Proliferation Treaty (NPT), signed in 1968, committed nuclear weapons states to eventual disarmament and non-weapons states to forgo developing them in exchange for access to civilian nuclear technology. The Comprehensive Nuclear-Test-Ban Treaty established a norm — if not a fully ratified legal prohibition — against nuclear testing. The Intermediate-Range Nuclear Forces (INF) Treaty between the United States and Soviet Union, signed in 1987, eliminated an entire category of nuclear weapons. The New START Treaty set limits on deployed strategic warheads and delivery systems and established a verification regime that gave both sides visibility into each other's arsenals.
The INF Treaty was terminated in 2019. New START lapsed in 2026 with no successor agreement in place. The NPT review conferences have repeatedly failed to produce consensus documents. The international institutions that were designed to provide forums for managing these risks — the United Nations, the Conference on Disarmament — are functional but increasingly gridlocked along great-power lines. The proposed Golden Dome missile defense system in the United States, which would include space-based interceptors, raises the prospect of militarizing space in ways that could fuel a new arms race in that domain, adding another dimension of instability to an already complicated picture.
What is being lost is not just a set of agreements. It is a practice — the habit and infrastructure of negotiation, verification, and compromise that allows adversaries to manage shared risks even when they distrust each other. That practice took decades to build. It is not obvious how quickly it could be rebuilt once lost, or whether the institutional knowledge required to do so would survive an extended period of pure great-power competition.
The Debate Around the Clock Itself
Intellectual honesty requires engaging seriously with criticisms of the Doomsday Clock, because there are credible ones. The most common objection is that the clock is not a scientific instrument — it does not measure anything physical, it does not follow a reproducible methodology, and its settings reflect the judgments and priorities of a specific group of American and Western-aligned scientists who may have blind spots. This is fair. The Bulletin acknowledges that the clock is a communication tool, not a geophysical sensor, and that reasonable experts might assess the same facts differently.
A second criticism is that the clock's expanding scope — from nuclear weapons to climate to AI to biotechnology — may dilute its signal. When a single indicator tries to capture too many different kinds of risk, it becomes harder to interpret what any particular movement means. A move from 90 to 85 seconds might reflect a genuine worsening of nuclear risk, or it might reflect growing concern about AI, or some combination of factors that the single number cannot disaggregate. Critics argue that this ambiguity reduces the clock's usefulness as a policy communication tool.
A third and more pointed criticism comes from some security analysts who argue that the clock may actually be counterproductive in certain respects — that by consistently signaling maximum danger, it creates a form of apocalyptic fatigue that makes people feel powerless rather than motivated to act. There is genuine psychological research suggesting that catastrophic framings of existential risk can trigger disengagement rather than action, particularly when the perceived scale of the problem overwhelms any sense of individual agency.
Against these criticisms, defenders of the clock argue that its imprecision is not a bug but a feature: it forces conversation about what the number means, which is more valuable than a false precision. They also note that the clock's history — including its significant movements in both directions — demonstrates that it is not simply an alarm system set permanently to maximum. When things genuinely improved, the clock said so. The fact that it is currently at its closest reading in history reflects a considered judgment by people who have spent their careers studying these risks, and that judgment deserves engagement rather than dismissal.
What Individuals and Societies Can Actually Do
One of the most common responses to news like this is a quiet despair — the sense that decisions of this magnitude are made by governments and military establishments so far removed from ordinary life that the question of personal or collective response is almost absurd. That response is understandable. It is also, historically, not entirely accurate.
The movements that produced the Partial Test Ban Treaty, the Nuclear Non-Proliferation Treaty, and arms control agreements of the 1970s and 1980s were not purely the product of elite diplomatic calculation. They were shaped by broad public pressure — by scientists who spoke publicly, by peace movements that changed the political calculus for leaders, by journalists who explained technical subjects in human terms, and by electorates that signaled, clearly enough, that they cared. The Bulletin of the Atomic Scientists itself is an example of this: technical experts choosing to communicate with public audiences rather than only with each other.
What this history suggests is not that individual action is sufficient — it clearly is not — but that the quality of public understanding of existential risks has a genuine effect on the political environment in which leaders make decisions. A public that understands why arms control agreements matter, what missile defense systems do to strategic stability, and why biosecurity funding is not merely a line item in a health budget is a different political constituency than one that does not. The clock is, in this sense, an educational instrument as much as it is an alarm.
This does not mean accepting the Bulletin's assessments uncritically. Reading the clock well means understanding its methodology, its limitations, and the debates within the expert community about how to weight different risks. It means distinguishing between what is established by scientific consensus, what is genuinely debated among credible experts, and what is speculative but worth taking seriously. It means being willing to sit with uncertainty rather than resolving it prematurely in either direction — toward denial or toward paralysis.
The Questions That Remain
Five open questions seem most important to carry forward — not as rhetoric, but as genuine problems that the scientific and policy communities have not resolved, and that matter enormously for what comes next.
Can great-power arms control be rebuilt in a multipolar world? The deepest arms reductions of the 20th century were achieved bilaterally, between the United States and the Soviet Union. The strategic landscape now includes China as a major and growing nuclear power, with India, Pakistan, North Korea, Israel, France, and the United Kingdom each occupying different positions in the global nuclear order. Whether the verification frameworks, the strategic concepts, and the diplomatic practices developed for a two-player game can be adapted for a more complex configuration is genuinely unknown. No adequate framework currently exists, and there is no active negotiation toward one.
How does artificial intelligence change the dynamics of nuclear crisis management? The integration of AI into military systems is proceeding faster than governance frameworks can track. In a high-stakes crisis where decision timelines are compressed, how will AI-assisted threat assessment interact with human judgment? Will it slow decisions down by providing better information, or speed them up in ways that reduce the time for de-escalation? The research on this question is active but inconclusive.
Is nuclear deterrence stable with nine nuclear-armed states? Cold War deterrence theory was developed for a dyad. When extended to a system of nine nuclear-armed actors with different doctrines, different thresholds, and different levels of command-and-control sophistication, does the underlying logic of mutual assured destruction still hold? Theoretical work on this is ongoing, but there is no consensus, and the India-Pakistan crisis of 2025 raised empirical questions that the existing theory does not fully answer.
What is the relationship between climate disruption and conflict risk over the next three decades? The causal pathways between climate change and armed conflict are real but contested in their specifics. Which regions face the highest risk? Which forms of climate impact — drought, sea level rise, extreme heat, crop failure — are most strongly associated with instability? What interventions reduce that risk most effectively? These are not rhetorical questions. They are research questions with significant policy implications, and the answers will shape decisions about where to invest in both climate adaptation and conflict prevention.
Can new forms of international governance emerge fast enough to manage disruptive technologies? Biotechnology and artificial intelligence are developing faster than any previous dual-use technology, and the gap between what is technically possible and what international frameworks can govern is widening. The nuclear case offers a partial model — treaties, verification regimes, shared norms — but also a cautionary tale: those mechanisms took decades to build and are now eroding. For technologies moving at the pace of AI and synthetic biology, decades may not be available. What faster pathways to governance exist, and what political conditions would make them possible?
The Doomsday Clock is 85 seconds from midnight. It is a number built of human judgment, carrying all the limitations that implies. It is also the considered verdict of people who have spent their lives studying what threatens us, and who have — when the evidence supported it — moved the hand in the other direction. What it is asking us to do, in its blunt and theatrical way, is not to despair, but to look clearly at what is happening and take it seriously enough to ask better questions. The hour is not fixed. The hand has moved before. The scientists who built the bomb also built the clock, which means they understood, from the beginning, that knowledge without conscience leads exactly here — and that the same human capacity that creates catastrophic risk contains the possibility, not the guarantee, of something better.