We need to talk about p(doom). Not because it's wrong, exactly, but because it's weaponized wrongness—the kind of intellectual framework that turns otherwise rational people into true believers, complete with the glazed eyes and unshakeable certainty that used to be the exclusive province of street-corner prophets holding cardboard signs.
The seduction is this: p(doom) presents itself as the ultimate rational framework. It's quantified! It's probabilistic! It acknowledges uncertainty! But scratch the surface and you'll find something much more primitive lurking underneath—something that looks suspiciously like the oldest story humans tell themselves about being special, about living at the crucial moment, about being the chosen generation that gets to save or damn the world.
Ask someone their p(doom) and watch what happens. You've just posed what philosophers might recognize as a loaded question: the kind that has no good answer precisely because it's designed not to elicit information but to constrain thinking.
Low numbers? You're now subject to Pascal's mugging, where vanishingly small probabilities of infinitely bad outcomes become the only thing that matters. A 0.1% chance of human extinction suddenly dominates every other consideration, every other value, every other possible future. Your entire ethical framework collapses into a single number.
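To make that collapse concrete, here is a toy expected-value calculation in Python. Every probability and utility figure in it is invented purely for illustration; the only thing that matters is the shape of the arithmetic.

```python
# Toy expected-value arithmetic behind Pascal's mugging. All numbers are
# made up for illustration; only the orders of magnitude matter.

ordinary_concerns = {
    # name: (probability, disutility in arbitrary units)
    "career derailed": (0.20, -1e2),
    "serious illness": (0.10, -1e3),
    "financial ruin":  (0.05, -1e3),
}
extinction = (0.001, -1e12)  # a 0.1% chance of an astronomically bad outcome

ev_ordinary = sum(p * u for p, u in ordinary_concerns.values())
ev_extinction = extinction[0] * extinction[1]

print(f"expected disutility of everything ordinary: {ev_ordinary:,.0f}")
print(f"expected disutility of the 0.1% doom term:  {ev_extinction:,.0f}")
# The doom term is millions of times larger, so naive expected-value
# reasoning lets it dictate every decision. That is the mugging.
```

The trick isn't in the arithmetic, which is trivially correct; it's in letting one speculative term with an essentially unbounded disutility be treated as if it were as well-measured as everything else.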
High numbers? Welcome to paralysis and savior complex syndrome. If there's a 30% chance of doom, then obviously nothing else matters—not your relationships, not your career, not art or music or the simple pleasure of a good meal. Everything becomes subordinated to The Work of preventing the apocalypse.
But here's the really insidious part: even low numbers feel massive due to what psychologists call the possibility effect. Once something becomes possible in our minds, we systematically overweight it. A 1% chance of extinction doesn't feel like "probably fine"—it feels like "holy shit, there's a chance we all die."
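Prospect theory gives this a standard quantitative form: a probability-weighting function that inflates small probabilities. Below is a minimal sketch using Prelec's one-parameter curve; the 0.65 exponent is a rough ballpark from the decision-making literature, not anything estimated for extinction scenarios.

```python
import math

def prelec_weight(p: float, alpha: float = 0.65) -> float:
    """Prelec's (1998) probability-weighting function, w(p) = exp(-(-ln p)^alpha).
    With alpha < 1, small probabilities get overweighted. The 0.65 is a rough
    ballpark fit from lab studies, not anything measured for doom estimates."""
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.001, 0.01, 0.05):
    w = prelec_weight(p)
    print(f"stated p = {p:>5.1%} -> felt weight ~ {w:.1%} ({w / p:.0f}x overweighted)")
# A stated 1% chance of catastrophe is felt more like 6-7%, which is one
# mechanism behind "there's a chance we all die" swallowing the whole mood.
```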
The question itself is the trap. It forces you to inhabit a mental universe where extinction becomes the overwhelming consideration, where all other human values—beauty, love, discovery, growth, meaning—become footnotes to the grand calculation of survival.
There's something almost comically convenient about apocalyptic value systems: they invariably conclude that we—the current generation of researchers, policymakers, concerned citizens—happen to be living at the most important moment in human history.
What are the odds? Out of the roughly 10,000 generations of humans who have lived and died, we get to be the special ones. We get to be the protagonists of the ultimate story. Not our great-grandparents, who were apparently too primitive to face existential risk. Not our great-grandchildren, who will apparently inherit either utopia or wasteland depending on what we do in the next few years.
This is chronological narcissism dressed up as altruism. It's the secular version of every religious movement that's ever convinced its followers they're living in the end times—with the same psychological payoffs. Suddenly your life has ultimate meaning. Your choices have cosmic significance. You're not just another person muddling through existence; you're a guardian of humanity's future.
The seduction is profound because it's not obviously selfish. You're not claiming to be special for personal glory—you're claiming to be special in service of everyone. It's narcissism wearing a saint's robes, and that makes it almost impossible to critique without seeming callous or short-sighted.
But here's what really worries me about p(doom) psychology: it's a totalizing ideology. It doesn't just add another consideration to your moral calculus—it consumes everything else.
When extinction risk becomes the dominant frame, all other human experiences get relegated to rounding errors. Art? Nice hobby, but have you considered that AI might kill everyone? Education? Sure, but shouldn't those resources go toward alignment research? Healthcare? Important, but not if there won't be any humans left to heal.
This is precisely what makes apocalyptic thinking so dangerous: it omits the entirety of the rest of life. It creates a mental universe where the only thing that matters is the prevention of catastrophe, where all the complexity and richness of human existence gets flattened into a single variable.
Normal people—and I use that term with genuine affection—don't think this way. They care about their kids' education and potential future risks. They want to solve climate change and create beautiful art. They can hold multiple values simultaneously without needing to reduce everything to a single utility function.
Apocalyptic value systems, by contrast, are fundamentally reductionist. They take the incredible complexity of human values and slam it through a funnel marked "extinction risk," claiming that anything that doesn't fit is a luxury we can't afford.
Let's get mathematical for a moment, because this is where things get really interesting. When dealing with unprecedented catastrophic risks, we're supposedly working with probability distributions that have heavy, poorly characterized tails, parameters nobody knows how to estimate, and no meaningful reference class.

In such cases, expected value calculations become pathological:
Infinite/Undefined Expectations: If the true underlying distribution has sufficiently heavy tails, the expected value may not exist mathematically. The underlying sum never converges, so adding ever more extreme scenarios to your model can keep flipping the sign of your expected utility.
Parameter Sensitivity: Small changes in tail thickness parameters can swing expected values by orders of magnitude. Since we have no reliable way to estimate these parameters for unprecedented events, any specific calculation is arbitrary.
Model Uncertainty Dominance: The uncertainty about which probability model applies swamps the uncertainty within any given model. Are we dealing with exponential tails? Power law? The choice matters more than the parameters, but we have no principled way to choose.
Reference Class Problem: Computing probabilities requires assuming some reference class of "similar" events, but truly unprecedented technologies have no meaningful reference class.
This is the St. Petersburg Paradox generalized: in catastrophic risk scenarios with heavy tails, increasingly unlikely but increasingly severe outcomes can come to dominate all decision-making, even when those scenarios are pure speculation.
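You can watch this happen in a few lines of code. The sketch below uses a Pareto distribution purely as a stand-in for "heavy-tailed catastrophe"; none of the parameters correspond to any real risk estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Losses drawn from a classical Pareto distribution with scale 1 and tail
# index alpha -- a stand-in for "heavy-tailed catastrophe", not a model of
# anything real. For alpha > 1 the theoretical mean is alpha / (alpha - 1);
# for alpha <= 1 the mean does not exist at all.
for alpha in (1.5, 1.1, 1.01, 0.99):
    losses = 1.0 + rng.pareto(alpha, size=1_000_000)
    theoretical = alpha / (alpha - 1) if alpha > 1 else float("inf")
    print(f"alpha = {alpha:4.2f}   theoretical mean = {theoretical:8.1f}   "
          f"sample mean = {losses.mean():8.1f}")

# Two things to notice: nudging alpha from 1.5 to 1.01 moves the theoretical
# "expected loss" from about 3 to about 101 (and a million samples still
# can't estimate it reliably), and at alpha = 0.99 the expectation is
# undefined -- the number you compute is largely an accident of the biggest
# draws in your sample.
```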
The result? Your careful probability estimates aren't careful at all—they're mathematical theater, precision without accuracy, a way of making wild speculation look rigorous.
Here's perhaps the most ingenious aspect of small p(doom) estimates: they create a belief system that's virtually immune to disconfirmation.
Assign a 1-5% probability to AI doom, and you've constructed a cognitive fortress. Every day that passes without catastrophe is perfectly consistent with your model; after all, you said there was a 95-99% chance the catastrophe wouldn't happen at all, let alone on any particular day. Your confidence can remain unchanged indefinitely.
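Here is the fortress in miniature, as a toy Bayes update. The 3% prior, the 30-year horizon, and the assumption that doom is equally likely on any day of that horizon are all invented for illustration.

```python
# A toy Bayes update: why a quiet day barely moves a small p(doom).
# The 3% prior, 30-year horizon, and uniform-over-days assumption are
# all invented for illustration.

prior_doom = 0.03
days_in_horizon = 30 * 365

# If doom is coming on some unknown day in the horizon, the chance it was
# today is tiny, so "nothing happened today" is almost as likely under the
# doom hypothesis as under the no-doom hypothesis.
p_quiet_given_doom = 1 - 1 / days_in_horizon
p_quiet_given_no_doom = 1.0

numerator = p_quiet_given_doom * prior_doom
posterior_doom = numerator / (numerator + p_quiet_given_no_doom * (1 - prior_doom))

print(f"prior:     {prior_doom:.6f}")      # 0.030000
print(f"posterior: {posterior_doom:.6f}")  # ~0.029997, essentially unchanged
# Under this toy model, even a decade of quiet days only pulls the estimate
# from about 3% down to about 2%.
```

Quiet days are nearly as probable under both hypotheses, so they carry almost no evidence either way; the belief isn't being stubbornly maintained, it's just sitting where the math leaves it.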
The Asymmetric Evidence Problem: Your belief can only be definitively falsified by actual extinction—but if extinction occurs, there's no one left to update. Meanwhile, every peaceful day confirms your prediction.
The Precautionary Ratchet: Finding new potential risk factors can increase your p(doom), but discovering that previous concerns were overblown doesn't proportionally decrease it—because there might be other risks you haven't thought of yet.
The Meta-Uncertainty Escape: Challenge the specific mechanisms? Retreat to meta-uncertainty: "Maybe I'm wrong about the details, but there's so much we don't know that the overall risk level seems right."
This creates what philosophers of science call a "degenerating research program"—a belief system that becomes progressively better at explaining away contrary evidence rather than making novel, testable predictions.
The genius of small probabilities is that they feel epistemically humble while actually being maximally resistant to revision. You're not claiming certainty—just concern! Who could argue with that?
Want to see apocalyptic thinking in action? Look at the curious case of "gain of function" research with AI—where the very people most concerned about catastrophic risk are systematically creating the conditions they claim to fear.
"The only major bioweapons discovery programs that led to pandemics were ostensibly due to prevention."
Think about this carefully. We're creating teams of brilliant researchers and engineers whose job is to figure out novel, dangerous capabilities in AI systems—ostensibly to prove whether these systems are safe. But what we're actually doing is red-teaming reality, systematically exploring the space of possible harms.
The pattern is almost too obvious to notice: in the name of proving the danger, we do the work of discovering it.
Normal people don't spend their time thinking about bioweapons blackmail or novel catastrophic attack vectors. But push hard enough on these technological frontiers "for safety," and you might just hyperstition them into existence.
The apocalyptic mindset creates a kind of prophetic recursion: the very act of taking the risk seriously and working to prevent it becomes one of the primary ways the risk gets realized.
Apocalyptic value systems don't just distort individual cognition—they create powerful social dynamics that reinforce themselves.
Once you're inside the doom community, every piece of news becomes evidence for your worldview. AI capabilities advance? That's evidence for short timelines and higher risk. AI capabilities stagnate? That's evidence that we're in a dangerous overhang period. Governments start regulating AI? That shows they're taking the threat seriously. Governments don't regulate AI? That shows how dangerously complacent they are.
The community develops its own language, its own heroes and villains, its own markers of tribal membership. You learn to spot the true believers by their casual use of terms like "s-risk" and "pivotal acts." You learn who's properly concerned (has the right p(doom)) and who's dangerously optimistic.
Doubt the framework? You're not thinking clearly about tail risks. Suggest that other problems might also matter? You don't understand the stakes. Point out that the predictions keep not coming true? You're suffering from normalcy bias.
This isn't conscious deception—it's something much more human and therefore much more dangerous. It's what happens when smart, well-meaning people organize their entire worldview around preventing a specific catastrophe. The catastrophe becomes not just a possibility to consider, but the lens through which everything else gets interpreted.
Perhaps most fundamentally, apocalyptic thinking commits a profound category error: it treats genuinely different types of catastrophic events as if they're all variants of the same thing.
Bioweapons discovery, AI alignment failure, nuclear war, climate change, asteroid impacts—these get lumped together under the banner of "existential risk" as if they're all points on the same spectrum. But they're not. They have different causal structures, different mitigation strategies, different timescales, different probabilities.
A bioweapons leak is not the same category of event as an AI alignment failure, any more than a house fire is the same category of event as a heart attack just because they can both kill you.
By treating all potential catastrophes as variations on "doom," we lose the ability to think clearly about any of them. We end up with solutions that are simultaneously over-general (trying to solve all possible risks) and under-specific (failing to address the particular dynamics of any single risk).
This is why so much existential risk thinking feels simultaneously urgent and useless—it's operating at the wrong level of abstraction, trying to solve "catastrophe in general" rather than specific, tractable problems.
What would it look like to think about these issues differently?
Start with this: instead of asking "What's your p(doom)?" try asking "What's your p(life)?"—the probability that you and the people you care about will be alive and flourishing in a few hundred years.
Suddenly the frame shifts. p(life) includes extinction risk, yes, but it also includes all the other ways people die—cancer, heart disease, aging, accidents. It includes all the ways life can be diminished even if it continues—poverty, oppression, meaninglessness.
p(life) forces you to balance risks against benefits instead of treating prevention of catastrophe as the only thing that matters. It integrates extinction risk rather than pedestalizing it.
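To see what the reframing does quantitatively, here is a back-of-the-envelope sketch of p(life) as a product of survival terms. Every hazard rate in it is a placeholder, and I've used a 50-year horizon rather than a few hundred years just to keep the ordinary-mortality term legible.

```python
# Back-of-the-envelope p(life): survival over a horizon as a product of
# (1 - hazard) terms from several causes. Every rate below is a placeholder,
# and the "ordinary" hazard is held flat even though real mortality rises
# steeply with age -- this is a framing sketch, not actuarial work.

horizon_years = 50
ordinary_annual_hazard = 0.008      # illness, accidents, aging, and so on
extinction_annual_hazard = 0.0003   # roughly a "1% over 30 years" doom estimate

p_no_ordinary_death = (1 - ordinary_annual_hazard) ** horizon_years
p_no_extinction = (1 - extinction_annual_hazard) ** horizon_years
p_life = p_no_ordinary_death * p_no_extinction

print(f"p(no ordinary death over {horizon_years} years): {p_no_ordinary_death:.2f}")
print(f"p(no extinction over {horizon_years} years):     {p_no_extinction:.2f}")
print(f"p(life over {horizon_years} years):              {p_life:.2f}")
# The extinction term is real, but it enters the product alongside everything
# else that determines whether you get a long, flourishing life.
```

In this toy version the ordinary hazards dominate the product; the doom term matters, but it is one factor among many rather than the denominator of everything.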
This isn't about being blasé about genuine risks. It's about maintaining the kind of cognitive flexibility that lets you take seriously both the possibility of catastrophe and the possibility of extraordinary flourishing. It's about building resilience rather than just preventing specific failures.
Most importantly, it's about remembering that we're not just trying to avoid bad futures—we're trying to create good ones. And good futures are built by people who can hold complexity, who can care about multiple things simultaneously, who haven't reduced the entire human project to a single variable.
The seductive power of apocalyptic thinking is that it offers clarity in an uncertain world. It tells you exactly what matters most, exactly how to prioritize your efforts, exactly who the good guys and bad guys are.
But this clarity comes at a cost: it requires you to flatten the irreducible complexity of existence into a simple story about survival versus extinction. It asks you to live as if the only thing that matters is preventing the worst case, rather than building toward the best.
The world is more interesting than that. The future is more open than that. And we—all of us, not just the self-appointed guardians of humanity's fate—deserve better frameworks for thinking about what comes next.
Sometimes the most radical thing you can do is refuse to be afraid.
Originally published September 2025. For more writing, visit jeremynixon.github.io