Extinction Bounties

Policy-based deterrence for the 21st century.

Policy-Research Disclaimer

Extinction Bounties publishes theoretical economic and legal mechanisms intended to stimulate scholarly and public debate on catastrophic-risk governance. The site offers policy analysis and advocacy only in the sense of outlining possible legislative or contractual frameworks.

  • No Legal or Financial Advice. Nothing here should be treated as a substitute for qualified legal counsel, financial due diligence, or regulatory guidance. Stakeholders remain responsible for ensuring their actions comply with the laws and professional standards of their own jurisdictions.
  • Exploratory & Personal Views. All scenarios, numerical examples and opinions are research hypotheses presented by the author in an academic capacity. They do not represent the views of the author’s employer, funding bodies, or any governmental authority.
  • Implementation Caveats. Any real-world adoption of these ideas would require democratic deliberation, statutory authority, and robust safeguards to prevent misuse. References to enforcement, penalties, or “bounties” are illustrative models, not instructions or invitations to engage in private policing or unlawful conduct.
  • No Warranty & Limited Liability. Content is provided “as is” without warranty of completeness or accuracy; the author disclaims liability for losses arising from reliance on this material.

By continuing beyond this notice you acknowledge that you have read, understood, and accepted these conditions.

Our 2-minute elevator pitch


Background Assumptions about Frontier AI Development

  1. AI progress is almost certainly the largest existential risk we face.
    Decades of expert analysis (e.g., the 80,000 Hours overview of AI risk) show that as AI capabilities accelerate, the probability of a single failure mode with civilization-ending potential rises sharply. By comparison, other risks (nuclear war, pandemics, climate change) are either better understood or more easily contained.

  2. It is almost certainly fundamentally impossible to align a superintelligence.

    • A breadth of arguments, from the inevitability of goal drift under recursive self-improvement to the “value loading” problem, suggests that no known technique can guarantee a superintelligent system will remain governed by human-compatible objectives.
    • To date, every proposed alignment scheme has met with robust impossibility results or counterexamples, while no universally accepted proof, or even a plausible blueprint, exists for solving alignment at superhuman scale.
  3. Many individuals sincerely believe there is a moral imperative to replace humanity with AI, and they offer serious philosophical arguments for this position.

    • Transhumanist or “digital utilitarian” philosophies argue that digital minds could attain vastly greater welfare than biological ones and should therefore replace humanity.
    • We respect that these individuals hold coherent, internally consistent positions; our goal is therefore not to “convert” them, but to ensure such views cannot monopolize the future without broad consent.
  4. The onus is on such individuals to convince a large supermajority of humankind (≫99%) before they take any irreversible steps to hasten humanity’s end.

    • Given the stakes, unilateral action by any small group or single actor is unacceptable.
    • We insist on transparent, democratic vetting of any proposal that would replace or jeopardize the human species, even if a philosophical case can be made for it.

Next steps: