Extinction Bounties

Policy-based deterrence for the 21st century.

Policy-Research Disclaimer

Extinction Bounties publishes theoretical economic and legal mechanisms intended to stimulate scholarly and public debate on catastrophic-risk governance. The site offers policy analysis and advocacy only in the sense of outlining possible legislative or contractual frameworks.

  • No Legal or Financial Advice. Nothing here should be treated as a substitute for qualified legal counsel, financial due-diligence, or regulatory guidance. Stakeholders remain responsible for ensuring their actions comply with the laws and professional standards of their own jurisdictions.
  • Exploratory & Personal Views. All scenarios, numerical examples and opinions are research hypotheses presented by the author in an academic capacity. They do not represent the views of the author’s employer, funding bodies, or any governmental authority.
  • Implementation Caveats. Any real-world adoption of these ideas would require democratic deliberation, statutory authority, and robust safeguards to prevent misuse. References to enforcement, penalties, or “bounties” are illustrative models, not instructions or invitations to engage in private policing or unlawful conduct.
  • No Warranty & Limited Liability. Content is provided “as is” without warranty of completeness or accuracy; the author disclaims liability for losses arising from reliance on this material.

By continuing beyond this notice you acknowledge that you have read, understood, and accepted these conditions.

04. Making It Global - Treaties, Coalitions, Extraditions

Fine-Insured Bounties (FIBs) sound effective in theory—but artificial intelligence development isn’t limited to any one city, state, or country. Without international cooperation, unsafe AI projects might simply move to jurisdictions without such tough rules. How do we stop this?

The solution lies in international coordination: turning FIBs from a national experiment into a global standard. There is reason to believe FIBs would be an easier sell than other proposed international treaties.

The Challenge of Global Enforcement

If just one or two countries adopt Fine-Insured Bounty policies, risky AI projects can simply shift elsewhere. Countries unwilling to slow AI progress might welcome these risky projects as economic opportunities or strategic advantages, creating dangerous AI “safe havens.”

To effectively prevent unsafe AI development, we need international agreements that discourage jurisdictional shopping.

Best case scenario: Global AI Safety Treaty

Imagine a global AI safety treaty—a multinational agreement similar in ambition to nuclear arms control treaties. Countries would jointly define prohibited AI activities, such as training exceptionally powerful AI systems without strict safety verifications.

Each signatory nation would commit to enacting FIB-style legislation at home: outlawing the prohibited activities and making violations punishable by large fines, paid out as bounties to whoever reports them.

This treaty would create a unified front. Suddenly, developing unsafe AI would become financially ruinous almost everywhere, dramatically shrinking the space available to bad actors.

Still workable: Coalitions of the Willing

But international treaties take time and negotiation. What if universal agreement proves difficult?

A smaller group of influential tech powers (perhaps the U.S., the EU, Canada, Japan, South Korea, and Australia) could lead by example, forming a coalition that implements coordinated FIB legislation. Even this partial coordination would send a strong global message.

This coalition would exert economic and diplomatic pressure, gradually encouraging holdout nations to join rather than risk isolation.

Tough love: Unilateral enforcement by a superpower

Even without full treaty cooperation, a single powerful nation could dramatically extend the reach of FIB enforcement through unilateral legal authority. The United States, for example, already applies laws extraterritorially in areas like anti-corruption (the Foreign Corrupt Practices Act), financial crime (using the U.S. dollar as a jurisdictional hook), and national security (through export controls and sanctions).

A similar model could apply here: the U.S. could declare that any person or organization operating within its borders—or doing business with its citizens—must comply with AI safety regulations, including FIB enforcement. Violators, including foreign nationals, would face enormous fines payable to whoever reports them, regardless of where the actual offense took place.

This approach would have real teeth for the same reasons existing extraterritorial enforcement does: access to U.S. markets, dollars, and technology is hard to forgo.

By asserting jurisdiction in this way, a single nation could effectively enforce a global bounty regime across a substantial portion of the AI ecosystem—especially if it invites global whistleblowers to participate.

The message becomes: If you operate anywhere near us, you play by these rules—or pay up.

Such a bold move would be risky, but certainly not unprecedented. It mirrors existing U.S. enforcement patterns and could serve as both a deterrent and a nudge toward wider international adoption. It seems to us very unlikely that such a keyhole escalation would lead to actual armed conflict, given that the scope at present affects only a few thousand top AI researchers worldwide. (All the more so because many of those researchers are themselves deeply concerned about the seeming inevitability of the doomsday scenario they are contributing to!) If done transparently and fairly, it might even be welcomed by nations not yet ready to implement FIBs themselves but who would benefit from the risk reduction such enforcement brings.

Encouraging Compliance Through Market Forces

With or without full global consensus, private market forces could push toward international alignment on this keyhole policy with the support of even a single great power. Insurance companies operating internationally would standardize safety practices to minimize their financial risks, effectively spreading high safety standards globally. Companies needing insurance would face escalating premiums if they tried to evade stringent requirements, reinforcing global norms.
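The deterrence logic above can be sketched as a toy expected-cost comparison. All figures and probabilities below are hypothetical illustrations, chosen only to show how insurer-priced fine exposure can make evasion costlier than compliance:

```python
# Toy model of FIB deterrence via insurance pricing. All numbers are
# hypothetical illustrations, not estimates of real premiums or fines.

def expected_cost(premium, fine, detection_prob):
    """Expected annual cost (USD millions): premium paid up front,
    plus the fine weighted by the chance someone claims the bounty."""
    return premium + fine * detection_prob

# A compliant lab pays a higher premium but carries no fine exposure.
compliant = expected_cost(premium=20, fine=0, detection_prob=0.0)

# An evading lab pays less offshore, but a bounty-backed fine means any
# insider, competitor, or foreign whistleblower can trigger the penalty.
evader = expected_cost(premium=5, fine=1000, detection_prob=0.1)

print(f"Compliant lab expected cost: ${compliant:g}M")
print(f"Evading lab expected cost:   ${evader:g}M")
assert evader > compliant  # evasion is costlier in expectation
```

Anything that raises the detection probability, such as inviting whistleblowers worldwide to claim bounties, widens this gap, which is the coalition's main lever.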

Conclusion: A Global Safety Network

AI is inherently global. Effective control requires broad international participation. Fine-Insured Bounties offer a powerful enforcement tool—but they depend on some limited level of international cooperation.

With careful treaty-making, influential coalitions, and market-driven alignment, the international community can close loopholes and ensure global compliance, safeguarding humanity from reckless AI development.

Next, in Compare and Contrast - FIBs vs Other Tools, we briefly compare FIBs with similar tools being proposed for policy-based deterrence and explain why we think FIBs win out.