06. The Catches - Collusion, False Tips, and Chilling Effects

Fine-Insured Bounties (FIBs) promise a powerful way to deter dangerous activities—from littering to reckless AI development. But no mechanism is foolproof. Let’s examine three critical challenges that must be addressed: collusion, false accusations, and innovation chilling.

1. Bounty Hunter Collusion

Here’s the problem: the bounty hunter who discovers wrongdoing might prefer negotiating quietly with offenders rather than reporting them. Imagine a bounty hunter discovers a lab secretly training a dangerous AI model. The violation would trigger a $50 million fine, of which the hunter might officially claim $10 million. But what if the lab offers the hunter $15 million to stay silent?

This type of collusion, a form of bounty-hunter blackmail, is a serious threat because it undermines the whole system: instead of deterring violations, the bounty becomes leverage for a secretive black-market negotiation.
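To make the incentive problem concrete, here is a minimal sketch in Python, using the dollar figures from the example above; the 20% bounty share and the hush offer are illustrative assumptions, not calibrated parameters.

```python
# Payoff comparison for the collusion scenario, using the figures
# from the example above. The 20% bounty share is an assumption.

FINE = 50_000_000        # fine the lab would owe if convicted
BOUNTY_SHARE = 0.20      # hunter's official share of the fine (assumed)

def report_payoff(fine: float, share: float) -> float:
    """Hunter's payoff from reporting: the official bounty."""
    return fine * share

bounty = report_payoff(FINE, BOUNTY_SHARE)  # $10,000,000

# The lab avoids the full $50M fine by buying silence, so it can
# profitably offer any amount between the bounty and the fine.
hush_offer = 15_000_000

print(f"Report:  ${bounty:,.0f}")
print(f"Collude: ${hush_offer:,.0f}")
print("Collusion pays more." if hush_offer > bounty else "Reporting pays more.")
```

The uncomfortable arithmetic is that any hush payment between the bounty and the full fine leaves both the lab and the hunter better off than reporting, which is exactly the gap the safeguards below must close.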

Possible solutions include:

  • Making collusion itself a finable, bounty-eligible offense, so that either party to a hush-money deal (or any third party who uncovers it) can profit by exposing it.
  • Setting the hunter’s share of the fine high enough that a credible hush payment rarely beats the official bounty.
  • Keeping the underlying violation reportable indefinitely, so a paid-off hunter can never guarantee the lab that no one else will claim the bounty later.

2. False or Frivolous Tips

Another risk is false or frivolous bounty claims. Opportunistic actors might file dubious accusations, clogging courts, draining investigative resources, and harassing innocent organizations.

But there’s built-in protection: bounty hunters receive payment only if the accusation leads to a conviction. This strongly discourages completely baseless claims.
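One way to see why conviction-contingent payment works: a tip pays off only with the probability that it actually produces a conviction, while the cost of preparing and filing it is paid up front. Here is a minimal expected-value sketch, with all probabilities and costs as hypothetical assumptions.

```python
# Expected value of filing a claim when payment is contingent on
# conviction. All probabilities and costs are hypothetical.

def claim_ev(p_conviction: float, bounty: float, filing_cost: float) -> float:
    """Expected payoff: bounty paid only on conviction, cost paid regardless."""
    return p_conviction * bounty - filing_cost

BOUNTY = 10_000_000   # bounty at stake (from the running example)
COST = 50_000         # assumed cost of evidence-gathering and filing

solid = claim_ev(p_conviction=0.60, bounty=BOUNTY, filing_cost=COST)
baseless = claim_ev(p_conviction=0.001, bounty=BOUNTY, filing_cost=COST)

print(f"Well-evidenced tip EV: ${solid:,.0f}")     # strongly positive
print(f"Baseless tip EV:       ${baseless:,.0f}")  # negative
```

Under these assumptions a well-evidenced tip is worth millions in expectation, while a near-hopeless one loses money, so rational claimants self-select toward real violations.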

To further reduce frivolous claims:

  • Require claims to clear an evidentiary threshold before they trigger a full investigation.
  • Penalize claims shown to be made in bad faith, for example via a filing bond that is forfeited if the claim proves frivolous.
  • Let courts or regulators dismiss duplicative or plainly harassing claims early in the process.

While false tips are possible, careful policy design can keep them manageable.

3. Over-Chilling Innovation

Perhaps the biggest concern is that FIB systems could inadvertently “over-chill” innovation. If fines are massive and violations poorly defined, researchers might avoid any advanced or boundary-pushing work out of fear of accidentally breaking the rules.

This chilling effect is especially problematic for beneficial research, including work directly aimed at improving AI safety itself.
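The chilling effect can be framed in the same expected-value terms: a researcher proceeds only if a project’s value exceeds the probability of a (possibly mistaken) violation finding times the fine. Here is a minimal sketch with hypothetical parameters, showing how vague rules plus massive fines can deter even clearly legitimate work.

```python
# A researcher proceeds only if the project's value exceeds the
# expected penalty. All parameters are hypothetical.

def proceeds(project_value: float, p_violation_finding: float, fine: float) -> bool:
    """True if the expected penalty (probability x fine) is worth bearing."""
    return project_value > p_violation_finding * fine

VALUE = 2_000_000    # value of a legitimate safety project (assumed)
FINE = 50_000_000    # the FIB fine if judged in violation

# Precise rules: a careful researcher faces ~0.1% risk of a mistaken
# violation finding. Vague rules: even a 10% risk swamps the decision.
print(proceeds(VALUE, p_violation_finding=0.001, fine=FINE))  # True  -> proceed
print(proceeds(VALUE, p_violation_finding=0.10, fine=FINE))   # False -> chilled
```

The lesson is that the chilling effect is driven less by the size of the fine than by how uncertain researchers are about what counts as a violation, which is what the remedies below target.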

To address this:

  • Define violations narrowly and precisely, so researchers know in advance what conduct is prohibited.
  • Scale fines to the severity and recklessness of the conduct, rather than attaching catastrophic penalties to borderline cases.
  • Provide safe harbors or pre-approval channels for legitimate work, especially research aimed at improving AI safety.

The goal is balance: deterring reckless danger without paralyzing legitimate, responsible research.

Conclusion: Careful Design Needed

None of these challenges is insurmountable. Each demands thoughtful legal, economic, and practical design. Collusion can be discouraged, false accusations minimized, and innovation protected, all through careful policy crafting.

The fine-insured bounty concept is powerful precisely because it’s flexible. With the right safeguards, it remains a compelling mechanism to deter high-stakes wrongdoing, especially in fields as critical as artificial intelligence.

We’re almost done! Have we convinced you? If so, proceed to Next Steps - Pilots, Law Drafts, and Public Buy-In.