Background Assumptions about Frontier AI Development
AI progress is almost certainly the largest existential risk we face.
Decades of expert analysis (e.g., the 80,000 Hours overview of AI risk) show that as AI capabilities accelerate, the probability of a single failure mode with civilization-ending potential rises dramatically. In comparison, other risks (nuclear, pandemics, climate) are either better understood or more easily contained.
It is almost certainly fundamentally impossible to align a superintelligence.
- A range of arguments, from the inevitability of goal drift under recursive self-improvement to the "value loading" problem, suggests that no known technique can guarantee a superintelligent system will retain human-compatible objectives.
- To date, every proposed alignment scheme has been met with robust impossibility results or counterexamples, while no universally accepted proof, or even a plausible blueprint, exists for solving alignment at superhuman scale.
Many individuals sincerely believe there is a moral imperative to replace humanity with AI, and they can marshal serious philosophical arguments for this position.
- Transhumanist or "digital utilitarian" philosophies argue that the welfare of digital minds could vastly exceed that of biological beings, and that digital minds should therefore replace humanity.
- We respect that these individuals hold coherent, internally consistent positions—so our goal is not to “convert” them, but to ensure such views cannot monopolize the future without broad consent.
The onus is on such individuals to convince a large supermajority of humankind (≫99%) before they take any irreversible steps to hasten humanity’s end.
- Given the stakes, unilateral action by any small group or single actor is unacceptable.
- We insist on transparent, democratic vetting of any proposal that would replace or jeopardize the human species—even if a philosophical case can be made for it.
Next steps:
- Solicit community feedback or counter-arguments.
- Link to key papers and talks under each bullet.
- Decide where to host detailed subpages (e.g., “Impossibility Proofs” or “Transhumanist Ethics”).