There is a simple truth: humanity’s extinction is possible. Recent history has shown us another truth: we can create artificial intelligence (AI) that can rival humanity.
While most AI development is beneficial, artificial superintelligence threatens humanity with extinction. We currently have no method to control an entity with greater intelligence than us, no ability to predict the intelligence of advanced AIs before developing them, and only very limited methods to accurately measure their competence after development.
We now stand at a time of peril. Companies across the globe are investing to create artificial superintelligence that they believe will surpass the collective capabilities of all humans. They publicly state that it is not a matter of “if” such superintelligence might exist, but “when”.
We do not know how to control AI vastly more powerful than us. Should attempts to build superintelligence succeed, this would risk our extinction as a species. But humanity can choose a different future: there is a narrow path through.
Beyond that narrow path lies a new and ambitious future: one driven by human advancement and technological progress, where humanity fulfills the dreams and aspirations of our ancestors to end disease and extreme poverty, achieves virtually limitless energy, lives longer and healthier lives, and travels the cosmos. That future requires us to remain in control of that which we create, including AI.
We are currently on an unmanaged and uncontrolled path towards the creation of AI that threatens the extinction of humanity. This document is our effort to comprehensively outline what is needed to step off that dangerous path and tread an alternate path for humanity.
To achieve these goals, we have developed proposals intended for action by policymakers, split into three Phases:
Phase 0: Safety
Focus: Build up our defenses to restrict the development of artificial superintelligence.
Goal: Prevent the development of artificial superintelligence for 20 years.
Actions: New institutions, legislation, and policies that countries should implement immediately to prevent the development of AI we cannot control. Correctly executed, these measures should prevent anyone from developing artificial superintelligence for the next 20 years.

Phase 1: Stability
Focus: Once the immediate danger has been halted, build a stable international system.
Goal: Build an international AI oversight system that does not collapse over time.
Actions: International institutions that ensure measures to control the development of AI do not collapse under geopolitical rivalries or rogue development by state and non-state actors. Correctly executed, these measures should produce a stable international AI oversight system that endures over time.

Phase 2: Flourishing
Focus: With a stable system in place and humanity secure, build transformative AI technology under human control.
Goal: Build controllable, transformative AI.
Actions: With the development of rogue superintelligence prevented and a stable international system in place, humanity can focus on the scientific foundations of transformative AI under human control: a robust science and metrology of intelligence, safe-by-design AI engineering, and the other prerequisites for keeping transformative AI under human control.