Support

"I support this plan. We can such amazing innovation with AI if we refrain from building a smarter-than-human bot species that we don't know how to control."

"Lots of interesting and concrete ideas here on avoiding existential risk from AI. The authors isolate the biggest risks in superintelligent systems, proposing methods for ensuring these aren't built until we are ready — while enjoying the fruits of regular AI systems until then."

"I don’t happen to agree with every assumption in here, but even so this plan seems a hell of lot better than letting some wealth, power-hungry individual with a truthiness problem decide the fate of humanity.

Definitely worth a read; comments open to all."

Get Updates

Sign up to our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.

If you have feedback on The Plan or want to know how you can help support it, please get in touch with us directly.