Media
Support
"I support this plan. We can achieve such amazing innovation with AI if we refrain from building a smarter-than-human bot species that we don't know how to control."
"Lots of interesting and concrete ideas here on avoiding existential risk from AI. The authors isolate the biggest risks in superintelligent systems, proposing methods for ensuring these aren't built until we are ready — while enjoying the fruits of regular AI systems until then."
"I don't happen to agree with every assumption in here, but even so this plan seems a hell of a lot better than letting some wealthy, power-hungry individual with a truthiness problem decide the fate of humanity.
Definitely worth a read; comments open to all."