What success looks like
If Phases 0, 1, and 2 all succeed and are fully implemented, humanity will be in a stable situation with international coordination and Safe Transformative AI: AI systems that can automate any intellectual and physical task while still remaining under our control. Such civilization-altering technology will bring about mass-scale automation and, through it, unlock many options for the future of humanity. However, this technology will not bring with it the wisdom required to wield this newfound power well.
This section thus maps the upsides and challenges that can already be anticipated, in order to start the conversation about how to handle them, and how to improve the wisdom of human civilization to the point where it can do so reasonably.
What Safe Transformative AI Unlocks
The thrust of Safe Transformative AI’s impact on human civilization is the possibility of automating all intellectual and physical labor. AI and robotics are much easier to mass-produce than humans, can be replaced or broken without moral issues, and are far more efficient: they do not need to rest, have no emotions that get in the way of thinking, and require no narrative justification for their tasks. This leads to a broad trend of acceleration and progress across the board.
First, all work that humanity wants to automate will be automated. Having humans involved in work rather than machines will be a political choice rather than one dictated by necessity: it will no longer be bottlenecked by technology. This includes dangerous work (firefighting, nuclear waste disposal), unpleasant work (cleaning, garbage collection), and boring repetitive work (data entry, writing many personalized emails). It might include literally any kind of work, but it does not have to. Such across-the-board automation of work will completely change the way society works, and what activities people participate in.
Automation of physical tasks will also unlock significant progress in manufacturing: increased efficiency, scale, and resource utilization. This will lead both to major advances in fundamental manufacturing processes, including production at massive scale and vastly better materials, and to an abundance of physical goods built on those advances. Automation will push these to the point where the main bottleneck becomes policy and regulation rather than technological capability.
In general, scientific and technological progress will be accelerated through the automation and parallelization of all scientific and engineering intellectual tasks. This will yield benefits in fields as varied as medicine (developing and testing new drugs much faster), energy (unlocking new forms of renewable energy), and the social sciences (building much higher-quality theories of economic, sociological, and psychological processes).
Lastly, beyond simply automating and improving what humanity is already doing, Safe Transformative AI offers a path towards tackling problems that have been blocked by technological and resource constraints. Two of the most salient and widely discussed examples are aging and space exploration.
The effort to curtail and even reverse aging has recurred throughout human history, aiming to reduce the senescence and pain that plague humans as they age and keep them from spending much time with their grandchildren and other descendants. But it is blocked by our lack of understanding of the body and its aging processes. The scientific automation enabled by Transformative AI promises to shed light on these missing pieces, enabling technological solutions to aging.
As for space exploration, there has been a push for it over the last few centuries, from early science fiction to the Apollo Program and SpaceX’s work. Expanding across the cosmos would increase our room for growth, our resources, and many other things humanity cares about. Yet progress on this path has been held back mostly by technological and resource constraints: space exploration requires means of space travel that are fast, resource-efficient, and safe for human life, as well as ways to terraform new planets. The automation of engineering and science will unlock many of the manufacturing, scientific, and engineering insights and tools required, making space exploration a real option.
The Challenges Left
As discussed above, Safe Transformative AI will unlock a wealth of opportunities for improving human lives and flourishing by allowing the automation of all intellectual and physical labor, thus creating abundant resources and leisure opportunities and accelerating technical and scientific progress.
Yet these extraordinary achievements must not be confused with a panacea that solves all the problems of human civilization. Not only are there problems that cannot be fully addressed by technological progress, but progress itself generates whole new challenges and exacerbates existing ones. Here are the most obvious and salient ones, with the understanding that more will emerge that cannot be predicted now:
First, although Safe Transformative AI will create an abundance of resources through manufacturing and technological acceleration, this does not address the question of how these resources are distributed. Notably, there is a risk that they will accrue only to a select few who own the means of automation, drastically increasing inequality in society. This is first an obvious moral issue: such a situation could mean that the vast majority of people live at near-subsistence levels, potentially without access to trivial-to-generate energy and medicine. But it is also a massive structural problem: any world where the vast majority of resources are concentrated in the hands of a few, whoever those few are, is not going to be economically and institutionally stable.
These are questions about how humanity organizes society, not technical problems. As such, they will not be addressed by Safe Transformative AI, but need to be discussed and solved through global coordination, policy, and regulation.
Even if the abundance of goods and resources created by Safe Transformative AI is distributed in a satisfying way, different people want conflicting things, in ways that require some sort of trade-off and compromise. The simplest case is that of positional goods: if multiple people want to be “the richest person on earth” or “the special someone of a certain famous person”, there is no solution where everyone gets what they want, because there can be only one of each at a time. Furthermore, people have genuine differences in their beliefs about how individual and social life should be arranged: trade-offs between equality and efficiency (or between different interpretations of equality), religious beliefs, civic symbolism, and more.
These are fundamental problems that will not be solved by technology even in the limit, because there is no “solution”: the constraints contradict each other. Instead, what is needed is a compromise.
These disagreements will be exacerbated by the fact that Safe Transformative AI unlocks far more opportunities than can all be pursued at the same time. That is, even the automation of all intellectual and physical labor will not remove the need to prioritize how this automation and the resources it generates are used.
Notably, at each point humanity will need to decide how much of its resources to dedicate to exploration versus exploitation: investing more in new fundamental science, the exploration of space, and new forms of engineering, versus exploiting the technology developed so far to ensure that every disease that can currently be cured is cured for everyone, and that every single person has the minimum level of resources necessary for flourishing. People’s fundamental disagreements about the relative value of these priorities, of exploring versus exploiting, will mean that any decision requires compromise. And once again, technology cannot solve this coordination problem.
Even more problematic is the fact that human civilization currently lacks the wisdom to know how to use, or refrain from using, the technologies that Safe Transformative AI will unlock. Humanity is already unable to address the mild threats to culture, political life, and mental health caused by existing social networks; how is it supposed to cope with future digital worlds and simulations that will be far more convincing, satisfying, and meaningful than reality?
And attractive digital simulations are only the tip of the iceberg. How should humanity act on the expected ability to edit people’s brains and personalities in ways that fundamentally change what they want? How should it bring into existence, shape, regulate, and control technologies that make it ever easier to cause damage, such as cheap synthetic biology or in-your-backyard nuclear fusion? What about scientific innovations that unlock more dangerous forms of AI, with greater risks but even more impressive benefits?
Dealing with all of these new opportunities and risks demands progress in humanity’s wisdom: its ability to pick the branches of the tech tree that empower humans rather than lead to self-destruction. It means a humanity capable of coordinating around these decisions, preventing adversarial threats and defection. Technology cannot help there: let’s get to work.