Close the Gates

How we can keep the future human by choosing not to develop superhuman general-purpose artificial intelligence.

17 Nov 2023

By Anthony Aguirre

View on SSRN


Executive Summary

Dramatic advances in artificial intelligence over the past decade (for narrow-purpose AI) and the last several years (for general-purpose AI) have transformed AI from a niche academic field into the core business strategy of many of the world’s largest companies, with hundreds of billions of dollars in annual investment in the techniques and technologies for advancing AI’s capabilities.

We now come to a critical juncture. As the capabilities of new AI systems begin to match and exceed those of humans across many cognitive domains, humanity must decide: how far do we go? Our current trajectory, and implicit choice, is an unchecked race toward ever-more powerful and high-risk systems, driven by the economic incentives of a few huge technology companies seeking to automate large swathes of current economic activity and human labor. But we can make another choice: via our governments, we can take control of the AI development process and impose clear limits – lines we won’t cross and things we simply won’t do – as we have for nuclear technologies, weapons of mass destruction, space weapons, environmentally destructive processes, the bioengineering of humans, and eugenics.

This essay argues that we should close the gates to developing superhuman general-purpose AI – sometimes called “AGI” – and especially to the highly-superhuman version sometimes called “superintelligence.” In particular it makes the following claims.

We are at the threshold of creating expert-competitive and superhuman general-purpose AI systems, on a timescale that could be as short as a few years. While predictions and definitions are contentious, the evidence points in this direction:

The idea that superhuman AI is decades or more away is simply no longer tenable for the vast majority of experts in the field. Disagreements now concern how many months or years it will take.

Such “outside the Gates” systems pose profound risks to humanity, ranging from, at minimum, wholesale disruption of society to, at maximum, permanent human disempowerment or extinction. These risks can be grouped into a few major classes.

Many of these risks remain even if the technical “alignment” problem – ensuring that advanced AI reliably does what humans want it to do – is solved: AI presents an enormous management challenge, and many aspects of that management become extremely difficult or intractable once the threshold of human intelligence is breached.

Most fundamentally, the type of superhuman general-purpose AI currently being pursued would, by its very nature, have goals, agency, and capabilities exceeding our own. It would not be a technological tool for human use, but a second species of intelligence on Earth alongside ours. If allowed to progress further, it would constitute not just a second species but a successor species. Perhaps it would treat us well, perhaps not. But the future would belong to it, not us.

AI has enormous potential benefits. However, humanity can reap nearly all of the benefits we really want from AI with systems inside the Gates. The Sustainable Development Goals are a set of 17 consensus aims for humanity’s self-improvement. They are, in effect, the progress that humanity has expressed that it wants. All of them are addressable with inside-the-Gates AI. Current general AI can be improved for accuracy, reliability, and trustworthiness, and combined with powerful narrow capabilities, such as in medicine or scientific simulation. Such AI could provide transformative capabilities for real problems such as medicine and longevity, new materials design, better democratic and deliberative processes, and personalized education. Not among the Sustainable Development Goals are items such as human immortality, colonizing the galaxy, developing powerful nanotechnology, mind uploads, and so on. These will take a long time with inside-the-Gates AI. But even if we want them, are we in such a rush? Many of these are also double-edged technologies with large risks. If there are benefits that can only be realized with superhuman systems, we can always choose, as a species, to develop those systems later, once we judge them to be sufficiently – and preferably provably – safe. Once we develop them, there is very unlikely to be any going back, and such a one-way decision should be made via a robust and inclusive process involving humanity as a whole, not by a few Silicon Valley tech titans.

Systems inside the Gates will still be very disruptive and pose a large array of risks – but these risks are potentially manageable with good governance. We should not underestimate either the benefits or the risks of human-competitive and narrow AI. If managed well, such systems are likely to turbocharge the economy, though they will cause major social disruption while we work out how to do so. But it is possible: we have examples of both poor and excellent management of powerful new technologies, and if we can keep the Gates closed, AI will be another technological tool rather than a successor species.

Finally, we not only should but can implement a “Gate closure.” The creation of superhuman AI is far from inevitable. It could be prevented tomorrow by a US executive order requiring that AI model training runs exceeding 10^26 FLOP of computation make an affirmative and convincing case, prior to training, that they will be safe and beneficial – rather than the mere notification currently required. The EU and China should follow suit. This is a standard applied to other scientific experiments, but the nature of current techniques means that for giant AI experiments such a case cannot yet be made (which is the heart of the problem). Maintaining such a requirement would take verification and enforcement, and this can be done via the extremely tight supply chain for the AI-specialized computation needed by large training runs, which passes through a handful of companies. While these chips have many uses, governance could be done in a flexible and intelligent way using on-chip security mechanisms much like those already used in consumer products and present (but largely unused) in cutting-edge AI chips. In the longer term, containing the power and risk of AI systems would require international coordination, just as controlling the proliferation of nuclear weapons does now.
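To make the 10^26 FLOP threshold concrete, here is a minimal sketch of how a planned training run might be checked against it. It relies on the widely used rule of thumb that training a dense transformer costs roughly 6 × (parameters) × (training tokens) FLOP; that heuristic and the example numbers are illustrative assumptions, not part of the essay.

```python
# Rough check of whether a planned training run would cross the
# 10^26 FLOP threshold discussed above. Uses the common heuristic that
# training a dense transformer costs ~6 * N * D FLOP, where N is the
# parameter count and D is the number of training tokens. The heuristic
# and the example numbers below are illustrative assumptions only.

THRESHOLD_FLOP = 1e26  # threshold proposed in the essay


def training_flop_estimate(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP (~6 * params * tokens)."""
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute crosses the 10^26 FLOP gate."""
    return training_flop_estimate(n_params, n_tokens) >= THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical run: a 1-trillion-parameter model trained on 20 trillion tokens.
    params, tokens = 1e12, 2e13
    flop = training_flop_estimate(params, tokens)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    print("Exceeds 10^26 FLOP threshold:", exceeds_threshold(params, tokens))
```

Under these assumptions the hypothetical run comes out at roughly 1.2 × 10^26 FLOP, and so would need to make an affirmative safety case before training rather than merely notify regulators.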

People want the good that comes from AI: useful tools that empower them, economic opportunities and growth, potential for breakthroughs in science, technology, and education. Why wouldn’t they? But when asked, overwhelming majorities of the general public want slower and more careful AI development, and do not want superhuman AI that will replace them in their jobs and elsewhere, fill their culture and information commons with non-human content, concentrate power in a tiny set of corporations, and pose extreme large-scale global risks. Why would they?

We can have one without the other. Let’s close the Gates.


Read the full essay

The full essay is available via the SSRN link above.