Statement on Superintelligence
64,425 signatures
Including 36,402 collected through the same petition hosted by Ekō
Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside such tools, many leading AI companies have the stated goal of building, in the coming decade, superintelligence that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns ranging from human economic obsolescence and disempowerment, to losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.
For corrections, technical support, or press enquiries, please contact letters@futureoflife.org
Statement
We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.
Key polling results on Superintelligence
[Poll result figures: 5%, 64%, 73%]
Comments from signatories
Yoshua Bengio
> Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.
Sir Stephen Fry
> To get the most from what AI has to offer mankind, there is simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far. By definition this would result in a power that we could neither understand nor control.
Mary Robinson
> AI offers extraordinary promise to advance human rights, tackle inequality, and protect our planet, but the pursuit of superintelligence threatens to undermine the very foundations of our common humanity. We must act with both ambition and responsibility by choosing the path of human-centred AI that serves dignity and justice.
Johnnie Moore
> We should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control. Creating superintelligent machines is not only unacceptably dangerous and immoral, but also completely unnecessary.
Prince Harry, Duke of Sussex
> The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer.
Joseph Gordon-Levitt
Actor, Filmmaker
> Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence’.
Stuart Russell
> This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?
Walter Kim
> If we race to build superintelligence without clear and morally informed parameters, we risk undermining the incredible potential AI has to alleviate suffering and enable flourishing. We should intentionally harness this amazing technology to help people, not rush to build machines and mechanisms we cannot control.
Yuval Noah Harari
> Superintelligence would likely break the very operating system of human civilization - and is completely unnecessary. If we instead focus on building controllable AI tools to help real people today, we can far more reliably and safely realize AI’s incredible benefits.
Mark Beall
> When AI researchers warn of extinction and tech leaders build doomsday bunkers, prudence demands we listen. Superintelligence without proper safeguards could be the ultimate expression of human hubris—power without moral restraint.
Public statements by non-signatories
Rep. Don Beyer
> We won’t realize AI’s promising potential to improve human life, health, and prosperity if we don’t account for the risks. Developers and policymakers must consider the potential danger of artificial superintelligence raised by these leading thinkers.
Sam Altman
CEO, OpenAI
> Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.
Dario Amodei
CEO, Anthropic
> I think there’s a 25% chance that things go really, really badly.
Mustafa Suleyman
> Until we can prove unequivocally that it is [safe], we shouldn’t be inventing it.
David Sacks
> AI is a wonderful tool for the betterment of humanity. AGI is a potential successor species.
Elon Musk
> I think the probability of a good outcome is like 80% likely... only 20% chance of annihilation.