Over 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call for binding international measures against dangerous AI uses on Monday morning.
Warning that AI’s “current trajectory presents unprecedented dangers,” the statement, titled the Global Call for AI Red Lines, argues that “an international agreement on clear and verifiable red lines is necessary.” The open letter urges policymakers to enact this accord by the end of 2026, given the rapid progress of AI capabilities.
Nobel Peace Prize Laureate Maria Ressa announced the letter in her opening speech at the United Nations General Assembly’s High-Level Week Monday morning. She implored governments to come together to “prevent universally unacceptable risks” from AI and to “define what AI should never be allowed to do.”
In addition to Nobel Prize recipients in Chemistry, Economics, Peace and Physics, signatories include celebrated authors like Stephen Fry and Yuval Noah Harari as well as former heads of state, including former President Mary Robinson of Ireland and former President Juan Manuel Santos of Colombia, who won the Nobel Peace Prize in 2016.
Geoffrey Hinton and Yoshua Bengio, recipients of the prestigious Turing Award and two of the three so-called ‘godfathers of AI,’ also signed the open letter. The Turing Award is often regarded as the Nobel Prize for the field of computer science. Hinton left a prestigious position at Google two years ago to raise awareness about the dangers of unchecked AI development.
The signatories hail from dozens of countries, including AI leaders like the United States and China.
“For thousands of years, humans have learned, sometimes the hard way, that powerful technologies can have dangerous as well as beneficial consequences,” Harari said. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”
The open letter comes as AI attracts increasing scrutiny. In just the past week, AI made national headlines for its use in mass surveillance, its alleged role in a teenager’s suicide, and its ability to spread misinformation and even undermine our shared sense of reality.
However, the letter warns that today’s AI risks could quickly be overshadowed by more devastating and larger-scale impacts. For example, the letter references recent claims from experts that AI could soon contribute to mass unemployment, engineered pandemics and systematic human-rights violations.
The letter stops short of providing concrete recommendations, saying government officials and scientists must negotiate where red lines fall in order to secure international consensus. However, the letter offers suggestions for some limits, like prohibiting lethal autonomous weapons, autonomous replication of AI systems and the use of AI in nuclear warfare.
“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly,” said Ahmet Üzümcü, the former director-general of the Organization for the Prohibition of Chemical Weapons (OPCW), which was awarded the 2013 Nobel Peace Prize during his tenure.
As a sign of the effort’s feasibility, the statement points to similar international agreements that established red lines in other dangerous arenas, like prohibitions on biological weapons or ozone-depleting chlorofluorocarbons.
Warnings about AI’s potentially existential threats are not new. In March 2023, more than 1,000 technology researchers and leaders, including Elon Musk, called for a pause in the development of powerful AI systems. Two months later, leaders of prominent AI labs, including OpenAI’s Sam Altman, Anthropic’s Dario Amodei and Google DeepMind’s Demis Hassabis, signed a one-sentence statement that advocated for treating AI’s existential risk to humanity as seriously as threats posed by nuclear war and pandemics.
Altman, Amodei and Hassabis did not sign the latest letter, though prominent AI researchers like OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow did.
Over the past few years, leading American AI companies have often signaled a desire to develop safe and secure AI systems, for example by signing a safety-focused agreement with the White House in July 2023 and joining the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024. However, recent research has shown that, on average, these companies are only fulfilling about half of those voluntary commitments, and global leaders have accused them of prioritizing profit and technical progress over societal welfare.
Companies like OpenAI and Anthropic also voluntarily allow the Center for AI Standards and Innovation, a federal office focused on American AI efforts, and the United Kingdom’s AI Security Institute to test and evaluate AI models for safety before their public release. Yet many observers have questioned the effectiveness of such voluntary collaboration and pointed to its limitations.
Though Monday’s open letter echoes past efforts, it differs by arguing for binding limitations. It is also the first such effort to feature Nobel Prize winners from a wide range of scientific disciplines; signatories include biochemist Jennifer Doudna, economist Daron Acemoglu and physicist Giorgio Parisi.
The release of the letter came at the beginning of the U.N. General Assembly’s High-Level Week, during which heads of state and government descend on New York City to debate and lay out policy priorities for the year ahead. The U.N. will launch its first diplomatic AI body on Thursday in an event headlined by Spanish Prime Minister Pedro Sánchez and U.N. Secretary-General António Guterres.
Over 60 civil-society organizations from around the world also endorsed the letter, from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance.
The Global Call for AI Red Lines is organized by a trio of nonprofit organizations: the Center for Human-Compatible AI, based at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.