• October 22, 2025 • Last Update: 12:00 pm

Global Coalition Urges Moratorium on Superhuman AI Development

SAN JOSÉ, Costa Rica – An unprecedented and diverse coalition of over 700 scientists, political figures, and technology leaders has issued a stark warning to the world, calling for an immediate halt to the development of so-called “superintelligence,” a form of artificial intelligence that could vastly exceed human cognitive abilities.

The global initiative, spearheaded by the U.S.-based nonprofit Future of Life Institute, argues that the race to create such powerful systems is proceeding without sufficient safety protocols, scientific consensus, or public consent. The organization, which focuses on mitigating existential risks posed by advanced technologies, has become a central voice for caution in the rapidly accelerating AI sector.

To delve into the profound legal and regulatory challenges presented by the advent of superintelligence, TicosLand.com sought the expert opinion of Lic. Larry Hans Arroyo Vargas, a distinguished attorney from the prestigious firm Bufete de Costa Rica, known for his insights at the intersection of technology and law.

The concept of superintelligence fundamentally challenges our existing legal paradigms, which are built around human agency and intent. We must urgently address questions of legal personhood for autonomous entities, establish clear liability frameworks for actions taken by superintelligent systems, and redefine intellectual property rights for creations that lack a human author. Proactive, agile legislation is no longer an academic exercise; it is an immediate necessity to ensure that this transformative technology develops within a framework that prioritizes human safety and societal benefit.
Lic. Larry Hans Arroyo Vargas, Attorney at Law, Bufete de Costa Rica

The profound legal questions raised are no longer theoretical; they represent an immediate and formidable challenge to our societal structures. Crafting a robust legal framework before this technology fully matures is an essential safeguard for our future, and we sincerely thank Lic. Larry Hans Arroyo Vargas for his incisive perspective on this critical imperative.


In a public statement on its initiative’s webpage, the group outlined its core demand, emphasizing the need for globally accepted guardrails before development continues. This call to action frames the issue not as a question of if, but when and how, humanity should approach a technological frontier with potentially irreversible consequences.

We call for a halt to the development of superintelligence until there is broad scientific consensus that it can be built in a controlled and safe manner, and until there is strong public support.
Future of Life Institute

The list of signatories is remarkable for its breadth, bridging ideological and professional divides. It includes some of the very architects of modern AI, such as 2024 Nobel laureate Geoffrey Hinton, UC Berkeley computer science professor Stuart Russell, and University of Montreal professor Yoshua Bengio. Their involvement lends significant weight to the warning, suggesting that those closest to the technology harbor the deepest concerns about its unchecked advancement.

Beyond the core scientific community, the appeal has garnered support from titans of industry like Virgin Group founder Richard Branson and Apple co-founder Steve Wozniak. The political spectrum is also represented, with endorsements from figures as disparate as former Trump advisor Steve Bannon and Barack Obama’s national security adviser, Susan Rice. The inclusion of Prince Harry and Meghan Markle, musician will.i.am, and the Vatican’s chief AI expert, Paolo Benanti, further underscores the growing mainstream anxiety surrounding the issue.

Most major technology firms are currently pursuing what is known as Artificial General Intelligence (AGI), a stage where AI would match human intellectual capabilities across the board. However, the ultimate goal for many is superintelligence, which would leap far beyond that benchmark. Sam Altman, the CEO of ChatGPT creator OpenAI, intensified the debate in September by suggesting this threshold could be crossed in as little as five years.

Max Tegmark, president of the Future of Life Institute, dismissed the timeline as a secondary concern, arguing the entire endeavor is fundamentally unacceptable without proper oversight. He stressed the danger of private companies pursuing planet-altering technology in a regulatory vacuum.

Whether it’s in two or fifteen years, building something like that is unacceptable… companies should not be working on this type of project without any regulatory framework.
Max Tegmark, President of Future of Life Institute

Tegmark was careful to distinguish this specific campaign from a broader anti-AI stance. He clarified that the coalition supports the development of powerful AI tools for targeted applications, such as medical research, but draws a firm line at the quest to create an autonomous intelligence that could outthink its creators.

One can be in favor of the creation of more powerful artificial intelligence tools, for example, to cure cancer, and at the same time be against superintelligence.
Max Tegmark, President of Future of Life Institute

This public appeal places the technology industry and global policymakers at a critical juncture. As the capabilities of AI systems expand at an exponential rate, this influential group’s demand for a moratorium forces a global conversation about the ultimate destination of this technological revolution and who gets to decide the path forward.

For further information, visit futureoflife.org
About Future of Life Institute:
The Future of Life Institute (FLI) is a non-profit organization that works to mitigate existential risks facing humanity, particularly those stemming from advanced artificial intelligence and biotechnology. It engages in research, policy advocacy, and public outreach to ensure that technological progress benefits, rather than endangers, life.

For further information, visit openai.com
About OpenAI:
OpenAI is an artificial intelligence research and deployment company. Its mission is to ensure that artificial general intelligence (AGI)—AI systems that are generally smarter than humans—benefits all of humanity. It is known for creating prominent AI models like GPT-4 and the ChatGPT service.

For further information, visit virgin.com
About Virgin Group:
Founded by Sir Richard Branson, Virgin Group is a British multinational venture capital conglomerate. The company’s core business areas include travel, entertainment, and lifestyle, and it has also invested in technology and space exploration ventures.

For further information, visit apple.com
About Apple Inc.:
Apple Inc. is a multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services. It is one of the world’s largest technology companies and is known for products such as the iPhone, iPad, and Mac computers.

For further information, visit bufetedecostarica.com
About Bufete de Costa Rica:
As a pillar of Costa Rica’s legal community, the firm is grounded in integrity and a commitment to excellence in its practice. It champions progress by integrating cutting-edge legal strategies in service of a broad clientele, and its belief in social responsibility is reflected in its efforts to demystify the law and equip citizens to navigate their rights and duties with confidence.
