Harry and Meghan Align With Tech Visionaries in Calling for Ban on Advanced AI
Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to push for a complete ban on developing superintelligent AI systems.
Harry and Meghan are among the signatories of a powerful statement demanding “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human abilities in all cognitive tasks, a technology that has not yet been developed.
Primary Requirements in the Declaration
The declaration states that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be created “with proper safeguards” and until “substantial public support” has been secured.
Prominent figures who endorsed the statement include a Nobel Prize-winning AI pioneer and his fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; tech entrepreneur Steve Wozniak; the British business magnate who founded Virgin; Susan Rice; a former head of state; and a British author and public intellectual. Other Nobel laureates who signed include a peace advocate, Frank Wilczek, John C Mather and Daron Acemoğlu.
Organizational Background
The statement, aimed at national leaders, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a topic of worldwide public discussion.
Industry Perspectives
In July, the chief executive of Meta, the Facebook parent company and one of the leading US tech firms, stated that the development of superintelligent AI was “approaching reality”. Nevertheless, some experts have suggested that talk of ASI reflects competitive positioning among tech companies that have recently invested enormous sums in artificial intelligence, rather than the sector being close to any such scientific breakthrough.
Potential Risks
However, FLI states that the prospect of artificial superintelligence being achieved “within the next ten years” presents numerous risks, ranging from the elimination of all human jobs and the erosion of personal freedoms to national security threats and even human extinction. Deep concerns about artificial intelligence center on the possibility of an AI system escaping human oversight and protective measures and acting against human interests.
Citizen Sentiment
FLI released a US survey showing that approximately three-quarters of Americans want strong oversight of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be developed until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only a small fraction backed the status quo of fast, unregulated development.
Industry Objectives
The top US artificial intelligence firms, including the developer of ChatGPT and Google, have made the creation of human-level AI – the theoretical point at which artificial intelligence matches human intelligence across many intellectual tasks – an explicit goal of their research. While this falls one step short of ASI, some specialists caution that it too could pose an extinction threat, for example by being able to improve itself until it achieves superintelligence, while also carrying an underlying danger for the contemporary workforce.