The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Advanced AI
Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a total prohibition on developing superintelligent AI systems.
The royal couple are among the signatories of an influential declaration that calls for “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human intelligence in all cognitive tasks, though the technology remains theoretical.
Primary Requirements in the Declaration
The declaration insists that the ban should remain in place until there is both “widespread expert agreement” that ASI can be developed “safely and controllably” and “substantial public support” for doing so.
Prominent figures who endorsed the statement include a Nobel laureate regarded as a “godfather” of modern AI, along with his fellow “godfather” and leading AI researcher; an Apple co-founder; the British business magnate Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and the British writer Stephen Fry. Additional Nobel winners who signed include a peace laureate, a physicist, an astrophysicist, and an economist.
Behind the Movement
The declaration, aimed at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause on the development of powerful AI shortly after the launch of ChatGPT made AI a topic of worldwide public debate.
Tech Sector Views
In July, the chief executive of Meta, one of the major AI developers in the US, stated that superintelligent AI was “approaching reality”. Some experts, however, have suggested that talk of ASI reflects the market rivalry among tech companies that have committed hundreds of billions of dollars to AI, rather than the sector being close to any such technical breakthrough.
Possible Dangers
Nonetheless, the organization warns that the prospect of ASI being achieved “within the next ten years” presents numerous threats, from the displacement of human workers and the erosion of personal freedoms to national security risks and even the extinction of humanity. Existential fears about AI centre on the possibility of a system evading human control and safety measures and taking actions harmful to human welfare.
Citizen Sentiment
The institute published a national US poll showing that approximately three-quarters of Americans want strong oversight of advanced artificial intelligence, with around 60% saying artificial superintelligence should not be created until it is proven safe or manageable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the United States, including the ChatGPT developer OpenAI and the search giant, have made the creation of human-level AI – the hypothetical point at which an artificial intelligence matches human performance at most cognitive tasks – an explicit goal of their work. Although this falls short of superintelligence, some experts caution that it too could pose an extinction risk, for example by improving itself until it reaches superintelligence, while also presenting a clear threat to the modern labour market.