Key figures in artificial intelligence claim the rush to build AI systems is out of control, and have signed an open letter warning of the potential consequences.
Elon Musk, the chief executive of Twitter, is among those calling for a halt of at least six months to the training of AIs above a certain capability.
Steve Wozniak, a co-founder of Apple, and a few DeepMind researchers also signed.
OpenAI, the developer of ChatGPT, recently unveiled GPT-4, a state-of-the-art system that has impressed observers with its ability to perform tasks such as identifying objects in photos.
The letter, from the Future of Life Institute and signed by these figures, requests a temporary halt to development at that level of capability and warns of the dangers that more sophisticated future systems may pose.
The letter warns that AI systems with intelligence on a par with humans pose grave risks to society and humanity.
A non-profit organisation called the Future of Life Institute states that its goal is to “direct transformative technology away from extreme, large-scale hazards and towards benefitting life.”
Mr. Musk, who also runs the carmaker Tesla, is listed as an external adviser to the organisation.
The letter claims that careful consideration must go into the development of advanced AIs, but lately, “AI labs have been locked in an out-of-control rush to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, anticipate, or reliably control.”
The letter also warns that AI could automate jobs and flood information channels with misinformation.
Outsmarted and obsolete
The letter poses a more hypothetical question, “Shall we produce non-human minds that may ultimately outnumber, outwit, obsolete, and replace us?”
In a recent blog post cited in the letter, OpenAI cautioned about the dangers of an artificial general intelligence (AGI) being created carelessly: “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too.”
The company stated that “coordination among AGI initiatives to slow down at critical junctures will probably be important.”
OpenAI has not commented publicly on the letter. The BBC has asked the company whether it supports the request.
Mr. Musk was a co-founder of OpenAI, although he left the organisation’s board some years ago and has tweeted critically about its current direction.
Like most comparable systems, the autonomous driving features produced by his carmaker Tesla rely on AI technology.
The letter requests that “the training of AI systems more powerful than GPT-4 be immediately suspended for at least six months.”
Governments should intervene and impose a moratorium if such a delay cannot be swiftly implemented, it asserts.
It would also be necessary to create “new and capable regulatory authorities specialised to AI.”
Several recent proposals for the regulation of technology have been made in the US, UK, and EU. The UK, however, has decided against creating an AI-specific regulator.