Twitter CEO Elon Musk has joined dozens of artificial intelligence (AI) experts and industry executives in signing an open letter calling on all AI labs to “immediately pause” the training of systems more powerful than GPT-4 for at least six months.
The letter, issued by the non-profit Future of Life Institute, has been signed by more than 1,100 individuals, including Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, and engineers from Meta and Google, among others.
They argue that AI systems with human-competitive intelligence can pose “profound risks to society and humanity,” and change the “history of life on Earth,” citing extensive research on the issue and acknowledgments by “top AI labs.”
The experts go on to state that planning and management of advanced AI systems are currently limited, even as companies in recent months have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter states.
Safety Protocols Needed
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it adds.
The letter then calls for a public and verifiable minimum six-month pause on the training of AI systems more powerful than GPT-4 or a government-issued moratorium on such training if the pause cannot be enacted quickly.
During such a pause, AI labs and independent experts should use the time to create and implement a set of shared safety protocols for advanced AI design and development that are “rigorously audited” and overseen by independent third-party experts, the letter states.
Such protocols should be designed to make sure that systems adhering to them are safe beyond a reasonable doubt, accurate, trustworthy, and aligned, experts said.
Additionally, the letter calls on policymakers to swiftly develop robust AI governance systems, including regulatory authorities capable of overseeing and tracking highly capable AI systems, increased public funding for AI safety research, and institutions that can cope with what the signatories say will be the “dramatic economic and political disruptions (especially to democracy) that AI will cause.”
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” experts noted in their letter.
The letter comes just two weeks after OpenAI, the creator of the artificial intelligence system ChatGPT, released the long-awaited update to its AI technology on March 14: GPT-4, the company’s most powerful system to date.
ChatGPT Update Released
According to Microsoft-backed OpenAI, the updated system has a string of new capabilities, such as accepting images as inputs and generating captions, classifications, and analyses, and it is safer and more accurate than its predecessor.
In a February statement, OpenAI acknowledged that at some point, it may be important to get an “independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
“We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale,” the company said.
Earlier this week, Europol, the European Union’s law enforcement agency, warned of the severe implications of ChatGPT being used for cybercrime and other malicious activities, including the spread of disinformation.
Further concerns have been raised over the software, which is trained using reinforcement learning from human feedback (RLHF), particularly over how it can be used to help students cheat on exams and homework.
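The article mentions RLHF only in passing. For readers unfamiliar with the term, the toy Python sketch below illustrates the core idea under heavy simplification: a reward model is fitted to human preference comparisons (via the Bradley-Terry objective commonly used in RLHF work), and outputs are then reweighted toward high-reward behavior. The responses, feature scores, and single-weight reward model are all illustrative assumptions, not a description of OpenAI’s actual system.

```python
# Toy sketch of the RLHF idea: fit a reward model to human
# preference pairs, then favor outputs the reward model scores
# highly. Everything here is illustrative, not OpenAI's code.
import math

# Each candidate "response" is reduced to one feature score,
# a stand-in for a learned representation.
responses = {
    "helpful answer": 2.0,
    "vague answer": 0.5,
    "misleading answer": -1.0,
}

# Human feedback, recorded as (preferred, rejected) pairs.
preferences = [
    ("helpful answer", "vague answer"),
    ("helpful answer", "misleading answer"),
    ("vague answer", "misleading answer"),
]

# Reward model: reward(x) = w * feature(x). Fit w by gradient
# ascent on the Bradley-Terry log-likelihood,
# log sigmoid(r_win - r_lose).
w, lr = 0.0, 0.1
for _ in range(200):
    for win, lose in preferences:
        delta = responses[win] - responses[lose]
        diff = w * delta
        # d/dw log sigmoid(diff) = (1 - sigmoid(diff)) * delta
        w += lr * (1 - 1 / (1 + math.exp(-diff))) * delta

# Crude "policy improvement": sample responses in proportion to
# exp(reward), so human-preferred behavior dominates.
rewards = {name: w * f for name, f in responses.items()}
total = sum(math.exp(r) for r in rewards.values())
for name, r in sorted(rewards.items(), key=lambda kv: -kv[1]):
    print(f"{name}: reward={r:.2f}, sample prob={math.exp(r) / total:.2f}")
```

In production systems, the reward model is itself a large neural network and the policy is updated with a reinforcement learning algorithm (typically PPO) rather than by direct reweighting, but the preference-fitting step above captures the “human feedback” part of the name.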
Despite those concerns, the popularity of ChatGPT has prompted rival firms to launch similar products.
Last week, Google announced it had launched its AI app, known as “Bard,” for testing in the United Kingdom and the United States, although the company has been notably slower than its rival to release the technology, citing the need for more feedback about the app.
From The Epoch Times