Elon Musk and Leading AI Experts Highlight Risks to Humanity, Call for Joint Action

Elon Musk, along with other prominent artificial intelligence experts, has issued an open letter calling for a halt to the development of AI systems with human-competitive intelligence. Citing extensive research on the risks posed by such technology, the letter calls for a pause of at least six months in the training of AI systems more powerful than GPT-4.

The letter argues that the development of advanced AI could represent a profound change in the history of life on Earth and should be planned and managed with commensurate care and resources. However, despite the risks involved, recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control.

The AI experts warn that contemporary AI systems are now becoming human-competitive at general tasks, raising concerns that machines could flood our information channels with propaganda and untruth and automate away all the jobs, including the fulfilling ones. Moreover, developing nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us risks the loss of control of our civilization.

The letter calls for a pause, public and verifiable and including all key actors, in the training of AI systems more powerful than GPT-4. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.

The letter stresses that the pause does not mean a halt to AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities. Instead, AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include new and capable regulatory authorities dedicated to AI, oversight and tracking of highly capable AI systems and large pools of computational capability, provenance and watermarking systems to help distinguish real from synthetic and to track model leaks, a robust auditing and certification ecosystem, liability for AI-caused harm, robust public funding for technical AI safety research, and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

The letter concludes by urging humanity to enjoy a flourishing future with AI, but only after taking the necessary precautions to ensure safety and manage risks. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.
