
In a disturbing irony, a newly declassified government-commissioned report shows that the US government now acknowledges that artificial intelligence (AI) has advanced so rapidly that it poses a clear and present ‘extinction-level threat to the human species’ – a pressing national security concern of the highest order.
Titled ‘An Action Plan to Increase the Safety and Security of Advanced AI,’ the report suggests that we must immediately address this technology’s great dangers. Like the launch of the first nuclear weapons, the development of advanced AI poses enormous risks to humanity, yet our current policy responses to those risks appear insufficient. It is imperative that we fundamentally change the current trajectory of AI development.
Developed over several months of research and consultation with more than 200 government officials, experts, and industry representatives from prominent AI labs such as OpenAI, Google DeepMind, Anthropic, and Meta, the report outlines a series of critical recommendations aimed at reshaping the AI landscape.
One of the big ideas is to establish new legal rules that limit the training of AI models beyond certain computing-power thresholds. The hope is that this would slow the arms race over new AI models and the hardware that powers them. The report also proposes criminal penalties for publishing the inner workings of powerful AI models, known as “weights,” in a bid to keep these capabilities from falling into dangerous hands.
Meanwhile, the report suggests that AI chips should be more tightly regulated in manufacturing and exportation and calls for greater federal investment in initiatives to improve the safety of advanced AI systems.
Most alarming are the risks of AI being weaponized and of humans losing control over such advanced systems. At several points, the report stresses the industry’s focus on speed over safety and correctly notes that regulation of AI hardware is key to maintaining global security.
The State Department commissioned the report in November 2022, and Gladstone AI, a boutique firm that has provided technical briefings on AI to US government agencies, conducted the work. The report’s recommendations mark an important milestone in addressing the risks AI presents. However, they do not necessarily represent the views of the US Department of State or the federal government.
Source: Jim Love, IT World Canada March 12, 2024