
A US-commissioned report says AI is an “extinction threat”.

A report commissioned by the US government says new AI safety measures and policies are needed to prevent an “extinction-level threat to the human species.”

The report, available upon request, was prepared by Gladstone AI, an organization founded in 2022 that advises US government agencies on AI opportunities and risks. The report, titled “An Action Plan to Increase the Security of Advanced AI,” took twelve months to complete and was funded with $250,000 in federal money.

The report focuses on catastrophic risks posed by advanced AI and proposes a comprehensive action plan to mitigate them. Clearly, the authors don’t share Yann LeCun’s more laissez-faire views on AI threats.

The report states: “The rise of advanced AI and AGI (artificial general intelligence) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

The two main categories of risk this creates are the intentional use of AI as a weapon and the unintended consequences of losing control of an AGI.

To prevent this, the report lays out an action plan consisting of five lines of effort (LOEs) that the US government should implement.

Here is the short version:

LOE1 – Establish an AI observatory to better monitor the AI landscape. Set up a task force to define rules for responsible AI development and deployment. Leverage supply chain constraints to drive compliance among international AI industry players.

LOE2 – Increase readiness for advanced AI incident response. Coordinate cross-agency working groups, government AI training, an early warning system to detect emerging AI threats, and scenario-based contingency plans.

LOE3 – AI labs currently focus more on AI development than on AI safety. The US government should fund advanced AI safety research, including work on scalable AGI alignment.

Develop safety standards for responsible AI development and deployment.

LOE4 – Establish an “AI regulatory authority with rule-making and licensing powers.”

Establish a framework for civil and criminal liability to prevent “irreparable consequences on the scale of weapons of mass destruction,” including “emergency powers to enable rapid response to rapidly evolving threats.”

LOE5 – Establish an AI safeguards regime in international law and secure the supply chain. Promote “international consensus” on AI risks with an AI treaty enforced by the UN or an “international AI agency.”

In short, AI could be very dangerous, so we need a lot of regulation to control it.

The report says that advanced open-source models are a bad idea and that the US government should consider making it illegal to publish the weights of AI models, with violations punishable by prison time.

If this strikes you as somewhat alarmist and heavy-handed, you’re not alone. The report has been criticized for its lack of scientific rigor.

Open-source advocates such as William Falcon, CEO of Lightning AI, have particularly criticized the report’s blanket statements about the dangers of open models.

The truth about the risks advanced AI poses to humanity probably lies somewhere between “We’re all going to die!” and “There’s nothing to worry about.”

Page 33 of one AI survey the report cites gives some interesting examples of how AI models game the system and outsmart their operators to achieve the goal they are meant to optimize for.

If AI models are already exploiting loopholes to achieve a goal, the possibility of future superintelligent AI models doing the same is hard to rule out. And at what cost?

You can read the summary of the report here.
