To counter the perception that its “open” AI helps foreign adversaries, Meta today said it’s making its Llama series of AI models available to U.S. government agencies and national security contractors.
“We are pleased to confirm that we’re making Llama available to U.S. government agencies, including those working on defense and national security applications, in addition to private sector partners who support their work,” Meta wrote in a blog post. “We’re working with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies.”
For example, according to Meta, Oracle is using Llama to process aircraft maintenance documents. Scale AI is fine-tuning Llama to support specific national security team missions. And Lockheed Martin is offering Llama to its defense customers for use cases like computer code generation.
Meta’s policy generally prohibits developers from using Llama for projects related to the military, warfare, or espionage. But the company is making an exception in this case, it told Bloomberg, and is granting similar exemptions to government agencies (and contractors) in the United Kingdom, Canada, Australia, and New Zealand.
Last week, Reuters reported that Chinese researchers with ties to the People’s Liberation Army (PLA), the military wing of China’s ruling party, used an older Llama model, Llama 2, to develop a tool for defense applications: a military-focused chatbot designed to gather and process intelligence and supply information for operational decision-making. Two of the researchers were affiliated with a PLA research and development group.
Meta told Reuters in a statement that use of the “single and outdated” Llama model was “unauthorized” and violated its acceptable use policy. But the report has added considerable fuel to the ongoing debate over the benefits and risks of open AI.
Open or “closed,” the use of AI for defense is controversial.
According to a recent study from the nonprofit AI Now Institute, the AI used today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that could be exfiltrated and weaponized by adversaries. AI also has vulnerabilities, such as biases and a tendency to hallucinate, that currently can’t be remedied, write the co-authors, who recommend developing AI that’s separated and isolated from “commercial” models.
Employees at several Big Tech companies, including Google and Microsoft, have protested their employers’ contracts to build AI tools and infrastructure for the U.S. military.
Meta claims that open AI can accelerate defense research while advancing America’s “economic and security interests.” But the U.S. military has been slow to adopt the technology, and skeptical of its ROI. So far, the U.S. Army is the only branch of the U.S. armed forces with a generative AI deployment.