Artificial intelligence is already being used on the battlefield, and an accelerated rollout is in sight. This year, Meta, Anthropic and OpenAI all said their AI foundation models were available for use by US national security agencies. AI warfare is controversial and widely criticized. But a more insidious set of AI use cases has already been quietly integrated into the US military.
At first glance, the tasks for which AI models are used may seem insignificant. They help with communication, coding, IT ticket resolution and data processing. The problem is that even mundane applications can pose risks. The ease with which they can be deployed could threaten the security of civilian and defense infrastructure.
Consider US Africa Command, one of the US Department of Defense's combatant commands, which has explicitly acknowledged its use of an OpenAI tool for “unified analytics for data processing.”
Such administrative tasks can feed into mission-critical decisions. But as repeated demonstrations have shown, AI tools routinely fabricate results (so-called hallucinations) and introduce novel vulnerabilities. Using them can lead to an accumulation of errors. Because USAfricom is a combat command, small errors that compound over time can result in decisions that cause civilian harm and tactical mistakes.
USAfricom is not alone. This year, the US Air Force and Space Force launched a generative AI chatbot called the Non-classified Internet Protocol Generative Pre-training Transformer (NIPRGPT). It can “answer questions and help with tasks such as correspondence, background papers and code.” Meanwhile, the Navy has developed a conversational AI technical support tool it calls Amelia.
Military organizations justify using AI models on the grounds that they increase efficiency, accuracy and scalability. In reality, their procurement and deployment show a worrying ignorance of the risks involved.
These risks include adversaries poisoning the data sets on which models are trained, so that certain outputs can be subverted when trigger keywords are used, even on supposedly “secure” systems. Enemies could also weaponize hallucinations.
Yet US military organizations have not addressed, or provided assurances about, how they plan to safeguard critical defense infrastructure.
Unreliability poses as many security risks as intentional attacks, if not more. The nature of AI systems is to produce results based on statistical and probabilistic correlations from historical data, not on factual evidence, reasoning or causation. Take code generation and IT tasks, where researchers at Cornell University found last year that OpenAI's ChatGPT, GitHub Copilot and Amazon CodeWhisperer generated correct code only 65.2 percent, 46.3 percent and 31.1 percent of the time, respectively.
Even though AI firms assure us that they are working on improvements, there is no denying that current error rates are too high for applications that require precision, accuracy and security. Over-reliance on these tools could also lead users to overlook their mistakes.
This raises the question: how have military organizations managed to procure and deploy AI models with such ease?
One answer lies in the fact that they appear to be treated as an extension of IT infrastructure, when in reality they can be used as analytical tools that may change the outcomes of important missions. The ability to classify AI as infrastructure, bypassing the procurement channels that would assess its suitability for mission-critical purposes, should give us pause.
In the pursuit of potential future efficiencies, military agencies' use of AI administrative tools carries real risks. It is a trade-off that their supposed benefits cannot justify.