The United States needs to be “the undisputed (artificial intelligence-based) fighting force on this planet.”
At least that’s the view of the country’s War Department, which earlier this month released a new strategy to accelerate the use of AI for military purposes.
The “AI Acceleration Strategy” sets out the clear goal of making the U.S. military a leader in AI warfare. But the hype surrounding the strategy ignores the realities and limitations of AI capabilities.
You can think of it as a kind of “AI peacocking”: a loud public signal of AI adoption and governance that obscures the reality of unreliable systems.
What does the U.S. AI strategy entail?
Several militaries around the world, including those of China and Israel, integrate AI into their operations. But the AI-first mantra of the U.S. War Department’s new strategy sets it apart.
The strategy aims to make the U.S. military more lethal and efficient, and it presents AI as the only way to achieve this goal.
The department will encourage experimentation with AI models. It will also eliminate so-called “bureaucratic barriers” to implementing AI across the military, support investment in AI infrastructure, and pursue a variety of large AI-powered military projects.
One of these projects aims to use AI to turn data into weapons “in hours, not years.” This is concerning given how the approach has played out elsewhere.
There are, for instance, ongoing reports about the increased civilian death toll in Gaza resulting from the Israeli military’s use of AI-powered decision-support systems, which essentially convert intelligence data into weaponized targeting information at unprecedented speed and scale. Further accelerating this pipeline risks needlessly escalating harm to civilians.
Another major project aims to put American AI models (presumably those intended for use in military contexts) “directly into the hands of our three million civilian and military personnel of all classification levels.”
It is not made clear why civilian Americans need access to military AI systems, nor what effects such widespread dissemination of military capabilities among the civilian population would have.
The narrative vs. reality
In July 2025, an MIT study found that 95% of companies achieved zero return on their investment in generative AI.
The fundamental reason was the technical limitations of generative AI tools such as ChatGPT and Copilot. For example, most are unable to retain feedback, adapt to new contexts, or improve over time.
This study focused on generative AI in a business context, but its findings apply more generally. They point to shortcomings of AI that are too often obscured by the marketing hype surrounding the technology.
AI is an umbrella term covering a range of capabilities, from large language models to computer vision models. These are technologically distinct tools with different uses and purposes.
Although their applications, capabilities and success rates vary significantly, most AI applications have been bundled into a single, globally successful marketing agenda.
This is reminiscent of the dot-com bubble of the early 2000s, when marketing hype was treated as if it were a legitimate business model.
This approach now appears to be shaping how the U.S. wants to position itself in the current geopolitical climate.
A guide to “AI peacocking”
The War Department’s AI-first strategy reads more like a guide to “AI peacocking” than a legitimate technology implementation strategy.
AI is presented as the answer to every problem, even problems that don’t exist. The marketing behind AI has created a manufactured fear of falling behind, and the War Department’s new AI strategy feeds that fear by promising a technologically advanced military strategy.
In reality, however, these technological capabilities fall short of their claimed effectiveness. And in a military environment, such limitations can have devastating consequences, including increased civilian deaths.
The U.S. is relying heavily on a marketing-driven business model to implement AI across its military, without technical rigor or integrity.
This approach is likely to leave the War Department dangerously exposed when these fragile systems fail, particularly in times of crisis once they are deployed in military environments.

