
Modern battlefields have become a breeding ground for experimental AI weaponry

As conflicts rage across Ukraine and the Middle East, the modern battlefield has become a testing ground for AI-powered warfare.

From autonomous drones to predictive targeting algorithms, AI systems are reshaping the nature of armed conflict.

The US, Ukraine, Russia, China, Israel, and others are locked in an AI arms race, each vying for technological supremacy in an increasingly volatile geopolitical landscape.

As these new weapons and tactics emerge, so do their consequences.

We now face critical questions about warfare’s future, human control, and the ethics of outsourcing life-and-death decisions to machines.

AI may have already triggered military escalation

Back in 2017, Project Maven represented the Pentagon’s first major effort to integrate AI into military operations. It aims to enable real-time identification and tracking of targets from drone footage without human intervention.

While Project Maven is usually discussed in terms of analyzing drone camera footage, its capabilities likely extend much further.

According to research by the non-profit watchdog Tech Inquiry, the AI system also processes data from satellites, radar, social media, and even captured enemy assets. This broad range of inputs is known as “all-source intelligence.”

In March 2023, a military incident occurred when a US MQ-9 Reaper drone collided with a Russian fighter jet over the Black Sea, causing the drone to crash.

Shortly before that incident, the National Geospatial-Intelligence Agency (NGA) confirmed using Project Maven’s technology in Ukraine.

Lieutenant General Christopher T. Donahue, commander of the XVIII Airborne Corps, later stated quite plainly of the Ukraine-Russia conflict, “At the end of the day, this became our laboratory.”

Project Maven in Ukraine involved advanced AI systems integrated into the Lynx Synthetic Aperture Radar (SAR) of MQ-9 Reapers. As such, AI may have been instrumental in the drone collision.

On the morning of March 14, 2023, a Russian Su-27 fighter jet intercepted and damaged a US MQ-9 Reaper drone, leading to the drone crashing into the Black Sea. It marked the first direct confrontation between the Russian and US Air Forces since the Cold War, a major escalation in military tensions between the two nations. Source: US Air Force.

In the aftermath, the US summoned the Russian ambassador to Washington to express its objections, while the US European Command called the incident “unsafe and unprofessional.”

Russia denied any collision occurred. In response, the US repositioned some unmanned aircraft to observe the region, which Russia protested.

This situation presented the menacing possibility of AI systems influencing military decisions, even contributing to unexpected escalations in military conflicts.

As Tech Inquiry asks, “It is worth determining whether Project Maven inadvertently contributed to one of the most significant military escalations of our time.”

Ethical minefields

Project Maven’s performance has been largely inconsistent thus far.

According to Bloomberg data cited by the Kyiv Independent, “When using various types of imaging data, soldiers can correctly identify a tank 84% of the time, while Project Maven AI is closer to 60%, with the figure plummeting to 30% in snowy conditions.”

While the moral implications of using AI to make life-or-death decisions in warfare are deeply troubling, the risk of malfunction introduces an even more chilling aspect to this technological arms race.

It’s not only a matter of whether we should use AI to target human beings, but whether we can trust these systems to operate as intended in the fog of war.

What happens when civilians nearby are labeled as targets and destroyed autonomously? And what if the drone itself malfunctions and goes haywire, straying into environments it isn’t trained to operate in?

AI malfunction in this context isn’t merely a technical glitch – it’s a potential catalyst for tragedy on an unimaginable scale. Unlike human errors, which might be limited in scope, an AI system’s mistake could lead to widespread, indiscriminate carnage in a matter of seconds.

Commitments to slow these developments and keep weapons under lock and key have already been made, as shown when 30 countries joined US guardrails on AI military tech.

The US Department of Defense (DoD) also released five “ethical principles for artificial intelligence” for military use, including that “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

However, recent developments indicate a disconnect between these principles and practice.

For one, AI-infused technology is likely already responsible for serious incidents outside its intended remit. Secondly, the DoD’s generative AI task force involves outsourcing to private firms like Palantir, Microsoft, and OpenAI.

Collaboration with commercial entities not subject to the same oversight as government agencies casts doubt on the DoD’s ability to control AI development.

Meanwhile, the International Committee of the Red Cross (ICRC) has initiated discussions on the legality of those systems, particularly regarding the Geneva Convention’s “distinction” principle, which mandates distinguishing between combatants and civilians. 

AI algorithms are only as good as their training data and programmed rules, so they may struggle with this differentiation, especially in dynamic and unpredictable battlefield conditions.

As indicated by the Black Sea drone incident, these fears are real. Yet military leaders worldwide remain bullish about AI-infused war machines. 

Not long ago, an AI-powered F-16 fighter jet out-maneuvered human pilots in a test demo.

US Secretary of the Air Force Frank Kendall, who experienced it firsthand, summed up the inertia surrounding AI military tech: “It’s a security risk not to have it. At this point, we have to have it.”

On the face of it, that’s a grim admission.

Despite millennia of warfare and its devastating consequences, the mere thought of being one step behind ‘the enemy’ – a primal anxiety, perhaps deeply rooted in our psyche – continues to override reason.

Homegrown AI weaponry

In Ukraine, young firms like Vyriy, Saker, and Roboneers are actively developing technologies that blur the tenuous line between human and machine decision-making on the battlefield.

Saker developed an autonomous targeting system to identify and attack targets up to 25 miles away, while Roboneers created a remote-controlled machine gun turret that can be operated using a game controller and a tablet.

Reporting on this new state of AI-powered modern warfare, the New York Times recently followed Oleksii Babenko, the 25-year-old CEO of drone maker Vyriy, who showcased his company’s latest creation.

In a real-life demo, Babenko rode a motorbike at full pelt as the drone tracked him, free from human control. The reporters watched the scene unfold on a laptop screen.

The advanced quadcopter eventually caught him, and in the reporters’ words, “If the drone had been armed with explosives, and if his colleagues hadn’t disengaged the autonomous tracking, Mr. Babenko would have been a goner.”

As in Ukraine, the Israel-Palestine conflict is proving a hotbed for military AI research.

Experimental AI-embedded or semi-autonomous weapons include remote-controlled quadcopters armed with machine guns and missiles, and the “Jaguar,” a semi-autonomous robot used for border patrol.

The Israeli military has also created AI-powered turrets that establish what they term “automated kill-zones” along the Gaza border.

Jaguar’s autonomous nature is given away by its turret and mounted camera.

Perhaps most concerning to human rights observers are Israel’s automated target generation systems. “The Gospel” is designed to identify infrastructure targets, while “Lavender” focuses on generating lists of individual human targets.

Another system, ominously named “Where’s Daddy?“, is reportedly used to track suspected militants when they are with their families.

The left-wing Israeli news outlet +972, reporting from Tel Aviv, noted that these systems almost certainly led to high civilian casualties.

The path forward

As military AI technology advances, assigning responsibility for mistakes and failures becomes an intractable task – a spiraling ethical and moral void we have already entered.

How can we prevent a future where killing is more automated than human, and accountability is lost in an algorithmic fog?

Current events and rhetoric fail to encourage caution. 
