
Google has dropped its promise not to use AI for weapons. It's part of a troubling trend

Last week, Google quietly abandoned a long-standing commitment not to use artificial intelligence (AI) technology for weapons or surveillance. In an update to its AI principles, first published in 2018, the tech giant removed statements promising not to pursue:

  • technologies that cause or are likely to cause overall harm
  • weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  • technologies that gather or use information for surveillance violating internationally accepted norms
  • technologies whose purpose contravenes widely accepted principles of international law and human rights.

The update came after United States President Donald Trump revoked former President Joe Biden's executive order aimed at promoting the safe, secure and trustworthy development and use of AI.

Google's decision follows a recent trend of big tech companies entering the national security arena and accommodating more military applications of AI. Why is this happening now? And what will be the impact of greater military use of AI?

The growing trend of militarized AI

In September, senior officials from the Biden administration met with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centers, while weighing economic, national security and environmental goals.

The following month, the Biden administration published a memo that dealt in part with "harnessing AI to fulfill national security objectives".

Big tech companies were quick to heed the message.

In November 2024, tech giant Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defense and national security.

This was despite Meta's own policy, which prohibits the use of Llama for "[m]ilitary, warfare, nuclear industries or applications".

Around the same time, AI company Anthropic also announced it was teaming up with data analytics firm Palantir and Amazon Web Services to give US intelligence and defense agencies access to its AI models.

The following month, OpenAI announced it had partnered with defense startup Anduril Industries to develop AI for the US Department of Defense.

The companies claim they will combine OpenAI's GPT-4o and o1 models with Anduril's systems and software to improve the US military's defenses against drone attacks.

OpenAI is working with a defense startup to develop AI for the US Department of Defense.
Michael Dwyer/AP

Defending national security

The three companies have defended the changes to their policies on the basis of US national security interests.

Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.

In October 2022, the US issued export controls restricting China's access to certain types of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals that are crucial to the AI chip industry.

The tensions from this trade war have escalated in recent weeks thanks to the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips before the US export control measures took effect, and allegedly used them to develop its AI models.

It has not been made clear how the militarization of commercial AI would protect US national interests. But there are clear signs that tensions with the US's biggest geopolitical rival, China, are influencing the decisions being made.

A large toll on human life

What is already clear is that the use of AI in military contexts has a demonstrated toll on human life.

For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater computing and storage services, which are being provided by Microsoft and Google. The AI tools are used to identify potential targets, but are often inaccurate.

Israeli soldiers have said these inaccuracies have accelerated the death toll in the war, which now stands at more than 61,000, according to authorities in Gaza.

A pickup truck loaded with people drives past destroyed buildings on an unpaved road.
The Israeli military has been relying heavily on advanced AI tools in the war in Gaza.
Mohammed Saber/EPA

Google's removal of the "harm" clause from its AI principles contravenes international human rights law, which identifies "security of person" as a key measure.

It is concerning to consider why a commercial tech company would need to remove a clause around harm.

Avoiding the risks of AI-enabled warfare

In its updated principles, Google says its products will continue to align with "widely accepted principles of international law and human rights".

Despite this, Human Rights Watch has criticized the removal of the more explicit statements regarding weapons development from the original principles.

The organization also points out that Google has not explained exactly how its products will align with human rights.

This is something Joe Biden's revoked executive order on AI also addressed.

Biden's initiative wasn't perfect, but it was a step toward establishing guardrails for the responsible development and use of AI technologies.

Such guardrails are needed now more than ever, as big tech becomes more enmeshed with military organizations, and as the risks associated with AI-enabled warfare and human rights violations increase.
