
Microsoft bans US police departments from using corporate AI tools for facial recognition

Microsoft has reaffirmed its ban on US police departments using generative AI for facial recognition through Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technology.

On Wednesday, language was added to the Azure OpenAI Service terms of service that more clearly prohibits integrations with Azure OpenAI Service from being used "by or for" police departments for facial recognition in the U.S., including integrations with OpenAI's current – and potentially future – image-analyzing models.

A separate new bullet point covers "any law enforcement agency globally" and explicitly prohibits the use of "real-time facial recognition technology" on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments.

The policy changes come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, such as hallucinations (even the best generative AI models today make up facts) and racial bias introduced by the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).

It is unclear whether Axon used GPT-4 through the Azure OpenAI Service and, if so, whether the updated policy was a response to Axon's product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We've contacted Axon, Microsoft and OpenAI and will update this post if we hear back.

The new terms leave Microsoft some wiggle room.

The complete ban on use of the Azure OpenAI Service applies only to US police, not international law enforcement. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by US police).

This aligns with the recent approach that Microsoft and its close partner OpenAI have taken to AI-related law enforcement and defense contracts.

In January, Bloomberg reported that OpenAI is working with the Pentagon on a number of projects, including cybersecurity capabilities – a departure from the startup's earlier ban on providing its AI to the military. Elsewhere, Microsoft has pitched the use of OpenAI's DALL-E image generation tool to help the Department of Defense (DoD) develop software to conduct military operations, per The Intercept.

The Azure OpenAI Service became available in Microsoft's Azure Government product in February, providing additional compliance and management capabilities for government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft's government-focused division Microsoft Federal, promised that the Azure OpenAI Service would be "submitted for additional authorization" to the DoD for workloads supporting defense missions.
