
AI used by police cannot distinguish Black people from one another: why Canada's AI laws urgently need to be revised

Artificial intelligence (AI) is a powerful tool.

In the hands of public police and other law enforcement agencies, AI can result in injustice. For example, Detroit resident Robert Williams was arrested in front of his children and held in custody overnight after a false positive match in an AI facial recognition system. Williams' name remained in the system for years, and he had to sue the police and local government for wrongful arrest to have it removed. He eventually discovered that a faulty AI match had identified him as a suspect.

Nearby, the same thing happened to another Detroit resident, Michael Oliver, and in New Jersey, to Nijeer Parks. These three men have two things in common. First, they are all victims of false positives in AI facial recognition systems. And second, they are all Black men.

The pattern is clear: AI-based facial recognition systems struggle to distinguish individuals with dark skin from one another. According to one study, the error rate is highest among Black women, at 35 per cent.

These examples highlight the critical issues with using AI in policing and justice, especially now that AI is being used more widely in the criminal justice system, and in both the public and private sectors, than ever before.

In Canada: New laws, old problems

Two bills currently under consideration in Canada have significant implications for the use of AI in the coming years. Both fail to provide adequate protections for the public when it comes to police use of AI. As academics studying computer science, policing and law, we are concerned about these gaps.

In Ontario, Bill 194, the Strengthening Cybersecurity and Building Trust in the Public Sector Act, focuses on the use of AI in the public sector.

The federal Bill C-27 would enact the Artificial Intelligence and Data Act (AIDA). Although AIDA focuses on the private sector, it has implications for the public sector because of the high number of public-private partnerships in government.

Public police forces use AI both as owners and operators of AI systems. They may also hire a private agency as a proxy to perform AI-driven analytics.

Because of this public reliance on private-sector AI, even laws designed to regulate the use of AI in the private sector should provide rules of engagement for law enforcement agencies using the technology.

Protesters against Amazon's facial recognition software hold pictures of Amazon founder and former CEO Jeff Bezos in front of their faces in front of Amazon headquarters.
(AP Photo/Elaine Thompson)

Racial profiling and AI

AI has powerful predictive capabilities. Using machine learning, an AI system can be fed a database of profiles to "determine" the likelihood of who might do what, or to match faces to profiles. AI can also use data on past crime to decide where to send police patrols.
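The face-matching failure mode described above can be illustrated with a minimal sketch. This is not any real system's algorithm; it assumes a common design in which faces are converted to numeric embedding vectors, a probe face is compared against an enrolled gallery by cosine similarity, and any gallery entry scoring above a threshold is reported as a "match." All names and the embeddings themselves are invented placeholders (random vectors standing in for what a real model would compute from photos).

```python
import math
import random

random.seed(0)

def random_embedding(dim=128):
    # Placeholder for what a real face-recognition model would produce
    # from a photo: a fixed-length numeric vector.
    return [random.gauss(0, 1) for _ in range(dim)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical gallery of enrolled identities (e.g. mugshot embeddings).
gallery = {name: random_embedding() for name in ("person_a", "person_b", "person_c")}

def identify(probe, threshold=0.3):
    """Return (best_name, score) if the best match clears the threshold,
    else (None, score).

    A false positive occurs when a probe from someone who is NOT enrolled
    still scores above the threshold against some enrolled identity --
    the failure mode behind the wrongful arrests described above.
    """
    best = max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))
    score = cosine_similarity(probe, gallery[best])
    return (best, score) if score >= threshold else (None, score)
```

The threshold is the key design choice: lowering it returns more "matches" (and more false positives), while raising it misses genuine ones. When a model's embeddings are less discriminative for one demographic group, the false-positive rate for that group rises at any fixed threshold, which is consistent with the disparities the studies above report.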

These techniques sound like they might increase efficiency or reduce bias, but police use of AI can lead to more racial profiling and unnecessary police deployments.

Civil rights and privacy groups have written reports on AI and surveillance practices. They provide examples of racial bias in places where police use AI technology, and they point to numerous wrongful arrests.

In Canada, the Royal Canadian Mounted Police (RCMP) and other police agencies, including the Toronto Police Service and the Ontario Provincial Police, have already been cited by the Office of the Privacy Commissioner of Canada for their use of Clearview AI technology to perform mass surveillance.

Clearview AI has a database of over three billion images collected without consent by scraping the web. Clearview AI compares faces in this database with other footage, a practice that violates Canadian privacy laws. The Office of the Privacy Commissioner of Canada has criticized the RCMP's use of this technology, and the Toronto Police Service has suspended its use of the product.

Because Bills 194 and C-27 exclude law enforcement from regulation, AI firms could enable similar mass surveillance in Canada.



The EU is moving forward

At the international level, progress has been made in regulating the use of AI in the public sector.

To date, the European Union's AI Act is the strongest law in the world when it comes to protecting the privacy and civil liberties of residents.

The EU AI Act takes a risk- and harm-based approach to regulating AI, and expects AI users to take concrete steps to protect personal information and prevent mass surveillance.

In contrast, both Canadian and U.S. laws balance residents' right not to be subject to mass surveillance against firms' desire for efficiency and competitiveness.

A trailer for “Coded Bias”.

Still time for changes

There is still time to make changes. Bill 194 is currently being debated in the Legislative Assembly of Ontario, and Bill C-27 is being debated in the Canadian Parliament.

The exclusion of police and law enforcement from Bills 194 and C-27 is a glaring omission and could jeopardize the reputation of the justice system in Canada.

The Law Commission of Ontario has criticized Bill 194, saying the proposed law does not promote human rights or privacy and would allow the unchecked use of AI in ways that violate Canadians' privacy. The commission says Bill 194 would let public agencies use AI in secret, and argues that the bill ignores the use of AI by police, prisons, courts and other law enforcement agencies.

With regard to Bill C-27, the Canadian Civil Liberties Association (CCLA) has sounded a warning and requested that the bill be withdrawn. In its view, the regulatory measures in Bill C-27 focus more on boosting productivity and data mining in the private sector than on protecting the privacy and civil rights of Canadian residents.

Given that police and national security agencies often work with private providers in surveillance and intelligence activities, regulations for such partnerships are needed. But police and national security authorities are not mentioned in Bill C-27.

The CCLA recommends that Bill C-27 be brought into line with the European Union's AI Act and include guardrails to prevent mass surveillance and to protect against abuses of the power of AI.

These will be Canada's first AI laws. We are years behind on the regulations needed to prevent the misuse of AI in the public and private sectors.

At a time when law enforcement's use of artificial intelligence is increasing, changes to Bills 194 and C-27 must be made now to protect Canadian residents.
