
AI harm often occurs in secret and builds over time – a legal scholar explains how the law can adapt to respond

As you scroll through your social media feed or let your favorite music app assemble the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and meeting your needs. But behind this convenient façade lurks a growing concern: algorithmic harms.

These harms are not obvious or immediate. They are insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.

AI systems are embedded in almost every aspect of modern life. They suggest which shows and movies you should watch, help employers decide whom to hire, and even influence judges’ sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that disadvantage certain groups or, worse, cause real-world harm?

The often overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I have designed a legal framework to do just that.

Slow burns

One of the most striking aspects of algorithmic harm is that its cumulative effects often fly under the radar. These systems typically don’t directly attack your privacy or autonomy in ways you can readily perceive. They collect vast amounts of data about people – often without their knowledge – and use that data to shape decisions that affect people’s lives.

Sometimes this causes minor inconveniences, such as advertisements that follow you across websites. But when these recurring harms go without redress, they can escalate, causing significant cumulative damage to diverse groups of people.

Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. Yet behind that seemingly favorable façade, they silently track users’ clicks and build profiles of their political beliefs, professional affiliations and personal lives. The data collected is then used in systems that make consequential decisions – whether you are identified as a pedestrian, considered for a job, or flagged as at risk of suicide.

Worse, their addictive design traps teenagers in cycles of overuse, contributing to escalating mental health crises such as anxiety, depression and self-harm. By the time you grasp the full extent, it’s too late – your privacy has been violated, your opportunities have been shaped by biased algorithms, and the safety of the most vulnerable has been undermined – all without your knowledge.

This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their effects can be both devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate bias.

Why regulation is lagging behind

Despite these growing threats, legal frameworks worldwide are struggling to keep pace. In the United States, a regulatory approach that emphasizes innovation has made it difficult to set strict standards for how these systems are used across different contexts.

Courts and regulators are accustomed to dealing with concrete harms such as physical injury or economic loss, but algorithmic harms are often subtler, cumulative and hard to detect. Regulations frequently fail to account for the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health. But because these harms build up slowly, they are difficult to address within existing legal standards.

Four types of algorithmic harm

Drawing on existing research on AI and data governance, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these areas is vulnerable to the subtle but often unchecked power of AI systems.

The first type of harm is the erosion of privacy. AI systems collect, process and transfer vast amounts of data, chipping away at people’s privacy in ways that may not be immediately obvious but that have long-term consequences. For example, facial recognition systems can track people through public and private spaces, effectively making mass surveillance the norm.

The second type of harm is the undermining of autonomy. AI systems often subtly erode your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes third-party interests, subtly shaping the opinions, decisions and behavior of millions of users.

The third type of harm is the degradation of equality. Though designed to be neutral, AI systems often inherit the biases embedded in their data and algorithms. Over time, this compounds social inequalities. In one notorious case, a facial recognition system used by retail stores to flag shoplifters disproportionately misidentified women and people of color.

The fourth type of harm is the impairment of safety. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they work as designed, they can still cause harm – take the cumulative effects of social media algorithms on teenagers’ mental health.

Because these cumulative harms often arise from AI applications shielded by trade secret laws, victims have no way to detect or trace the harm. This creates an accountability gap. How would a victim know that a biased hiring decision or a wrongful arrest was driven by an algorithm? Without transparency, it is nearly impossible to hold companies accountable.

In this UNESCO video, researchers from around the globe explain the questions surrounding the ethics and regulation of AI.

Closing the accountability gap

Categorizing these types of algorithmic harms delineates the legal boundaries of AI regulation and points to possible legal reforms for closing the accountability gap. Changes I believe would help include mandatory algorithmic impact assessments, requiring companies to document and address the immediate and cumulative harms an AI application poses to privacy, autonomy, equality and safety – both before and after it is deployed. For instance, firms that use facial recognition systems would need to evaluate those systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt-in. For example, companies that use facial recognition systems could be required to obtain opt-in consent for data processing and to let users opt out at any time.

Finally, I suggest requiring companies to disclose both their use of AI technology and its anticipated harms. To illustrate, this could include notifying customers about the use of facial recognition systems and the anticipated harms across the areas outlined in the typology.

As AI systems become more widely used in critical societal functions – from health care to education to employment – the need to regulate the harms they can cause grows ever more urgent. Without intervention, these invisible harms are likely to keep accumulating, affecting nearly everyone and hitting the most vulnerable hardest.

Because generative AI multiplies and exacerbates these harms, I believe it is important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advances.

The future of AI is incredibly promising, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.
