
Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead

When I asked ChatGPT for a joke about Sicilians the other day, it implied that Sicilians are stinky.

ChatGPT can sometimes produce stereotypical or offensive outputs.
Screen capture by Emilio Ferrara, CC BY-ND

As someone born and raised in Sicily, I reacted to ChatGPT’s joke with disgust. But at the same time, my computer scientist brain began spinning around a seemingly simple question: Should ChatGPT and other artificial intelligence systems be allowed to be biased?

You might say “Of course not!” And that would be a reasonable response. But there are some researchers, like me, who argue the opposite: AI systems like ChatGPT should indeed be biased – but not in the way you might think.

Removing bias from AI is a laudable goal, but blindly eliminating biases can have unintended consequences. Instead, bias in AI can be controlled to achieve a higher goal: fairness.

Uncovering bias in AI

As AI is increasingly integrated into everyday technology, many people agree that addressing bias in AI is an important issue. But what does “AI bias” actually mean?

Computer scientists say an AI model is biased if it unexpectedly produces skewed results. These results could exhibit prejudice against individuals or groups, or otherwise fail to align with positive human values like fairness and truth. Even small divergences from expected behavior can have a “butterfly effect,” in which seemingly minor biases are amplified by generative AI and have far-reaching consequences.

Bias in generative AI systems can come from a variety of sources. Problematic training data can associate certain occupations with specific genders or perpetuate racial biases. Learning algorithms themselves can be biased and then amplify existing biases in the data.

But systems can also be biased by design. For example, a company might design its generative AI system to prioritize formal over creative writing, or to specifically serve government industries, thus inadvertently reinforcing existing biases and excluding different views. Other societal factors, like a lack of regulation or misaligned financial incentives, can also lead to AI biases.

The challenges of removing bias

It’s not clear whether bias can – or even should – be entirely eliminated from AI systems.

Imagine you’re an AI engineer and you notice your model produces a stereotypical response, like Sicilians being “stinky.” You might think that the solution is to remove some bad examples from the training data, perhaps jokes about the smell of Sicilian food. Recent research has identified how to perform this kind of “AI neurosurgery” to deemphasize associations between certain concepts.
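To make the idea of deemphasizing an association concrete, here is a minimal sketch of one well-known family of techniques: projecting a learned “bias direction” out of embedding vectors after training. This is an illustrative example of post-hoc association editing, not the specific “AI neurosurgery” method the research above describes, and the vectors here are random stand-ins for real embeddings.

```python
# Sketch: weaken a learned association by removing the component of each
# embedding that lies along a "bias direction" (hard-debiasing style).
import numpy as np

def bias_direction(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Unit vector separating two concept embeddings (e.g., a group name and a stereotype)."""
    d = emb_a - emb_b
    return d / np.linalg.norm(d)

def debias(vectors: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the bias direction out of every embedding."""
    return vectors - np.outer(vectors @ direction, direction)

# Toy data: 5 "word vectors" of dimension 300 (random stand-ins).
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 300))
d = bias_direction(emb[0], emb[1])   # hypothetical concept pair
cleaned = debias(emb, d)
print(np.allclose(cleaned @ d, 0))   # True: no remaining component along d
```

The catch, as the next paragraph explains, is that surgically removing one association can silently change others.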

But these well-intentioned changes can have unpredictable, and possibly negative, effects. Even small variations in the training data or in an AI model’s configuration can lead to significantly different system outcomes, and these changes are impossible to predict in advance. You don’t know what other associations your AI system has learned as a consequence of “unlearning” the bias you just addressed.

Other attempts at bias mitigation run similar risks. An AI system that is trained to completely avoid certain sensitive topics could produce incomplete or misleading responses. Misguided regulations can worsen, rather than improve, problems of AI bias and safety. Bad actors could evade safeguards to elicit malicious AI behaviors – making phishing scams more convincing or using deepfakes to manipulate elections.

With these challenges in mind, researchers are working to improve data sampling techniques and algorithmic fairness, especially in settings where certain sensitive data is not available. Some companies, like OpenAI, have opted to have human workers annotate the data.
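As a simple illustration of what improved data sampling can mean in practice, here is a minimal sketch of inverse-frequency reweighting, one common way to keep underrepresented groups from being drowned out during training. The group labels and tiny dataset are hypothetical; real fairness pipelines are considerably more involved and often must work without explicit group labels.

```python
# Sketch: weight training examples by the inverse frequency of their group,
# normalized so the average weight is 1.
from collections import Counter

def group_weights(groups: list[str]) -> dict[str, float]:
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return {g: n / (k * c) for g, c in counts.items()}

groups = ["A", "A", "A", "A", "B"]            # group B is underrepresented
weights = group_weights(groups)
print(weights)                                 # {'A': 0.625, 'B': 2.5}
sample_weights = [weights[g] for g in groups]  # pass as per-example weights to a training loop
```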

On the one hand, these strategies can help the model better align with human values. On the other hand, by implementing any of these approaches, developers also run the risk of introducing new cultural, ideological or political biases.

Controlling biases

There’s a trade-off between reducing bias and ensuring that the AI system is still useful and accurate. Some researchers, including me, think that generative AI systems should be allowed to be biased – but in a carefully controlled way.

For example, my collaborators and I developed techniques that let users specify what level of bias an AI system should tolerate. This model can detect toxicity in written text by accounting for in-group or cultural linguistic norms. While traditional approaches can inaccurately flag some posts or comments written in African-American English as offensive, or language used by LGBTQ+ communities as toxic, this “controllable” AI model provides a much fairer classification.
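The sketch below shows only the “controllable” part of that idea in its simplest form: a toxicity score is compared against a tolerance level chosen by the user or deploying community, rather than a single global cutoff. The scoring function is a placeholder, and this is not the authors’ actual model, which additionally accounts for in-group linguistic norms rather than just moving a threshold.

```python
# Sketch: a classifier wrapper whose decision threshold ("tolerance") is a
# user-controlled parameter instead of a fixed global value.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControllableToxicityFilter:
    tolerance: float  # near 0.0 = flag almost everything; near 1.0 = flag almost nothing

    def is_toxic(self, text: str, score_fn: Callable[[str], float]) -> bool:
        """Flag text only when its toxicity score exceeds the chosen tolerance."""
        return score_fn(text) > self.tolerance

# Hypothetical scorer standing in for a trained model.
def toy_score(text: str) -> float:
    return 0.9 if "insult" in text.lower() else 0.2

strict = ControllableToxicityFilter(tolerance=0.1)
lenient = ControllableToxicityFilter(tolerance=0.5)
print(strict.is_toxic("some in-group slang", toy_score))   # True: over-flagging
print(lenient.is_toxic("some in-group slang", toy_score))  # False
```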

Controllable – and safe – generative AI is important to ensure that AI models produce outputs that align with human values, while still allowing for nuance and flexibility.

Toward fairness

Even if researchers could achieve bias-free generative AI, that would be only one step toward the broader goal of fairness. The pursuit of fairness in generative AI requires a holistic approach – not only better data processing, annotation and debiasing algorithms, but also human collaboration among developers, users and affected communities.

As AI technology continues to proliferate, it’s important to remember that bias removal is not a one-time fix. Rather, it’s an ongoing process that demands constant monitoring, refinement and adaptation. Although developers might be unable to easily anticipate or contain the butterfly effect, they can continue to be vigilant and thoughtful in their approach to AI bias.
