
Understanding AI results: Study shows pro-Western cultural bias in how AI decisions are explained

People are increasingly using artificial intelligence (AI) to make decisions about our lives. AI, for instance, helps make hiring decisions and offer medical diagnoses.

If you are affected, you may want an explanation of why an AI system made the decision it did. But AI systems are often so computationally complex that not even their designers know for certain how their decisions come about. That is why the development of “explainable AI” (or XAI) is booming. Explainable AI includes systems that are either simple enough themselves to be fully understood by humans, or that produce easy-to-understand explanations of the outputs of other, more complex AI models.

Explainable AI systems help AI engineers monitor and correct the processing of their models. They also help users make informed decisions about whether to trust AI outputs and how best to use them.

Not all AI systems need to be explainable. But in high-risk domains we can expect a proliferation of XAI. For example, the recently passed European AI Act, a precursor to similar laws around the world, protects a “right to explanation”: citizens have the right to receive an explanation of an AI decision that affects their other rights.

But what if something like your cultural background influences what explanations you expect from an AI?

In a recent systematic review, we analyzed over 200 studies from the last decade (2012-2022) that tested the explanations of XAI systems on humans. We wanted to see to what extent researchers showed awareness of cultural differences that might be relevant to developing satisfactorily explainable AI.

Our results suggest that many existing systems may provide explanations primarily tailored to individualistic, typically Western populations (for example, people in the US or the UK). Moreover, most XAI user studies sampled only Western populations, yet unwarranted generalizations of the results to non-Western populations were ubiquitous.

Cultural differences in explanations

There are two common ways to explain a person’s actions. One involves appealing to the person’s beliefs and desires. This kind of explanation is internalist: it focuses on what is going on in the person’s mind. The other is externalist: it appeals to factors such as social norms, rules or other features that lie outside the person.

To see the difference, think about how we might explain a driver stopping at a red light. We could say: “They believe the light is red and don’t want to break the traffic rules, so they decided to stop.” This is an internalist explanation. But we could also say: “The light is red and the traffic rules require drivers to stop at red lights, so the driver stopped.” This is an externalist explanation.



Many psychological studies suggest that internalist explanations are preferred in “individualistic” countries, where people often view themselves as more independent of others. These countries tend to be Western, educated, industrialized, rich and democratic.

However, in “collectivist” societies, such as those commonly found in Africa or South Asia, where people often view themselves as interdependent, internalist explanations are not clearly preferred over externalist ones.

Preferences in explaining behavior are relevant to what a successful XAI output might look like. An AI offering a medical diagnosis might be accompanied by an explanation such as: “Since your symptoms are fever, sore throat and headache, the classifier thinks you have the flu.” This is internalist, because the explanation appeals to an “internal” state of the AI (what it “thinks”), albeit only metaphorically. Alternatively, the diagnosis could be accompanied by a statement that doesn’t mention any internal state, such as: “Since your symptoms are fever, sore throat and headache, the classifier outputs that you have the flu.” This is externalist. The explanation relies on “external” factors such as inclusion criteria, much like how we might explain stopping at a traffic light by referring to traffic rules.
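As a rough illustration (not taken from the reviewed studies), the sketch below shows how a hypothetical XAI wrapper might phrase the same classifier output in internalist versus externalist terms. All function and variable names are invented for this example.

```python
# Minimal sketch (hypothetical): phrasing one classifier output in
# internalist vs. externalist terms.

def internalist_explanation(symptoms: list[str], diagnosis: str) -> str:
    # Appeals to an "internal" (metaphorical) mental state of the classifier.
    return (f"Since your symptoms are {', '.join(symptoms)}, "
            f"the classifier thinks you have {diagnosis}.")

def externalist_explanation(symptoms: list[str], diagnosis: str) -> str:
    # Appeals only to external factors: the inputs and the output rule.
    return (f"Since your symptoms are {', '.join(symptoms)}, "
            f"the classifier outputs that you have {diagnosis}.")

if __name__ == "__main__":
    symptoms = ["fever", "sore throat", "headache"]
    print(internalist_explanation(symptoms, "the flu"))
    print(externalist_explanation(symptoms, "the flu"))
```

The classifier's output is identical in both cases; only the framing of the explanation changes.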

If people from different cultures prefer different kinds of explanations, this matters for designing inclusive explainable AI systems.

However, our research suggests that XAI developers are not sensitive to possible cultural differences in explanation preferences.

Ignoring cultural differences

A striking 93.7% of the studies we reviewed showed no awareness of cultural differences that might be relevant to developing explainable AI. Moreover, when we checked the cultural background of the people tested in the studies, we found that 48.1% of the studies did not report cultural background at all. This suggests the researchers did not consider cultural background a factor that could affect the generalizability of their results.

Of the studies that did report cultural background, 81.3% sampled only Western, educated, industrialized, rich and democratic populations. Only 8.4% sampled non-Western populations, and 10.3% sampled mixed populations.

Sampling only one kind of population need not be a problem if the conclusions are limited to that population, or if researchers give reasons to believe that other populations are similar. However, of the studies that reported cultural background, 70.1% extended their conclusions beyond the study population (to users or people in general), and most showed no evidence of reflection on cultural similarities.



To see how deep the neglect of culture runs in explainable AI research, we added a systematic “meta” review of 34 existing literature reviews in the field. Surprisingly, only two reviews commented on Western-biased sampling in user studies, and only one review mentioned overgeneralizations of XAI study results.

This is problematic.

Why the findings matter

If findings about explainable AI systems apply to only one population, those systems may not meet the explanatory needs of other people affected by, or using, them. This can weaken trust in AI. If AI systems make high-stakes decisions but don’t give you a satisfying explanation, you may be more likely to distrust them, even when their decisions (such as medical diagnoses) are accurate and important to you.

To counteract this cultural bias in XAI, developers and psychologists should collaborate to test for relevant cultural differences. We also recommend that the cultural background of study samples be reported alongside the results of XAI user studies.

Researchers should state whether their study sample is representative of a broader population. They can also use qualifiers such as “US users” or “Western participants” when reporting their results.

As AI is used for decision-making around the world, systems must provide explanations that people from different cultures find acceptable. At present, there is a risk that large populations who could benefit from the potential of explainable AI are being neglected in XAI research.
