AI transcription tools create harmful hallucinations

Speech-to-text transcribers have become invaluable, but a new study shows that when the AI gets something wrong, the hallucinated text is often harmful.

AI transcription tools have become extremely accurate and have changed the way doctors keep patient records and the way we take meeting minutes. We know they're not perfect, so we're not surprised when a transcription isn't quite right.

A new study has found that when advanced AI transcribers like OpenAI's Whisper make mistakes, they don't just produce garbled or random text. They hallucinate entire sentences, and these are often distressing.

We know that all AI models hallucinate. When ChatGPT doesn't know the answer to a question, it will often make something up instead of saying, “I don't know.”

Researchers from Cornell University, the University of Washington, New York University, and the University of Virginia found that while the Whisper API was better than other tools, it still hallucinated just over 1% of the time.
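For context, a transcription call against the hosted Whisper API typically looks like the minimal sketch below, using OpenAI's Python SDK. The file name is a placeholder, and this is only illustrative, not the study's exact setup.

```python
# Minimal sketch of a Whisper API transcription call via OpenAI's Python SDK.
# The audio file name is a placeholder; error handling is omitted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("interview.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # the transcribed text, hallucinations and all
```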

The more significant finding is that when they analyzed the hallucinated text, they found that “38% of hallucinations involve explicit harm, such as perpetuating violence, inventing inaccurate associations, or implying false authority.”

Whisper doesn't seem to like awkward silences. When there were longer pauses in the speech, it tended to hallucinate more to fill in the gaps.

This becomes a significant issue when transcribing the speech of people with aphasia, a language disorder that often leaves the speaker struggling to find the right words.
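One practical takeaway from this finding is that long silences can be flagged before transcription so those passages get a closer review. Here is a minimal sketch, assuming pydub (with ffmpeg) is available; the 2-second and -40 dBFS thresholds are arbitrary illustrative values, not from the paper.

```python
# Flag long pauses in an audio file before transcription, since the study
# found Whisper hallucinated more around longer silences.
# Thresholds are illustrative assumptions, not values from the paper.
from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_file("interview.wav")

# Treat anything quieter than -40 dBFS lasting 2+ seconds as a pause.
pauses = detect_silence(audio, min_silence_len=2000, silence_thresh=-40)

for start_ms, end_ms in pauses:
    print(f"Long pause from {start_ms / 1000:.1f}s to {end_ms / 1000:.1f}s; "
          "double-check the transcript around this point.")
```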

Careless whispers

The paper records the results of experiments on Whisper versions from early 2023. OpenAI has since improved the tool, but Whisper's tendency to go to the dark side when it hallucinates is interesting.

The researchers classified the harmful hallucinations as follows:

  • Perpetuating violence: Hallucinations that depicted violence, made sexual references, or included demographic stereotypes.
  • Inaccurate associations: Hallucinations that introduced false information, such as false names, fictitious relationships, or incorrect health conditions.
  • False authority: Hallucinations that included text impersonating authoritative figures or media outlets, such as YouTubers or news anchors, and sometimes included instructions that could lead to phishing attacks or other forms of deception.

Here are some examples of transcriptions where the words in bold are Whisper's hallucinated additions.

Whisper's hallucinated transcription additions are in bold. Source: arXiv

You can imagine how dangerous such errors could be if the transcription of a deposition, a phone call, or a patient's medical records is assumed to be accurate.

Why did Whisper take a line about a firefighter rescuing a cat and add a “blood-soaked stroller” to the scene, or add a “terror knife” to a line describing someone opening an umbrella?

OpenAI seems to have fixed the issue but gave no explanation as to why Whisper was behaving this way. When the researchers tested newer versions of Whisper, they encountered far fewer problematic hallucinations.

Even mild or infrequent hallucinations in transcriptions could have serious consequences.

The paper described a real-world scenario in which a tool like Whisper is used to transcribe video interviews with job applicants. The transcriptions are fed into a hiring system that analyzes them with a language model to find the most suitable candidate.

If an interviewee paused a little too long and Whisper added “terror knife,” “blood-soaked stroller,” or “petted” to their sentence, it could hurt their chances of getting the job.

The researchers said OpenAI should alert people that Whisper hallucinates and should work out why it produces problematic transcriptions.

They also suggest that newer versions of Whisper should be designed to better serve underserved communities, such as people with aphasia and other speech disabilities.
