
AI-powered productivity tools that could make life harder

Paul Meyer has been deaf since birth and has relied on human interpreters and captioners to communicate with colleagues throughout his nearly three-decade career in human resources and technical recruiting.

But as companies came to rely more heavily on video conferencing during the pandemic, he noticed a worrying trend. As meetings moved online, companies increasingly turned to AI-powered transcription software. And as the technology became part of everyday business life, some employers assumed it could be used in other situations, such as replacing human interpreters.

The problem, Meyer says, is that the technology has flaws that employers are not aware of, and those flaws make life difficult for deaf employees.

“The company thought the AI captioning technology was perfect. They were confused about why I was missing a lot of information.”


Speech recognition technology, introduced into the workplace in the 1990s, has improved significantly and created new opportunities for disabled people to hold conversations when an interpreter is not available.

It is now increasingly used by hearing people as a productivity tool that can help teams, for example, summarize notes or create meeting transcripts. According to Forrester Research, 39 percent of global workers surveyed said their employers had started or planned to integrate generative AI into video conferencing. Six in ten now use online or video conferencing weekly, a figure that has doubled since 2020.

Increased adoption has many benefits for deaf workers, but some warn that these tools could be harmful to disabled people if employers do not understand their limitations. One concern is the assumption that AI can replace trained human interpreters and captioners. That concern is compounded by the historic lack of input from people with disabilities into AI products, even those marketed as assistive technologies.

Speech recognition models often cannot understand people with atypical or accented speech and can perform poorly in noisy environments.

“People have the misconception that AI is perfect for us. It’s not perfect for us,” says Meyer. He was laid off from his job and believes the lack of adequate accommodations made him an easy target when the company downsized.

Quote from Paul Meyer: “People have the wrong idea that AI is perfect for us. It is not perfect for us.” His voice, as transcribed by Google Speech-to-Text: “Half of our principle is not possible.”

Some companies are now trying to improve speech recognition technology – for instance, by training their models on a wider range of speech samples.

Google, for instance, began collecting more diverse speech samples in 2019 after realizing that its models did not work for all users. In 2021, it released the Project Relate app for Android, which collects individual speech samples to create a real-time transcript of a user's speech. The app is aimed at people with non-standard speech, including people with deaf accents, ALS, Parkinson's disease, cleft palate and stuttering.

In 2022, four other tech companies – Amazon, Apple, Meta and Microsoft – joined Google in research led by the Beckman Institute at the University of Illinois Urbana-Champaign to gather more voice samples to be shared among them and other researchers.

Google researcher Dimitri Kanevsky, who has a Russian accent and unusual speech, says the Relate app has allowed him to have spontaneous conversations with contacts, such as other attendees at a math conference.

“I have become much more social. I could communicate with anyone at any time, in any place, and they could understand me,” says Kanevsky, who lost his hearing at the age of three. “It gave me an incredible feeling of freedom.”

Quote from Dimitri Kanevsky: “I have become much more social. I could communicate with anyone at any time, any place and they could understand me. It gave me an incredible feeling of freedom.”

A handful of deaf-led start-ups, such as Intel-backed OmniBridge and Techstars-backed Sign-Speak, are working on products focused on translation between American Sign Language (ASL) and English. Adam Munder, the founder of OmniBridge, says that while he was fortunate at Intel to have access to interpreters throughout the day, including while walking around the office and in the cafeteria, he knows that many companies do not offer such access.

“With OmniBridge, those hallway and cafeteria conversations could be filled in,” says Munder.

However, despite progress in this area, there are concerns about the lack of representation of disabled people in the development of some more widely used translation tools. “There are a lot of hearing people who have developed solutions or tried to do things thinking they know what deaf people need, thinking they know the best solution, but they may not really understand the whole story,” says Munder.

At Google, where 6.5 percent of employees report having a disability, Jalon Hall, the only Black woman in Google's deaf and hard-of-hearing employee group, led a project starting in 2021 to better understand the needs of Black deaf users. Many of those she spoke to used Black ASL, a variant of American Sign Language that developed largely as a result of the segregation of American schools in the 19th and 20th centuries. She says the people she spoke to did not find that Google's products worked well for them.

“While there are many tech-savvy deaf users, they are typically not included in important dialogues. They are typically not included in major products as they are developed,” says Hall. “That means they will continue to lag behind.”

In a recent paper, a team of five researchers who are deaf or hard of hearing found that most recently published studies on sign language did not consider deaf perspectives. The studies also did not use datasets that represented deaf people, and they included modeling decisions that perpetuated incorrect biases about sign language and the deaf community. These biases could become a problem for future deaf workers.

“What hearing people who don't sign consider to be 'adequate' could lead to the baseline for bringing products to market becoming quite low,” says Maartje De Meulder, senior researcher at Utrecht University of Applied Sciences in the Netherlands and a co-author of the paper. “This raises concerns that the technology is just not adequate, or that it is not being willingly adopted by deaf workers but is required or even forced on them.”

Ultimately, companies must prioritize improving these tools for people with disabilities. Google has not yet integrated the improvements to its speech-to-text models into commercial products, despite researchers reporting that they reduced the error rate by a third.

Hall says she has received positive feedback from senior executives about her work, but no clarity on whether it would impact Google's product decisions.

Meyer hopes for more deaf representation and more resources for disabled people. “I think one problem with AI is that people think it makes it easier for them to talk to us, but it may not be easy for us to talk to them,” Meyer says.

