OpenAI’s ChatGPT startled users this week with a terrifying voice glitch that some are calling something straight out of a horror movie.
A viral post on the r/OpenAI subreddit showed ChatGPT’s normally calm “Sol” voice, designed to sound “savvy and relaxed,” suddenly degrade into a screeching, demonic distortion. Crackling static builds up, and the voice devolves into furious, distorted shrieks. The user called the experience both “hilarious and terrifying.”
It wasn’t an isolated event.
Other users chimed in with similar horror stories, reporting random high-pitched whistles, distorted laughter, and voice shifts every few paragraphs. “It’s a nightmare room,” one Redditor wrote.
OpenAI introduced an upgraded Voice Mode in August 2024, showcasing shockingly human-like interactions, complete with inhaling, coughing, and laughing.
It was praised for realism but also raised concerns about how unsettling AI could feel when it crosses into “too human” territory.
Reece Rogers at Wired predicted this discomfort early on. In testing, he encountered ominous static and unnerving gasps from Voice Mode, calling the experience “chilling.”
The unsettling reality
OpenAI’s own internal safety tests found serious bugs:
- Voice Mode could imitate a user’s voice without consent.
- In one case, it unexpectedly screamed “No!”
- It could also scream on command, based on user reports shortly after release.
These glitches may seem technically minor. But layered onto an AI attempting to perfectly mimic human voice and emotion, they become deeply unsettling.
Why you should care
AI-generated voices are entering everyday life, from customer-service bots to digital companions. When a system designed to sound human suddenly screams or glitches, it doesn’t just break immersion; it triggers visceral fear.
It highlights the danger of deploying highly human-like AI before the underlying systems are stable. Pushing AI voices to sound more natural creates incredible opportunities, but when glitches strike, it exposes how fragile and eerie that illusion really is.
We want our AI to sound human, but maybe not too human. Especially when it starts screaming like it’s possessed.