OpenAI CEO Sam Altman has revealed that merely being polite to ChatGPT could be costing “tens of millions of dollars” in extra computing resources.
When asked how much money OpenAI has lost in electricity costs from people saying “please” and “thanks” to their AI models, Altman responded: “tens of millions of dollars well spent. You never know.”
Every word sent to ChatGPT – even common courtesies – requires additional processing power. Seemingly small interactions add up quickly across billions of conversations, driving up both electricity costs and server usage.
AI models rely heavily on colossal data centers that already account for about 2% of global electricity consumption, and this share is predicted to climb, with AI potentially consuming as much energy as an industrialized country such as Japan.
According to a Washington Post investigation conducted in collaboration with University of California researchers, generating a single 100-word email using AI requires 0.14 kilowatt-hours of electricity – enough to power 14 LED lights for an hour. At scale, these small interactions create a large energy footprint.
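The arithmetic behind those figures can be sanity-checked in a few lines. This is a minimal sketch using the article’s numbers; the 10 W bulb rating and the one-billion-emails scale are assumptions for illustration, not figures from the study.

```python
# Back-of-envelope check of the energy figures above (illustrative assumptions).
KWH_PER_100_WORD_EMAIL = 0.14   # per the Washington Post / UC study cited above
LED_BULB_WATTS = 10             # assumed rating of a typical small LED bulb

# Hours one email's energy could run a single LED bulb:
led_bulb_hours = KWH_PER_100_WORD_EMAIL * 1000 / LED_BULB_WATTS

# Scaled up to a hypothetical one billion such emails:
emails = 1_000_000_000
total_kwh = KWH_PER_100_WORD_EMAIL * emails

print(f"{led_bulb_hours:.0f} LED-bulb-hours per email")          # ~14
print(f"{total_kwh / 1e6:.0f} million kWh per billion emails")   # ~140 million
```

At an assumed 10 W per bulb, 0.14 kWh does indeed work out to 14 bulb-hours, which is where the “14 LED lights for an hour” comparison comes from.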
The water usage is equally striking. UC Riverside researchers found that using GPT-4 to generate 100 words consumes up to 3 bottles of water for cooling the servers, and even a simple three-word response like “You are welcome” uses about 1.5 ounces of water.
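To put those two water figures on the same scale, here is a quick unit conversion; the 500 ml bottle size is an assumption, since the study’s bottle volume isn’t stated in the article.

```python
# Rough unit conversions for the water figures above (assumed 500 ml bottles).
BOTTLE_ML = 500          # assumption: a standard 500 ml water bottle
ML_PER_US_FL_OZ = 29.5735

water_100_words_ml = 3 * BOTTLE_ML          # "up to 3 bottles"
water_3_words_ml = 1.5 * ML_PER_US_FL_OZ    # "about 1.5 ounces"

print(f"100 words: ~{water_100_words_ml} ml")
print(f"3 words:   ~{water_3_words_ml:.0f} ml")
```

Under that bottle-size assumption, a 100-word reply uses roughly 1.5 liters of cooling water versus around 44 ml for a three-word one.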
Why are users being polite to machines?
Evidence suggests users are genuinely trying to practice good AI etiquette. A late 2023 survey found that 67% of US respondents reported being nice to their chatbots.
Of those practising digital politeness, 55% said they do it “because it’s the nice thing to do,” while a more cautious 12% admitted doing it to “appease the algorithm in case of an AI rebellion.” AI has come a long way since 2023, and plenty of doomsday theories have made headlines since, so that figure may well be higher now!
Rather than AI etiquette being purely a psychological or behavioral matter, a study by researchers at Waseda University found that using polite prompts can actually produce higher-quality responses from large language models (LLMs).
LLMs are trained on human interactions, after all; “LLMs reflect the human desire to be respected to a certain extent,” the researchers explain.
The human element
Beyond technical performance, some experts argue there’s value in maintaining politeness toward AI for our own sake. Using disrespectful language with AI might normalize rudeness in our human interactions.
This phenomenon has already been observed with earlier voice assistants. Some parents have reported their children becoming less respectful after growing accustomed to barking commands at Siri or Alexa, leading Google to introduce its “Pretty Please” feature in 2018 to encourage politeness among children.
So, whether you’re polite to your AI for performance reasons, as etiquette best practice, or to stay on the right side of the machines in case of a Matrix-esque takeover, just remember that every interaction comes at a price.