Despite its immense popularity, OpenAI is allegedly burning through money at an unsustainable rate and could face a staggering $5 billion loss by the end of 2024.
That’s according to a shock report from The Information, which cites unreleased internal financial statements and industry figures revealing that OpenAI has already spent roughly $7 billion on training models and up to $1.5 billion on staffing.
Dylan Patel of SemiAnalysis had earlier told The Information that OpenAI was allegedly forking out some $700,000 a day to run its models in 2022, posting losses of roughly $500 million that year alone.
OpenAI generates substantial revenue, estimated at $3.5 billion to $4.5 billion annually, but its expenses far outpace its income.
The company has already raised over $11 billion across seven funding rounds and is currently valued at some $80 billion.
However, despite ChatGPT being a household name with tens of millions of users worldwide, OpenAI might prove a money pit for investors if nothing changes.
Microsoft, OpenAI’s biggest backer by far, has already poured billions into the company in recent years.
Its most recent cash injection, $10 billion in early 2023, was rumored to include a 75% share of OpenAI’s profits and a 49% stake in the company, as well as the integration of ChatGPT into Bing and other Microsoft products.
In return, OpenAI receives access to Azure cloud servers at a substantially reduced rate.
But in the world of generative AI, there are never enough chips or cloud hardware for the groundbreaking, world-changing ideas that require billions to get off the ground.
OpenAI is heavily invested in being the first to achieve artificial general intelligence (AGI), an ambitious and incredibly expensive endeavor.
CEO Sam Altman has already hinted that he simply won’t stop until that goal is reached.
He’s involved in developing nuclear fusion and has discussed creating an international chip venture, backed by the UAE and US governments, worth trillions.
But even the man at the helm isn’t blind to the difficult journey ahead.
Speaking at the World Economic Forum, Altman said, “We do need far more energy in the world than we thought we needed before. We still don’t appreciate the energy needs of this technology.”
Competition is red-hot
Competition in the generative AI space is also intensifying, with big players like Google, Amazon, and Meta all vying for a slice of the pie.
While ChatGPT remains the best-known AI chatbot, it’s capturing an ever-smaller portion of the total revenue up for grabs.
Plus, the open-source camp, led largely by Mistral and Meta, is building increasingly powerful models that are cheaper and more controllable than the closed-lab projects from OpenAI, Google, and others.
As Barbara H. Wixom, a principal research scientist at the MIT Center for Information Systems Research, aptly puts it, “Like any tool, AI creates no value unless it’s used properly. AI is advanced data science, and you need to have the right capabilities to be able to work with it and manage it properly.”
And therein lies a critical point. If a company has the money and technical know-how to harness generative AI, it doesn’t necessarily need to partner with closed-source companies like OpenAI. Instead, it can build its own solutions.
Salesforce recently proved the point by releasing a cutting-edge compact model for API calls that outperformed its closed-source competitors.
OpenAI and others are trying to push the envelope with enterprise products like ChatGPT Enterprise, but it’s tough going, as generative AI remains both costly and of questionable value for the money right now.
Adam Selipsky, CEO of Amazon Web Services (AWS), said as much in 2023: “A lot of the customers I’ve talked to are unhappy about the cost that they’re seeing for running some of these models.”
2023 provided few answers for AI monetization
The year 2023 acted as a testing ground for various AI monetization approaches, but none proved a silver bullet for the industry’s mounting costs.
One of the biggest challenges of AI monetization is that it doesn’t follow the same economics as conventional software.
Each user interaction with a model like ChatGPT requires its own computation, which consumes energy and piles up ongoing costs that scale as more users join the system.
This poses an enormous challenge for companies offering AI services at flat rates, as expenses can quickly outpace revenues.
If subscription prices are raised too far, people will simply bail out. Economic surveys suggest that subscriptions are among the first things to be culled when people need to cut back on spending.
Microsoft’s recent collaboration with OpenAI on GitHub Copilot, an AI coding assistant, served as a prime example of how subscriptions can backfire.
Microsoft charged a $10 monthly subscription for the tool but reported an average monthly loss of more than $20 per user. Some power users inflicted losses of up to $80 per month.
It’s likely a similar situation with other generative AI tools. Casual users subscribe to just one of the many available tools on a monthly basis and may cancel and switch to a different one at any time. Power users, on the other hand, consume resources without contributing anything extra to revenue.
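As a rough illustration, here’s a minimal Python sketch of flat-rate unit economics. The per-request inference cost and the usage counts are invented assumptions, not disclosed figures; they’re chosen only so the results line up with the Copilot losses reported above.

```python
# Back-of-the-envelope sketch of flat-rate pricing for an AI service.
# ASSUMPTIONS: the $0.01 per-request inference cost and the request
# counts below are hypothetical; only the $10 subscription price and
# the ~$20 average / ~$80 power-user losses come from the article.

SUBSCRIPTION_PRICE = 10.00  # USD per user per month (Copilot's early price)
COST_PER_REQUEST = 0.01     # hypothetical inference cost per request, USD

def monthly_margin(requests_per_month: int) -> float:
    """Revenue minus inference cost for one subscriber in one month."""
    return SUBSCRIPTION_PRICE - COST_PER_REQUEST * requests_per_month

for label, requests in [("casual", 500), ("average", 3_000), ("power", 9_000)]:
    print(f"{label:>7} user: {requests:>5} requests/month "
          f"-> margin ${monthly_margin(requests):+.2f}")
```

Under these assumptions, the casual user earns the provider a modest $5 a month, the average user loses it $20, and the power user loses it $80. Raising prices enough to cover the heaviest users would drive away the profitable casual ones, which is exactly the bind flat-rate AI services find themselves in.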
Some think OpenAI has resorted to dirty tricks to keep the money flowing. For example, the GPT-4o demo, timed perfectly to coincide with Google I/O, showed off real-time speech features that appeared to break new ground and outshine Google’s announcements.
But we’re still waiting for the rollout. OpenAI has yet to release those hotly anticipated features to anyone, citing safety concerns.
“We’re improving the model’s ability to detect and refuse certain content,” OpenAI said of the delay.
“We’re also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses. As part of our iterative deployment strategy, we’ll start the alpha with a small group of users to gather feedback and expand based on what we learn.”
Premium sign-ups spiked as people flocked to try those new features. Was OpenAI chasing a short-term revenue boost driven by features that were never ready?
Energy costs are another roadblock
There’s one more snag at hand here: power and water.
By 2027, the energy consumed by the AI industry could be comparable to that of a small nation. Recent spikes in water usage by tech giants like Microsoft and Google are largely attributed to intensive AI workloads.
AI-driven water shortages recently gripped Taiwan, which began redirecting water from agricultural use to its chip industry amid a drought in a bid to keep manufacturing online.
Taiwan’s head of telecoms said AI needs an ‘entirely new’ infrastructure to keep up with demand.
This all comes at a cost, both at the corporate level for Microsoft, Google, and others, and for local and national economies.
As the world watches eagerly to see how the AI story unfolds, one thing is certain: the path forward is proving tricky terrain to traverse.
Futurist visions of an AI-embedded future where intelligent machines live alongside humans may well be inevitable, but the timeline is impossible to predict.
The coming years will be pivotal in shaping generative AI’s trajectory, in terms of its return on investment, its sustainability, and the tension between the two.
As Barbara H. Wixom of MIT warns, “You have to find a way to pay for this. Otherwise, you can’t sustain the investments, and then you have to pull the plug.”
Will generative AI ever grind to a halt? You’d have to think it’s too big to fail. But it does seem stuck in monetization purgatory right now, and something, from somewhere, needs to deliver another jolt of progress.
It may not take much to push generative AI towards a crucial flashpoint where progress comes cheaply and naturally.
Fusion power, analog low-power AI hardware, lightweight architectures: it’s all in the pipeline. We can only wait and watch to see when it all clicks into place.