Check your research, MIT: 95% of AI projects don’t fail – quite the opposite.
According to recent data from G2, almost 60% of companies already have AI agents in production, and fewer than 2% of those agents actually fail once they’re deployed. This paints a very different picture than recent academic forecasts, which point to widespread stagnation in AI projects.
As one of the world's largest crowdsourced software review platforms, G2's data set reflects real-world adoption trends – showing that AI agents are proving far more durable and “sticky” than early generative AI pilots.
“Our report really shows that agents, in terms of failure or success, are a different story when it comes to AI,” G2 research director Tim Sanders told VentureBeat.
Handing work over to AI in customer service, BI, and software development
Sanders points to the now often-quoted MIT study published in July, arguing that it only considered custom generative AI projects, and that many media outlets generalized this to mean that AI fails 95% of the time. He notes that the university researchers analyzed public announcements rather than closed data: if companies didn’t announce any impact on profit and loss, their projects were counted as failures – even when that was not actually the case.
G2's 2025 AI Agents Insights Report, in contrast, surveyed more than 1,300 B2B decision-makers and came to the following conclusions:
- 57% of companies have agents in production, and 70% say agents are “the core of operations”;
- 83% are satisfied with agent performance;
- Companies now invest an average of more than $1 million annually, with one in four spending more than $5 million;
- 9 out of 10 plan to increase this investment in the next 12 months;
- Companies have seen 40% cost savings and 23% faster workflows, and one in three report speed increases of more than 50%, particularly in marketing and sales;
- Almost 90% of study participants reported higher employee satisfaction in the departments where agents were deployed.
The top use cases for AI agents? Customer service, business intelligence (BI), and software development.
Interestingly, G2 found a “surprising number” of organizations (about one in three) taking what Sanders calls a “let it rip” approach.
“Basically, they allowed the agent to perform a task and then either immediately undid it if it was a bad action, or did QA so they could undo the bad actions very, very quickly,” he explained.
At the same time, agent programs with humans in the loop were twice as likely as fully autonomous agent strategies to achieve cost savings of 75% or more.
This reflects what Sanders described as a “dead heat” between “let it rip” organizations and those that “leave some human gates” in place. “In a few years, the human will be the focus,” he said. “More than half of our respondents said there is more human control than we expected.”
However, nearly half of IT buyers are comfortable giving agents full autonomy for low-risk workflows such as data correction or data pipeline management. Meanwhile, think of BI and research as prep work, Sanders said: agents collect information in the background to prepare people for final steps and final decisions.
A classic example of this is a mortgage loan, Sanders noted: agents do everything up to the point where the human analyzes their results and says yes or no to the loan.
If there are errors, they stay in the background. “It’s just not going to publish under your name and put your name on it,” Sanders said. “It makes you trust it more. You use it more often.”
When it comes to specific deployment methods, platforms such as Salesforce's Agentforce are “winning out” over pre-built agents and in-house builds, taking 38% of all market share, Sanders reported. However, many companies appear to be moving to hybrid solutions, with the goal of eventually adopting internal tools.
Then, because they want a trusted data source, “they’ll crystallize around Microsoft, ServiceNow, Salesforce – companies with a real system of record,” he predicted.
AI agents are not deadline-driven
Why are agents (at least in some cases) so much better than humans? Sanders pointed to a concept called Parkinson's law, which states that “work expands to fill the time available for its completion.”
“Individual productivity doesn’t lead to organizational productivity because people are driven only by deadlines,” Sanders said. When companies looked at gen AI projects, they didn't move the goalposts; the deadlines didn't change.
“The only way to fix this is to either move the goalposts up or work with non-humans, because non-humans are not subject to Parkinson's law,” he said, noting that they don't suffer from “human procrastination syndrome.”
Agents don't take breaks. They don't get distracted. “They just grind, so you don’t have to change the deadlines,” Sanders said.
“If you focus on ever-faster QA cycles, perhaps even automated ones, you’ll be able to fix your agents faster than your employees.”
Start with business problems and understand that trust is built slowly
Still, Sanders sees AI following the same trajectory as the cloud in terms of trust: he remembers 2007, when everyone was quick to adopt cloud tools; then, in 2009 or 2010, “there was sort of a low point in confidence.”
There are also security concerns: 39% of all respondents to the G2 survey said they had already experienced a security incident since adopting AI; in 25% of cases, it was severe. Sanders emphasized that companies need to think in terms of measuring, in milliseconds, how quickly an agent can be retrained so that it never repeats a bad action.
Always include IT operations in AI deployments, he advised. They know what went wrong with gen AI and robotic process automation (RPA), and they can unpack explainability, leading to much more trust.
On the other hand, don't trust providers blindly; in fact, only half of respondents said they do. Sanders found that the No. 1 trust signal is agent explainability. “We were told over and over again in qualitative interviews: If you (a provider) can’t explain it, you can’t deploy and manage it.”
It's also important to start with the business problem and work backwards, he advised: don't buy agents and then search for a proof of concept. When leaders deploy agents in the most vulnerable areas, internal users will be more forgiving when incidents occur and more willing to iterate, building their capabilities.
“People still don’t trust the cloud, they definitely don’t trust gen AI, and they may not trust agents until they experience them – and then the game changes,” Sanders said. “Trust comes on a mule – you don’t just get forgiveness.”

