Companies are already using agentic AI to make decisions, but governance is lagging behind

Companies are moving quickly to adopt agentic AI – artificial intelligence systems that act without human direction – but have been much slower to adopt governance to oversee it, a new survey shows. This mismatch is a significant source of risk when adopting AI. In my view, it is also a business opportunity.

I’m a professor of management information systems at Drexel University’s LeBow College of Business, whose Center for Applied AI and Business Analytics recently surveyed more than 500 data professionals. Here is what we found: 41% of companies use agentic AI in their daily operations. These are not just pilot projects or one-off tests; they are part of standard work processes.

At the same time, governance is lagging behind. Only 27% of companies say their governance frameworks are mature enough to effectively monitor and manage these systems.

In this context, governance is not about regulation or unnecessary rules. It means having policies and practices in place that give people clear authority over how autonomous systems operate, including who is responsible for decisions, how behavior is reviewed, and when humans should be involved.

This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The episode demonstrated that unexpected conditions can lead to undesirable outcomes even when autonomous systems behave “as planned.”

This raises a big question: If something goes wrong with AI, who is responsible – and who can intervene?

Why governance matters

When AI systems act independently, responsibility no longer lies where companies expect it to. Decisions are still being made, but ownership is harder to trace. In the financial services sector, for example, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often find out only when their card is declined.

So what happens if your card is mistakenly declined by an AI system? In this case, the problem is not with the technology itself – it works the way it was designed – but with accountability. Research on human-AI governance shows that problems arise when organizations don’t clearly define how humans and autonomous systems should work together. This lack of clarity makes it difficult to know who is responsible and when they should intervene.
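To make the accountability gap concrete, here is a minimal sketch – hypothetical Python with invented names and an assumed threshold, not any real bank’s system – of how a fraud pipeline could record an accountable owner for every action and route low-confidence cases to a human before blocking:

```python
# Hypothetical sketch (invented names, not any vendor's API): one way to make
# an autonomous fraud decision traceable and reviewable by design.
from dataclasses import dataclass
from datetime import datetime, timezone

AUTO_BLOCK_THRESHOLD = 0.95  # assumed policy value: above this, the model may act alone

@dataclass
class FraudDecision:
    transaction_id: str
    risk_score: float        # produced by the fraud model
    action: str              # "block" or "escalate"
    decided_by: str          # "model" or the queue/person who will decide
    accountable_owner: str   # the team answerable for this decision
    timestamp: str

def decide(transaction_id: str, risk_score: float) -> FraudDecision:
    """Act autonomously only above the policy threshold; otherwise route the
    case to a human queue before blocking. Ownership is recorded either way."""
    if risk_score >= AUTO_BLOCK_THRESHOLD:
        action, decided_by = "block", "model"
    else:
        action, decided_by = "escalate", "fraud-review-queue"
    return FraudDecision(
        transaction_id=transaction_id,
        risk_score=risk_score,
        action=action,
        decided_by=decided_by,
        accountable_owner="payments-risk-team",  # assumed organizational owner
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

print(decide("txn-1042", 0.97))  # blocked autonomously, with a traceable owner
print(decide("txn-1043", 0.80))  # routed to human review before any block
```

The specific threshold matters less than the structure: responsibility and review are encoded before the system acts, so a declined card can always be traced back to a named owner.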

Without governance designed for autonomy, small problems can spread silently. Control becomes sporadic and trust weakens – not because the systems fail outright, but because people find it difficult to explain or stand behind them.

When humans enter the loop too late

In many organizations, people are “in the loop” with the technology, but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible – when a price looks wrong, a transaction is flagged, or a customer complains. At that point, the outcome is already set, and human review becomes correction rather than oversight.

Late intervention can limit the consequences of individual decisions, but it rarely clarifies who is responsible. The results can be corrected, but responsibility remains unclear.

Current research shows that human oversight becomes informal and inconsistent when authority is unclear. The problem is not human involvement, but timing. Without governance planned in advance, people act as a safety valve rather than as responsible decision-makers.

How governance determines who gets ahead

Agentic AI often delivers quick early wins, especially when tasks are first automated. Our survey found that many companies are seeing these early benefits. However, as autonomous systems scale, companies often add manual reviews and approval steps to manage risk.

Over time, what was once simple slowly becomes more complicated. Decision-making slows, workarounds multiply, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trusted the autonomous systems.

This slowdown doesn’t have to happen. Our survey shows a clear difference: Many companies see early benefits from autonomous AI, but those with stronger governance are more likely to translate those gains into long-term outcomes, such as greater efficiency and revenue growth. The key difference lies not in ambition or technical skill, but in preparation.

Good governance doesn’t restrict autonomy. It makes autonomy practical by clarifying who makes decisions, how system behavior is monitored, and when people should intervene. International guidelines from the OECD – the Organisation for Economic Co-operation and Development – emphasize this point: Accountability and human oversight must be built into AI systems from the start, not added later.
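One operational reading of “built in from the start” is that decision rights can be written down as data before an agentic system ever ships. The sketch below is hypothetical – the action names, teams, and review cadences are invented for illustration and are not drawn from the OECD guidance:

```python
# Hypothetical decision-rights register, defined before an agentic system
# launches. Action names, teams, and review cadences are invented examples.
DECISION_RIGHTS = {
    "refund_under_100":  {"decider": "agent", "reviewer": "support-lead", "review": "weekly-sample"},
    "refund_over_100":   {"decider": "human", "reviewer": "finance",      "review": "per-case"},
    "block_transaction": {"decider": "agent", "reviewer": "risk-team",    "review": "daily-audit"},
    "close_account":     {"decider": "human", "reviewer": "compliance",   "review": "per-case"},
}

def who_decides(action: str) -> str:
    """Answer 'who is responsible?' before the system acts, not after."""
    policy = DECISION_RIGHTS.get(action)
    if policy is None:
        return "human"  # default: no registered right means no autonomy
    return policy["decider"]

assert who_decides("refund_under_100") == "agent"
assert who_decides("close_account") == "human"
assert who_decides("reprice_inventory") == "human"  # unregistered -> human
```

The default is the point: an action with no registered decision right falls back to a human, so autonomy is granted explicitly rather than assumed.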

Rather than slowing innovation, governance creates the trust companies need to expand autonomy rather than quietly withdraw it.

The next advantage is smarter governance

The next competitive advantage in AI will come not from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, organizations that clearly define ownership, oversight, and intervention from the start will be more successful.

In the age of agentic AI, the advantage will go to the organizations that govern best, not just those that adopt first.
