Companies are already using agentic artificial intelligence to make decisions, but governance is lagging behind


Companies are rapidly adopting agentic artificial intelligence — AI systems that operate without human guidance — but have been much slower to implement governance to oversee them, a new survey shows. This gap is a major source of risk in AI adoption and, in my opinion, also a business opportunity.

I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI & Business Analytics. We found that 41% of organizations already use agentic AI in their daily operations. These are not pilot projects or isolated tests, but part of regular workflows.

At the same time, governance lags: only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively. In this context, governance does not mean regulation or unnecessary rules, but rather having policies and practices that allow people to clearly influence how autonomous systems work, including who is responsible for decisions, how their behavior is monitored, and when humans should intervene.

This lag can become a problem when autonomous systems act in real situations before someone can intervene. For example, during a recent blackout in San Francisco, autonomous robotaxis were stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems work “as designed,” unexpected conditions can lead to undesired results.

This raises a big question: When something goes wrong with AI, who is responsible and who can intervene?


Why governance matters

When AI systems act on their own, responsibility no longer falls where organizations expect. Decisions still happen, but ownership is harder to track. For example, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human reviews the case. Customers often only find out when their card is declined.

If your card is mistakenly rejected by an AI system, the problem is not the technology itself—which is working as designed—but accountability. Human-AI governance research shows that problems arise when organizations do not clearly define how humans and autonomous systems should interact. This lack of clarity makes it difficult to know who is responsible and when to intervene.

Without governance designed for autonomy, small problems can grow silently. Oversight becomes sporadic and trust weakens, not because systems fail, but because people struggle to explain or support what the systems do.

When humans intervene too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved when a problem becomes visible: a price seems wrong, a transaction is flagged, or a customer complains. By then, the decision has already been made and human review becomes corrective, not supervisory.

Late intervention can limit the impact of individual decisions, but it rarely clarifies who is responsible. The results can be corrected, but responsibility remains unclear. Recent guidance shows that when authority is ambiguous, human oversight becomes informal and inconsistent. The problem is not human participation, but timing. Without governance designed from the start, people act as safety valves, not decision makers.


How governance determines progress

Agentic AI often produces quick results, especially when tasks are first automated. Our survey found that many companies see these initial benefits. But as autonomous systems grow, organizations often add manual controls and approval steps to manage risk.

Over time, what was once simple becomes more complex. Decision making slows down, shortcuts increase, and the benefits of automation decrease. This happens not because the technology stops working, but because people never fully trust autonomous systems.

This slowdown does not have to happen. Our survey shows a clear difference: Many organizations see early benefits from autonomous AI, but those with stronger governance are more likely to translate those benefits into long-term results, such as greater efficiency and revenue growth. The key difference is not ambition or technical skill, but preparedness.

Good governance does not limit autonomy; it makes it viable by clarifying who makes decisions, how the functioning of systems is monitored, and when people should intervene. OECD international guidance emphasizes this point: human accountability and oversight should be designed into AI systems from the beginning, not added later.

Far from slowing down innovation, governance builds the trust needed to expand autonomy rather than quietly withdraw it.

The next competitive advantage is smart governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibilities, success will belong to organizations that clearly define ownership, oversight, and intervention from the beginning.

In the age of agentic AI, trust will fall to the organizations that govern best, not just those that adopt first.

*Murugan Anandarajan is a professor of Decision Sciences and Management Information Systems at Drexel University.

This article was originally published by The Conversation


