
By Harsha Kumar, CEO of NewRocket
Enterprises have not underinvested in AI. They have overconstrained it.
By late 2025, nearly every large organization is using artificial intelligence in some form. According to McKinsey’s 2025 State of AI survey, 88 percent of companies now report regular AI use in at least one business function, and 62 percent are already experimenting with AI agents. Yet only one-third have managed to scale AI beyond pilots, and just 39 percent report any measurable EBIT impact at the enterprise level.
This gap is not a failure of models, compute, or ambition. It is a failure of execution authority.
Most enterprises still treat AI as a recommendation engine rather than an operational actor. Models analyze, suggest, summarize, and predict, but they stop short of acting. Humans remain responsible for stitching insights into workflows, approving routine decisions, and pushing work forward manually. As a result, AI accelerates fragments of work while leaving the system itself unchanged. Productivity improves at the task level but stalls at the organizational level.
The uncomfortable truth is this: AI cannot transform an enterprise if it is not allowed to participate in decisions end to end.
The Pilot Trap Is an Authority Problem

The dominant AI pattern inside enterprises today is cautious experimentation. Models are deployed in isolated functions. Copilots assist individuals. Dashboards surface insights. But the workflow surrounding those insights remains human-driven, sequential, and approval-heavy.
McKinsey’s research shows that nearly two-thirds of organizations remain stuck in experimentation or pilot phases, even as AI usage expands across departments. What distinguishes the small group of high performers is not access to better models, but a willingness to redesign workflows. High performers are nearly three times more likely to fundamentally rewire how work gets done, and they are far more likely to scale agentic systems across multiple functions.
AI creates value when it is embedded into the operating model, not layered on top of it.
This requires a shift in how leaders think about control. Enterprises are comfortable letting machines optimize routes, balance loads, or manage infrastructure autonomously. They are far less comfortable letting AI resolve customer issues, adjust supply decisions, or execute financial actions without human sign-off. That hesitation is understandable, but it is also the primary reason AI impact remains incremental.
Autonomy Is the Next Enterprise Capability
Gartner describes the next phase of enterprise transformation as autonomous business. In this model, systems do not merely inform decisions. They sense, decide, and act independently within defined boundaries.
According to Gartner’s analysis of autonomous business, by 2028, 40 percent of services will be AI-augmented, shifting employees from execution to oversight. By 2030, machine customers could influence up to $18 trillion in purchases. These shifts are not theoretical. They are already reshaping how enterprises compete.
Autonomous operations reroute supply chains during disruptions. AI-driven service platforms resolve issues before a human agent engages. Systems correct performance deviations in real time without escalation. When autonomy works, humans spend less time fixing yesterday’s problems and more time shaping tomorrow’s strategy.
But autonomy does not mean abdication. It requires governance, guardrails, and clarity around when AI acts independently and when it escalates. The most successful organizations define decision classes explicitly. Low-risk, repeatable decisions are fully automated. High-impact or ambiguous decisions are flagged for human review. Over time, as confidence grows, the boundary shifts.
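The decision-class pattern described above can be sketched in a few lines of Python. The risk tiers, confidence threshold, and routing rules here are illustrative assumptions, not a prescription for any particular platform:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_EXECUTE = "auto_execute"   # AI acts independently within guardrails
    HUMAN_REVIEW = "human_review"   # flagged for human sign-off

@dataclass
class Decision:
    risk: str          # "low" or "high" -- illustrative tiers
    repeatable: bool   # has this decision shape been handled before?
    confidence: float  # the system's own confidence, 0.0 to 1.0

def route(decision: Decision, threshold: float = 0.9) -> Route:
    """Route a decision by class: low-risk, repeatable, high-confidence
    decisions execute automatically; everything else escalates."""
    if decision.risk == "low" and decision.repeatable and decision.confidence >= threshold:
        return Route.AUTO_EXECUTE
    return Route.HUMAN_REVIEW

# Lowering the threshold over time shifts the autonomy boundary outward,
# as the paragraph above describes.
print(route(Decision(risk="low", repeatable=True, confidence=0.95)))   # Route.AUTO_EXECUTE
print(route(Decision(risk="high", repeatable=True, confidence=0.99)))  # Route.HUMAN_REVIEW
```

The point of the sketch is that the autonomy boundary is a single, auditable parameter rather than an all-or-nothing bet.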
What matters is not perfection. It is momentum.
Why Trust Alone Is Not Enough
Much of the AI debate centers on trust. Can we trust models to make decisions? Should humans always remain in the loop? These questions matter, but they miss a deeper issue. Trust without redesign creates friction. Authority without context creates risk.
Research from Stanford’s Institute for Human-Centered AI reinforces this distinction. Their work does not argue against autonomy. It shows that autonomy must be applied intentionally, based on the nature of the decision being made.
In controlled experiments, decision quality improved when AI systems were designed for complementarity rather than blanket replacement, particularly in high-uncertainty or high-judgment scenarios. In these cases, selective AI intervention helped humans avoid errors without removing human accountability.
But this does not imply that AI should remain advisory across the enterprise. It implies that different classes of decisions demand different execution models. Some workflows benefit from augmentation, where AI guides, flags, or challenges human judgment. Others benefit from full autonomy, where speed, scale, and consistency matter more than discretion.
The real failure mode is not autonomy itself. It is forcing all decisions into the same human-in-the-loop pattern regardless of risk, frequency, or impact. When AI is confined to advisory roles even in low-risk, repeatable workflows, humans either over-rely on recommendations or ignore them entirely. Both outcomes limit value.
Complementary systems succeed because they are designed around how work actually happens. They define when AI acts independently, when it escalates, and when humans intervene. Execution authority is not removed. It is calibrated.
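Calibrated execution authority can be made concrete as a per-workflow policy table. The workflow names and modes below are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical policy table mapping workflows to execution modes.
# "autonomous": AI acts and reports; "augmented": AI proposes, a human
# approves; "advisory": AI informs only. Names are illustrative.
EXECUTION_POLICY = {
    "password_reset":        "autonomous",  # low-risk, repeatable
    "inventory_rebalancing": "augmented",   # high-judgment, recurring
    "credit_limit_change":   "advisory",    # high-impact, regulated
}

def execution_mode(workflow: str) -> str:
    """Look up a workflow's execution mode, defaulting to the most
    conservative mode for anything not explicitly classified."""
    return EXECUTION_POLICY.get(workflow, "advisory")
```

Defaulting unknown workflows to the advisory mode is the calibration the paragraph describes: authority is granted explicitly, never assumed.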
The lesson here is a practical one for enterprises. AI should not be evaluated solely on accuracy. It should be evaluated on how well it integrates into real workflows, decision rights, and accountability structures.
What Changes in 2026
As organizations move into 2026, the question will no longer be whether AI works. That debate is over. The question will be whether enterprises are willing to let AI operate as part of the business rather than as a support function.
McKinsey’s data shows that organizations seeing meaningful AI impact are more likely to pursue growth and innovation objectives alongside efficiency. They invest more heavily. More than one-third of AI high performers allocate over 20 percent of their digital budgets to AI. They scale faster. They redesign workflows intentionally. And they require leaders to take ownership of AI outcomes, not delegate them to experimentation teams.
This is not a technology challenge. It is a leadership one.
Enterprises that succeed will not be those with the most sophisticated models. They will be the ones that redesign work so humans and machines operate as a coordinated system. AI will handle execution at machine speed. Humans will define intent, values, and direction. Together, they will move faster than either could alone.

Until enterprises make that shift, AI will remain impressive, expensive, and underutilized.
About the author:
Harsha Kumar is the CEO of NewRocket, helping elevate enterprises with AI they can trust, leveraging NewRocket’s Agentic AI IP and the ServiceNow AI platform.
