Humans Shift Above the Loop: Supervising Agents in the Agentic Organization
Employees transition from task-doers to outcome orchestrators in the next AI paradigm.

The shift McKinsey describes in its September 2025 agentic organisation research is accurate, and most organisations are completely unprepared for what it actually requires. The premise is not complicated: as AI agents take on execution, employees move "above the loop", setting goals, supervising outputs, handling exceptions, and making the calls AI cannot. The human role shifts from doing to directing. The problem is what happens in the gap between the old role and the new one.
"Employees shift from performing tasks to orchestrating outcomes, supervising AI agents, setting goals, and managing trade-offs. Humans move 'above the loop,' overseeing workflows instead of completing every step."
— McKinsey, The Agentic Organization, September 2025
The governance problem nobody is solving
Traditional management systems were built for deterministic processes. A human does a task; you can observe it; you can evaluate the person on it. That system has operated for a century, and agentic AI breaks it. An AI agent is goal-oriented, has memory, reasons across context, and makes decisions that are often opaque by the time a human sees the output. MIT Sloan's October 2025 research is direct: "traditional management systems are designed for deterministic systems, whereas agentic AI systems operate independently, are goal-oriented, and have memory and reasoning capabilities, which make their decisions complex, autonomous, and opaque." Most companies are deploying these systems without redesigning the management layer. The agent is new; the governance is 2019.
The result is a workforce technically "above the loop" but practically neither doing the old job nor effectively directing the new one. Adoption stalls. ROI disappears in the pilot phase. The failure is not the technology. It is the absence of a management layer designed for the system that is actually running.
What the transition actually requires
Three things that most transformation programmes skip:
Explicit governance. Rules for what AI agents can decide autonomously, where they must escalate, and what data they can act on. With human workers, these boundaries are often implicit: cultural, experiential. With agents, they must be written down. Every permissible action, every threshold, every escalation path. Without this, the agent runs on its own interpretation of the goal. A sketch of what "written down" could look like follows this list.
Redesigned performance frameworks. You cannot evaluate an employee on tasks that are now automated. The review moves to: How well did this person set the objectives? How quickly did they identify when the agent drifted? How effectively did they handle the exceptions AI surfaced? These require different measurement and a different conversation.
A workforce that understands the specific system it is directing. "AI literacy" is too broad. A team member supervising an agent running customer resolution workflows needs to understand how that specific agent fails: what edge cases it surfaces, where its confidence is miscalibrated, what signals mean a case needs human judgment. That is domain-specific, not generic. Training programmes that skip this produce supervisors who cannot actually supervise.
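To make the governance point concrete, here is a minimal sketch of an explicit agent policy for a hypothetical customer-refund agent. Every action name, threshold, and escalation path in it is illustrative, not drawn from McKinsey's research or any vendor's product; the point is that each autonomy boundary exists as an inspectable rule rather than as the agent's own reading of a goal.

```python
# A minimal sketch of an explicit agent governance policy, assuming a
# hypothetical refund-handling agent. All names, thresholds, and escalation
# paths below are illustrative.
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"        # agent may act autonomously
    ESCALATE = "escalate"  # route to a named human reviewer
    DENY = "deny"          # outside the agent's mandate entirely


@dataclass
class GovernancePolicy:
    # Actions the agent may take without a human in the loop.
    autonomous_actions: set[str] = field(
        default_factory=lambda: {"issue_refund", "send_status_update"}
    )
    # Hard limit above which the agent must escalate, even for
    # otherwise-permitted actions.
    refund_autonomy_limit: float = 250.0
    # Data categories the agent is allowed to act on.
    permitted_data: set[str] = field(
        default_factory=lambda: {"order_history", "ticket_text"}
    )
    # Where escalations go; written down, not left to interpretation.
    escalation_path: str = "tier2_support_queue"

    def evaluate(self, action: str, amount: float, data_used: set[str]) -> Decision:
        """Apply the written rules to a proposed agent action."""
        if action not in self.autonomous_actions:
            return Decision.DENY
        if not data_used.issubset(self.permitted_data):
            return Decision.ESCALATE  # touched data outside the mandate
        if action == "issue_refund" and amount > self.refund_autonomy_limit:
            return Decision.ESCALATE  # over the autonomy threshold
        return Decision.ALLOW


if __name__ == "__main__":
    policy = GovernancePolicy()
    print(policy.evaluate("issue_refund", 180.0, {"order_history"}))  # ALLOW
    print(policy.evaluate("issue_refund", 900.0, {"order_history"}))  # ESCALATE
    print(policy.evaluate("close_account", 0.0, {"order_history"}))   # DENY
```

The specifics will differ by workflow; what matters is that the boundaries live in an artefact a supervisor can read, audit, and change, rather than in the agent's implicit behaviour.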
The BCG finding that changes the ROI calculation
BCG's November 2025 research adds a dimension worth naming: "lower-performing employees, when assisted by AI, outperformed their unassisted human peers on creative tasks, showing that the right tools can quickly shift what top talent looks like." This is not the narrative most organisations are running. The internal conversation is usually framed as AI making the best people more productive. The data says something different: AI resets the performance distribution. The team member who struggled with execution can now operate at a level previously available only to high performers.
For PE operating partners with portfolio companies, this changes the talent calculus. The question is not only whether to hire the best people. It is whether to redesign the system so that good people perform like great ones. That requires the governance work first, and it does not happen by deploying copilots.