Who Guards the Agents? A New Market Is Forming Around AI Oversight
Enterprise AI · Governance

Something broke at Meta and Amazon recently that didn't make the product press releases. Agents deployed to automate tasks wandered off-script. The incidents never went public in detail, but the market response is already visible: a new product category has formed around the question of what happens when an agent does something nobody authorized.

The category is called guardian AI, or supervisor agents. The idea: deploy a second layer of AI to watch what the first layer is doing. ServiceNow has the most developed commercial product here, sold as part of its AI Control Tower. Palo Alto Networks and IBM sit in the monitoring tier, flagging anomalies without intervening. Startups including Wayfound, Holistic AI, CredoAI, and Israeli firm Avon AI are building specialist versions aimed at financial services and enterprise workflow.

The trigger for this market is the same thing that drove enterprises to deploy agents in the first place: scale. Agents complete tasks faster than any human team, which is the value proposition, but also the liability. An agent processing insurance claims at volume is equally capable of generating wrong decisions, data leaks, or compliance violations at the same pace. "You can't have humans actually supervising their work because human brains don't work fast enough," said Tatyana Mamut, who runs Wayfound and previously held executive roles at Amazon Web Services and Salesforce.

$750/mo: Wayfound base plan, covering 10,000 monitored agent tasks
4 FTEs: Wayfound's team size, on $3.2M raised
Multi-platform: ServiceNow's AI Control Tower monitors agents from rival vendors

The Problem Is Not Reliability. It Is Jurisdiction.

Agents built on platforms like Anthropic's Claude Code, Salesforce's Agentforce, or OpenAI's operator tools are generally reliable within their own guardrails. The gap is enforcement across platforms. Enterprise technology stacks are not single-vendor. An insurance company might run Agentforce agents for customer service, Claude Code agents for internal documentation, and a third-party agent for claims processing. Each vendor handles its own rules. Nobody handles the rules between them. Guardian AI is the attempt to fill that gap.
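The "rules between them" gap can be pictured as a thin, vendor-neutral rule layer that sees actions from every platform and checks them against shared policies. A minimal sketch follows; all names here (AgentAction, supervise, and both example rules) are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical cross-platform guardian layer. The platforms, actions,
# and rules below are illustrative, not a real vendor integration.

@dataclass
class AgentAction:
    platform: str            # e.g. "agentforce", "claude_code", "third_party_claims"
    agent_id: str
    action: str              # e.g. "send_email", "approve_claim"
    payload: dict = field(default_factory=dict)

# A cross-platform rule: returns a violation reason, or None if the action is fine.
Rule = Callable[[AgentAction], Optional[str]]

def no_pii_outside_claims(a: AgentAction) -> Optional[str]:
    # Example enterprise-wide rule no single vendor would enforce alone.
    if a.payload.get("contains_pii") and a.platform != "third_party_claims":
        return "PII may only flow through the claims platform"
    return None

def claims_need_policy_number(a: AgentAction) -> Optional[str]:
    if a.action == "approve_claim" and "policy_number" not in a.payload:
        return "claim approved without a policy number"
    return None

RULES: list[Rule] = [no_pii_outside_claims, claims_need_policy_number]

def supervise(action: AgentAction) -> list[str]:
    """Run every cross-platform rule against one action; collect violations."""
    return [reason for rule in RULES if (reason := rule(action))]
```

The point of the sketch is the shape, not the rules: the guardian sees a normalized action stream from every platform, so policies can span vendors that otherwise only enforce their own guardrails.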

ServiceNow's approach is to make that governance layer a revenue line. Its AI Control Tower charges a subscription fee plus usage-based pricing, and it monitors agents from Microsoft, Amazon, and other rivals. The bet is that ServiceNow's existing position as an enterprise workflow platform gives it the credibility and integration depth to sit above the agent ecosystem as a governance authority.

Salesforce is reportedly considering a similar move, monitoring agents across non-Salesforce platforms. Whether it would do that comprehensively or selectively, favoring its own agents over rivals, is the unresolved question. Sam Dover, former head of AI strategy at Unilever, put the conflict plainly: companies selling agents may not be incentivized to build honest oversight tools for those same agents. Unilever's answer was to seek an independent governance vendor rather than trust the agent platforms to audit themselves.

"One of the prerogatives at Unilever was wanting that independent vendor of AI governance." — Sam Dover, former head of AI strategy, Unilever

What the Market Structure Looks Like Right Now

Three tiers have taken shape. Palo Alto Networks and IBM sit in the monitoring layer, detecting when agents share proprietary data or behave anomalously without intervening directly. ServiceNow's Control Tower and several startups occupy the active supervision layer, sending alerts and adjusting agent behavior when a rule is broken. A third layer, still early, is pre-deployment governance: tools like CredoAI and Holistic AI evaluate model performance and risk before an agent goes into production rather than watching it after the fact.
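The practical difference between the tiers is what happens when a rule trips. A rough sketch of that dispatch, with illustrative names and responses that are mine rather than any vendor's documented behavior:

```python
from enum import Enum

# Hypothetical model of the three tiers described above; the mapping of
# vendors to tiers follows the text, the responses are illustrative.
class Tier(Enum):
    MONITOR = "monitor"        # detect and flag, never intervene (Palo Alto / IBM tier)
    SUPERVISE = "supervise"    # alert and adjust live behavior (Control Tower tier)
    PRE_DEPLOY = "pre_deploy"  # evaluate before production (CredoAI / Holistic AI tier)

def handle_violation(tier: Tier, agent_id: str, reason: str) -> str:
    """Dispatch one rule violation according to the tier's mandate."""
    if tier is Tier.MONITOR:
        return f"logged: {agent_id}: {reason}"            # observe only
    if tier is Tier.SUPERVISE:
        return f"paused {agent_id}: {reason}"             # intervene in-flight
    return f"blocked deployment of {agent_id}: {reason}"  # gate before production
```

The buying decision largely reduces to which of these responses an enterprise wants: a log entry, an intervention, or a gate in front of production.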

The startup economics are early-stage. Wayfound has around a dozen paying customers and a team of four full-time employees. Holistic AI is six years old with guardian agents in preview. CredoAI is running a private preview with undisclosed pricing. Avon AI, founded in 2025 in Israel, charges a licensing fee plus a rate per 100,000 agent conversations monitored. None of these are at scale. But they are at proof-of-concept with enterprise buyers, which is the signal that validates the category rather than just the vendors.

There is also an architectural question nobody has answered cleanly. Guardian AI agents are themselves built on the same foundation models as the agents they monitor. A Wayfound instance powered by Anthropic's models watching a Claude Code agent raises an obvious concern: does a guardian built on Anthropic's models have a native blind spot when reviewing Anthropic-powered agents? Enterprise buyers should put that question directly to any guardian vendor they evaluate.

Platform Consolidation or Specialist Market: One of These Will Happen First

More agents means more governance requirements. That much is settled. The open question is whether governance gets absorbed into the major platforms (ServiceNow, Palo Alto Networks, Microsoft Purview) or whether it breaks out into its own category the way endpoint security did before the consolidation wave.

Endpoint security became a standalone market because breaches were visible, attributable, and costly enough to force dedicated budget. Agent incidents so far have stayed internal and diffuse. If that changes, governance spending will follow quickly. If it doesn't, procurement will lag and bundling with existing platforms will win by default.

Right now enterprise buyers can still choose. Once a major incident forces the category into emergency procurement mode, that choice collapses into whatever the largest platform offers fastest.

The CIO Question

If your organization is running agents from more than one vendor platform, and those platforms each provide their own guardrails, who is accountable for enforcing behavioral standards across the full stack? If the honest answer is "no one yet," that is the risk profile guardian AI is addressing. The more pointed question: does your primary agent platform have a financial incentive to give you complete visibility into its own agents' failures?

Sources
Bratton, Laura. "Applied AI: 'Guardian' Apps Aim to Stop AI Agents From Going Rogue." The Information, 31 Mar. 2026.
ServiceNow AI Control Tower product documentation. ServiceNow, 2026, servicenow.com.
Wayfound product and company information. Wayfound, 2026, wayfound.ai.
Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.