The Invisible Attack: How Legitimate Agent Traffic Could Become Denial of Service

Shashi Bellamkonda

Author's Note: This article is speculative. It explores a plausible scenario in which the convergence of three trends—widespread AI agent adoption, the shift of content discovery from search engines to agents, and the fundamental difficulty of distinguishing legitimate from malicious traffic at scale—creates a security and operational problem that existing defences are not equipped to handle. This scenario is not inevitable. But the underlying vulnerabilities are real, and the trajectory is moving in this direction.

The Core Problem: When Everyone Uses Agents, Detection Collapses

The traditional denial-of-service attack is crude. An attacker floods your servers with requests from thousands of zombie machines or spoofed addresses. The traffic is obviously malicious because it has no legitimate origin, no valid authentication, and no coherent business purpose. Your security team blocks it.

But imagine a different scenario. Your customer base now uses AI agents to interact with your systems. Your partner integrations use agents. Your own infrastructure uses agents. An agent stuck in a reasoning loop might call get_customer_list five thousand times in thirty seconds, convinced that repeated attempts will yield different results. A malicious actor, by contrast, might instruct an agent to do exactly the same thing—but with intent to degrade service availability.
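The stuck-loop pattern described above is mechanically easy to detect, but the detector says nothing about intent: a looping legitimate agent and a deliberate attacker trip it identically. A minimal sketch (the class, threshold, and window values below are hypothetical illustrations, not a reference implementation):

```python
from collections import deque
import time

class RepeatedCallDetector:
    """Flag an agent that repeats the identical call signature at high
    frequency within a sliding window. Note what this cannot do: it
    detects the pattern, not the purpose behind it."""

    def __init__(self, threshold=1000, window_seconds=30.0):
        self.threshold = threshold
        self.window = window_seconds
        # (agent_id, call_signature) -> deque of recent call timestamps
        self.calls = {}

    def record(self, agent_id, signature, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls.setdefault((agent_id, signature), deque())
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold  # True => flag for review
```

Whether the flagged caller is a confused customer agent or an attacker is a question the code cannot answer; that distinction has to come from identity and policy, not traffic shape.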

How do you distinguish between them?

This is the core detection paradox in 2026. And it is not theoretical.

A Plausible Future: Agent Traffic at Scale

Suppose AI agent adoption continues on its current trajectory. Suppose that within eighteen to thirty-six months, agents represent not 10% of bot traffic, but 40% or 50%. Suppose that the shift from search-based discovery to agent-based discovery accelerates, and that most content discovery for research, shopping, and decision-making flows through agent intermediaries rather than human users.

In this scenario, the characteristics of agent traffic would dominate your analytics. Your baseline would shift. Your understanding of "normal" would be built on agent behaviour rather than human behaviour. Your security systems would be tuned to agent patterns.

At that point, the distinction between legitimate and malicious agent traffic would become nearly impossible to maintain through technical means alone.

The Behaviour Collapse: Why Technical Defences Would Fail

In this scenario, traditional rate limiting would become useless. Consider how an agent might behave when attempting to find optimal pricing. It opens one hundred parallel connections and calls a price calculator function one hundred times per connection. That is ten thousand requests to accomplish a single task. It is not an attack—it is just how the agent reasons about the problem space.

Now imagine a malicious actor instructing an agent to execute an identical pattern—opening one hundred parallel connections, each calling the same price function one hundred times—but with explicit intent to exhaust server resources and degrade service for other users.

These two scenarios would be behaviourally indistinguishable.
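The point can be made concrete with a classic token-bucket limiter, the workhorse behind most rate limiting. It counts requests and nothing more, so the reasoning agent and the attacker drain the same budget in exactly the same way. A minimal sketch with illustrative parameters:

```python
import time

class TokenBucket:
    """Classic token bucket: admit a request if a token is available,
    refill at a fixed rate. It measures only volume, so it cannot tell
    a legitimate fan-out from a malicious replay of the same pattern."""

    def __init__(self, rate_per_second, capacity, now=None):
        self.rate = rate_per_second
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        elapsed = max(0.0, now - self.last)
        # Refill proportionally to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Fire the ten-thousand-request pricing burst through this bucket and the first hundred requests pass, the rest are refused, regardless of who sent them or why. The limiter protects capacity; it does not produce a verdict on intent.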

The Discovery Problem: Where Infrastructure Meets Business Strategy

In this speculative scenario, discovery would move from search engines and referral links to AI agents. A customer would no longer search Google for "best hotel in Barcelona." Instead, an agent deployed by your customer (or by a travel booking platform on your customer's behalf) would discover your property, evaluate it against competitors, read reviews, check pricing, and integrate that information into a travel itinerary.

This would be good for you—assuming it is a legitimate agent operated by your customer or a platform you trust.

But here would be the problem: your marketing, product, and business development teams would need to distinguish between an agent deployed by OpenAI, Google, or Perplexity for legitimate search and research; an agent deployed by a competitor to scrape your pricing for competitive intelligence; an agent deployed by a bad actor to harvest customer data or execute denial-of-service attacks; or a compromised agent that was once legitimate but is now operating under malicious control.

The discovery problem would become a business problem. You would want legitimate AI agents to access your content, because they would drive traffic and visibility. You would want to be present in ChatGPT, Perplexity, Claude, and emerging platforms. But you would also want to know who is accessing your data, how they are using it, and whether they are operating with your consent.

If This Scenario Emerges: What Organizations Could Consider

Map Your Potential Agent Footprint. Identify which agents are currently accessing your systems. Classify them by origin, authorization scope, and intended use case. This baseline would become increasingly valuable as agent populations grow.
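As a starting point, a footprint inventory can be as simple as bucketing request logs by User-Agent. The operator mapping below is illustrative and deliberately incomplete, and User-Agent strings are trivially spoofed, so in practice declared crawlers should be verified against published IP ranges or reverse DNS rather than trusted on the string alone:

```python
# Illustrative token -> operator mapping; verify, don't trust, in production.
KNOWN_AGENT_OPERATORS = {
    "GPTBot": "OpenAI",
    "PerplexityBot": "Perplexity",
    "ClaudeBot": "Anthropic",
    "Google-Extended": "Google",
}

def classify_agent(user_agent: str) -> str:
    """Map a raw User-Agent string to a declared operator, if any."""
    for token, operator in KNOWN_AGENT_OPERATORS.items():
        if token.lower() in user_agent.lower():
            return operator
    return "unknown"

def footprint(log_records):
    """Count requests per declared operator from (user_agent, path) records."""
    counts = {}
    for user_agent, _path in log_records:
        operator = classify_agent(user_agent)
        counts[operator] = counts.get(operator, 0) + 1
    return counts
```

Even this crude tally gives the baseline the paragraph above describes: which operators are present today, in what volume, against which paths, before agent traffic grows large enough to drown the picture.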

Establish Agent Discovery Policy. Work with your marketing and product teams to explicitly define which agents you would want to allow access to your content. Which AI platforms would you want to appear in? Which agent operators would you want agreements with?

Treat Agents as Identity. Do not treat agent integrations as simple API connections. Consider treating them as non-human identities with lifecycle management, ownership, and regular review.
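One way to make this concrete is to model each agent integration as a record with an accountable owner, an explicit authorization scope, and a review clock. The field names below are hypothetical, a sketch of the non-human-identity idea rather than any particular IAM product:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentIdentity:
    """A non-human identity record: owner, scope, and a review lifecycle,
    rather than a bare API key. All field names are illustrative."""
    agent_id: str
    operator: str             # who runs the agent (platform, partner, team)
    owner: str                # accountable human or team on our side
    scopes: tuple             # authorized operations, e.g. ("read:catalog",)
    issued: date
    review_interval_days: int = 90

    def authorized(self, scope: str) -> bool:
        return scope in self.scopes

    def review_due(self, today: date) -> bool:
        return today >= self.issued + timedelta(days=self.review_interval_days)
```

The design choice worth noting is the review clock: an API key lives until revoked, but an identity expires into review by default, which is exactly the lifecycle discipline the paragraph above argues for.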

Build Strategic Relationships with Agent Platforms. Work with major AI platforms—OpenAI, Google, Perplexity, Anthropic, and others—to understand their crawling practices and establish agreements about data use. This would be similar to how organisations manage relationships with major search engines, but the number of players would be larger and the requirements more diverse.

The Underlying Uncertainty

In this scenario, the deeper problem would be that security systems would be designed based on assumptions about how agents will behave—assumptions that could be violated as agent capabilities improve.

Current detection systems assume that agents will follow coherent, identifiable patterns. But as agents become more sophisticated, they could become better at mimicking legitimate behaviour. They could develop better understanding of the constraints they operate under. They could reason more effectively about how their actions would appear to security systems.

In such a scenario, the only sustainable response might be transparency and governance rather than purely technical detection. Organisations would need visibility into how agents operate on their infrastructure. They would need policies that define acceptable behaviour. They would need the ability to audit, monitor, and intervene. And they would need to be prepared for scenarios where the signal and the noise became indistinguishable.

In this speculative scenario, the agents would be coming. And with them, the denial-of-service attacks you would not see coming.

On Speculation and Sources

This article is speculative. It extrapolates from current trends and documented vulnerabilities to explore a plausible—but not inevitable—scenario.

The article grounds this speculation in research from DataDome, Equixly, Nudge Security, Stytch, Anthropic, and other organisations studying agentic AI, agent security, and bot traffic. These sources document current growth trajectories in agent adoption, documented vulnerabilities in MCP server implementations, known attack vectors, and emerging work on agent authentication.

The scenario described here is not fabricated. But neither is it inevitable. It is a plausible trajectory based on the convergence of documented trends and vulnerabilities, extrapolated to a point where agent adoption becomes dominant rather than marginal.

What is certain is that the underlying vulnerabilities are real, and that the trajectory is moving in this direction.

Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group.