Paranoia Is the Right Instinct. But It's Not a Strategy.

AI-powered vulnerability discovery has outpaced every enterprise patching cadence, and the companies disclosing these capabilities publicly represent the responsible fraction of a much larger threat surface. It is the non-transparent actors, operating without disclosure obligations, who should reshape how every security leader allocates budget and attention.

• 20x: projected vulnerability spike
• 16 min: time to compromise an enterprise AI system
• 11,000: high-impact bugs found in one month
• $577M: DPRK crypto theft, early 2026
• 52: organizations with Mythos access

Jay Chaudhry has been in cybersecurity for over 30 years. In a recent CRN interview, Zscaler's founder, chairman, and CEO said he's never seen anxiety in the field like this. I agree. And the anxiety is earned.

AI-accelerated vulnerability discovery, paired with the long-running failure to patch at speed, has created a gap that's widening faster than any budget cycle can close. Chaudhry's estimate of a 20-fold spike in software vulnerabilities isn't hyperbole. It's math.

What's Visible Is Only the Beginning

Anthropic's Claude Mythos reportedly uncovered thousands of zero-days across every major operating system and browser. Flaws dating back decades. The company took the high road: limited access to 52 organizations, engaged the U.S. government, disclosed responsibly. Dario Amodei himself warned there's a 6-to-12-month window to patch before adversarial AI catches up.

OpenAI isn't far behind. GPT-5.5-Cyber is now rolling out to vetted defenders through its Trusted Access for Cyber program. Codex Security claims 11,000 high-impact bugs found in a single month.

This is what's happening in the transparent world. Two American companies, operating in daylight, working with (or against, depending on the news cycle) their own government.

The Non-Transparent World Isn't Waiting

Here's what keeps me up at night. Everything discussed so far comes from companies that publish research papers, issue press releases, and testify before Congress. They operate under public scrutiny, legal liability, and reputational pressure.

Now consider what's happening without any of that.

China is already in the race. Qihoo 360's Digital Security Group publicly claimed AI-driven discovery of 1,000 vulnerabilities, including in Microsoft Office and AI frameworks. At the Tianfu Cup hacking competition, the winning team declared that AI has evolved "from an auxiliary tool to the core engine of vulnerability discovery." Researchers at ETH Zurich have analyzed these claims and concluded they are credible at a scale comparable to Claude Mythos. This is the part China chose to make visible. Ask yourself what isn't being shown.

State-sponsored actors are already using AI operationally. Google's Threat Intelligence Group reports that hacking groups from China, Iran, North Korea, and Russia are using AI at every stage of cyber operations: reconnaissance, exploitation, lateral movement, and data exfiltration. This isn't theoretical. It's happening now, across the defense industrial base and critical infrastructure.

North Korea alone stole $577 million in crypto from just two attacks in the first four months of 2026. They're deploying AI-enhanced social engineering, deepfakes, and fake video calls targeting executives with access to crypto wallets and exchange systems. The regime doesn't publish responsible disclosure frameworks. It publishes nothing at all.

Open-source offensive tools are proliferating. CyberStrikeAI, an open-source AI hacking tool, was observed being deployed from servers in China, Singapore, Hong Kong, the U.S., Japan, and Switzerland in attacks across 55 countries targeting FortiGate systems. Anyone can fork it. Anyone can improve it. No one is gatekeeping access.

The uncomfortable truth: Anthropic and OpenAI represent the responsible edge of a capability that is spreading without guardrails. The Mythos conversation is visible because Anthropic chose transparency. Other actors with equivalent or near-equivalent capabilities have no such incentive. They aren't negotiating access protocols with the White House. They aren't limiting deployment to 52 organizations. They are operationalizing these capabilities at scale, in silence.

What we can see is only a fraction of what exists.

Zero Trust Is Necessary. It's Not Sufficient.

Jay Chaudhry says the best defense is eliminating your attack surface with a Zero Trust architecture: "AI can't breach what it can't find." He's right, as far as it goes.

Double down on Zero Trust. Absolutely. But think beyond it.

Zero Trust assumes the perimeter is gone and every connection must be verified. Good. But when AI agents are discovering vulnerabilities at machine speed (Zscaler's own research shows enterprise AI systems can be compromised in 16 minutes), the question isn't just who gets access. It's whether your organization can detect, decide, and respond before the window closes.

When adversaries are state-funded, AI-equipped, and operating without disclosure obligations, architectural defense is table stakes. You need:

Listen to your security teams. They've been raising alarms that got deprioritized by growth roadmaps. The threat environment just validated every concern they've had for years.

Find good partners. This isn't a build-alone moment. The complexity of AI-accelerated threats demands ecosystem-level defense.

Budget isn't optional anymore. Companies that treat cybersecurity allocation as discretionary are making a bet they can't afford to lose.

Think beyond architecture. Zero Trust is a foundation. But AI-speed attacks require AI-speed defense: autonomous detection, automated response, and security postures that adapt in real time.

Assume adversary parity. The planning assumption can no longer be that your attacker is a lone actor with a laptop. Plan for AI-equipped, state-backed teams with vulnerability discovery capabilities equivalent to what Anthropic just demonstrated publicly.

Be Paranoid. Then Act.

I wrote recently that security is not a purchase, it's a position. The U.S. leads the world both in security spending and in exposure. That paradox hasn't resolved. AI just made it more dangerous.

Jay Chaudhry is right: we all need to be paranoid. But paranoia without structural change is just anxiety. And anxiety doesn't patch vulnerabilities.

The companies that survive this wave won't be the ones that bought the most tools. They'll be the ones that treated security as an architectural and organizational commitment, from the board to the SOC, before the 20x spike became their problem.

The window is open. It won't stay open long.

CIO / CTO Viability Question

If your security posture assumes attackers operate at human speed with publicly known tools, what is your actual exposure window now that state-backed adversaries have AI-driven vulnerability discovery running continuously, without disclosure, against the same software stack you're patching on a quarterly cycle?

SOURCES & FURTHER READING

• Alspach, Kyle. "Zscaler CEO On Vulnerability Surge From AI: 'We All Need to Be Paranoid.'" CRN, 23 Apr. 2026.
• Amodei, Dario. Interview on Mythos vulnerability window. CNBC, 5 May 2026.
• "OpenAI Rolls Out New GPT-5.5-Cyber to Vetted Cybersecurity Teams." CNBC, 7 May 2026.
• "OpenAI Says Codex Security Found 11,000 High-Impact Bugs in a Month." CSO Online, 29 Apr. 2026.
• "Chinese Cybersecurity Firm's AI Hacking Claims Draw Comparisons to Claude Mythos." SecurityWeek, 23 Apr. 2026.
• Benincasa, Eugenio. Analysis of 360 Digital Security Group claims. Natto Thoughts / ETH Zurich, Apr. 2026.
• "Google Links China, Iran, Russia, North Korea to Coordinated Defense Sector Cyber Operations." The Hacker News, 14 Feb. 2026.
• "North Korea Steals IT, Defense Tech and $1.4 Billion in Crypto." Seoul Economic Daily, 10 May 2026.
• "Open-Source CyberStrikeAI Deployed in AI-Driven FortiGate Attacks Across 55 Countries." The Hacker News, 3 Mar. 2026.
• "Zscaler 2026 AI Security Report." Zscaler, 27 Jan. 2026.
• Bellamkonda, Shashi. "Security Is Not a Purchase. It's a Position." shashi.co, May 2026.

Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.