From the Pentagon's Anthropic standoff to Zoho's quiet insurgency to ServiceNow's autonomous workforce announcement, the real battle for enterprise AI is not being fought where most people are watching. I read this week's Wall Street Journal AI coverage and combined it with firsthand briefings to give you a picture the headlines alone cannot provide.
This week's Wall Street Journal landed with a stack of AI headlines that, read together, reveal something more interesting than any single story: the shape of a race that is far more complicated than the OpenAI-versus-everyone narrative the technology press loves to sell. OpenAI raises $110 billion. The Pentagon freezes Anthropic. Elon Musk's Grok faces a government safety reckoning. Amazon hires a new Artificial Intelligence czar and bets on low cost. In the background, Zoho quietly launches its own large language model. And ServiceNow, four days before this coverage ran, briefed analysts on what I believe is the most consequential enterprise AI architecture announcement of early 2026.
I want to be transparent about my vantage point. I receive briefings from Amazon and Zoho, recently attended Zoho Day, and am on the analyst lists for Google, ServiceNow, IBM Red Hat, and others. I attended the ServiceNow Autonomous Workforce executive briefing on February 24, 2026. What follows is my personal analysis combining that firsthand access with the broader market picture. It does not represent the views of my employer, Info-Tech Research Group.
The real question in all of this coverage is not who has the best model. It is who is building AI that enterprises actually need — reliable, private, auditable, integrated, and able to execute rather than merely advise. I want to go through each player honestly, separating verified facts from my own interpretation as I go.
1. Anthropic and Claude: Sticky Is the New Strong
The Pentagon dispute is actually a competitive moat story
The Trump administration froze Anthropic's use across government agencies, citing the company's AI guardrails and concerns about its ties to Democratic donors. On the surface this looks like a loss. Look closer and it is actually the clearest evidence of how deeply embedded Claude has become in sensitive government workflows.
The administration is not cutting Anthropic off immediately. It granted a six-month phaseout period. That is a bureaucratic admission that unravelling this dependency would be genuinely disruptive. The Defense Department said Anthropic's Claude models will stop being used for all lawful military use cases during that transition. Anthropic said it was "ready to continue our work to support the national security of the United States." The company has said it is not political and has previously called such attacks baseless.
The dispute itself — centred on Anthropic's safety guardrails conflicting with the Pentagon's desire for full operational control — actually reinforces the company's positioning in regulated industries like healthcare, legal, and financial services, where those same guardrails are features, not bugs. Employees at both Google and OpenAI signed an online petition urging their companies to maintain the same safety standards Anthropic has. When your competitors' employees publicly defend your differentiator, that is the best market signal you can receive.
Being at the centre of a federal controversy is, perversely, the best possible advertising for enterprise trust. Every regulated industry buyer watching this story is learning that Claude is the model governments find hard to remove. That is not a bad reputation to have going into procurement conversations.
2. ServiceNow: The Execution Layer That Changes the Question
Most enterprise AI stops at the answer. ServiceNow is building what comes next.
I attended the ServiceNow Autonomous Workforce executive briefing on February 24, 2026, four days before this Wall Street Journal coverage ran. I want to be direct: what ServiceNow announced is not incremental. It marks a genuine inflection point in how enterprises will deploy AI, and it belongs in this analysis because it reframes what the race is even about.
The central thesis Amit Zavery — President, Chief Product Officer, and Chief Operating Officer at ServiceNow — articulated is one I found compelling and have not heard stated this clearly elsewhere: most enterprise AI stops at the answer. It summarises, recommends, and suggests. Then it hands the work back to a human. ServiceNow's argument is that answers are not business outcomes. The differentiator is not the model. It is the execution layer.
ServiceNow AI — Internal Performance Data (Briefing, Feb 24, 2026)
The headline announcement was AI Specialists — and I want to emphasise this distinction because it matters for how enterprises should evaluate the announcement. These are emphatically not bots. Bots follow scripts. AI Specialists are designed to do a job dynamically. The L1 Service Desk AI Specialist, available out of the box next quarter, operates autonomously to resolve workplace technology incidents end-to-end without human intervention: it detects or receives an incident report, diagnoses root cause using live enterprise data, executes the appropriate fix, documents the steps, notifies the employee, and updates the knowledge base. That is a complete workflow, not task assistance.
A recurring question at the briefing was whether these agents require a formal onboarding process. The answer is yes, and it follows a shared responsibility model across four areas. Configuration assigns specialists to existing work groups and authorisation scopes — no new infrastructure is required. Training happens continuously from live workflows; there is no separate pre-deployment training burden. Integration leverages 500-plus existing connectors, meaning customers do not rebuild what they already have. Governance runs through the AI Control Tower, which provides permission policies and autonomy thresholds that are customer-configured but ServiceNow-governed by default. That governance architecture is significant. It is how an enterprise maintains audit trails, prevents black-box scenarios, and satisfies its own compliance requirements.
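To make the autonomy-threshold idea concrete, here is a minimal conceptual sketch of that kind of governance gate. This is my own illustration, not ServiceNow's AI Control Tower API — the action names, threshold value, and function shape are all invented for the example.

```python
# Conceptual sketch of a governance gate with an authorisation scope and an
# autonomy threshold. Hypothetical names throughout; not a vendor API.
PERMITTED_ACTIONS = {"restart_service", "clear_cache", "reset_password"}
AUTONOMY_THRESHOLD = 0.90  # customer-configured confidence floor

def gate(action: str, confidence: float) -> str:
    """Return 'execute' only when the action is in scope and the agent's
    confidence clears the threshold; otherwise escalate to a human."""
    if action not in PERMITTED_ACTIONS:
        return "escalate: out of authorisation scope"
    if confidence < AUTONOMY_THRESHOLD:
        return "escalate: below autonomy threshold"
    return "execute"

print(gate("restart_service", 0.97))  # execute
print(gate("wipe_disk", 0.99))        # escalate: out of authorisation scope
```

The point of the sketch is the shape of the control, not the specifics: every autonomous action passes a customer-configured policy check first, which is what makes audit trails and compliance sign-off possible.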
ServiceNow's large language model strategy is explicitly hybrid and multi-model. While supporting frontier models from Anthropic (Claude), OpenAI, and Nvidia, the core investment remains in NowLLM — domain-specific models optimised for ServiceNow's specific data structures and approval chains. In deterministic enterprise workflows, purpose-built models often outperform general large language models on latency, cost, and precision. This is the same philosophy Zoho has taken with Zia LLM, and I expect we will see more enterprise platform vendors follow this path rather than becoming permanent consumers of frontier model APIs.
The second major announcement was ServiceNow Employee Works, born from the recent Moveworks acquisition. This serves as a single AI front door executing across Teams, Slack, mobile, and the open web. No system switching. No phone calls. Natural language converted directly into action. The strategic distinction ServiceNow drew between weak and strong return on investment resonated with me. Weak return on investment is isolated task improvement — saving an individual employee fifteen minutes — which leaves when that employee resigns. Strong return on investment transforms the mission-critical process end-to-end and delivers value to the bottom line regardless of workforce turnover. That framing should become standard in how Chief Information Officers evaluate AI investments.
From the Briefing — Allan Rosa, Chief Information Officer, CVS Health
"Boring is beautiful. Predictable. Stable. Trust is not a nice-to-have in healthcare. It is the basic currency."
CVS Health's approach: security embedded directly into the architecture from the whiteboard phase, not added as a gatekeeper later. Automated red teaming because static reviews are insufficient for dynamic AI models. This is the operational discipline that separates enterprises that successfully deploy AI from those that produce pilot theatre.
ServiceNow also articulated what I am calling Zero-Loss Optimisation — a framework that moves past headcount reduction to focus on reinvesting productivity gains back into high-impact work. By automating L1 and L2 support, organisations redirect human talent toward architectural innovation and closing the enterprise delivery gap. This is the right conversation to be having. The enterprises that win with AI will not be the ones that cut costs most aggressively. They will be the ones that reallocate human intelligence most effectively.
I wrote earlier this month that ServiceNow has been expanding its brand footprint with brilliant, clear campaigns — the "Dear IT" print campaign, which explicitly validates IT professionals as the architects who take ideas from pilot to production, is a perfect example. While Anthropic struggles with brand recognition outside technical circles, ServiceNow is making itself legible to the business decision-maker and the IT professional simultaneously. That dual messaging strategy is underappreciated by the market.
My prediction for the next 60 months: enterprise IT will transition from tracking time to resolution to optimising for autonomous resolution percentage. ServiceNow is building the governance infrastructure — the AI Control Tower, the shared responsibility onboarding model, the audit trail — that makes that transition safe enough for a CVS Health or a regulated bank to actually execute. That is the enterprise AI category that is genuinely hard to replicate and genuinely hard to remove once embedded.
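The two metrics in that prediction are easy to state precisely. This is a toy sketch of my own framing — the metric names and incident data are illustrative, not drawn from any vendor's reporting:

```python
# Illustrative contrast between the classic service-desk KPI (mean time to
# resolution) and the KPI I expect to displace it (autonomous resolution
# percentage). Data and names are invented for the example.
from dataclasses import dataclass

@dataclass
class Incident:
    minutes_to_resolve: float
    resolved_autonomously: bool  # closed end-to-end with no human touch

def mean_time_to_resolution(incidents):
    # Traditional KPI: average minutes per incident.
    return sum(i.minutes_to_resolve for i in incidents) / len(incidents)

def autonomous_resolution_pct(incidents):
    # Proposed KPI: share of incidents resolved without human intervention.
    return 100.0 * sum(i.resolved_autonomously for i in incidents) / len(incidents)

incidents = [
    Incident(4, True), Incident(6, True),
    Incident(45, False), Incident(90, False),
]
print(f"MTTR: {mean_time_to_resolution(incidents):.1f} min")
print(f"Autonomous resolution: {autonomous_resolution_pct(incidents):.0f}%")
```

Note what the contrast captures: an AI specialist that autonomously closes the easy half of the queue barely moves MTTR for the remaining human-handled incidents, but it transforms the autonomous resolution percentage — which is exactly why I expect the second metric to become the headline number.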
3. Amazon and AWS Bedrock: Winning by Owning the Racetrack
The Nova story is a distraction. Bedrock is the business.
Amazon's new Artificial Intelligence czar, Peter DeSantis, has been with the company since its early days. His stated thesis is blunt: "AI has a cost problem. If we ultimately want AI to transform everything, the costs have to be different." That is the Amazon low-cost playbook applied to enterprise AI — sell the thing everyone wants and charge less for it.
I receive briefings from Amazon, and the more important story from those conversations is not Nova, their flagship model that has lagged others in independent capability benchmarks. It is Bedrock. Amazon's cloud platform lets enterprises access Anthropic, Mistral, Meta's Llama, Cohere, and others through a single cloud relationship they already have. The Chief Information Officer does not have to argue for a new vendor — they expand their Amazon Web Services bill. Amazon is not betting on one horse. They are owning the racetrack.
Amazon has invested a total of $8 billion in Anthropic, making AWS Anthropic's primary cloud and training partner. Separately — announced the day before this Wall Street Journal edition ran — Amazon committed a total of $50 billion to OpenAI: an initial $15 billion now, followed by a further $35 billion in the coming months when certain conditions are met. OpenAI in turn will spend $100 billion on AWS over the next eight years, expanding an existing $38 billion agreement, and AWS becomes the exclusive third-party cloud provider for OpenAI Frontier, their new enterprise platform for building and running AI agents. The round also includes $30 billion each from Nvidia and SoftBank — and the Nvidia investment is worth noting separately: a chip company taking a financial stake in one of its largest customers tightens a relationship that was already structurally intertwined.

Andy Jassy was clear that the OpenAI deal does not change the Anthropic relationship: "They've always had multiple partners, and we do too." That is not just diplomacy. It is strategy. Amazon is not picking a winner in the model race. They are making certain that whichever model wins, the workload runs on AWS.
A National Bureau of Economic Research survey of 6,000 executives found that 69% said their companies use AI in some form. Cost is the primary barrier to going deeper. DeSantis's low-cost strategy is the right answer to the right question at the right moment in the enterprise adoption curve.
My view from briefings: Nova may not win on benchmark performance today, but if Amazon can deliver comparable results at 30 to 50 percent lower cost through their own chips and infrastructure, that closes the enterprise deal faster than any capability gap. Amazon aims to use its in-house chips to develop AI models more cheaply than competitors and is betting on strong demand for enterprise AI products that make up for their lack of all-purpose power with task-specific customisation. That is a coherent enterprise strategy.
4. Microsoft: The Invisible Leader Everyone Overlooks
Absence from a government sandbox does not mean absence from enterprise
Microsoft barely appeared in this week's government AI coverage. That might suggest they are falling behind. The opposite is true. GitHub Copilot alone has tens of millions of developer seats. Microsoft 365 Copilot is embedded in the productivity tools that most knowledge workers use every day. Their strategy of building proprietary models — the MAI and Phi families — while distributing OpenAI through Azure is the most mature enterprise play in the market right now.
They may simply be subject to different procurement rules in the specific classified defence context described in this week's reporting. Absence from a particular government sandbox does not mean absence from enterprise broadly. Microsoft is pressing ahead with its own models, which is the right long-term strategy: depending indefinitely on OpenAI creates its own strategic risk. The $110 billion OpenAI raise values that company at $730 billion before the investment — at those numbers, Microsoft needs its own model story as insurance.
My prediction: Microsoft wins the enterprise productivity layer. Not because their models are the most capable, but because they have the deepest integration into how enterprise work actually happens — calendar, email, documents, code, and now with ServiceNow's Teams integration, automated task execution. That distribution advantage compounds every quarter.
5. Google: The Most Underreported Enterprise AI Story
Distribution plus data is a combination nobody else can replicate at scale
I am on Google's analyst list and receive regular briefings. Google is the most underreported enterprise AI story right now. Workspace integration means every Gmail and Google Docs user is already inside the Gemini ecosystem without making a separate purchasing decision. They have enterprise distribution through Google Cloud and Vertex AI, deep cloud infrastructure, and critically — the real-world data advantage that no competitor can replicate at the same scale.
From my earlier published research at Info-Tech Research Group: conversational AI has been the most successful AI use case across verticals and industries — even before ChatGPT. Google's Customer Engagement Suite connects its foundation models with AI agents to offer a platform that is AI-optimised and scalable on Google Cloud infrastructure, with more than 30 data retrieval connectors and 70-plus action connectors for integration with major platforms including Salesforce and ServiceNow.
Search, Gmail, Maps, YouTube, and Android all feed a training and feedback loop that continuously improves Google's models with real-world, real-time human behaviour. The enterprise play is quiet, embedded, and enormous. Every enterprise that runs on Google Workspace is already an AI customer whether they have formally decided to be one or not. That silent adoption path is one of the most powerful go-to-market advantages in the industry.
My honest observation: Google remains a question mark for some enterprise customers regarding their cohesive AI strategy. The capability is not in doubt. The clarity of the enterprise narrative still needs work. Companies that are better at explaining what they do will win deals that Google should be winning on technical merit alone.
6. OpenAI: The Brand Leader Still Catching Up on Enterprise
$110 billion raised. $730 billion valuation. Enterprise credibility still being earned.
OpenAI closed $110 billion in new funding on February 27, 2026 — the day before this Wall Street Journal edition — at a pre-money valuation of $730 billion. The round was led by Amazon at $50 billion, with $30 billion each from Nvidia and SoftBank. Accompanying the funding announcement, OpenAI reported that ChatGPT now has more than 900 million weekly active users and has surpassed 50 million consumer subscribers, with January and February 2026 on track to be the largest months for new subscriber additions in the company's history. The company also reported that weekly users of its AI coding tool Codex have reached 1.6 million, more than tripling since the beginning of the year (OpenAI, February 27, 2026).
While ChatGPT is a household name, Anthropic is not — even though Anthropic often fares better with enterprise deployments. OpenAI has the consumer recognition advantage but still has ground to cover on the compliance certifications, data residency guarantees, and enterprise-grade support structures that Chief Information Security Officers require before signing large contracts. The $730 billion valuation is pricing in enterprise dominance that has not fully materialised yet. That is not a reason to count them out. It is a reason to watch the next 18 months carefully.
7. Grok and xAI: The SpaceX Pivot Changes Everything — If It Sticks
Consumer attention and enterprise reliability are incompatible roadmaps
Grok's problems this week were significant and well documented. Multiple federal agencies raised safety concerns. A 33-page General Services Administration report determined that Grok "does not meet the safety and alignment expectations required for general federal use," and the agency concluded that even limited government use of Grok would require strict and layered safety oversight. The platform was also under fire for allowing sexualised editing of photos.
The merger of xAI with SpaceX is the more strategically interesting signal. SpaceX has real enterprise and government customers — the National Aeronautics and Space Administration, the Department of Defense, commercial satellite operators. That customer base demands mission-critical reliability standards, predictable behaviour, and strict operational governance. These requirements are the opposite of the consumer chatbot positioning that defined Grok's first chapter. Whether Musk can resist the consumer attention economy long enough to build boring, reliable enterprise infrastructure is the open question. I am watching.
8. Meta: Not an AI Vendor. A Training Data Machine.
The photos and videos are not products. They are data collection at planetary scale.
Meta is not selling AI to enterprises the way Amazon Web Services or Google Cloud does. They are the training data machine upon which the entire industry depends more than it wants to admit.
Meta by the Numbers — Q4 2025
3.58 billion daily active users as of Q4 2025. $48.45 billion spent on research and development in the 12 months ending June 2025.
Meta's Llama models are open-source precisely because the competitive moat is not the model itself — it is the continuous real-world data flywheel from those billions of users. Enterprises use Llama because it is free and customisable. Meta profits because every deployment refines the research that feeds their advertising targeting engine. The gimmicky photos, videos, Reels, and augmented reality filters are not products for their own sake. They are data collection mechanisms that entertain 45% of the world's population daily while training Meta's understanding of human visual preference and behavioural intent.
In terms of real-time learning data — which is critical for keeping large language models current and culturally accurate — the race is between Google, Meta, and xAI now merged with SpaceX. That data advantage translates directly into advertising return on investment for Google and Meta through a self-reinforcing loop: better data produces better ad targeting, better targeting produces more revenue, more revenue funds better data infrastructure. That loop is very difficult for competitors to break into.
9. LinkedIn: The Proprietary Insider That Just Changed Its Rules
Not on the same wavelength as the others — but moving faster than you think
LinkedIn's AI posture has always been proprietary and distinct from the broader consumer AI market. Their Productive Machine Learning initiative spans offline, nearline, and online environments and is deeply custom-built for their specific use cases: feed ranking, job matching, recruiter tools, connection suggestions, and content moderation. Their Photon-Connect platform enables engineers to rapidly try different machine learning methods across multiple model types. The guardrails LinkedIn maintains are not simply conservative corporate instincts — they are brand protection for a platform whose value depends entirely on professional credibility.
But the picture changed significantly in late 2025. Starting November 3rd, LinkedIn began using member profiles, posts, resumes, and public activity to train its generative AI models — on by default, with an opt-out requiring active steps. This drew criticism particularly around General Data Protection Regulation compliance in European markets. Being Microsoft-owned means Azure OpenAI is almost certainly powering the generative surface features — writing assistance, message suggestions, job description generation — layered on top of LinkedIn's own proprietary recommendation and matching infrastructure.
The November data policy change is LinkedIn making its play for the training data race using the most professionally dense dataset in the world — approaching one billion members' career histories, skills, endorsements, and professional interactions. No other platform has that specific signal. Whether LinkedIn can monetise it beyond their own platform or whether it remains internal infrastructure is the open question for their next chapter.
10. Zoho and Zia: The Quietly Ambitious Insurgent
Owning the full tech stack and charging less. Sound familiar?
I receive briefings from Zoho and recently attended Zoho Day. Zoho is underrepresented in mainstream AI coverage precisely because it operates in a different stratum — small and medium business to mid-market enterprise — but it is quietly building capabilities that put real pricing and privacy pressure on players far above its weight class.
What makes Zoho and their AI assistant Zia distinctive is the philosophy more than the technology. Most players in this analysis are either building AI and figuring out how to sell software around it, or bolting AI onto existing cloud infrastructure as a premium add-on. Zoho's position is that their shared data model, fully owned and managed technology stack, and portfolio of more than 100 applications are their greatest assets in building a stable AI platform. If you have the resources, you can use a company like Databricks or Creatio to create tailored processes. Alternatively you can use Zoho, which has AI specifically engineered for your daily use cases and priced to match.
The biggest move came in July 2025 when Zoho launched Zia LLM — their own proprietary large language model — alongside 40 pre-built Zia Agents, a no-code agent builder called Zia Agent Studio, and a Model Context Protocol server. Zia LLM comes in three sizes: 1.3 billion, 2.6 billion, and 7 billion parameters. The right-sizing philosophy is deliberate. Enterprises do not need a 70-billion parameter model to route a support ticket, score a sales lead, or summarise a customer call. Using a smaller, purpose-built model is faster, less expensive, and keeps data on infrastructure the enterprise controls. Zoho partnered with Nvidia during development, with CEO Mani Vembu noting the goal was to bring "cutting edge toolsets at a lower cost."
Zia Agent Coverage — Current Scope
Account management, sales development representative lead qualification, human resources onboarding, customer support, information technology help desk, and sales coaching. Each agent is assigned a unique identifier and mapped as a digital employee so enterprises can audit performance and workflows with guardrails.
Agent-to-Agent protocol support is on the roadmap, allowing Zia Agents to collaborate with each other and with agents on other platforms — including, potentially, ServiceNow's AI Specialists.
The privacy commitment is where Zoho has a genuine differentiator. Their fully owned technology stack enables AI that trains on your data and runs on your data without ever exposing it to external vendors' models. They explicitly commit to never using customer data to train their AI models, and they are compliant with the General Data Protection Regulation, Health Insurance Portability and Accountability Act, and California Consumer Privacy Act. For mid-market companies in regulated industries, that is a hard procurement requirement, not a nice-to-have.
The pricing strategy is arguably the most disruptive element. There are no per-use fees for Zia. AI features come included in Zoho apps — CRM, Projects, Creator, Books, Desk, and more. When the rest of the market charges per-seat AI licenses on top of existing software costs, Zoho bundling AI into the base subscription removes a massive adoption friction point. This is the Amazon low-cost playbook applied one layer down the market — and for the right buyer, it closes deals without a lengthy evaluation.
One nuance worth noting: Zoho's Workplace plan also supports a Bring Your Own Key model where administrators can connect GPT, Gemini, Claude, or Cohere and assign them to specific workflows. This is Bedrock-style flexibility at Zoho pricing. The enterprise does not have to choose between Zoho's native models and best-in-class third-party models — it can use both in context, with the administrator controlling exactly which workflows, and therefore which data, reach those external providers.
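A minimal way to picture that assignment model is a per-workflow routing table with the native model as the default. Everything here is hypothetical — the workflow names, model labels, and function are invented for illustration and are not Zoho's actual configuration surface:

```python
# Hypothetical sketch of bring-your-own-key model assignment per workflow.
# Names are invented for illustration; this is not Zoho's API.
workflow_models = {
    "email_drafting": "gemini",    # routed to an admin-supplied Gemini key
    "contract_summary": "claude",  # routed to an admin-supplied Claude key
}

def model_for(workflow: str) -> str:
    # Anything not explicitly mapped falls back to the bundled native model,
    # so data leaves the platform only for workflows the admin opted in.
    return workflow_models.get(workflow, "zia-native")

print(model_for("email_drafting"))  # gemini
print(model_for("lead_scoring"))    # zia-native
```

The design point is the default: third-party models are an explicit, per-workflow opt-in rather than the baseline, which is what keeps the privacy story coherent.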
11. What I Am Confident About and What Is My Interpretation
Transparency is more valuable than appearing certain
I want to be explicit about where the evidence is solid and where I am offering interpretation. This distinction matters, and blending the two without acknowledgment is a problem I see in a lot of technology analysis.
Verified from primary sources: ServiceNow's 99%, 90%, and 55x performance statistics are from their February 24, 2026 analyst briefing, which I attended. The AI Specialists, Employee Works, NowLLM, and AI Control Tower announcements are confirmed from the same briefing and subsequent press releases. The CVS Health CIO quote is from the briefing. Zoho launched Zia LLM in July 2025 with three model sizes. Meta has 3.58 billion daily active users as of Q4 2025. Meta spent $48.45 billion on research and development in the 12 months ending June 2025. LinkedIn began using member data for generative AI training from November 3rd, 2025, on by default with opt-out available. Amazon's investment in Anthropic is as reported by the Wall Street Journal. OpenAI raised $110 billion at a $730 billion valuation. The Pentagon's six-month Anthropic phaseout is confirmed in reporting. Google has more than 30 data retrieval connectors and 70-plus action connectors in their Customer Engagement Suite — from my published Info-Tech Research Group analysis.
My interpretations that should be read as analysis, not verified fact: The characterisation of Zoho's customer base and market position. The assertion that Zoho is absent from government contracts — based on absence of evidence in available reporting, not confirmed knowledge. The strategic predictions about xAI and SpaceX. Microsoft's distribution advantage compounding quarterly. Google's silent adoption path. The description of LinkedIn's architecture as hybrid — an informed inference based on Microsoft ownership, not confirmed by either company. The 30 to 50 percent cost advantage estimate for Amazon Nova. The prediction that autonomous resolution percentage will replace time-to-resolution as the key enterprise AI metric over 60 months.
12. The Actual State of the Race
Infrastructure wins. Execution differentiates. Data rules everything.
This race has multiple tracks running simultaneously, and the winner on each track is different.
Infrastructure layer: Amazon Web Services Bedrock, Microsoft Azure, and Google Vertex AI are the structural winners. The cloud relationship comes first. AI capability rides on top of existing trust.
Execution layer: ServiceNow is building something the model vendors are not — the governance and workflow infrastructure that makes autonomous AI safe enough for regulated enterprises to actually run. Answers are not business outcomes. Execution is.
Trust and safety positioning: Anthropic wins in regulated industries where safety guardrails are non-negotiable. The Pentagon dispute is the best proof of concept they could have asked for.
Brand and developer mindshare: OpenAI wins on recognition and through developer tooling that converts to enterprise standardisation. ServiceNow wins the enterprise IT brand narrative with clarity that Google and Anthropic should study.
Training data supremacy: The real race for long-term model quality is between Google, Meta, and xAI with SpaceX. Revenue from that advantage accrues most directly to Google and Meta through advertising. For xAI it is a capability investment with no clear near-term revenue path.
Mid-market insurgent: Zoho is eating the vast middle of the global market with a better price-to-value and privacy story than anyone else in that tier. Do not underestimate them.
The question I keep returning to: in 18 months, when AI capability differences between the top models narrow further — and they will — what will enterprises be choosing between? They will be choosing on price, privacy, compliance, integration, and the ability to execute autonomously within governed guardrails. On every one of those dimensions, the hyperscalers, the execution platforms like ServiceNow, and the embedded players like Zoho hold structural advantages over the model-first companies.
A Note on How I Learn — and an Open Invitation
Briefings are welcome. There is no cost unless you need an advisory relationship.
George Bernard Shaw said that we learn throughout our lives except for a short break in school. That is my philosophy too. The analysis in this post is better because I attended the ServiceNow briefing on February 24th, because I was at Zoho Day, and because Amazon and Google keep me updated on what they are building. Firsthand access to the people making decisions — and the candid conversations that happen in briefings that never make the press release — is irreplaceable.
I am on the analyst lists for Amazon, Zoho, Google, ServiceNow, and IBM Red Hat. I would welcome briefings from other providers in the enterprise AI, Software as a Service, marketing technology, collaboration, productivity, and customer experience space. There is no cost to a briefing unless you are looking for a formal advisory relationship, in which case we can have that conversation separately.
What Is In It for You
Where your product or research is relevant to what I am writing about, I will mention it — here on shashi.co in posts like this one, in conversations with practitioners and technology leaders, and in research notes and tech notes published on infotech.com, where the audience includes approximately 40,000 Chief Information Officers, Chief Technology Officers, and senior technology decision-makers who rely on Info-Tech Research Group for software selection guidance and enterprise strategy.
I will not mention you where you are not relevant. I will not soften a critical view because we have a briefing relationship — you can see from this post that I am direct about limitations alongside strengths. What I will do is make sure I understand what you are building well enough to represent it accurately when it matters to the people who are making real purchasing and strategy decisions.
The enterprise technology market does not have a shortage of vendors. It has a shortage of decision-makers who understand the difference between what is being marketed and what is actually working at scale. My value to the companies I brief with is being one of the people who can make that translation — to a community of technology leaders who are actively choosing platforms, signing contracts, and advising their organisations on where to invest.
If you are building something in this space and believe it deserves to be part of this conversation, reach out. Connect with me on LinkedIn or through the contact details on this blog.
Boring is beautiful. Predictable is powerful. The enterprises that understand this first will build the AI infrastructure advantage that compounds for a decade. The ones chasing the benchmark headlines will be rebuilding their stacks in 2028.
shashi.co · Strategy & Technology Analysis
Sources: Wall Street Journal (February 28, 2026 print edition), ServiceNow Autonomous Workforce Analyst Briefing (February 24, 2026), Meta Investor Relations Q4 2025, Zoho Corporation press releases (July 2025), Constellation Research, Business Wire, Info-Tech Research Group published research, Social Media Today, TechRadar.
Disclaimer: This blog reflects my personal views only. AI tools may have been used for research support. This content does not represent the views of my employer, Info-Tech Research Group. I hold analyst relationships with Amazon, Zoho, Google, ServiceNow, and IBM Red Hat. I attended Zoho Day and the ServiceNow Autonomous Workforce briefing as an analyst.