Three separate government bodies published measurable results from artificial intelligence deployments this month. The State of Maryland reports $400,000 in savings. The State of Connecticut says forensic investigation times dropped from months to hours. The City of Dearborn, Michigan, says 70 percent of incoming city service calls are now resolved by an AI-powered multilingual chatbot. All three are using Google Cloud infrastructure. All three numbers come from a single vendor's newsletter. That is reason enough to look closely.
Why These Numbers Matter More Than Most
Government artificial intelligence announcements rarely include numbers that survive scrutiny. Vendors describe pilots. Agencies describe intentions. What Maryland, Connecticut, and Dearborn have published is different in kind, not just degree: specific figures tied to specific operational outcomes. Maryland's $400,000 in savings came from deploying Gemini and NotebookLM across a 40,000-person workforce, reducing manual task load and compressing a clean water management application from what would normally be a multi-month build to five weeks. That is a cost figure grounded in specific delivery work, not a vague productivity estimate.
Connecticut's reduction in forensic investigation time is a security posture claim, and a significant one. Moving from months to hours on cyber investigations is not a marginal improvement. It represents a structural change in how the state's security operations team works, from reactive case-by-case analysis to something closer to real-time triage. The platform behind it is Google Security Operations, which consolidates security event data and uses AI-assisted investigation to surface and triage threats faster than a manual workflow allows.
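To make that structural change concrete, consider what the shift from case-by-case analysis to triage looks like in code. The sketch below is purely illustrative: the event type, scoring fields, and function names are hypothetical and do not reflect the Google Security Operations API. What it shows is the shape of the change the platform enables, with events from separate tools merged into one queue and worked in risk order rather than arrival order.

```python
from dataclasses import dataclass

# Purely illustrative. These names are hypothetical and are not the
# Google Security Operations API.

@dataclass
class SecurityEvent:
    source: str    # e.g., "firewall", "endpoint", "identity-provider"
    risk: float    # model-assigned risk score in [0.0, 1.0]
    summary: str

def consolidate(feeds: list[list[SecurityEvent]]) -> list[SecurityEvent]:
    """Merge events from separate tools into one queue -- the
    'unified platform' step that replaces fragmented, per-tool review."""
    return [event for feed in feeds for event in feed]

def triage(events: list[SecurityEvent], threshold: float = 0.8) -> list[SecurityEvent]:
    """Surface the highest-risk events first instead of working cases
    in arrival order -- the move from reactive analysis to triage."""
    flagged = [e for e in events if e.risk >= threshold]
    return sorted(flagged, key=lambda e: e.risk, reverse=True)
```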
Three Different Problems, One Pattern
What is worth examining across all three cases is where the value is actually coming from. Maryland's outcome is a workforce efficiency story: a large civil service organization reducing cognitive overhead and compressing application development cycles with generative AI tools. Connecticut's outcome is a security operations story: replacing fragmented, time-intensive manual analysis with a unified platform that uses AI to accelerate investigation. Dearborn's outcome is a citizen services story: a multilingual chatbot built on Vertex AI that handles a majority of incoming service requests in a city with a diverse, non-English-speaking population, without adding headcount.
Three different use cases. Three different problem types. But the underlying pattern is the same: government agencies facing staffing constraints, budget pressure, and expanding service demands found that AI tools produced measurable operational relief faster than traditional procurement and implementation cycles would have predicted. That pattern is worth taking seriously, precisely because it cuts against the common criticism that public sector AI moves slowly and rarely produces demonstrable results.
The Vendor Context You Cannot Ignore
These results were published in a Google Public Sector newsletter, curated and distributed by Google's analyst relations team. That does not make them false, but it is the right framing to keep in view. Vendor-reported customer outcomes are real outcomes, attested to by the customers they describe. They are not independent audits, and they are not representative samples. Every case study in a vendor newsletter reflects a deployment that worked well enough to publicize. The deployments that did not produce strong results are not in the newsletter.
That caveat matters for how technology decision-makers read these numbers. Maryland's $400,000 figure is meaningful context for agencies considering similar Gemini and NotebookLM deployments. It is not a benchmark. Connecticut's forensic timeline compression reflects one state's experience with Google Security Operations on top of a specific data environment, with specific staffing and integration decisions that another state may not be able to replicate exactly. Dearborn's 70 percent resolution rate reflects a particular city's call volume, language distribution, and chatbot scope that a different municipality would need to evaluate against its own situation.
What Replicability Actually Requires
Across all three cases, the precondition for measurable results appears to be the same: a clearly scoped problem, existing data infrastructure that the AI tool could connect to, and an operational owner inside the agency who defined success before deployment began. Maryland did not deploy Gemini across 40,000 employees and wait to see what happened. The clean water app had a specific scope, a five-week target, and a team that owned the outcome. Dearborn did not build a general-purpose chatbot. It built a city services triage tool for a defined set of call types in a defined set of languages.
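For a sense of what that scope discipline looks like in configuration terms, here is a minimal sketch assuming the Vertex AI Python SDK. The project ID, model choice, call types, and languages are illustrative assumptions, not Dearborn's actual deployment. The point is that the scope exists as a reviewable artifact before the first resident interaction, not as an aspiration discovered after launch.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Hypothetical scope definition: the call types and languages a city
# would pin down before deployment, per the pattern described above.
CALL_TYPES = ["trash pickup", "permit status", "water billing", "pothole report"]
LANGUAGES = ["English", "Arabic", "Spanish"]

vertexai.init(project="my-city-project", location="us-central1")  # hypothetical project

model = GenerativeModel(
    "gemini-1.5-pro",  # model choice is an assumption, not Dearborn's
    system_instruction=(
        "You are a city services triage assistant. You handle only these "
        f"request types: {', '.join(CALL_TYPES)}. Respond in whichever of "
        f"these languages the resident uses: {', '.join(LANGUAGES)}. For "
        "anything outside this scope, say so and route the caller to a "
        "human agent. Do not answer out-of-scope questions."
    ),
)

# A resident asks about trash pickup in Spanish; the reply comes back
# in the same language, and out-of-scope requests get routed to a human.
response = model.generate_content("¿Cuándo pasa el camión de basura por mi calle?")
print(response.text)
```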
That discipline is the actual differentiator. The tools are increasingly commoditized. Gemini, NotebookLM, and Google Security Operations are available to any agency with a procurement vehicle and a Google Cloud relationship. The agencies producing results have done the harder pre-work: constraint definition, scope discipline, and outcome ownership before the contract is signed.
The Broader Competitive Picture
Google is not alone in this space. Microsoft and Amazon Web Services both have active public sector AI programs with their own customer portfolios. Anthropic and OpenAI have both made moves into federal deployments. What Google's public sector results demonstrate, taken as a group, is that outcomes are available at the state and local level, not just at the federal level where most AI procurement attention has focused. Maryland, Connecticut, and Dearborn are not federal agencies. They are organizations operating under tighter budgets, with smaller technical teams and less procurement infrastructure, producing results that federal CIOs are still trying to demonstrate.
That is the underreported part of this story. The proof points for government AI are accumulating at the state and municipal level first. Federal agencies should be watching closely, because the replicability question is shifting from "can government AI produce results" to "what preconditions determine which agencies get those results and which do not."
Maryland, Connecticut, and Dearborn each had a defined problem and an accountable owner before they deployed. For government technology leaders evaluating similar investments: do you know which of your current operations has a constraint clear enough to produce a number in twelve months, and have you assigned someone who will be held to it?
Sources
Gray, Matthew Cooper. "Google Public Sector Newsletter: March 2026 Highlights." Google Public Sector Analyst Relations, 31 Mar. 2026.
"Charting a Path Forward with Maryland and Google AI." Google Cloud, 2026, youtube.com/watch?v=BKHxnvPav3w.
"Connecticut Reduces Cyber Investigations from Months to Hours with Google SecOps." Google Cloud, 2026, youtube.com/watch?v=N2l0NUlPlqk.
"Dearborn Is Delivering 24/7 City Services in 100+ Languages with Google AI." Google Cloud, 2026, youtube.com/watch?v=rUglM3jsVBE.
"GSA, Google Announce Transformative 'Gemini for Government' OneGov Agreement." U.S. General Services Administration, 21 Aug. 2025, gsa.gov.
