You're Buying a Subscription, Not a Device

Analysis · 9 minutes · 2026-05-16
Physical AI · Enterprise Infrastructure

Four technologies are assembling into a stack that makes centralized cloud compute look like an expensive detour: a reconfigurable inference chip, a decentralized edge operating system, a governed integration runtime, and an on-device vision layer.

60 TOPS · SAKURA-II at 8 watts
~78% · Projected lifecycle cost reduction (hypothetical fleet model)
95% · Potential data egress reduction with on-chip anomaly filtering

Being an analyst means reading a lot, attending a lot of briefings, and then sitting with the connections. Sometimes five separate companies line up in a way that none of them planned. I got back from Boomi World in Chicago and that is exactly what happened at my kitchen table.

I was thinking about five companies and how their work fits together. Then Google announced AI-capable laptops built around on-device intelligence, and I realized I had been using a version of that argument every day. The photo at the top of this post was taken with my Google Pixel. The camera suggested the composition, helped me get the right frame, and processed everything locally. The Tensor chip inside the phone made those decisions without sending anything to a server.

My phone already does this. The question I kept coming back to: when does the washing machine?

Five companies, one kitchen table

The companies I kept thinking about are EdgeCortix, mimik Technology, Boomi, Adobe, and Canva. Each one is working on a different piece of the same shift: moving AI from centralized cloud servers onto the physical devices themselves.

EdgeCortix makes a chip called SAKURA-II. It runs serious AI workloads at 8 watts of power, roughly the draw of an LED light bulb. It can run a capable open-source language model entirely on-device, with no cloud connection required. Think of it as the Tensor chip inside my Pixel, engineered for industrial equipment instead of a smartphone.
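The headline numbers reduce to a simple efficiency ratio. A quick sketch, using only the figures cited in this post:

```python
# Rough efficiency math for the SAKURA-II figures cited above.
# 60 TOPS at 8 W is the vendor's headline spec; the per-watt
# number below is just that ratio, not an independent benchmark.
tops = 60   # trillions of operations per second
watts = 8   # power draw, roughly an LED bulb

tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")  # 7.5 TOPS/W
```

That performance-per-watt figure is what makes "AI inside the appliance" a power-budget conversation rather than a data-center one.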

mimik Technology, founded by Fay Arjomandi, builds the software that lets those chips work together. If one machine needs more compute than its own chip can handle, mimik borrows processing capacity from the idle machine next to it, over the local network. A row of washers becomes a small shared computer. No internet required.

Boomi brought governance into this picture. At Boomi World 2026 in Chicago, the company announced the Distributed Agent Runtime in collaboration with Red Hat. The architecture pairs Red Hat AI and OpenShift with Boomi's runtime, using Red Hat's Kubernetes-native infrastructure for high-performance inferencing and AI governance across hybrid cloud environments. Enterprises can run their own models on Red Hat OpenShift and execute the Boomi runtime locally, keeping the full stack on-premises.

The significance for this story is that Boomi is not a startup making edge computing promises; it is an enterprise integration platform used by large organizations globally. When Boomi ships a runtime that keeps AI local, its customers listen. When a machine's AI decides to open a maintenance ticket or check parts inventory, Boomi's runtime executes that action through proper business rules. The AI figures out what needs to happen; Boomi handles the action. Keeping those two roles separate is what makes the system trustworthy enough to run in a production environment.
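The "AI decides, the runtime executes" split is easy to show in miniature. The allow-list and audit log below are hypothetical stand-ins for whatever governance rules an integration platform enforces; nothing here is Boomi's actual API:

```python
# Sketch of a governed execution layer: an AI model proposes actions,
# and a rules layer decides which ones actually run. Hypothetical
# policy, not a real integration-platform API.
ALLOWED_ACTIONS = {"open_ticket", "mark_unavailable", "order_part"}

def execute(proposal: dict, audit_log: list) -> bool:
    """Run an AI-proposed action only if it passes the business rules,
    and record every decision either way."""
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        audit_log.append(("rejected", action))
        return False
    audit_log.append(("executed", action))
    return True

log: list = []
execute({"action": "open_ticket", "machine": "washer-3"}, log)
execute({"action": "wire_funds", "machine": "washer-3"}, log)  # hallucinated action, blocked
print(log)
```

The model can propose anything; only actions the business has pre-approved ever fire, and everything lands in an audit trail.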

Adobe's piece is the most forward-looking. The idea is that on-device image processing eliminates the need to send camera footage to a cloud vision service. Adobe has not announced an appliance runtime. The economics of streaming raw video at fleet scale are already a problem, though, and the direction Adobe's on-device work is heading makes this a reasonable projection.

Canva is the fifth. At Canva Create 2026, the company announced offline mode: one click marks a design for offline availability, core editing works without a connection, and everything syncs when connectivity returns. I wrote at the time that the infrastructure question Canva did not fully address was the edge computing architecture that offline-first at global scale eventually requires. Peer-to-peer, low-bandwidth, intermittent connectivity environments are exactly the problem mimik's platform is designed to solve. Canva Offline works today as a sync-on-reconnect feature. The version that works for a field team in a facility with no Wi-Fi, or a designer in Lagos who stays offline for hours, needs the kind of distributed local mesh this stack describes. Canva has 265 million monthly users. That is a global reach problem, and global reach at scale is an edge infrastructure problem.

My phone already does this. The question is when the industrial machines running our buildings and facilities catch up.

A washing machine that fixes itself

A commercial washing machine in a university housing complex or hotel laundry room monitors its own temperature, vibration, motor load, and cycle status. Today that data streams continuously to a cloud platform. You pay to transmit it, store it, and analyze it, including the 99% of readings that say everything is fine.

Put an EdgeCortix chip inside the machine and the analysis happens locally. The machine notices a change in its own vibration pattern, the early warning of a bearing about to fail, before any person can hear or feel it. It does not send raw data to the cloud. It identifies the problem on-device.
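One way on-chip detection could work is a rolling baseline with a z-score threshold. This is illustrative only; real predictive-maintenance models use spectral features of the vibration signal, not a single scalar, and the readings below are invented:

```python
# Minimal on-device anomaly detector: keep a rolling window of
# vibration readings and flag any value more than z_threshold
# standard deviations from the baseline. Illustrative sketch only.
from collections import deque
import math

class VibrationMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, rms_mm_s: float) -> bool:
        """Return True if this reading is anomalous vs. the baseline."""
        if len(self.readings) >= 10:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9
            if abs(rms_mm_s - mean) / std > self.z_threshold:
                return True  # anomaly: keep it out of the baseline
        self.readings.append(rms_mm_s)
        return False

monitor = VibrationMonitor()
for r in [2.0, 2.1, 1.9, 2.0, 2.05, 1.95, 2.0, 2.1, 1.9, 2.0]:
    monitor.observe(r)           # healthy baseline builds up

ok = monitor.observe(2.02)       # normal reading
alarm = monitor.observe(4.8)     # bearing starting to sing
print(ok, alarm)
```

Only the alarm, not the stream of healthy readings, needs to leave the machine.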

mimik's software checks whether the machine next door has spare processing capacity to run a deeper diagnostic. It does. The two machines work through it together, entirely within the laundry room's local network.

Boomi's runtime then acts on the diagnosis. It marks the machine unavailable before the next person tries to start a load. It opens a maintenance ticket. It checks whether the replacement part is in stock and routes an order if it is not.

The machine handled the first three steps of its own repair before anyone filed a complaint.

From detection to action, no cloud required
1. On-chip detection notices bearing vibration is off
2. Local mesh borrows adjacent processing to confirm
3. On-device model identifies the failure and part number
4. Boomi runtime locks reservation, opens ticket, routes parts order
5. Cloud receives one structured event, not weeks of raw sensor data
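The payoff of step 5 is measured in bytes. The event schema and sampling rate below are invented for illustration, and the 95% egress figure in the summary stats is a projection, not a measurement:

```python
# Size of one structured event vs. a week of raw telemetry.
# The schema, part number, and sensor assumptions are hypothetical.
import json

event = {
    "device": "washer-3",
    "type": "bearing_wear_detected",
    "confidence": 0.93,
    "part_number": "BRG-6204-2RS",  # hypothetical part
    "actions_taken": ["mark_unavailable", "open_ticket", "order_part"],
}
event_bytes = len(json.dumps(event).encode())

# Versus streaming 4 sensor channels at 1 Hz, 8 bytes per reading, for a week:
raw_bytes = 4 * 1 * 8 * 60 * 60 * 24 * 7

print(event_bytes, "bytes vs", raw_bytes, "bytes of raw telemetry")
```

The cloud still learns what it needs to know; it just stops paying to hear "everything is fine" once a second.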

The bill you are not looking at

Most connected device fleets are running on a cloud subscription whether they know it or not. Every sensor reading gets transmitted. Every reading gets stored. Every diagnostic query goes to a cloud AI service and comes back as a charge. At ten devices that is negligible. At ten thousand devices over five years, the math looks like this.

| Cost Center (10,000 Units / 5 Years) | Cloud-Dependent Fleet | Device-First Edge Stack |
| --- | --- | --- |
| Silicon and Hardware | $0 (standard microcontrollers) | ~$250,000 upfront for EdgeCortix NPU hardware |
| Data Transmission | ~$600,000 for continuous raw telemetry | ~$30,000 for anomaly events only |
| Cloud Storage | ~$450,000 to store all those "everything is fine" readings | ~$45,000 for active service ticket logging |
| AI Inference | ~$900,000 in per-query cloud AI charges | $0 (runs on the chip) |
| Software Licensing | ~$300,000 for cloud integration platform | ~$150,000 for mimik and Boomi edge runtime |
| Five-Year Total | ~$2,250,000, growing with every device added | ~$475,000, largely fixed at purchase |

These figures are illustrative, not vendor pricing or a specific customer case. The point is the structure: one cost grows with the fleet, the other does not.
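The structural difference is easy to model. The per-unit rates below are back-solved from the illustrative 10,000-unit table, not vendor pricing; the point is which terms multiply by `years` and which do not:

```python
# Illustrative cost structure: cloud spend recurs every year, the
# edge stack is mostly a one-time hardware purchase. Rates are
# derived from the hypothetical table above, not vendor pricing.
def cloud_cost(units: int, years: int) -> float:
    # transmission + storage + inference + licensing, all recurring
    annual_per_unit = (600_000 + 450_000 + 900_000 + 300_000) / (10_000 * 5)
    return units * years * annual_per_unit

def edge_cost(units: int, years: int) -> float:
    upfront_per_unit = 250_000 / 10_000  # NPU hardware, paid once
    # event transmission + ticket logging + edge runtime licensing
    annual_per_unit = (30_000 + 45_000 + 150_000) / (10_000 * 5)
    return units * (upfront_per_unit + years * annual_per_unit)

print(cloud_cost(10_000, 5))  # matches the table's five-year total
print(edge_cost(10_000, 5))
```

Extend the horizon to ten years and the cloud line doubles while the edge line barely moves; that asymmetry, not the specific dollar figures, is the argument.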

Why Boomi World changed this conversation

A hardware startup announcing edge AI capabilities is one thing. An enterprise integration platform with a large global customer base shipping a local agent runtime is a different signal. Boomi's customers are the CIOs and operations teams who actually buy connected device fleets. When Boomi tells them they can run AI locally and keep data behind the firewall, that carries weight that a chip company announcement does not.

Boomi is not building washing machine chips. The Distributed Agent Runtime is the governance and execution layer that makes the washing machine scenario deployable at enterprise scale.

The data that stays in the building

Beyond cost, there is a simpler argument. When the AI runs on the device, your operational data stays in the building. Machine behavior patterns, facility usage, internal camera footage: none of it reaches a cloud storage layer. That is not something you negotiate in a service agreement. It is a property of the architecture.

For regulated industries and anyone who manages sensitive facilities, that distinction matters regardless of the cost comparison.

Shashi Bellamkonda · Principal Research Director, Info-Tech Research Group · Former Adjunct Professor, Georgetown University, Entrepreneur in Residence, Stony Brook University, NY
Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.