Google’s latest useful agent story is not a press release. It is homework. In Google’s “Fabric of Unified Intelligence” codelab for its Cloud Next ’26 keynote demos, the company walks developers through a multi-agent workflow where Gemini Enterprise orchestrates agents deployed on Cloud Run, shares context across handoffs, pulls from BigQuery, generates product videos with Veo, and eventually kicks work over to Gemini CLI. That is much more revealing than another stage clip of an assistant nodding confidently at a dashboard.
The setup is fictional, because enterprise software is legally required to involve a pretend company with good furniture. You play a product manager or merchandising lead at “Organic Living,” trying to revive slow-moving inventory. The agent cast is familiar: a Market Research Agent analyzes trends and customer feedback, a Product Strategy Agent turns that into campaign ideas, an Orchestrator Agent coordinates the workflow, and a Dev Agent creates tasks and scaffolds code. The point is not that furniture needs more agents. The point is that a real business workflow does not fit inside one chat bubble for very long.
The real story is the plumbing. The codelab has users create a Google Cloud project, enable a small parade of APIs, set up BigQuery tables for mock inventory and sales data, create a BigQuery Data Agent, create shared Drive and Cloud Storage resources, grant IAM roles, deploy custom agents to Cloud Run, and register those agents inside Gemini Enterprise via Agent-to-Agent integration. That is not a criticism. It is the valuable part. Demos make agent work look like intention plus sparkle. The lab makes it look like identity, data access, runtime, permissions, storage, tool registration, and cleanup.
That matters for Useful Machines readers because this is where agent projects usually stop being cute. A chatbot can summarize a trend report. An agentic workflow that changes campaign plans, writes requirements, stores files, calls data systems, generates assets, and hands work to a developer needs boundaries. Which service account can read the warehouse data? Where do generated requirements land? What does the dev agent receive? What context survives the handoff? Who approved the plan before the expensive or visible part happened? Those questions are not bureaucratic garnish. They are the product.
Google’s example also shows why “multi-agent” is both useful and over-marketed. The lab has you issue a single prompt in Gemini Enterprise: analyze current interior design trends, identify dead stock that matches the trend, and orchestrate a relaunch campaign. Behind that prompt, the Market Research Agent uses Deep Research, the Data Insights path uses the BigQuery agent to find low-velocity inventory, and the Product Strategy Agent turns the findings into a campaign strategy. In the lab’s own recap, one prompt engages multiple agents to analyze trends, identify inventory, develop a relaunch strategy, generate multimedia assets, simulate a business-to-dev handoff, and build a landing page with Gemini CLI.
That is a useful pattern, not a universal endorsement. Splitting work across agents can make a workflow easier to reason about when the agents map to actual responsibilities: research, data retrieval, strategy, implementation. It can also create a distributed fog machine if nobody can inspect the intermediate artifacts. The practical test is whether each agent produces something auditable enough for the next agent — or the human — to trust. “Shared context” sounds lovely. Shared confusion is also shared.
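That auditability test is easy to sketch concretely. Here is a minimal illustration, with hypothetical agent functions and a simplified pipeline (none of these names come from Google's codelab): each agent returns a named, inspectable artifact, and the pipeline records every intermediate output so a human can say no before the expensive step.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """One inspectable intermediate output from one agent."""
    producer: str   # which agent produced this
    content: dict   # the payload the next agent consumes

@dataclass
class Pipeline:
    """Runs agents in order, keeping every intermediate artifact for audit."""
    audit_log: list = field(default_factory=list)

    def run(self, agents, context):
        for name, agent in agents:
            context = agent(context)
            self.audit_log.append(Artifact(producer=name, content=context))
        return context

# Hypothetical stand-ins for the research / data / strategy agents.
def market_research(ctx):
    return {**ctx, "trend": "warm minimalism"}

def data_insights(ctx):
    return {**ctx, "dead_stock": ["oak side table"]}

def product_strategy(ctx):
    return {**ctx, "campaign": f"Relaunch {ctx['dead_stock'][0]} as {ctx['trend']}"}

pipeline = Pipeline()
result = pipeline.run(
    [("research", market_research),
     ("data", data_insights),
     ("strategy", product_strategy)],
    {"prompt": "revive slow-moving inventory"},
)

# Every handoff is now inspectable before anything customer-facing ships.
for artifact in pipeline.audit_log:
    print(artifact.producer, "->", sorted(artifact.content))
```

The design choice worth copying is not the agents; it is the audit log. If the intermediate artifacts are plain data that a human or the next agent can inspect, "shared context" stops being a slogan and becomes something you can veto.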
One small technical detail deserves attention: the codelab says the Market Research Agent wraps the Gemini Deep Research Interactions API and uses an AI Studio API key because that API is currently available through the AI Studio endpoint. Translation: even inside a Google Cloud demo, the clean enterprise-agent story crosses product seams. Builders should expect seams. They should document them. They should also ask whether a prototype depends on preview APIs, separate credentials, regional assumptions, or manual console setup that will make production deployment less elegant than the keynote adjective promised.
The Gemini CLI section is the most interesting handoff. After the business-side agents create a task, the user switches roles into Cloud Shell, configures Gemini CLI with a remote dev-agent entry, asks the dev agent to work on the generated task, and then tells Gemini to build and deploy the website. That is the version of agentic work that feels closer to reality: not one omniscient assistant doing everything, but a chain where one environment produces a bounded assignment and another environment executes it with different tools, different approvals, and different risk.
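The shape of that handoff is worth sketching. This is a minimal illustration under an assumed JSON task format (not the codelab's actual schema): the business environment emits a bounded, serializable assignment, and the dev environment deserializes only that assignment, never the conversation that produced it.

```python
import json

# Business side: reduce the agents' work to a bounded, serializable task.
task = {
    "id": "TASK-001",                      # hypothetical identifier
    "summary": "Build relaunch landing page",
    "requirements": ["hero video", "product grid", "campaign copy"],
    "approved_by": "merchandising-lead",   # approval recorded before handoff
}
wire = json.dumps(task)                    # this is all that crosses the seam

# Dev side: receives the assignment, not the chat history behind it.
received = json.loads(wire)
assert received.get("approved_by"), "refuse unapproved work"
print(f"Starting {received['id']}: {received['summary']}")
```

The boundary is the point: because the dev environment only ever sees the serialized task, its tools, approvals, and risk can differ from the business side's without either environment leaking into the other.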
For teams evaluating Gemini Enterprise, the move is not to copy the furniture demo and declare victory over operations. Use it as a checklist. Can your current AI workflow register tools and agents cleanly? Can it separate data access from generation? Can it preserve context without spraying sensitive files everywhere? Can it hand work from a business user to a developer without turning Slack into the system of record? Can it show sources and intermediate outputs well enough that someone can say no before the website, campaign, ticket, or customer-facing artifact goes live?
The clicky version of this story is that Google showed a multi-agent system building a product campaign and website. Fine. The useful version is that Google showed how much ordinary cloud machinery sits beneath a serious agent demo. Cloud Run, BigQuery, IAM, Drive, Cloud Storage, Gemini Enterprise, Veo, and Gemini CLI all have jobs to do. That is less magical than “one prompt runs the business,” and far more believable. The next phase of agent adoption will belong less to the vendors with the shiniest orchestration diagrams and more to the teams that can make handoffs boring, traceable, and reversible. Boring is where the useful machines live.
In short
Google’s Cloud Next ’26 codelab shows Gemini Enterprise coordinating Cloud Run agents, BigQuery, Veo, Drive, and Gemini CLI. The useful lesson is not magic autonomy; it is where shared context and handoffs actually have to live.