At the Cloud Next 2025 conference held in Las Vegas, Google announced several updates related to AI infrastructure, models and enterprise tools. The company also confirmed that it will invest approximately USD 75 billion in capital expenditure in 2025, largely for AI compute and cloud infrastructure.
Google Cloud CEO Thomas Kurian stated that the company now has over 4 million developers using Gemini, and that usage of Vertex AI has grown 20-fold. He described the company’s infrastructure as spanning 42 cloud regions, with availability in 200 countries, connected by more than 2 million miles of fiber.
Alphabet and Google CEO Sundar Pichai introduced Google’s seventh-generation Tensor Processing Unit (TPU), called Ironwood. According to the company, Ironwood is 29 times more energy-efficient and 3,600 times more powerful than its first TPU. A single Ironwood pod includes over 9,000 chips and delivers 42.5 exaflops of compute. These chips will underpin Google’s AI Hypercomputer stack.
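Taken together, the quoted pod-level figures imply per-chip throughput in the low-petaflop range. The quick calculation below assumes a pod size of roughly 9,200 chips, since the announcement only says ‘over 9,000’; the exact count is not stated here.

```python
# Back-of-the-envelope check of the quoted Ironwood pod figures.
# Assumption: a pod size of 9,216 chips stands in for "over 9,000 chips";
# the precise count is not given in the announcement summarised above.
pod_exaflops = 42.5            # quoted compute per Ironwood pod
chips_per_pod = 9_216          # assumed chip count per pod

per_chip_petaflops = pod_exaflops * 1_000 / chips_per_pod
print(f"~{per_chip_petaflops:.1f} petaflops per chip")   # roughly 4.6 petaflops
```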
Google also announced the public preview of Gemini 2.5 Pro, described as its most capable multimodal model to date. In addition, the company introduced Gemini 2.5 Flash, a version optimised for lower latency and cost. Gemini models will now be available on Google Distributed Cloud (GDC), including in air-gapped environments, with partner support from NVIDIA and Dell.
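For developers, the preview models are reachable through the standard Vertex AI path. The snippet below is a minimal sketch assuming the google-genai Python SDK and a Google Cloud project with Vertex AI enabled; the project ID, region and exact preview model name are illustrative rather than taken from the announcement.

```python
# Minimal sketch: calling a Gemini 2.5 preview model through Vertex AI.
# Assumes the google-genai SDK (pip install google-genai); project ID,
# region and the exact preview model name are illustrative assumptions.
from google import genai

client = genai.Client(
    vertexai=True,
    project="my-gcp-project",   # hypothetical project ID
    location="us-central1",     # hypothetical region
)

response = client.models.generate_content(
    model="gemini-2.5-pro",     # preview identifiers may differ
    contents="Summarise the key announcements from Cloud Next 2025.",
)
print(response.text)
```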
The company stated that Meta’s Llama 4 and other open models will also be hosted via Vertex AI. For building AI agents, Google launched the Agent Development Kit (ADK), an open-source toolkit, and the Agent2Agent (A2A) protocol, which enables agent communication across platforms.
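To give a sense of what building with the ADK looks like, here is a minimal sketch following the open-source toolkit’s quickstart pattern; the import path, constructor arguments and the tool function are assumptions for illustration, not details from the announcement. An agent defined this way could then be exposed to agents on other platforms over A2A.

```python
# Minimal sketch of an ADK agent (pip install google-adk).
# The import path and Agent arguments follow the toolkit's quickstart
# pattern and are assumptions, not taken from the announcement itself.
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Hypothetical tool: return a canned weather report for a city."""
    return {"city": city, "forecast": "sunny", "temperature_c": 24}


root_agent = Agent(
    name="weather_agent",
    model="gemini-2.5-flash",                 # any supported Gemini model ID
    description="Answers simple weather questions.",
    instruction="Call get_weather to answer questions about the weather.",
    tools=[get_weather],
)
```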
In generative AI, Google announced Imagen 3, a text-to-image model with improvements in detail and inpainting. It also introduced Chirp 3, a voice model that can generate speech from 10-second audio samples; Lyria for music generation; and Veo 2 for video creation and editing. These models will be integrated into Vertex AI.
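These media models sit behind the same SDK surface as the text models. A hedged sketch for image generation follows, assuming the google-genai SDK’s image-generation call and an illustrative Imagen model identifier.

```python
# Minimal sketch: generating an image with Imagen on Vertex AI.
# Assumes the google-genai SDK; the model ID, project and region are
# illustrative and may not match the exact preview names.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="my-gcp-project", location="us-central1")

result = client.models.generate_images(
    model="imagen-3.0-generate-002",    # illustrative Imagen 3 model ID
    prompt="A watercolour sketch of a data centre at sunrise",
    config=types.GenerateImagesConfig(number_of_images=1),
)

with open("sunrise.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```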
Google Workspace added new features, including a ‘Help Me Analyse’ function in Sheets, audio summaries in Docs, and Workspace Flows, which automates routine tasks using AI-based agents.
In cloud networking, Google said its private Cloud WAN will soon be globally available, offering an alternative to public internet connectivity. The company claims it delivers up to 40 per cent faster network performance for enterprise users, along with cost savings.
For developers and IT operators, Gemini Code Assist was announced as a tool that translates natural language prompts into code and integrates with platforms like Atlassian and Snyk. Gemini Cloud Assist will offer support for system troubleshooting, architecture design and cost management.
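As an illustration of the kind of translation involved (a constructed example, not captured Gemini Code Assist output), a prompt such as ‘remove duplicate entries from a list while preserving order’ might reasonably produce a function like the one below.

```python
# Illustrative only: a plausible result for a natural-language prompt like
# "remove duplicate entries from a list while preserving order".
# This is a constructed example, not actual Gemini Code Assist output.
def dedupe_preserving_order(items):
    """Return a new list with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```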
Updates for data and analytics teams include conversational capabilities in Looker, embedded agents in Colab notebooks and enhancements in data engineering workflows. Google also introduced Customer Engagement Suite, which allows AI agents to be deployed across contact points such as call centers, websites and physical stores.
Finally, Google announced Agentspace, a tool that allows enterprise employees to interact with internal systems and data via AI agents in Chrome Enterprise. The company said the platform supports frameworks such as LangGraph and CrewAI for enabling agent interoperability.