Google this week previewed a bevy of artificial intelligence (AI) agents and platforms that enable application developers and the DevOps teams that support them to automate a wide range of software engineering tasks.
Announced at the Google Cloud Next 2025 conference, the latest additions to the Gemini Code Assist and Gemini Cloud Assist portfolio include AI agents that, via a natural language chat interface, can generate code from product specifications in Google Docs, migrate code from one language to another, create code to address issues described in a GitHub repository, generate and run tests and, finally, create documentation.
Google is also previewing two additional AI agents for its Firebase platform, specifically trained to build and test prototypes of applications.
Google is also now previewing a set of connectors for integrating those AI agents, which leverage the reasoning capabilities built into the company's Gemini 2.5 large language model (LLM), into an integrated development environment (IDE).
Google is also now offering a limited preview of a sandbox environment, dubbed the Enterprise tier of the Google Developer Program, that allows developers to easily experiment with its AI capabilities for a flat fee of $75 per month.
At the same time, Google is providing DevOps teams with access to new platforms that enable them to manage cloud infrastructure services at higher levels of abstraction. For example, a public preview of a graphical Application Design Center service makes it simpler to deploy applications using a set of visual templates. Gemini Cloud Assist agents are also integrated with the Application Design Center to accelerate application infrastructure design and deployment, and with the Google FinOps Hub 2.0 to help optimize consumption of cloud infrastructure resources.
Additionally, application deployments will now automatically be registered in the Google App Hub service, which is now being extended to include a private preview of a Cost Explorer tool and a public preview of an application monitoring tool that automatically tags logs, metrics and traces.
Google is also making available a public preview of a Cloud Hub service that makes it simpler to centralize the management of application environments in a way that surfaces insights into deployments, health and troubleshooting issues, resource optimization, maintenance, quotas and reservations, and support cases.
Brad Calder, vice president and general manager for the Google Cloud Platform (GCP), said that capability will make it much simpler for organizations to understand the relationship between utilization of cloud infrastructure resources and actual costs.
Mitch Ashley, vice president and practice lead for application development and DevOps at The Futurum Group, added that by taking an application-centric approach, Google is bringing a more holistic approach to application development, DevOps, observability, IT operations and FinOps.
It’s not clear how rapidly organizations are embracing AI tools for building and deploying applications. A recent Futurum Research survey finds that 41% of respondents expect generative AI tools and platforms will be used to generate, review and test code, while 39% plan to make use of AI models based on machine learning algorithms. More than a third (35%) also plan to apply AI and other forms of automation to IT operations, the survey finds.
The challenge now, of course, is determining the degree to which organizations can rely on AI tools to fully, versus partially, automate a task.