Google Cloud has made G4 virtual machines (VMs), powered by Nvidia RTX PRO 6000 Blackwell Server Edition graphics processing units (GPUs), generally available.
It also added Nvidia Omniverse and Nvidia Isaac Sim virtual machine images to the Google Cloud Marketplace to support visual computing, simulation, and physical AI workloads.
G4 VM instances can be configured with up to eight RTX PRO 6000 GPUs, providing a total of 768GB of GDDR7 memory alongside high-throughput local and network storage.
The instances integrate with Google Cloud’s AI Hypercomputer architecture and with services such as Google Kubernetes Engine, Vertex AI, and Dataproc. This integration supports containerised deployments, machine learning (ML) operations and large-scale analytics on Apache Spark and Hadoop.
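In practice, a team could target the new instances through Vertex AI's custom job API. The sketch below is illustrative only: the machine type string and accelerator type name are assumptions, and the published identifiers for G4 hardware may differ.

```python
# Minimal sketch: submitting a Vertex AI custom job on a G4 VM.
# The machine_type and accelerator_type strings are assumptions for
# illustration; check Google Cloud's published names before use.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

worker_pool_specs = [
    {
        "machine_spec": {
            "machine_type": "g4-standard-48",          # assumed G4 machine type
            "accelerator_type": "NVIDIA_RTX_PRO_6000",  # assumed accelerator name
            "accelerator_count": 1,
        },
        "replica_count": 1,
        "container_spec": {
            "image_uri": "us-docker.pkg.dev/my-project/my-repo/inference:latest",
            "command": ["python", "serve.py"],
        },
    }
]

job = aiplatform.CustomJob(
    display_name="g4-inference-job",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```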
The RTX PRO 6000 Blackwell Server Edition is built on the Nvidia Blackwell architecture and targets AI inference and visual computing workloads.
It includes fifth-generation Tensor Cores that support FP4 data formats and fourth-generation RT Cores that Nvidia reports deliver more than twice the real-time ray-tracing performance of the previous generation.
Nvidia Omniverse is now available as a Marketplace virtual machine image (VMI) offering integration libraries and frameworks based on Universal Scene Description (OpenUSD).
The Omniverse VMI, together with Nvidia Cosmos and Omniverse Blueprints, supports creation and operation of real-time, physically accurate digital twins.
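To give a sense of what OpenUSD-based authoring looks like, the sketch below creates a trivial USD stage with the pxr Python API that Omniverse builds on; the file name, prim paths, and attribute values are placeholders.

```python
# Minimal sketch: authoring a simple OpenUSD stage with the pxr Python API,
# the scene description format Omniverse builds on. Paths and values are
# illustrative only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")

# A root transform and a single cube stand in for digital-twin geometry.
world = UsdGeom.Xform.Define(stage, "/World")
cube = UsdGeom.Cube.Define(stage, "/World/Machine")
cube.GetSizeAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```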
Nvidia Isaac Sim is available as a VMI to support training, simulation and validation of robotics in physics-based virtual environments ahead of deployment.
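A typical headless workflow on the Isaac Sim VMI might resemble the sketch below. It assumes the SimulationApp entry point and core World API shipped with recent Isaac Sim releases; exact module paths vary by version.

```python
# Minimal sketch: running a headless Isaac Sim smoke test on a cloud VM.
# Module paths are those used by recent Isaac Sim releases and may differ
# between versions.
from omni.isaac.kit import SimulationApp

# Start the simulator without a GUI, as on a headless cloud instance.
simulation_app = SimulationApp({"headless": True})

# Simulation APIs are importable only after the app has been created.
from omni.isaac.core import World

world = World()
world.scene.add_default_ground_plane()
world.reset()

# Step the physics simulation a few hundred frames as a basic check.
for _ in range(300):
    world.step(render=False)

simulation_app.close()
```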
The range of Nvidia software available on Google Cloud includes models and tools aimed at agentic AI, scientific high-performance computing, and design and visual computing.
The stack includes the Nemotron family of open reasoning models, Nvidia Blueprints, Nvidia NIM microservices for inference, CUDA-X libraries for scientific workloads, and Nvidia RTX Virtual Workstation software for virtualised design pipelines.
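LLM-focused NIM microservices typically expose an OpenAI-compatible HTTP endpoint, so an inference call from a G4 VM can look like the sketch below; the base URL and model name are placeholders that depend on which NIM container is deployed and how it is exposed.

```python
# Minimal sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible API. The base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-for-local-nim",   # local deployments may not require a key
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Summarise the G4 VM announcement."}],
    max_tokens=200,
)

print(response.choices[0].message.content)
```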
Nvidia reported that some genomics sequence-alignment algorithms show up to 6.8x throughput improvement on the RTX PRO 6000 Blackwell GPU versus the prior generation.
Google Cloud and Nvidia position these additions as part of a Blackwell-based portfolio that spans products for large-scale training and inference through to the RTX PRO 6000 Blackwell, which handles inference and visual computing on G4 VMs.
This enables organisations to run multistage data analytics and physical AI pipelines within a single cloud environment.
Recently, Salesforce expanded its partnership with Google to bring Gemini AI models more deeply into its Agentforce 360 platform.
The enhanced integration will introduce Gemini’s multimodal intelligence to the Salesforce ecosystem, enabling capabilities like hybrid reasoning and multistep process automation for enterprise sales and IT services.
