New addition to AI infrastructure supports agile AI development and meets demand for sovereign European infrastructure
LUXEMBOURG, March 31, 2026 /PRNewswire/ -- Gcore, the global edge AI, cloud, network, and security solutions provider, today announced the launch of GPU Virtual Machines (VMs) on NVIDIA Hopper, delivering flexible, cost-efficient access to AI compute.
As AI development becomes more iterative and central to company functioning, organisations increasingly need infrastructure that can scale dynamically. Gcore's VMs with NVIDIA GPUs make high-performance computing more accessible to a broad range of customers.
This new addition to Gcore's AI infrastructure and software suite is launching first in Sines-3, Gcore's sovereign AI region in Portugal, in response to growing demand for European-based AI infrastructure. GPU VMs provide access to the same NVIDIA Hopper GPUs and high-bandwidth NVIDIA Quantum InfiniBand networking as Gcore's Bare Metal GPU Cloud, without requiring a long-term commitment to hardware. This flexible deployment model is ideal for early- and growth-phase AI startups looking for performant GPUs without high fixed costs, EU R&D labs needing sovereign infrastructure for burst PoCs and experiments, and research institutions seeking to run short-term, high-intensity fine-tuning on a budget.
Cutting idle costs without complexity
Some AI jobs require the full power and always-on availability of dedicated bare metal clusters. Others need something more agile: compute that can be sized up or down quickly, used for a burst of experimentation, powered down when idle, and spun back up when the next training run begins. Gcore's new GPU VMs allow companies to match GPU capacity and cost to the stage of their project with precision.
One of the biggest advantages of GPU VMs is how they behave when they're not in use. When the instance is powered off, GPU billing pauses automatically. Volumes, IPs, and configuration remain intact, but the GPU meter stops running, so companies only pay for storage and IPs while paused.
When ready to start work again, teams can restart the VMs without any setup or reconfiguration. They can flexibly use a single Hopper GPU, scale to two or four, or jump to an eight-GPU VM when their workload requires it.
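The cost advantage of this pause-aware model can be illustrated with a minimal sketch. The per-hour rates below are hypothetical placeholders, not Gcore's actual pricing; the sketch only encodes the rule described above: GPU time is billed while the VM is powered on, while storage and IPs are billed for the whole month.

```python
def monthly_cost(gpu_hours_running, total_hours=730,
                 gpu_rate=2.50, storage_rate=0.02, ip_rate=0.005):
    """Estimate a month's bill under a pause-aware billing model.

    GPU time is billed only while the VM is powered on;
    storage and IP charges accrue for the full month.
    All rates are hypothetical placeholders.
    """
    gpu_cost = gpu_hours_running * gpu_rate
    always_on_cost = total_hours * (storage_rate + ip_rate)
    return gpu_cost + always_on_cost

# A burst workload (80 GPU-hours) vs. leaving the VM on all month (~730 hours):
burst_bill = monthly_cost(80)
always_on_bill = monthly_cost(730)
```

With these placeholder rates, the burst workload pays only a small fixed storage/IP charge on top of its 80 GPU-hours, while the always-on VM pays for every hour in the month, which is the gap the pause feature is designed to close.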
Key product capabilities:
- Reduces operational overhead: Power VMs off when idle and resume instantly, eliminating the need to redesign workflows.
- Creates flexible, cost-efficient compute: Adjust GPU capacity as workloads scale without committing long-term to a dedicated server.
- Maintains trusted infrastructure: Benefit from the same AI infrastructure as Gcore Bare Metal GPU instances, including high-bandwidth InfiniBand networking.
Part of a larger AI infrastructure roadmap
Gcore GPU VMs are the latest expansion of Gcore GPU Cloud, which already includes Bare Metal GPUs and Spot Bare Metal GPUs, a cost-saving option that runs on spare capacity in a region when it becomes available. Customers can combine and switch between these GPU solutions in the Gcore Customer Portal, giving them precise, flexible control over how their GPU compute is paid for and used.
Seva Vayner, Product Director, Cloud Edge & AI at Gcore, comments: "This launch reflects Gcore's mission to democratise access to AI and connect the world to AI anytime, anywhere. Whether you're an early- or growth-phase AI startup or an SMB looking for performant GPUs without high fixed costs, an EU R&D lab needing sovereign infrastructure for burst experiments, or a research institution seeking to run short-term, high-intensity fine-tuning on a budget, Gcore GPU VMs deliver the flexibility and cost efficiency you need."
GPU VM workloads can be deployed on Gcore in just three clicks via the Gcore Customer Portal. For more information on how to scale AI workloads without compromising on compute price, speak to the Gcore team.
About Gcore
Gcore is a global provider of infrastructure and software solutions for AI, cloud, network, and security, headquartered in Luxembourg. Operating its own sovereign infrastructure across six continents, Gcore delivers reliable, ultra-low latency performance for enterprises and service providers. Its AI-native cloud stack enables organisations to build, train, and scale AI models seamlessly across public, private, and hybrid environments, while integrating AI, compute, networking, and security into a single platform for mission-critical workloads.
Logo: https://mma.prnewswire.com/media/2527184/5890277/Gcore_Logo.jpg