GPU Virtual Machine AMD

High-efficiency compute with AMD GPUs


Overview

Tap next-generation cloud performance and solve your most compute-intensive challenges with GPU Virtual Machine AMD. Scalable, reliable, and optimised for maximum computing power: up to 81.7 TFLOPS (FP64), 163.4 TFLOPS (FP32), 2,614 TFLOPS (FP16), and 5,229 TFLOPS (FP8) with 192 GB of HBM3 memory, engineered for demanding HPC and massive-scale AI training.

Built on a strong, stable foundation engineered for maximum computing potential, AMD GPU VMs let you push beyond limits and innovate. Speed up intensive simulations, train deep learning models, and render breathtaking visual content at unprecedented rates. Streamline your resource usage and shorten your journey to breakthrough results by leveraging state-of-the-art AMD GPU technology on an adaptable, business-ready cloud platform.

Variants

A range of configurations is available, allowing you to select the options best suited to testing and to hosting production environments. The offerings are grouped into the categories below; please review them carefully before choosing your configuration for deployment. A quick environment-verification sketch follows the list.
Ubuntu (AMD)
  • Best for: Development & non-production environments
  • Framework: Base Ubuntu 20.04 LTS installation
  • GPU configuration: 1x, 2x, 4x, or 8x GPU
  • Billing options: On-demand, 1-month, 6-month, 12-month reserved
  • Environment: Testing and development
Ubuntu PyTorch (AMD)
  • Best for: Development & non-production environments
  • Framework: Pre-configured PyTorch with ROCm
  • GPU configuration: 1x, 2x, 4x, or 8x GPU
  • Billing options: On-demand, 1-month, 6-month, 12-month reserved
  • Environment: Research and model training
Ubuntu TensorFlow (AMD) (Recommended)
  • Best for: All production workloads
  • Framework: Enterprise TensorFlow with ROCm optimization
  • GPU configuration: 1x, 4x, or 8x GPU
  • Billing options: On-demand, 1-month, 6-month, 12-month reserved
  • Environment: Production AI workloads and enterprise deployments
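
As a quick, hedged check after provisioning one of the PyTorch (AMD) variants, a short script like the following confirms that the pre-installed stack sees the GPUs. ROCm builds of PyTorch report AMD devices through the standard cuda namespace and expose the HIP version under torch.version.hip:

    # Quick sanity check on a freshly provisioned Ubuntu PyTorch (AMD) VM.
    # ROCm builds of PyTorch expose AMD GPUs through the "cuda" namespace.
    import torch

    print("PyTorch version:", torch.__version__)       # ROCm builds carry a +rocm tag
    print("HIP version:", torch.version.hip)           # None on non-ROCm builds
    print("GPU available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())     # should match the 1/2/4/8 variant
    for i in range(torch.cuda.device_count()):
        print(f"  GPU {i}:", torch.cuda.get_device_name(i))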

Core Features at a Glance 

High-Speed Memory Bandwidth
Achieves smoother training and inference with memory bandwidth of up to 5.3 TB/s, ideal for compute-heavy workloads.
Optimised Precision Modes
Provides native hardware support for FP8, BF16, FP16, INT8 — plus full FP32 and FP64 precision — enabling high-efficiency AI training, inference, and HPC workloads.
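For illustration, a minimal sketch of a bfloat16 mixed-precision training step in PyTorch (layer sizes, batch size, and learning rate are placeholders); because ROCm builds of PyTorch address AMD GPUs through the cuda device namespace, the same autocast code runs unchanged on these VMs:

    # Minimal sketch: bfloat16 mixed-precision training step in PyTorch.
    import torch
    import torch.nn as nn

    device = "cuda"  # AMD GPUs appear under the cuda namespace on ROCm builds
    model = nn.Linear(4096, 4096).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(32, 4096, device=device)
    target = torch.randn(32, 4096, device=device)

    # autocast runs the matmuls in bfloat16 while parameters stay in FP32
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), target)

    loss.backward()   # backward pass runs outside the autocast context
    optimizer.step()

With bfloat16 no gradient scaler is required; an FP16 variant of the same loop would typically add torch.amp.GradScaler to avoid underflow.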
Instant & Flexible Provisioning
Spins up GPU VMs on demand with configurable specs, real-time deployment, and workload-optimised scaling.
Cross-Platform OS Support
Can be deployed on Ubuntu or RHEL for wide compatibility across AI, data analytics, and visual rendering workflows.
Framework-Ready Environment
Full support for AMD ROCm, along with leading AI/ML frameworks like PyTorch and TensorFlow, ready for immediate, accelerated development out of the box.
Flexible Pricing & Real-Time Insights
Choose from On-Demand, Reserved, or Rental pricing options, and monitor GPU usage in real time for performance tuning and cost efficiency.
Extreme Model Capacity
Runs large models with up to 192 GB of ultra-fast HBM3 memory and long context lengths.
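As a rough sizing example: a 70-billion-parameter model held in FP16 (2 bytes per parameter) needs about 140 GB for its weights, so it fits on a single 192 GB GPU without sharding; in FP8 (1 byte per parameter) the same weights take about 70 GB, leaving headroom for a long-context KV cache.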
Multi-GPU Scalability
Trains models faster by scaling seamlessly across 2, 4, or 8 GPUs per VM for parallel workloads.
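
As an illustrative sketch of spanning all GPUs in an 8x VM, here is a minimal PyTorch DistributedDataParallel loop (the model, data, and filename are placeholders); on ROCm builds of PyTorch the nccl backend is backed by AMD's RCCL library, so the familiar launch flow carries over:

    # Minimal sketch: data-parallel training across all GPUs in one VM.
    # Launch with: torchrun --nproc_per_node=8 train.py
    import os
    import torch
    import torch.nn as nn
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group(backend="nccl")  # backed by RCCL on ROCm
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(1024, 1024).to(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):                       # placeholder training loop
        x = torch.randn(64, 1024, device=local_rank)
        loss = model(x).square().mean()          # dummy loss on random data
        optimizer.zero_grad()
        loss.backward()                          # DDP all-reduces gradients here
        optimizer.step()

    dist.destroy_process_group()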

Still have questions?

How does the AMD MI300X compare to the NVIDIA H200?
The AMD MI300X excels in memory capacity (up to 192 GB HBM3), making it ideal for large language models and memory-bound AI tasks. The NVIDIA H200 provides higher ecosystem maturity with CUDA and a broader range of AI software support. Performance varies by workload: the MI300X often leads in LLM inference, while the H200 is optimal for CUDA-based deep learning frameworks.
Which workloads are AMD GPU VMs best suited for?
LLM training and inference, HPC simulations, and ROCm-based workloads.
How much memory and bandwidth does the MI300X provide?
The MI300X offers 192 GB of HBM3 memory with approximately 5.3 TB/s of bandwidth.
Do you offer both on-demand and reserved capacity?
Yes, our platform offers both on-demand access for flexible scaling and reserved capacity options for guaranteed availability, ideal for enterprise SLAs and scheduled training runs.
What GPU configurations are available?
We offer GPU nodes in configurations of 1, 2, 4, or 8 GPUs per node, depending on the instance type and GPU model. This allows fine-tuned scaling based on workload intensity and budget.
Do you provide pre-configured VM images?
Yes, we provide ready-to-deploy VM images with pre-installed GPU drivers and AI/ML libraries (PyTorch/TensorFlow) optimised for each GPU model.

Ready to Build Smarter Experiences?

Please provide the necessary information to receive additional assistance.