GPU Rental Cost Calculator
Compare H100, H200, A100 Cloud Pricing Across 15 Providers
Hourly and monthly GPU rental costs from hyperscalers (AWS, Azure, GCP), neoclouds (CoreWeave, Lambda, Crusoe), and marketplaces (Vast.ai, RunPod). Pricing updated weekly from live sources.
GPU cloud costs vary 3-5x across providers. Understanding pricing tiers helps optimize AI training budgets, negotiate enterprise contracts, and choose the right provider for workload requirements.
Select GPU model, usage hours, and cluster size. Compare providers by $/hour, monthly cost, or annual spend. Neoclouds typically offer 40-60% savings vs hyperscalers with trade-offs in support and compliance.
Unlock Live Pricing & Historical Trends
Get real-time pricing updates, 12-month historical trends, negotiated enterprise rates, and GLRI integration.
- ✓ Daily pricing updates via live API
- ✓ Negotiated enterprise pricing (not publicly listed)
- ✓ 12-month historical trend charts
- ✓ GLRI lease rate correlation analysis
Monthly Cost Comparison
8x H100 SXM @ 720 hours/month
Detailed Provider Comparison
| Provider | Tier | $/Hour | Monthly (1 GPU) | Your Monthly | Annual | Key Features |
|---|---|---|---|---|---|---|
| Vast.ai (Best Value) | marketplace | $1.89 | $1.4K | $10.9K | $130.6K | DePIN, 60-90% cheaper |
| CoreWeave | neocloud | $1.99 | $1.4K | $11.5K | $137.5K | Kubernetes, InfiniBand |
| Voltage Park | neocloud | $2.10 | $1.5K | $12.1K | $145.2K | Large clusters, Dedicated |
| Nebius | neocloud | $2.20 | $1.6K | $12.7K | $152.1K | Orchestration, InfiniBand |
| Crusoe Energy | neocloud | $2.30 | $1.7K | $13.2K | $159.0K | Stranded gas, Carbon neutral |
| RunPod | marketplace | $2.39 | $1.7K | $13.8K | $165.2K | Serverless, Spot GPUs |
| Lambda Labs | neocloud | $2.49 | $1.8K | $14.3K | $172.1K | 1-Click Clusters, PyTorch native |
| Genesis Cloud | neocloud | $2.69 | $1.9K | $15.5K | $185.9K | 100% renewable, EU GDPR |
| Hyperstack | neocloud | $2.85 | $2.1K | $16.4K | $197.0K | Enterprise SLA, GDPR |
| Google Cloud | hyperscaler | $3.00 | $2.2K | $17.3K | $207.4K | Vertex AI, TPU access |
| Paperspace | marketplace | $3.09 | $2.2K | $17.8K | $213.6K | Gradient ML, Notebooks |
| Oracle Cloud | hyperscaler | $3.50 | $2.5K | $20.2K | $241.9K | Bare metal, RDMA |
| AWS | hyperscaler | $3.90 | $2.8K | $22.5K | $269.6K | SageMaker, Enterprise SLA |
| IBM Cloud | hyperscaler | $5.20 | $3.7K | $30.0K | $359.4K | Compliance, Hybrid cloud |
| Microsoft Azure | hyperscaler | $7.35 | $5.3K | $42.3K | $508.0K | Azure ML, Enterprise |
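The monthly and annual columns above follow directly from the hourly rate and the selected configuration (8x H100 SXM at 720 hours/month). A minimal sketch of that arithmetic, using a few illustrative rates taken from the table:

```python
# Reproduce the comparison table columns from an hourly rate.
# Assumes the default configuration above: 8 GPUs, 720 hours/month.
GPUS = 8
HOURS_PER_MONTH = 720

def monthly_cost(rate_per_gpu_hour: float, gpus: int = GPUS,
                 hours: int = HOURS_PER_MONTH) -> float:
    """Total monthly spend for a cluster at a given $/GPU-hour rate."""
    return rate_per_gpu_hour * gpus * hours

# Illustrative on-demand rates from the table above ($/GPU-hour).
providers = {"Vast.ai": 1.89, "CoreWeave": 1.99, "AWS": 3.90, "Microsoft Azure": 7.35}

for name, rate in providers.items():
    per_gpu_month = rate * HOURS_PER_MONTH   # "Monthly (1 GPU)" column
    cluster_month = monthly_cost(rate)       # "Your Monthly" column
    annual = cluster_month * 12              # "Annual" column
    print(f"{name:16s} ${per_gpu_month:>8,.0f}/GPU-mo  "
          f"${cluster_month:>9,.0f}/mo  ${annual:>10,.0f}/yr")
```

The spread between the cheapest and most expensive rows ($1.89 vs $7.35 per GPU-hour) is where the 3-5x variation quoted above comes from.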
Model Calculation Disclaimer
These calculations use simplified models of complex market realities. Assumptions about future conditions are inherently uncertain. Small changes in input parameters can significantly affect outputs. Always verify results with qualified professionals.
How to Choose a GPU Cloud Provider
GPU cloud pricing varies significantly based on provider type, commitment level, and workload requirements. Hyperscalers (AWS, Azure, GCP) offer enterprise SLAs and compliance but at 2-3x the cost of neoclouds. Neoclouds (CoreWeave, Lambda) provide competitive pricing with production-grade infrastructure. Marketplaces (Vast.ai, RunPod) offer the lowest prices but with less consistency.
When to Use Hyperscalers
Choose AWS, Azure, or GCP when you need enterprise compliance (SOC2, HIPAA, FedRAMP), integration with existing cloud infrastructure, or 24/7 enterprise support. Best for production inference at scale with strict SLA requirements.
When to Use Neoclouds
CoreWeave and Lambda offer 40-60% cost savings with InfiniBand networking for multi-node training. Ideal for AI training workloads, research clusters, and teams comfortable with Kubernetes-native infrastructure.
When to Use Marketplaces
Vast.ai and RunPod provide the lowest costs for batch processing, experimentation, and non-production workloads. Use for development, prototyping, or workloads that can tolerate interruptions and variable performance.
Frequently Asked Questions
What is the cheapest way to rent H100 GPUs?
Vast.ai offers the lowest H100 rates at ~$1.89/hr through their DePIN marketplace. For production workloads, CoreWeave at $1.99/hr is the most cost-effective option with enterprise-grade infrastructure. Spot instances can reduce costs by 50-70% for interruptible workloads.
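To put the quoted spot discount in concrete terms, here is a rough sketch; the 50-70% range is the figure above, while the rate and job size are illustrative assumptions:

```python
# Rough spot-vs-on-demand savings for an interruptible job.
# The 50-70% discount range is the figure quoted above; actual spot
# pricing varies by provider, region, and time of day.
on_demand_rate = 2.49    # $/GPU-hour, illustrative neocloud H100 rate
job_gpu_hours = 5_000    # total GPU-hours the job needs

on_demand_total = on_demand_rate * job_gpu_hours
for discount in (0.50, 0.70):
    spot_total = on_demand_total * (1 - discount)
    print(f"{discount:.0%} spot discount: "
          f"${on_demand_total:,.0f} on-demand -> ${spot_total:,.0f} spot")
```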
How much does it cost to train a large language model?
Training costs depend on model size, dataset, and training duration. A 7B parameter model requires ~1,000 GPU-hours (~$2,000-4,000). A 70B model requires ~100,000 GPU-hours (~$200,000-400,000). Using neoclouds vs hyperscalers can reduce costs by 50%+.
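A back-of-the-envelope version of that estimate, using the GPU-hour figures above and treating the hourly rate as the only cost driver (storage, networking, and restarted jobs are ignored):

```python
# Back-of-the-envelope LLM training cost: GPU-hours x $/GPU-hour.
# GPU-hour figures are the rough estimates quoted above; real runs also
# pay for storage, data transfer, failed/restarted jobs, and idle time.
gpu_hour_estimates = {"7B model": 1_000, "70B model": 100_000}
rates = {"neocloud (~$2/hr)": 2.00, "hyperscaler (~$4/hr)": 4.00}

for model, gpu_hours in gpu_hour_estimates.items():
    for label, rate in rates.items():
        print(f"{model} on {label}: ~${gpu_hours * rate:,.0f}")
```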
Should I use reserved instances or on-demand?
Reserved instances (1-3 year commitments) offer 30-60% discounts vs on-demand. Choose reserved if GPU utilization exceeds 60% consistently. On-demand is better for variable workloads, experimentation, or when uncertain about long-term needs.
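One way to sanity-check the 60% utilization rule of thumb is to compare on-demand spend at a given utilization against a reserved commitment that bills for every hour. A minimal sketch, assuming an illustrative $2.49/hr on-demand rate and a 40% reserved discount:

```python
# Reserved vs. on-demand break-even by utilization.
# The rate and the 40% reserved discount are illustrative assumptions;
# real discounts depend on term length (1-3 years) and provider.
on_demand_rate = 2.49                        # $/GPU-hour
reserved_rate = on_demand_rate * (1 - 0.40)  # 40% discount, billed 24/7
hours_per_month = 720

for utilization in (0.3, 0.5, 0.6, 0.8, 1.0):
    on_demand_monthly = on_demand_rate * hours_per_month * utilization
    reserved_monthly = reserved_rate * hours_per_month  # paid whether used or not
    cheaper = "reserved" if reserved_monthly < on_demand_monthly else "on-demand"
    print(f"{utilization:.0%} utilization: on-demand ${on_demand_monthly:,.0f}/mo "
          f"vs reserved ${reserved_monthly:,.0f}/mo -> {cheaper}")
```

With a 40% discount the break-even lands at exactly 60% utilization; deeper discounts push the break-even lower, which is why consistently busy clusters favor reserved commitments.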