GPU Clouds for Each GPU
GPU | VRAM | 16-bit inference perf rank | Runpod (instant, $/hr) | Lambda (instant, $/hr) | FluidStack (instant, $/hr)
---|---|---|---|---|---
H100 | 80 GB | 🏆 1 | No | ✅ $1.99 | ✅ $1.99 |
RTX 4090 | 24 GB | 2 | ✅ $0.69 | No | No |
L40 | 48 GB | 3 | ✅ $1.19 | No | No |
RTX 6000 Ada (2022) | 48 GB | 4 | ✅ $1.19 | No | No |
A100 | 80 GB | 5 | ✅ $1.79 | Out of capacity, min 8x | ✅ $1.49 (SXM)
A100 | 40 GB | 5 | No | Out of capacity, normally $1.10 | ✅ $1.20
V100 | 16 GB | 6 | No | Out of capacity, min 8x | ✅ $1.61
RTX 3090 | 24 GB | 7 | ✅ $0.44 | No | ✅ $0.60
RTX A6000 (2020) | 48 GB | 8 | ✅ $0.79 | ✅ $0.80 | ✅ $0.80
RTX 6000 (2018) | 24 GB | 9 | No | Out of capacity, normally $0.50 | No |
A40 | 48 GB | 10 | ✅ $0.79 | No | ✅ $2.07
A10 | 24 GB | 11 | No | ✅ $0.60 | ✅ $0.60
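
If you want to query this table programmatically, here is a minimal sketch: it hard-codes the instant hourly prices from the table into a dict and picks the cheapest available provider for a given GPU. The `PRICES` and `cheapest` names are illustrative, not from any provider's API, and "No" / "Out of capacity" entries are encoded as `None`.

```python
# Instant per-hour prices from the table above; None = not instantly available.
PRICES = {
    "H100 80GB":         {"Runpod": None, "Lambda": 1.99, "FluidStack": 1.99},
    "RTX 4090 24GB":     {"Runpod": 0.69, "Lambda": None, "FluidStack": None},
    "L40 48GB":          {"Runpod": 1.19, "Lambda": None, "FluidStack": None},
    "RTX 6000 Ada 48GB": {"Runpod": 1.19, "Lambda": None, "FluidStack": None},
    "A100 80GB":         {"Runpod": 1.79, "Lambda": None, "FluidStack": 1.49},
    "A100 40GB":         {"Runpod": None, "Lambda": None, "FluidStack": 1.20},
    "V100 16GB":         {"Runpod": None, "Lambda": None, "FluidStack": 1.61},
    "RTX 3090 24GB":     {"Runpod": 0.44, "Lambda": None, "FluidStack": 0.60},
    "RTX A6000 48GB":    {"Runpod": 0.79, "Lambda": 0.80, "FluidStack": 0.80},
    "RTX 6000 24GB":     {"Runpod": None, "Lambda": None, "FluidStack": None},
    "A40 48GB":          {"Runpod": 0.79, "Lambda": None, "FluidStack": 2.07},
    "A10 24GB":          {"Runpod": None, "Lambda": 0.60, "FluidStack": 0.60},
}

def cheapest(gpu: str):
    """Return (provider, $/hr) for the cheapest instant option, or None."""
    options = [(p, c) for p, c in PRICES[gpu].items() if c is not None]
    return min(options, key=lambda pc: pc[1]) if options else None

print(cheapest("A100 80GB"))     # ('FluidStack', 1.49)
print(cheapest("RTX 3090 24GB")) # ('Runpod', 0.44)
```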
For more details and links, or to add comments, see the Google sheet here.