Spheron GPU Catalog

Rent NVIDIA L40S GPUs on Demand from $0.72/hr

48GB GDDR6 ECC Ada Lovelace data center GPU, tuned for inference, video, and visual AI.

At a glance

You can rent an NVIDIA L40S on Spheron from $0.72 per GPU-hour on dedicated instances (99.99% SLA, non-interruptible), with spot pricing cheaper still. Billing is per-minute, there are no long-term contracts, and instances deploy in under 2 minutes across data center partners in multiple regions. Each card ships with 48GB of GDDR6 ECC memory, 4th-generation Tensor Cores with FP8 support, 3rd-generation RT Cores, and hardware AV1 encode. The L40S is purpose-built for production inference of 7B-30B LLMs, Stable Diffusion and SDXL serving, video transcoding pipelines, and mixed AI + graphics workloads that need data center reliability without H100 pricing.

GPU Architecture: NVIDIA Ada Lovelace
VRAM: 48 GB GDDR6 (with ECC)
Memory Bandwidth: 864 GB/s

Technical specifications

GPU Architecture: NVIDIA Ada Lovelace
VRAM: 48 GB GDDR6 (with ECC)
Memory Bandwidth: 864 GB/s
Tensor Cores: 4th Generation
CUDA Cores: 18,176
RT Cores: 3rd Generation
FP32 Performance: 91.6 TFLOPS
FP16 Performance: 183.2 TFLOPS
INT8 Performance: 733 TOPS
System RAM: 128 GB DDR5
vCPUs: 22
Storage: 625 GB NVMe SSD
Network: PCIe Gen4
TDP: 350W

Pricing comparison

Provider | Price/hr | Savings
Spheron (your price) | $0.72/hr | -
RunPod | $0.79/hr | 1.1x more expensive
Lambda Labs | $1.29/hr | 1.8x more expensive
CoreWeave | $1.89/hr | 2.6x more expensive
AWS (g6e.xlarge) | $1.86/hr | 2.6x more expensive
Custom & Reserved

Need More L40S Than What's Listed?

Reserved Capacity

Commit to a duration to lock in availability and better rates

Custom Clusters

8 to 512+ GPUs, specific hardware, InfiniBand configs on request

Supplier Matchmaking

Spheron sources from its certified data center network, negotiates pricing, handles setup

Need more L40S capacity? Tell us your requirements and we'll source it from our certified data center network.

Typical turnaround: 24–48 hours

When to pick the L40S

Scenario 01

Pick L40S if

You're running production inference for 7B-30B LLMs, SDXL serving, or video transcoding pipelines and need ECC plus data center drivers without H100 pricing. It's also the pick when you need FP8 support but not HBM-class bandwidth, or when AV1 hardware encode is on the requirements list.

Scenario 02

Pick A100 80GB instead if

Your workload is training-heavy and bandwidth-bound. The A100 delivers ~2 TB/s of HBM2e bandwidth (vs 864 GB/s GDDR6 on the L40S), making it faster for pre-training and fine-tuning. The L40S wins at inference; the A100 wins at training.
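That bandwidth gap can be made concrete with a rough decode roofline: memory-bound autoregressive decoding streams the full weight set once per generated token, so per-sequence tokens/s is capped at bandwidth divided by model size. A minimal sketch (the 13B FP16 figures are illustrative assumptions, not measured benchmarks):

```python
# Roofline for memory-bound decode: every token reads all weights once,
# so per-sequence tokens/s <= memory_bandwidth / model_size_bytes.
def decode_roofline(bandwidth_gbps: float, params_b: float, bytes_per_param: float) -> float:
    model_bytes = params_b * 1e9 * bytes_per_param
    return bandwidth_gbps * 1e9 / model_bytes

l40s = decode_roofline(864, 13, 2)    # L40S: 864 GB/s, 13B model at FP16
a100 = decode_roofline(2039, 13, 2)   # A100 80GB: ~2 TB/s HBM2e

print(f"L40S upper bound: ~{l40s:.0f} tok/s per sequence")   # ~33
print(f"A100 upper bound: ~{a100:.0f} tok/s per sequence")   # ~78
```

Batched serving amortizes those weight reads across many sequences, which is why aggregate inference throughput lands far above the single-sequence ceiling.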

Scenario 03

Pick RTX 4090 instead if

Your model fits in 24GB and you're running dev / testing workloads where ECC and multi-tenant isolation don't matter. RTX 4090 is roughly half the hourly rate of L40S.

Scenario 04

Pick H100 instead if

You need HBM3 bandwidth (3.35 TB/s) or NVLink for multi-GPU tensor parallelism. H100 is the right pick for 70B+ inference or any training job where memory bandwidth is the bottleneck.


Ideal use cases

Use case / 01

AI Inference at Scale

Run cost-effective inference workloads with 48GB memory and INT8 support for high-throughput production deployments.

Production LLM inference (up to 30B params)
Multi-model serving
Recommendation system deployment
Real-time classification APIs
Use case / 02
🎬

Video Processing & Encoding

Leverage hardware-accelerated video pipelines for live streaming, transcoding, and video analytics at scale.

Live video transcoding
Cloud gaming
Video analytics
Real-time virtual production
Use case / 03
🖼️

Visual Computing & Rendering

Combine AI acceleration with professional graphics capabilities for rendering and visualization workloads.

3D rendering workloads
Virtual desktop infrastructure (VDI)
Architectural visualization
Product design rendering
Use case / 04
🔄

Mixed AI + Graphics Workloads

Take advantage of the L40S's unique combination of AI and graphics acceleration for next-generation creative and visual AI applications.

AI-powered video editing
Generative AI for visual content
Neural radiance fields (NeRF)
Real-time style transfer

Performance benchmarks

LLaMA 2 13B Inference: 2,800 tokens/s (FP16, batch 32)
Stable Diffusion XL: 32 img/min (1024x1024, FP16)
Video Transcoding: 8x real-time (4K H.265 to H.264)
BERT Large Inference: 6,200 seq/s (INT8)
Ray Tracing: 3rd Gen RT Cores (hardware RT; the A100 has none)
VDI User Density: 3x more users per GPU vs previous generation

Serve Llama 3.1 8B at FP8 on L40S

L40S's 48GB GDDR6 ECC and FP8 Tensor Cores make it a strong fit for production 7B-13B inference with heavy concurrency. vLLM gives you an OpenAI-compatible endpoint in one command.

```bash
# SSH into your L40S instance
ssh root@<instance-ip>

# Install vLLM
pip install vllm

# Launch Llama 3.1 8B FP8 with high concurrency
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --quantization fp8 \
  --max-model-len 16384 \
  --max-num-seqs 64 \
  --gpu-memory-utilization 0.9

# Test the endpoint
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"meta-llama/Llama-3.1-8B-Instruct","prompt":"Hello","max_tokens":50}'
```

For larger models (Qwen 2.5 32B at FP8, or Mixtral 8x7B with AWQ), the weights still fit in 48GB with room for KV cache at moderate batch sizes.
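A quick way to sanity-check that claim is to budget the KV cache against what remains after the weights. The sketch below assumes a Qwen 2.5 32B-style shape (64 layers, 8 GQA KV heads of dim 128), FP8 weights at ~1 byte/param, FP16 KV entries, and a guessed 3 GB of runtime overhead; treat all the numbers as rough estimates.

```python
# KV cache bytes per token = 2 (K and V) * layers * kv_heads * head_dim * bytes_per_elem
def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int, bytes_per_elem: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * bytes_per_elem

VRAM_GB = 48
weights_gb = 32     # ~32B params at FP8 (1 byte/param)
overhead_gb = 3     # activations + CUDA context (assumption)
free_gb = VRAM_GB - weights_gb - overhead_gb

per_tok = kv_bytes_per_token(64, 8, 128)        # 0.25 MiB per cached token
tokens = free_gb * 1024**3 // per_tok
print(f"~{per_tok / 2**20:.2f} MiB/token -> ~{tokens:,} cacheable tokens")
```

Roughly 53k cacheable tokens under these assumptions, e.g. about a dozen concurrent sequences at 4k context, which matches "moderate batch sizes".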


Frequently asked questions

How does L40S compare to A100?

The A100 is better suited for training workloads thanks to its HBM2e memory and higher memory bandwidth. The L40S, on the other hand, excels at inference and mixed AI+graphics workloads with its 48GB GDDR6 memory, 3rd generation RT Cores for ray tracing, and lower cost per hour. If your primary use case is inference or visual computing, the L40S offers significantly better value.

Is L40S good for LLM inference?

Yes, the L40S is excellent for LLM inference. With 48GB of GDDR6 memory, it can handle models up to 30B parameters comfortably. It delivers high throughput with INT8 and FP16 precision support, making it ideal for production LLM deployment at a lower cost than H100. For inference-heavy workloads, the L40S provides outstanding price-performance.

What makes L40S unique?

The L40S uniquely combines strong AI acceleration with professional graphics capabilities, including 3rd generation RT Cores for ray tracing and hardware video encode/decode. It is the only data center GPU that offers both powerful AI inference performance and full graphics capabilities, making it ideal for workloads that require both AI and visual computing, such as AI-powered video editing, generative visual content, and virtual production.

Can I use L40S for training?

Yes, the L40S can handle training for small to medium-sized models effectively. However, its GDDR6 memory bandwidth is lower than HBM found in A100 and H100, so for large-scale training workloads, those GPUs are better choices. The L40S truly excels at inference, where its 48GB memory and strong INT8/FP16 performance provide excellent throughput at a competitive price.

What video processing capabilities does L40S support?

The L40S features hardware NVENC/NVDEC engines supporting H.264, H.265, and AV1 codecs at up to 8K resolution. This makes it perfect for cloud gaming, live streaming, video transcoding, and video analytics workloads. The combination of AI acceleration and hardware video processing enables advanced use cases like real-time video analytics and AI-powered content creation.
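As a concrete example, a transcode that exercises the NVDEC/NVENC engines can be driven from ffmpeg. The helper below only assembles the command line; `-hwaccel cuda` and `h264_nvenc` are standard ffmpeg hardware options, but exact flags vary by ffmpeg build, so treat this as a sketch rather than a drop-in pipeline.

```python
# Build an ffmpeg command that decodes (NVDEC) and re-encodes (NVENC) on the GPU.
def nvenc_transcode_cmd(src: str, dst: str, codec: str = "h264_nvenc") -> list[str]:
    return [
        "ffmpeg",
        "-hwaccel", "cuda",              # hardware decode via NVDEC
        "-hwaccel_output_format", "cuda",  # keep frames on the GPU
        "-i", src,
        "-c:v", codec,                   # hardware encode via NVENC
        "-preset", "p5",                 # NVENC quality/speed preset
        "-b:v", "8M",
        "-c:a", "copy",                  # pass audio through untouched
        dst,
    ]

cmd = nvenc_transcode_cmd("input_4k_h265.mp4", "output_h264.mp4")
print(" ".join(cmd))
```

Swap `codec` for `hevc_nvenc` or `av1_nvenc` to target the other hardware encoders.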

How does L40S compare to RTX 4090 for AI?

The L40S has 48GB of memory compared to 24GB on the RTX 4090, along with ECC memory support and data center-grade reliability. This makes the L40S significantly better for production inference workloads where uptime and memory capacity matter. The RTX 4090 is a more affordable option for development and experimentation, but the L40S is the clear choice for deployment at scale.

What's the minimum rental period?

There's no minimum! Spheron charges by the hour with per-minute billing granularity. Rent an L40S for just an hour to test your workload, or keep it running for months. You only pay for what you use with no long-term contracts or commitments.
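The billing arithmetic is easy to reason about. A tiny sketch (the round-up-to-whole-minutes rule and the spot rate are assumptions for illustration; check your invoice for Spheron's exact behavior):

```python
import math

DEDICATED_RATE_PER_HR = 0.72  # L40S dedicated on-demand rate

def estimated_cost(minutes_used: float, rate_per_hr: float = DEDICATED_RATE_PER_HR) -> float:
    """Bill whole minutes at the hourly rate divided by 60 (rounding rule assumed)."""
    billable_minutes = math.ceil(minutes_used)
    return round(billable_minutes * rate_per_hr / 60, 4)

print(estimated_cost(90))         # 90 min on dedicated -> 1.08
print(estimated_cost(90, 0.40))   # 90 min at a hypothetical spot rate -> 0.6
```

So a 90-minute test run costs about a dollar, which is why trying a workload before committing is cheap.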

Can I run multiple models on L40S?

Yes, the 48GB of GDDR6 memory allows you to run 2-3 smaller models (around 7B parameters each) or 1 larger model (up to 30B parameters) simultaneously. The L40S also supports NVIDIA MPS (Multi-Process Service) for efficient multi-process GPU sharing, enabling you to serve multiple models concurrently with optimized resource utilization.
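A back-of-envelope check for that claim: FP16 weights take about 2 bytes per parameter, plus some per-model runtime overhead. The sketch below uses an assumed 1.5 GB overhead per model; real overhead depends on the serving stack.

```python
def fits_in_vram(models_b_params, vram_gb=48, bytes_per_param=2, overhead_gb_each=1.5):
    """Rough check: weight bytes + per-model overhead vs VRAM (remainder is KV cache)."""
    weights_gb = sum(p * bytes_per_param for p in models_b_params)
    total = weights_gb + overhead_gb_each * len(models_b_params)
    return total <= vram_gb, round(vram_gb - total, 1)

# Three 7B models at FP16: 3 * 14 GB of weights plus overhead
print(fits_in_vram([7, 7, 7]))                 # (True, 1.5)

# A single 30B model needs FP8 (1 byte/param) to leave KV-cache headroom
print(fits_in_vram([30], bytes_per_param=1))   # (True, 16.5)
```

The second case shows why a 30B model is served quantized on this card: at FP16 its weights alone would need ~60 GB.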

What regions are L40S GPUs available in?

L40S GPUs are currently available in the US, Europe, and Canada. We're continuously expanding capacity and regions. Check our app or contact sales for specific region requirements.

Do you offer support for production deployments?

Our platform is plug-and-play for standard deployments. For 100+ GPU clusters, we provide hands-on support via Slack or Discord, plus sourcing assistance. Enterprise customers get dedicated support channels and SLA guarantees.

Book a call with our team

What's the difference between dedicated and spot L40S instances?

Dedicated L40S instances are non-interruptible, run on a 99.99% SLA, and bill per-minute at the on-demand rate. Spot instances run on spare capacity at meaningfully lower rates but can be preempted when dedicated demand rises. Use spot for fault-tolerant workloads: batch inference, LoRA fine-tuning with checkpointing every 15-30 minutes, or video transcoding jobs that can resume. Use dedicated for customer-facing inference endpoints, live streaming pipelines, and any SLA-bound serving workload. Both tiers live in the same control plane, so you can mix them across a project.
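The checkpointing cadence mentioned above is simple to wire up. A framework-agnostic sketch (the file path and interval are placeholders; with PyTorch you would swap the JSON dump for `torch.save` of model and optimizer state):

```python
import json, os, time

CKPT = "checkpoint.json"    # placeholder path; put this on persistent storage
INTERVAL_S = 15 * 60        # checkpoint every 15 minutes

def load_checkpoint() -> int:
    """Return the last saved step, or 0 on a fresh start."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def save_checkpoint(step: int) -> None:
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step}, f)
    os.replace(tmp, CKPT)   # atomic rename: survives preemption mid-write

def train(total_steps, train_step):
    step = load_checkpoint()  # resume where the last spot instance died
    last = time.monotonic()
    while step < total_steps:
        train_step(step)
        step += 1
        if time.monotonic() - last >= INTERVAL_S:
            save_checkpoint(step)
            last = time.monotonic()
    save_checkpoint(step)
```

If the spot instance is preempted, relaunching the same script resumes from the last checkpoint instead of restarting the job.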
