The Rising Demand for GPU Power in LLM Training and How DePIN Projects Capture It
Large Language Models (LLMs) now write code, compose music, and generate lifelike visuals. Behind these breakthroughs lies the true engine of AI progress: GPU compute. Every new model depends on access to high-performance hardware, and that access is increasingly scarce.
As LLMs expand in scale and complexity, centralized clouds are struggling to keep up. GPU supply is limited, prices are rising, and innovation is slowing. Into this gap comes a new model of infrastructure coordination called Decentralized Physical Infrastructure Networks (DePINs).
DePINs aggregate unused GPU power from data centers, miners, and individuals into tokenized marketplaces that provide affordable, on-demand compute. This article examines the global GPU crunch, how DePINs are transforming access to compute, and the leading projects driving the shift, including Fluence, Akash, Render, Aethir, and io.net. It closes with key takeaways for the investors, founders, and analysts shaping the future of AI infrastructure.
The Great GPU Squeeze: Quantifying the AI Compute Demand
AI’s expansion has been extraordinary. The global AI market is projected to grow from $234.4 billion in 2024 to $1.77 trillion by 2032, while the data center GPU market is expected to rise from $14.48 billion to $190.1 billion over the same period. GPUs have become the clearest measure of AI’s momentum.
GPUs process thousands of operations in parallel, making them essential for the matrix multiplications and tensor operations that power deep learning. CPUs, built for sequential tasks, cannot match this throughput, which is why GPUs have become the foundation of modern AI.
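A quick experiment makes the difference concrete. The sketch below is a minimal CPU-vs-GPU comparison in PyTorch; it assumes a CUDA-capable machine, and exact timings vary widely by hardware.

```python
import time

import torch

# Compare one large matrix multiplication on CPU vs. GPU.
assert torch.cuda.is_available(), "this sketch assumes a CUDA-capable GPU"

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

start = time.perf_counter()
a @ b  # runs on a handful of CPU cores
cpu_time = time.perf_counter() - start

a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()  # finish host-to-device copies before timing
start = time.perf_counter()
a_gpu @ b_gpu  # thousands of CUDA cores work on the tiles in parallel
torch.cuda.synchronize()  # kernels launch asynchronously, so wait for them
gpu_time = time.perf_counter() - start

print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speedup: {cpu_time / gpu_time:.1f}x")
```

On typical hardware the GPU finishes the same multiplication one to two orders of magnitude faster, which is exactly the gap deep learning workloads exploit.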
Training state-of-the-art models now demands vast hardware resources. Meta’s Llama 3.1 405B consumed roughly 30.8 million GPU hours on a cluster of 16,384 NVIDIA H100s, costing tens of millions of dollars in hardware, energy, and operations. Fine-tuning and inference workloads add further pressure, turning compute availability into a decisive competitive factor.
The GPU shortage stems from limited production, high costs, and restricted access. NVIDIA controls most of the high-end market, and chips like the H100 and B200 are slow to manufacture. Renting an H100 from AWS, Azure, or Google Cloud costs $3 to $7 per hour, excluding bandwidth and storage.
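Putting those two figures together gives a rough sense of scale. The back-of-the-envelope below simply multiplies the reported GPU hours by the quoted hourly range; it ignores storage, bandwidth, failed runs, and engineering time.

```python
# Multiply the reported GPU hours by the quoted hyperscaler rate range.
GPU_HOURS = 30_840_000  # ~30.8M H100 hours reported for Llama 3.1 405B

for rate in (3.0, 5.0, 7.0):  # low / mid / high USD per GPU-hour
    print(f"${rate:.2f}/hr -> ${GPU_HOURS * rate:,.0f}")
# $3.00/hr -> $92,520,000
# $5.00/hr -> $154,200,000
# $7.00/hr -> $215,880,000
```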
Long waitlists and rigid contracts keep startups and researchers sidelined, while reliance on single providers inflates costs and limits flexibility. The result is a compute bottleneck where access to GPUs increasingly dictates who can advance and who falls behind.
DePIN: The Decentralized Answer to the GPU Crunch
The global GPU shortage has exposed how centralized clouds struggle to scale affordably. Decentralized Physical Infrastructure Networks (DePINs) are filling that gap with tokenized marketplaces that connect compute supply and demand across the world. By activating idle GPUs from data centers, miners, and individuals, DePINs transform unused capacity into productive, income-generating infrastructure.
Providers earn rewards in tokens such as FLT, AKT, or RENDER, while users get instant, low-cost access to compute without contracts or corporate markups. This creates a flywheel where more supply lowers prices, which attracts demand, which in turn drives revenue back to providers, as the toy model below illustrates. The model scales naturally and remains resilient since no single entity controls the network.
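The flywheel can be made concrete with a toy model. Every constant below (starting price, elasticity, growth rate) is an illustrative assumption, not data from any real network.

```python
# Toy flywheel: more supply -> lower price -> more demand -> more revenue
# -> more providers. All constants are illustrative assumptions.
price = 3.00      # USD per GPU-hour
supply = 10_000   # GPUs online

for step in range(1, 6):
    price = max(1.00, price * 0.90)             # new supply pushes price down
    demand = 2_000_000 * (3.00 / price) ** 1.5  # cheaper compute, more hours bought
    revenue = demand * price                    # flows to providers as token rewards
    supply = int(supply * 1.08)                 # revenue attracts new providers
    print(f"step {step}: ${price:.2f}/hr, {supply:,} GPUs, ${revenue:,.0f}/period")
```

Because demand in this model is price-elastic, total provider revenue rises even as the hourly rate falls, which is what keeps new supply joining the network.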
Why DePINs are gaining traction:
- Massive cost advantage: Compute is typically 60 to 80% cheaper than hyperscalers.
- Transparent pricing: Real-time markets replace opaque contracts.
- Scalable and resilient: Supply expands dynamically with no single point of failure.
- Global access: Anyone can provide or consume compute, democratizing AI infrastructure.
These strengths are turning DePINs into credible competitors to the centralized cloud model, offering cost, access, and flexibility that legacy providers struggle to match.
The New Compute Leaders and the Price Reality
Several DePIN networks now operate at commercial scale, each with its own approach to balancing cost, reliability, and openness.
Leading Projects
- Fluence (FLT): Enterprise-grade infrastructure delivering GPU containers, virtual machines, and bare-metal servers with high-end GPUs such as the NVIDIA H200 and B200. Payments are in USDC, with staking and a buyback-and-burn model supporting the FLT token. Early results include $1 million in revenue and more than $3.8 million in customer cloud savings.
- Akash (AKT): A decentralized marketplace that matches workloads to providers through reverse auctions (see the sketch after this list). Open to anyone, it aggregates diverse GPUs and generates over $4.2 million in annual recurring revenue.
- Render (RENDER): Built on consumer GPUs from creators and gamers. Its Burn-Mint Equilibrium model ties token supply to real usage, with clients such as NASA and the Las Vegas Sphere.
- Aethir (ATH): Focused on enterprise AI and gaming workloads, with $125 million in annualized revenue and competitive pricing at $1.25 per hour for H100 GPUs.
- io.net (IO): The largest DePIN network by scale, connecting over 30,000 GPUs from independent operators and offering compute from $1.19 per hour for H100s.
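To picture Akash’s reverse-auction mechanic, here is a minimal sketch: a tenant names the GPU it needs, providers bid the price down, and the lowest qualifying bid wins. This is illustrative Python; the field names and prices are assumptions, not the Akash SDL or API.

```python
from dataclasses import dataclass

# Minimal reverse auction: the lowest bid matching the requested GPU wins.
@dataclass
class Bid:
    provider: str
    gpu_model: str
    price_per_hour: float  # USD per GPU-hour (illustrative)

def pick_winner(bids: list[Bid], required_gpu: str) -> Bid:
    qualifying = [b for b in bids if b.gpu_model == required_gpu]
    if not qualifying:
        raise ValueError(f"no bids offering {required_gpu}")
    return min(qualifying, key=lambda b: b.price_per_hour)  # lowest price wins

bids = [
    Bid("provider-a", "H100", 1.40),
    Bid("provider-b", "H100", 1.05),
    Bid("provider-c", "A100", 0.90),  # cheapest, but the wrong GPU
]
print(pick_winner(bids, "H100"))  # -> provider-b at $1.05/hr
```

Competition among providers, rather than a posted corporate rate card, is what pulls prices toward the marginal cost of the hardware.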
Comparative Analysis: DePIN vs. The Cloud Giants
The price gap between DePIN networks and traditional cloud providers is now impossible to ignore. Across comparable NVIDIA H100 instances, decentralized providers consistently deliver compute at a fraction of the cost charged by hyperscalers:
| Provider / Platform | GPU Model | Price per Hour (USD) | Provider Type |
| --- | --- | --- | --- |
| Akash Network | NVIDIA H100 80GB | $1.05 | DePIN |
| Aethir | NVIDIA H100 80GB | $1.25 | DePIN |
| Fluence | NVIDIA H100 80GB | $1.50 | DePIN |
| io.net | NVIDIA H100 80GB | $2.19 | DePIN |
| Vast.ai | NVIDIA H100 80GB | $1.87 | Marketplace |
| Google Cloud | NVIDIA H100 80GB | $3.00 | Hyperscaler |
| AWS (Amazon EC2) | NVIDIA H100 80GB | $3.90 | Hyperscaler |
| Microsoft Azure | NVIDIA H100 80GB | $6.98 | Hyperscaler |
| Oracle Cloud | NVIDIA H100 80GB | $10.00 | Hyperscaler |
What the Numbers Reveal
DePIN networks are typically 60 to 80% cheaper than centralized clouds, even before accounting for hidden fees such as data egress. Akash leads on price efficiency but carries a mix of data center and consumer GPUs, while Fluence and Aethir charge slightly higher rates for enterprise-grade data center GPUs.
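As a sanity check on that range, the snippet below compares each DePIN rate from the table to the average hyperscaler rate; the exact percentages shift depending on which providers you average.

```python
# Compare each DePIN H100 rate from the table to the average hyperscaler rate.
depin = {"Akash": 1.05, "Aethir": 1.25, "Fluence": 1.50, "io.net": 2.19}
hyperscalers = {"Google Cloud": 3.00, "AWS": 3.90, "Azure": 6.98, "Oracle": 10.00}

avg_cloud = sum(hyperscalers.values()) / len(hyperscalers)  # ~$5.97/hr
for name, rate in depin.items():
    saving = (1 - rate / avg_cloud) * 100
    print(f"{name}: ${rate:.2f}/hr, ~{saving:.0f}% below the hyperscaler average")
# Akash ~82%, Aethir ~79%, Fluence ~75%, io.net ~63%
```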
The cost difference between DePINs and centralized clouds reflects structural efficiency: decentralized networks tap idle capacity worldwide without building billion-dollar data centers or carrying heavy corporate overhead. That gap marks a turning point where open networks can beat centralized clouds on cost while closing in on them in capability.
Conclusion
AI’s demand for GPU power has outpaced what centralized clouds can deliver. DePIN networks convert idle global capacity into open, market-driven compute that lowers costs and speeds access for builders and enterprises.
As models scale, compute should not stay concentrated among a few hyperscalers. DePINs allow anyone to contribute or consume GPU power through transparent, real-time markets, creating a fairer foundation for innovation.
Reliability and regulation will take time to mature, but the direction is clear. DePINs are reshaping cloud infrastructure, turning GPU access into a shared global resource that fuels the next generation of AI.