CoreWeave
CoreWeave provides on-demand GPU cloud infrastructure for AI training and inference workloads.
CoreWeave operates a GPU cloud infrastructure platform providing on-demand access to NVIDIA H100 and H200 GPUs for AI model training and inference at scale. The company manages its own data centers with Kubernetes-based orchestration and proprietary Mission Control software, enabling customers to provision thousands of GPUs in minutes rather than weeks. CoreWeave serves AI labs, enterprises, and cloud providers who need flexible, transparent GPU capacity without egress fees or hidden costs.
Problem solved
AI developers and enterprises need access to massive GPU clusters for training and inference but face weeks of hardware procurement delays, inflexible capacity, and opaque pricing from traditional cloud providers.
Target customer
AI labs and enterprises (OpenAI, Mistral AI, IBM, Microsoft), cloud providers, and companies training or deploying large language models requiring flexible GPU capacity at scale.
Founders
Michael Intrator
CEO & Chairman
Previously CEO of Hudson Ridge Asset Management (natural gas hedge fund) and Principal Portfolio Manager at Natsource Asset Management; holds B.A. in Political Science from Binghamton University and M.P.A. from Columbia University's School of International and Public Affairs.
Brian Venturo
Chief Strategy Officer
Co-founder with a background in commodities trading.
Brannin McBee
Chief Development Officer
Co-founder with a background in commodities trading.
Peter Salanki
Chief Technology Officer
Co-founder responsible for platform architecture and cluster management software.
Funding history
Strategic Investment
$100M
April 2023
Led by NVIDIA
Debt Financing
$2.3B
August 2023
Led by Magnetar Capital, Blackstone
Series C
$1.1B
May 2024
Led by Coatue Management
Credit Line
$650M
October 2024
Led by Goldman Sachs, JPMorgan Chase, Morgan Stanley
IPO
$1.5B
March 28, 2025
Led by Public Markets
Strategic Investment
$2B
January 2026
Led by NVIDIA
Convertible Notes
$3.5B
April 2026
Led by Public Markets
Total raised:
$28B+
Pricing
Usage-based, transparent pricing: H100 PCIe at $4.25/hour (GPU only), 8-GPU HGX nodes from ~$49.24/hour (~$6.15/GPU bundled). Up to 60% discounts for committed usage. No data egress fees, no IOPS charges for standard storage, free internal network transfers.
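The published rates above lend themselves to a quick back-of-the-envelope estimate. The sketch below uses only the figures quoted in this profile ($4.25/GPU-hour for H100 PCIe, $49.24/hour per 8-GPU HGX node, up to 60% off for committed usage); actual quotes will vary by region, instance type, and contract terms.

```python
# Hypothetical cost sketch based on the rates quoted above.
# Not an official CoreWeave pricing calculator.

H100_PCIE_GPU_HOURLY = 4.25    # $/GPU-hour, GPU component only
HGX_NODE_HOURLY = 49.24        # $/hour for an 8-GPU HGX node (~$6.15/GPU)
MAX_COMMITTED_DISCOUNT = 0.60  # up to 60% off for committed usage

def training_run_cost(num_gpus: int, hours: float,
                      rate_per_gpu_hour: float = HGX_NODE_HOURLY / 8,
                      committed_discount: float = 0.0) -> float:
    """Estimate the dollar cost of a GPU run at a flat hourly rate."""
    if not 0.0 <= committed_discount <= MAX_COMMITTED_DISCOUNT:
        raise ValueError("discount must be between 0% and 60%")
    return num_gpus * hours * rate_per_gpu_hour * (1 - committed_discount)

# A 512-GPU run for 72 hours at the bundled HGX rate,
# on-demand vs. with the maximum committed-usage discount:
on_demand = training_run_cost(512, 72)
committed = training_run_cost(512, 72, committed_discount=0.60)
print(f"on-demand: ${on_demand:,.2f}  committed: ${committed:,.2f}")
```

Because billing is purely usage-based with no egress fees, the hourly rate times GPU-hours is, per this profile, close to the total bill.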
Notable customers
OpenAI, Microsoft (60%+ of 2024 revenue), Mistral AI, IBM, Meta Platforms, Anthropic
Integrations
Kubernetes, NVIDIA hardware ecosystem, proprietary Mission Control software, datacenter management tools
Tech stack
GSAP (JavaScript frameworks)
jQuery (JavaScript libraries)
core-js (JavaScript libraries)
Open Graph
HTTP/3
DocuSign
HubSpot Analytics (Analytics)
Matomo Analytics (Analytics)
Linkedin Insight Tag (Analytics)
Google Analytics (Analytics)
Facebook Pixel (Analytics)
Google Font API (Font scripts)
Google Hosted Libraries (CDN)
cdnjs (CDN)
Cloudflare (CDN)
HubSpot (Marketing automation)
Google Tag Manager (Tag managers)
Webflow (Page builders)
Sendgrid (Email)
Cloudflare Rocket Loader (Performance)
Workable (Recruitment & staffing)
Competitors
Lambda Labs
Smaller, more developer-focused platform aimed primarily at academic and smaller AI research teams; less enterprise-scale infrastructure and fewer long-term contract options.
Crusoe Energy
Focuses on energy-efficient GPU computing with sustainability angle; smaller customer base and infrastructure scale.
AWS (SageMaker), Google Cloud (Vertex AI), Microsoft Azure (AI Compute)
Larger hyperscalers with broader service ecosystems but less GPU-specialized infrastructure and higher egress costs; CoreWeave focuses purely on GPU optimization.
Why this matters: CoreWeave has become the critical GPU infrastructure layer for leading AI labs (OpenAI, Anthropic, Meta) by solving the capacity crunch created by exploding LLM demand. Its $28B+ in total financing (mostly debt secured by GPU inventory) and March 2025 IPO signal institutional validation of GPU cloud infrastructure as essential AI plumbing, positioning CoreWeave as a potential strategic infrastructure winner in the AI era.
Best for: AI labs, enterprises, and cloud providers needing instant access to thousands of GPUs for training, fine-tuning, and inference at transparent, predictable costs without procurement delays.
Use cases
Large Language Model Training at Scale
AI research teams (like OpenAI) provision 10,000+ GPUs for LLM training runs via CoreWeave's Kubernetes orchestration, scale down after completion, and avoid weeks of hardware procurement. This is far harder on traditional cloud providers, which lack GPU specialization and charge egress fees.
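On a Kubernetes-native platform, a large training run is typically requested declaratively: a Job manifest asks for GPUs via the standard `nvidia.com/gpu` extended resource, and the scheduler fans pods across GPU nodes. The sketch below is illustrative only; the image name and job parameters are placeholders, not CoreWeave-specific values.

```python
import json

# Illustrative sketch: a minimal Kubernetes Job manifest requesting GPUs
# through the standard `nvidia.com/gpu` extended resource. Image and
# names are placeholders, not CoreWeave-specific values.

def gpu_training_job(name: str, image: str, gpus_per_pod: int,
                     parallelism: int) -> dict:
    """Build a batch/v1 Job spec that runs a trainer across GPU nodes."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "parallelism": parallelism,   # pods scheduled concurrently
            "completions": parallelism,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": image,   # placeholder image
                        "resources": {
                            # GPUs are requested as whole units; for
                            # extended resources only limits are set.
                            "limits": {"nvidia.com/gpu": gpus_per_pod},
                        },
                    }],
                },
            },
        },
    }

# 16 pods x 8 GPUs = 128 GPUs requested for one run:
manifest = gpu_training_job("llm-pretrain",
                            "registry.example.com/trainer:latest",
                            gpus_per_pod=8, parallelism=16)
print(json.dumps(manifest, indent=2))
```

Scaling down after the run is the same operation in reverse: delete the Job and the GPU capacity is released back to the pool, which is what makes burst-style provisioning practical.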
Production AI Inference Serving
Companies like Anthropic use CoreWeave to run inference for Claude at production scale, leveraging committed capacity pricing to reduce per-token costs. Mission Control software monitors hardware performance in real-time to ensure SLAs are met.
Fine-Tuning and Model Adaptation
Enterprises needing to adapt foundation models to proprietary data can burst to thousands of GPUs on CoreWeave in minutes, avoiding capital expenditure on hardware while maintaining cost transparency via hourly billing.
Multi-Cloud AI Infrastructure Strategy
Large tech companies like Meta use CoreWeave alongside internal infrastructure to meet peak demand without overprovisioning, leveraging long-term contracts and free internal bandwidth to optimize total cost of ownership.
Alternatives
AWS SageMaker / EC2 GPU instances
Broader AWS ecosystem integration but higher egress fees, less GPU-optimized infrastructure, and longer procurement for large-scale capacity.
Google Cloud Vertex AI / TPU/GPU clusters
Strong integration with Google's AI tools but TPUs have different architecture trade-offs; less transparent pricing and smaller dedicated GPU inventory.
Microsoft Azure AI Compute
Good for enterprises already in Microsoft ecosystem but less GPU-specialized, higher egress costs, and weaker focus on flexible burst capacity.
Lambda Cloud
More developer-friendly UI and simpler pricing but significantly smaller infrastructure scale, fewer long-term contract options, and less enterprise support.
FAQ
What does CoreWeave do?
CoreWeave operates a GPU cloud infrastructure platform providing on-demand access to NVIDIA H100 and H200 GPUs for AI model training and inference. The platform runs on Kubernetes with proprietary Mission Control software for performance monitoring and cluster orchestration. It allows customers to provision thousands of GPUs in minutes with transparent, usage-based pricing and no egress fees.
How much does CoreWeave cost?
H100 PCIe pricing starts at $4.25/hour (GPU component only), and 8-GPU HGX node setups are approximately $49.24/hour (~$6.15 per GPU when bundled). CoreWeave offers up to 60% discounts for committed usage contracts. Data egress and internal network transfers are free.
Who uses CoreWeave?
OpenAI, Microsoft (60%+ of revenue), Mistral AI, IBM, Meta Platforms, Anthropic, and other leading AI labs and enterprises requiring large-scale GPU capacity for training and inference workloads.
How does CoreWeave compare to AWS, Google Cloud, and Azure?
CoreWeave specializes exclusively in GPU infrastructure with transparent pricing and no egress fees, whereas hyperscalers offer broader ecosystems but less GPU optimization and higher data egress costs. CoreWeave's Kubernetes-native platform enables provisioning thousands of GPUs in minutes, while traditional cloud providers have longer procurement cycles. CoreWeave is better for GPU-intensive AI workloads; hyperscalers are better for integrated multi-service deployments.
What makes CoreWeave different from competitors?
CoreWeave owns and operates its own data centers with NVIDIA's latest GPUs, enabling instant capacity provisioning without long hardware procurement delays. Its proprietary Mission Control software provides real-time hardware performance monitoring. Transparent pricing with no egress fees and flexible reserved/on-demand capacity makes it more cost-predictable than alternatives for large-scale AI workloads.
Tags
GPU cloud infrastructure
AI compute
machine learning
on-demand GPU capacity
Kubernetes orchestration
NVIDIA H100
transparent pricing
production AI inference