Nscale

Nscale provides company-owned AI data center infrastructure for enterprise model training and inference.
Series C · $3.28B total raised · Founded 2023 · London, England · 356 employees
Nscale builds and operates AI-native data centers providing vertically integrated GPU cloud infrastructure for training, fine-tuning, and running large language models. Unlike competitors reliant on expensive third-party colocation, Nscale owns and operates its own facilities, enabling cost-effective deployment of 10,000+ GPU clusters with up to 20 MW of contiguous capacity. The company offers serverless inference, managed Kubernetes orchestration, and domain-specific model fine-tuning through an API-driven platform with pay-per-use pricing.
Problem solved
Organizations lack access to scalable, cost-effective GPU infrastructure without expensive colocation fees and complex cluster management overhead.
Target customer
Enterprise AI teams and ML platforms requiring large-scale GPU clusters for model training, fine-tuning, and inference workloads; AI startups building production generative AI applications
Founders
Josh Payne
Founder & CEO
Serial entrepreneur with background in recruitment, renewable energy, and cryptocurrency mining; previously Founder and Executive Chairman of Arkon Energy; experienced in energy markets, data centers, and hardware supply chains.
Dan Bathurst
Chief Product Officer
Product leader with 10+ years launching platforms across cloud, finance, AI, and Web3; shaped Nscale's GPU cloud services and brand strategy.
Ron Huisman
Chief Administrative Officer
25+ years in finance and capital markets; previously CFO at AtlasEdge Data Centres and senior finance roles at Liberty Global.
Sam Huckaby
President of Data Centers
Previously Senior Vice President at Oracle, leading global data center build and operations for Oracle Cloud Infrastructure; before that, COO for North America at Vantage Data Centers.
Funding history
Seed · $30M · December 2023 · Lead: Unknown · Other investors: Unknown
Series A · $155M · December 2024 · Leads: NordicNinja, Voima Ventures · Other investors: Unknown
Series B · $1.1B · September 2025 · Lead: Aker ASA · Participants: Sandton Capital, Blue Owl, Dell, Fidelity Management & Research Company, G Squared, Nokia, NVIDIA, Point72, T.Capital
Pre-Series C SAFE · $433M · October 2025 · Leads: Blue Owl Managed Funds, Dell, NVIDIA, Nokia · Participants: Series B investors and new investors
Series C · $2B · March 9, 2026 · Lead: Unknown · Participants: Existing investors
Total raised: $3.28B
Pricing
Pay-per-request billing for text-based inference models; pay-per-use pricing for serverless inference. Custom enterprise pricing is available for large or specialized workloads. Specific rates are not publicly detailed.
Integrations
Kubernetes (via Nscale Kubernetes Service), Slurm clusters, NVIDIA, Dell, Oracle Cloud
Tech stack
JavaScript: GSAP (frameworks), Splide, jQuery, core-js (libraries), Rive (graphics)
CDN: Unpkg, jsDelivr, cdnjs, Amazon S3, Cloudflare
Security & protocols: HTTP/3, HSTS
Marketing & analytics: HubSpot (marketing automation), Google Tag Manager (tag managers)
Site & hosting: Webflow (page builders), Amazon Web Services (PaaS), Open Graph, LottieFiles (CMS), CookieYes (cookie compliance)
Competitors
Lambda Labs
Third-party colocation dependent with higher operational costs; lacks vertical integration of compute, networking, and data center operations.
CoreWeave
Relies on third-party data center partnerships; cannot guarantee large contiguous GPU cluster capacity like Nscale's owned facilities.
AWS SageMaker
Broader cloud platform with less specialization in owned GPU infrastructure; large-scale GPU deployments carry hyperscaler pricing markups.
Google Cloud AI Platform
General-purpose cloud AI services without Nscale's vertically integrated data center ownership and cost advantages.
Why this matters: Nscale has raised $3.28B in total, including a $1.1B Series B reported as the largest in UK history, by attacking a real cost problem in AI infrastructure: competitors' reliance on expensive third-party colocation. With vertically integrated data centers and strategic backing from NVIDIA, Dell, and Aker ASA, Nscale is positioned to become critical infrastructure for enterprise AI workloads.
Best for: Enterprise AI teams and ML platforms needing cost-effective, large-scale GPU infrastructure for production model training and inference without complex cluster management.
Use cases
Large-scale model training
AI research teams training foundation models on 10,000+ GPU clusters with up to 20 MW power capacity. Nscale's owned infrastructure eliminates colocation bottlenecks that prevent competitors from scaling beyond 5,000 GPUs.
Serverless inference at scale
Production applications deploying multiple LLM variants across regions with autoscaling. Teams launch inference endpoints in minutes via API without managing Kubernetes clusters or bare metal orchestration.
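As a sketch of what a pay-per-request serverless call might look like, assuming an OpenAI-compatible chat completions API; the endpoint URL, model name, and authentication scheme below are hypothetical placeholders, not Nscale's documented interface:

```python
import json
import urllib.request

# Hypothetical endpoint -- consult Nscale's actual API documentation.
ENDPOINT = "https://inference.example.com/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble one chat-completion request; each call is billed per request."""
    payload = {
        "model": model,                                      # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,                                   # caps billed output
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build (but do not send) a request against the hypothetical endpoint.
req = build_request("llama-3.1-8b", "Summarize our Q3 report.", "NSCALE_API_KEY")
```

Because billing is per request, batching prompts and capping `max_tokens` are the two direct levers on spend under this model.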
Domain-specific model fine-tuning
Enterprise data teams customizing frontier models for industry-specific tasks (legal, financial, medical) using Nscale's managed fine-tuning pipelines and data services without DevOps overhead.
Alternatives
AWS SageMaker
Broader cloud platform with less GPU specialization; suitable for teams wanting integrated ML tools but willing to pay colocation markups.
Google Cloud Vertex AI
General-purpose AI platform with less focus on owned GPU infrastructure; better for teams prioritizing broader GCP ecosystem integration.
Lambda Labs
Simpler GPU-as-a-service without Nscale's vertical integration; lower-cost entry point but limited to smaller clusters.
Together AI
API-first inference platform with less focus on training infrastructure; optimized for model serving rather than large-scale training workloads.
FAQ
What does Nscale do?
Nscale builds and operates AI-native data centers providing GPU cloud infrastructure for training, fine-tuning, and running large language models. The company owns its facilities rather than relying on third-party colocation, enabling cost-effective deployment of massive GPU clusters with seamless orchestration via Kubernetes or Slurm.
How much does Nscale cost?
Nscale uses a pay-per-request model for inference and pay-per-use pricing for serverless deployments. Text-based models are billed per request. Enterprise custom pricing is available for large-scale or specialized workloads. Exact rates vary by model and deployment type.
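Since exact rates are not published, any back-of-envelope estimate needs placeholder numbers; the per-request and per-token rates below are illustrative assumptions, not Nscale's pricing:

```python
def estimate_inference_cost(num_requests: int,
                            avg_tokens_per_request: int,
                            rate_per_request: float = 0.0001,
                            rate_per_1k_tokens: float = 0.002) -> float:
    """Toy pay-per-use estimate in dollars; both default rates are
    placeholder assumptions, not published Nscale pricing."""
    request_cost = num_requests * rate_per_request
    token_cost = num_requests * avg_tokens_per_request / 1000 * rate_per_1k_tokens
    return round(request_cost + token_cost, 2)

# 1M requests averaging 500 tokens each, at the placeholder rates:
#   1,000,000 * 0.0001            = $100 in request charges
#   1,000,000 * 500/1000 * 0.002  = $1,000 in token charges
```

The point of the sketch is the billing structure (a fixed per-request component plus a usage component), not the numbers; swap in quoted rates once you have them.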
What are alternatives to Nscale?
AWS SageMaker (broader cloud platform), Google Cloud Vertex AI (general-purpose AI services), Lambda Labs (simpler GPU-as-a-service), Together AI (API-first inference), and CoreWeave (GPU cloud with third-party colocation). Nscale differentiates via owned data center infrastructure and large contiguous cluster capacity.
Who uses Nscale?
Enterprise AI teams, ML platforms, and AI startups requiring large-scale GPU clusters for model training, fine-tuning, and production inference. Target customers range from Series A-C AI startups to Fortune 500 enterprises building generative AI applications.
How does Nscale compare to AWS SageMaker?
Nscale is GPU-specialist infrastructure with owned data centers and cost advantages from eliminating colocation markups. SageMaker is a broader ML platform with tighter AWS ecosystem integration but higher costs for large-scale GPU workloads. Choose Nscale for large-scale training/inference; choose SageMaker for integrated ML tools and broader cloud services.
Tags
GPU infrastructure · AI data centers · model training · inference · cloud computing · machine learning · enterprise AI