Verda (formerly DataCrunch)

Verda provides affordable, developer-friendly GPU cloud infrastructure for AI workloads.
Venture Round $200M total Founded 2020 Helsinki, Southern Finland 110 employees
Verda is a vertically integrated AI cloud infrastructure provider that manages the entire stack from physical servers and data centers to developer tools for building and deploying AI workloads. The company offers GPU instances, bare-metal clusters, and managed endpoints for AI inference at up to 90% lower cost than hyperscalers. Built by experienced cloud entrepreneurs, Verda targets AI developers and teams who need production-grade GPU resources with a developer-first experience and no sales friction.
Problem solved
AI developers face prohibitively expensive GPU cloud costs from hyperscalers and complex procurement processes that delay time-to-market for AI applications.
Target customer
AI development teams, ML engineers, and organizations running inference and training workloads at scale who need cost-effective GPU compute without enterprise sales cycles.
Founders
Ruben Bryon
Founder & CEO
Belgian entrepreneur with extensive experience in cloud software and GPU virtualization who launched the first self-serve GPU cloud from a Helsinki garage in 2020.
Milosz
Co-founder
Co-founder from the original garage launch in Helsinki in 2020.
Tamir
Co-founder
Co-founder from the original garage launch in Helsinki in 2020.
Funding history
Early Stage ~€1M 2020 No lead investor · Funded through customer prepayments
Seed $13M 2024 Led by J12 Ventures · Lasse Espeholt, Nal Kalchbrenner, Oskari Saarenmaa, Henrik Rosendahl, Anders Bo Pedersen, Maaike Bryon, Ari Tulla, Tuomo Riekki, Tapio Tolvanen
Series A $64M January 2026 Led by byFounders · Skaala, Varma, Tesi, Nordea, Armada Credit Partners, Danske Bank, Norion Bank, LocalTapiola
Series B $117M April 2026 Led by Lifeline Ventures · byFounders, Tesi, Varma, Nordea, and other financial institutions
Total raised: $200M
Pricing
Fixed hourly rates for on-demand GPU instances with no price fluctuations mid-workload. Billing is pay-as-you-go in 10-minute increments, or via long-term prepaid contracts at lower rates. Managed inference pricing example: $0.0010/minute for audio transcription, or $0.0020/minute with alignment or diarization.
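The per-minute rates above make cost estimation straightforward. A minimal sketch, using only the rates quoted on this page (the helper function itself is illustrative arithmetic, not a Verda billing tool):

```python
# Estimate managed-inference cost from the per-minute rates quoted above.
# Rates: $0.0010/min plain transcription, $0.0020/min with alignment
# or diarization enabled.

TRANSCRIPTION_RATE = 0.0010  # USD per audio minute
ENRICHED_RATE = 0.0020       # USD per audio minute (alignment/diarization)

def transcription_cost(audio_minutes: float, enriched: bool = False) -> float:
    """Return the estimated cost in USD for transcribing audio_minutes."""
    rate = ENRICHED_RATE if enriched else TRANSCRIPTION_RATE
    return audio_minutes * rate

# A 90-minute recording, plain vs. with diarization:
print(f"${transcription_cost(90):.4f}")        # plain transcription
print(f"${transcription_cost(90, True):.4f}")  # with diarization
```

At these rates, a 90-minute recording costs roughly $0.09 to transcribe, or $0.18 with diarization.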
Notable customers
Nokia, 1X, ExpressVPN, Freepik, Sony, Findable, Harvard University, MIT, Korea University
Integrations
NVIDIA GPUs, HGX servers, Blackwell architecture, AWS (tech stack only), Google Workspace, HubSpot, Cloudflare
Tech stack
Front Chat (live chat), DocuSign, HSTS (security), Google Workspace (email), Cloudflare (CDN), HubSpot (marketing automation), DoubleClick Floodlight (advertising), Google Tag Manager (tag managers), Amazon Web Services (PaaS), Cookiebot (cookie compliance), Amazon SES (email)
Competitors
AWS/Azure/Google Cloud
Hyperscalers offer general-purpose cloud; Verda specializes in AI-optimized infrastructure with up to 90% cost savings through tighter optimization and direct GPU access.
CoreWeave
Competitor in specialized GPU cloud; Verda differentiates through full-stack integration and embedded AI Lab team working directly with customers.
Lambda Labs
Alternative GPU cloud provider; Verda offers more comprehensive infrastructure stack management and stronger cost positioning.
Why this matters: Verda exemplifies the emerging trend of vertical specialization in cloud infrastructure, combining rare full-stack integration (hardware through software) with founder-led operational excellence and rapid fundraising ($200M in 18 months). The company is challenging hyperscaler dominance in a critical AI bottleneck—expensive, inflexible GPU compute—at a moment when AI infrastructure is becoming a primary cost driver for enterprises.
Best for: AI development teams and organizations building large-scale AI inference and training systems who need predictable costs and rapid deployment without enterprise procurement friction.
Use cases
Secure AI Inference at Scale
ExpressVPN partnered with Verda to build a confidential computing solution for secure LLM inference. Using Verda's Blackwell architecture and secure enclaves, they deployed privacy-preserving AI without performance compromise, showcasing Verda's ability to support cutting-edge security requirements at scale.
Cost-Optimized ML Training
Teams training large language models or computer vision models can reduce infrastructure costs by up to 90% versus hyperscalers. Bare-metal clusters provide dedicated resources with predictable pricing, enabling longer training runs and faster iteration cycles.
Production AI Model Deployment
AI teams use Verda's Managed Endpoints to deploy state-of-the-art models like Whisper for transcription through pre-configured, cost-efficient inference APIs. This eliminates the DevOps overhead of managing containerization and autoscaling.
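Managed endpoints of this kind are typically consumed as plain HTTPS APIs with bearer-token auth. The sketch below shows the general shape of such a call; the endpoint URL, header names, and payload fields are illustrative assumptions, not Verda's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint and payload fields -- illustrative only, not
# Verda's documented API. Follow the provider's docs for real integrations.
ENDPOINT = "https://api.example.com/v1/transcribe"  # placeholder URL
API_KEY = "YOUR_API_KEY"

def build_transcription_request(audio_url: str, diarize: bool = False):
    """Build (but do not send) an HTTPS request for a managed
    transcription endpoint, using the common bearer-token pattern."""
    payload = {"audio_url": audio_url, "diarization": diarize}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_transcription_request("https://example.com/meeting.wav", diarize=True)
# Sending is one call away: urllib.request.urlopen(req)
```

The point of the managed-endpoint model is that this HTTP call is the entire integration surface: no containers, GPU drivers, or autoscalers to operate on the client side.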
Alternatives
AWS SageMaker General-purpose cloud ML platform; choose if you need broader AWS ecosystem integration despite higher costs.
Google Cloud AI Platform Integrated with Google's ML research; pick if you prioritize TensorFlow optimization and Google ecosystem over cost savings.
CoreWeave Alternative specialized GPU cloud; choose if you prefer a different vendor or need specific hardware configurations Verda doesn't offer.
FAQ
What does Verda do?
Verda provides vertically integrated AI cloud infrastructure combining physical GPUs, data centers, and developer tools. The platform offers on-demand GPU instances, bare-metal clusters, and managed endpoints for AI inference—all at up to 90% lower cost than hyperscalers with a developer-friendly self-serve experience.
How much does Verda cost?
Verda uses transparent fixed hourly pricing with no surprise fluctuations. Pay-as-you-go billing occurs in 10-minute increments, or lock in lower rates with long-term prepaid contracts. Example managed inference cost: $0.0010/minute for audio transcription. Contact Verda for custom enterprise pricing.
What are alternatives to Verda?
AWS SageMaker and Google Cloud AI Platform offer broader ML platforms but at significantly higher costs. CoreWeave and Lambda Labs are specialized GPU cloud competitors with different cost structures and feature sets.
Who uses Verda?
Target customers are AI development teams, ML engineers, and enterprises running large-scale inference and training workloads. Notable public customers include Nokia, 1X, ExpressVPN, Freepik, Sony, and leading universities like MIT and Harvard.
How does Verda compare to AWS?
AWS offers general-purpose cloud with broader services but significantly higher GPU costs. Verda specializes exclusively in AI infrastructure with up to 90% cost savings, tighter optimization, direct GPU access, and faster deployment without sales friction, making it ideal for cost-sensitive AI teams.
Tags
GPU cloud AI infrastructure machine learning inference bare-metal servers cost-effective compute developer-first vertically integrated