Weka
Weka accelerates AI training and inference with cloud-native storage.
Weka provides NeuralMesh, a software-defined storage platform purpose-built to accelerate AI workloads at scale with microsecond latency and exabyte capacity. The platform unifies multiple access protocols (S3, POSIX, NFS) alongside a Kubernetes CSI driver, and supports hybrid and multi-cloud deployments with a zero-copy architecture. Weka reports up to 10x faster AI model training and 93% GPU utilization, and serves 11 of the Fortune 50 plus leading research organizations.
Problem solved
Data movement and I/O bottlenecks severely limit GPU utilization and extend AI model training times from weeks to months, blocking faster AI development cycles.
Target customer
Enterprise AI/ML teams, research institutions, and Fortune 500 companies running large-scale GPU clusters and generative AI workloads requiring sub-millisecond storage performance.
Founders
Liran Zvibel
CEO & Co-Founder
Co-founded WEKA in 2013; previously principal software architect at XIV Storage Systems (acquired by IBM in 2007) and founder of Fusic; served as a captain in the Israel Defense Forces; BS in Mathematics & Computer Science from Tel Aviv University.
Maor Ben-Dayan
Chief Architect & Co-Founder
Co-founded WEKA in 2013; brings 20+ years of experience; previously Executive VP at Digital14, managing secure communication and data management solutions.
Omri Palmon
Co-Founder
Co-founded WEKA in 2013.
Funding history
Series B
$32M
June 2016
Led by Norwest Venture Partners
· Qualcomm Ventures, Celesta Capital, Gemini
Series C
Amount, date, and investors undisclosed
Series D
$135M
November 2022
Lead investor and co-investors undisclosed
Series E
$140M
May 15, 2024
Led by Valor Equity Partners
· NVIDIA, Generation Investment Management, Atreides Management, 10D, Hitachi Ventures, Ibex Investors, Key1 Capital, Lumir Ventures, MoreTech Ventures, Qualcomm Ventures
Total raised:
$415M
Pricing
Hybrid capacity-based subscription model with usage-based options: charged per usable terabyte on subscription terms, or hourly pay-as-you-go via cloud marketplaces. List pricing is not publicly available; customers contact orders@weka.io for quotes and custom enterprise contracts.
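Since Weka does not publish list pricing, a minimal sketch of how the two charging modes in this hybrid model might be compared, using entirely hypothetical per-TB rates:

```python
# Hypothetical comparison of the two charging modes described above.
# All rates are made up for illustration -- Weka's actual prices are quote-based.

def subscription_cost(usable_tb: float, rate_per_tb_month: float, months: int) -> float:
    """Capacity subscription: pay a flat rate on provisioned usable terabytes."""
    return usable_tb * rate_per_tb_month * months

def payg_cost(usable_tb: float, rate_per_tb_hour: float, hours: float) -> float:
    """Cloud-marketplace pay-as-you-go: pay per terabyte-hour actually consumed."""
    return usable_tb * rate_per_tb_hour * hours

# Hypothetical scenario: 500 usable TB, one year vs. a 2,000-hour burst.
sub = subscription_cost(usable_tb=500, rate_per_tb_month=40.0, months=12)
payg = payg_cost(usable_tb=500, rate_per_tb_hour=0.08, hours=2000)
print(f"subscription: ${sub:,.0f}/yr  pay-as-you-go: ${payg:,.0f} for a 2,000h burst")
```

The crossover point between the two modes depends entirely on duty cycle, which is why short-lived training bursts often favor marketplace consumption while steady-state capacity favors subscription.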
Notable customers
Cohere, Contextual AI, Oklahoma Medical Research Foundation, Cerence, Deakin University, 11 of the Fortune 50
Integrations
Kubernetes CSI, S3-compatible object storage, POSIX, NFS, NVIDIA GPU clusters, AWS, Azure, Google Cloud
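Because the platform exposes a standard POSIX filesystem, existing training code needs no storage-specific SDK; ordinary file I/O works against the mount unchanged. A minimal sketch (the `/mnt/weka` mount point is a hypothetical example, with a temp-directory fallback so the snippet runs anywhere):

```python
import os
import tempfile

# /mnt/weka is a hypothetical mount point for a POSIX-mounted Weka filesystem.
# Fall back to a temp directory so this sketch is runnable on any machine.
mount = "/mnt/weka" if os.path.isdir("/mnt/weka") else tempfile.mkdtemp()

# Writing a model checkpoint is plain POSIX file I/O -- no special client library.
ckpt_path = os.path.join(mount, "model-step-1000.ckpt")
with open(ckpt_path, "wb") as f:
    f.write(b"\x00" * 1024)  # stand-in for real checkpoint bytes

print(ckpt_path, os.path.getsize(ckpt_path))
```

The same data can then be reached over the other listed protocols (S3, NFS) or provisioned to pods through the Kubernetes CSI driver, since all protocols share one namespace.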
Tech stack
AngularJS (JavaScript frameworks)
jQuery UI (JavaScript libraries)
jQuery (JavaScript libraries)
core-js (JavaScript libraries)
Bootstrap (UI frameworks)
YouTube (Video players)
Google Maps (Maps)
Qualified (Live chat)
Popper
Open Graph
DocuSign
WordPress (Blogs)
HubSpot Analytics (Analytics)
Matomo Analytics (Analytics)
Google Analytics (Analytics)
Demandbase (Analytics)
Crazy Egg (Analytics)
LinkedIn Insight Tag (Analytics)
HSTS (Security)
Typekit (Font scripts)
Twitter Emoji (Font scripts)
Nginx (Reverse proxies)
Varnish (Caching)
PHP (Programming languages)
Google Workspace (Email)
Cloudflare (CDN)
Fastly (CDN)
jQuery CDN (CDN)
cdnjs (CDN)
HubSpot (Marketing automation)
MailChimp (Marketing automation)
MySQL (Databases)
MariaDB (Databases)
LinkedIn Ads (Advertising)
Google Tag Manager (Tag managers)
Yoast SEO Premium (SEO)
Yoast SEO (SEO)
Pantheon (PaaS)
Amazon Web Services (PaaS)
Amazon SES (Email)
Competitors
Pure Storage
Traditional all-flash storage vendor focused on general-purpose enterprise storage; Weka purpose-built for AI/ML with software-defined architecture.
NetApp
Legacy storage provider serving broad markets; Weka specialized for AI workloads with zero-copy architecture and optimized for GPU utilization.
DDN (DataDirect Networks)
Competing HPC/AI storage vendor; Weka differentiates with unified namespace, microservices architecture, and cloud-native hybrid/multi-cloud support.
Why this matters: Weka raised $140M in its Series E at a $1.6B valuation, with NVIDIA among the investors, signaling strong validation of the AI infrastructure market and Weka's technical moat. The company is capturing surging demand from generative AI, with eight-figure ARR deals driving rapid growth, making it a critical infrastructure layer for enterprise AI acceleration.
Best for: Enterprise AI/ML teams and research institutions training large language models and running compute-intensive workloads that require extreme storage performance and GPU efficiency.
Use cases
Large Language Model Training Acceleration
Cohere uses NeuralMesh to achieve 10x faster model checkpointing and lower training costs. Contextual AI reduced checkpoint times 4x and cloud storage costs by 38%, enabling faster model iteration cycles and developer productivity gains.
GPU Cluster I/O Optimization
Oklahoma Medical Research Foundation reduced research job execution times from 70 days to 7 days (90% improvement) through accelerated I/O on GPU servers. Weka eliminates storage I/O as the bottleneck limiting GPU utilization.
Multi-Cloud AI Infrastructure
Organizations running AI workloads across hybrid and multi-cloud environments use Weka's unified namespace and bidirectional auto-scaling to maintain consistent performance and reduce data movement overhead across cloud providers.
Alternatives
Pure Storage FlashBlade
All-flash storage array for general enterprise use; lacks AI-specific optimizations and zero-copy architecture that Weka provides.
NetApp ONTAP
General-purpose NAS/SAN platform with broad OS support; not purpose-built for AI and lacks the microsecond latency at exabyte scale Weka delivers.
Vast Data
AI-focused storage competitor; Weka differentiates with software-defined microservices architecture and superior multi-cloud support.
FAQ
What does Weka do?
Weka provides NeuralMesh, a cloud-native storage platform purpose-built to accelerate AI model training and inference at scale. It unifies multiple storage protocols (S3, POSIX, NFS), supports hybrid and multi-cloud deployments, and delivers microsecond latency with exabyte capacity. Customers achieve 10x faster training, 93% GPU utilization, and dramatically reduced cloud costs.
How much does Weka cost?
Weka uses a hybrid pricing model combining capacity-based subscriptions (per usable terabyte) and hourly pay-as-you-go consumption via cloud marketplaces. Public list pricing is not disclosed; enterprises contact orders@weka.io for quotes and custom contracts with volume discounts.
What are alternatives to Weka?
Pure Storage FlashBlade and NetApp ONTAP are traditional enterprise storage platforms. Vast Data and DDN are AI-focused competitors. Weka differentiates via its software-defined architecture, zero-copy design, and native multi-cloud support purpose-built for GPU workloads.
Who uses Weka?
Enterprise AI/ML teams, research institutions, and Fortune 500 companies (11 of Fortune 50 are customers). Named customers include Cohere, Contextual AI, OMRF, Cerence, and Deakin University. Target buyers run large-scale GPU clusters for LLM training, inference, and compute-intensive research.
How does Weka compare to Pure Storage or NetApp?
Pure Storage and NetApp serve general enterprise storage markets; Weka is purpose-built for AI with zero-copy architecture and microsecond latency at exabyte scale. Weka's software-defined microservices platform scales more efficiently for AI workloads and integrates natively with Kubernetes and cloud GPU clusters. Weka customers report 10x faster training times, while traditional vendors lack AI-specific optimizations.
Tags
AI storage
GPU acceleration
data platform
machine learning infrastructure
cloud-native storage
high-performance computing
exabyte scale