TensorWave
TensorWave provides AMD GPU infrastructure for AI training and inference at scale.
TensorWave is an AI and HPC cloud platform built on AMD Instinct GPUs that accelerates LLM training, inference, and research workloads without the supply constraints and premium pricing of NVIDIA hardware. The platform leverages AMD's physical GPU partitioning for true hardware isolation and uses direct liquid cooling that reduces data center energy costs by up to 51%. With 192GB of VRAM per GPU, customers can fine-tune massive models such as 405B-parameter LLMs on single 8-GPU nodes. TensorWave targets enterprises seeking high-performance compute with dedicated infrastructure, managed inference services, and autoscaling.
Problem solved
Enterprises face constrained NVIDIA GPU supply, high pricing premiums, and insufficient VRAM for training massive language models, forcing them to accept long wait times or compromise on workload performance.
Target customer
Enterprise AI teams, AI research organizations, HPC workload operators, and companies seeking alternatives to NVIDIA with high memory requirements and dedicated compute needs.
Founders
Darrick Horton
CEO & Co-Founder
28-year-old Forbes 30 Under 30 awardee with experience at Lockheed Martin Skunk Works on nuclear fusion, NASA plasma physics research, LIGO astrophysics work, and 5 years as CTO/CEO of FPGA cloud provider VMAccel; holds degrees in Mechanical Engineering and Physics from Andrews University.
Jeff Tatarchuk
Chief Growth Officer & Co-Founder
Serial entrepreneur who co-founded FPGA cloud provider VMAccel with Horton and previously sold startup Lets Rolo to digital identity firm Lifekey.
Piotr Tomasik
President & COO
Co-founder of influencer marketing platform Influential and also a co-founder of Lets Rolo; holds a BS in Computer Science with a minor in Mathematics from the University of Nevada, Las Vegas.
Funding history
Series A
$100M
May 2025
Led by Magnetar and AMD Ventures, with participation from Maverick Silicon, Nexus Venture Partners, and Prosperity7
SAFE
$43M
October 2024
Led by Nexus Venture Partners, with participation from Maverick Capital, Translink Capital, Javelin Venture Partners, StartupNV, and AMD Ventures
Early Stage
$2.2M
2023
Led by StartupNV
Total raised:
$166M+
Pricing
Pay-as-you-go, usage-based pricing with flexible consumption. Smaller companies may receive credits in exchange for marketing collateral. Custom enterprise pricing is available.
Notable customers
Zyphra, Modular, WEKA
Integrations
AMD Instinct GPUs, WEKA storage, Zyphra AI models, GSMA Open Telco AI, AWS infrastructure
Tech stack
React (JavaScript frameworks)
Next.js (Web servers)
core-js (JavaScript libraries)
Webpack
Open Graph
TYPO3 CMS (CMS)
Magento (Ecommerce)
LinkedIn Insight Tag (Analytics)
Google Analytics (Analytics)
Facebook Pixel (Analytics)
HSTS (Security)
Nginx (Reverse proxies)
PHP (Programming languages)
Node.js (Programming languages)
Google Workspace (Email)
Cloudflare (CDN)
MySQL (Databases)
Reddit Ads (Advertising)
Twitter Ads (Advertising)
LinkedIn Ads (Advertising)
Google Tag Manager (Tag managers)
Amazon Web Services (PaaS)
AWS Certificate Manager (SSL/TLS certificate authorities)
Priority Hints (Performance)
Competitors
CoreWeave
Broader GPU infrastructure provider supporting both NVIDIA and AMD; larger scale but less focused on AMD-native optimization and memory-intensive workloads.
Lambda Labs
Primarily NVIDIA-focused GPU cloud with broader ecosystem; lacks TensorWave's AMD GPU partitioning and liquid cooling efficiency advantages.
Groq
Specialized inference accelerator with proprietary hardware; excellent for inference speed but not comparable for general AI training and HPC workloads.
Eclipse
Emerging competitor offering cloud GPU services; less established than TensorWave with smaller infrastructure footprint.
Why this matters: TensorWave is challenging NVIDIA's GPU dominance as a well-funded startup (backed by $166M+ including Magnetar and AMD Ventures) led by a Forbes 30 Under 30 founder with deep infrastructure expertise. The timing is critical as enterprises increasingly seek GPU alternatives amid NVIDIA supply constraints and premium pricing, and TensorWave's 192GB-per-GPU memory capacity positions it well for 400B+ parameter model training.
Best for: Enterprise teams training large language models, HPC researchers, and AI organizations that need dedicated GPU compute without NVIDIA constraints and require high memory capacity per node.
Use cases
Fine-tuning 400B+ Parameter LLMs
Organizations working with 405B-parameter-class language models can run inference and fine-tuning on a single 8-GPU node (192GB VRAM per GPU), eliminating the need for complex multi-node distributed training setups. This accelerates model development cycles and reduces engineering overhead.
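The single-node claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes fp16/bf16 weights at 2 bytes per parameter; the byte-per-parameter figure and the headroom left for activations and KV cache are illustrative assumptions, not TensorWave-published numbers.

```python
# Rough VRAM check: can a 405B-parameter model's weights fit on one
# 8-GPU node with 192 GB per GPU? (Illustrative assumptions only.)

PARAMS = 405e9           # model parameters
BYTES_PER_PARAM = 2      # assumed fp16/bf16 weights
GPUS_PER_NODE = 8
VRAM_PER_GPU_GB = 192

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9      # total weight memory
node_vram_gb = GPUS_PER_NODE * VRAM_PER_GPU_GB   # total node VRAM

print(f"weights: {weights_gb:.0f} GB")           # 810 GB
print(f"node VRAM: {node_vram_gb} GB")           # 1536 GB
print(f"headroom: {node_vram_gb - weights_gb:.0f} GB")
```

With roughly 810 GB of weights against 1536 GB of node VRAM, the remaining ~726 GB covers activations, optimizer or adapter state, and KV cache, which is why memory-efficient fine-tuning of a 405B model is plausible on one node where 80 GB-class GPUs would require multi-node sharding.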
HPC and Scientific Computing
Research institutions and enterprises running physics simulations, climate modeling, or molecular dynamics benefit from TensorWave's dedicated hardware access without virtualization overhead. Direct liquid cooling reduces operational costs by up to 51% compared to air-cooled alternatives.
Managed LLM Inference at Scale
Startups and enterprises deploying production LLM services use TensorWave's managed inference platform with autoscaling and burst capabilities to handle variable traffic. This eliminates infrastructure management overhead while providing cost-effective inference without NVIDIA premium pricing.
Alternatives
CoreWeave
Choose CoreWeave for multi-GPU vendor flexibility and broader ecosystem; choose TensorWave for AMD-native optimization, superior memory density, and energy efficiency at lower cost.
Lambda Labs
Lambda dominates NVIDIA-based GPU cloud with larger scale; TensorWave wins for NVIDIA-alternative seekers needing high memory capacity and liquid cooling efficiency.
Modal
Modal excels at serverless GPU inference with frictionless scaling; TensorWave better for dedicated training workloads and organizations requiring full hardware control and high VRAM.
FAQ
What does TensorWave do?
TensorWave is an AI and HPC cloud platform powered by AMD Instinct GPUs that provides dedicated compute infrastructure and managed inference services for training large language models and running HPC workloads. The platform features true hardware isolation via AMD GPU partitioning, 192GB VRAM per GPU, and direct liquid cooling that cuts data center costs by up to 51%. It serves enterprises seeking alternatives to NVIDIA with superior memory capacity and cost efficiency.
How much does TensorWave cost?
TensorWave uses a pay-as-you-go usage-based pricing model where customers pay based on compute consumption. Smaller companies can access credits in exchange for marketing partnerships. Custom enterprise pricing is available for large-scale deployments. Specific pricing tiers are not publicly disclosed.
What are alternatives to TensorWave?
Key alternatives include CoreWeave (broader multi-GPU vendor support), Lambda Labs (NVIDIA-focused GPU cloud), Modal (serverless GPU inference platform), Groq (proprietary inference hardware), and Eclipse (emerging GPU cloud provider). Each has different strengths depending on workload type and vendor preference.
Who uses TensorWave?
Target customers include enterprise AI teams, AI research organizations, HPC operators, and companies training large language models. Public customers include Zyphra, Modular, and WEKA. TensorWave particularly appeals to organizations frustrated by NVIDIA supply constraints and seeking high-memory-capacity alternatives.
How does TensorWave compare to CoreWeave?
TensorWave specializes exclusively in AMD GPUs with native optimization for GPU partitioning and liquid cooling efficiency, delivering superior cost and memory density. CoreWeave offers broader GPU vendor flexibility including NVIDIA at larger scale but without TensorWave's AMD-specific advantages. Choose TensorWave for cost-effective, high-memory workloads; choose CoreWeave for vendor flexibility.
Tags
GPU cloud
AMD Instinct
AI infrastructure
LLM training
HPC
alternative to NVIDIA
inference platform
liquid cooling