EnCharge AI

EnCharge AI brings GPU-level AI compute to laptops and edge devices with 20x better efficiency.
Series B · $144M+ total · Founded 2022 · Santa Clara, California · 51 employees
EnCharge AI develops analog in-memory computing hardware that performs AI computations directly within memory cells, eliminating energy-intensive data movement between processing and storage. The company's EN100 accelerator delivers 200+ TOPS of AI compute in an 8.25W power envelope for laptops and workstations, achieving up to 20x better performance per watt than competing solutions. Available in M.2 and PCIe form factors, EN100 brings advanced AI capabilities to edge devices and client platforms with unprecedented efficiency.
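The headline figures above can be sanity-checked with simple arithmetic. A minimal sketch (illustrative only: the 200 TOPS and 8.25 W values come from this profile; the competitor baseline is a hypothetical number implied by the "20x" claim, not a published spec):

```python
# Back-of-envelope efficiency check using the figures quoted above.
en100_tops = 200.0   # EN100 compute, trillions of ops/sec (from profile)
en100_watts = 8.25   # EN100 power envelope in watts (from profile)

efficiency = en100_tops / en100_watts  # TOPS per watt
print(f"EN100 efficiency: {efficiency:.1f} TOPS/W")  # ~24.2 TOPS/W

# A "20x better performance per watt" claim implies a competing
# accelerator near efficiency / 20 at comparable compute (hypothetical).
baseline = efficiency / 20
print(f"Implied competitor baseline: {baseline:.2f} TOPS/W")
```

At roughly 24 TOPS/W, the implied competing baseline sits near 1.2 TOPS/W, which frames where the efficiency advantage would matter: battery-powered and thermally constrained devices.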
Problem solved
Current AI accelerators consume excessive power shuttling data between separate memory and processing units, making advanced AI inference impractical for laptops, workstations, and edge devices.
Target customer
Original equipment manufacturers (OEMs), laptop and workstation vendors, enterprise customers requiring on-device AI inference, defense and aerospace contractors.
Founders
Naveen Verma
CEO & Co-Founder
Princeton University professor of electrical and computer engineering since 2009, with a PhD from MIT; led 6+ years of research in next-generation computing and integrated circuits.
Kailash Gopalakrishnan
CTO & Co-Founder
IBM Fellow with 20+ years in AI, silicon devices, and chip architectures; led worldwide efforts on advanced AI hardware and product translation.
Echere Iroaga
COO & Co-Founder
25+ years in semiconductors; formerly VP and GM of Connectivity at MACOM and Director of Engineering at Qualcomm.
Funding history
Series A $21.7M December 2022 Led by Anzu Partners · AlleyCorp, Scout Ventures, Silicon Catalyst Angels, Schams Ventures, E14 Fund, Alumni Ventures
Series A Extension $22.6M December 2023 Led by VentureTech Alliance · RTX Ventures, ACVC Partners, Anzu Partners, S5V, AlleyCorp, Scout Ventures, Silicon Catalyst Angels
DARPA Grant $18.6M Date undisclosed · Awarded under the DARPA OPTIMA program
Series B $100M+ February 2025 Led by Tiger Global · Maverick Silicon, Capital TEN, SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University, Morgan Creek Digital, Samsung Ventures, HH-CTBC, In-Q-Tel, RTX Ventures, Constellation Technology Ventures
Total raised: $144M+
Pricing
Not publicly available; contact-for-pricing model with custom pricing based on volume and application.
Tech stack
jQuery (JavaScript libraries) jQuery Migrate (JavaScript libraries) RSS Open Graph WordPress (Blogs) Site Kit (Analytics) LinkedIn Insight Tag (Analytics) Google Analytics (Analytics) HSTS (Security) Twitter Emoji (Font scripts) Font Awesome (Font scripts) Google Font API (Font scripts) Nginx (Reverse proxies) PHP (Programming languages) Microsoft 365 (Email) MySQL (Databases) Google Tag Manager (Tag managers) WordPress.com (PaaS) Jetpack (WordPress plugins)
Competitors
NVIDIA
NVIDIA offers digital GPU-based AI acceleration with a broader software ecosystem, but at higher power consumption and cost per TOPS.
Qualcomm Hexagon
Qualcomm's NPU focuses on mobile/edge but lacks the raw compute density and efficiency of EnCharge's analog approach for demanding AI workloads.
Intel Gaudi
Intel's AI accelerators target data centers and cloud; EnCharge focuses on edge and client-side compute with superior power efficiency.
Why this matters: EnCharge AI represents a fundamental shift in AI hardware architecture, moving from traditional digital compute to analog in-memory computing to solve the energy efficiency crisis in AI. With $144M+ in funding including participation from TSMC's VentureTech Alliance, defense/aerospace leaders (RTX, In-Q-Tel), and major OEMs (Samsung, Foxconn), the company is positioned to reshape on-device AI capabilities and reduce cloud dependency.
Best for: OEMs and enterprises needing high-performance, low-power AI inference on laptops, workstations, and edge devices without cloud dependency.
Use cases
On-Device AI for Laptops
Laptop manufacturers can integrate EN100 M.2 accelerators to enable advanced AI features like real-time image processing, voice assistants, and generative AI applications without cloud connectivity, delivering 200+ TOPS in just 8.25W.
Workstation AI Acceleration
Professional workstations can leverage the PCIe form factor EN100 to achieve GPU-equivalent AI compute (1 PetaOPS equivalent) at a fraction of power consumption and cost, enabling data scientists and engineers to run complex models locally.
Defense & Aerospace Applications
Defense contractors can deploy EN100 in edge systems requiring offline AI inference with extreme power constraints, supported by the company's DARPA funding and participation from RTX Ventures and In-Q-Tel.
Alternatives
NVIDIA RTX Mature ecosystem with broad software support but significantly higher power consumption and cost; better for applications where power is not constrained.
Apple Neural Engine Proprietary to Apple products with excellent efficiency but closed ecosystem; EnCharge offers an open platform for broader OEM integration.
Qualcomm Snapdragon AI Optimized for mobile/smartphone form factors; EnCharge targets larger form factors (laptops, workstations) requiring higher absolute compute density.
FAQ
What does EnCharge AI do?
EnCharge AI develops analog in-memory computing hardware that performs AI computations directly within memory, eliminating power-hungry data movement. The EN100 accelerator delivers 200+ TOPS of AI compute for laptops and workstations in extremely low power envelopes, achieving up to 20x better performance per watt than traditional digital AI chips.
How much does EnCharge AI cost?
EnCharge AI does not publicly list pricing. The company operates on a contact-for-pricing model with custom pricing based on volume, form factor (M.2 or PCIe), and specific customer requirements. Pricing varies significantly depending on the scale of OEM integration.
What are alternatives to EnCharge AI?
NVIDIA RTX offers mature GPU acceleration but with higher power consumption; Qualcomm Snapdragon AI targets mobile but lacks workstation-class performance; Apple Neural Engine provides extreme efficiency but is proprietary to Apple devices. EnCharge differentiates through analog efficiency and broad OEM accessibility.
Who uses EnCharge AI?
Target customers include laptop and workstation OEMs, enterprises requiring on-device AI inference, defense contractors, and aerospace manufacturers. Specific customer names are not publicly disclosed, but the company has backing from Samsung Ventures and Foxconn's HH-CTBC, suggesting OEM interest.
How does EnCharge AI compare to NVIDIA?
EnCharge's analog in-memory approach achieves significantly better performance per watt, which is critical for power-constrained edge devices. NVIDIA GPUs offer a broader software ecosystem and more raw performance for tasks where power consumption is not the primary constraint. EnCharge targets client-side and edge inference; NVIDIA dominates cloud and high-performance computing.
Tags
AI accelerators edge computing analog computing hardware in-memory computing semiconductor power efficiency