Safe Superintelligence
SSI develops safe superintelligent AI through focused alignment research.
Safe Superintelligence (SSI) is an AI research lab founded by former OpenAI chief scientist Ilya Sutskever, dedicated to developing safe superintelligent AI through safety-focused engineering and scientific breakthroughs. The company operates with a singular focus on safety-first AI development, rejecting commercial product pressures to prioritize long-term alignment research. SSI has raised $3B from top-tier investors including Alphabet, Andreessen Horowitz, and Sequoia, and partners with Google Cloud for TPU infrastructure, positioning itself as a differentiated player in frontier AI research.
Problem solved
Superintelligent AI systems risk misalignment with human values if built without rigorous safety engineering from the ground up.
Target customer
This is a research organization, not a B2B SaaS company. It serves as a frontier AI research lab attracting institutional capital and talent focused on AI safety.
Founders
Ilya Sutskever
CEO
Former chief scientist at OpenAI, co-led Superalignment team; instrumental in GPT model breakthroughs; departed OpenAI May 2024 over safety concerns.
Daniel Gross
Co-founder (departed June 2025)
Former head of AI at Apple; founded Cue (acquired by Apple 2013); partner at Y Combinator; early investor in Uber, Figma, GitHub, Perplexity AI.
Daniel Levy
President
Former head of Optimization Team at OpenAI; became president following Gross's departure in June 2025.
Funding history
Series A
$1B
September 2024
Led by NFDG (Nat Friedman & Daniel Gross)
· Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel
Series B
$2B
April 2025
Led by Greenoaks Capital Partners
· Andreessen Horowitz, Lightspeed Venture Partners, DST Global, Alphabet, Nvidia
Total raised:
$3B
Pricing
Not publicly available. The company is a pre-revenue research lab; its first product will be the safe superintelligence itself.
Notable customers
Not disclosed. The company has not released products or services to external customers.
Integrations
Google Cloud (TPU infrastructure provider); no commercial integrations as company is pre-product.
Tech stack
Open Graph
HSTS (Security)
Vercel (PaaS)
Competitors
OpenAI
OpenAI prioritizes rapid commercialization and product release; SSI focuses exclusively on safety research without near-term product timelines.
Anthropic
Anthropic combines research with AI product development; SSI maintains singular focus on superintelligence safety without commercial product distractions.
DeepMind (Google)
DeepMind is a large subsidiary pursuing broad AI capabilities; SSI is an independent research lab with exclusive safety mandate.
Why this matters: SSI represents a rare institutional bet on pure AI safety research at scale, with top-tier capital backing and founding talent from OpenAI. The company is notable for explicitly rejecting the OpenAI path of rapid commercialization, instead betting that safety-first engineering can solve superintelligence alignment—a thesis being tested with $3B in investor capital and Google's infrastructure support.
Best for: SSI is not a commercial tool but rather a research institution for investors, AI researchers, and stakeholders interested in frontier AI safety breakthroughs.
Use cases
AI Safety Research Foundation
Institutional investors and AI governance bodies use SSI's research insights to inform policies and funding decisions around AI safety and alignment.
Talent Attraction for AI Safety
Top AI researchers and engineers join SSI to work on fundamental superintelligence alignment problems with world-class funding and focus, avoiding commercial pressures.
Alternatives
Anthropic
Anthropic also prioritizes safety but releases commercial products like Claude API; SSI is purely research-focused without product timelines.
OpenAI Superalignment Team
OpenAI's internal safety efforts (the Superalignment team was disbanded in May 2024) operate within a company pursuing rapid commercialization; SSI's entire mission is aligned safety research without competing commercial goals.
DeepMind
DeepMind pursues broad AI capabilities research under Google; SSI focuses exclusively and independently on superintelligence alignment.
FAQ
What does Safe Superintelligence do?
SSI is an AI research lab founded by Ilya Sutskever dedicated to developing safe superintelligent AI through safety-focused engineering. The company operates with a singular focus: solving the technical challenges of building superintelligent AI that remains safe and aligned with human values. It is currently pre-revenue and pre-product, focusing entirely on fundamental research.
Does SSI have commercial products?
No. SSI has not released any AI models, products, or services. Sutskever has stated 'Our first product will be the safe superintelligence.' The company dedicates all resources to fundamental AI safety research.
How much funding has SSI raised?
SSI has raised $3B across two rounds: $1B in Series A (September 2024) and $2B in Series B (April 2025). Investors include Alphabet, Andreessen Horowitz, Sequoia Capital, Greenoaks Capital Partners, and Nvidia.
Who founded Safe Superintelligence?
Ilya Sutskever (former OpenAI chief scientist), Daniel Gross (former head of AI at Apple), and Daniel Levy (former head of Optimization Team at OpenAI). Gross departed in June 2025 to join Meta; Sutskever is now CEO and Levy is President.
How does SSI differ from OpenAI and Anthropic?
Unlike OpenAI and Anthropic, which pursue product development and commercialization, SSI maintains an exclusive focus on safety research without commercial product pressures. SSI rejected a Meta acquisition attempt and uses Google Cloud TPUs instead of the industry-standard Nvidia GPUs, reflecting its unique positioning.
What is SSI's competitive advantage?
SSI's differentiators are: (1) Founding team pedigree—Sutskever was instrumental in OpenAI's breakthroughs and a key safety advocate; (2) Singular focus on safety without commercial distractions; (3) Massive funding ($3B) supporting long-term research; (4) Rare partnership with Google Cloud for TPU infrastructure.
Tags
AI safety
superintelligence
alignment
frontier AI research
AGI
safety engineering
alignment research