The growth experimentation stack in 2026 is leaner and more consolidated than it was 18 months ago. Three of the biggest brands in the space -- Statsig, Eppo, and AB Tasty -- have been acquired or merged since May 2025. Google Optimize has been dead since September 2023. What's left is 23 tools that actually ship, split across five categories: feature flags, server-side experimentation, client-side A/B testing, statistical analysis, and experiment ops. This guide compares all 23 with real pricing, killer features, and who each tool is not for.
What is the best A/B testing tool in 2026?
There is no single best A/B testing tool in 2026. The right tool depends on three factors: where your data lives, who runs the tests, and how much engineering you can spare.
For most teams, the shortlist looks like this:
- Warehouse-native data teams: Eppo (by Datadog) or GrowthBook
- All-in-one product teams: PostHog or Statsig
- Enterprise web CRO: VWO + AB Tasty or Optimizely Web Experimentation
- Marketing teams that hate flicker: Mida.so or ExperimentHQ
- Mobile-first: Apptimize (Airship) or Statsig
Ignore vendor-published rankings. According to the VWO + AB Tasty merger announcement (January 2026), the combined entity now serves 4,000+ customers and crossed $100M ARR -- a strong signal that consolidation, not new entrants, is the dominant 2026 trend in web CRO.
What are the 5 categories of growth experimentation tools?
The 2026 experimentation stack splits cleanly into five categories. Most teams need at least one tool from category 1 (feature flags) and one from category 2 or 3 (experimentation):
- Feature flag platforms -- gate code paths, run progressive rollouts, kill-switch broken features. Examples: LaunchDarkly, Unleash, Flagsmith, ConfigCat, Split.io.
- Server-side experimentation -- decide the variant on the backend, before the page renders. Examples: Eppo, GrowthBook, Statsig, Optimizely Feature Experimentation, Amplitude Experiment.
- Client-side A/B testing -- modify the page in the browser via JavaScript. Examples: VWO + AB Tasty, Optimizely Web Experimentation, Convert, Kameleoon, Adobe Target.
- Statistical engines and lightweight A/B -- the math layer plus minimal tools that just run tests. Examples: ABsmartly, PostHog, Mida.so, ExperimentHQ, Sigmize.
- Experiment ops and behavioral -- mobile testing, heatmaps, session replay that supports the experiment program. Examples: Apptimize, Crazy Egg, Contentsquare (Hotjar).
Four tools (PostHog, Statsig, GrowthBook, Optimizely) span multiple categories. That overlap is why the market is consolidating: enterprises want one vendor, not five.
Which are the best feature flag tools in 2026?
Five feature flag tools cover 90% of real-world needs in 2026. They differ on pricing model, governance depth, and whether you self-host.
1. LaunchDarkly -- Last reviewed May 2026. Enterprise leader. Pricing: $12/service connection + $10 per 1,000 client-side MAU + $3/1k MAU for experimentation add-on. Median contract value $71,847/year per Vendr (2026). Killer feature: granular custom roles for SOC2/SAML governance. Weakness: pricing is hard to predict -- service connections, MAUs, and add-ons stack. Not for: any team under 50 engineers.
2. Unleash -- Last reviewed May 2026. Largest open-source feature flag project on GitHub, Apache 2.0. Free self-host with no usage limits. Killer feature: 15 official SDKs and 15+ community SDKs. Weakness: you operate the infrastructure. Not for: teams without DevOps capacity.
3. Flagsmith -- BSD 3-Clause, free self-host. Used by British Airways, Toyota, and Ferrari per Flagsmith (2026). Killer feature: data residency control via self-host. Weakness: smaller SDK ecosystem than Unleash. Not for: teams that want experimentation built in.
4. ConfigCat -- Proprietary backend, open-source SDKs. Free up to 10 flags. Killer feature: dashboard non-engineers can actually use. Weakness: not built for experimentation. Not for: data-driven teams.
5. Split.io -- Acquired by Harness in 2024. Feature delivery + monitoring + experimentation. Killer feature: tight observability integration. Weakness: pricing is enterprise-only. Not for: bootstrapped startups.
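All five tools implement the same core pattern: evaluate a flag per user, bucket deterministically for progressive rollouts, and fail safe when a flag is unknown or the flag service is unreachable. A minimal sketch of that pattern -- the `FlagClient` API here is hypothetical, not any vendor's actual SDK:

```python
# Minimal sketch of the feature-flag pattern the tools above implement.
# Real SDKs (LaunchDarkly, Unleash, Flagsmith) differ in naming, but the
# shape is the same: evaluate a flag for a user context, bucket users
# deterministically, and always fall back to a safe default.

import hashlib

class FlagClient:
    def __init__(self, flags):
        # flags: flag name -> rollout percentage (0-100)
        self.flags = flags

    def is_enabled(self, flag_name, user_id, default=False):
        rollout = self.flags.get(flag_name)
        if rollout is None:
            return default  # unknown flag: fail safe, never crash
        # Deterministic bucketing: the same user always gets the same answer,
        # so a 25% rollout doesn't flicker between requests.
        digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < rollout

client = FlagClient({"new-checkout": 25})  # 25% progressive rollout
if client.is_enabled("new-checkout", user_id="user-42"):
    pass  # new code path; raising the percentage widens the rollout
```

The kill-switch case falls out of the same mechanism: set the rollout to 0 and every user immediately gets the old code path.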
What are the top server-side experimentation platforms?
Server-side experimentation is where the action moved in 2025-2026. Two of the five leading platforms (Eppo and Statsig) have changed ownership in the last 14 months.
1. Eppo (by Datadog) -- Last reviewed May 2026. Acquired by Datadog for approximately $220M in May 2025 per TechCrunch. Warehouse-native: runs analysis on Snowflake, BigQuery, Redshift, Databricks. Killer feature: was the first commercial platform with CUPED variance reduction. Weakness: pricing is now bundled into Datadog quotes -- no public tier. Not for: teams without a working warehouse.
2. GrowthBook -- MIT-licensed core. Free self-host with unlimited seats. Pro cloud at $40/user/month for up to 50 users. Killer feature: same warehouse-native architecture as Eppo, but open source. Weakness: visual editor and CUPED are Pro-only. Not for: teams that want zero engineering involvement.
3. Statsig -- Acquired by OpenAI for $1.1 billion in September 2025 per CNBC. Free tier includes 2M events/month plus unlimited feature flags. Killer feature: tightest integration of flags + experiments + analytics with broad mobile SDK support. Weakness: long-term roadmap is now tied to OpenAI's priorities. Not for: teams uncomfortable with their data sitting at an OpenAI subsidiary.
4. Optimizely Feature Experimentation -- Custom enterprise pricing. Optimizely customers ran nearly 300,000 new experiments in 2025 per PRNewswire (December 2025). Killer feature: tight integration with Optimizely DXP and the new Experimentation MCP server. Weakness: only economical if you already buy other Optimizely products. Not for: standalone product teams.
5. Amplitude Experiment -- Add-on to Amplitude Plus or Growth plans. Killer feature: targeting based on rich Amplitude behavioral cohorts. Weakness: not warehouse-native -- analysis runs on Amplitude data, not your warehouse. Not for: teams whose source of truth is Snowflake or BigQuery.
Which client-side A/B testing tools are still worth using in 2026?
Client-side A/B testing is the smaller half of the market post-Google Optimize, but five tools still matter for marketing and CRO teams.
1. VWO (merged with AB Tasty) -- Last reviewed May 2026. Combined entity since January 2026. 4,000+ customers, $100M+ ARR. Backed by Everstone Capital. Killer feature: AI-led experimentation plus real-time personalization in one workflow. Weakness: integration risk -- two product roadmaps still being merged. Not for: teams running fewer than 5 tests/month -- pricing starts ~$314/month.
2. Optimizely Web Experimentation -- Despite older rumors, Optimizely Web is alive in 2026 with active release notes. Killer feature: brand trust at the enterprise level plus deep stats engine. Weakness: enterprise-only pricing and steep learning curve. Not for: SMB teams.
3. Convert.com -- 15+ year operating history. Privacy-first GDPR architecture. From $399/month. Killer feature: agency-friendly multi-account management. Weakness: smaller integration ecosystem. Not for: teams that need a full DXP.
4. Kameleoon -- Strong in EU enterprise, especially personalization-heavy use cases. Killer feature: AI predictive targeting baked into the editor. Weakness: less developer-friendly than newer tools. Not for: pure engineering teams.
5. Adobe Target -- Part of Adobe Experience Cloud. Killer feature: tight binding with Adobe Analytics, Audience Manager, and Real-Time CDP. Weakness: only viable economically as an Experience Cloud customer. Not for: anyone not already on Adobe.
Which statistical analysis and lightweight A/B tools matter?
These five fill specific gaps that the big platforms miss: rigorous Bayesian stats, lightweight scripts, WordPress-native testing, and free Google Optimize replacements.
1. ABsmartly -- Enterprise B2C statistical engine. Killer feature: sequential testing and group sequential analysis built in -- you can stop tests early without inflating false-positive rates. Weakness: implementation-heavy, custom enterprise pricing. Not for: marketing-led teams.
2. PostHog -- Last reviewed May 2026. Free up to 1M events and 1M feature flag requests per month. Open source. Killer feature: analytics + session replay + feature flags + experiments + surveys in one platform. Weakness: usage-based pricing scales fast at high event volumes -- 100M events/month gets expensive. Not for: teams that want a single best-in-class tool per category.
3. Mida.so -- AI-powered lightweight client-side A/B testing. Killer feature: 20KB script vs typical 200KB+, eliminating most flicker. Free up to 50,000 MTU. Weakness: shallow integration ecosystem. Not for: teams that need server-side tests.
4. ExperimentHQ -- Built explicitly as a Google Optimize replacement. Free up to 50,000 MTU. Killer feature: lightest-weight setup of any current tool. Weakness: small team, smaller community than incumbents. Not for: enterprise procurement.
5. Sigmize -- WordPress + WooCommerce A/B testing with heatmaps and session recordings built in. From $19/month. Killer feature: only tool here that ships as a WordPress plugin. Weakness: useless outside the WP ecosystem. Not for: any non-WordPress site.
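ABsmartly's "stop tests early without inflating false-positive rates" claim is worth unpacking, because naive early stopping is the single most common statistical mistake in A/B testing. A quick simulation shows the problem: run A/A tests (no real difference) and "peek" at a fixed z > 1.96 threshold after every batch, and the false-positive rate climbs well past the nominal 5%. Illustrative code, not any vendor's engine:

```python
# Simulate A/A tests and peek at a fixed significance threshold after
# every batch of users. Each extra look is an extra chance to get lucky,
# so the overall false-positive rate inflates far beyond 5%.
# Sequential / group-sequential methods exist to correct exactly this.

import math
import random

def aa_test_fires(rng, n_per_peek=200, n_peeks=10):
    """Return True if any of the interim looks crosses |z| > 1.96."""
    sum_a = sum_b = sumsq_a = sumsq_b = 0.0
    n = 0
    for _ in range(n_peeks):
        for _ in range(n_per_peek):
            a, b = rng.gauss(0, 1), rng.gauss(0, 1)  # identical distributions
            sum_a += a; sumsq_a += a * a
            sum_b += b; sumsq_b += b * b
        n += n_per_peek
        var_a = sumsq_a / n - (sum_a / n) ** 2
        var_b = sumsq_b / n - (sum_b / n) ** 2
        z = (sum_a / n - sum_b / n) / math.sqrt((var_a + var_b) / n)
        if abs(z) > 1.96:
            return True  # declared "significant" despite no true effect
    return False

rng = random.Random(7)
sims = 500
fpr = sum(aa_test_fires(rng) for _ in range(sims)) / sims
print(f"false-positive rate with 10 peeks: {fpr:.2f}")  # well above 0.05
```

With ten looks, the realized false-positive rate typically lands in the 15-20% range rather than the 5% the threshold promises -- which is why tools that support peeking have to bake corrected boundaries into the stats engine.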
What experiment ops and behavioral tools complete the stack?
Experiment ops covers the supporting tools: mobile testing, heatmaps, and session replay that feed hypothesis generation.
1. Apptimize (Airship) -- Last reviewed May 2026. Mobile A/B testing and feature release management for iOS, Android, and OTT. Owned by Airship. Killer feature: visual editor for mobile UI changes without forcing a code release. Weakness: tied to Airship's broader CX suite -- standalone deployments are rare. Not for: web-only teams.
2. Crazy Egg -- Heatmaps + scroll maps + lightweight A/B testing. From $99/month. Killer feature: cheapest credible heatmap + A/B combination. Weakness: stats engine is basic compared to dedicated platforms. Not for: teams running statistically rigorous programs.
3. Contentsquare (Hotjar) -- Hotjar was acquired by Contentsquare and the platforms are now unified. Killer feature: deepest experience analytics + session replay in the market with experimentation overlay. Weakness: enterprise-only pricing for the full stack. Not for: small teams that just need heatmaps -- buy Crazy Egg or Hotjar's standalone tier.
These tools don't replace your experimentation platform. They feed it. The Princeton GEO study (2024) found that pages backed by concrete statistics get cited roughly 30% more often by AI engines; the same logic holds internally -- hypotheses grounded in real session data win more often than hypotheses based on opinion.
Which experimentation tools have died or pivoted in 2026?
Six notable shifts since 2023. If you're reading older listicles, half their picks no longer exist as standalone products.
1. Google Optimize -- Sunset September 30, 2023. Google announced the deprecation in January 2023 and pointed users to integration partners. There is no Google replacement.
2. Maxymiser -- Acquired by Oracle in 2015. Was the A/B testing market leader pre-acquisition. Per multiple practitioner reports, the platform moved away from self-service and most senior product staff left. Effectively dead as a standalone option.
3. Eppo (as standalone) -- Acquired by Datadog for ~$220M in May 2025. Now sold as 'Eppo by Datadog'. Existing customers retained but pricing rolled into Datadog enterprise quotes.
4. Statsig (as standalone) -- Acquired by OpenAI for $1.1B in September 2025. CEO Vijaye Raji became OpenAI's CTO of Applications. Statsig still serves external customers, but it now operates as an OpenAI subsidiary and its roadmap answers to OpenAI first.
5. AB Tasty (as standalone) -- Merged with VWO in January 2026, backed by Everstone Capital. Combined ARR over $100M. AB Tasty branding being phased out under VWO leadership.
6. Leanplum experimentation -- Acquired by CleverTap in June 2022. The product pivoted away from rigorous experimentation toward retention marketing and engagement campaigns. Use it only if you want a CleverTap-style engagement tool, not for serious A/B testing.
What replaced Google Optimize?
Nothing from Google. When Google Optimize was sunset on September 30, 2023, Google did not release a replacement -- it pointed users to certified integration partners instead.
The practical migration paths in 2026 are:
- For free, no-code Optimize parity: ExperimentHQ or Mida.so, both built explicitly as Optimize replacements with 50,000 MTU free tiers.
- For free, open source, more rigorous: GrowthBook self-hosted, PostHog cloud free tier.
- For paid, mid-market parity: VWO + AB Tasty or Convert.com.
- For WordPress sites specifically: Sigmize is the closest analog to what Optimize offered.
Google Optimize's biggest gap was statistical rigor -- it reported a simple Bayesian probability-to-beat-baseline with no sequential testing and no CUPED. Most replacements are stronger on stats than the original.
What is the difference between server-side and client-side experimentation?
Client-side experimentation runs in the browser. JavaScript modifies the DOM after the page loads, so users may briefly see the original page (Flash of Original Content, or FOOC). According to Convert.com (2025), client-side is fast to deploy via WYSIWYG editors but limited to UI changes -- copy, layout, color, button text.
Server-side experimentation decides the variant on the backend before the page renders. There is no flicker, no JavaScript injection, and you can test anything: pricing logic, search ranking, recommendation algorithms, onboarding flows, backend performance.
The trade-off:
| Dimension | Client-side | Server-side |
|---|---|---|
| Setup time | Hours, no engineering | Days to weeks, engineering required |
| What you can test | UI only | Anything |
| Flicker | Possible | None |
| Cost | Lower | Higher |
| Tools | VWO, Optimizely Web, Convert | Eppo, GrowthBook, Statsig, LaunchDarkly |
Most mature teams run both. Marketing iterates on copy and layout client-side. Engineering ships pricing, search, and onboarding tests server-side.
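The server-side flow can be sketched in a few lines: hash the user into a variant on the backend, log the exposure, then render the chosen branch. The function names below are hypothetical; real SDKs layer targeting rules, holdouts, and exposure deduplication on top of the same core:

```python
# Server-side assignment in a nutshell: pick the variant deterministically
# on the backend before the page renders -- no flicker, no client-side
# JavaScript injection. Illustrative sketch, not any vendor's SDK.

import hashlib

def assign_variant(experiment, user_id, variants=("control", "treatment")):
    # Hashing experiment + user keeps assignments independent across
    # experiments and sticky across requests for the same user.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def handle_request(user_id):
    variant = assign_variant("pricing-page-v2", user_id)
    # log_exposure(user_id, "pricing-page-v2", variant)  # feeds the stats engine
    if variant == "treatment":
        return "render new pricing page"
    return "render current pricing page"

# Sticky: the same user lands in the same variant on every request.
print(handle_request("user-42") == handle_request("user-42"))  # True
```

Because the decision happens before render, anything the backend controls is testable the same way -- pricing logic, search ranking, onboarding flow -- which is exactly the capability gap between the two columns in the table above.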
Which feature flag tool is best for early-stage startups?
The best feature flag tool for early-stage startups is the one with a free tier that doesn't lock you into the vendor. Three real options in May 2026:
- PostHog -- Free up to 1M events and 1M feature flag requests per month. Bundles analytics, session replay, A/B testing, surveys. According to PostHog (2026), more than 90% of its users stay on the free tier.
- Statsig -- Free 2M events/month with unlimited feature flags. Now owned by OpenAI but still sold as a standalone product.
- Unleash or Flagsmith self-hosted -- free forever with no usage caps. Trade engineering time for licensing fees.
Avoid LaunchDarkly until Series B. The pricing model (service connections + MAUs + add-ons) makes total cost of ownership impossible to predict at low scale. Per Vendr's 2026 marketplace data, even the smallest LaunchDarkly contracts start around $15,000/year -- that's most of a startup's tooling budget for a single flag platform.
Migrate when you actually need SOC2-grade audit logs, SAML, and custom roles. Most pre-Series-A startups don't.
How much does an experimentation platform cost in 2026?
Costs in 2026 span a 100x range, from $0 to over $150,000/year. The five main pricing patterns:
- Free, open source self-hosted: GrowthBook ($0 + infra), Unleash ($0 + infra), Flagsmith ($0 + infra), PostHog (cloud free tier covers most startups).
- Free SaaS tier with usage caps: PostHog (1M events + 1M flag requests free), Statsig (2M events free), Mida.so (50k MTU free), ExperimentHQ (50k MTU free).
- Mid-market subscription ($300-2,000/month): VWO + AB Tasty (from ~$314/month), Convert.com (from $399/month), Crazy Egg (from $99/month), Sigmize (from $19/month for WordPress).
- Per-user / per-seat: GrowthBook Pro ($40/user/month for up to 50 users).
- Enterprise quote-only ($15k-150k+/year): LaunchDarkly (median $71,847/year per Vendr), Optimizely (Web + Feature), Adobe Target, Kameleoon, ABsmartly, Eppo (post-Datadog), Apptimize.
Hidden costs to budget for: warehouse compute (Eppo and GrowthBook run analysis queries on Snowflake, and that compute adds up), engineering time for server-side instrumentation, and the analyst hours needed to interpret results. According to LaunchDarkly's pricing page (May 2026), the experimentation add-on alone costs $3 per 1,000 client-side MAU on top of base flag pricing -- a 1M MAU app pays $3,000/month for experiments before any other line item.
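Using the per-unit rates quoted above ($12 per service connection, $10 per 1,000 client-side MAU, $3 per 1,000 MAU for the experimentation add-on), the stacked LaunchDarkly math looks like this. Illustrative only -- real contracts are negotiated and tiered:

```python
# Back-of-envelope LaunchDarkly cost model using the per-unit rates
# quoted in this guide. Real quotes are negotiated, so treat this as a
# floor estimate for budgeting, not a price sheet.

def launchdarkly_monthly(service_connections, client_mau, with_experiments=True):
    cost = service_connections * 12              # $12 per service connection
    cost += (client_mau / 1_000) * 10            # $10 per 1k client-side MAU
    if with_experiments:
        cost += (client_mau / 1_000) * 3         # experimentation add-on
    return cost

# A 1M-MAU app with 20 service connections:
print(launchdarkly_monthly(20, 1_000_000))  # 240 + 10,000 + 3,000 = 13,240/month
```

This is why the guide flags LaunchDarkly's pricing as hard to predict: three independent meters (connections, MAU, add-ons) each scale on a different axis.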
| Tool | Category | Type | Starting Price (May 2026) | Best For |
|---|---|---|---|---|
| LaunchDarkly | Feature flags | SaaS | $12/svc + $10/1k MAU | Enterprise release governance |
| Unleash | Feature flags | Open source (Apache 2.0) | Free self-host | Engineering teams that want full control |
| Flagsmith | Feature flags | Open source (BSD 3) | Free self-host / $45/mo cloud | Teams with strict data residency rules |
| ConfigCat | Feature flags | SaaS (open SDKs) | Free up to 10 flags | Non-technical flag toggling |
| Split.io | Feature flags + experimentation | SaaS | Free up to 10 MAU/mo (limited) | Feature delivery with monitoring |
| Eppo (by Datadog) | Server-side experimentation | SaaS, warehouse-native | Custom (post-Datadog acquisition) | Data teams already on Snowflake/BigQuery |
| GrowthBook | Server-side experimentation | Open core (MIT + enterprise) | Free self-host / $40/user/mo Pro | Warehouse-native without enterprise pricing |
| Statsig (now OpenAI) | Server-side experimentation | SaaS | Free 2M events/mo | High-volume product experimentation |
| Optimizely Feature Experimentation | Server-side experimentation | SaaS | Custom enterprise | Companies on the full Optimizely DXP |
| Amplitude Experiment | Server-side experimentation | SaaS | Add-on to Amplitude Plus/Growth | Teams already running Amplitude analytics |
| VWO (merged with AB Tasty) | Client-side A/B | SaaS | From ~$314/mo (Growth) | Mid-market CRO programs |
| Optimizely Web Experimentation | Client-side A/B | SaaS | Custom enterprise | Enterprise web teams with QA budget |
| Convert.com | Client-side A/B | SaaS | From $399/mo | Privacy-conscious CRO agencies |
| Kameleoon | Client-side A/B + personalization | SaaS | Custom enterprise | Personalization-heavy use cases |
| Adobe Target | Client-side A/B + AI personalization | SaaS (Adobe DX) | Custom enterprise | Adobe Experience Cloud customers |
| ABsmartly | Statistical engine | SaaS | Custom enterprise | B2C apps that need Bayesian + sequential |
| PostHog | All-in-one (analytics + flags + experiments) | Open source | Free up to 1M events + 1M flag req | Startups that want the whole stack free |
| Mida.so | Lightweight client-side A/B | SaaS | Free up to 50k MTU | Marketing teams that hate flicker |
| ExperimentHQ | Lightweight client-side A/B | SaaS | Free up to 50k MTU | Google Optimize refugees |
| Sigmize | WordPress A/B + heatmaps | SaaS | From $19/mo | WordPress and WooCommerce sites |
| Apptimize (Airship) | Mobile A/B + feature release | SaaS | Custom enterprise | iOS/Android consumer apps |
| Crazy Egg | Behavioral + A/B | SaaS | From $99/mo | Heatmaps + lightweight CRO |
| Contentsquare (Hotjar) | Experience analytics + A/B | SaaS | Custom enterprise | Session replay + experimentation overlap |