A/B Test Duration Calculator
Calculate how long to run experiments for statistical significance
e.g., a 20% relative MDE means detecting a 5% → 6% change
What This Means
You need 8,162 visitors per variant to detect a 20% relative change (5% → 6%).
With 95% confidence, there's only a 5% chance of a false positive.
With 80% power, you have an 80% chance of detecting a real effect if it exists.
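The figures above follow from the standard two-proportion z-test sample-size formula. A minimal sketch in Python (the result lands within a few visitors of 8,162; the exact number depends on whether a pooled or unpooled variance estimate is used):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-variant sample size for a two-sided, two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)  # unpooled variance
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Baseline 5%, detecting a 20% relative lift (5% -> 6%)
n = sample_size_per_variant(0.05, 0.06)
print(n)  # ~8,155 with this variant of the formula
```

The same inputs with a pooled-variance formula give a slightly larger figure, which is why different calculators report numbers a few visitors apart.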
Quick Reference
A/B Testing Best Practices
- Don't peek at results early - it inflates false positive rates
- Run tests for full weeks to account for day-of-week effects
- Test one change at a time for clear attribution
- Larger effects are easier to detect (need less traffic)
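The last point is easy to see numerically: required sample size falls roughly with the square of the effect size. A quick sketch, assuming a 5% baseline conversion rate, 95% confidence, and 80% power:

```python
from math import ceil
from statistics import NormalDist

z = NormalDist()
z_alpha = z.inv_cdf(0.975)  # 95% confidence, two-sided
z_beta = z.inv_cdf(0.80)    # 80% power
baseline = 0.05             # assumed 5% baseline conversion rate

sizes = {}
for relative_mde in (0.05, 0.10, 0.20, 0.50):
    target = baseline * (1 + relative_mde)
    variance = baseline * (1 - baseline) + target * (1 - target)
    sizes[relative_mde] = ceil((z_alpha + z_beta) ** 2 * variance
                               / (target - baseline) ** 2)

for mde, n in sizes.items():
    print(f"{mde:.0%} lift -> {n:,} visitors per variant")
```

Doubling the minimum detectable effect cuts the required traffic by roughly 4x, which is why testing a bolder change is often the only practical option on a low-traffic site.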
What is A/B Test Duration Calculator?
An A/B test duration calculator determines the sample size and time required to achieve statistically significant results from a controlled experiment. It accounts for baseline conversion rate, minimum detectable effect (MDE), statistical power, and significance level to prevent premature or inconclusive test conclusions.
How to use this calculator
1. Enter your baseline conversion rate — the current rate before any changes.
2. Set your minimum detectable effect: the smallest relative improvement worth detecting (e.g., 10%).
3. Adjust statistical power (80% is standard) and significance level (95% or 99%).
4. Review the required sample size per variant and estimated test duration based on your daily traffic.
5. Use the duration to decide if the test is worth running or if you need to test a bolder change.
Why this matters for founders
Ending A/B tests too early leads to false positives: you ship changes that do not actually improve anything. Running them too long wastes traffic and delays decision-making. Proper test duration planning ensures you make real, data-driven product decisions.