Every A/B testing guide eventually gets to this question. Most of them answer it with "it depends" and then list five variables without giving you a number. That's technically correct and completely useless.
So here's a number: if the page you want to test gets at least 1,000 visitors a month, you can run a meaningful A/B test. You'll need to be patient (two to four weeks per test), and you'll need to test changes that are big enough to matter. But it's workable.
If you're getting 500 or fewer visitors a month to that page, A/B testing probably isn't the best use of your time right now. There are better things to do instead, and we'll get to those.
If you're somewhere between 500 and 1,000, it depends. (Sorry. But at least we gave you the number first.)
Why the answer is 1,000 (and not 100 or 10,000)
A/B testing is a statistics problem. You're trying to figure out whether the difference in conversion rates between two versions is real or just random noise. The more visitors you have, the faster you can tell the difference.
Three things determine how much traffic you actually need:
Your current conversion rate. Higher conversion rates need less traffic because there are more conversions to measure. A page that converts at 5% gives you signal faster than one that converts at 1%. If your homepage converts 50 out of every 1,000 visitors and you split traffic evenly, each variant sees 500 visitors and roughly 25 conversions. At a 1% conversion rate, that drops to about 5 conversions per variant. More conversions, clearer signal.
How big the difference between the two versions is. If your new headline is dramatically better than the old one (say, it doubles conversions), you'll see the result quickly with relatively few visitors. If it's only slightly better (a 5% improvement), you'll need a lot more data to be confident the difference is real and not a fluke.
This is why the advice to "test big changes" keeps coming up. It's not just because big changes are more interesting. It's because they produce bigger effects, which need less traffic to detect.
How confident you want to be. The industry standard is a 95% confidence level. SplitPea uses 90% as the threshold for calling a winner, because for most website decisions, 90% certainty that you're making the right call is plenty. You're not running a clinical trial. You're deciding between two headlines.
Lowering the confidence threshold from 95% to 90% meaningfully reduces the traffic you need. It's a trade-off: you're accepting a higher chance of a false positive (10% instead of 5%) in exchange for getting answers faster. For small businesses with limited traffic, that's usually the right trade-off.
Put those three factors together and 1,000 monthly visitors is roughly where testing becomes practical. Not easy, not instant, but practical.
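If you want to see how the three factors interact, the standard sample size formula for comparing two proportions captures all of them. Here's a rough sketch in Python (illustrative numbers and a textbook formula, not SplitPea's actual calculation):

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(baseline, lift, confidence=0.90, power=0.80):
    """Approximate visitors needed per variant to detect a relative
    lift over a baseline conversion rate (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    alpha = 1 - confidence
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.64 at 90% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 at 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A big change: doubling a 3% conversion rate
print(visitors_per_variant(0.03, 1.00, confidence=0.90))  # ~590 per variant
print(visitors_per_variant(0.03, 1.00, confidence=0.95))  # ~749 per variant

# A subtle tweak: a 5% relative improvement on the same page
print(visitors_per_variant(0.03, 0.05, confidence=0.90))  # ~160,000+ per variant
```

Notice the pattern: a big change at 90% confidence needs roughly 1,200 visitors total across both variants, which a page with 1,000+ monthly visitors can deliver in a few weeks. The same page with a subtle tweak needs hundreds of thousands.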
What testing looks like at different traffic levels
500 visitors/month or less
Testing is technically possible but slow. A test might take two to three months to reach any kind of confidence, and by then your business context has probably changed enough to make the result less relevant.
At this level, you're better off spending your energy on other things. More on that below.
1,000–3,000 visitors/month
This is where most small business websites sit, and it's perfectly testable. You'll run one test at a time (don't split your traffic across multiple experiments). Each test will take two to four weeks. You'll want to test changes that are meaningfully different, not subtle tweaks.
Good tests at this level: a completely rewritten headline, a simplified contact form, adding or removing a hero image, changing your CTA from "Contact Us" to "Get a Free Quote."
Bad tests at this level: "Learn More" versus "Read More," green button versus blue button, slightly different font sizes. The effects of these changes are too small to detect with this traffic.
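To sanity-check the "two to four weeks" figure, you can turn a sample size requirement into a duration. The numbers below are illustrative: roughly 600 visitors per variant is in the ballpark for detecting a doubled conversion rate on a 3%-converting page at 90% confidence.

```python
from math import ceil

def weeks_to_finish(visitors_per_variant, monthly_page_visitors, variants=2):
    """Estimate how many full weeks a test needs, given the page's traffic."""
    weekly = monthly_page_visitors / 4.33  # average weeks per month
    total_needed = visitors_per_variant * variants
    return ceil(total_needed / weekly)

# ~600 visitors per variant on a page with 2,000 monthly visitors
print(weeks_to_finish(600, 2000))  # 3 weeks

# The same test at 500 monthly visitors
print(weeks_to_finish(600, 500))  # 11 weeks, i.e. two to three months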
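To sanity-check the "two to four weeks" figure, you can turn a sample size requirement into a duration. The numbers below are illustrative: roughly 600 visitors per variant is in the ballpark for detecting a doubled conversion rate on a 3%-converting page at 90% confidence.

```python
from math import ceil

def weeks_to_finish(visitors_per_variant, monthly_page_visitors, variants=2):
    """Estimate how many full weeks a test needs, given the page's traffic."""
    weekly = monthly_page_visitors / 4.33  # average weeks per month
    total_needed = visitors_per_variant * variants
    return ceil(total_needed / weekly)

# ~600 visitors per variant on a page with 2,000 monthly visitors
print(weeks_to_finish(600, 2000))  # 3 weeks

# The same test at 500 monthly visitors
print(weeks_to_finish(600, 500))  # 11 weeks, i.e. two to three months
```

This is also why the 500-visitor tier above is so slow: the same test that takes three weeks at 2,000 monthly visitors stretches past two months at 500.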
3,000–10,000 visitors/month
Now you've got room to be a bit more precise. Tests run faster (one to two weeks is often enough). You can test subtler changes and still get results. You can start running tests on secondary pages, not just your homepage.
10,000+ visitors/month
You can run multiple concurrent tests on different pages. You can test smaller changes. You can be pickier about confidence levels. This is where the enterprise tools earn their price. But if you're reading this article, you probably aren't here yet, and that's fine.
The traffic that matters is page traffic, not site traffic
One thing that trips people up: the traffic number that matters is visitors to the specific page you're testing, not total site traffic.
If your site gets 3,000 visitors a month but your pricing page only gets 400 of those, save it for later. Test your homepage instead, where the traffic actually is.
This is why homepage tests and landing page tests are the best starting point for small businesses. They're usually the highest-traffic pages on the site. Test where the visitors are.
What to do if you don't have enough traffic
If your traffic is below the threshold, don't just sit around waiting for it to grow. There's real work you can do that doesn't require statistical significance.
Fix the obvious stuff. You don't need a test to know that a six-second page load is bad, that a broken mobile layout is losing you visitors, or that a homepage without a clear CTA is confusing. Go through your site as if you're a first-time visitor. Is it obvious what you do? Is it obvious what the visitor should do next? Can they find your phone number or contact form within five seconds? Fix what's broken. These aren't A/B test questions. They're basic usability.
Ask real people. Show your website to five people who aren't familiar with it. Ask them what the site is about and what they'd do next. Watch where they get confused. This kind of qualitative feedback is often more valuable than a test result, especially when you're early.
Use best practices for the decisions that don't need testing. There are some things that are almost always true. A specific headline beats a vague one. A visible CTA above the fold beats a hidden one. A shorter form gets more submissions than a longer one. Social proof near the CTA builds trust. These aren't guaranteed winners in every case, but they're the right starting defaults. Implement them and test later when you have the traffic.
Focus on getting more traffic first. If your site gets 200 visitors a month, the highest-impact thing you can do is get it to 1,000. SEO, content marketing, a Google Business Profile, local directories, social media, paid ads if the economics make sense. More traffic solves the testing problem and a lot of other problems too.
Track micro-conversions. If your main conversion (a purchase, a phone call, a form submission) happens too rarely to measure, track smaller actions that lead to it. Button clicks, scroll depth, time on page. These happen more frequently and give you signal about whether a change is moving behaviour in the right direction, even if you can't prove it's driving final conversions yet.
How long should you run a test?
Even with enough traffic, time matters. Don't call a test after three days just because one version looks like it's winning.
Minimum: one full week. Traffic patterns change between weekdays and weekends. If you only run a test Monday through Thursday, you're missing half the picture. A full week catches the rhythm.
Better: two weeks. This gives you two complete cycles and smooths out any one-off spikes (a social media post driving unusual traffic on Tuesday, a holiday weekend, etc.).
Maximum: six weeks. If a test hasn't reached confidence after six weeks, it's probably not going to. The two versions likely perform about the same, which is useful to know. End the test, keep either version, and try a bigger change next time.
Don't peek obsessively. Check at the end of each week. Early results are unreliable because small sample sizes produce wild swings. A version that's "winning by 40%" after two days might be dead even after two weeks. The confidence number is what matters, not the raw conversion rate.
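Those wild early swings come straight from sampling error: the uncertainty in a measured conversion rate shrinks with the square root of the sample size. A quick sketch, assuming a page that truly converts at 3%:

```python
from math import sqrt

def margin_of_error(rate, visitors, z=1.645):
    """Approximate 90% margin of error for a measured conversion rate."""
    return z * sqrt(rate * (1 - rate) / visitors)

true_rate = 0.03
for visitors in (100, 500, 2000):
    moe = margin_of_error(true_rate, visitors)
    print(f"{visitors} visitors: 3% ± {moe:.1%}")
```

After 100 visitors, a true 3% rate can plausibly measure anywhere from near zero to almost 6%, so a two-day "winner by 40%" is entirely unremarkable noise. After 2,000 visitors, the margin is down to about half a percentage point, and the comparison starts to mean something.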
The bottom line
You need about 1,000 monthly visitors to the page you're testing. More is better. Less is manageable if you test big changes and accept 90% confidence instead of 95%.
If you're not there yet, don't force it. Fix the basics, ask real people for feedback, and grow your traffic. When you're ready, the test will take you about five minutes to set up.
A/B testing rewards patience. The first test takes a few weeks. The result changes one page. But the second test changes another page, and the third changes another, and six months later your site is meaningfully better than it was. That's the game. You just need enough visitors to play it.