Can you trust your Facebook A/B testing results?

I recently read about a Facebook ad A/B test of 26 design variations.

The article drew several conclusions, showing which ads outperformed others. However, there was no mention of the experiment’s setup or budget.

Which left me wondering… Were the test results even statistically valid?

Most likely, they weren’t.

However, this wasn’t just one bad example. Many marketers make the same mistake of running Facebook A/B tests without realizing their results are skewed.

Don’t make the same mistake!

This article explains how to get more out of your A/B tests by ensuring your Facebook ad split test results are valid.

You’ll learn:

  • What should you test to get a high ROI?
  • How do you ensure your results are statistically significant?
  • How do you set the right A/B testing budget?
  • What’s the best A/B testing campaign structure?
  • When and how should you conclude your split tests?

Alright, let’s get started!

Rule #1: Test One Element at a Time

As you get started with Facebook advertising, you’ll realize that there are so many things to test: ad image, ad copy, target audience, bidding methods, campaign objective, etc.

The rookie mistake you’re likely to make at this point is creating an A/B test with multiple changing variables.

Let’s say you want to test 3 ad images, 3 headlines, and 3 versions of body copy. That makes 3 x 3 x 3 = 27 different Facebook ads, and a test that size will take weeks to conclude.

It would make much more sense to test one of these ad elements at a time, e.g. three different Facebook ad designs.

📍 The fewer ad variables you have, the quicker you’ll get relevant test results.

Here’s a great illustration by ConversionXL, showing what will happen if you try to test too much stuff at once.

It gets super messy – Image source

Which leads us to…

Rule #2: Test a Reasonable Number of Variations

Even when testing a single ad element, you may be tempted to create tens of variations with small alterations.

Here’s an example of an A/B test that overdid the number of tested ad design variations.

It will cost them $3k+ to get valid results – Image source

It doesn’t make sense to test that many ad variations at once: either Facebook will start to auto-optimize your ads too soon, or your target audience will see 20+ different ads from you.

That’s going to be one expensive (and most likely, annoying) Facebook A/B test.

Rule #3: Test 3-5 Highly Differentiated Ad Variations

It’s best to start by testing 3-5 Facebook ad variations.

With one caveat.

If you haven’t yet found your perfect ad copy or ad design, you should aim to experiment with highly different ad variations.

It won’t make much difference to your target audience if you change a few words or move your product around in the image a bit.

However, as you test highly differentiated variations, you’ll gain insight into the type of ad design or ad copy people prefer, and you can expand on it later.

For example, at Scoro, we’ve tested many different ad designs to find the one that works best.

First, we A/B tested 3 highly different ad designs

Each of these ads takes a completely different design angle.

Later, we could use the winning Facebook ad variation to create more similar designs for further testing.


Later, we split tested the winning design’s alterations

Here’s the formula for smart Facebook ad testing:

A/B test 3-5 variations ➡ Find a winning variation ➡ A/B test the winner’s alterations

Rule #4: Test Ad Elements With the Highest Impact

Hate to break this to you, but…

Not all your split testing ideas are gold. 💡💡💡≠💰

And with limited marketing budgets, you’ll need to find the ad elements that have the highest experiment ROI. Otherwise, you’ll be missing out on awesome discoveries.

When searching for Facebook ad A/B testing ideas, think about which ad element could have the highest effect on your click-through and conversion rates.

If you’re looking for a more detailed prioritization framework, check out the ones by Optimizely and ConversionXL.

Start by A/B testing the most promising ad elements – Image source

If you’re unsure which test ideas to include on your prioritization spreadsheet, you’ll have more ideas in a few secs.

Rule #5: Know Your Testing Options

AdEspresso studied data from over $3 million worth of Facebook ad experiments and listed the campaign elements with the highest split testing ROI:

  • Countries
  • Precise interests
  • Facebook ad goals
  • Mobile OS
  • Age ranges
  • Genders
  • Ad designs
  • Titles
  • Relationship status
  • Landing page
  • Interested in

However, take this list with a huge grain of salt.

If you already know your target audience’s locations and demographics, much of this list becomes irrelevant to your A/B testing strategy.

Instead, you might want to split test the following Facebook campaign elements:

  • Ad design
  • Ad copy, especially the headline
  • Your unique value offer
  • Ad placements
  • Call-to-action buttons
  • Bidding methods
  • Campaign objectives

Weighing in with my own Facebook A/B testing experience, I’d say we’ve seen the biggest gains from testing ad audiences (Lookalike vs. Custom audiences), ad design, and value offers.

Rule #6: Use the Right Facebook Campaign Structure

When testing multiple Facebook ad designs or other in-ad elements, you’ve got two options for structuring your A/B testing campaigns:

1. A single ad set — all your ad variations are within a single ad set.

A/B test campaign structure 1

The upside of this option is that your target audience won’t see all your ad variations multiple times, as can happen with multiple ad sets targeting the same audience.

However, this A/B testing campaign structure has a huge downside: Facebook will quickly start to auto-optimize your ads, favoring one variation and starving the rest of impressions before you have valid results.

I suggest that you go with the second option:

2. Multiple single-variation ad sets — each ad variation is in a separate ad set.

A/B test campaign structure 2

As you place every ad variation in a separate ad set, Facebook will treat each ad set as a separate entity and won’t auto-optimize based on too little data.

However, it might happen that the same people will see multiple ad variations in the course of your experiment. (That’s not necessarily a bad thing as you’ll learn what finally makes them click and convert.)

If you want to get valid Facebook testing results, set up a campaign where each variation’s in a separate ad set.
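If you manage campaigns programmatically, here’s roughly what that structure looks like with the facebook_business Python SDK (a hedged sketch, not a copy-paste recipe: the token, IDs, budget, bid, and targeting below are placeholders, and required fields vary by objective and bid strategy):

```python
# One ad set per ad variation, all targeting the same audience.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_YOUR_AD_ACCOUNT_ID")

targeting = {"geo_locations": {"countries": ["US"]}}  # same audience everywhere

for variation in ["Design A", "Design B", "Design C"]:
    account.create_ad_set(params={
        AdSet.Field.name: f"A/B test - {variation}",
        AdSet.Field.campaign_id: "YOUR_CAMPAIGN_ID",
        AdSet.Field.daily_budget: 1000,  # in minor currency units, e.g. cents
        AdSet.Field.billing_event: AdSet.BillingEvent.impressions,
        AdSet.Field.optimization_goal: AdSet.OptimizationGoal.link_clicks,
        AdSet.Field.bid_amount: 100,  # bid cap, also in minor units
        AdSet.Field.targeting: targeting,
        AdSet.Field.status: AdSet.Status.paused,  # review before going live
    })
    # ...then create exactly one ad (that variation's creative) inside each ad set.
```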

Rule #7: Ensure Your A/B Test Results Are Valid

Do you know when’s the best time to analyze your A/B test results and conclude the experiment?

Is it three days after the campaign activation? Five days? Two weeks?

Or what would you do if Variation A had a CTR of 0.317% and Variation B a CTR of 0.289%?

For example, how would you conclude the experiment below? 👇


Would you conclude this A/B test?

Truth be told, the test above should not be concluded yet, as there isn’t enough data to tell which variation truly performs best.

To make sure your A/B tests are valid, you’ll need to have a sufficient amount of results to draw conclusions.

The best way to guarantee the quality of your Facebook ad test results is to use a calculator. (A very specific kind of calculator.)

Rule #8: Always Mind the Statistical Significance

If you want your Facebook tests to give valuable insights, put them through an A/B significance test to determine if your results are valid.

Test your test’s validity

Instead of website visitors, enter the no. of impressions for a specific ad variation or ad set. Instead of web conversions, enter the no. of ad clicks or ad conversions.

Look for a confidence level of 90% or more before you conclude any test.
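If you’d rather run the numbers yourself, here’s a minimal Python sketch of the two-proportion z-test that typical A/B significance calculators use under the hood (scipy is assumed, and the click and impression counts are hypothetical, picked to match the CTR example from Rule #7):

```python
from math import sqrt
from scipy.stats import norm

def ab_confidence(clicks_a, imps_a, clicks_b, imps_b):
    """Confidence level that two variations' CTRs truly differ."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)  # CTR assuming no difference
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    return 1 - 2 * (1 - norm.cdf(abs(z)))  # 1 minus the two-sided p-value

# Hypothetical counts behind the earlier example: 0.317% vs. 0.289% CTR
print(f"{ab_confidence(317, 100_000, 289, 100_000):.0%}")  # ~75%: below 90%, keep testing
```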

Tip: Wait at least 24h after publishing before evaluating your split test results. Facebook’s algorithms need some time to optimize your campaign and start delivering your ads to people.

According to an article on ConversionXL, there’s no magical number of conversions you need before concluding your A/B test.

However, I’d suggest that you collect at least 100 clicks/conversions per variation before pausing the test. Even better if you’re able to collect 300 or 500 conversions per variation.
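There’s no universal number because the sample you need depends on your baseline conversion rate and the smallest lift you want to detect. As a rough illustration (the 2% baseline and 30% lift below are made-up inputs), here’s the standard two-proportion sample size formula in Python:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variation(base_rate, lift, confidence=0.90, power=0.80):
    """Impressions needed per variation to detect a relative lift."""
    p1, p2 = base_rate, base_rate * (1 + lift)
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = norm.ppf(power)
    return ceil((z_alpha + z_beta) ** 2
                * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2) ** 2)

n = sample_size_per_variation(0.02, 0.30)  # 2% conversion rate, 30% lift
print(n)                # ~7,700 impressions per variation...
print(round(n * 0.02))  # ...or roughly 150+ conversions on the baseline variation
```

That lands comfortably inside the 100-500 range above; subtler lifts push the requirement up fast.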

Rule #9: Know Your A/B Testing Budget

The logic is simple: The more ad variations you’re testing, the more ad impressions and conversions you’ll need for statistically significant results.

So, what’s the best formula for calculating your Facebook ad budget?

What’s your perfect Facebook testing budget?

It’s quite simple:

Average Cost-per-conversion x No. of Variations x Needed Conversions

Start by looking at your other Facebook campaigns and defining your average cost-per-conversion.

Let’s say your goal is to get people clicking on your Facebook ad, and the average cost-per-click for past campaigns has been $0.80.

Let’s continue the hypothesizing game, and say you’re looking to split test 5 different ad variations.

To get valid test results, you’ll need around 100-500 conversions per ad variation.

So, the formula to calculate your budget would be:

$0.80 x 5 x 300 = $1,200
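And if you prefer a reusable helper over napkin math, here’s the same formula in a few lines of Python (using the hypothetical numbers from above):

```python
def ab_test_budget(cost_per_conversion, variations, conversions_per_variation):
    """Budget = avg. cost-per-conversion x no. of variations x conversions each."""
    return cost_per_conversion * variations * conversions_per_variation

print(ab_test_budget(0.80, 5, 300))  # -> 1200.0
```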

Now, before you bury all the hopes of getting statistically significant A/B test results, consider this:

You can cheat a little. 😉

If one of your test variations is outperforming others by a mile, you can conclude the experiment much sooner. (You should still wait for at least 50 conversions on each variation.)

Rule #10: Don’t Track the Wrong Metrics

As you look at your Facebook A/B test results, there will be lots of metrics to consider: ad impressions, cost-per-click, click-through rate, cost-per-conversion, conversion rate.

Which metrics should you measure in order to discover the winning ad variation?

It’s not the cost-per-mille (CPM) or the click-through rate. These are so-called vanity metrics that give you little real insight into your campaign’s performance.

Always track the cost-per-conversion as your most important goal.

📍 Always measure the cost-per-conversion

Cost-per-conversion is your single most important ad metric as it tells you how much it cost you to turn a person into a lead or client. And most of the time, getting new customers is the ultimate goal in your Facebook ad strategy.

You can read an in-depth article on Facebook ad goal setting here: HOW TO SET FACEBOOK AD GOALS FOR PHENOMENAL RESULTS

Facebook A/B Testing Rulebook

To make your life easier, here’s the list of all Facebook split testing rules. You better copy-paste these to your growth hacks document (doesn’t every marketer have one…) or bookmark this article for reference! 🙌

Rule #1: Test one element at a time
Rule #2: Test a reasonable number of variations
Rule #3: Test 3-5 highly differentiated ad variations
Rule #4: Test ad elements with the highest impact
Rule #5: Know your testing options
Rule #6: Use the right Facebook campaign structure
Rule #7: Ensure your A/B test results are valid
Rule #8: Always mind the statistical significance
Rule #9: Know your A/B testing budget
Rule #10: Don’t track the wrong metrics

Whatcha think? Anything we missed?