
Facebook’s split test functionality lets you run A/B tests within the Facebook Ads Manager. Surely this should help you ascertain which ads perform well and which don’t? Here are 3 reasons why we recommend not using this functionality.




ABOUT ROCKETSHIP HQ: Website | How Things Grow | LinkedIn | Twitter | YouTube


FULL TRANSCRIPT BELOW: 

Today I’m going to talk about Facebook’s split test tool and why it isn’t optimal to use on your campaigns. Now at the outset the split test tool does appear to be perfect. It allocates an equal number of impressions to each ad variant, and each variant gets roughly similar spend and therefore similar exposure, so you get as close to an apples-to-apples comparison between the ads as you can possibly get.

Now there are three problems with this approach. Let’s look at each of them.

1. Equal impressions to each ad variant is great in theory, but that’s not how ads work in the wild.

Let’s say you have five to 10 ads in a Facebook ad set. They do not each get the same number of impressions. The best ad gets the lion’s share of impressions and the rest get far fewer, and therefore when you run a split test, the results do not correspond to how your ads are going to perform in the wild. We’ve often seen that the winners of a split test tend not to be the best performers in regular ad sets.

2. Split tests can get expensive to run.

You need a lot of money to run a split test compared to a regular ad set where you run multiple ads. Let’s see how this happens. Let’s assume you have five variants that you want to A/B test against each other, and we test them using Facebook’s split test tool.

Let’s assume we have a CPA of $50. Now in order to get to 30 conversions per ad for statistical significance, you need 30 conversions x $50 CPA x 5 ads, which is $7,500 to run this particular split test. For the vast majority of advertisers, it can be prohibitive to spend that sort of budget on a creative test.
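To make the arithmetic concrete, here is a minimal sketch of the budget math. The figures ($50 CPA, 30 conversions per variant as a rough significance threshold, 5 variants) are the illustrative assumptions from the example above, not thresholds prescribed by Facebook:

```python
# Rough budget estimate for a Facebook split test.
# All numbers are illustrative assumptions from the example above.

def split_test_budget(cpa: float, variants: int, conversions_per_variant: int = 30) -> float:
    """Estimated spend needed for each variant to reach the target number of conversions."""
    return cpa * variants * conversions_per_variant

# 5 variants at a $50 CPA, targeting ~30 conversions per variant:
print(split_test_budget(cpa=50, variants=5))  # 7500.0
```

Plug in your own CPA and variant count to see how quickly the required budget grows.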

If you wait for statistical significance, you have to wait for each of your 5 to 10 ads to get enough data. This can cost you thousands of dollars. Even if you don’t wait for full statistical significance, it can be expensive – and of course with this approach you are still not getting reliable results comparable to those you would get from running ads in the wild.

3. After a split test, the learning phase of your ad sets gets reset.

This is because when you end a split test, you end up pausing the underperformers and running only the winning variant. And when you run the winning variant, you have to run it with other ads in a ‘business as usual’ ad set.

Your ads’ learning phase gets reset. You could keep running ads within the split test with just the winner – but that’s not optimal either, because you don’t really want to run one ad per ad set. So really, after you run any sort of Facebook split test, your learning phase is going to get reset no matter what; eventually you’re going to run ads in the wild without a split test anyway, and your learnings from the split test aren’t really going to count for anything.

So for these three reasons, we typically do not run split tests within Facebook.

For these reasons, we typically recommend doing Facebook creative testing within normal ad sets: add all your ad variants to normal ad sets and let Facebook surface the best performer to the top. If something underperforms, pause it and let the next best ad surface to the top, so you can control performance through bidding and optimizations.

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth at rocketshiphq.com. We read all reviews, and we want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST-IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.