
There are a couple of key challenges in testing creatives on Facebook:

-> Given that the learning phase can impact performance, how do you mitigate its effects?

-> How do you pre-empt creative saturation – and keep finding new winners?

In this episode, we outline our approach to creative testing on Facebook – and address the key questions and considerations that we deal with.




ABOUT ROCKETSHIP HQ: Website | How Things Grow | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

😬Why Facebook’s split-testing feature can fall short 

3️⃣The 3 key challenges in Facebook ad testing

💡How we recommend dealing with these challenges — separate ad sets into “core” and “test” ad sets

📲Budgeting, testing, and optimizing “core” and “test” ad sets

🤷🏻‍♀️FAQs on tactical execution.

FULL TRANSCRIPT BELOW:

There’s no clear documentation from Facebook on the best way to test creatives. Yes, Facebook does offer a split-testing feature – but it can be sub-optimal for a number of reasons.

There are three key challenges in FB ad testing:

-> If you add a new creative to an existing ad set with history and proven performance, it can reset the ad set’s learning phase (and cause performance to deteriorate).

-> Ads that are proven and are performing well can often start to deteriorate due to creative or audience saturation.

-> If you add new creatives to a completely new ad set, that can often result in audience overlap – and adversely impact performance.

How do we deal with these challenges? What we’ve found effective is to use a portfolio strategy – and to separate our ad sets into ‘core’ and ‘test’ ad sets.

Core ad sets: These are ad sets with proven audiences & proven ads – ads that have shown strong CPA/ROAS numbers in the past.

Test ad sets: These are the ad sets in which we run new & untested ads/concepts. Their goal is to surface new winners that can replace ads in the core ad sets as those get saturated.

How do these work in practice? Here are some FAQs about our approach:

How much budget do you put into core ad sets?

These ad sets are meant to drive strong performance – they make up 90%+ of your budgets and carry your campaigns through.
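To make that split concrete, here’s a minimal Python sketch. The 90/10 shares are the guideline above; the dollar figures are purely illustrative.

```python
# A minimal sketch of the core/test budget split described above.
# The shares follow the episode's guidance; treat them as a starting
# point, not a rule.

CORE_SHARE = 0.90  # proven audiences + proven ads
TEST_SHARE = 0.10  # new, unproven creatives

def split_budget(total_daily_budget: float) -> dict:
    """Split a total daily budget between core and test ad sets."""
    return {
        "core": round(total_daily_budget * CORE_SHARE, 2),
        "test": round(total_daily_budget * TEST_SHARE, 2),
    }

# Example: a $1,000/day account puts $900 into core, $100 into test.
print(split_budget(1000.0))  # {'core': 900.0, 'test': 100.0}
```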

What happens when these proven ads in core ad sets start to see creative saturation? 

We pause ads with deteriorating performance. We prefer to run 8-10 ads per ad set – so we’re able to pause individual ads without significantly disturbing the ad set’s learning phase.
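If you automate this, the check might look something like the sketch below. It uses Meta’s facebook_business Python SDK for the pause itself; the get_recent_cpa() helper, the CPA goal, and the 30% deterioration threshold are hypothetical stand-ins for your own reporting and targets.

```python
# A sketch of the "pause deteriorating ads" step, using Meta's
# facebook_business Python SDK. TARGET_CPA, DETERIORATION and
# get_recent_cpa() are hypothetical stand-ins, not recommendations.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.ad import Ad

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")

TARGET_CPA = 25.0    # assumed CPA goal, in account currency
DETERIORATION = 1.3  # pause ads running 30%+ above goal (assumption)

def get_recent_cpa(ad_id):
    """Hypothetical helper: the ad's CPA over a recent lookback window."""
    raise NotImplementedError("wire this up to your own reporting")

def pause_deteriorating_ads(ad_ids):
    for ad_id in ad_ids:
        if get_recent_cpa(ad_id) > TARGET_CPA * DETERIORATION:
            # With 8-10 ads live per ad set, pausing one ad leaves
            # the ad set's learning phase largely undisturbed.
            Ad(ad_id).api_update(params={Ad.Field.status: Ad.Status.paused})
```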

How else can you optimize the performance of core ad sets?

We can still calibrate and optimize the performance of these ad sets by making small changes to bids and budgets (typically less than 10% per day) so as not to disrupt the learning phase.
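As a sketch, a guardrail that enforces that sub-10% daily change might look like this. The cap is the guideline above; the AdSet update uses Meta’s facebook_business SDK, where daily_budget is expressed in minor currency units (cents).

```python
# A sketch of the "<10% per day" guardrail on budget changes.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")

MAX_DAILY_CHANGE = 0.10  # never move a budget more than 10% in a day

def clamp_change(current, desired):
    """Limit a move to +/-10% of the current value."""
    lo = int(current * (1 - MAX_DAILY_CHANGE))
    hi = int(current * (1 + MAX_DAILY_CHANGE))
    return max(lo, min(desired, hi))

def nudge_daily_budget(adset_id, current_cents, desired_cents):
    new_budget = clamp_change(current_cents, desired_cents)
    AdSet(adset_id).api_update(params={AdSet.Field.daily_budget: new_budget})

# Example: a $100/day ad set asked to jump to $150 only moves to $110.
print(clamp_change(10_000, 15_000))  # 11000 (cents)
```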

What happens when your test ad sets surface new winners?

Whenever we see a clear & sustained winner in a test ad set (more on test ad sets below), we add it to the core ad set – accepting the performance deterioration from disrupting the learning phase, knowing that the new winner can more than make up for it.
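What counts as “clear & sustained” is a judgment call – but as one hypothetical rule of thumb in Python, the 7-day window and minimum daily spend below are assumptions to calibrate for your own account.

```python
# A sketch of one way to define a "clear & sustained winner" before
# promoting a test ad into a core ad set. Thresholds are assumptions.

MIN_DAYS = 7            # must beat the core benchmark every day
MIN_DAILY_SPEND = 50.0  # avoid promoting on thin data (assumption)

def is_sustained_winner(daily_cpa, daily_spend, core_cpa):
    """True if the test ad beat the core CPA, on real spend, for
    MIN_DAYS consecutive days."""
    recent = list(zip(daily_cpa, daily_spend))[-MIN_DAYS:]
    if len(recent) < MIN_DAYS:
        return False
    return all(cpa < core_cpa and spend >= MIN_DAILY_SPEND
               for cpa, spend in recent)

# Example: 7 days under a $25 core CPA, on $60+/day of spend -> promote.
print(is_sustained_winner([22, 21, 23, 20, 22, 24, 21],
                          [60, 65, 70, 62, 61, 66, 68], 25.0))  # True
```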

Don’t your test ad sets perform badly just because the creatives are not proven?

Yes – these ad sets will typically perform far worse than your core ad sets, simply because the creatives are unproven and untested.

How do you justify losing money on test ad sets?

We typically deploy about 5-10% of our budgets into these test ad sets, knowing that they may lose money – and that this loss will be more than made up for by the winners they surface, which then graduate into the core ad sets. For what it’s worth, this ‘loss’ is essential to finding and surfacing winners.
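Here’s a worked sketch of that trade-off – every number in it is an illustrative assumption, not a benchmark.

```python
# A worked sketch of the economics behind the 5-10% test budget.
# All figures are illustrative assumptions.

monthly_budget = 30_000.0
test_share = 0.10
test_spend = monthly_budget * test_share   # $3,000 of "tuition"
core_spend = monthly_budget - test_spend   # $27,000 on proven ads

core_cpa = 25.0       # CPA with fresh winners rotating in
saturated_cpa = 30.0  # assumed CPA if creatives saturate unchecked

with_testing = core_spend / core_cpa           # 1,080 conversions
without_testing = core_spend / saturated_cpa   # 900 conversions

# Testing "loses" $3,000 but preserves ~180 conversions a month here;
# it pays for itself whenever those conversions are worth more than
# the test spend.
print(with_testing - without_testing)  # 180.0
```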

What do you do if you want to expand to a new audience (say, an x% lookalike of your most active users that you’ve never tested before)?

You test the new audience ONLY with proven creatives + variants of proven creatives – since you want the new audience to have the best possible chance of success. You add in new creatives only when you have new winners that you want to introduce.

Won’t audience overlap between test ad sets and core ad sets hurt you?

There are two ways to deal with this:

-> Run your test ad sets on a completely new audience (different from what you have going on your core ad sets).

-> Run your test ad sets on a smaller budget than core ad sets.

Audience overlap is much, much less of an issue when you’re targeting large audiences – and a test ad set with a small budget really doesn’t significantly impact performance (especially since the creatives in the test and core ad sets are completely different from each other).

A REQUEST BEFORE YOU GO

I have a very important favor to ask – one that, as those of you who know me know, I don’t ask often. If you got any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, or wherever you get your podcast fix? This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us; it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read every review – and we want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST-IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.