
Facebook recently made its Automated App Ads (AAA) available to all advertisers – as part of its broader push toward having greater automation drive advertising performance.

As of this writing, we see a number of challenges with AAA – which is why we don’t recommend doubling down on it just yet.

In this episode, we explain how AAA works – and what some of the challenges with it are.

ABOUT ROCKETSHIP HQ: Website | How Things Grow | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

🤖 What are Facebook’s Automated App Ads

🦾 The optimization that AAA automates

⚗️ The introduction of liquidity

📏 Why CPA is not necessarily the right metric to use

🤷 AAA doesn’t leverage your existing knowledge of your users

🧭 Navigating the learning phase is tricky

🔌 Why knowing when to pull the plug is uncertain

🧺 AAA is like putting all your eggs in one basket

🌁 How performance data is obfuscated

😵 There’s no way to know what’s going wrong, when something’s going wrong

🎯 Target exclusion is not quite there yet

🌅 It’s early days yet; AAA will get better

🌏 Test in non-critical geos

⚖️ How to choose the right type of campaigns for testing

☔ Using split tests is the prudent path forward

KEY QUOTES

An early adopter’s experience

“Indeed, one developer we know saw wild success with AAA – and provided a case study for Facebook, only to find out that performance dropped precipitously afterward and never fully recovered.”

FULL TRANSCRIPT BELOW

What is AAA – and how it works

Facebook’s Automated App Ads (AAA) is a type of dynamic creative optimization. It lets you provide up to 50 different creatives (videos or images) – and automatically tests different combinations of these creatives to deliver the best performance.

In theory, that’s faster testing and a quicker path to performance – we’ll talk about practical implications soon.

With AAA, you’re hands off not only with creative optimization but also with placement and demographic optimization – you pick only the app store (iTunes or Google Play), country, language and optimization goal for your campaign. You run a single campaign, with a single ad set and a single dynamic creative ad.
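For teams that manage campaigns programmatically, the setup surface is correspondingly small. Here’s a minimal sketch using the Marketing API’s Python SDK (facebook_business) – to our understanding, the smart_promotion_type flag is what marks a campaign as AAA; the access token, ad account ID and campaign name are placeholders:

```python
# Minimal sketch: creating an AAA campaign via the facebook_business SDK.
# The access token and ad account ID are placeholders.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.campaign import Campaign

FacebookAdsApi.init(access_token='<ACCESS_TOKEN>')
account = AdAccount('act_<AD_ACCOUNT_ID>')

# smart_promotion_type = SMART_APP_PROMOTION marks the campaign as AAA.
# Placement and demographic choices are left entirely to Facebook.
campaign = account.create_campaign(params={
    Campaign.Field.name: 'AAA test',
    Campaign.Field.objective: Campaign.Objective.app_installs,
    Campaign.Field.smart_promotion_type:
        Campaign.SmartPromotionType.smart_app_promotion,
    Campaign.Field.special_ad_categories: [],
    Campaign.Field.status: Campaign.Status.paused,
})
print('Created AAA campaign:', campaign[Campaign.Field.id])
```

Note how little there is to configure here relative to a regular campaign: no ad set targeting, no placement choices – which is exactly the point of AAA.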

Even prior to AAA, Facebook had increasingly promoted what it calls liquidity – in other words, it has consistently recommended removing as many targeting parameters as possible (gender, placements, age, interests) so that its machine learning can attain the most reach and perform optimally. Seen in that light, it’s understandable why AAA is architected the way it is.

Key challenges with AAA

While in theory AAA should let marketers stay hands off and let the algorithms deliver performance, in practice there are real challenges with it. Here are the key ways in which these shortcomings manifest:

1. AAA depends on Facebook-reported metrics, not necessarily the true metrics for your app. 

AAA (or Facebook’s SDK) doesn’t know or see the true value of your users. The CPA or value reported in Facebook can be directionally right – but not always precise, especially if your MMP or internal BI attributes conversions differently.

Similarly, if you want to target users preferentially (say you’re a dating app that wants to target only women – or you know your users aged 18-24 are more valuable), you can’t do it with AAA, because it automatically targets all users aged 13 and over.

2. AAA’s learning phase is risky

AAA, like all machine learning systems, has a training period – or learning phase – over which you have very little visibility. Will it work if you give it more time? You don’t know.

What makes this riskier is that it isn’t easy to keep AAA as a ‘test’ campaign within your overall campaign mix. You can only run one campaign/ad set at a time for a given operating system, country, language, and optimization goal – and Facebook’s documentation says: “If your Automated App Ads campaigns and usual app install campaigns have an overlap in delivering to the same audience, our machine learning may prevent Automated App Ads from achieving the best performance.”

What this means is that if you wanted to test a portion of your US traffic on AAA, you’d have to run a split test against a non-AAA campaign (which has its own issues – as we’ve discussed before).

Plus: if you decide to go all in on AAA, you can’t run, say, 5 AAA campaigns in parallel to spread the risk of one big campaign consuming your entire budget.
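To make that constraint concrete, here’s a hypothetical pre-launch guardrail in plain Python – no Facebook API involved, and the AAAConfig and can_launch names are our own illustrative inventions – that encodes the one-AAA-campaign-per-combination rule:

```python
# Hypothetical guardrail encoding the AAA constraint: only one campaign
# per (OS, country, language, optimization goal) combination may be live.
from typing import NamedTuple

class AAAConfig(NamedTuple):
    os: str                 # 'ios' or 'android'
    country: str            # ISO country code, e.g. 'US'
    language: str           # e.g. 'en'
    optimization_goal: str  # e.g. 'app_installs'

# Combinations already covered by live AAA campaigns.
live_configs = {
    AAAConfig('ios', 'US', 'en', 'app_installs'),
}

def can_launch(candidate: AAAConfig) -> bool:
    """Return False if a live AAA campaign already covers this combination."""
    return candidate not in live_configs

print(can_launch(AAAConfig('ios', 'US', 'en', 'app_installs')))  # False
print(can_launch(AAAConfig('ios', 'CA', 'en', 'app_installs')))  # True
```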

3. AAA’s opacity doesn’t let you see performance issues.

The opacity of AAA, which Facebook touts as a virtue, can just as well be a challenge. You don’t see which of your 50 creatives are doing well: while you get some performance metrics, you don’t see, for instance, cost per unique purchase for each creative. Nor do you see creative-level performance via the Facebook API or in your MMP – you see just one single AAA ad with its performance aggregated, which doesn’t help.
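You can see this aggregation firsthand by pulling ad-level insights yourself. Here’s a rough sketch with the facebook_business SDK – the campaign ID and token are placeholders, and the exact rows returned may vary:

```python
# Sketch: pulling ad-level insights for an AAA campaign.
# For a regular campaign you'd typically get one row per ad; for AAA
# you get a single dynamic ad with its performance aggregated, so
# per-creative metrics like cost per unique purchase can't be derived.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.campaign import Campaign

FacebookAdsApi.init(access_token='<ACCESS_TOKEN>')

campaign = Campaign('<AAA_CAMPAIGN_ID>')
insights = campaign.get_insights(
    fields=['ad_id', 'ad_name', 'spend', 'actions'],
    params={'level': 'ad', 'date_preset': 'last_7d'},
)
for row in insights:
    # With AAA, this loop typically yields one aggregated row,
    # not one row per creative.
    print(row['ad_name'], row['spend'])
```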

4. It’s hard to reverse declines

This is related to the previous two points. We’ve seen AAA do well in patches. The problem is that once performance declines, it can be challenging to reverse: which creative caused the decline? Which placement? You don’t know – so you don’t know where to intervene.

Indeed, one developer we know saw wild success with AAA – and provided a case study for Facebook, only to find out that performance dropped precipitously afterward and never fully recovered.

5. User exclusion is imperfect right now.

As per Facebook’s documentation, “Automated App Ads uses a 90 day exclusion window, meaning if a person has opened the app in the last 90 days, they will not be eligible to receive an ad from that app.”

In practice, though, this hasn’t worked so well. We’ve seen users who were active within the last 90 days get targeted by AAA – and this has had performance implications.

What should you do?

Despite these issues, we’ve seen pockets of strong performance with AAA – and it’s certainly possible that the really smart folks at Facebook will figure out improvements to it. If you want to test it, here’s what we recommend:

  1. Test it on non-core geos. Test AAA in one of your smaller countries to see how it performs – and to continue building experience with it.
  2. Test it on app install campaigns, which are more signal-rich than purchase or value campaigns. The machine learning algorithm requires plentiful signal to succeed – so the more signal you provide AAA, the better. We’ve found that campaigns with greater numbers of conversion events tend to do well – and oftentimes that has meant campaigns optimizing for app installs rather than purchases or value.
  3. Run split tests before doing broader rollouts. This is a path recommended by Facebook as well – and their language around it clearly suggests that AAA may not work for everyone. Even though split tests have their own issues, this is a way to derisk your AAA test.

In summary, AAA is a promising addition to marketers’ arsenal – and is definitely something that we recommend continuing to test, even though it isn’t ready for prime time yet.

A REQUEST BEFORE YOU GO

I have a very important favor to ask – and as those of you who know me know, I don’t ask often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix? This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.