
Our guest today is Gina Kwong, a Senior User Acquisition Manager at EA. She focuses on global growth for EA’s mobile titles and has deep experience in creative testing, especially across channels.

With the advent of Facebook AAA (Automated App Ads), many marketers have been sceptical about its efficacy for UA. Gina is one of the few marketers who not only advocates for using AAA for UA, but has also achieved great success in doing so.

We are excited to feature Gina’s insights today, as she takes us through her process of setting up campaigns on Facebook AAA and how they can be used to test creatives, addressing the reservations many folks (including ourselves) have had until this point. Tune in to this episode for a completely different perspective on Facebook AAA, and a primer on how to make it work for you.






ABOUT GINA: LinkedIn




ABOUT ROCKETSHIP HQ: Website | LinkedIn  | Twitter | YouTube


KEY HIGHLIGHTS

👬🏼 Facebook AAA is just like Google UAC

🎰 Marketers are already working with reduced levers on Google UAC

💨 Facebook AAA initially looked like a fast and efficient alternative to lookalike campaigns

⚖️ How to evaluate creative performance using the UI

🌐 Expect discrepancies between a self-attributing network and internal data

🚧 The changes to the Facebook API may not be there yet

💯 Why the creative limit was reduced from 100 to 50

👌 The optimum number of creatives to maximize effectiveness of the campaign

🏆 How to structure a campaign to surface winning creatives

🎛️ The downsides to having too many creatives in a campaign

🔄 When to refresh creatives in a campaign

🎭 The efficacy of testing creatives in AAA vs. lookalike campaigns

📜 How AAA handles the bias between historical and new creatives

🗺️ How to mitigate the risk of one geo-language optimization

🧺 Putting all eggs in the AAA basket isn’t recommended

🎡 The product life cycle determines the campaigns

🏘️ Using country clustering to test in multiple geos

🏃🏻‍♂️ The case for running different types of campaigns at the same time

🧑‍🤝‍🧑 Factoring in audience overlap into a campaign strategy

🎚️ The balance between audience overlap and performance

💰 There are days when learning will be costly

💸 How to think about a high CPA with respect to the product

📅 Using higher funnel events to amplify algorithm optimization

🖼️ How to think about creative testing with AAA

🧪 It is important to be open to testing in multiple scenarios

📈 Spend and conversion do not always correlate exactly

⚓ Using AAA to correct for historic creative bias

🧱 Structure campaigns for scale based on geo and regional observations

🔭 The case for moving from broad to narrow during optimization

⚙️ User exclusion doesn’t work perfectly; and why that’s ok

🆗 How fewer levers could be a positive thing

📖 Learning can be achieved on conservative campaigns

🦺 Multiple ways to mitigate risk on Facebook AAA campaigns

🎆 Best practices for scaling campaigns 

👗 Facebook can take multiple changes in a week

🔆️ Why it is important to keep campaign changes to less than 30% 

📶 How to think about scale responsibly

KEY QUOTES

AAA can succeed where others don’t

At the time, I was working on a title that had not really been successful on a lot of lookalike audiences. And so this product felt like a very fast and efficient way of testing an audience. And yes, I was one of the first marketers to test it, when it was still in its beta. And it was really exciting, because we went from spending a very low daily budget to tens of thousands a day, and we saw really good results.

It is possible to optimize creatives

You have to rely on the Facebook UI data to optimise creatives. What I did was, I used the UI to optimise based on relative performance; not necessarily true performance.

Less is sometimes more

While it’s nice to have all of these creatives in there and max it out to 50, I have actually seen that having more creatives can potentially lower the performance of your campaign. That’s because Facebook actively tests the new or different creatives in the campaign, with the caveat that, of course, it will still serve your highest-performing creative.

The process of gauging creatives

I see 20 to 25 as the optimal number of creatives. And you refresh those: as soon as you see one or two weeks where they’re just not spending, it’s time to pause them and add in new creatives.

It is important to be judicious

I have definitely seen titles run by my teammates where basically all the campaigns are on AAA via value optimization. And you’re right: that makes me very nervous, because when you rely on one type of campaign, you’re exposed anytime there’s a bug or an issue, which we’ve definitely seen before.

Assess effectiveness over a longer period

There will be days where you’re like: “Well, it’s spending a couple thousand dollars, and I’m not getting any payers. What should I do?” That will happen. You have to look at your average costs across a certain period of time that makes the most sense for your product and see if that is better than your lookalike campaigns already.

Always test everything

A lot of times your highest-spending creative may not have the highest conversion, because Facebook just stopped serving the other ones. In comparison, the others could have a much higher IPM but maybe a tenth of the spend. So understand what your objectives are and keep testing; there’s really no magic formula for any of it.

How to ensure you get all opportunities

I learned this from one of my directors: he taught me that you want to start with the most broad and then you hone into the most narrow. Because if you start the other way around, there’s nowhere to go if it doesn’t work.

The name of the game

It’s a running joke: it will always spend your budget, it’s up to you to figure out if it is effective.

FULL TRANSCRIPT BELOW

Shamanth: I’m very excited to welcome Gina Kwong to the Mobile User Acquisition Show. Gina, welcome to the show.

Gina: Thanks. I’m very excited to be here; just chat about user acquisition, which is one of my favourite things. 

Shamanth: Absolutely, absolutely. And you’ve been playing this game for a long, long time. And I’m excited to have you here, not only because of your experience in UA for many, many years, but also because you were among the first marketers to test AAA, which is what we’re going to speak about today. 

Among very many marketers that I’ve spoken to, I think there’s been a lot of hesitation in going full steam ahead with AAA. And certainly when you and I chatted the last time, AAA is something you’re very bullish on. It is something you’ve made work, which, in my experience, is relatively uncommon. So I’m excited to dive into your experience with AAA. And you were among the first people to test AAA when it was in its very early stages. 

So tell me about what it was like then? What were some of your early reservations? And what were your actual experiences in the early tests? 

Gina: Sure. So AAA, which we all know is Automated App Ads, is very similar to the Google UAC product; even Facebook says this, maybe not officially. I have to say, for context, I was not one of the first very excited people to test UAC, because I was like: “Oh, man, this is going to be a mistake on Google’s part: taking control away from the UA marketers.”

And similarly, that’s what AAA is basically: reducing the levers that you can pull as a marketer to just a few things. You can only really choose geo and language, and maybe now this one part where you can exclude people. But we can talk about it later, about how effective or ineffective it is from what we’ve seen. 

So, I was actually really excited to test this product. Because

at the time, I was working on a title that had not really been successful on a lot of lookalike audiences. And so this product felt like a very fast and efficient way of testing an audience. And yes, I was one of the first marketers to test it, when it was still in its beta. And it was really exciting, because we went from spending a very low daily budget to tens of thousands a day, and we saw really good results.

And that’s usually where we increase budget, optimise where needed, etc.

Shamanth: Yeah. And it sounds like, because you had this title where lookalikes were not working, you’re like: “Look, let’s just try this.” And it just took off. I would love to dive into aspects that did make it work, and we will get into many of those. 

But, I would say, one challenge that many folks see with AAA is that you can’t get creative-level metrics in your MMP. You can’t even get cost per unique purchaser or payer to appear within the Facebook ads interface, because it just shows cost for total purchases, if that’s what you’re optimising for.

Given those limitations, how do you evaluate creative performance? And how do you detect creative saturation?

Gina: Yes, so that was one of the big frustrations, because as we all know, with Facebook and UA marketing in general, creative is one of the bigger levers nowadays; especially when big channels now are automating all of these different ad products that you can market on. 

And so for AAA, yes, initially it was quite a bit of a challenge.

You have to rely on the Facebook UI data to optimise creatives. What I did was, I used the UI to optimise based on relative performance; not necessarily its true performance

—because we know there are some discrepancies between Facebook and internal data, since it’s a self-attributing network. So, you can download that data from the UI and optimise that way.

Now, there are some updates on the Facebook API, where you can pull this data via the API directly. I don’t know how successful that is currently, because we have done that before, but I hadn’t been able to see it internally in the data in my previous roles. So we are still optimising based on the UI data. 

So it’s possible to do so but it is much more challenging, where you can’t readily pull it via an MMP or maybe even through the API—I don’t know how successful that is. But I believe that is something that a lot of marketers have brought to Facebook’s attention. And I hope that they hear us because this is just like any ad product; it should be something that they could update. 
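To illustrate the idea of optimising on relative rather than true performance, here is a minimal sketch that ranks creatives from a UI export against the campaign’s own median, instead of against an absolute target. The function name and the column names (`creative`, `spend`, `purchases`) are assumptions for illustration, not Facebook’s actual export schema.

```python
# Minimal sketch: rank creatives by relative performance from an
# exported ads report. Column names are hypothetical.
from statistics import median

def rank_creatives(rows):
    """Compare creatives against each other, not against a fixed target."""
    perf = []
    for r in rows:
        spend = float(r["spend"])
        purchases = int(r["purchases"])
        # Cost per purchase; creatives with no purchases sort to the bottom.
        cpp = spend / purchases if purchases else float("inf")
        perf.append({"creative": r["creative"], "spend": spend, "cpp": cpp})

    total_spend = sum(p["spend"] for p in perf) or 1.0
    finite = [p["cpp"] for p in perf if p["cpp"] != float("inf")]
    med_cpp = median(finite) if finite else None
    for p in perf:
        p["spend_share"] = p["spend"] / total_spend
        # Relative performance: cost per purchase vs. the campaign median.
        # Below 1.0 means better than the typical creative in this campaign.
        p["cpp_vs_median"] = p["cpp"] / med_cpp if med_cpp else None
    return sorted(perf, key=lambda p: p["spend"], reverse=True)
```

Comparing each creative with the campaign median sidesteps the attribution discrepancy Gina mentions: even if Facebook’s absolute numbers disagree with internal data, the ranking within one campaign is still informative.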

Shamanth: Sure, sure. And in cases like this, if you’re looking at relative performance, do you find it’s a good barometer of creative saturation? Because, again, if you’re optimising for purchases, that’s a lot of purchases you have to see before you notice that a creative is tanking. 

Gina: Sure, let’s take a step back to answer that question. In the beginning, when the product first became available, you could actually add in 100 creatives; that was the pitch that Facebook sold. Now it’s been reduced to 50 because, from my inference, they realised that 100 is too many. It’s not really necessary.

I mean, we would love to think: “Hey, we add in 100 creatives, and each one is going to get its chance. We don’t have to worry about it, and creatives will not get fatigued as often.” But in reality, only 10 to 15 creatives ever really get a chance to spend.

And as you said, each creative needs enough purchases or conversions a day in order for Facebook to utilise it effectively, or to have enough data to say: “Hey, this is a winner. Should we keep this?”

So to answer your question, and maybe peripherally,

while it’s nice to have all of these creatives in there and max it out to 50, I have actually seen that having more creatives can potentially lower the performance of your campaign. That’s because Facebook actively tests the new or different creatives in the campaign, with the caveat that, of course, it will still serve your highest-performing creative.

But I think that having the optimal number of creatives in there (based on what you can see for your campaigns) and refreshing at that number is much more effective in getting the best performance out of AAA. I can’t tell you exactly how many that would be; generally, 10 to 15 will perform very well.

And then over time, Facebook will pull some of your mid-range creatives and serve them. And sometimes you’ll see that a creative that has been running for a month or two will all of a sudden become your highest spending. So that’s why I say Facebook will actively try to find new creatives that perform and give you a higher CTR over time. But having too many just means you’ll see a whole bunch of creatives at the bottom, each spending $100 or $50 here and there; it adds up, and they just do not give you any return.

Shamanth: Right. So there can be a long tail that’s completely ineffective. And from that perspective, you’re suggesting it just makes sense to restrict yourself to maybe 10 or 15 ad creatives, so you have more of a focused approach to running and testing creatives in there.

Gina: Yeah, from the titles that I have run,

I see 20 to 25 as the optimal number of creatives. And you refresh those: as soon as you see one or two weeks where they’re just not spending, it’s time to pause them and add in new creatives.

And I’m always a big proponent of AAA being a fairly effective way to test your creatives. This is the fastest way Facebook is going to tell you: “With all these creatives, I’m going to serve these. A lot of these other ones will not have a chance at all.” I find that to be very effective, and I’ve been pushing that. We’re still working on creative testing. 

Shamanth: Sure, out of curiosity, when you say that’s the fastest way to test, do you find the testing process or the velocity of getting results to be stronger than just a normal broad campaign or lookalike campaign?

Gina: Well, I do not recommend testing creatives in just lookalike campaigns in general. I mean, that’s a separate issue. 

Traditionally, everybody has a certain way of testing their creatives, because they know their product best. I have seen advertisers put new creatives into their business-as-usual campaigns, which I don’t recommend, because that will disrupt your BAU campaigns. And then there are others that will test them separately in their own campaign, in a certain geo, things like that. And that can get very costly, very quickly, because some methods require at least a certain amount of payers or spend on each creative. I don’t really agree with that, because, being conservative, probably over 50% of creatives that we run will never see more than $100 of spend. And that’s being conservative; it could be much more.

And so, in that vein, in my opinion, AAA campaigns are the most effective. Let’s say you have 50 creatives: in this case, it’s fine to put in all 50, and Facebook will tell you very quickly which ones it will favour. You can’t really do that in normal, non-AAA campaigns, or even dynamic creative ads, which are limited to 10. So that’s why I think it’s a good way to test new creatives, especially if you’re lucky enough to be working with a creative team that can generate a lot of creatives for you at any given time.

Shamanth: Yeah. So you’re saying that if you want to test new creatives and you put them into AAA, and one is a winner, Facebook will surface it far faster than in a normal campaign; whereas in a normal campaign, there’d be a lot of weight given to historical creatives.

Gina: And even if you don’t have the budget or the time to test it in its own separate campaign, you can put a new creative into the AAA, like a BAU campaign. The new creative may not be the highest spending, because obviously you still have your champion creative that has the most history; but if it becomes a mid-range spend creative, to me that’s already a winner. Because the fact that Facebook is saying, “Hey, I still want to serve this ad, because it’s generating conversions,” means it’s already a winner.

So it really depends on the situation. And of course, there’s no one size fits all. But these are the observations that I’ve seen across the many different titles that I’ve worked on. 

Shamanth: Certainly, I think it’s insightful to notice, at this point of time, there’s less bias towards historical winners on AAA. And that’s helpful in just getting you winners out sooner than later. Great. 

To switch gears a bit: something that a lot of marketers question around AAA is that you can only have one geo-language-optimization combination. And a lot of marketers like me worry that, with one combination, we’re putting all our eggs in one basket. Let’s say it’s US-English value optimization: you’re running AAA there, and the campaign starts to drop in performance. That’s a huge hit to your overall UA operations.

So that’s a reservation that I have, and that many people I’ve spoken to share. How do you think about this risk, and about mitigating it?

Gina: Yeah, that’s definitely a very understandable concern.

I have definitely seen titles run by my teammates where basically all the campaigns are on AAA via value optimization. And you’re right: that makes me very nervous, because when you rely on one type of campaign, you’re exposed anytime there’s a bug or an issue, which we’ve definitely seen before.

So I think that the Facebook team knows that, because that’s why there are other types of optimization: AEO, or MAI plus purchase, which I’ve seen work in certain cases. And I don’t recommend running AAA only; I recommend a combination of lookalike campaigns and AAA campaigns.

It really depends on where the product is—its stage of life. If it’s a newer product, AAA would be really interesting, because you’re going to get a whole lot of installs in the beginning to feed the system, the Facebook algorithm, to understand where the niche groups would be, which, in turn, I believe would make the lookalike campaigns likely more effective. But you can run more than one geo now for AAA campaigns. 

Actually, it was really funny, because even during its beta, they were like: “Hey, you know, you can only run one geo.” And I’m like: “But I’m running three geos.” I may have found some sort of loophole, perhaps. And I find that country clustering is really effective, especially for smaller countries or regions that may have similar performance; it’s really nice to be able to test all of that very quickly, and AAA does give you that. It’s about understanding where your product will perform well, and it will give you results fairly quickly.

Shamanth: Yeah. So what I’m hearing you say is: if all you’re doing is running AAA, yes, that can be risky; you really need to mitigate that risk by running lookalikes or AEO alongside AAA. And so what you’re saying is: “Look, if you’re running US-value, keep that at some percentage of your spend that’s not 100%.”

Gina: Yeah, I think performance marketers already did that before AAA: we run whichever campaigns give us the best return or the best KPIs. Because what’s to say that lookalike campaigns aren’t going to give you lower performance than AAA? Well, in that case, you run the AAA campaign until it no longer gives you the best performance, and then you continue testing.

Shamanth: Sure. Facebook’s guidance seems to have been that if you run AAA, you shouldn’t run anything else, to minimise audience overlap. It sounds like you’re saying to run the rest anyway, because otherwise it’s just too risky.

Gina: Yeah, I think that unless you’re running tens of thousands of dollars a day, the overlap, given the wide net that AAA casts, is certainly possible. But Facebook does have tools that can tell you how much overlap there is. And you will have to decide for your own product whether that’s acceptable. Maybe it is acceptable if you run both and it still gives you the ROAS or the high KPIs that you’re looking for. That is the due diligence that we have to do, because these are the tools that we’re being provided, and we still have to see if this is too good to be true.

Shamanth: Absolutely. And speaking of mitigating risk, another risk that a lot of marketers see is in cases where a product is relatively whale-driven and has a very high cost per unique purchase: let’s say it’s over $100.

And a reservation that a lot of marketers have is: “Look, the learning phase can be just very risky in AAA, because you have 20-25 creatives that need to learn. You have to spend at least a couple thousand dollars to get a clear read if this is working or not.” 

How do you recommend thinking about AAA in a context like this where the cost per purchase or the CPA is very high?

Gina: Sure. So I think there are a couple of ways to attack this problem and put down a strategy for how to actually launch AAA campaigns for these types of products. If a lower cost per unique payer is what you’re looking for, I would start AAA with AEO (app event optimization), which makes the most sense.

And you’re right,

there will be days where you’re like: “Well, it’s spending a couple thousand dollars, and I’m not getting any payers. What should I do?” That will happen. You have to look at your average costs across a certain period of time that makes the most sense for your product and see if that is better than your lookalike campaigns already. 

Now another way to tackle this is if the cost per payer is so much higher—or it’s high for your product—there are other ways to optimise. There are ways to go around it, where you can test events that are not so far down the funnel. So you can maybe test an event that is higher in the funnel that you have more conversions of, so that the algorithm can optimise towards and learn. 

Or if you don’t have a whole lot of historical data there, run lookalike campaigns first. You know that they have worked for you, and you start there. I would like to test them in parallel, just so that it gives the campaigns the best chance to tell me which one is going to work for my product. And you set the budget properly. And you will learn fairly quickly. 

Shamanth: Sure, yeah. And that makes perfect sense, in how you want to reduce the risks around some of this. 

And to switch gears again, I’m curious how you recommend running creative tests in an AAA context. I know you briefly touched on this, but do you typically run these in a value, AEO, or BAU campaign? What does your creative testing methodology look like in AAA?

Gina: So just to give a little bit of background from what I have seen: a lot of marketers do their creative testing in an MAI (mobile app install) campaign, so that they can gauge the highest IPM, or installs per mille, i.e. installs per thousand impressions. And I think that is fine; it works if your highest-IPM creative also turns out to be your champion, which is not as often as we would like. That’s one way to do it, and I’ve seen it done across a lot of different ad products, titles, etc.

My recommendation, as I mentioned before, is to put it in an AAA. We have tested MAI versus AEO, because we had requests from other team members who wanted to see the highest IPM. Which is fine, but in my opinion, why not test against what you’re optimising towards? If you’re optimising towards ROAS and you have that kind of data, I think the best way to do this is to put it in an AAA, optimising towards AEO.

Because as much as I would love to optimise towards ROAS (which a lot of products optimise towards), you may not get enough conversions in the short time and low budget you’re testing with. So I think AEO is the best compromise between the two. Actually, MAI plus purchase may also work; I haven’t done that yet. Having the openness to test these types of campaigns, putting all the creatives you have into one campaign, setting a budget, and defining the parameters will give you pretty fast results.

The only caveat is that your highest-spending creatives may not have the highest IPM. It’s very normal, where

a lot of times your highest-spending creative may not have the highest conversion, because Facebook just stopped serving the other ones. In comparison, the others could have a much higher IPM but maybe a tenth of the spend. So understand what your objectives are and keep testing; there’s really no magic formula for any of it.

This is what I’ve observed to be effective for us. 

Shamanth: Yeah, that makes a lot of sense. And like you said earlier, you’re getting winners faster; they’re not competing against historic creatives as much, and the historic ones aren’t eating up all the spend the way they would in a normal campaign. So I think it makes a lot of sense. New creatives may or may not win in an AEO or value campaign, so you don’t have that bias either.

What do you recommend as the ideal structure of campaigns in an AAA context: geos, geo tiers, or optimization methods? How do you cluster and structure your campaigns?

Gina: For a global product, I would recommend structuring campaigns by region, based on how they perform in that region, and on understanding your product and what works for you. Sometimes it could be tier 1, tier 2, tier 3 countries, which may not be in the same region, but you group them together, review what works and which countries give you the best performance, and then use that data. It’s almost like running worldwide campaigns, and then you whittle it down or recluster geos into new campaigns, to arrive at the best BAU campaigns you have. And those are going to be the campaigns that you scale.

Shamanth: Sure. So you are breaking down by LTV tiers, like high LTV tiers, or maybe you’re separating out US. Gotcha. And are you typically running all optimization types in AAA on all geo tiers? How does that typically look? 

Gina:

I learned this from one of my directors: he taught me that you want to start with the most broad and then you hone into the most narrow. Because if you start the other way around, there’s nowhere to go if it doesn’t work. 

If it’s a new product, I would start with MAI plus purchase, or AEO. Start there and see how performance comes up. And if there are certain regions that are not working, you can go to VO or recluster the different geos together.

There are a lot of ways to tackle this problem. But that’s how I like to think about this type of problem, because it’s very systematic that you have tested everything. Otherwise there may be missed opportunities, if you don’t structure it that way in the beginning. 

Shamanth: Yeah, that makes sense. And one reservation that advertisers have is that you can’t have any sort of user exclusion in AAA, so you might get younger users or users who already use the app. And I know Facebook says: “Oh, we will exclude last-90-day installers.” It’s my experience that that isn’t fully accurate; we do see some more recent installers come through.

So how do you account for this dynamic that just might be your past installers coming in? 

Gina: Sure, that is likely going to happen. And to be honest, even in lookalike campaigns, you will see that. I mean, there are retailers and companies I’ve purchased from that are retargeting me to death. I’ve already purchased, and I’m still getting the ads. Maybe it’s a retargeting campaign, but the ads look very similar to the ones I just clicked.

So yes, as you mentioned, Facebook does have this section in Connections where you can exclude people who have installed your app, and you may still be showing your ad to people that have your app already. That is, unfortunately, the reality of marketing, regardless of channel, in most cases. However, Facebook’s algorithm is smart enough to know that if it shows this ad to those people at a certain frequency and they don’t click and don’t convert, it should stop. Because Facebook wants your campaigns to be successful: it wants to show that impression to another person who will convert, because that’s how it knows you’ll spend more when performance is better.

I know that as a UA marketer, it’s completely counterintuitive. I think that AAA just has that advantage, because there’s a lower hindrance to showing your ad to people that maybe we wouldn’t have shown it to again. Maybe they’ve lapsed, but they saw the ad, they came back, and they still converted. And that’s why AAA is a pretty cool product. It generally has 40 to 60% lower CPRs than anything else, and in that way you’re getting that many more installs, and the chance for them to convert, giving you the ROAS or the lower cost per payer the campaign needs to succeed.

Shamanth: Sure. So you’re saying because the machine learning algorithm sees so many data points in AAA that’s so much more powerful, it can locate purchasers—high value purchasers—far more efficiently, far more cost effectively. Even if there are some past installers, oftentimes it can make up for the past installers getting targeted.

Gina:  Yeah, it’s possible. But again, it’s a case by case basis; I cannot generalise. 

I definitely recommend that advertisers check it out. It’s widely available now, I believe, as of a couple of months ago; there are definitely different products that still need the accounts to be whitelisted. Of course, I’m saying this at the cost of saturation for my own campaigns. But I think it’s worthwhile to test, and it’s a very cool thing. Yes, we have fewer levers to pull, but it’ll give you some opportunities to look at other things that can perhaps really help your campaigns, like thinking about different strategies.

Shamanth: Yeah, absolutely. And speaking of the machine learning aspects of AAA, do you find that it’s effective if there is a certain amount of budget behind it? Do you find that if it is below a certain threshold, the machine learning probably may not be effective? Or how do you think about that? 

Gina: So I’m a very conservative marketer. I don’t have to be reminded that it is money we’re playing with here, so I always start campaigns at a few hundred dollars. And I find that to be fairly effective: give the campaigns a few days to a week to learn. If it’s past that point, it’s still not working, and you’ve spent one to two thousand, then you know something is not correct. Maybe that’s not the right optimization; maybe it needs better creatives. What are the metrics that it’s failing on, or not high enough on?

Shamanth: Gotcha. So you still recommend people start conservatively?

Gina: Because I’ve definitely seen a lot of campaigns not work. As much as I would like to say every single one has worked; no, that’s not true. I have definitely paused campaigns after a few days, because it’s not giving me what I want, so I need to think about what’s the next step. How am I narrowing this audience? Is it the creative? Maybe it’s the geo just not working for us. Like with anything else, even though this is a tool that may be a little bit faster than what we’re used to in giving results, we still have to do our due diligence. 

Shamanth: Certainly, it could just be that the algorithm hit a pocket of bad inventory and is giving you bad performance. So what you’re advocating is to start small, even if the algorithm needs a lot of data; you don’t want to give it a huge number of purchases on day one, because that’s too risky.

Gina: Exactly. Yeah, in certain geos you don’t want to run AEO; certain geos, you may want to just run VO; and maybe only English speakers or speakers that do well for your app. There are some ways that you can make this lower-risk.

Shamanth: Certainly, yeah, I think risk mitigation—it’s come up a couple of times in today’s conversation. And I think that’s certainly very critical to something like AAA. 

Of course, where there’s risk, there’s also opportunity. And something you’ve done is scaled AAA campaigns to very significant levels. So what would you say are some of the best practices for scaling the AAA? Let’s just say you have a campaign or a portfolio of campaigns that’s working well, what does that scaling process look like? 

Gina: As we’ve talked about, I’m very conservative when I start, but I do launch several types of campaigns for certain geos, so that I learn very quickly whether it’s AAA that’s going to give me scale, or lookalike.

For certain types of products, I like to run one type of optimization, because I know that’s going to work, and then I use that as a geo optimization. So depending on the title itself, I start with one approach and then move on from there.

Then I scale campaigns that perform well, and I’m very diligent about this. Facebook can really take changes a few times a week. So if you see campaigns that are performing well, you can scale there: launch different optimizations to test, because that geo has proven it’s going to give you really high KPIs, and you can test different ways to target its audience. So really: start on one path that you believe will work for you, scale, and then add on. That’s the best way I can describe the approach.

Shamanth: Sure. On a day-to-day level, does that look like: “Oh, we started VO on US; we’re spending maybe 1000 or 2000 on this campaign; it’s working well, let’s double this, maybe twice a week.” Is that typically what it looks like? How do you think about that? 

Gina: So yeah, Facebook does not recommend that you increase or decrease more than 20-30% each time, because it will reset the learning phase. 

So when you see that a campaign running one to two thousand is doing well, you can increase it by 20%, if you’re looking for scale. VO is the most expensive of the AAAs, I would say, because it has the highest CPI, as it’s optimising towards finding the highest-value users. Then I would launch an AEO campaign and see what volume that gives you, what your ROAS is, what KPIs come from that campaign—because then you’re opening up your audience; that’s the other way to do it. And then if performance still holds, you can do an MAI alongside the AEO and see how that goes. I would also run some lookalike campaigns with it and see how those perform.

Really narrowing down what works for you for that product in AAA, and launching campaigns with variations of that, is how I would scale. That’s how you go from a couple of thousand to maybe 10,000 or 15,000 a day. Facebook can definitely bear it.

It’s a running joke: it will always spend your budget, it’s up to you to figure out if it is effective.
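The 20-30% guideline Gina describes can be sketched as a simple guardrail in code. This is an illustrative sketch only, not anything from Facebook’s API: the function name, the 20% default step, and the dollar figures are all assumptions for the example.

```python
def next_budget(current: float, target: float, max_step: float = 0.20) -> float:
    """Move a campaign's daily budget toward a target without changing it
    by more than ~20% in one edit, the rough threshold beyond which a
    larger change risks resetting the campaign's learning phase.
    (Hypothetical helper; the 20% default is an assumption for illustration.)
    """
    ceiling = current * (1 + max_step)  # largest allowed increase
    floor = current * (1 - max_step)    # largest allowed decrease
    return min(max(target, floor), ceiling)

# Example: scaling $1,000/day toward $2,000/day takes several 20% steps,
# spread over something like twice-a-week budget edits.
budget = 1000.0
steps = 0
while budget < 2000.0:
    budget = next_budget(budget, 2000.0)
    steps += 1
# path: 1000 -> 1200 -> 1440 -> 1728 -> 2000 (4 changes)
```

Doubling a budget in one edit would exceed the step cap; spacing the same increase over a few smaller changes, as Gina suggests, keeps each edit inside it.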

Shamanth: Yeah, certainly, that’s the name of the game. So what you’re saying is, if you have to scale, you’re expanding out into different optimization types, and within each you’re increasing by maybe 20% at a time. You’re probably doing this twice or thrice a week. 

Gina: Yeah, I mean, if it calls for it. And we have to be very careful with increasing, because a lot of times we get caught up in like: “Oh, it’s really high performance.” But you have to see, with the increase, what are you getting? Are you getting more? Or are you just spending more? A lot of UA marketers get caught up in that. And I know we’re all very smart people; but it’s really just being aware of what you’re getting for the spend. 

Shamanth: Yeah, that makes complete sense. And with great scale comes great responsibility, as somebody said.

Gina: Yeah, absolutely. I mean, it’s a lot of money. Early in my career I had to remind myself: “This is not play-money!”—because on a dashboard it’s just numbers. I had this ad network executive who said: “Yeah, I know very well; it’s not play-money.” And I was a junior and I was like: “Yeah, you’re right, it’s not play-money. So please stay within the budget that we asked.”

I’m always very conscientious of that, and that’s why I’m very conservative. Until I can prove that performance is working, I will not scale. Maybe I don’t scale as fast as I should, or could; I’m very conservative in seeing how the numbers pan out. Because once it’s spent, you can’t take it back. And at that level, for the companies I work for, I have a responsibility to make sure that it all backs out. A lot of the time, you can’t just hope.

I think I’m definitely preaching to the choir to your listeners, because I’m sure everybody’s had that experience. That’s why I’m very cautious with these new betas. I’m always really excited to test them. But I’m also very cautious. 

Shamanth: Yeah, certainly one has to be very prudent. And like you said, it’s not play-money, I remember being very surprised in my first UA role that I was managing millions. And I was like: “This is a lot of money; I am freaked out!” 

Gina, this has been very instructive. I’ve certainly taken a ton of notes and learnt a whole lot. For what it’s worth, a few weeks ago we released an episode called “This is why AAA isn’t working.” That’s exactly why I wanted to speak with you, because you have made it work.

Gina: You know, I believe it too. There are certain titles or certain types of products for which this probably won’t work, because it really depends on your product. But I think that we as marketers cannot be closed off to these types of things. I’ve seen it work, but I don’t want to apply it to everything; I will test it, and if it doesn’t work, I’m going to just turn it off. There are some titles where I have turned it off, because it just didn’t work after a long period of testing.

So I hope that these best practices will help. At least, whether it works or not, after you go through this process you’ll know, and you can say definitively. Because people are going to be like: “Oh, have you tested this? Have you tested it that way?” You’ll be like: “Yes, I have. Here’s my documentation.”

It’s something that I really took to heart, because I’ve learned from some of the great UA marketers, especially on Facebook. I see their thought processes and I can only really continue to learn, because these channels are ever-changing and I don’t know how much of an expert any of us can be in each one. But these people have been doing it for so long, and that is the process. And you can really only do that with any new product. Even then, I would recommend that people who may have tested AAA earlier relaunch it again, because I think it’s just gotten that much better—just like certain Google products that may not have worked in the beginning, but are really killing it now. So never say never in UA.

Shamanth: Absolutely. Absolutely Gina. Yeah, sage words, good words to wrap up our episode today. This is perhaps a good place for us to close. Before we do that, can you tell folks how they can find out more about you and everything you do? 

Gina: Sure, I’m on LinkedIn. I don’t have a blog or anything like that, but I’m happy to have one-on-one conversations, so please reach out if you have any questions. I don’t know how many Gina Kwongs are on there; I’m probably the only one that works for Electronic Arts.

I’m always happy to chat with colleagues in the industry, or with people who are just starting out and want to go into this industry. I actually love it so much that I’m always going back to my business school and saying: “Hey, you know, we have internships. Come work with me, because this career has changed my life.” I find it to be completely relevant to the times, and it’s a great career in marketing. If you love marketing and you love numbers, it’s the best intersection of all.

Shamanth: Absolutely, Gina, thank you so much for taking the time to share your insights and wisdom with us today. And I’m excited to release this to the world very, very soon.

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read all reviews & I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.