
Today’s episode is from a webinar that we hosted with Alexei Chemenda from Poolday.ai. We talk about our playbooks, Alexei’s and mine, and go deep into how we have turbocharged our respective creative production output using AI.





About Alexei: LinkedIn | Poolday.ai

ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

🗒 Producing a high volume of creatives is essential to find successful ads and test more variables.

📈 Smaller companies or those in regulated industries may struggle with high-volume creative production.

📍 Marketers often lean too heavily on AI, leading to poorly executed creatives; human input is crucial for quality.

🔐 Competitive analysis tools like the Meta Ads Library, PiPiAds, and Foreplay.co are valuable for gathering inspiration and analyzing competitors.

✂️ AI can identify patterns in creative performance but should assist rather than replace human creativity.

🔍 Considering budget constraints, the testing process should begin with TikTok, followed by Applovin, and finalized on Meta.

✏️ Monitoring and managing creative fatigue, especially on TikTok, is vital to maintain performance.

🎈 Regularly producing new variations and having backup creatives ready ensures continuous ad performance.

📈 AI-generated video ads can significantly streamline and enhance marketing efforts in the gaming industry.

📌 Collaborations with major platforms like Meta, Google, and TikTok are crucial for driving effective user acquisition strategies.

FULL TRANSCRIPT BELOW

Shamanth Rao: 

Welcome to this webinar that I have been looking forward to for a very long time with Alexei from Poolday.ai. We will talk about our playbooks, Alexei’s and mine, and, in complete transparency, a lot of what I have learned, and what our teams have learned, has been from Alexei. So I’m very excited to dive into what Alexei, the OG, has to say, and also to show what we’ve learned from some of what he’s shared with us in the past. So this is about us. Alexei, tell folks about what you do; introduce yourself briefly.

Alexei Chemenda: 

Thank you for the kind words. I appreciate it. Hi everyone. I’m Alexei. I’m the founder and CEO of Poolday, a software platform that allows marketers to create videos using AI actors in minutes. I’ve been in the gaming space for about a decade, and I have run and still own about 22 apps on the App Store, and I got tired of working with content creators.

This is why I created Poolday in the first place. I’m excited about the webinar. Thank you for having me. The feeling is mutual. I’m so excited to dig in together.

Shamanth Rao: 

To introduce myself, I’m the founder of Rocketship HQ. I’ve also worked in gaming for more than a decade. I met Alexei ten years ago while working on a bingo game. Rocketship HQ works with top-grossing games and subscription and consumer apps to drive user acquisition, mainly through creative and UGC.

We’ll discuss the playbook that has changed much of our work. Again, Rocketship HQ partners with Meta, Google, and TikTok. We’ve made 10,000+ ads, and lately we’ve been making 1,000+ every month. So again, we’re excited to dive in and share more about our playbook.

Alexei Chemenda: 

I’ll add to that if I can. We also worked together, and you guys did an amazing job on my apps.

Shamanth Rao: 

That’s true. Alexei has been a client of ours. Please also check out our many playbooks. We have about 14 free playbooks on different aspects of mobile marketing. Check out our podcast, Mobile User Acquisition Show, for many goodies and insights about mobile marketing from the last six or more years.

With all of that said, let’s kick off the webinar. Alexei, over to you. The first question is to set the context. So when we say 10X your creative production, you and I have done that on our team, but to start with the basics, why is this a worthwhile goal? Why 10X your production in the first place?

Alexei Chemenda: 

There are a few other ways to think about it, but for us, the way we think about it is that we have yet to learn what will work and what won’t work, even after creating so many videos and ads.

So we want to play the numbers game. One of the only ways to play the numbers game is to produce more videos. So, more videos, playing the numbers game means that one of those videos, for whatever reason, will work better than others. People are trying to analyze why the video works better and so on. I have a hard time believing we can pinpoint precisely why a video works. At least, I have yet to, so if you don’t know what works, the numbers game is where performance comes from.

Therefore, one reason to increase the number of videos is that it helps to find that needle in the haystack and find the best execution of a creative idea. And that’s what we’ve noticed. Sometimes, we have a good idea, but we can improve it if we execute 200 or 20 versions of this particular idea. So, ultimately, it comes down to performance, and that’s why we’ve decided to hop on that train.

Shamanth Rao: 

Speaking from our perspective, even though you can’t isolate precisely why a creative has worked, testing more variables helps you find that needle in that haystack. And we also see that tiny changes lead to massive differences. Small changes in the frames and hooks lead to huge changes.

And in the past, if we were testing five ads per month, it would have been tough to say, okay, what if we change this one word? Would this change performance? And now we can, right? So that makes all the difference. You know, sometimes I think about things we did one year ago, two years ago, where things tanked, and I’m like, well, maybe we just didn’t test enough variables. And that was the problem back then.

Alexei, we talked about 10xing output. Are there companies or businesses that shouldn’t be 10xing their output? Are there people for whom this isn’t the right approach?

Alexei Chemenda: 

It’s interesting. We’ve asked ourselves the question many times. The intuitive answer is that companies with minimal UA spend might struggle. And it’s debatable. Some companies with small spend are still pumping out a ton of creatives and hoping for the best. So I don’t know if it’s that bad, but I see that potentially being a struggle.

And I also see struggles with companies that are very, very much focused on controlling every single component of the video. With Poolday, we work with companies in regulated industries, for example. And if I tell them they’re going to 10x their creative production, they look at me like I’m insane, because everything has to go through the legal department. And then there are so many other hoops they have to jump through that it feels unrealistic, not just from the operational headache, but just the teams involved and so on.

So regulated industries, although there are ways to mitigate and control that. And companies with powerful branding, like Chanel or Rolex, and I assume no one from Chanel or Rolex is in the audience here, have a very different strategy. Then, for performance marketers, I would say that companies with minimal spending should focus on a personal touch.

There’s a company that did really well recently called Alinea, with two great female founders. They tried a lot, including a numbers-game approach with us. And then they went with a very personal touch. They’re two female founders building a FinTech app for women, and they filmed themselves on TikTok just building the app, and that resonated well.

They were trying to do something other than 10x creative production. They were just trying to be very personal. Some smaller companies should pursue that and not just blindly create more videos.

Shamanth Rao:

 If I had to share our perspective, we’ve done high-volume creative for small budgets and early-stage companies. It’s valuable if they’ve never done UA. They’re like, we want to start doing UA. It’s better to build 50 to 100 ads and test them than go with five because if you do five and everything tanks, you don’t know the problem. And if you run a hundred and everything tanks, you know there is a problem with the product, or you know you can isolate and pinpoint the problem.

Or out of the hundred, if five are working, you can get learnings. So I do think this works for early-stage apps; again, we’ve worked with quite a few early-stage apps on high-velocity testing. The situations where we recommend not taking this approach are when the cost per payer, the CPA, is extremely high, which means it will take a lot of budget to get enough conversions.

And obviously, we’ll talk about the testing process further in this webinar. But if your CPA is extremely high, one way to mitigate it is to test with an up-funnel metric: test in a low-cost country, or test with install-optimized campaigns, in which case you can run campaigns for as little as $10 a day, which we do for high-CPA products.

But if the high-CPA product team says, we want to optimize for purchases, it’s $150 per purchase, and we want to run this only in the U.S. and not in a low-cost geo, that can become extremely expensive, and then they really shouldn’t be doing this.
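To make the budget argument above concrete, here is a rough back-of-the-envelope sketch. The $150 CPA and $10/day figures come from the discussion; the conversion counts, day counts, and creative counts are assumptions purely for illustration.

```python
# Illustrative budget math behind the high-CPA caveat.
# All counts below are hypothetical examples, not figures from the webinar.

# Purchase-optimized testing in the US at a high CPA:
cpa = 150                 # dollars per purchase
conversions_per_ad = 10   # rough minimum signal per creative (assumption)
creatives = 100
purchase_test_budget = cpa * conversions_per_ad * creatives

# Install-optimized testing in a low-cost geo, as described above:
daily_budget = 10         # dollars per day per test campaign
days = 7                  # length of each test (assumption)
install_test_budget = daily_budget * days * creatives

print(purchase_test_budget, install_test_budget)
```

Under these assumptions the purchase-optimized test costs $150,000 while the install-optimized test costs $7,000, which is the gap that makes up-funnel testing attractive for high-CPA products.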

Alexei Chemenda:

That’s a really good point. I really liked your comment about early-stage companies having to test to have a chance. It’s true that if you build five videos and they don’t work, then what? So it’s a fair point. Then it really depends on the channels they’re spending on; my answer would vary by channel, but it’s a really good point.

Shamanth Rao: 

And of course, it’s not enough to just get the volume, because I’ve seen, and I’m sure you have too, situations where there’s a high volume of creatives that are poorly executed or poorly thought out, which isn’t very hard to do with generative AI: tons of poorly thought out creatives. What do you see as the most common reasons for this?

Alexei Chemenda: 

Interestingly enough, I spoke to marketers who said they rely on AI to be a fall guy, in a way. So it’s like: if the AI works, they take the credit. If the AI fails, then it’s, oh, the world was overpromised that AI will do everything. It may become true in the future. I’m not commenting on the future. I’m commenting on the current state of the world.

And you’re seeing these demos, even from OpenAI, like Sora model videos and so on. So you can’t just rely on AI. We call this the delta between the demo and the production. You can have a demo, and then when you want to put it in production, it’s like, oh, it’s not exactly like that. None of these companies are lying, but it just takes a lot of work to make it good. So it’s about setting realistic expectations: getting AI products deployable in production takes work.

Even ourselves, we’ve adopted this internally and adapted our platform for that. Fun fact: initially, we launched this hoping that anyone would use it, anyone who needs one or two videos. We realized that we needed to switch gears and allow only large companies to use Poolday, because it takes work. If you sell one or two videos, you can’t really charge a lot for them, and yet it takes work.

I think the reason we’re seeing those poorly shaped creatives is that people are currently putting too much hope into AI. And rightfully so; I think they’re right to make that bet. But they should still understand that the seed comes from them, and AI helps them make things a little bit different, a little bit more of what they’re currently doing, rather than entirely relying on AI and hoping it works really well.

Shamanth Rao:
There have been instances where we’ve been asked to produce one or two creatives. My most common answer is that I could make one creative for you; however, there’s not a great deal of confidence that it will drive performance, because what drives performance isn’t so much the creative as the process of testing multiple creatives and learning.

I even tell people: look, I don’t want to give you 100 creatives on day one. Instead, I’d rather give you ten creatives on day one, test them, see what works, and build and extend based on those learnings. In my experience, the creative process is much more valuable than the creative output in and of itself. This ties into what you said about advertisers just wanting one or two creatives: it’s certainly not good for advertisers to be testing just one or two creatives, unless they fall in some of the categories we talked about, the regulated or high-CPA kind. Again, you spoke about how many advertisers treat AI as the fall guy. How do you think about maintaining quality and performance with increased output?

Alexei Chemenda:

That really comes down to the initial creative concepts designed by the marketing artist or whoever is working on those productions. If you think about it, it takes a lot of work to figure out what concept works, what has a high chance of working, what’s interesting, and what I can use in my game.

There are so many steps to building a great initial concept that need to be remembered. And we’re a platform that produces videos, so sometimes the reaction from marketing artists, who are our end users, is: is the platform going to replace me? And I tell them that we, in fact, very early on had script generation done with AI. You upload your app, put in the link to your app in the App Store or Play Store, and get scripts generated automatically. We saw that the results were insanely better when the person wrote the initial script, and when I say insanely, I mean more than a 3 to 4x on backend performance.

So the bet that I made with them was: hey, if you make a script, I’m betting that I’m going to build a better script thanks to your script. I’m going to help you find a better execution of that script. But it all relied on the marketing artist doing a good job of figuring out the initial core concept they wanted to try.

That’s where people should spend more time: coming up with those initial ideas. And when I say ideas, it’s different from what the person will say; it’s what the gameplay will show, the app UI, the workflow, or the angle. And we see those unexpected wins. One of these apps is called Muz; it’s like a Tinder for Muslims. And I sent them a link to a comedy show.

And the comedian was talking about their app. It was not a sponsored episode, but because it was an improv show, one of the audience members brought it up and they started talking about the app. The video blew up on YouTube Shorts, and they got a massive spike in downloads. I didn’t know they’d gotten an enormous spike in downloads; I emailed it to the founder of Muz and said, oh, that’s amazing, it looks like it got a million likes. And he said, yeah, we get these all the time. Because these are original; they’re not even scripted, but they’re original ideas executed perfectly.

So, I recommend spending more time on those original concepts rather than thinking about finding more iterations. That’s where the human side adds more value.

Shamanth Rao: 

You are a hundred percent right. And certainly, pop culture is a great place to visit. We also look at a lot of organic TikTok to come up with ideas. I would add that while original concepts are important and they matter, something we also use on our team is a checklist of elements that do need to be present in every brief. In the spirit of showing and telling for the audience and for everybody:

I’m going to quickly screen share so everyone can see the elements that we make sure to include. Yes, the original ideas matter, but I also like to think about the building blocks of every script. This is basically our template, and as you can see here, we ask: what format are we going for? What are the hooks? And we try to have multiple hooks here.

What are the visual hooks or visual cues, ideally related to the theme you’re building for? What are the text overlays you’re putting on top? And then, of course, the script itself; the folks from our team on this webinar will recognize this. The visual hook is basically what comes up in the first couple of seconds; visual cues are whatever comes afterward. Are there other visual cues that need to accompany each scene? Examples? Do we have an AI presenter? What are the other general notes?

So this is our checklist for every creative that we produce. Alexei, do you see anything else? Anything different in terms of what you’ve seen?

Alexei Chemenda:
I have one question and one comment on what you’re sharing right now. The question is for me, but also maybe for people in the room: Can you define format? I’ve seen the format being used in many different ways. What do you mean by format?

Shamanth Rao:
It’s not a universal thing. So it’s very advertiser-specific. For instance, for a game we just worked on today, Noob versus Pro was a common format, you know, or relaxation, ASMR was a format. So it’s very advertiser-specific, but I’m curious if you’ve seen it used in a different context.

Alexei Chemenda:
For us, the format is more about how the video typically looks: is it a split screen, or UGC-style content, or something like that? That’s why I was asking, because I know we use different terminology, but I think we have similar things. The one thing I would emphasize is column G, the script. We’ve seen, and I know you know this, that we just structure this differently: for us, the hook is not even part of the script, just because of how important it is to the performance of the video.

We consider the hook, and then the script; the hook is basically the first sentence or two. Oh, I see, hook dialogue is there; I skipped past the visual hook. So in our case, I think the most important variable for us is the actor, which actor is being used. Then the music, the hook obviously, like we talked about, and we have the edit style as well.

So, how is the video edited, or in which style? We can give it a hip, high-energy edit or a very romantic, slow edit. So style is a variable for us. And then it’s a very similar structure.

Shamanth Rao: 

Those are very valid points. We kind of put that under summary or reference video or visual cues. But yeah, it sounds like we have very similar approaches. You talked about generating ideas: do you do competitive analysis, and if yes, how do you do it?

Alexei Chemenda:

Yes, we used to. For the sake of this call, I’m wearing three hats, really. The first is that I have apps, 22 of them; I should count exactly how many at some point, but let’s say 22. So that’s one hat I’m wearing. The other hat is Poolday.ai, the self-serve platform. The third is the hat I used to wear that we transformed into Poolday, which was called Viral: a content creator network where we and our creators were coming up with ideas, and so on.

So, with my Poolday hat on, I’m no longer doing competitive analysis as much, because we’re a self-serve platform; we give people the platform to be more efficient with their creative production, but they come up with the ideas. For Viral, we were doing a lot of competitive analysis using the Meta Ads Library and tools like PiPiAds. Those were great tools for us; we were very quickly able to see what works. We were even doing manual discovery on TikTok.

It works extremely well. What we did was set up an account where we would only view, comment on, and otherwise interact with content that was ads. First of all, you can say you want to see ads only; this way, you only see ads on TikTok. That’s a great user experience for this: a user-facing account where you say, I only want to see ads.

Then, you only interact with content that is relevant to your vertical, and TikTok will quickly figure out what kind of videos you want. For example, when we had a celebrity lookalike app, we would only interact with similar types of content, and TikTok would then serve us ads from our competitors. So we were able to see what works and what the engagement was. We wouldn’t have the full analytics, but we could see whether a video was getting traction, whether it got likes and comments or not.

So those are the three things we really did: the Meta Ads Library, PiPiAds, and this manual discovery. I’m no longer doing it as much, just because the platform is self-serve now.

Shamanth Rao: 

We use a very similar approach, including on TikTok. I actually didn’t know about the ads-only mode, so I’m going to have to check that out; I think that’s going to be a lot more fun. I’ve sort of done that with organic feeds, basically training TikTok to show me the ads I want, and we’ve found it hugely helpful.

Another tool I’d love to hear if you’re using, Alexei or anybody else, is Foreplay, Foreplay.co. Alexei, have you used that?

Alexei Chemenda:

 No, I’m not aware of it. Foreplay.

Shamanth Rao: 

https://www.foreplay.co/ I will drop the link in the chat for people who want to check it out. It’s basically Pinterest for ads, and they have a Chrome extension. Since we’re showing and telling, I can show you our Foreplay dashboard as well. I recommend it: it’s your swipe file that you can use to find inspiration.

Hopefully, you guys can see the ads on my board. You can tag these, though I’m not sharing some details because they relate to engagements we work on. Basically, you can filter and tag these ad boards. If you have the Chrome extension installed, you can add an ad to Foreplay and to specific boards within Foreplay. So I have this opened up, and you can see you can add this ad to Foreplay.

And once it’s added to Foreplay, you can look at ads and compare and contrast them around specific themes. You can say, hey, I have like 20 subscription apps here; can I see this particular subscription app around AI? Or for wellness apps: can I compare all wellness apps and see what they’re doing? I think that can be hugely impactful.

The other cool thing here, and I will open this up, is that you can get a transcript of an ad and copy it. Then you can use it in ChatGPT to create something somewhat similar or comparable. That’s hugely helpful; this is something we use a lot.

Another cool thing here is that you can also see how long the ad has been running. That’s not for TikTok, but for Meta. So that’s huge. Another thing we use is the TikTok ads dashboard. Alexei, I’m curious if you’ve used this.

Alexei Chemenda: 

I didn’t mention this one, but yes, we have this one.

Shamanth Rao: 

I think what we like is the fact that there are metrics here. You can ask: which ones have the most likes, and which are running on a high budget? These are the numbers we are typically looking for.

Alexei Chemenda: 

That’s very interesting. Actually, I’m thinking of one more that might be useful. Marcus Burke, I hope I’m not butchering his name, released a tool about two weeks ago that’s kind of a streamlined or simplified version of the Meta Ads Library. The issue you get with the Meta Ads Library, and I’m sure you’ve seen this as well once you’ve played enough with it, is that you’re not getting a one-to-one match when you search.

For example, if you type in the name of a client or a company, sometimes it’s not that account that’s registered on Meta, and so you can’t really find them. That tool helps streamline the process. I’m going to try to find it very quickly, but if not, I think you’ll find the tool by going to his LinkedIn page. It’s a very, very useful tool. It was released only recently, so it’s a very early-stage side project, but it works well for a simple workflow.

Shamanth Rao: 

Alexei, are there ways you’re using AI to analyze creative content/performance?

Alexei Chemenda: 

I want to say, on paper, yes. And we’ve seen companies building products in this category. So, on paper, at a very high level, you can try to figure out why a video works or why a variation will perform better on specific channels. To be very frank, we’ve struggled with it, and we decided to focus more on using AI to pull existing content from creatives and iterate faster thanks to that. The only two variables we were able to do this with at scale were actor and hook, meaning we tested 20 different actors for a given script.

And by the way, people don’t realize that we used to do this in a studio. We had a studio, actors were coming in, and we were recording God knows how many videos. Naturally, we can now do this in a very streamlined fashion with AI actors. But in the past we did this with real people, and the process was very tedious, though we got to test many, many actors.

That’s what we can do now with AI: analyze the creatives, look at what actor was used in a video, and let that information influence the next batch of videos we create. So we use that for actors, and we use that for hooks. The third thing we use it for is extracting the scripts and generating variations.

And when I say script, really, I mean the hook: generating variations of the hook and of the full script to tweak it a little bit. But we have never been able to put a video into one of those platforms and get an analysis from AI that says, oh, in this video, X, Y, and Z made it work. We’ve never been able to get anywhere near that, at least nothing meaningful.

Shamanth Rao: 

I would say yes and no. I don’t think it’s possible to be deterministic, as in: this is the exact thing that made it work. But when you’re testing many creatives, I believe AI can surface patterns. Again, in the spirit of showing and telling, people can check out this not-very-active YouTube channel called Intelligent Artifice. We got busy with many other things and discontinued it, but check out the video about AI that writes Meta ad scripts based on data.

The TLDR is: you upload 20 scripts and say, these were the CPAs I got for script 1, script 2, script 3, and script 4. Then you tell the AI, hey, what patterns do you see in the best scripts versus the worst scripts? Come back with something similar to the best ones. That is something you can totally do; we did this for an account where we had 20 to 30 historical scripts. You can upload a PDF or CSV to do that.
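The workflow described above, feeding scripts and their CPAs to an LLM and asking for patterns, can be sketched as a simple prompt builder. The CSV columns, sample scripts, and CPA numbers below are hypothetical placeholders, not real campaign data.

```python
import csv
import io

# Hypothetical historical data: each script plus the CPA it achieved.
# In practice this would be exported from your MMP or ad platform.
SAMPLE_CSV = """script,cpa
"POV: you just found the app that plans your whole week",4.10
"Stop scrolling, this game is free for 24 hours",9.75
"I asked AI to plan my meals and this happened",3.60
"Download now and beat level 10",12.40
"""

def build_pattern_prompt(csv_text: str, top_n: int = 2) -> str:
    """Rank scripts by CPA, then build a prompt asking an LLM to contrast
    the best and worst performers and draft similar new scripts."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: float(r["cpa"]))  # lowest CPA = best
    best, worst = rows[:top_n], rows[-top_n:]

    def fmt(items):
        return "\n".join(f'- (${r["cpa"]} CPA) {r["script"]}' for r in items)

    return (
        "Here are our best-performing ad scripts by CPA:\n"
        f"{fmt(best)}\n\n"
        "And our worst-performing scripts:\n"
        f"{fmt(worst)}\n\n"
        "What patterns separate the best from the worst? "
        "Then write 5 new scripts that follow the winning patterns."
    )

prompt = build_pattern_prompt(SAMPLE_CSV)
print(prompt)
```

The resulting prompt would then be pasted into ChatGPT (or a custom GPT seeded with historical scripts, as mentioned later) along with the request for new variants.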

Alexei, I’m curious if this is something similar to what you’ve tested.

Alexei Chemenda: 

Yes. Not to that extent, but yes. I wouldn’t say we’ve seen as much success with that approach, but maybe we didn’t put enough energy and effort into it. It was one of those things we decided not to pursue in the short term, but now it gives me second thoughts.

Shamanth Rao: 

I think it’s also useful for generating slightly different variants. For example, for an AI app, it could be, Hey, this app had me get a date versus find a job versus something else. Can you maintain a very similar structure while generating different scripts with the same tone?

And I think it has gotten a lot better at internalizing tonality. I also think it’s a function of the training data you give it. We have a custom GPT with a lot of our historical scripts, and it’s gotten very good at understanding exactly what we want, versus just saying, here’s a UGC script, which is kind of random. Let’s switch gears a bit because the time is advancing.

A big aspect of ramping up production is testing. One of the most common questions I get is from people who say they don’t have the budget to test 10x the creatives, 1,500 per week. And I imagine you get that too. So what do you say, typically?

Alexei Chemenda: 

First of all, I tell them I don’t have a budget either. For my apps, we spend 300k a month. It’s not bad, but it’s not millions of dollars every month, and relative to how many videos we produce, it’s certainly nothing.

This is where I think the channel matters a lot. We’re heavy on TikTok, Applovin, and Meta, but the top two for my apps are Applovin and Meta. We acknowledge that not every creative will get enough spend to be statistically significant. We dump everything in and let TikTok figure it out.

So what I say to them is: if you’re looking to spend an equal amount per creative, then sure, it’s not going to work for you to 10x your creative production. And by the way, I want to make a distinction. Take a step back and think about the number of videos you create. For example, on our platform you create batches of videos, and over a month you’ll have created, say, 50 batches of six or ten videos, so you get 300 to 500 videos.

You don’t need to deploy all 300 to 500 videos. First of all, when you see the video being generated, or when you produce a video even with a competent creator, you look at it and can immediately say, oh, I wish X, Y, and Z. That doesn’t mean it’s the right call, because your personal preference may or may not be good instinct, but at the very least that video is not necessarily going to be deployed.

So there’s a distinction between the number of videos you produce, through whichever method you want, and the number of videos you actually deploy. That’s the first distinction. And second, depending on the channel, you can probably dump all those videos in and watch them live, and embrace the fact that TikTok, and some other channels do this too, but TikTok is the most aggressive on this front, will maybe only throw a dollar at some of these videos.

And that’s okay, because from the way the spend gets allocated, you get a natural selection that comes out of it. With that natural selection, you can create another ad group or campaign where you only run those that have been “selected” by TikTok and focus your budget on that. So even if you only have 20k or 10k a month on a UA channel, you can dump everything reasonably into the platform; again, it’s more tied to the platform than the budget.

If you dump everything in there and let the platform figure it out, I don’t see any reason why you shouldn’t do this. In fact, I can guarantee people will be surprised, because the winner will not be the one they thought it would be at first, or the winner will be one they would not have deployed in normal times.

Shamanth Rao: 

That mirrors our approach. I know we talked about this earlier, too. One thing we sometimes do differently: if your top three or four are somewhat close, we pause the top performer so that the number two can get more spend. We do that with automated rules on Meta: you say, hey, any ad that gets 20 installs gets paused.

We do that if there’s enough budget to support that approach. If there isn’t, we take the top spender. And our view is that if the algorithm says, look, this ad will not scale, that’s a very valid signal in itself. In the testing process we’re not just looking at performance: certainly you’re looking at the CPA, but you’re also looking at scalability. And as you said, TikTok is much more aggressive at finding scale, but Meta is as well.

If Meta gives an ad 90 percent of your spend, that’s what you want in your business-as-usual campaign, rather than something with the best CPA but inferior scale. Something spending like ten dollars a day is just not very useful or helpful.
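The pause rule Shamanth mentions, pause any ad once it reaches 20 installs so the runner-up picks up delivery, maps onto a simple condition you would normally express in Meta’s automated rules UI. Here it is as a plain Python sketch; the ad fields and statuses are hypothetical stand-ins, only the 20-install threshold comes from the conversation.

```python
INSTALL_CAP = 20  # threshold mentioned in the episode

def ads_to_pause(ads, cap=INSTALL_CAP):
    """Return IDs of active ads a rule would pause during a creative test.

    Mirrors the described logic: once an ad has proven itself with
    `cap` installs, pause it so the next-best creative can get spend.
    """
    return [a["id"] for a in ads if a["status"] == "ACTIVE" and a["installs"] >= cap]

test_ads = [
    {"id": "ad_1", "status": "ACTIVE", "installs": 27},  # proven; pause it
    {"id": "ad_2", "status": "ACTIVE", "installs": 11},  # still collecting signal
    {"id": "ad_3", "status": "PAUSED", "installs": 35},  # already paused
]
```

As noted in the transcript, this only makes sense when there is enough budget for the remaining ads to absorb the freed-up spend.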

Alexei Chemenda: 

Yeah, exactly. And people often think in binary terms. They hesitate to deploy the ad that will get 90 percent of the spend if their argument is, oh yeah, but it’s not as good on the CPA side. As you said, first of all, if it doesn’t spend, it’s no good. But even when they put it in the BAU, the business-as-usual campaign, and it does scale, let’s assume it doesn’t hit the CPA or ROAS goals, or whatever performance-backed goals the app wants: they can still work off this ad because they know it can get spend.

And they can play with it, because they will have data: you have retention, you can see where people are dropping off in the video, whether the quality is bad. Maybe it’s good on a CPI basis but bad on a retention basis. It gives you data. But if you don’t have any spend, you don’t have any data, and then you’re just playing blind games. That’s where it’s harder, I think.

Shamanth Rao: 

And in fact, I remember this from a couple of years ago. I’m pretty sure it’s still present. I think Meta actually has official documentation around this because, according to our rep at the time, they got sick of people asking why their top-spending ad is not the top-performing ad. Which happens, right? The ad that’s spending 20 a day has the best CPI or CPA, while the ad that’s spending hundreds or thousands a day isn’t as strong as the one spending 20 a day.

And Meta’s documentation calls this the breakdown effect. Maybe that’s changed since then, but their point is: sure, you want the lowest CPA at the ad group level. But if your highest-spending ad has maybe the third-best CPA and you pause it, your worst ads get more spend, and those worst ads would have crazy CPAs that would be completely unsustainable.

So their point is that your third-highest-spending ad is third for a reason: if they scaled it more, its CPA would just be very poor.

Alexei Chemenda:

I love that example. It’s what we’ve seen over and over and over.

Shamanth Rao: 

Staying on the testing process: you talked about running TikTok and Applovin primarily and, to a certain extent, Meta. Is there any aspect of the testing process that happens on Applovin at all?

Alexei Chemenda: 

Yes. So we have a waterfall system where we start out with TikTok. Again, this is me putting my hat on for my own 22 mobile apps, not Poolday. Essentially the process for us is: we put creatives on TikTok, and thanks to TikTok we surface the winners. Then we deploy those to Applovin. Applovin is good at ingesting creatives, but not as many as TikTok. We’re very flexible with TikTok, a lot more than with any other platform, though obviously we’re not affiliated with them in any way.

I should specify, I’ve been saying TikTok this past hour. But when we deploy to Applovin, we run them there directly. For non-gaming, Applovin has updated its platform to work with much smaller minimum daily spend, so now we can do a lot more, and be a lot more creative, there. Then we get the winners from Applovin, so the pool gets smaller and smaller, and we put those in Meta. And I think you guys are kind of the other way around, or at least you do a lot on Meta; I remember we talked about that as well, so I’d love to hear more about it. I think we have the same goal, but we go through different paths.

Shamanth Rao: 

On Applovin, what volume of creative do you typically see yourself testing?

Alexei Chemenda: 

So, on Applovin, it’s more budget-dependent for us. On TikTok, we just dump everything. On Applovin, I would say we put about 10 to 20 percent of the creatives that we uploaded to TikTok, so the top performers. We measured the correlation between success on TikTok and success on Applovin, and what we see is that it’s not a one-to-one relationship: it’s not because something works on TikTok that it works on Applovin.

But we see that the correlation differs based on the direction you take. Here, “works” is defined by performance on the backend side: ROAS, cost per subscription, or whatever. A video that works on Applovin will not necessarily work on TikTok, but a video that works on TikTok has a higher chance of working on Applovin. So it’s not a two-way street.

And this is why we move about 10 to 20 percent of our videos from TikTok to Applovin. So if we dump 100 into TikTok, we’ll put probably 10, 15, or 20 on Applovin. And of those 10 to 20 videos on Applovin, a few will work, probably three to four: one will be extremely good and a few others will be good. Those make their way to Meta.
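The waterfall Alexei lays out (roughly 100 creatives into TikTok, 10 to 20 percent of those into Applovin, three or four of which work and graduate to Meta) can be written down as a small funnel model. The pass rates below are midpoints of his rough ranges; the function itself is illustrative, not a tool he describes.

```python
def waterfall(n_tiktok, applovin_rate=0.15, applovin_hit_rate=0.25):
    """Estimate how many creatives survive each stage of the funnel.

    applovin_rate: share of TikTok uploads promoted to Applovin
                   (Alexei cites 10-20%; 15% is the midpoint).
    applovin_hit_rate: share of Applovin creatives that work
                       (3-4 out of 10-20; ~25% as a rough midpoint).
    """
    to_applovin = round(n_tiktok * applovin_rate)
    to_meta = round(to_applovin * applovin_hit_rate)
    return {"tiktok": n_tiktok, "applovin": to_applovin, "meta": to_meta}
```

With 100 TikTok uploads, this yields about 15 Applovin candidates and roughly 4 creatives reaching Meta, matching the numbers in the conversation.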

Shamanth Rao:

Sure. And you’re still using Applovin as a testing platform, did I understand that correctly? So you use TikTok as round one of testing, Applovin as round two, and whatever’s validated past rounds one and two goes to Meta.

Alexei Chemenda: 

Yeah, correct. And yes, we are using Applovin as a testing platform, but it’s not true testing in the sense that Applovin already gets a clean set. We do less fidgeting with testing campaigns and so on; we deploy assuming they will work. Some of them don’t, and we kill those. On TikTok, by contrast, we have a more complex setup with a lot more ad groups we’re playing with, to make sure there’s actually a testing pool and the best winners go to another pool.

Shamanth Rao: 

Sure, and I would imagine on Applovin you can only set up a single campaign with all of your creatives. Did I understand that correctly?

Alexei Chemenda: 

That’s what we do. I don’t even know if you can set up multiple campaigns with the same setup.

Shamanth Rao: 

I don’t think you can. Yeah. It’s pretty much like Advantage+ on Meta, or rather how Advantage+ used to be, where you could do one campaign per optimization event and geo, right? You can actually run multiple Advantage+ campaigns now. And I believe Applovin is pretty similar to that.

Alexei Chemenda:

Yeah. I was going to say, on Applovin you can do multiple campaigns based on target events. But for a given target event, yeah, it will be one campaign.

Shamanth Rao:

Exactly. I know we’re at about four minutes to go and we still have a ton of questions left, but we want to honor the end time. So, something a lot of marketers think about is creative fatigue. Is that something you think about? Something you evaluate? How do you approach it?

Alexei Chemenda: 

Oh yeah. We’re obsessed. This is probably the one thing that keeps us up at night, whether for my mobile apps or our clients on Poolday. It depends on the channel; TikTok is probably the most painful one on that front. It scales creatives very nicely, and then it kills them very nicely as well. Sometimes the creative just dies overnight; sometimes it takes a few days. You can see it when performance starts tanking and volume starts tanking.

So yeah, we certainly look at that a lot, and that’s in fact why we started focusing so much on iterations. One creative of ours, the top performer for one of our apps, has been running since July 2022; July 17th, actually, so in two weeks it’ll be exactly two years. It’s not that exact creative, unchanged; it’s small tweaks to it that have been running on TikTok for the past two years.

It’s hard to forecast when we’ll hit creative fatigue; to be frank, we haven’t been able to. So we look at variations in performance and variations in volume, where you get this rate of acceleration, and at some point that acceleration starts slowing down. We try to measure it from that moment: once the acceleration is reduced, how many days until performance is really bad?

And what we see is sometimes it’s four days, sometimes seven, sometimes ten, but it’s that order of magnitude. Sometimes it’s 24 hours, but the number we typically see is around seven days from when performance, or the acceleration of scale and performance, starts slowing down. So we try to find a new winner before that happens, because if we wait until it happens, we’re screwed: we’d be putting ourselves under a lot of pressure with only four to seven days to find a new winner.
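Alexei’s fatigue signal, watching the acceleration of spend and treating the first slowdown as the start of a roughly four-to-ten-day countdown, can be sketched as a check on day-over-day growth. The spend series and the exact detection rule here are illustrative assumptions, not his production logic.

```python
def days_since_deceleration(daily_spend):
    """Return how many days ago spend growth first decelerated,
    or None if spend is still accelerating.

    Deceleration = the day-over-day increase in spend shrinks.
    Per the episode, performance tends to tank roughly 4-10 days
    (typically ~7) after this first slowdown.
    """
    growth = [b - a for a, b in zip(daily_spend, daily_spend[1:])]
    for i in range(1, len(growth)):
        if growth[i] < growth[i - 1]:  # first slowdown in the ramp
            # growth[i] covers the day at spend index i + 1
            return len(daily_spend) - 1 - (i + 1)
    return None

# Spend ramps fast, then the ramp flattens before the decline.
spend = [100, 180, 300, 480, 600, 650, 640]
```

For this series the ramp flattens two days before the most recent data point, so a ~7-day countdown to replace the creative would already be under way.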

Shamanth Rao: 

Yes, and you know, I remember one of the first times we noticed this. I got a report from my team saying, oh, we have a CPI of one dollar. I was like, sure. And then I looked much more closely and went, oops, this is not a CPI of a dollar. The CPI went from 40 cents up to $1.50 over seven days, and the average was a dollar. I did not realize it would be that quick until I saw that data. And yeah, it’s not that quick every single time.

But I’ve seen enough instances. And again, this is much more so on TikTok; there’s a much stronger bias in favor of historical performance on Meta. But on TikTok, you’re right, there’s just a much more aggressive fall-off from peak to trough.
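The averaging trap Shamanth describes, a blended CPI hiding a climb from $0.40 toward $1.50, is easy to reproduce by comparing the blended number against the daily series. The numbers below are made up in the spirit of his example.

```python
def blended_cpi(daily_spend, daily_installs):
    """Blended CPI over a window: total spend / total installs."""
    return sum(daily_spend) / sum(daily_installs)

# A creative fatiguing over seven days: daily CPI climbs from $0.40 to $1.50,
# but the blended CPI over the window still looks modest.
spend    = [400, 420, 450, 400, 380, 360, 300]
installs = [1000, 840, 640, 400, 300, 240, 200]
daily_cpi = [s / i for s, i in zip(spend, installs)]
```

Here the daily CPI series ends at nearly four times where it started, while the single blended figure masks the trend entirely, which is exactly why the report looked fine at first glance.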

Alexei Chemenda: 

Yes. You just have to be ready with new variations and anticipate that.

Shamanth Rao: 

I know we’re on time. We had a lot of other questions, but I want to respect everyone’s schedule. So, we will stop the webinar for now, but feel free to shoot across questions. If you found this via our email newsletter, please feel free to hit reply. Alexei, every time I talk to you, I learn a lot. As I said at the outset, a lot of the playbook we spoke about is something I learned from you in our earlier conversations. So happy to trade notes now.

Alexei Chemenda: 

I’m super happy. And obviously, we learn a lot on our end every time we chat. So I’m always very happy to talk. And yeah, thank you for setting it up. Thank you for having me as well. And thanks to everyone who participated and attended. I’m excited to hear more from everyone.

WANT TO SCALE PROFITABLY IN A GENERATIVE AI WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.