
Danika Wilkinson is a product marketer at Social Point, where she is responsible for the overall, high-level marketing strategy of one of their games. She strategises all aspects of game marketing, and guides UA specialists in performance analysis and optimisations.

In our conversation today, Danika goes deep into the specifics of her team’s creative process – and outlines how running very different processes for Facebook and for other channels has been dramatically effective. She walks us through not just how her team’s tests are structured, but also how their testing process has changed their workflow for the better.






ABOUT DANIKA: LinkedIn | Twitter | Social Point




ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

🔀 Why Danika and her team use a custom DSP for creative testing.

🧮 The specific testing process on the DSP that yields reliable results.

🏆 Purchasing a set number of impressions for new creatives through sub-publishers yields clear IPM signals about which creatives work—and which don’t.

🔬 Mitigating risk by rolling out creatives to low-risk partners first, then to higher-risk ones.

💸 The key difference between low-risk partners and high-risk ones.

🕸️ Creatives that have good IPMs in testing are likely to work in live campaigns.

🎉 Creatives that perform well in MAI campaigns can do well in AEO and VO campaigns as well. 

💡 Ideas come from everywhere.

🤓 Using hypotheses to shortlist and prioritise creative development.

🌿 For Facebook, weed out the creatives with terrible IPMs – and treat MAI tests as a screening stage for VO and AEO.

🌟 Remember, Facebook rewards quality. 

⏳ You can extend the life of a winning creative with variations. Up to a point.

🚀 How to give internal creative teams the space and energy to ideate about new concepts. 

🔍 The ideal lens for evaluating creatives is your player persona.

🦋 Beauty in ads lies in the eyes of the user. 

🃏 Why tacky ads work.

KEY QUOTES

How testing creatives on Facebook is unique

“So, when it comes to creative testing outside of Facebook, we’re specifically looking for what creative has the winning IPM; what creative has an IPM that is the same as or higher than our current control. 

Whereas on Facebook, basically what we’re doing is looking for what creatives are terrible; what is absolutely terrible that we want to eliminate. Anything that is okay—or really good, if you’re lucky—we see as a low risk to push into live campaigns. Because, at the end of the day, if you are doing creative testing in an MAI campaign, you don’t know how that’s going to perform when you push it into an AEO or VO campaign.

If you have an extremely low IPM, below 1 or maybe even below 2 on an MAI campaign, it’s pretty safe to say that if you put that into an AEO campaign or VO campaign, it’s not going to take off; it’s not going to work. But if it’s okay—it’s not amazing, but it’s not bad—then it’s a pretty good candidate to move into your live campaigns.”

The importance of considering all metrics together

“Something we also make sure of is that we don’t become too IPM-blind; that we don’t let a focus on IPM and CPI—these really high-funnel metrics—take away from everything else post-install. So it’s important to always be looking at your retention, looking at your LTV, and thinking: ‘Okay, if my retention is a little bit lower than what I would like, I may want to include more gameplay in my video,’ or ‘Maybe if I want to increase the LTV a little bit, I might play around with adding some more casino-style elements, like coins and currency and that kind of thing.’ And that might be interesting to that segment of players, for example.”

Align ad creatives to the users

“We started to think: ‘Well, why do these ugly, tacky creatives work?’ We started to investigate, and we realised that the target audience we’re going after—the best audience for us—is also really into casino games and Solitaire, which are games that typically aren’t the most beautiful. They tend to have a lot of flashy graphics, block colours, and a lot going on. And therefore that kind of ad is the most attractive to that kind of user.”

FULL TRANSCRIPT BELOW

Shamanth: I’m very excited to welcome Danika Wilkinson to the Mobile User Acquisition Show. Danika, welcome to the show.

Danika: Thank you. Thanks so much for having me.

Shamanth: Yeah, Danika. Excited to have you because we’ve certainly crossed paths virtually, even though we just found out we are less than a mile away. Every time I’ve heard you speak, I’ve learned so much. I feel that you always bring a lot of insight, and certainly, that’s what struck me the last time we spoke last week. Which is why I thought that we’ve got to have you on the show to ask you all the questions that I want to ask.

Danika: Thank you. Very nice of you to say.

Shamanth: Certainly. To get started, we’re going to talk about creative tests, and how you run them, which, again, was something I took notes on from the last time we spoke. When you’re running creative tests outside of Facebook, what channels do you use? And why do you do that?

Danika: This is something that I think a lot of companies probably do, where they run their creative testing on Facebook separately from creative testing for other networks and DSPs. In our case, we do it because we often see different results between what works on Facebook and what works everywhere else.

So basically, we use a custom DSP called Dataseat to do our creative testing. And the reason we started doing this was that we were previously working with another live programmatic partner who required us to run pre-testing on their platform before we were able to push our creatives into live campaigns.

We found that process to be really slow, really expensive, really clunky; and we were testing for months and months and months. And maybe we came up with one creative winner in all that time. So the idea was: maybe we can put a pre-filter in place before pushing these creatives into live testing, and basically just eliminate everything that’s not going to work and make sure that we’re only testing the absolute best creatives with this programmatic partner.

So we started to do that, and within three rounds—which took about four weeks—we already had three creative winners, compared with maybe just one after six months. So we started to roll those results out onto our other networks, and also our other programmatic partners as well. And about 80% of the time, I would say—because it’s pretty difficult to have everything 100%—we would see that those same winners were effective in our live campaigns as well.

Shamanth: Wow, that’s impressive, because it would appear that there can be so much variability in performance across networks. So clearly, you guys have these winners that seem to be consistently winning everywhere. Can you explain the mechanics of how this test is set up on the custom DSP? Because I think there’s a specific way you run it, which contributes to why this is effective.

Danika: Yeah, so the good thing is we’re not really limited by the number of creatives that we can test at any one time. But normally, we would be testing maybe four or five at once, against a control.

We purchase a set number of impressions per creative. We send the DSP a list of sub-publishers; these are usually the 100 or so sub-publishers where we have acquired the most users in the last 30 days. We test those creatives against a control. And basically we take any creatives that have a similar or a higher IPM than the control in that specific test, and we roll them out to the other live partners.

We find that some partners are more sensitive to creative changes than others. So we will rank them in order of lowest to highest risk, and we’ll roll out those winners into the lowest risk partners first. If we see the performance is good there, then we basically have more confidence to then test them in the higher risk or the more sensitive partners.
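
To make the mechanics Danika describes concrete, here is a minimal, purely illustrative Python sketch of that screening and rollout logic: buy a fixed number of impressions per creative, keep anything with an IPM similar to or higher than the control, and stage the winners into partners ordered from lowest to highest risk. The tolerance value, partner names and data are hypothetical placeholders, not Social Point’s actual tooling or thresholds.

```python
# Illustrative sketch only; not Social Point's tooling. It mirrors the logic
# described above: compare each test creative's IPM against a control, then
# roll the winners out to partners ordered from lowest to highest risk.
from dataclasses import dataclass


@dataclass
class CreativeResult:
    name: str
    installs: int
    impressions: int

    @property
    def ipm(self) -> float:
        # Installs per mille: installs per 1,000 impressions.
        return self.installs / self.impressions * 1000


def pick_winners(control: CreativeResult, candidates: list[CreativeResult],
                 tolerance: float = 0.95) -> list[CreativeResult]:
    """Keep creatives whose IPM is similar to or higher than the control.
    `tolerance` is a hypothetical fudge factor for what counts as 'similar'."""
    threshold = control.ipm * tolerance
    return [c for c in candidates if c.ipm >= threshold]


# Hypothetical partners ranked from least to most sensitive to creative changes.
PARTNERS_BY_RISK = ["partner_low_risk", "partner_mid_risk", "partner_high_risk"]


def rollout_plan(winners: list[CreativeResult]) -> list[tuple[str, str]]:
    """Stage winners into the lowest-risk partner first; each later stage would
    only be unlocked after performance is confirmed there (not modelled here)."""
    return [(partner, c.name) for partner in PARTNERS_BY_RISK for c in winners]


if __name__ == "__main__":
    control = CreativeResult("control", installs=120, impressions=50_000)   # IPM 2.4
    tests = [CreativeResult("concept_a", 150, 50_000),                      # IPM 3.0
             CreativeResult("concept_b", 40, 50_000)]                       # IPM 0.8
    winners = pick_winners(control, tests)
    print([w.name for w in winners])   # ['concept_a']
    print(rollout_plan(winners)[0])    # ('partner_low_risk', 'concept_a')
```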

Shamanth: Right. Can you elaborate on what you mean by sensitivity to changes?

Danika: There are some partners that have quite a strong algorithm when it comes to adding new creatives: they might send a small number of impressions to a new creative, figure out whether it’s working or not, and then decide whether to start sending a large chunk of your investment there.

But there are other partners that are maybe a little bit less advanced in that sense. I won’t name names, but you will sometimes find that you add a creative to a campaign, and the performance isn’t good, but the partner just consistently sends spend to this creative anyway, and it can really affect your performance. So with those kinds of partners, we will always leave them last, and will only add the new creative to that partner if we’re really confident about its performance, because we’ve seen it work time and time again on everything else.

Shamanth: Yeah, I like that gradation of risk. Because if you’re really certain something’s really good, that’s the only case in which you roll it out everywhere. That makes a lot of sense. And we did talk about how the creatives that win outside of Facebook tend to be very different from the ones that do well on Facebook. Can you speak to how ideas are generated for creative concepts?

Danika: Yeah, so sometimes we do—it’s not to say that the creatives that work the best are always different. Sometimes we are lucky and we have creatives that work really well on absolutely everything, even Facebook. 

We’re very fortunate to have creative teams at a product level, at a game level, who know the game inside and out. And it’s basically their job on a monthly basis to present maybe a dozen different concepts that they’ve come up with. And from those, we might pick maybe 6-8 concepts that we want to go ahead with and produce in the next month. So it’s up to them to decide which ones they’re going to produce in-house and which ones they’re going to outsource.

The ideas can come from a number of different sources. We could get them from combinations of different concepts that have worked in the past; from competitors, by looking at App Annie or the Facebook Ads Library; or even from creatives that are working for other games inside the company.

And on top of that, we have really open communication with our creative team. So if I have an idea, if one of my UA specialists has an idea, if somebody on the product team or any other team in the company has an idea: we’re always sending those ideas through to the creative team. And they add them to the list to present in the brief at the end of the month.

Shamanth: Sure. So it’s my experience that ideas—typically with marketers and creators—are never in short supply, because you can think of ideas in the shower; you can think of ideas all the time. How does the creative team figure out or prioritise which ones to work on? Is there a process? How does that happen?

Danika: Yeah, so we’re actually trying to move towards more of a hypothesis-driven system. So basically making assumptions and asking questions about what has worked in the past, and also what hasn’t worked in the past. 

And then using that to build a hypothesis: “Okay, I think, for example, that by simplifying this creative, it will work better, because maybe users don’t understand what the gameplay of our game is”, or something like that, and that it will improve conversion. We treat it scientifically: with every creative test we do, we’re trying to prove or disprove the hypothesis that we’ve come up with.

We have these brief meetings because the marketer and the UA specialist are on our side, and we’re seeing day in, day out what is working and what isn’t, and identifying patterns. Whereas maybe the creative team doesn’t always have that visibility. So the idea is that every week we have a meeting, we talk about what’s working, what’s not, and our theories about why. And then when we have this creative brief meeting at the end of the month, the final call about which creatives are going to be produced, out of all the ideas they present, is down to the marketers themselves. So we will say: “Okay, out of the 12 ideas you presented, we want these 6.” Because we think that, based on our hypotheses and based on what we’ve seen, this is what’s going to work the best.

Shamanth: Sure. And I like how you expressed how you’re asking why these are working. Because I think that’s an important factor in making sure you’re not just spraying and praying. And that the creative does indeed work.

To switch gears a bit, you talked about how the creative testing process works outside of Facebook; you have this very structured process. How does the process on Facebook differ from the process you described?

Danika: Yeah, that’s a really good question. There are several key differences. But I would say that the main one is the objective.

So, when it comes to creative testing outside of Facebook, we’re specifically looking for what creative has the winning IPM; what creative has an IPM that is the same as or higher than our current control.

Whereas on Facebook, basically what we’re doing is looking for what creatives are terrible; what is absolutely terrible that we want to eliminate. Anything that is okay—or really good, if you’re lucky—we see as a low risk to push into live campaigns. Because, at the end of the day, if you are doing creative testing in an MAI campaign, you don’t know how that’s going to perform when you push it into an AEO or VO campaign.

If you have an extremely low IPM, below 1 or maybe even below 2 on an MAI campaign, it’s pretty safe to say that if you put that into an AEO campaign or VO campaign, it’s not going to take off; it’s not going to work. But if it’s okay—it’s not amazing, but it’s not bad—then it’s a pretty good candidate to move into your live campaigns.

The other thing with Facebook is that we don’t test against a control, whereas when we test for other partners, we do. And the reason for this is that Facebook is always biased towards creatives that have historical data. So it doesn’t matter if, in theory, the new creatives you’re testing should be the best; they will never beat a control on Facebook, because Facebook already knows everything about that control. It knows who to show it to, when and where. So the idea with testing on Facebook is not to split test, or anything like that; it’s to warm up the creatives in an environment that is the most similar to a real live campaign environment. And basically, we let the algorithm decide what is the best; we don’t try to fight it. We just try to work with the algorithm as much as possible.
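
As a rough illustration of the Facebook approach described above, here is a small sketch in Python: MAI test results are used only to weed out clearly bad creatives rather than to crown a single winner, and anything “okay” or better becomes a low-risk candidate for AEO and VO campaigns. The cutoff is taken loosely from the “below 1 or maybe even below 2” IPM range mentioned in the conversation, and should be read as a hypothetical parameter, not a universal benchmark.

```python
# Illustrative sketch of the MAI screening rule described above; the threshold
# is a hypothetical parameter, not a universal benchmark.
def mai_screen(ipm: float, eliminate_below: float = 2.0) -> str:
    """Classify an MAI test creative for rollout into AEO/VO campaigns."""
    if ipm < eliminate_below:
        return "eliminate"   # very unlikely to take off in AEO or VO
    return "promote"         # "okay" or better: low-risk candidate for live campaigns


# Example: a creative with IPM 1.3 gets cut; one with IPM 4.8 moves on.
print(mai_screen(1.3))  # eliminate
print(mai_screen(4.8))  # promote
```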

Shamanth: Yeah, definitely. We’ve seen control ads that hog so much spend that it’s just hard to compete against them. It can be a good thing, just because it’s getting you stable and steady performance, no matter what. At some point, you’re going to have to find new creatives; and it makes complete sense to just not have the new ones compete against what’s working already.

And how do you think about the fact that something that’s winning in MAI may not be the winner in VO or AEO? So, do you just run separate tests? How do you think about that?

Danika: Yeah, so it goes back to what I was saying before. It’s pretty obvious to say that, if something has an absolutely terrible IPM in MAI, it’s not going to work in AEO or VO. Because we have to remember that at the end of the day, Facebook rewards quality. For them, giving their users the best experience is what makes them money. So if you’re trying to add a very low IPM creative into an AEO or VO campaign, Facebook is not going to reward you with the high quality audiences that, in theory, you’re looking for with that campaign type. So that’s on the one side.

However, on the other side, when I talk about the potential to add creatives that have an okay IPM in testing, because it’s low risk, it’s because there is perhaps something in that creative itself that might pique the interest of the kind of user that you’re going after. And in this case, that would be a paying user.

It’s all about finding the balance. There have been times where we’ve had creatives that work amazingly in MAI campaigns. They scale up; they have an extremely high IPM, but they simply don’t resonate with the quality users that we’re looking for in AEO and VO campaigns. And in the end, a lower IPM creative works better.

But I’ve never had a case where there’s been a creative that in MAI has had an absolutely terrible IPM, and it’s worked well in AEO and VO. That’s never happened.

Shamanth: Right. So it’s almost like a first stage screening for your AEO and VO. So whatever passes your first stage of MAI goes through a second stage of testing, so to speak.

Danika: Exactly, yeah.

Shamanth: How do you think about extending the life of a winning ad – either by making variations or in any other way? That could be on Facebook or off Facebook. How do you think about that?

Danika: Yeah, I guess there are a few different areas when it comes to variations of a winning concept. So the first and the most obvious one is localisation: localising the winning creative into all the languages that you’re currently investing in. Sometimes you will see a difference between the things that perform well in English and the things that perform well in other languages. But more often than not, we see that the correlation is pretty clear.

The second thing is going back and looking at things that have worked in the past and trying to combine that with a new concept. So if you have been fortunate enough to have a history of winning concepts, you’ll probably already know different elements in those concepts that have worked. And you’ll be able to easily combine them with a new concept. 

Then the third area is basically trying to make as many iterations as you can. And one trap that I think a lot of people will fall into is making iterations that are too similar to the original. If you were looking at these videos every single day, or these playables every single day, to you they look different. But to a user, they look like the exact same thing. You have to make sure you play around with the most obvious elements: the colours, the backgrounds, the first three seconds, so that it looks different to the user. 

Going a little bit further, something else we also make sure of is that we don’t become too IPM-blind; that we don’t let a focus on IPM and CPI—these really high-funnel metrics—take away from everything else post-install. So it’s important to always be looking at your retention, looking at your LTV, and thinking: “Okay, if my retention is a little bit lower than what I would like, I may want to include more gameplay in my video,” or “Maybe if I want to increase the LTV a little bit, I might play around with adding some more casino-style elements, like coins and currency and that kind of thing.” And that might be interesting to that segment of players, for example.

Shamanth: Yeah, which illustrates how much you can do to make variations of what’s winning. Because, again, you don’t always have to come up with something brand new. But just by changing the treatment, changing the visual look and feel of what’s working, there are a lot of wins to be had.

Danika: It’s a tough one. It’s the age-old question: do you focus on iterations based on what you know already works, or look for something completely new? It’s difficult.

Shamanth: Yeah. In terms of the creative team’s time and effort, how is it balanced between coming up with something completely new, versus building on something that is known to work?

Danika: What we’re moving towards at the moment is trying to outsource the iterations as much as possible. Basically, when we identify a hit creative, we activate a protocol that sends instructions to an outsourcing agency to make as many iterations of that video or playable as possible, plus localisations on top of that.

That basically gives our internal team the time and the space and the energy to be thinking about new concepts, brainstorming and working on new things that are of a high quality. 

We do have weekly sprint meetings, however. And if we see that there’s an iteration that is really urgent—maybe, for example, something is fatiguing and we need something really fast—then we might, in some instances, prioritise an iteration with our internal team rather than outsourcing it.

You can maintain your performance by extending the shelf life of your winning concepts for several months. However, if you really want to take your performance to the next level and scale to the next level, you have to find a new winning concept. That’s basically the be-all and end-all.

Shamanth: Why do you think that is?

Danika: Just from past experience. When you find a new winning concept, you will notice the difference, because you’ll suddenly be able to increase the scale. You’ll have a lot more margin for increasing the scale as well, when it comes to ROI. Eventually, they’ll start to fatigue, and you can just ride that wave by making a lot of iterations. But if you want that big boost of scale and performance again, you’ve got to find something different.

Shamanth: Totally. Have you seen any elements that are common to creatives that really hit it out of the park, that are clear winners? What might be some of the things that characterise such creatives?

Danika: Yeah, I think it really depends on the game that you’re working on; the genre of game. I don’t want to give too much away, obviously. If people really want to see the creatives that we’re using, they have the tools to do it. 

But I think one thing that is really important is, again, to put yourself in the shoes of the user; have a really strong player persona. And don’t just create things that look nice to you. As a marketer, as somebody who is in the industry, this is something that we had to turn around a lot on, on our side. And, I suppose, force the artists to question their integrity as artists, and literally ask them to create things that they thought were really ugly. Because we found that’s what resonated better with our audience, and things that we thought were really beautiful and high quality weren’t working as well as things that were ugly and flashy and tacky.

Shamanth: Yeah, we have a joke internally on our team that, oftentimes if we have multiple creatives, the ugliest one will win. The one we think is the least likely to win ends up winning. So that’s almost a contrarian way of predicting what could actually win.

Danika: Yeah, and that’s where the player persona really came to align with that thinking, because we started to think: “Well, why do these ugly, tacky creatives work?” We started to investigate, and we realised that the target audience we’re going after—the best audience for us—is also really into casino games and Solitaire, which are games that typically aren’t the most beautiful. They tend to have a lot of flashy graphics, block colours, and a lot going on. And therefore that kind of ad is the most attractive to that kind of user.

Shamanth: Right. I think a similar example would be in the health and fitness space. Because there’s a reason infomercials do well. They tend to be very in your face, very tacky. And people we know that have tried beautiful ads have failed, just because the audience likes tacky ads. The audience wants to see that. Certainly, I think that makes a lot of sense. 

Danika, this has been very instructive. And every time I speak to you, there’s so much that I want to take away to our own work. This is perhaps a good place for us to wrap. Before we do that, can you tell folks how they can find out more about you and everything you do?

Danika: Sure, get in touch with me on LinkedIn. Or if you want to find out more about the games that we have at Social Point, you can just go to the website and see our 3 flagship games there, which are Dragon City, Monster Legends and, my personal favourite, Word Life.

Shamanth: Excellent. We will link to all of that in the show notes. Thank you so much Danika for being at the Mobile User Acquisition Show.

Danika: Thanks very much for having me.

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A GENERATIVE AI WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.