Danika Wilkinson is a product marketer at Social Point, where she is responsible for the overall, high-level marketing strategy for one of the company’s games. She shapes strategy across all aspects of game marketing and guides UA specialists in performance analysis and optimisation.
Today’s conversation is an update on a topic we discussed with Danika the last time she was on the Mobile User Acquisition Show: how she and her team completely revamped their creative process – and broke out of a creative rut.
We go in depth about what inspired them to hit the reset button, what the elements of the reset looked like, what some of their challenges were, and how Danika brought different teams together to work towards a single goal to revamp their creative process.
There are so many insights in this conversation for those of us who are often hesitant to make big changes to our workflows, and for those of us who want to unlock new creative winners for our marketing – I highly recommend it.
ABOUT DANIKA: LinkedIn | SOCIAL POINT
ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube
KEY HIGHLIGHTS
💡 The inspiration behind resetting Social Point’s creative process
🏆 Creative winners are usually the result of a process and not ideas
🏃🏽♀️ The new framework for ideating new concepts
📆 The testing process had a fixed timeline that all members were aware of
🎤 How Danika’s team overcame the miscommunication between the creative and UA teams through data literacy
🏄♀️ The different challenges and biases in creative testing
🚗 Moving from IPM-centric testing to CPI-centric testing
📱 How the testing process has changed for iOS post-ATT
🎭 How company culture influences changes in workflow
🪅 Creative diversity reduces the risk of losing profitability
KEY QUOTES
How to recognize the right time to revisit your processes
I think that this creative winner we had was born out of pure luck, and continuing to do the same thing over and over again wasn’t going to find us a new winner. So it was evident that, after a year, nothing was coming out of this process, and we realized that it was time to really take a step back, look at what was going on, and totally revitalize everything that we were doing.
Let the data tell you what to do
We had absolutely no shortage of ideas with the previous system. Our art team was always responsible for coming up with the creative briefs. And quite often, we would find that our decisions to either approve or disapprove certain creative ideas were born out of pure intuition, maybe, or a vague idea around what we thought looked good, or what we thought our audience would respond well to. So in the new framework, basically, we decided to set a list of categories or guidelines to justify the production of a creative, and we had a very low threshold for what we call wild card creatives, which is to say, totally original creatives. These categories include things like market insights, audience insights, what we see our competitors doing, and, most importantly, the hard data, the hard learnings that we have from our own internal testing and also our own internal live campaigns.
The new framework that ran on autopilot at Social Point
Our testing process previously was very ad hoc. We’d just test things when we received them, and send back the results when they were ready. One of the core parts of our framework was making sure that there was a really strong synergy between testing, ideation, and production. So the easiest way to do this, and it sounds very simple, is to put together a very fixed roadmap of when things need to be delivered, when things have to be tested, when the results are to be communicated and analyzed, and then when the decisions are to be made about what is to be produced next. It was as simple as just picking the days of the week and the time of the day when our artists had to deliver assets and when they should expect to get results about those assets back.
Educating the creative team to understand data is beneficial
We did a three- or four-part training session for the creative team where we really looked at the different metrics and what they mean, and basically helped them to understand that creative is at the core of our strategy, especially with a game of this type. And it really helped them to understand how important their role actually is. I think perhaps beforehand, they just felt like they were churning out assets without any idea of where they went or what impact they had. So it was really about empowering them, not only to seek out the data, analyze it, and understand it, but also to feel that they are making the biggest impact on the entire marketing team, on the success and profitability of the game.
Motivate your creative team by thinking like them
One thing that we did was tag all of the creatives that we were producing and group them into different concepts and different visual skins. Then we actually have a bar graph with stacked columns, where each stack is a different color and represents a different creative concept and its investment. So we can see, across all of the live campaigns, that the bigger a color gets, the more investment is allocated to that concept. And you can imagine: we had one big winner for about a year, and for that entire year, all the bars have the same color. Then the second we implement the creative framework, you have a rainbow of colors across the board. That’s something that’s really useful for the creative team. It also motivates them to say: look at the impact that I’m having, all of this variety, all this diversity that I’m creating.
Why an extra level of testing can sustain profitability
In order to protect ourselves from disrupting live campaigns based purely on split testing results, we added another level of testing whereby anything that beats our control is moved into a secondary quality testing phase, where we then double-check retention and LTV. That way we can be extra sure that if we add a new concept into a live campaign, it’s not going to tank our metrics.
A CPI-centric approach is better than an IPM-centric approach
We find that going after the lowest CPI tends to be much more sustainable and much more profitable. So we focus much more heavily now on which creative is achieving the lowest CPI for us, as opposed to which is getting us the highest IPM. Network testing is different because, obviously, it’s not auto-bid, so there you need to focus a bit more on IPM and CPM. We also look at impression and investment share: basically, we look at where the algorithm is sending all the impressions, because that is usually the best indicator that a creative is the most in line with whatever goal you’ve set up with the algorithm. But of course we do backup checks. We look at retention, LTV, and of course ROI on our side, just to verify that.
Even a poor creative offers learnings
What if something passes that initial test and gets a lower CPI than the control, we move it into quality testing, and the retention or the LTV is poor? We’ve got a learning to say: okay, something about this creative makes people convert and want to download the game. But what is it that has not been in line with the game that they get? And how can we adapt accordingly?
How company culture influences major revamps
We have this culture that we call being T-shaped, which essentially means that you can be an expert in your field, but you’re expected to branch out, like the letter T, into other fields as well. So nobody is confined to a box. In fact, it’s expected of us to be able to do other things outside of our traditional area of expertise. We’re very wary of having silos between teams as well. And this is a logic that we apply not only to, in this case, UA and creative collaboration, but also, for example, to marketing and product collaborations.
A great team makes a great workplace
I would say the second thing is that I’m very fortunate to have a team who are extremely self-reflective and self-critical. They were able to take a step back and say: okay, after a year, we’ve been working so hard, we’ve been coming up with all these original creative concepts, and nothing has worked. Something needs to change here.
Creative diversity keeps your profitability afloat
The best thing about having so much creative diversity is that you have backups, so you’re not just depending on one big winner – because the second it runs out, you lose all of the scale and all the profitability.
FULL TRANSCRIPT BELOW
Shamanth
I’m very excited to welcome Danika Wilkinson to the Mobile UA show. Danika, welcome back.
Danika
Thanks for having me again.
Shamanth
Very excited to have you. And a little background for folks listening: you posted something, somewhat tangentially, on the socials, and our team thought it was a really interesting update and change compared to what you talked to us about the last time – how you guys were testing creatives. I think this is a good point to really dig into because, for folks who are still wondering: you completely revamped your creative framework. That is not easy, as we will all find out in the rest of this conversation. So we’re definitely excited to have you talk about how you guys completely hit reset on your creative framework. What inspired you and your team to hit the reset button?
Danika
Back around the beginning of summer, we had a creative winner that had lasted us about a year, that we were unable to beat no matter what we did. Basically, we’d gotten by for about a year just making a whole lot of iterations of this particular winner, but conceptually, we were never actually able to surpass the benchmark. Naturally, because we’d found this winner through a certain creative process, one based around monthly ideation and production of maybe 10 to 12 original new concepts, we followed the same path. I think that this creative winner we had was born out of pure luck, and continuing to do the same thing over and over again wasn’t going to find us a new winner. So it was evident that, after a year, nothing was coming out of this process, and we realized that it was time to really take a step back, look at what was going on, and totally revitalize everything that we were doing. But on top of that, we also realized that there was somewhat of a separation between our art team and our UA team. The UAs, for example, couldn’t really understand why they were receiving certain creative assets that maybe weren’t in line with the test results that they were seeing. And on the other side, the creative team couldn’t really understand why the things they were producing weren’t working, and they didn’t know a lot about the specific KPIs, the specific metrics, that went into that. So yeah, we basically took a step back, we started from square one, and we built a whole new process from the ground up.
Shamanth
We’ll certainly talk about the team dynamics in a bit. But what I find very interesting, and not always intuitive, is how you said: “Look, we’re not getting winners, and the current winner is a function of the current process.” That’s not always an obvious conclusion for a lot of people. The more obvious conclusion could be: “Oh, we don’t have winners because we don’t have amazing inspiration, because we don’t have breakout ideas.”
Danika
We had absolutely no shortage of ideas. I mean, we had a very creative team and an amazing amount of ideas. But the problem is, where do those ideas come from? And are they actually going to work? And how can we prove that they’re going to work?
Shamanth
The last time we spoke, you said you guys put a structure around where ideas come from. Tell us more about what that structure looks like.
Danika
So like I said, we had absolutely no shortage of ideas with the previous system. Our art team was always responsible for coming up with the creative briefs. And quite often, we would find that our decisions to either approve or disapprove certain creative ideas were born out of pure intuition, maybe, or a vague idea around what we thought looked good, or what we thought our audience would respond well to. So in the new framework, basically, we decided to set a list of categories or guidelines to justify the production of a creative, and we had a very low threshold for what we call wild card creatives, which is to say, totally original creatives. These categories include things like market insights, audience insights, what we see our competitors doing, and, most importantly, the hard data, the hard learnings that we have from our own internal testing and also our own internal live campaigns. And I think the best thing about this is that it also helps our creative team to not get lost when it comes to thinking of new ideas; they have a very specific structure to ideate from. It also means that when we approve creatives, we have something to back us up. We can’t just disapprove something based on a feeling; we disapprove something because it doesn’t fit into the categories. And of course, the more boxes a creative idea ticks, the stronger it is.
Shamanth
I like your framing: the more boxes an idea ticks, the stronger it is. It helps the creative team know and understand how to come up with winners, and it gives your marketing teams very objective criteria for evaluating ideas, so it’s not fuzzy anymore. Can you speak to the testing process itself? Because you did put a structure around the testing process. Can you speak to the cadence of it, the mechanics of it, and how this was helpful?
Danika
Another reason why we decided to revolutionize the entire framework was because there was this separation between what the UA team was testing, the results they were getting, and what the creative team was actually producing. And this was because our testing process previously was very ad hoc: we’d just test things when we received them, and send back the results when they were ready. One of the core parts of our framework was making sure that there was a really strong synergy between testing, ideation, and production. So the easiest way to do this, and it sounds very simple, is to put together a very fixed roadmap of when things need to be delivered, when things have to be tested, when the results are to be communicated and analyzed, and then when the decisions are to be made about what is to be produced next. It was as simple as just picking the days of the week and the time of the day when our artists had to deliver assets and when they should expect to get results about those assets back. The UA team also knows when they have to test things, how long they have to test them, and when they have to send the results back. We usually start the month with a rough idea of the sorts of concepts we’re going to develop that month and how many, but as the month goes on and we learn more about what’s working and what isn’t, we meet every week and dynamically shift around the building blocks of that roadmap to prioritize things that are more in line with our current winner.
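To make the idea of a fixed cadence concrete, here is a minimal sketch of what such a roadmap might look like in code. The days, times, and owners are hypothetical examples, not Social Point’s actual schedule.

```python
# A minimal sketch of a fixed weekly creative-testing cadence.
# All days, times, and owners below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Milestone:
    day: str     # day of the week
    time: str    # time of day
    owner: str   # team responsible
    action: str  # what has to happen

WEEKLY_CADENCE = [
    Milestone("Monday", "10:00", "Art team", "Deliver new creative assets"),
    Milestone("Tuesday", "09:00", "UA team", "Launch split tests on delivered assets"),
    Milestone("Thursday", "16:00", "UA team", "Close tests, analyze and share results"),
    Milestone("Friday", "11:00", "Core team", "Creative sprint: reprioritize the roadmap"),
]

for m in WEEKLY_CADENCE:
    print(f"{m.day} {m.time} | {m.owner}: {m.action}")
```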
Shamanth
Yeah, I think you anticipated one of the questions I had, which is: what happens if things change mid-course? So it sounds like you have a monthly process, and you are prepared to change the process on the fly.
Danika
We look at our roadmap like a series of building blocks, and a big part of our framework is being reactive and flexible. Every week, we have a couple of meetings. The first one is with the core team, so the core UAs and the marketing art lead, where we analyze what isn’t working, both in testing and in a live environment, and we discuss between all of us, in a very two-way conversation, what we think that could mean and what we think we should do next. Then we have a creative sprint meeting where we rearrange the roadmap accordingly and align with the artists. We say: “Okay, are you guys happy to do it like this? Can you fit this in?” Usually everything is fine. They work very fast.
Shamanth
You briefly touched upon the dynamics within the teams. That can certainly be a huge challenge when you are working with creative folks and marketing and UA folks, because marketing and UA folks run testing and are very quantitative, while creative teams run production and are very qualitative. It’s a very, very different skill set; it can almost be like they’re talking two different languages, and it’s just very hard to have them align and talk to each other. What are some of the things that were helpful in bridging this gap between the creative folks and the quantitative folks? Can you also tell us where you fit into this structure, and how all of this operated?
Danika
I think one of the problems with the previous system we had in place was that, quite often, the communication of results could be a little bit vague. The artists didn’t really know a lot about why a creative they produced didn’t work, other than that we tested it and it failed. We realized that one of the reasons behind this was that the data literacy of the creative team was really low. Not only did they feel that the BI tools we use were really inaccessible to them, they also didn’t really understand some of the most basic creative metrics, such as CTR or IPM, and the impact those actually have in marketing – for example, that a better IPM equals better auction efficiency.
So the first thing we did was set up special custom dashboards on our BI tools, specifically for the creative team, where they can go and find everything they need. We even put custom dimensions on these dashboards to basically help them prioritize what to look at, so there was no difficult work required on their end. The second thing we did was a three- or four-part training session for the creative team where we really looked at the different metrics and what they mean, and basically helped them to understand that creative is at the core of our strategy, especially with a game of this type. And it really helped them to understand how important their role actually is. I think perhaps beforehand, they just felt like they were churning out assets without any idea of where they went or what impact they had. So it was really about empowering them, not only to seek out the data, analyze it, and understand it, but also to feel that they are making the biggest impact on the entire marketing team, on the success and profitability of the game.
Then the other thing that was really important, again, was the structure, like I mentioned before: structuring our weeks, structuring our meetings, making sure the agenda was really specific, so that we communicated all the learnings in a way that they could understand and leverage. The idea behind this was that we wanted to get learnings from absolutely everything we did; whether it was a success or a fail, we wanted to get something from it. I guess my role was very much bridging the gap between the UA team, who are very much data focused, and the creative team, who are a lot more artistic, and just trying to eliminate the silo between the two and make them one big, holistic team.
Shamanth
And it sounds like much of the effort went toward making sure the creative team was educated about how critical everything they do is. I see the same challenge as well: if I ask marketers, they know and intuitively understand how important the creative is, but when I’ve talked to designers, it’s not clear to them how much depends on them. There are millions of dollars at stake that really depend on how performant their creatives are. What I also find impressive is how you simplified the data for the creative folks, because the data is there, but it can just be too much for somebody who isn’t looking at data every day. I’m also curious if there’s an example that comes to mind of how the data was simplified for the creatives.
Danika
One thing that we did was tag all of the creatives that we were producing and group them into different concepts and different visual skins. Then we actually have a bar graph with stacked columns, where each stack is a different color and represents a different creative concept and its investment. So we can see, across all of the live campaigns, that the bigger a color gets, the more investment is allocated to that concept. And you can imagine: we had one big winner for about a year, and for that entire year, all the bars have the same color. Then the second we implement the creative framework, you have a rainbow of colors across the board. That’s something that’s really useful for the creative team. It also motivates them to say: look at the impact that I’m having, all of this variety, all this diversity that I’m creating.
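A chart like the one Danika describes is straightforward to build once creatives are tagged by concept. Below is a minimal matplotlib sketch; the concept names and spend figures are invented for illustration.

```python
# A minimal sketch of a stacked bar chart of investment by creative
# concept over time. Concept names and spend figures are invented.
import matplotlib.pyplot as plt
import pandas as pd

spend_share = pd.DataFrame(
    {
        "Concept A (old winner)": [95, 90, 45, 30],
        "Concept B": [5, 5, 25, 25],
        "Concept C": [0, 5, 20, 25],
        "Wildcard": [0, 0, 10, 20],
    },
    index=["Jan", "Feb", "Mar", "Apr"],  # one stacked bar per month
)

# Each color is one concept; a single dominant color signals
# over-reliance on one winner, a "rainbow" signals diversity.
spend_share.plot(kind="bar", stacked=True)
plt.ylabel("Investment share (%)")
plt.title("Live-campaign spend by creative concept")
plt.tight_layout()
plt.show()
```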
Shamanth
That’s so smart, because for somebody that’s a visual thinker, that’s so crystal clear.
Danika
Exactly. No numbers, just colors. Anyone can look at that and see that the impact is clearly obvious.
Shamanth
This is also something that I feel like I’ve had to push our quantitative folks on, because I think it’s somewhat easy for a quantitative person to say: there’s a 10% improvement, I will put this in a statement, and people will intuitively get it. But oftentimes, when I’ve had them just show a graph instead of a paragraph, it’s so much more vivid.
In terms of the testing itself, can you speak to the mechanics of how the tests are run? And I recollect you and I talked about the different challenges in testing and some of the biases that can be introduced. Can you speak to how you test? How do you think about the biases that can happen?
Danika
So we now use split testing on Facebook. I have said in the past, and I still stick by this, that I’m quite anti-split-testing on Facebook, because I think it rarely accurately mirrors a real or live campaign environment. However, at the end of the day, it is the most scientifically viable way to get hard creative learnings, and we wanted to get some kind of learning out of everything we did, whether it was a fail or a win. So because of this, in order to protect ourselves from disrupting live campaigns based purely on split testing results, we added another level of testing whereby anything that beats our control is moved into a secondary quality testing phase, where we then double-check retention and LTV. That way we can be extra sure that if we add a new concept into a live campaign, it’s not going to tank our metrics.
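The two-stage gate Danika describes amounts to logic like the following sketch. The field names and thresholds are hypothetical, not Social Point’s actual criteria.

```python
# A minimal sketch of the two-stage gate described above: a creative
# must beat the control's CPI in a split test, then clear retention
# and LTV checks before entering a live campaign. All field names
# and thresholds are hypothetical.

def passes_split_test(creative_cpi: float, control_cpi: float) -> bool:
    """Stage 1: the challenger must beat the control on CPI."""
    return creative_cpi < control_cpi

def passes_quality_test(d1_retention: float, predicted_ltv: float,
                        min_retention: float = 0.35,
                        min_ltv: float = 1.50) -> bool:
    """Stage 2: double-check retention and LTV so a cheap install
    doesn't tank downstream metrics."""
    return d1_retention >= min_retention and predicted_ltv >= min_ltv

def ready_for_live(creative: dict, control: dict) -> bool:
    return (passes_split_test(creative["cpi"], control["cpi"])
            and passes_quality_test(creative["d1_retention"],
                                    creative["predicted_ltv"]))

# Example with made-up numbers:
control = {"cpi": 2.10}
challenger = {"cpi": 1.80, "d1_retention": 0.40, "predicted_ltv": 2.00}
print(ready_for_live(challenger, control))  # True
```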
Then as we got deeper into the process, we also realized that we couldn’t really paint Facebook and all our other networks and DSPs with the same brush. We also observed that a lot of competitors were using different creatives depending on the network. So we decided to start running tests on a network partner, and then, based on these results, we would roll out the creative assets to other live partners in order of risk. There are certain partners that are much more sensitive to creative change than others, so by the time we get to the very end of that list, we have to be really, really sure that a creative is going to work before we add it into a live campaign.
Another difference, I think, is that the metrics we look at are now different. Previously, we were very IPM-centric. But if we’re speaking specifically about Facebook, we have to remember that if you set up an MAI campaign, you’re essentially telling Facebook: get me the cheapest installs. And that’s exactly what Facebook is going to try to do. So it makes a lot more sense to look at CPI. Obviously, on Facebook, the IPM can change depending on the placement that it gives you; we use automatic placements, so we don’t really get any control over this. It also changes depending on the format of the creative – is it a banner, a video, or a playable?
The game that I work with is primarily ads-monetized, and as part of our overall marketing strategy, we find that going after the lowest CPI tends to be much more sustainable and much more profitable. So we focus much more heavily now on which creative is achieving the lowest CPI for us, as opposed to which is getting us the highest IPM. Network testing is different because, obviously, it’s not auto-bid, so there you need to focus a bit more on IPM and CPM. We also look at impression and investment share: basically, we look at where the algorithm is sending all the impressions, because that is usually the best indicator that a creative is the most in line with whatever goal you’ve set up with the algorithm. But of course we do backup checks. We look at retention, LTV, and of course ROI on our side, just to verify that.
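For reference, the metrics contrasted here are simple ratios of spend, impressions, and installs. A quick sketch with made-up numbers shows how a creative can win on IPM yet still lose on CPI:

```python
# The creative metrics contrasted above, as simple ratios,
# with made-up numbers showing how a creative can win on IPM
# yet still lose on CPI.

def cpi(spend: float, installs: int) -> float:
    """Cost per install."""
    return spend / installs

def ipm(installs: int, impressions: int) -> float:
    """Installs per mille (per 1,000 impressions)."""
    return installs / impressions * 1000

def cpm(spend: float, impressions: int) -> float:
    """Cost per mille (per 1,000 impressions)."""
    return spend / impressions * 1000

# Note the identity CPI = CPM / IPM: a high-IPM creative is still
# expensive per install if it only wins costly, high-CPM placements.
creative_x = {"spend": 800.0, "impressions": 100_000, "installs": 100}
creative_y = {"spend": 420.0, "impressions": 100_000, "installs": 70}

for name, c in [("X", creative_x), ("Y", creative_y)]:
    print(name,
          f"IPM={ipm(c['installs'], c['impressions']):.2f}",
          f"CPM={cpm(c['spend'], c['impressions']):.2f}",
          f"CPI={cpi(c['spend'], c['installs']):.2f}")
# X has the higher IPM (1.00 vs 0.70) but the worse CPI (8.00 vs 6.00).
```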
Shamanth
That’s so interesting. A couple of things jumped out as not very intuitive, and I can see why you’d do them. Number one was what you said: “Look, I can’t just depend on the placement that it’s running on.” If something’s running just on FAN, the performance can be very different. The other thing that was interesting is that you’re also mitigating risk by moving everything to a second level of testing before it goes into a live campaign. That’s a second level of checks, because if something is winning in an IPM test, it doesn’t necessarily mean it will eventually win.
Danika
And again, it can also give us an extra layer of learning. Because what if something passes that initial test and gets a lower CPI than the control, we move it into quality testing, and the retention or the LTV is poor? We’ve got a learning to say: okay, something about this creative makes people convert and want to download the game. But what is it that has not been in line with the game that they get? And how can we adapt accordingly?
Shamanth
Absolutely. Speaking of the somewhat ubiquitous topic of the last few months, which is iOS: how does everything you described change for iOS, as compared to Android?
Danika
We test now purely on Android, because it’s faster and it’s cheaper, and of course we don’t have all the blockers that are associated with iOS 14. We basically just apply the results directly to iOS. Occasionally, we do see a difference between what works on iOS and Android, but it’s quite rare; I would say 8 out of 10 times the results between the two platforms are the same.
Shamanth
Yeah, it makes sense – faster, quicker, smoother. And to speak to one aspect of the team dynamics: something that I’m very impressed by is that everything you described is a very far-reaching change, from where the ideas come from, to how the testing happens, to how the creative teams look at the metrics, to how the communication happens. Basically, you’re trying to make multiple new trains run on time, and these are all new things for the people involved. I don’t know if it strikes you as very challenging, but for me, looking from the outside in, it seems staggering. People have to change the way they work, and oftentimes that can result in pushback. How did you address this? How did you prepare for the organizational resistance?
Danika
Yeah, it’s true. Quite often, when you try to make a sweeping change like this, you do worry about stepping on other people’s toes or encroaching on somebody else’s territory, and you do worry about resistance and pushback. I think, first of all, I have to say we are very fortunate at Social Point to have this culture that we call being T-shaped, which essentially means that you can be an expert in your field, but you’re expected to branch out, like the letter T, into other fields as well. So nobody is confined to a box. In fact, it’s expected of us to be able to do other things outside of our traditional area of expertise. We’re very wary of having silos between teams as well. And this is a logic that we apply not only to, in this case, UA and creative collaboration, but also, for example, to marketing and product collaborations.
I would say the second thing is that I’m very fortunate to have a team who are extremely self-reflective and self-critical. They were able to take a step back and say: okay, after a year, we’ve been working so hard, we’ve been coming up with all these original creative concepts, and nothing has worked. Something needs to change here. And I’ve got this amazing marketing art lead who was so open and so keen to learn more about data literacy and metrics. He wanted to understand how all the different partners worked and what the ads looked like for users. And on the other side, I have user acquisition specialists who are now super empowered to make creative decisions, who feel like all of their suggestions are valid, and that nobody’s saying to them: hey, you’re a UA person, you deal with numbers, who are you to say whether this creative or that creative works?
I have to say as well, I’m very lucky that the implementation of this framework has actually worked. Maybe if it had been a total failure, I would be in a different position today. And I also have to mention that one of the mantras within our team is that we never get complacent and we always question everything. We are always running sanity checks to challenge even our own processes; we do quarterly retrospectives, and we’re constantly trying to improve the framework. So basically, my advice to anybody who wants to make a big change like this, and is getting a lot of pushback, would be to rely on the data. It’s the most basic thing. In our case, if you’ve produced X amount of creatives over X amount of time and you’ve made very little ground, that is the best proof that something needs to change at a much deeper level.
Shamanth
Yeah, there are so many aspects of what you said that I find impressive, including the fact that it was a company-wide culture that led to people being open to this. There’s also an intellectual honesty in being able to look at your metrics and results and say: “Look, in this graph, everything is one big color; something needs to change.” And something you didn’t say, but I would call out: a great deal of credit for the success of the initiative probably also goes to the effort you put in to bridge the gap between the creative folks and the quantitative folks.
Danika
I would say we feel more united than ever, and team motivation and morale are at kind of an all-time high, because we all feel like we have this one big goal that we’re working towards, and everybody has contributed their part as well.
Shamanth
That feels magical.
Danika
Definitely makes working a lot more fun.
Shamanth
Like I said, it wasn’t always easy. You had to simplify the BI data; you had to take a step back yourself and ask, “How do I make this simple for everybody?” You ran the training sessions for your creative team to help them actually make sense of a lot of this. I’m so impressed and so inspired by everything you described. You told us about the rainbow – can you tell us a bit more about the results of this change in process?
Danika
Yeah, I guess if the result wasn’t positive, we wouldn’t be here talking about it today. Like I said, we had one big winner, one consistent color, that we were unable to beat for a year, and we just sustained it through a lot of iterations. Now, in the past three or four months since we implemented this framework, we have six different concepts that work for us. When we include iterations, that equals a total of about 20 different creative assets that are currently working for us in a profitable way. And I would say that the impact of this creative diversity is mostly reflected in our performance: since we implemented the new creative framework, our D3 ROI has increased by 30% in just three months. So the results are really obvious. And actually, if you put the two graphs side by side – the evolution of the day-three ROI and the colorful bar graph I described earlier – you can pinpoint the exact moment that creative diversity increases and the ROI trend goes up as well. The impact is extremely obvious, and we’re all very happy with it.
Shamanth
I would imagine if you have more creative diversity, you’re also reaching different kinds of audiences and users. So I imagine it’s easier to scale, but please correct me if that’s not quite right.
Danika
No, I think that this is definitely part of it. But I think the best thing about having so much creative diversity is that you have backups, so you’re not just depending on one big winner – because the second it runs out, you lose all of the scale and all the profitability. Rather, you’re constantly shifting or cycling the budget around different concepts. So the moment one concept starts to fatigue, you have five others in the pipeline that we know all work perfectly, and we probably have another five ideas ready to come in and substitute that sixth one. Ideally, I would like 100 – but at the very least, have more than one.
Shamanth
Certainly I’ve been in a place where we have been reliant on a limited number of creatives and we go into every meeting saying – “we hope this does not tank.” But you have really mitigated that risk just by diversifying.
Danika
Exactly. You just put your eggs in all of the baskets and the risk is immediately reduced.
Shamanth
Certainly, Danika, this has been so instructive, much like the last time we spoke. Every time I’ve spoken to you, I’ve learned so much, and this time is no different. This is a good place for us to wrap this conversation. It’s been a pleasure speaking with you. But before I let you carry on, could you tell folks how they can learn more about you and everything you do?
Danika
Sure, you can go to the Social Point website, you can download my brilliant game Word Life from the App Store or Google Play, or you can reach out to me on LinkedIn.
Shamanth
Wonderful. We will link to all of that in the show notes. But for now we will let you carry on with the rest of your afternoon.
Danika
Thank you so much.
A REQUEST BEFORE YOU GO
I have a very important favor to ask, which as those of you who know me know I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, Google Podcasts or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.
Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.
Thank you – and I look forward to seeing you with the next episode!