
Our guest today is Ekaterina Gamsriegler, Head of Growth and Marketing at Mimo – an app that helps people learn coding.

We cover so much ground around subscription monetization. Ekaterina talks about how the iOS privacy changes provided the early impetus for monetization experiments, and describes the different levers that impact monetization. She describes the nuances of optimizing trial screens, highlighting benefits, understanding and combating cancellations and much more. 

Today's interview is a masterclass in subscription monetization – and we're excited to present it today.

Note:

We wrapped up the Mobile Growth Lab, where over 60 marketers, executives, product managers and developers signed up to break the shackles of ATT's performance and measurement losses. You can get access to the recorded versions of these sessions through our self-serve plan.

Check it out here: https://mobilegrowthlab.com/

ABOUT EKATERINA: LinkedIn | Mimo | Course – Growing Mobile Apps

ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

Identifying the levers in the funnel

Scenario planning models for quantitative analysis

The impact of UA costs on iOS

Why trial opt-in rates have disproportionate impact

Variables that were tested with trial screens

How offering to send notifications at the end of the trial period actually improved conversions

Combating cancellations post trials

The impact of highlighting benefits

The impact of localizing the app

How the team implemented the content-sharing loop

Tracking in-store searches

KEY QUOTES

Scenario planning with something as basic as spreadsheets

What I mean by scenario planning is that I would have a sheet with a lot of different revenue-related metrics, covering pretty much everything from the top of the funnel, like trial opt-in rates and average revenue per subscription, to the very bottom of it, ending up with LTV and payback periods of different subscription durations.

Mine, the one I'm using at work, is quite complex, but they can also be fairly basic. As the spreadsheet is automated, you have multiple inputs and multiple variables which naturally impact each other. Once you change just one of these, you can pretty much see, in the end, what kind of shift in revenue, lifetime values or any other metric you can expect.

How increasing the trial opt-in rate impacted LTVs

We fall into the category of personal development apps; these are the ones where the users' motivation is the highest at the very beginning. Then it might gradually go down, because learning to code requires a lot of energy and effort. So for us, in order to lock the user in, it was critical to help the user commit. That's why we were offering discounted subscriptions at the beginning of the journey, which was also having a negative impact on the lifetime values overall.

So of course, if we increased the trial opt-in rates, then the distribution of purchases happening at full price versus the discounted price would get healthier. This would have a high impact on our LTVs. This is exactly what happened, and it helped us a lot.

The reminder to cancel trials went a long way

We were receiving a lot of user feedback and reviews where users were not very happy with the trial opt-in overall; they had very little trust in it. A lot of reviews talked about developers capitalizing on users who forget to cancel, so it's quite a barrier for many users to even start the trial. The way the screen works is that it describes how exactly the trial period works and how long it is, and mentions that the user will get a notification or an email before the trial expires so that they can cancel it if they don't find the pro features valuable. We did exactly that. This resulted in around a 100% increase in the trial opt-in rates and also decreased our cancellations within the first 24 hours by 25%, which was huge for us.

Tracking trial churn and subscriber churn

There were different experiments that we did for tackling trial churn and also subscriber churn. For the trials, the first thing we noticed is that when users opted in for the trial, the flow of 10 screens was working great, and users clearly understood how the trial works. But it was not clear to them why they would upgrade and what the benefits of having a pro subscription would be. This was because a big part of the benefits for the user was missing.

After that we started experimenting with benefits, in a personalized way based on the user's answers during onboarding. This helped us decrease trial cancellations by around 10%. On top of explaining how the trial works, we also started to add different parts to the layout explaining why you would need to upgrade and what the main benefit for the user would be.

Experimenting with screens

The big question is how exactly to figure out what is worth including in the flow, and whether it should be one long scrollable screen or whether it should all be broken down into separate screens. So we experimented quite a bit with that. Currently, the multi-screen version works very well. It led to around a 60% improvement in the trial opt-in rates and also decreased our cancellation rates by a lot, because even at the opt-in stage users would already clearly understand the benefits of opting in for a trial of a pro subscription, and also how it is different from the basic version.

Notifying users of updates & changes

It was kind of clear that a lot of users were under the impression that the content might not be deep enough, or there might not be enough of it for the whole year. That's why they would cancel the subscription pretty fast at the beginning of the journey. The only way to tackle this, as I saw it, was value nurturing, which means constant reminders of the new content; we just were not communicating it properly. Showing in-app messages, sending push notifications, and even setting up in-app events came in quite handy for promoting the big updates, because otherwise the users were under the impression that literally nothing is changing, happening or improving.

FULL TRANSCRIPT

Shamanth 

I’m very excited to welcome Ekaterina Gamsriegler to the mobile user acquisition show. Ekaterina, welcome to the show.

Ekaterina 

I’m really excited to be here. It’s my first podcast. 

Shamanth 

One of the reasons I’m excited to have you, Ekaterina, is because from our last conversation, you have very nuanced insight into all things subscriptions, monetization and yet, you still come across as the kind of person that’s outside of the conference circuit. I think it’s great to be learning from somebody that’s been in the trenches executing and not quite as much in the limelight. So I’m excited to have you on the show today. 

To get started, when you’re looking at monetization and testing, there are different parts of the user funnel that can be optimized. So it could be installing, registration, registration to trial to purchase, recurring purchases, or there could be more as well, there could be virality, there could be the referrals, there could be content. 

How do you identify which of these is the biggest lever to focus on in your testing?

Ekaterina 

That's a great question. I have to mention our funnel looks slightly different, because for us, the signup is optional at the beginning of the user journey. So we basically have a free user whom we can then convert to starting a trial. Hopefully, they'll upgrade and then renew the subscription.

It's natural to assume that the biggest levers are usually the ones at the top of the funnel, because these are the changes in the features that have the highest exposure. However, if you look at the typical, most popular prioritization frameworks, you might have something at the bottom of the funnel that does not have too much exposure, but if it takes very little effort and you have a lot of confidence that it's going to work, then it pretty much falls into the low-hanging fruit category.

So, there is a way to use these basic prioritization frameworks, which are quite popular, for defining and figuring out what to work on. But what I personally also find very useful is something like, I think the correct way to call it would be, scenario planning or scenario modeling sheets. I love spreadsheets.

What I mean by scenario planning is that I would have a sheet with a lot of different revenue-related metrics, covering pretty much everything from the top of the funnel, like trial opt-in rates and average revenue per subscription, to the very bottom of it, ending up with LTV and payback periods of different subscription durations.

Mine, the one I'm using at work, is quite complex, but they can also be fairly basic. As the spreadsheet is automated, you have multiple inputs and multiple variables which naturally impact each other. Once you change just one of these, you can pretty much see, in the end, what kind of shift in revenue, lifetime values or any other metric you can expect.

I think this was exactly the kind of automated thing that helped me figure out which changes even needed to be tested. Because some of the changes, like, for example, increasing the price, as long as there is no cannibalization between different subscription plans going on, were pretty straightforward: you can very easily project what increase in revenue it is going to bring.

With more tricky scenarios, you might have up to two to three different ways your lifetime values and revenues might change in the end. So I found these kinds of templates fairly useful. They're widely used in marketing overall, when you try to plan out different initiatives: does it make more sense to start testing new acquisition channels, or to invest that energy into optimizing your onboarding emails? But for different growth planning scenarios and product changes it also makes a lot of sense.
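The kind of automated sheet described above, where changing one funnel input propagates to every downstream revenue metric, can be sketched in a few lines of Python. All function names and figures here are hypothetical placeholders, not Mimo's actual metrics:

```python
# A minimal scenario-planning sketch: change one funnel input and see how
# the downstream revenue metrics shift. All numbers are hypothetical.

def ltv_scenario(installs, trial_opt_in, trial_to_paid, arps, cpi):
    """Return key outputs for one scenario of the subscription funnel."""
    trials = installs * trial_opt_in        # users starting a trial
    subscribers = trials * trial_to_paid    # trials converting to paid
    revenue = subscribers * arps            # arps = avg revenue per subscription
    spend = installs * cpi                  # total acquisition cost
    return {
        "subscribers": subscribers,
        "ltv_per_install": revenue / installs,
        "roas": revenue / spend,            # a simple payback proxy
    }

base = ltv_scenario(installs=10_000, trial_opt_in=0.05,
                    trial_to_paid=0.40, arps=60.0, cpi=1.50)
# Double the trial opt-in rate, holding everything else fixed:
uplift = ltv_scenario(installs=10_000, trial_opt_in=0.10,
                      trial_to_paid=0.40, arps=60.0, cpi=1.50)
print(base["ltv_per_install"], uplift["ltv_per_install"])  # 1.2 2.4
```

The point of such a sheet is exactly what the quote describes: one changed input (here, the trial opt-in rate) flows through to every downstream metric at once.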

Shamanth 

Yeah, it sounds like you have a quantitative model that helps you really quantify which of these will have a bigger impact and help you evaluate which one is actually moving the needle.

Ekaterina 

That’s a very good way to put it.

Shamanth 

What were some of the things that you saw would have the most impact on testing when you guys were getting started with the experimentation?

Ekaterina 

2021, for us as well as for many other advertisers and products, was quite a challenging time in terms of customer acquisition costs on iOS, which increased quite a lot. The thing was that the unit economics were viable, but still far from the way they used to be, which was limiting our marketing investment quite significantly. So naturally, the choices were to either decrease the customer acquisition costs or increase our lifetime values, or do both simultaneously, which is pretty much what we focused on.

So for us, lifetime value was the biggest focus for the whole of last year, and the main lever for us when it comes to lifetime values was actually increasing the price slightly, after doing quite extensive surveys in the app. I also did the van Westendorp price sensitivity survey, trying to figure out the ideal price points for the different segments of users that we have, based on their motives.
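The van Westendorp survey mentioned here asks respondents at which prices a product feels "too cheap" and "too expensive"; the crossing of those cumulative curves is read as the optimal price point. A minimal sketch of that crossing logic, with entirely made-up survey shares:

```python
# Van Westendorp price sensitivity sketch with hypothetical data: at each
# candidate price, the share of respondents calling it "too cheap" falls
# while the share calling it "too expensive" rises; their crossing is
# read as the optimal price point (OPP).

prices        = [5, 10, 15, 20, 25]
too_cheap     = [0.80, 0.45, 0.25, 0.08, 0.02]  # cumulative share, decreasing
too_expensive = [0.03, 0.10, 0.22, 0.55, 0.85]  # cumulative share, increasing

# First price where "too expensive" overtakes "too cheap":
opp = next(p for p, cheap, expensive in zip(prices, too_cheap, too_expensive)
           if expensive >= cheap)
print(opp)  # -> 20
```

A real analysis would interpolate between price points and also use the "cheap" and "expensive" curves to bracket an acceptable price range; this sketch only shows the core crossing logic.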

Apart from this, another big lever was increasing the trial opt-in rates, because a lot of our subscriptions before that were happening with a discount; we would offer a discount at the beginning of the user journey.

We fall into the category of personal development apps; these are the ones where the users' motivation is the highest at the very beginning. Then it might gradually go down, because learning to code requires a lot of energy and effort. So for us, in order to lock the user in, it was critical to help the user commit. That's why we were offering discounted subscriptions at the beginning of the journey, which was also having a negative impact on the lifetime values overall.

So of course, if we increased the trial opt-in rates, then the distribution of purchases happening at full price versus the discounted price would get healthier. This would have a high impact on our LTVs. This is exactly what happened, and it helped us a lot.
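The mix-shift effect described here is simple weighted-average arithmetic. A hedged sketch, with hypothetical prices rather than Mimo's real ones:

```python
# Hypothetical numbers: moving the purchase mix away from discounted
# subscriptions raises the blended average revenue per subscriber.

FULL_PRICE = 80.0   # yearly revenue from a full-price subscriber
DISCOUNTED = 40.0   # yearly revenue from a discounted subscriber

def blended_arps(full_price_share):
    """Average revenue per subscriber for a given full-price share."""
    return full_price_share * FULL_PRICE + (1 - full_price_share) * DISCOUNTED

print(blended_arps(0.30))  # mostly discounted purchases -> 52.0
print(blended_arps(0.60))  # healthier mix after higher trial opt-ins -> 64.0
```

So with these made-up prices, shifting the full-price share from 30% to 60% lifts blended revenue per subscriber by about 23%, which is the LTV mechanism the quote describes.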

Shamanth 

With the trial screen, what were some of the variables that you guys tested? 

Ekaterina

The key metrics which we were aiming to improve were, first, the trial opt-in rate, and then also the trial cancellation rate. I think we're not that different from other apps when it comes to cancellations, and about half of them happen within the first 24 hours. But I can start with the trial opt-in rate. Here the successful experiments were probably similar for many apps in the industry, such as testing out the Blinkist-inspired screen with the timeline.

Shamanth 

Can you describe that for people who don’t know?

Ekaterina 

They get the trial opt-in screen in this case, which explains how the trial works and where the app pretty much also commits to sending the user a push notification or an email or both, before the trial expires to decrease this trial anxiety. 

Since we are similar to Blinkist and I believe many other apps,

we were receiving a lot of user feedback and reviews where users were not very happy with the trial opt-in overall; they had very little trust in it. A lot of reviews talked about developers capitalizing on users who forget to cancel, so it's quite a barrier for many users to even start the trial. The way the screen works is that it describes how exactly the trial period works and how long it is, and mentions that the user will get a notification or an email before the trial expires so that they can cancel it if they don't find the pro features valuable. We did exactly that. This resulted in around a 100% increase in the trial opt-in rates and also decreased our cancellations within the first 24 hours by 25%, which was huge for us.

Shamanth 

Even if people forget to cancel, there's only a short-term revenue bump, because eventually they will remember and cancel. A 100% improvement up front more than makes up for any revenue you would have gotten from people who forget.

Ekaterina 

Of course, after sending the reminders to the users, the push notifications and the emails, we do also see a slight increase in the number of users who cancel. But this is definitely a much more ethical approach to the trial practice as well, and for us, in terms of numbers, it definitely paid off. I heard feedback from some other folks in the industry for whom this experiment did not bring that significant an impact, or maybe a small marginal improvement which they believed would disappear over time anyway. But for us, this change was very successful.

Shamanth 

Definitely. I know you mentioned cancellations and obviously, that can be a big opportunity in reducing cancellations. Can you describe what the user’s state of mind is that typically leads to cancellations? What have been some of the things you guys have done to combat cancellations? 

Ekaterina 

We have a trial cancellation survey, which we send out to users and also show in the app after they cancel the trial. This helped us come up with the first hypotheses for improving these rates. For us, the main reason that led to cancellation was the price. This got a little more intense this year, because we see more and more users cancel specifically for this reason. Before, there were only specific segments which would cancel because the price was too high.

Right now we see it across both platforms and across different user segments. Another reason why they would cancel is that they might not find the pro features valuable enough. Here I have to admit, we offer a lot of content and a lot of features for free, and you can pretty much learn to code and go through everything.

But we only provide the pro features for users who want to learn faster and more efficiently, without distractions. At the same time, not every typical user necessarily needs that. So a lot of the cancellations were happening because users would be perfectly fine to keep using the app for free and would not necessarily find the need to upgrade. There are also multiple ways to tackle that.

But a lot of them would be borderline unethical, for example, increasing the ad load, to add more distractions to the user experience, which contradicts our company’s mission of making coding accessible. We could always lock more content behind the paywalls, which would also not be in line with what we’re actually aiming to achieve. That’s why we are regularly doing a lot of user interviews to figure out what would actually be the most valuable set of features for the users. 

There were different experiments that we did for tackling trial churn and also subscriber churn. For the trials, the first thing we noticed is that when users opted in for the trial, the flow of 10 screens was working great, and users clearly understood how the trial works. But it was not clear to them why they would upgrade and what the benefits of having a pro subscription would be. This was because a big part of the benefits for the user was missing.

After that we started experimenting with benefits, in a personalized way based on the user's answers during onboarding. This helped us decrease trial cancellations by around 10%. On top of explaining how the trial works, we also started to add different parts to the layout explaining why you would need to upgrade and what the main benefit for the user would be.

We also experimented a lot later with the layout: how to show these benefits and what exactly to show in this flow and on the paywall, because there is huge space for experimentation there. You can have the list of features, then the explanation of how the trial works, then some social proof. You can have the comparison of the basic plan and of your pro subscriptions.

The big question is how exactly to figure out what is worth including in the flow, and whether it should be one long scrollable screen or whether it should all be broken down into separate screens. So we experimented quite a bit with that. Currently, the multi-screen version works very well. It led to around a 60% improvement in the trial opt-in rates and also decreased our cancellation rates by a lot, because even at the opt-in stage users would already clearly understand the benefits of opting in for a trial of a pro subscription, and also how it is different from the basic version.

Having all of this explained at such an early stage helped a lot to decrease the cancellations as well. With the price, we started differentiating it based on user segments. It's a fairly low-risk strategy: unless your distribution of users significantly changes, the choice is between this particular segment not being properly monetized at all, with all of them not even opting in for the trial or canceling it, or technically having a chance to resurrect such subscribers.

Which means that for some segments we were offering 50% off. If they cancel the trial, we'll then try to remind them of the value of the pro features, explain that they're going to lose access to those, and offer them to re-subscribe with the 50% offer, which also works pretty well.

For subscriber churn, from the user interviews, which we did quite a lot of,

it was kind of clear that a lot of users were under the impression that the content might not be deep enough, or there might not be enough of it for the whole year. That's why they would cancel the subscription pretty fast at the beginning of the journey. The only way to tackle this, as I saw it, was value nurturing, which means constant reminders of the new content; we just were not communicating it properly. Showing in-app messages, sending push notifications, and even setting up in-app events came in quite handy for promoting the big updates, because otherwise the users were under the impression that literally nothing is changing, happening or improving.

Shamanth 

To switch gears a bit: for many apps, localization isn't a priority, because English-language audiences and their monetization potential can be huge. Anything outside of that you look at through a cost-benefit lens, and it's oftentimes not worth it.

How did you guys decide to prioritize localization and what is the impact of localizing?

Ekaterina 

When it comes to monetization potential, there are improvements that you can get out of localizing the product. However, I would say that better monetization is just one of the very many aspects based on which I would make and prioritize strategic initiatives. For us, again, it was a lot about the users and about our mission. When you see day after day that having the app in a particular language is the number one request from a huge share of your daily and monthly active users, then, naturally, it becomes obvious that this is a huge barrier for them.

Learning to code is not easy already. With this extra barrier on top of all the other misconceptions, various users feel hesitant to even try it out. So I would say this decision was based to a large extent on our users. Another big aspect of it was not so much about monetization, but about better retention and better user engagement in the app, because you could clearly see that users whose devices were set to English had much better engagement metrics, which would also lead to higher monetization. So in this case the main hypothesis was that localization would help us improve retention, which in turn would be a driver for growth and better monetization.

This is exactly what happened: our retention improved in the short term and the long term. Day 1, day 7 and day 14 retention improved, depending on the country, from 25 to 100%. Naturally, the improvement was much higher for markets where the adoption of English is lower. In turn, this helped a lot with our referral scheme, our slightly incentivized referral loop, so to say. There were improvements of between 100 and 170% in the number of users who started to share the link to the app with friends. On top of this, when it comes to monetization specifically, depending on the market, there was between a 30 and 50% increase in the trial opt-in rates as well.

Also in purchase rates, of course, following the higher trial opt-in rates, which basically means that it worked out pretty well. 

We did not start with all the markets and all the languages at once. We started testing with a couple of languages first, to do the cost-benefit analysis. I have to mention here that localization is of course not necessary for all languages.

That's why I would suggest always looking not only at the qualitative insights from user feedback, but also at the usage data, because what we saw was that users whose devices were set to German or French were requesting localization a lot; they would be very vocal about it in user reviews, giving us low ratings for not having one.

But on the other hand, the way they were behaving in the app, the way they were using it, and the way they were purchasing subscriptions were pretty much in line with the English-speaking users who were consuming the content in English. There were also other aspects, like better ratings and reviews, which have also improved since we started rolling localizations out.

Shamanth 

There’s a halo effect that cascades down once you localize it and customize it for specific audiences, specific countries, and geos. Definitely, that makes sense. 

Something else you mentioned, the last time we spoke was about your content-sharing loop. Can you speak to what the genesis of the content-sharing loop was and how you implemented this and what did the results look like?

Ekaterina 

Yeah, of course. Our app overall uses a lot of different gamification techniques, because we want users to build a habit of learning to code, a way to get into it, and the confidence that everyone can learn to code. As we are an educational product, it felt like there was a great product-feature fit in enabling users to share their progress and successes on social media along their coding journey. This is not something that we had up to a year ago. This was also the time when Meta rolled out the feature for sharing Instagram stories and Facebook stories.

We pretty much jumped onto it, because it really felt like a good product-feature fit and a good time for it. The implementation was fairly straightforward; I think it was a medium-sized task for the design and development teams. We added the buttons and the possibility for users to share their results as they progress, on the screens where we felt users are having their aha! moments and getting reminders that they're making progress.

For example, the continuing-the-streak screens, moving up to a higher level, or successfully completing a challenge. At the end of 2021 we also did a yearly wrap, like Spotify, where we were wrapping up the progress the user had made over the year. In the end, even though it took some time, the feature has a lot of traction now, because we kept adding different use cases. At the moment, I think around 10,000 users a month share their progress.

From the absolute numbers perspective, we would have maybe 0.2% of our daily active users doing this. But when your daily active users are close to six figures, that makes it quite a significant amount. The good thing about it is that as you keep adding different cases, more sharing opportunities, it starts working in a very sustainable way, which also helps with virality. Overall, the impact of it is not so easy to measure, but since we started rolling it out there has been a huge increase in the number of organic brand searches. As far as I know, there is a 25 to 30% increase in users searching for our brand on the stores these days.

I also believe that recently it became possible to track the downloads that are actually happening from the stories which have been shared, which makes it much easier to estimate the impact.

Shamanth 

How are you tracking the store brand searches? 

Ekaterina

Back then, we were looking at the Google Play stats: pretty much the top branded searches and how they have changed since we implemented the feature. It's not easy to do in recent months, but until autumn it was possible.

Shamanth 

Ekaterina, this has been incredible. It's great to see the plethora of options that are available, the opportunities for testing and improving the LTV of an app so dramatically. There are so many levers that aren't quite obvious, but you have described them so clearly and so well.

This is perhaps a good place for us to wrap. But before we do that, can you tell folks about how they can find out more about you and everything?

Ekaterina 

Yes, of course. I'm on LinkedIn; I think that is where I share the most. As you also mentioned in the beginning, I was not going to conferences much in recent years. I believe nobody was, because of the pandemic. But I'm planning to change that in the upcoming year. For now, I am always super happy to connect, exchange knowledge and experience, and share learnings.

Shamanth 

Wonderful. With that, we'll wrap up for today. Thank you so much, Ekaterina.

Ekaterina 

Thank you for having me.

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don't do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.