
Today’s episode is an extract from our recent book ‘How to Implement MMM to Measure Post-ATT iOS Performance: A Practical Guide.’

We talk about 5 of the common mistakes marketers make – and ways to address them.

If you’d like to dive into the practical implementation of MMMs (even if you have no coding experience), check out our book ‘How to Implement MMM to Measure Post-ATT iOS Performance: A Practical Guide.’

Link here: https://www.rocketshiphq.com/playbooks/mmm-for-post-att-performance/

***





ABOUT ROCKETSHIP HQ: Website | LinkedIn  | Twitter | YouTube


FULL TRANSCRIPT BELOW

Today’s episode is an extract from our recent book ‘How to Implement MMM to Measure Post-ATT iOS Performance: A Practical Guide.’

While MMM can provide valuable insights, it’s easy to make mistakes that can skew results and lead to misguided strategies. 

Here are some of the most common pitfalls mobile app marketers fall into when using MMM:

1. You’ve taken ‘all revenue or events’ as the dependent variable (rather than early revenue such as D1/D7 revenue).

One of the most common mistakes is using ‘all revenue’ as the dependent variable in the model. This approach can be misleading as it doesn’t differentiate between early revenue (like Day 1 or Day 7 revenue) and long-term revenue. 

Early revenue metrics are often more indicative of the immediate impact of a marketing campaign, while long-term revenue can be influenced by numerous other factors like live ops, retention and CRM efforts. 

By focusing on early revenue, marketers can get a clearer picture of the effectiveness and ROI of their marketing campaigns.

(Note: it is definitely possible to build a model with all revenue as the dependent variable – but that tends to be more complex than having early revenue as the dependent variable. So start with the easy stuff before the hard stuff. :))
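To make this concrete, here is a minimal sketch of how you might construct an early-revenue dependent variable from purchase data. The table and column names (install_date, purchase_date, revenue) are illustrative assumptions, not a prescribed schema:

```python
import pandas as pd

# Hypothetical data: one row per purchase, tagged with the user's install date.
# All column names here are illustrative assumptions.
purchases = pd.DataFrame({
    "install_date": pd.to_datetime(["2024-01-01", "2024-01-01",
                                    "2024-01-02", "2024-01-02"]),
    "purchase_date": pd.to_datetime(["2024-01-03", "2024-01-20",
                                     "2024-01-04", "2024-01-05"]),
    "revenue": [10.0, 50.0, 5.0, 8.0],
})

# Keep only revenue earned within the first 7 days after install.
days_since_install = (purchases["purchase_date"]
                      - purchases["install_date"]).dt.days
d7 = purchases[days_since_install <= 7]

# D7 revenue per install cohort: this series becomes the model's
# dependent variable, instead of all-time revenue.
d7_revenue = d7.groupby("install_date")["revenue"].sum()
print(d7_revenue)
```

Note how the $50 purchase on day 19 is excluded: it belongs to long-term revenue, which is shaped by retention and live ops rather than the original campaign.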

2. You’ve not accounted for all sources/variables.

Another frequent oversight is not accounting for all potential sources or variables that can influence marketing performance. For instance, organic traffic, word-of-mouth referrals, or even external factors like seasonality can play a significant role in an app’s success. 

This may seem obvious – but many marketers just take paid media sources (this happened to us too – we were scratching our heads about results that should have made sense but didn’t – until we found out there had been a huge influencer marketing campaign that we hadn’t been told about, and hadn’t accounted for).

Ignoring these variables can lead to an incomplete or skewed understanding of what’s driving performance. It’s essential to ensure that all potential influencing factors are included in the model to get a holistic view.
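As a sketch of what “including all influencing factors” can look like in practice, here is an illustrative design matrix that adds organic traffic, a simple seasonality proxy, and a dummy for a known external event alongside paid media. All column names and values are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical weekly data; all column names are illustrative assumptions.
df = pd.DataFrame({
    "week": pd.date_range("2024-01-01", periods=8, freq="W"),
    "facebook_spend": [100, 120, 90, 110, 130, 95, 105, 115],
    "google_spend": [80, 85, 70, 90, 95, 75, 88, 92],
    "d7_revenue": [500, 560, 480, 540, 600, 470, 520, 555],
})

# Controls that are easy to forget: organic installs, seasonality,
# and one-off external shocks.
df["organic_installs"] = [300, 310, 290, 305, 320, 285, 300, 315]

# Simple seasonality proxy: week-of-year encoded as a cyclic feature.
week_num = df["week"].dt.isocalendar().week.astype(float)
df["season_sin"] = np.sin(2 * np.pi * week_num / 52)

# Dummy for a known external event, e.g. an influencer campaign.
df["influencer_campaign"] = [0, 0, 0, 1, 1, 0, 0, 0]

# The model should see paid media AND these controls as independent variables.
X = df[["facebook_spend", "google_spend", "organic_installs",
        "season_sin", "influencer_campaign"]]
print(X.shape)  # 8 weeks, 5 independent variables
```

The exact controls will differ per app – the point is that the independent-variable set should reflect everything plausibly driving the dependent variable, not just paid spend.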

3. Treating sources as single monolithic entities when they are not.

If a source is very large – or has distinct components – it might make sense to treat it as two sub-sources. If you treat Apple Search brand and non-brand campaigns together as a single ‘source’ – and Facebook web-to-app and Facebook SKAN as a single ‘source’ – you might come up with muddy conclusions, as brand and non-brand behave very differently, as do web and SKAN.

So: when necessary, use your judgment to figure out what each ‘source’ or independent variable is.
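The split itself is usually a simple mapping from campaign metadata to sub-sources. Here is an illustrative sketch, assuming hypothetical channel and campaign naming conventions:

```python
import pandas as pd

# Hypothetical spend rows tagged with channel and campaign name.
# Naming conventions here are assumptions for illustration.
spend = pd.DataFrame({
    "channel": ["apple_search", "apple_search", "facebook", "facebook"],
    "campaign": ["brand_exact", "generic_keywords",
                 "web_to_app_q1", "skan_broad"],
    "spend": [40.0, 60.0, 100.0, 120.0],
})

def split_source(row):
    """Split monolithic channels into sub-sources that behave differently."""
    if row["channel"] == "apple_search":
        return ("apple_search_brand" if "brand" in row["campaign"]
                else "apple_search_nonbrand")
    if row["channel"] == "facebook":
        return ("facebook_web" if "web" in row["campaign"]
                else "facebook_skan")
    return row["channel"]

spend["source"] = spend.apply(split_source, axis=1)
print(spend.groupby("source")["spend"].sum())
```

Each resulting sub-source then enters the model as its own independent variable.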

4. You’ve not backtested past performance.

Another common mistake is failing to check how well the model predicted past performance.


Look at the actual historical dependent variables and compare them with what the models predicted. If these numbers look off, your model may not be very accurate – in these cases, you might want to remove outliers or periods with poor accuracy, and rerun your models.
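A simple way to quantify this comparison is mean absolute percentage error (MAPE) on a holdout period. The numbers and the ~15% threshold below are illustrative assumptions – calibrate the tolerance to your own business:

```python
import numpy as np

# Hypothetical holdout: actual vs model-predicted D7 revenue per week.
actual = np.array([500.0, 560.0, 480.0, 540.0])
predicted = np.array([520.0, 530.0, 500.0, 610.0])

# Mean absolute percentage error: a simple backtest accuracy check.
mape = np.mean(np.abs((actual - predicted) / actual)) * 100
print(f"MAPE: {mape:.1f}%")

# Rough rule of thumb (an assumption, not a standard): flag models whose
# backtest MAPE exceeds ~15% and investigate the worst periods.
if mape > 15:
    print("Model accuracy looks poor - inspect outliers and rerun.")
```

Robyn itself reports fit diagnostics (e.g. NRMSE) as part of its output; a manual backtest like this is a useful sanity check on top.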

5. You don’t have a critical mass of data per geo.

Many marketers make the mistake of not having enough data for each geographical region they operate in. Oftentimes the spends, installs or revenues are too small for your models to produce reliable results.

In these cases, you want to ensure that you only run models for geos where there is enough data.
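A minimal sketch of this filter, with hypothetical geos and thresholds (the minimums are assumptions to calibrate against your own data, not recommended values):

```python
import pandas as pd

# Hypothetical per-geo totals over the modeling window.
geo = pd.DataFrame({
    "geo": ["US", "UK", "DE", "BR", "NZ"],
    "total_spend": [250_000, 60_000, 45_000, 8_000, 1_200],
    "installs": [120_000, 30_000, 22_000, 9_000, 600],
})

# Assumed minimums - tune these to your own data and noise tolerance.
MIN_SPEND = 20_000
MIN_INSTALLS = 10_000

# Only build models for geos with a critical mass of data.
eligible = geo[(geo["total_spend"] >= MIN_SPEND)
               & (geo["installs"] >= MIN_INSTALLS)]
print(eligible["geo"].tolist())
```

Smaller geos can often be pooled into a combined “rest of world” model rather than dropped entirely.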

***

In conclusion, while Media Mix Models can be a powerful tool for mobile app marketers, it’s essential to be aware of these common pitfalls. By reviewing and addressing these, marketers can make the most of their MMMs.

And if you want to dive deeper into how you can do a practical MMM implementation using Robyn (Facebook’s open-source MMM package), check out our book ‘How to Implement MMM to Measure Post-ATT iOS Performance: A Practical Guide.’

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, or wherever you get your podcast fix? This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth@rocketshiphq.com. We read all the reviews, and we want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A GENERATIVE AI WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.