In the previous two parts of this series, we saw how to install Robyn and run the demo code. We also looked at the key components of the environment, as well as of the code itself.
In today’s episode, we take a look at the results obtained from running the demo code on Robyn. Our Senior Mobile Marketing Manager Virendra Shekhawat will show us the outputs of the models and the information we can get from the different plots.
The video is here: https://youtu.be/vbfVL3jXSQo
In case you missed them, here are the first two parts of the series:
Part 1: https://youtu.be/3RnDfC1usZc
Part 2: https://youtu.be/rsn12wFqjlw
Mobile Growth Lab
We launched the Mobile Growth Lab, where over 60 marketers, executives, product managers and developers signed up to break the shackles of ATT’s performance and measurement losses. While the premium and executive editions are closed for registration, you can get access to the recorded versions of these sessions through our self-serve plan.
Check it out here: https://mobilegrowthlab.com/
ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube
FULL TRANSCRIPT:
In the previous two parts of this series, we saw how to install Robyn and run the demo code. We also looked at the key components of the environment, as well as of the code itself.
In today’s episode, we take a look at the results obtained from running the demo code on Robyn. In the YouTube video that accompanies this episode, Virendra Shekhawat, our Senior Mobile Marketing Manager, walks through the results.
The output of the code is a folder that contains a set of one-pagers, each of which is a set of plots. Which one should you pick? You look at the accuracy of each model (in other words, compare its predictions against the actual metrics from the past) and pick the corresponding one-pager.
Each one-pager has a set of plots. We’ll talk about the most important ones to look at if you are running an MMM on your marketing metrics. One of the most critical is the share of spend vs. share of effect plot.
This plot compares the share of spend of each of your marketing channels in the past with the ‘true’ impact of each of them. Based on historical variations, how much revenue, how many trials or purchases (or whatever dependent variable you have) did each drive? If a channel absorbed a lot of spend but drove little revenue, or took little spend and drove high revenue, you see it in this plot. You also see the ‘true’ CPA here.
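If it helps to make this concrete, here is a minimal Python sketch of how share of spend, share of effect and the ‘true’ CPA relate to each other. The channel names and numbers are entirely made up for illustration; in practice these values come from the decomposition in your chosen one-pager.

```python
import pandas as pd

# Hypothetical per-channel spend and model-attributed conversions.
df = pd.DataFrame({
    "channel":     ["facebook", "google", "tiktok", "apple_search_ads"],
    "spend":       [120_000, 90_000, 40_000, 30_000],
    "conversions": [3_000, 2_700, 600, 1_200],
})

df["share_of_spend"]  = df["spend"] / df["spend"].sum()
df["share_of_effect"] = df["conversions"] / df["conversions"].sum()
df["true_cpa"]        = df["spend"] / df["conversions"]

# A channel whose share of spend far exceeds its share of effect is a
# candidate for trimming; the reverse suggests room to scale.
print(df[["channel", "share_of_spend", "share_of_effect", "true_cpa"]])
```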
What is also critical is the R-squared, which signifies the accuracy of the model, along with the plot of historical predicted vs. actual metrics. If the model isn’t accurate, there is no point in acting on it. A low R-squared usually signifies that you need more data, over a longer duration.
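To make the accuracy idea concrete, here is a small Python sketch of the standard R-squared calculation on hypothetical weekly revenue numbers (the figures are invented, not taken from the demo):

```python
import numpy as np

# Hypothetical weekly revenue: what actually happened vs. what the model fitted.
actual    = np.array([110.0, 95.0, 130.0, 120.0, 150.0, 140.0])
predicted = np.array([105.0, 100.0, 125.0, 118.0, 155.0, 137.0])

ss_res = np.sum((actual - predicted) ** 2)       # residual sum of squares
ss_tot = np.sum((actual - actual.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"R-squared: {r_squared:.3f}")  # the closer to 1, the better the model tracks reality
```

The predicted vs. actual plot in the one-pager is the visual version of this same comparison.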
Assuming you have a strong R-squared, once you’ve picked one of the models as the best reflection of your business, you enter that model’s ID in the code and run it again to get a plot of the ideal budget allocation. This plot shows you what your current budget distribution is, and what your ideal budget distribution across channels should be.
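For intuition on what the budget allocator is doing under the hood, here is a conceptual Python sketch. This is not Robyn’s implementation: Robyn fits Hill-type saturation curves per channel, whereas this sketch assumes simple exponential-saturation response curves with made-up parameters. The point it illustrates is that the optimizer shifts the same total budget toward channels with more headroom on their response curves:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative response curves: response_i(x) = a_i * (1 - exp(-b_i * x)).
# Channel names, ceilings (a) and diminishing-returns rates (b) are hypothetical.
channels      = ["facebook", "google", "tiktok"]
a             = np.array([5000.0, 4000.0, 1500.0])
b             = np.array([1e-5, 2e-5, 4e-5])
current_spend = np.array([120_000.0, 90_000.0, 40_000.0])
total_budget  = current_spend.sum()

def neg_total_response(x):
    # Negative because scipy minimizes; we want to maximize total response.
    return -np.sum(a * (1 - np.exp(-b * x)))

result = minimize(
    neg_total_response,
    x0=current_spend,
    bounds=[(0, total_budget)] * len(channels),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - total_budget}],
    method="SLSQP",
)

# Compare current allocation with the suggested reallocation of the same budget.
for name, cur, opt in zip(channels, current_spend, result.x):
    print(f"{name:>10}: current {cur:>9,.0f} -> suggested {opt:>9,.0f}")
```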
There are a number of other plots that you will see in your output folder – which, if you are curious, you can dig into – but the 80-20 of these plots is the share of spend vs. effect, the predicted vs. actual, and the budget allocation plot.
These three plots by themselves fill in a significant part of the gaps left behind by ATT, and help you understand the true value of your marketing spend in a post-ATT world.
Don’t forget to check out the video walkthrough that is linked in the show notes. We’ve also linked the first two parts of the series, in case you missed them or would like to revisit them.
The video is here: https://youtu.be/vbfVL3jXSQo
A REQUEST BEFORE YOU GO
I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform, be it iTunes, Overcast, Spotify, Google Podcasts or wherever you get your podcast fix. This podcast is very much a labor of love, and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also help get the word out about the Mobile User Acquisition Show.
Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.
Thank you – and I look forward to seeing you with the next episode!