Our guest today is Aayush Sakya, Head of UA at Earnin.
Aayush has over 10 years of experience in growth marketing and has managed growth leading to 2 unicorn exits in his career.
He’s one of the most experienced of the cohort of early mobile growth marketers that I’ve known. We’re excited to have him.
In this episode, we chat about measurement post-ATT, geo-lift tests, SKAN, and much more.
ABOUT AAYUSH: Linkedin | EarnIn
ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube
KEY HIGHLIGHTS
🤔How should digital marketers think about measurement post-ATT?
🔨Structuring geo-hold out tests.
📏Channels to measure impact from geo-holdout tests.
💯Evaluating results to understand the true impact of SKAN campaigns.
🔥Understanding the impact of multiple channels on performance
📈How should a marketer review performance of creative post-ATT?
😎Managing expectations and reporting with incomplete data post-ATT.
Shamanth
I’m very excited to welcome Aayush Sakya to the Mobile User Acquisition Show.
I’m grateful to have you, Aayush because you entered mobile in 2013-2014.
There’s a lot from your experience, from the time pre-ATT to today that I’d like to share with our listeners.
To get started, how do you recommend that marketers think about measurement post-ATT overall at a high level?
Aayush
Thank you Shamanth.
I would say that marketers should consider post-ATT measurement much the way they did pre-ATT.
The focus remains on driving incremental performance, measuring it, and sharing data with stakeholders: generally the finance team, the product team, the CEO, and so on.
Before ATT, device identifiers enabled tracking on both Android and iOS, and the last-click attribution model was prevalent. However, it favored channels like Google.
For instance, if someone saw an ad on TikTok, then later used Google to find the product, Google got the credit, even if its contribution was minimal. This model was chosen for its simplicity, despite its flaws.
With ATT, device-level information vanished on iOS. Aggregate metrics and the SKAN framework replaced it, dropping data for various reasons.
Marketers adapted to close the data gap post-ATT and enhance ad spend impact certainty. The focus has always been on proving the incrementality of their performance.
A reliable method is a geo-lift test, halting ads in specific regions to measure true impact.
Back when we collaborated at a gaming company, we were lucky to have a huge portfolio of games and to be advertising in many different countries.
We would just stop advertising in Britain and a few countries in Europe to see what the impact was there versus in other countries, and thereby measure the incrementality of our advertising dollars.
Shamanth
Yeah. 100%, even pre-ATT you have to be focused on incrementality, not just believe what your measurement system tells you.
And how do you recommend that folks structure geo holdout tests?
Aayush
Starting with the geo holdout test’s goal, there are two main objectives. The first is proving that a channel is effective in driving incremental traffic or customers. The second is using the geo lift test’s insights to feed back into your current measurement model, like SKAN, if applicable.
If the test shows SKAN underreporting metrics, say, installs by 30% and paid customers by 50%, establish a multiplier you can apply day to day, which is ideal for daily performance measurement.
Running frequent geo tests is costly, so collaborating with an analyst is key. Analyze customer distribution across regions to ensure a sample that is representative yet non-disruptive.
Startups looking to grow generally don’t like pausing advertising in large markets. But you want to make sure that you are lowering spend by a large enough amount that you get a good read in those markets and can measure the lift, because you don’t want to come out of the test having learned nothing.
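The evaluation Aayush describes can be sketched roughly as follows: use the pre-test relationship between the holdout and control regions to project a counterfactual for the holdout, then compare it against what actually happened with ads paused. This is a minimal illustration, and all region counts below are hypothetical.

```python
# Minimal sketch of evaluating a geo-holdout test.
# Inputs are daily install counts; all numbers are hypothetical.

def geo_lift(pre_holdout, pre_control, test_holdout, test_control):
    """Estimate incremental installs attributable to ads in the holdout geos.

    Uses the pre-period ratio between holdout and control regions to
    project a counterfactual for the holdout during the test window.
    """
    ratio = sum(pre_holdout) / sum(pre_control)   # baseline relationship
    expected = ratio * sum(test_control)          # counterfactual installs (ads on)
    observed = sum(test_holdout)                  # actual installs (ads paused)
    incremental = expected - observed             # installs attributable to ads
    return incremental, incremental / expected

inc, lift = geo_lift(
    pre_holdout=[100, 110, 105], pre_control=[200, 220, 210],
    test_holdout=[60, 65, 62],   test_control=[205, 215, 208],
)
print(f"incremental installs: {inc:.0f}, lift: {lift:.0%}")
# → incremental installs: 127, lift: 40%
```

A real analysis would use longer pre- and test-windows, multiple control regions, and a significance check, but the core comparison is this counterfactual-versus-observed gap.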
Shamanth
Yeah, you’re right. You don’t want to come away without learning anything, because you are pausing a lot of spend, which in turn translates to a lot of revenue. So you do want to be very careful and intentional about how you structure these tests.
And what channels do you recommend that these be run on?
Aayush
Focus on high-spending channels. Starting with larger channels provides clearer performance insights. With small channels, it’s difficult to get a read on geo testing.
Shamanth
So obviously, with SKAN, there are additional complications with measurement.
So how do you typically evaluate the results of these geo holdout tests? And how does that tie in with how SKAN is structured and architected?
Aayush
Regarding SKAN campaigns on iOS, first understand its mechanics, especially where signals drop: grasp how install and event signals behave, particularly when an app isn’t opened daily.
When using SKAN for iOS, measure with campaigns that have substantial traffic and can drive installs. Avoid splitting campaigns across small publishers, so that each campaign gets sufficient installs. Swift optimization within 24 hours is crucial, which pushed DSPs and programmatic networks to consolidate; these networks struggled with multi-publisher optimization after SKAN’s 2021 launch.
After a test, maximize accuracy by applying a multiplier to SKAN-reported traffic. If SKAN is crediting only half the installs, use the multiplier to restore full credit.
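The multiplier itself is simple arithmetic: the geo test gives you an estimate of true installs, SKAN gives you the reported count, and the ratio scales daily reporting up. A minimal sketch, with all numbers hypothetical:

```python
# Deriving and applying a SKAN underreporting multiplier,
# as described above. All numbers are hypothetical.

def skan_multiplier(true_installs, skan_installs):
    """Ratio that scales SKAN-reported installs up to the
    geo-test-estimated true installs."""
    return true_installs / skan_installs

# Suppose the geo test estimates ads truly drove ~1000 installs,
# while SKAN reported only 700 (a 30% drop):
m = skan_multiplier(true_installs=1000, skan_installs=700)

# Day to day, scale each day's SKAN numbers by the multiplier:
daily_skan = [68, 72, 65]
adjusted = [round(x * m) for x in daily_skan]
print(f"multiplier: {m:.2f}, adjusted installs: {adjusted}")
# → multiplier: 1.43, adjusted installs: [97, 103, 93]
```

In practice you would keep separate multipliers per metric (installs versus paid customers, which Aayush notes can drop at different rates) and refresh them with each new geo test.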
Shamanth
So you want to understand how far your reported metrics are off from the true metrics. And obviously, this gets a lot more complicated if you are running multiple channels.
When you and I worked together, we were on 15-20 channels at a time and there’s been consolidation since then.
But there are still a lot of channels that are possible to run on.
How do you understand the impact of multiple channels, given all of what we just talked about?
Aayush
Life would be really simple if you were just running Facebook and Google. If you’re handling smaller budgets, like $200k or $500k monthly, simplicity lies in focusing on Facebook and Google, which often constitute 60-90% of spend.
For the rest of us who are launching seven, or eight channels, this is a very challenging question.
Yet it’s not just about one channel’s impact: channels often work together to influence user behavior. To grasp the effect of multiple channels, opt for marketing mix modeling (MMM).
This involves correlational studies on variables like spend on each channel versus outcomes like paid customers. A data scientist can help uncover the impact of adjusting multiple channels’ spend together or individually.
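At its simplest, the correlational study Aayush describes is a regression of an outcome on per-channel spend. The sketch below uses ordinary least squares with a toy dataset that is entirely hypothetical (and constructed to fit exactly for clarity); a production MMM would add adstock decay, saturation curves, and seasonality.

```python
import numpy as np

# Toy MMM sketch: regress weekly paid customers on per-channel spend.
# Columns of X: intercept, Facebook spend ($k), TikTok spend ($k).
# All data are hypothetical and lie exactly on a plane for clarity.
X = np.array([[1, 10,  5],
              [1, 20,  5],
              [1, 10, 15],
              [1, 30, 10],
              [1, 25, 20]], dtype=float)
y = np.array([120, 180, 160, 260, 270], dtype=float)  # weekly paid customers

# Ordinary least squares: beta minimizes ||X @ beta - y||
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline, fb_coef, tt_coef = beta

print(f"baseline={baseline:.0f}, facebook={fb_coef:.1f}, "
      f"tiktok={tt_coef:.1f} customers per $k")
# → baseline=40, facebook=6.0, tiktok=4.0 customers per $k
```

The coefficients are read as the marginal customers gained per extra $1k on each channel, holding the others fixed; a data scientist would then stress-test these against holdout periods or geo tests before trusting them for budget shifts.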
Shamanth
Yeah, I think MMM is something we’ve tested quite a bit.
What do you recommend with regard to the tactical execution of MMM? Do you recommend building something in-house using some of the open-source tools?
Aayush
The approach depends on your in-house expertise. Few companies have the resources for data scientists and analysis. If lacking such resources, agencies offering these services are valuable.
Google and Facebook offer services for MMM, but it requires significant effort and understanding. Recognize the work involved, and bring in data science expertise and even data engineers for a proper setup.
Shamanth
Yeah, it’s not a trivial exercise by any means.
How do you recommend thinking about the performance of creative and measuring it post-ATT?
Aayush
Creatives are vital, and creative performance is critical for marketing success. Post-ATT, two approaches stand out.
First, test creatives on Android and apply insights to iOS. Android often mirrors iOS traffic, serving as a solid proxy.
Second, if for some reason you don’t have an Android app, or you think the Android traffic is very different, you can still use creative data from SKAN.
We do look at creative data on Facebook and TikTok, for example, to optimize which is useful.
Shamanth
Yeah, there’s the tactical measurement that is important to solve. But I think what we’re facing post-ATT is also very much an organizational and team-level challenge.
What do you recommend, in terms of managing expectations and ensuring this alignment in the team around the incomplete data and the measurement challenges that we are all seeing?
Aayush
Building trust is vital in the industry. Marketers must foster it through transparency and honesty. Instead of feigning certainty, acknowledge confidence levels. Disclose gaps, and over time, close these gaps with the suggested strategies I have discussed so far.
Detailed reports to stakeholders, especially finance teams, build trust. A neutral analytics team prevents doubts of manipulation. Educating stakeholders about reporting deepens trust.
Weekly meetings or extended discussions align everyone for company success. Trust is critical for scaling, because otherwise everyone will be questioning whether the marketing team is really doing its job.
Shamanth
Indeed, navigating marketing, analytics, finance, and product can feel like different languages. Trust-building is crucial due to concerns, especially post-ATT projections. Emphasize clarity and acknowledge the data’s incompleteness. Building trust takes time and patience.
Aayush, this has been great. You’ve been very insightful. Can you tell folks how they can find out more about you and everything you do?
Aayush
I work at Earnin, a first-of-its-kind fintech company built entirely around on-demand access to your earnings. We are about giving customers access to money that they’ve already earned, at the speed that they have earned it.
Since Earnin was founded in 2013, 3.8 million customers have accessed over $15 billion in earnings through it. Check out the Earnin app.
Shamanth
Excellent. Thank you for that. Anything about your work that you’d like to share and how people can connect with you or find out more about you?
Aayush
You can reach out to me on LinkedIn. If anyone has any questions, I’d be happy to share my knowledge of the marketing industry.
Shamanth
Wonderful. We’ll link to your LinkedIn and your knowledge and insights from more than a decade of working on mobile.
Thank you again for being on the show.
Aayush
Thank you for having me.
A REQUEST BEFORE YOU GO
I have a very important favor to ask, which as those of you who know me know I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.
Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms or by email to shamanth@rocketshiphq.com. I read all reviews, and I want to make this podcast better.
Thank you – and I look forward to seeing you with the next episode!