Our guest for today is Piyush Mishra. Piyush is one of the leads of the Growth Marketing team at Product Madness and is known in the mobile marketing space for his expertise in all things related to SKAN. Piyush also co-hosts a podcast called Level Up UA with Adam Smart.
In this episode, Piyush talks about the evolution of SKAN since its release and how marketers are still trying to figure out measurement around it. He talks about the early days of SKAN, how it's changed – and what is coming in the future. He includes some tips and strategies for dealing with the measurement mess that SKAN has brought in its wake, which we've found incredibly helpful – and we hope you will too.
ABOUT PIYUSH: LinkedIn | Product Madness
Level Up UA Podcast: Apple | Level Up UA Podcast: Spotify
ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube
KEY HIGHLIGHTS
The beginnings: SKAN 1.0
Evolution of SKAN from 2.0 to 3.0
When SKAN gained popularity
Learnings from conversion value testing
The shift towards prediction-based events
Revenue-based vs. non-revenue-based predictions
The reason why probabilistic is sticking around
The challenges in comparing different channel performances
How some advertisers are working with conversion values
Attribution models that are worth looking into
The changes coming with iOS 15
Postbacks and their significance
KEY QUOTES
The humble beginnings of SKAN
If I remember correctly, SKAN 1.0 started with iOS version 13.7 back in 2018 or 2019. And it's true that when it came out, nobody actually paid attention. For most marketers, it was just there; they knew they were getting an install postback. There were no post-install attributes to it at that point in time.
Delving into predictions
One way is where you maximize enough information from the conversion value schema and run a prediction on the basis of conversion value. The other way is doing prediction in the conversion value schema itself.
How to overcome non-revenue events
Non-revenue events are not great predictors, but that's where we have to be smart. Maybe we can have a mix of events – for example, somebody reaching the lobby, viewing an ad, and doing a hundred spins. All three combinations together. Then you set that as a conversion value schema and see if it can become a better predictor of the likelihood of a purchase.
Comparing performance between channels
You're trying to compare two different attribution solutions in a certain sense. One is the MMPs' solution and the other is Apple's solution. From the data that we're receiving, like re-download and other stuff, Apple's solution is technically based on Apple ID. And probabilistic is based on, as you said, the signals that you receive, and it's by nature probabilistic, as the word goes. It wouldn't be fair to compare the two, because both have their own pros and cons. For instance, the kind of post-install data that the MMP solution or the probabilistic solution gives you is much higher than what Apple gives.
Making use of SKAN to benefit from it
Some of the advertisers in my own network have figured it out. For example, one of them is using conversion value schema purely to extract install date. Some others are using it to do prediction within the first 24 hours. A few of them actually extended the conversion window to 48 and 72 hours.
Working from what we don’t want
We'll have to take a call on what kind of users we don't want any information on. For example, if I want to extend my conversion window to 48 hours, am I happy with just having information about your opening the app on day two or day three?
The disadvantage of direct postbacks
The second problem with receiving the direct postback is that we don't have the matching key. We do receive the raw report campaign ID and ad network ID, but we don't have the key to match it to an ad network name or an ad campaign name, because that exists only with the networks.
What’s included in the postback?
For example, Facebook and Google are not sending re-download percentage in the postback we receive directly. But since we are getting it directly, we can still have a directional metric stating that re-download percentage on Google is around 20% or something like that. Google was also converting all conversion values left blank due to the privacy threshold to zero, because in their head these were at least installs. But now we will know the ROAS percentage in Google, where we know what percent of the ROAS has a null conversion value because of the privacy threshold. So it has a lot more information, especially on SRNs, and that's a welcome change.
Postbacks are better off with advertisers than ad networks
If the postbacks were going to ad networks, they'll definitely know that the last updated conversion value can be fudged and changed to something that they need to optimize towards, and they can play with it. But with iOS 15, we have the control as advertisers, and that's a very good change.
Data reaches us from two sources
You have to keep in mind that you have two different sources providing the same data. One is for users on iOS 15 and above, coming directly to advertisers or to MMPs, and the other is going to the ad network and then coming to the MMPs and then to us. So it's similar data, but it's a little hidden and somewhat controlled by the individual networks.
Data that’s available through the different versions of SKAN
You have users who are converting through 2.0, where you're getting only click-through conversion. Then there's inventory on 2.2, where you're getting view-through and click-through conversion. And then SKAN 3.0, where you're getting multiple data points on the last bid and who won and who didn't and who was the assistant installer and so on. And on top of that, you're getting direct postbacks for iOS 15.
What SANs may be doing under the hood
Let's say they're creating multiple campaign IDs on Facebook, which we know to be true. But how would you associate it back? I don't know which campaign ID stands for which campaign, even within their own words. So you can't extract it unless you have the key. And that's the limitation.
FULL TRANSCRIPT BELOW
Shamanth
I am very excited to welcome Piyush Mishra to the Mobile User Acquisition Show. Piyush, welcome to the show.
Piyush Mishra
Thank you, Shamanth, it's a pleasure to be here. Thanks for the invitation.
Shamanth
Well, you're among the people who have been at the forefront of thinking about and also actioning all things SKAN and iOS, and you have a podcast of your own. So I'm thrilled to have you here today to talk about SKAN, which is very top-of-mind for a lot of mobile marketers.
Let's start at the beginning. SKAN 1.0 came out a long time before ATT. And even though I knew it was there, I didn't really pay any attention to it, just like a lot of marketers. So tell us what SKAN 1.0 was like.
Piyush
If I remember correctly, SKAN 1.0 started with iOS version 13.7 back in 2018 or 2019. And it's true that when it came out, nobody actually paid attention. For most marketers, it was just there; they knew they were getting an install postback. There were no post-install attributes to it at that point in time.
Plus, there was a huge delay in terms of receiving conversion values – around 48 to 72 hours.
Now, that was an IDFA world. And when you look at it in hindsight, you can see that there were messages from Apple throughout saying that it might not be the world we'd all step into. But at that point in time, people didn't pay attention because it wasn't a big thing. It was just provided as install postbacks.
Shamanth
And I can see why a lot of marketers would just disregard it, because what could they do with just an install postback in an IDFA world?
Piyush
Exactly. Nobody expected Apple to build an attribution system altogether! We did not expect it to become big, moving from SKAN 2.0 and 2.1 to 2.2 and now 3.0.
Shamanth
When did you really sit up and take notice of SKAN? When did you realize that it could become something significant?
Piyush
Well, it was as soon as they announced that IDFA would go, and that for users, it would be opt-in – users would have to respond on individual apps. That's when we started paying a lot more attention to SKAN. We all assumed at that point in time that probabilistic or fingerprinting would also be stopped. So SKAN was the only solution out there for us.
Now, understanding it took me more than six months! It was complicated, to be very honest, with the whole conversion value schema and the delay associated with it, and then understanding exactly why Apple was pushing it. So it took us some time to come to terms with it. But as soon as it was announced that we would have ATT and users would be choosing it for each and every app, it became very clear that this was a significant move from the company that had already given us LAT.
At that point in time, Apple had also started looking into SKAN 2.1 post-install conversion value metrics, and that's when we started paying attention. And to be fair, I know there was a lot of criticism for SKAN when it was introduced, but my first thought was that it could create a level playing field.
Yes, MMPs are doing a great job. But it is also a fact that Google and Facebook and even Apple have enormous control in terms of attribution because they are self-reporting networks. So SKAN sort of created a level playing field when it launched. And that was my first thought, that maybe it was what we required. Now, of course, things have changed and it's pretty clear where we're heading.
Shamanth
From the first version of SKAN (which was just install attribution), through ATT, to SKAN 2.0 and 3.0 – what are the changes that have happened in SKAN?
Piyush
There have been four releases since ATT was launched – SKAN 2.0, 2.1, 2.2 and 3.0.
In SKAN 2.0, they introduced conversion values. That was their main focus. They declared that post-install conversions would be delayed, and the data would be aggregated, without any user level identifiers. Another big thing was that they said they would send the postbacks to ad networks and not directly to advertisers. That was how SKAN 2.0 started.
There weren't really any huge changes in 2.1. But in SKAN 2.2, they introduced view-through conversion. That was different from the earlier 2.0, which had only click-through conversion. That was when we realized that Apple was building a full-blown attribution solution in a certain sense. This became clearer with SKAN 3.0, because they started providing postbacks for situations where the ad networks were not winning the bids. It didn't dive completely into multi-touch, but they did start providing more information than only the last winning bid and other stuff.
Another big change came about with iOS 15. Ad networks and advertisers both started receiving direct postbacks from Apple for all iOS 15 users. This is a significant departure from iOS 14 and 14.5, where all the postbacks were going to ad networks first and then to MMPs and then to us, which was a longer process and suffered from certain loopholes.
Shamanth
You talked about conversion values. What have been some of your learnings from testing conversion values over the last several months?
Piyush
We started testing conversion values right away when they were released, and we have been continuously playing with them. The learnings depend on the gaming genre. Firstly, for us in the social casino space, a relatively low number of users make a purchase in the first 24 hours. So we have not tested anything outside the 24-hour conversion window. I'm assuming 24 hours because that's a practical timeframe.
The second thing that we've started expanding and delving more into is predictions. What can we do with the amount of data that we've collected up to the last updated conversion value?
This can be viewed in two different ways. One way is where you maximize enough information from the conversion value schema and run a prediction on the basis of conversion value. The other way is doing prediction in the conversion value schema itself.
Weâve been experimenting with both and trying to understand which will work better going forward, because we definitely need to maximize the output from conversion value.
A third thing, as part of the experimentation, is that we are considering expanding the conversion window beyond 24 hours to see how exactly that will work. Of course, there is the drawback that we will only get information about users who are opening the app on both days. So we have to back it up with data on how many users actually come back on the second day and go ahead from there.
But I'll tell you something – it is very, very limiting. Conversion value is one reason why everybody in the marketing world has been panicking. And the other big reason is the privacy threshold. That's a black box, right! We just don't know enough about what it is or how it changes. There have been significant changes in the past six months, where DSPs started getting far fewer non-zero conversion values as compared to SRNs. But it's one of the biggest limiting factors, especially given that we can only set up a hundred campaign IDs, so we have to be very smart about a conversion value schema that can potentially provide a lot more information in a privacy threshold world.
Shamanth
I agree, the privacy threshold is certainly a big question mark. What's frustrating is that the privacy threshold has changed over time. So it's sort of like the six blind men and the elephant. You interpret based on the data you are seeing, but you're never really sure what's going on.
I'm also curious, Piyush. You said that with conversion values, you have tested and leaned towards prediction-based events. Tell us more about that, and perhaps you can give us some examples of what that could look like.
Piyush
Let's say a user is playing in hour 6, where hour 0 is the app open instance. Now, we always get the last updated conversion value in the SKAN raw postback, right? If that last updated conversion value is at hour 6, then you only have six hours of data to do a prediction on the likelihood of a user making a purchase in, say, the next seven days.
Contrast this with a situation where a user comes back to the app to play, and we have enough information; for example, the user is opening the app four times, they're executing enough spins, they're going to the lobby a number of times and other stuff. This gives us more data to predict the likelihood of a purchase.
Ultimately, the prediction is on purchase, but it's about the data you can collect in the first 24 hours. Now, this will keep changing for individual users. For instance, you may have to do a prediction on the basis of two hours of data because the user opened the app for two hours and did not open it for the next 24, and that is the last updated conversion value that you receive.
So you have to create a step-by-step conversion value schema with a smarter framework, so that you can track users who are dropping out at 22 hours as well as at 24 hours. And you can then do a prediction of the likelihood of a user purchase.
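To make the idea of a stepwise schema concrete, here is a minimal sketch in Python. The 6-bit layout, the 3-hour buckets, and the milestone names are all hypothetical choices for illustration, not Product Madness's actual schema.

```python
# Hypothetical 6-bit conversion value layout: the low 3 bits track the
# last 3-hour bucket in which the user was active (0-7 within 24 hours),
# and the high 3 bits track up to three engagement milestones.
# Illustrative only — not any advertiser's real schema.

HOUR_BUCKETS = 8
MILESTONE_BITS = {"reached_lobby": 0, "viewed_ad": 1, "spins_100": 2}

def encode_conversion_value(last_active_hour, milestones):
    """Pack the last-active hour bucket and milestone flags into 0-63."""
    bucket = min(last_active_hour // 3, HOUR_BUCKETS - 1)  # 0..7
    flags = 0
    for name in milestones:
        flags |= 1 << MILESTONE_BITS[name]
    return (flags << 3) | bucket

def decode_conversion_value(cv):
    """Recover the hour bucket and milestone set from a conversion value."""
    bucket = cv & 0b111
    flags = cv >> 3
    milestones = {n for n, b in MILESTONE_BITS.items() if flags & (1 << b)}
    return bucket, milestones
```

Decoding the last updated conversion value then tells you both how far into the 24-hour window the user dropped out and which milestones they hit before doing so.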
Shamanth
Those examples are extremely helpful. Iâm curious about something though. Based on my experience, I have seen that non-revenue events tend to be very poor predictors of revenue events. How do you address that?
Piyush
Yes, that's been my conversation with the data science team as well, and there are no two ways about it. But the fact remains that usually in the first 24 hours, if you just base your predictions on revenue, and only 3 or 4% of users are actually making a purchase, what happens to the other 96% of users? You need to have some information about them, because they're the bulk of your users who have the potential of eventually making a purchase later.
So yes,
Non-revenue events are not great predictors, but that's where we have to be smart. Maybe we can have a mix of events – for example, somebody reaching the lobby, viewing an ad, and doing a hundred spins. All three combinations together. Then you set that as a conversion value schema and see if it can become a better predictor of the likelihood of a purchase.
So you have to be smart.
Basically, marketing is heavily based on experimenting, and data science is heavily based on, well, data. And both have a different take on what is practical and how we can collect and provide enough information to the partners for them to optimize.
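As a toy illustration of why a combination of events can beat a single event as a purchase predictor, here is a quick check on invented user rows. The event names and data are made up for this sketch; real schemas are tuned on full cohorts.

```python
# Toy comparison of a single-event vs a combined-event purchase
# predictor. Rows and event names are invented for illustration.

def precision(users, predicate):
    """Among users the rule flags, what share actually purchased?"""
    flagged = [u for u in users if predicate(u)]
    return sum(u["purchased"] for u in flagged) / len(flagged) if flagged else 0.0

users = [
    {"lobby": True,  "ad": True,  "spins_100": True,  "purchased": True},
    {"lobby": True,  "ad": False, "spins_100": False, "purchased": False},
    {"lobby": True,  "ad": True,  "spins_100": True,  "purchased": True},
    {"lobby": False, "ad": True,  "spins_100": False, "purchased": False},
    {"lobby": True,  "ad": True,  "spins_100": False, "purchased": False},
]

single = precision(users, lambda u: u["lobby"])
combined = precision(users, lambda u: u["lobby"] and u["ad"] and u["spins_100"])
```

In this toy data, the single lobby event flags many non-payers, while requiring all three events together flags a smaller but purchase-heavier group — which is the trade-off a conversion value schema built on combined events is betting on.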
Shamanth
So instead of taking one single event, you use a combination of events, maybe even a conditional flow of events, which can be a better predictor than a single event. And I imagine it's truer for you in the social casino space? As you said, since the payer percentages are really low, you must have this sort of prediction because it's more difficult to get revenue-based predictions.
Piyush
Yes, it's true, but still, especially at Product Madness, we have come up with a decent number of events that we can use for predictions. And we do see some amount of purchase in the first 24 hours, which is also giving some input on how the campaign is performing.
But we should address the elephant in the room. The fact remains that only a few networks are working with conversion value schemas right now, since probabilistic attribution is still around. That's why this is a phase where you can experiment a lot. We don't know where Apple is going to head with the whole fingerprinting thing. Or probabilistic modeling, if you will. Of course, those two are different, but they do come from a similar background. And as long as you have probabilistic modeling in the system, it will be very difficult for marketers to dive into conversion value schemas completely. That's because they're still used to getting that D7 ROAS and D30 purchase rate based on probabilistic.
Shamanth
And as far as channels are concerned, probabilistic channels do give you a ROAS, but in my experience, there is still some loss of signal. Then you have Google, which is completely modeled based on what they're doing. Facebook is SKAN, but it's also based on how they define campaigns under the hood. So how do you recommend marketers compare performance between channels at this point in time?
Piyush
I'll be honest, it's a struggle. Right now the problem is data inconsistency.
Let's say you're working with 10 partners. For Facebook, you're receiving only SKAN data. For DSPs, you are receiving SKAN plus probabilistic plus deterministic data for users who are opting in on both sides. That's three layers of data. So straightaway, you see the difficulty in comparing.
But then, there is also the fact that
You're trying to compare two different attribution solutions in a certain sense. One is the MMPs' solution and the other is Apple's solution. From the data that we're receiving, like re-download and other stuff, Apple's solution is technically based on Apple ID. And probabilistic is based on, as you said, the signals that you receive, and it's by nature probabilistic, as the word goes. It wouldn't be fair to compare the two, because both have their own pros and cons. For instance, the kind of post-install data that the MMP solution or the probabilistic solution gives you is much higher than what Apple gives.
So you can experiment with both those solutions. After all, you're getting data for both – for instance, on DSPs, it's not as if you won't get data for SKAN if you run probabilistic; you'll still get both. You can then simply compare the data and understand in which direction it's moving. So yes, it's a struggle, but I think it can be done.
Shamanth
Exactly. It's never going to be an apples-to-apples, clear deterministic solution. Now, the last time we spoke, you said some advertisers had actually figured out SKAN as completely as is possible, given the privacy thresholds. Can you talk about the mechanics of how these advertisers figured it out, and what about their business model and monetization model made it possible to do so?
Piyush
Well,
Some of the advertisers in my own network have figured it out. For example, one of them is using conversion value schema purely to extract install date. Some others are using it to do prediction within the first 24 hours. A few of them actually extended the conversion window to 48 and 72 hours.
So everybody is basically experimenting.
I believe that in the hyper-casual segment, where most of the money is coming from ad monetization, the major players have tried to figure out what conversion value schema would work for them. I can't say much more because it's rather confidential and still a work in progress, but I can say that a few have figured it out.
Shamanth
Of course. And I can certainly see that if they have a lot of conversion value events, some of those can be collated to get more data points about the users. So even if the exact mechanics are not clear, it doesn't surprise me that the code can be cracked, right?
Piyush
The code can be cracked, but think about a world where there's no probabilistic at all. You would see a lot more experimentation, a lot more brainstorming on SKAN and conversion schemas and other stuff. And although, as I said, both have pros and cons and I would not pick one over the other, we still need to reach a world where we have just one source of data. If Apple is very sure that they're going to continue with probabilistic, maybe SRNs can come on board with the idea.
I know that is a very big statement to throw out there. But if we are not going ahead with probabilistic and Apple says no to it, then we're in a world where it's completely SKAN and everybody's focused on SKAN. Right now, it's a transition period, and it's going on longer than anybody could have anticipated.
Shamanth
With all of these different sources of data, different attribution models and different channels, how helpful is non-deterministic measurement? There are a couple of approaches that come to mind. One could be probabilistic modeling. Another could be incrementality and media mix modeling-based understanding of how much incremental lift each channel provides. These are not really dependent on anything deterministic. How effective would you consider these methodologies?
Piyush
I'm not really speaking from personal experience, as we have looked at them but not really used any of them till now. But I can say that it's definitely worth experimenting with these methodologies, at least with incrementality. I wouldn't totally go with media mix modeling, since its inputs have historically favored FMCG clients and it's not really meant for the gaming world.
Incrementality, however, is a promising model and it's a good path to experiment with to understand how exactly you can extract more information and use the output to define your media budget and other details.
It goes back to what I was saying before. Of all these solutions that are out there today, which will survive in 2022? We don't really know, and that's been my main struggle – I don't know which one to invest my energy in. These options might work well if we live in a non-deterministic world, but then I'll personally choose probabilistic modeling. But let's say Apple turns off probabilistic attribution altogether in 2022, and it's only a SKAN world. Then I'll invest all my energy in SKAN and focus on maximizing the output in the first 24 hours, and also play around with other options out there.
We'll have to take a call on what kind of users we don't want any information on. For example, if I want to extend my conversion window to 48 hours, am I happy with just having information about your opening the app on day two or day three?
Those are the bigger calls that we'll have to take in a world where there's just one data source. But right now these other solutions do exist, and I would definitely suggest that anyone have a look at them and read about them.
Shamanth
You talked about some of the changes with iOS 15 and the postbacks that are coming to the advertisers. Help us understand – what has changed within the interfaces of MMPs or other platforms, and what are some of the things that changed under the hood?
Piyush
We have started receiving the postbacks, yes. But again, this is that transition period I was talking about, where we're receiving postbacks only for iOS 15 users, not for those below iOS 15. And the percentage of iOS 15 users is relatively low.
The second problem with receiving the direct postback is that we don't have the matching key. We do receive the raw report campaign ID and ad network ID, but we don't have the key to match it to an ad network name or an ad campaign name, because that exists only with the networks.
So that is somewhat limiting. But there are things we can do at a broader level.
For example, Facebook and Google are not sending re-download percentage in the postback we receive directly. But since we are getting it directly, we can still have a directional metric stating that re-download percentage on Google is around 20% or something like that. Google was also converting all conversion values left blank due to the privacy threshold to zero, because in their head these were at least installs. But now we will know the ROAS percentage in Google, where we know what percent of the ROAS has a null conversion value because of the privacy threshold. So it has a lot more information, especially on SRNs, and that's a welcome change.
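As a sketch of the kind of directional metrics this enables, here is how one might aggregate direct postbacks. The `redownload` and `conversion-value` field names follow Apple's SKAdNetwork postback JSON; the sample payloads are invented for illustration.

```python
import json

# Sketch: directional metrics from direct iOS 15 SKAdNetwork postbacks.
# "redownload" and "conversion-value" are real postback fields;
# "conversion-value" is absent when Apple's privacy threshold isn't met.
# The sample payloads below are invented.

def summarize_postbacks(raw_postbacks):
    """Return the re-download share and the null-conversion-value share."""
    postbacks = [json.loads(p) for p in raw_postbacks]
    n = len(postbacks)
    redownload_pct = sum(1 for p in postbacks if p.get("redownload")) / n
    null_cv_pct = sum(1 for p in postbacks if "conversion-value" not in p) / n
    return {"redownload_pct": redownload_pct, "null_cv_pct": null_cv_pct}

samples = [
    '{"campaign-id": 12, "redownload": true, "conversion-value": 40}',
    '{"campaign-id": 12, "redownload": false, "conversion-value": 7}',
    '{"campaign-id": 37, "redownload": false}',
    '{"campaign-id": 37, "redownload": false, "conversion-value": 0}',
    '{"campaign-id": 55, "redownload": false}',
]
```

Run per media source, this gives exactly the broad-level numbers described here: a directional re-download percentage and the share of installs hidden behind the privacy threshold.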
Moreover, I know people don't typically associate fraud with SKAN, but I do believe that when you are receiving the postback directly, you know roughly what the last updated conversion value is, and you know that nobody can play around with it.
If the postbacks were going to ad networks, they'll definitely know that the last updated conversion value can be fudged and changed to something that they need to optimize towards, and they can play with it. But with iOS 15, we have the control as advertisers, and that's a very good change.
There's something else I want to talk about, and that's related to Private Relay. I'll give you a fascinating update! I recently updated my phone to iOS 15. Now, I'm an iCloud+ user, and I always assumed that Private Relay would be on by default, right? That's not the case. You have to actually go into your settings and choose to switch on Private Relay. Only then will your IP and other details be hidden from tracking on Safari. I didn't really anticipate this, as I thought it would be on by default. It's not really impacting users because it's similar to LAT.
Shamanth
And going back to the postbacks for iOS 15, you said you could see the campaign IDs, but you do not know which campaign names they correspond to. Is that it?
Piyush
That's correct. Of course, you can be smart about it; since you've been receiving data from ad networks directly for the past six months, you know historically which campaign ID is associated with which campaign name, and you can sort of manually match them at your end. But if you create a new campaign, you'll have to wait for the campaign ID and campaign name and match it from the ad networks. And then, there are some ad networks that don't provide it. It's totally up to them, and so you're completely reliant on them to provide it to you. That's a drawback.
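The manual matching described here can be as simple as a lookup built from historical ad-network reporting, with a fallback for new, not-yet-reported campaign IDs. The function names and sample campaigns in this sketch are invented for illustration.

```python
# Sketch: manually matching SKAN campaign IDs (0-99) to campaign names
# using historical ad-network reporting. Names and rows are invented.

def build_campaign_map(report_rows):
    """report_rows: (skan_campaign_id, campaign_name) pairs from reporting."""
    return {cid: name for cid, name in report_rows}

def label_postback(postback, campaign_map):
    """Attach a campaign name, or flag the ID as unmatched."""
    cid = postback["campaign-id"]
    return campaign_map.get(cid, f"unmatched-id-{cid}")

campaign_map = build_campaign_map([(12, "US_iOS_broad"), (37, "UK_iOS_value")])
```

The unmatched branch is the drawback in practice: until the network's reporting exposes the new campaign ID, those postbacks can only be analyzed at the media-source level.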
Shamanth
And how are you collecting these postbacks?
Piyush
You can import them directly into your system, but we're using the MMP's endpoint to receive the iOS 15 postbacks.
Shamanth
Please correct me if I'm wrong, but as of now, I don't believe that there is an interface in the MMPs where you can go in and see the received postbacks.
Piyush
The postbacks come in through their API or the S3 source they have, from where you can pull the data directly. So they're just doing it on our behalf. You can import the data into your system's BI tool and run comparisons if needed.
But
You have to keep in mind that you have two different sources providing the same data. One is for users on iOS 15 and above, coming directly to advertisers or to MMPs, and the other is going to the ad network and then coming to the MMPs and then to us. So it's similar data, but it's a little hidden and somewhat controlled by the individual networks.
Shamanth
But hopefully that'll change as iOS 15 gets more widely adopted in the next few months.
Piyush
Definitely. Take SKAN right now, during this transition period.
You have users who are converting through 2.0, where you're getting only click-through conversion. Then there's inventory on 2.2, where you're getting view-through and click-through conversion. And then SKAN 3.0, where you're getting multiple data points on the last bid and who won and who didn't and who was the assistant installer and so on. And on top of that, you're getting direct postbacks for iOS 15.
So even within the world of SKAN, there are different layers of data. You don't have one consistent data set that you can say is completely accurate right now. For example, I was looking at the data, and I found that almost 40 to 45% of inventory is still on SKAN 2.0. Technically, you're getting only click-through conversions there, and even if view-through conversions are happening, you just don't know it.
And that is why it goes back to your earlier question about comparing probabilistic with SKAN. At what level would you do the comparisons? Because the levels are changing and it's very complicated.
Shamanth
Yes, definitely. It's like you said – iOS itself is fragmented. One follow-up on what you said earlier. You said you had to get the individual postbacks through the S3 endpoint. So is there no marketer-facing interface at the moment where we can see these validated metrics?
Piyush
It's not there for the MMP we are working with, but I'm pretty sure it's in their pipeline. The postbacks are in what they call their data locker, from where I can just pull the data.
Shamanth
Right, that's also the communication I have received from the multiple MMPs that we work with – that there's no UI that a marketer can log in to, just the developer-facing interface.
Piyush
True, because at the end of the day, you already have one data source. It's the same data source, but with more data points because it's coming to you directly. That's good for advertisers, because a cynical guy like me would want to check for fraud, re-download percentage, how many conversions are happening via view-through, how many null conversion values there are because of the privacy threshold, and so on.
So this is the extra information that you can extract, but well, you already have a data source. Most of the partners out there are fraud-free and they're doing a great job.
Shamanth
For all these years until now, we never had a chance to see the postbacks, even in the pre-SKAN IDFA world. But now when you looked at the postbacks for Facebook and Google, was there anything surprising? Did you feel you learned a lot by looking at these postbacks that were not revealed earlier?
Piyush
To be honest, not really. It's just that you are receiving more information – for example, as I said, the re-download flag for Google. There was no re-download flag in Google's raw data earlier, but now with iOS 15, we're receiving it. So you're getting extra information for SRNs, for sure. But it's almost similar to what you are getting for other DSPs already. There's nothing significantly different.
Shamanth
Right. And I think a lot of people were wondering what the SANs might be doing under the hood.
Piyush
I mean, whatever they are doing under the hood, the problem is,
Let's say they're creating multiple campaign IDs on Facebook, which we know to be true. But how would you associate it back? I don't know which campaign ID stands for which campaign, even within their own words. So you can't extract it unless you have the key. And that's the limitation.
Shamanth
Ah, I see what you mean! And are you saying these campaign IDs keep changing every day? That would happen if there are multiple campaigns under the hood.
Piyush
Could be, but again it's all fragmented. Right now, we're running a campaign on iOS 14.5 and above, and we're getting postbacks only for iOS 15 and above. We don't really know what's actually happening in that flow. It's an analysis that we would like to do when iOS 15 has seen wider adoption. But until then, it's all about the matching key. We can only gain information at a very broad, media source level, in a certain sense, across CTA, VTA, re-download percentage and so on, but it's at a media source level. And I don't see advertisers getting the matching key from SRNs to know which campaign ID stands for which campaign name.
Shamanth
You're absolutely right. Piyush, it's rare for me to find somebody I can speak to about SKAN and all these minor details, which are actually a huge deal. So thank you for sharing everything that you did today.
Piyush
Thank you for inviting me. I really appreciate this!
Shamanth
Before we wrap up, could you tell folks how they can find out more about you?
Piyush
Anybody can reach out to me on LinkedIn. I'm pretty active over there. And I like to have these conversations because every time, I learn something new.
I also co-host a podcast called Level Up UA with Adam Smart, sponsored by AppsFlyer. You can find me there too.
Shamanth
Great, we will link to those in the show notes. Thank you for being a guest on the Mobile User Acquisition Show.
Piyush
Thank you!
A REQUEST BEFORE YOU GO
I have a very important favor to ask, which, as those of you who know me know, I don't do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, Google Podcasts or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.
Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read all reviews & I want to make this podcast better.
Thank you – and I look forward to seeing you with the next episode!