
Our guest for today is Piyush Mishra. Piyush is one of the leads of the Growth Marketing team at Product Madness and is known in the mobile marketing space for his expertise in all things related to SKAN. Piyush also co-hosts a podcast called Level Up UA with Adam Smart.

In this episode, Piyush talks about the evolution of SKAN since its release and how marketers are still trying to figure out measurement around it. He talks about the early days of SKAN, how it’s changed – and what is coming in the future. He includes some tips and strategies for dealing with the measurement mess that SKAN has brought in its wake, which we’ve found incredibly helpful – and we hope you will too.






ABOUT PIYUSH: LinkedIn  | Product Madness


Level Up UA Podcast: Apple  | Level Up UA Podcast: Spotify




ABOUT ROCKETSHIP HQ: Website | LinkedIn  | Twitter | YouTube


KEY HIGHLIGHTS

The beginnings: SKAN 1.0

Evolution of SKAN from 2.0 to 3.0

When SKAN gained popularity

Learnings from conversion value testing

The shift towards prediction-based events

Revenue-based vs. non-revenue-based predictions

The reason why probabilistic is sticking around

The challenges in comparing different channel performances

How some advertisers are working with conversion values

Attribution models that are worth looking into

The changes coming with iOS 15

Postbacks and their significance

KEY QUOTES

The humble beginnings of SKAN

If I remember correctly, SKAN 1.0 started with the 13.7 iOS version back in 2018 or 2019. And it’s true that when it came out, nobody actually paid attention. For most marketers, it was just there; they knew they were getting an install postback. There were no post-install attributes to it at that point in time.

Delving into predictions

One way is where you maximize enough information from the conversion value schema and run a prediction on the basis of conversion value. The other way is doing prediction in the conversion value schema itself.

How to overcome non-revenue events

Non-revenue events are not great predictors, but that’s where we have to be smart. Maybe we can have a mix of events – for example, somebody reaching the lobby, viewing an ad, and doing a hundred spins. All three combinations together. Then you set that as a conversion value schema and see if it can become a better predictor of the likelihood of a purchase.

Comparing performance between channels

You’re trying to compare two different attribution solutions in a certain sense. One is the MMPs’ solution and the other is Apple’s solution. From the data that we’re receiving, like re-download and other stuff, Apple’s solution is technically based on Apple ID. And probabilistic is based on, as you said, the signals that you receive, and it’s by nature probabilistic, as the word goes. It wouldn’t be fair to compare the two, because both have their own pros and cons. For instance, the kind of post-install data that the MMP solution or the probabilistic solution gives you is much higher than what Apple gives.

Making use of SKAN to benefit from it

Some of the advertisers in my own network have figured it out. For example, one of them is using conversion value schema purely to extract install date. Some others are using it to do prediction within the first 24 hours. A few of them actually extended the conversion window to 48 and 72 hours.

Working from what we don’t want

We’ll have to take a call on what kind of users we don’t want any information on. For example, if I want to extend my conversion window to 48 hours, am I happy with just having information about your opening the app on day two or day three?

The disadvantage of direct postbacks

The second problem with receiving the direct postback is that we don’t have the matching key. We do receive the raw report campaign ID and ad network ID, but we don’t have the key to match it to an ad network name or an ad campaign name, because that exists only with the networks.

What’s included in the postback?

For example, Facebook and Google are not sending re-download percentage in the postback we receive directly. But since we are getting it directly, we can still have a directional metric stating that re-download percentage on Google is around 20% or something like that. Google was also converting all blank conversion values due to the privacy threshold to zero, because in their head these were at least installs. But now we will know the ROAS percentage in Google, where we know what percent of the ROAS have a null conversion value because of the privacy threshold. So it has a lot more information, especially on SRNs, and that’s a welcome change.

Postbacks are better off with advertisers than ad networks

If the postbacks were going to ad networks, they’ll definitely know that the last updated conversion value can be fudged and changed to something that they need to optimize towards, and they can play with it. But with iOS 15, we have the control as advertisers, and that’s a very good change.

Data reaches us from two sources

You have to keep in mind that you have two different sources providing the same data. One is for users on iOS 15 and above, coming directly to advertisers or to MMPs, and the other is going to the ad network and then coming to the MMPs and then to us. So it’s similar data, but it’s a little hidden and somewhat controlled by the individual networks.

Data that’s available through the different versions of SKAN

You have users who are converting through 2.0, where you’re getting only click-through conversion. Then there’s inventory on 2.2, where you’re getting view-through and click-through conversion. And then SKAN 3.0, where you’re getting multiple data points on the last bid and who won and who didn’t and who was the assistant installer and so on. And on top of that, you’re getting direct postbacks for iOS 15.

What SANs may be doing under the hood

Let’s say they’re creating multiple campaign IDs on Facebook, which we know is true. But how would you associate it back? I don’t know which campaign ID stands for which campaign, even within their own world. So you can’t extract unless you have the key. And that’s the limitation.

FULL TRANSCRIPT BELOW

Shamanth

I am very excited to welcome Piyush Mishra to the Mobile User Acquisition Show. Piyush, welcome to the show.

Piyush Mishra

Thank you, Shamanth, it’s a pleasure to be here. Thanks for the invitation.

Shamanth

Well, you’re among the people who have been at the forefront of thinking about and actioning all things SKAN and iOS, and you have a podcast of your own. So I’m thrilled to have you here today to talk about SKAN, which is very top-of-mind for a lot of mobile marketers.

Let’s start at the beginning. SKAN 1.0 came out a long time before ATT. And even though I knew it was there, I didn’t really pay any attention to it, just like a lot of marketers. So tell us what SKAN 1.0 was like.

Piyush

If I remember correctly, SKAN 1.0 started with the 13.7 iOS version back in 2018 or 2019. And it’s true that when it came out, nobody actually paid attention. For most marketers, it was just there; they knew they were getting an install postback. There were no post-install attributes to it at that point in time.

Plus, there was a huge delay in terms of receiving conversion values – around 48 to 72 hours.

Now, that was an IDFA world. And when you look at it in hindsight, you can see that there were messages from Apple throughout saying that it might not be the world we’d all step into. But at that point in time, people didn’t pay attention because it wasn’t a big thing. It was just provided as install postbacks.

Shamanth

And I can see why a lot of marketers would just disregard it, because what could they do with just an install postback in an IDFA world?

Piyush

Exactly. Nobody expected Apple to build an attribution system altogether! We did not expect it to become big, like with SKAN 2.0, 2.1 to 2.2 and now 3.0.

Shamanth

When did you really sit up and take notice of SKAN? When did you realize that it could become something significant?

Piyush

Well, it was as soon as they announced that IDFA would go, and that for users, it would be opt-in – users would have to respond on individual apps. That’s when we started paying a lot more attention to SKAN. We all assumed at that point in time that probabilistic or fingerprinting would also be stopped. So SKAN was the only solution out there for us.

Now, understanding it took me more than six months! It was complicated, to be very honest, with the whole conversion value schema and the delay associated with it, and then understanding exactly why Apple was pushing it. So it took us some time to come to terms with it. But as soon as it was announced that we would have ATT and users would be choosing it for each and every app, it became very clear that this was a significant move from the company that had already given us LAT.

At that point in time, Apple had also started looking into SKAN 2.1 post-install conversion value metrics, and that’s when we started paying attention. And to be fair, I know there was a lot of criticism of SKAN when it was introduced, but my first thought was that it could create a level playing field.

Yes, MMPs are doing a great job. But it is also a fact that Google and Facebook and even Apple have enormous control in terms of attribution because they are self-reporting networks. So SKAN sort of created a level playing field when it launched. And that was my first thought, that maybe it was what we required. Now, of course, things have changed and it’s pretty clear where we’re heading.

Shamanth

From the first version of SKAN (which was just install attribution), through ATT, to SKAN 2.0 and 3.0, what are the changes that have happened in SKAN?

Piyush

There have been four releases since ATT was launched – SKAN 2.0, 2.1, 2.2 and 3.0.

In SKAN 2.0, they introduced conversion values. That was their main focus. They declared that post-install conversions would be delayed, and the data would be aggregated, without any user level identifiers. Another big thing was that they said they would send the postbacks to ad networks and not directly to advertisers. That was how SKAN 2.0 started.

There weren’t really any huge changes in 2.1. But in SKAN 2.2, they introduced view-through conversion. That was different from the earlier 2.0, which had only click-through conversion. That was when we realized that Apple was building a full-blown attribution solution in a certain sense. This became clearer with SKAN 3.0, because they started providing postbacks for situations where the ad networks were not winning the bids. It didn’t dive completely into multi-touch, but they did start providing more information than only the last win bid and other stuff.

Another big change came about with iOS 15. Ad networks and advertisers both started receiving direct postbacks from Apple for all iOS 15 users. This is a significant departure from iOS 14 and 14.5, where all the postbacks were going to ad networks first and then to MMPs and then to us, which was a longer process and suffered from certain loopholes.

Shamanth

You talked about conversion values. What have been some of your learnings from testing conversion values over the last several months?

Piyush

We started testing conversion values right away when they were released, and we have been continuously playing with them. The learnings depend on the gaming genre. Firstly, for us in the social casino space, a relatively low number of users make a purchase in the first 24 hours. So we have not tested anything outside the 24-hour conversion window. I’m assuming 24 hours because that’s a practical timeframe.

The second thing that we’ve started expanding and delving more into is predictions. What can we do with the amount of data that we’ve collected till the last updated conversion value?

This can be viewed in two different ways. One way is where you maximize enough information from the conversion value schema and run a prediction on the basis of conversion value. The other way is doing prediction in the conversion value schema itself.

We’ve been experimenting with both and trying to understand which will work better going forward, because we definitely need to maximize the output from conversion value.

A third thing, as part of the experimentation, is that we are considering expanding the conversion window beyond 24 hours to see how exactly that will work. Of course, there is the drawback that we will only get information about users who are opening the app on both days. So we have to back it up with data on how many users actually come back on the second day and go ahead from there.

But I’ll tell you something – it is very, very limiting. Conversion value is one reason why everybody in the marketing world has been panicking. And the other big reason is the privacy threshold. That’s a black box, right! We just don’t know enough about what it is or how it changes. There have been significant changes in the past six months, where DSPs started getting far fewer non-zero conversion values as compared to SRNs. But it’s one of the biggest limiting factors, especially given that we can only set up a hundred campaign IDs, and so we have to be very smart about a conversion value schema that can potentially provide a lot more information in a privacy threshold world.

Shamanth

I agree, the privacy threshold is certainly a big question mark. What’s frustrating is that the privacy threshold has changed over time. So it’s sort of like the six blind men and the elephant. You interpret based on the data you are seeing, but you’re never really sure what’s going on.

I’m also curious, Piyush. You said that with conversion values, you have tested and leaned towards prediction-based events. Tell us more about that, and perhaps you can give us some examples of what that could look like.

Piyush

Let’s say a user is playing in hour 6, where hour 0 is the app open instance. Now, we always get the last updated conversion value in the SKAN raw postback, right? If that last updated conversion value is at hour 6, then you only have six hours of data to do a prediction on the likelihood of a user making a purchase in, say, the next seven days.

Contrast this with a situation where a user comes back to the app to play, and we have enough information; for example, the user is opening the app four times, they’re executing enough spins, they’re going to the lobby a number of times and other stuff. This gives us more data to predict the likelihood of a purchase.

Ultimately, the prediction is on purchase, but it’s about the data you can collect in the first 24 hours. Now, this will keep changing for individual users. For instance, you may have to do a prediction on the basis of two hours of data because the user opened the app for two hours and did not open it for the next 24, and that is the last updated conversion value that you receive.

So you have to create a step-by-step conversion value schema with a smarter framework, so that you can track users who are dropping out at 22 hours as well as at 24 hours. And you can then do a prediction of the likelihood of a user purchase.
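As a rough illustration of the kind of step-wise schema described here (the bucket boundaries, event names, and bit layout are hypothetical, not Product Madness's actual schema), the 6-bit SKAdNetwork conversion value could be split between an hours-of-activity bucket and a few engagement flags:

```python
# Hypothetical 6-bit SKAN conversion value schema:
# low 3 bits = hours-of-activity bucket, high 3 bits = engagement flags.

HOUR_BUCKETS = [2, 6, 12, 18, 22, 24]  # upper bounds, hours since install

def hour_bucket(hours_active: float) -> int:
    """Map hours of observed activity to a 0-6 bucket (fits in 3 bits)."""
    for i, bound in enumerate(HOUR_BUCKETS):
        if hours_active <= bound:
            return i
    return len(HOUR_BUCKETS)  # beyond 24h (an extended window)

def conversion_value(hours_active: float, reached_lobby: bool,
                     viewed_ad: bool, did_100_spins: bool) -> int:
    """Pack the activity bucket and event flags into one 0-63 value."""
    flags = (reached_lobby << 0) | (viewed_ad << 1) | (did_100_spins << 2)
    return (flags << 3) | hour_bucket(hours_active)

# A user active for ~6 hours who reached the lobby and viewed an ad:
cv = conversion_value(6, True, True, False)  # -> 25
```

The point of a layout like this is exactly what the quote says: even a user who drops out at hour 2 or hour 22 still lands in a distinct bucket, so the last updated conversion value carries some signal for the purchase-likelihood prediction.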

Shamanth

Those examples are extremely helpful. I’m curious about something, though. Based on my experience, I have seen that non-revenue events tend to be very poor predictors of revenue events. How do you address that?

Piyush

Yes, that’s been my conversation with the data science team as well, and there are no two ways about it. But the fact remains that usually in the first 24 hours, if you just base your predictions on revenue, and only 3 or 4% of users are actually making a purchase, what happens to the remaining 96% of users? You need to have some information about them, because they’re the bulk of your users who have the potential of eventually making a purchase later.

So yes,

Non-revenue events are not great predictors, but that’s where we have to be smart. Maybe we can have a mix of events – for example, somebody reaching the lobby, viewing an ad, and doing a hundred spins. All three combinations together. Then you set that as a conversion value schema and see if it can become a better predictor of the likelihood of a purchase.

So you have to be smart.

Basically, marketing is heavily based on experimenting, and data science is heavily based on, well, data. And both have a different take on what is practical and how we can take that and collect and provide enough information to the partners as well for them to optimize.

Shamanth

So instead of taking one single event, you use a combination of events, maybe even a conditional flow of events, which can be a better predictor than a single event. And I imagine it’s truer for you in the social casino space? As you said, since the payer percentages are really low, you must have this sort of prediction because it’s more difficult to get revenue-based predictions.

Piyush

Yes, it’s true, but still, especially at Product Madness, we have come up with a decent number of events that we can use for predictions. And we do see some amount of purchase in the first 24 hours, which is also giving some input on how the campaign is performing.

But we should address the elephant in the room. The fact remains that only a few networks are working with conversion value schema right now, since probabilistic attribution is still around. That’s why this is a phase where you can experiment a lot. We don’t know where Apple is going to head with the whole fingerprinting thing. Or probabilistic modeling, if you will. Of course, those two are different, but they do come from a similar background. And as long as you have probabilistic modeling in the system, it will be very difficult for marketers to dive into conversion value schema completely. That’s because they’re still used to getting that D7 ROAS and D30 purchase rate based on probabilistic.

Shamanth

And as far as channels are concerned, probabilistic channels do give you a ROAS, but in my experience, there is still some loss of signal. Then you have Google, which is completely modeled based on what they’re doing. Facebook is SKAN, but it’s also based on how they define campaigns under the hood. So how do you recommend marketers compare performance between channels at this point in time?

Piyush

I’ll be honest, it’s a struggle. Right now the problem is data inconsistency.

Let’s say you’re working with 10 partners. For Facebook, you’re receiving only SKAN data. For DSPs, you are receiving SKAN plus probabilistic plus deterministic data for users who are opting in on both sides. That’s three layers of data. So straightaway, you see the difficulty in comparing.

But then, there is also the fact that

You’re trying to compare two different attribution solutions in a certain sense. One is the MMPs’ solution and the other is Apple’s solution. From the data that we’re receiving, like re-download and other stuff, Apple’s solution is technically based on Apple ID. And probabilistic is based on, as you said, the signals that you receive, and it’s by nature probabilistic, as the word goes. It wouldn’t be fair to compare the two, because both have their own pros and cons. For instance, the kind of post-install data that the MMP solution or the probabilistic solution gives you is much higher than what Apple gives.

So you can experiment with both those solutions. After all, you’re getting data for both – for instance, on DSPs, it’s not as if you won’t get data for SKAN if you run probabilistic; you’ll still get both. You can then simply compare the data and understand in which direction it’s moving. So yes, it’s a struggle, but I think it can be done.

Shamanth

Exactly. It’s never going to be an apples-to-apples, clear deterministic solution. Now, the last time we spoke, you said some advertisers had actually figured out SKAN as completely as it is possible to, given the privacy thresholds. Can you talk about the mechanics of how these advertisers figured it out, and what about their business model and monetization model made it possible to do so?

Piyush

Well,

Some of the advertisers in my own network have figured it out. For example, one of them is using conversion value schema purely to extract install date. Some others are using it to do prediction within the first 24 hours. A few of them actually extended the conversion window to 48 and 72 hours.

So everybody is basically experimenting.

I believe that in the hyper-casual segment, where most of the money is coming from ad monetization, the major players have tried to figure out what conversion value schema would work for them. I can’t say much more because it’s rather confidential and still a work in progress, but I can say that a few have figured it out.

Shamanth

Of course. And I can certainly see that if they have a lot of conversion value events, I imagine some of those can be collated to get more data points about the users. So even if the exact mechanics are not clear, it doesn’t surprise me that the code can be cracked, right?

Piyush

The code can be cracked, but think about a world where there’s no probabilistic at all. You will see a lot more experimentation, a lot more brainstorming on SKAN and conversion schema and other stuff. And although, as I said, both have pros and cons and I would not pick one over the other, we still need to reach a world where we have just one source of data. If Apple is very sure that they’re going to continue with probabilistic, maybe SRNs can come on board with the idea.

I know that is a very big statement to throw out there. But if we are not going ahead with probabilistic and Apple says no to it, then we’re in a world where it’s completely SKAN, and then everybody’s focused on SKAN. Right now, it’s a transition period, and it’s going on longer than anybody could have anticipated.

Shamanth

With all of these different sources of data, different attribution models and different channels, how helpful is non-deterministic measurement? There are a couple of approaches that come to mind. One could be probabilistic modeling. Another could be incrementality and media mix modeling-based understanding of how much incremental lift each channel provides. These are not really dependent on anything deterministic. How effective would you consider these methodologies?

Piyush

I’m not really speaking from personal experience, as we have looked at them but not really used any of them till now. But I can say that it’s definitely worth experimenting with these methodologies, at least with incrementality. I wouldn’t totally go with media mix modeling, since its input has historically favored FMCG clients and it’s not really meant for a gaming world.

Incrementality, however, is a promising model, and it’s a good path to experiment with to understand how exactly you can extract more information and use the output to define your media budget and other details.

It goes back to what I was saying before. Of all these solutions that are out there today, which will survive in 2022? We don’t really know, and that’s been my main struggle – I don’t know which one to invest my energy in. These options might work well if we live in a non-deterministic world, but then I’ll personally choose probabilistic modeling. But let’s say Apple turns off probabilistic attribution altogether in 2022, and it’s only a SKAN world. Then I’ll invest all my energy in SKAN and focus on maximizing the output in the first 24 hours, and also play around with other options out there.

We’ll have to take a call on what kind of users we don’t want any information on. For example, if I want to extend my conversion window to 48 hours, am I happy with just having information about your opening the app on day two or day three?

Those are the bigger calls that we’ll have to take in a world where there’s just one data source. But right now these other solutions do exist, and I would definitely suggest that anyone have a look at them and read about them.

Shamanth

You talked about some of the changes with iOS 15 and the postbacks that are coming to the advertisers. Help us understand – what has changed within the interfaces of MMPs or other platforms, and what are some of the things that changed under the hood?

Piyush

We have started receiving the postbacks, yes. But again, this is that transition period I was talking about, where we’re receiving postbacks only for iOS 15 users, not for those below iOS 15. And the percentage of iOS 15 users is relatively low.

The second problem with receiving the direct postback is that we don’t have the matching key. We do receive the raw report campaign ID and ad network ID, but we don’t have the key to match it to an ad network name or an ad campaign name, because that exists only with the networks.

So that is somewhat limiting. But there are things we can do at a broader level.

For example, Facebook and Google are not sending re-download percentage in the postback we receive directly. But since we are getting it directly, we can still have a directional metric stating that re-download percentage on Google is around 20% or something like that. Google was also converting all blank conversion values due to the privacy threshold to zero, because in their head these were at least installs. But now we will know the ROAS percentage in Google, where we know what percent of the ROAS have a null conversion value because of the privacy threshold. So it has a lot more information, especially on SRNs, and that’s a welcome change.

Moreover, I know people don’t typically associate fraud with SKAN, but I do believe that when you are receiving the postback directly, you know roughly what the last updated conversion value is, and you know that nobody can play around with it.

If the postbacks were going to ad networks, they’ll definitely know that the last updated conversion value can be fudged and changed to something that they need to optimize towards, and they can play with it. But with iOS 15, we have the control as advertisers, and that’s a very good change.

There’s something else I want to talk about, and that’s related to private relay. I’ll give you a fascinating update! I recently updated my phone to iOS 15. Now, I’m an iCloud+ user, and I always assumed that private relay would be on by default, right? That’s not the case. You have to actually go into your settings and choose to switch on your private relay. And only then will your IP and other details be hidden from tracking on Safari. I didn’t really anticipate this, as I thought it would be on by default. It’s not really impacting users because it’s similar to LAT.

Shamanth

And going back to the postbacks for iOS 15, you said you could see the campaign IDs, but you do not know which campaign names they correspond to. Is that it?

Piyush

That’s correct. Of course, you can be smart about it; since you’ve been receiving data from ad networks directly for the past six months, you know historically which campaign ID is associated with which campaign name, and you can sort of manually match them at your end. But if you create a new campaign, you’ll have to wait for the campaign ID and campaign name and match it from the ad networks. And then, there are some ad networks that don’t provide it. It’s totally up to them, and so you’re completely reliant on them to provide it to you. That’s a drawback.
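The manual matching described here can be sketched in a few lines. The postback field names below follow Apple's published SKAdNetwork postback keys (`ad-network-id`, `campaign-id`, `conversion-value`, `redownload`); the ad network identifier and campaign names are made up for illustration, and the mapping would come from your own historical ad-network reports:

```python
import json

# The "matching key" built from past ad-network reporting:
# (ad network ID, campaign ID) -> human-readable campaign name.
known_campaigns = {
    ("example-adnet.skadnetwork", 7): "ios_worldwide_broad",
    ("example-adnet.skadnetwork", 12): "ios_us_lookalike",
}

def label_postback(raw: str) -> dict:
    """Attach a campaign name to a raw SKAN postback, if we hold the key."""
    pb = json.loads(raw)
    key = (pb["ad-network-id"], pb["campaign-id"])
    pb["campaign-name"] = known_campaigns.get(key, "UNKNOWN (no matching key)")
    return pb

postback = ('{"version": "3.0", "ad-network-id": "example-adnet.skadnetwork",'
            ' "campaign-id": 7, "conversion-value": 25, "redownload": false}')
labeled = label_postback(postback)  # campaign-name -> "ios_worldwide_broad"
```

New campaigns fall into the `UNKNOWN` bucket until the network reports the ID–name pair, which is exactly the dependency on the networks that the answer above points out.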

Shamanth

And how are you collecting these postbacks?

Piyush

You can import it directly into your system, but we’re using the MMP’s endpoint to receive the iOS 15 postbacks.

Shamanth

Please correct me if I’m wrong, but as of now, I don’t believe that there is an interface in the MMPs where you can go in and see the received postbacks.

Piyush

The postbacks come in through their API or the S3 source they have, from where you can pull the data directly. So they’re just doing it on our behalf. You can import the data into your system’s BI tool and run comparisons if needed.

But

You have to keep in mind that you have two different sources providing the same data. One is for users on iOS 15 and above, coming directly to advertisers or to MMPs, and the other is going to the ad network and then coming to the MMPs and then to us. So it’s similar data, but it’s a little hidden and somewhat controlled by the individual networks.

Shamanth

But hopefully that’ll change as iOS 15 gets more widely adopted in the next few months.

Piyush

Definitely. Take SKAN right now, during this transition period.

You have users who are converting through 2.0, where you’re getting only click-through conversion. Then there’s inventory on 2.2, where you’re getting view-through and click-through conversion. And then SKAN 3.0, where you’re getting multiple data points on the last bid and who won and who didn’t and who was the assistant installer and so on. And on top of that, you’re getting direct postbacks for iOS 15.

So even within the world of SKAN, there are different layers of data. You don’t have one consistent data set that you can say is completely accurate right now. For example, I was looking at the data, and I found that almost 40 to 45% of inventory is still on SKAN 2.0. Technically, you’re getting only click-through conversions there, and even if view-through conversions are happening, you just don’t know it.

And that is why it goes back to your earlier question about comparing probabilistic with SKAN. At what level would you do the comparisons? Because the levels are changing and it’s very complicated.

Shamanth

Yes, definitely. It’s like you said – iOS itself is fragmented. One follow-up on what you said earlier. You said you had to get the individual postbacks through the S3 endpoint. So is there no marketer-facing interface at the moment where we can see these validated metrics?

Piyush

It’s not there for the MMP we are working with, but I’m pretty sure it’s in their pipeline. The postbacks are in what they call their data locker, from where I can just pull the data.

Shamanth

Right, that’s also the communication I have received from the multiple MMPs that we work with: there’s no UI that a marketer can log in to, just the developer-facing interface.

Piyush

True, because at the end of the day, you already have one data source. It’s the same data source, but with more data points because it’s coming to you directly. That’s good for advertisers, because a cynical guy like me would want to check for fraud, re-download percentage, how many conversions are happening via view-through, how many null conversions because of the privacy threshold, and so on.

So this is the extra information that you can extract, but well, you already have a data source. Most of the partners out there are fraud-free and they’re doing a great job.

Shamanth

For all these years until now, we never had a chance to see the postbacks, even in the pre-SKAN IDFA world. But now when you looked at the postbacks for Facebook and Google, was there anything surprising? Did you feel you learned a lot by looking at these postbacks that were not revealed earlier?

Piyush

To be honest, not really. It’s just that you are receiving more information – for example, as I said, the re-download flag for Google. There was no re-download flag in Google’s raw data earlier, but now with iOS 15, we’re receiving it. So you’re getting extra information for SRNs, for sure. But it’s almost similar to what you are getting for other DSPs already. There’s nothing significantly different.

Shamanth

Right. And I think a lot of people were wondering what the SANs might be doing under the hood.

Piyush

I mean, whatever they are doing under the hood, the problem is,

Let’s say they’re creating multiple campaign IDs on Facebook, which we know is true. But how would you associate it back? I don’t know which campaign ID stands for which campaign, even within their own world. So you can’t extract unless you have the key. And that’s the limitation.

Shamanth

Ah, I see what you mean! And are you saying these campaign IDs keep changing every day? That would happen if there are multiple campaigns under the hood.

Piyush

Could be, but again it’s all fragmented. Right now, we’re running a campaign on iOS 14.5 and above, and we’re getting postbacks only for iOS 15 and above. We don’t really know what’s actually happening in that flow. It’s an analysis that we would like to do when iOS 15 has seen wider adoption. But until then, it’s all about the matching key. We can only gain information at a very broad, media source level, in a certain sense, across CTA, VTA, re-download percentage and so on, but it’s at a media source level. And I don’t see advertisers getting the match key from SRNs to know which campaign ID stands for which campaign name.

Shamanth

You’re absolutely right. Piyush, it’s rare for me to find somebody that I can speak to about SKAN and all these minor details, which are actually a huge deal. So thank you for sharing everything that you did today.

Piyush

Thank you for inviting me. I really appreciate this!

Shamanth

Before we wrap up, could you tell folks how they can find out more about you?

Piyush

Anybody can reach out to me on LinkedIn. I’m pretty active over there. And I like to have these conversations because every time, I learn something new.

I also co-host a podcast called Level Up UA with Adam Smart, sponsored by AppsFlyer. You can find me there too.

Shamanth

Great, we will link to those in the show notes. Thank you for being a guest on the Mobile User Acquisition Show.

Piyush

Thank you!

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which, as those of you who know me know, I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify, Google Podcasts or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many, many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read all reviews, and I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!

WANT TO SCALE PROFITABLY IN A POST-IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.