
Paul Bowen, the GM at AlgoLift, has 20 years of experience in digital advertising and is among the folks we look to for expertise on SKAdNetwork.

In our conversation today, Paul breaks down the various ways in which developers need to think about measurement for SKAdNetwork. He touches upon the complexities and limitations of the conversion value framework – and how it might be used along with IDFV-based data to probabilistically infer the value, or LTV, of users and campaigns.

This is an episode with quite a few technical details and nuances, and we strongly recommend listening to it carefully to absorb all the wisdom Paul has shared. Enjoy!






ABOUT PAUL: LinkedIn  | Twitter | AlgoLift




ABOUT ROCKETSHIP HQ: Website | LinkedIn  | Twitter | YouTube


KEY HIGHLIGHTS

🌱 How SKAdNetwork came about

🔄 Everything revolves around the conversion value

6️⃣ 64 combinations of 6-bit conversion values

🔮 How to align conversion values to predict LTV for different apps

🌅 Early purchase behavior is a good LTV indicator for games

💰 Monetization in the first week helps to determine the appropriate conversion value

💸 It can be more challenging to understand the LTV for an app with no day 0 monetization 

🤩 The best indicator of LTV is past spending behavior

🔗 Developers should crack the connection between in-app engagement events and LTV

📶 The 3 data signals needed to define conversion value

🤖 The challenge of programming an LTV signal to trigger a conversion value

🎯 The trade off between accuracy and campaign optimization

⌛ How long is too long to wait for accurate attribution?

⏰ The debate around Facebook’s 24-hour attribution window

🌐 The impact of 24-hour attribution window on other ad networks

📋 Day 1 data is not enough for campaign attribution

🥠 Day 1 behavior is definitely not enough to predict LTV

🧩 Who owns the conversion value pieces of the ecosystem

🙃 Why some developers want to bypass the MMPs for postbacks

✈️ How postbacks travel from ad networks to developers

💔 This is not the time to break away from MMPs

📊 Breaking down probabilistic campaign attribution

💾 The 2 data sets needed to define a probabilistic attribution model

🍰 The 0-100% structure of a probabilistic model

🏗️ The skill sets needed to build these models

📈 Building the model isn’t challenging; extrapolating it is

KEY QUOTES

The importance of day zero monetization

If you can just differentiate between non payers and payers on day zero, then you’re already in a good place to understand which users are going to be high value and which are not. 

It becomes a lot more challenging when you’re talking about apps that have no monetization on day zero, or even within the first week. A good example of that might be a subscription app, where you have a free trial and you spend seven days enjoying the app for free before you’re rolled into a subscription. It’s a lot more challenging there to understand what the future LTV of a user is because the best predictor of LTV is past spending behavior, past revenue behavior. 

Alternative way to predict LTV

When you have no revenue events early in the user’s behavior, you’re going to have to use engagement events and use those to predict LTV. 

How long is too long for campaign feedback

You can wait seven days. The challenge is: is it worth you waiting seven days plus another day—actually eight days before you get that value back? Is it worth eight days after an install has happened to find out how well that install performs? For most performance marketers, that’s just too long.

What developers think about a 24-hour attribution window

Some developers have said to me: “We’re actually just gonna wait a longer amount of time to send back the conversion value to Facebook. So we know that they want that within 24 hours, but it’s just not long enough for us to get a good indication of the user quality and therefore the campaign quality. So we’re gonna wait three days to do that.” And I think there’s gonna be some experimentation as to whether that impacts campaign performance, the ability of the ad network to optimize, but if it gives the performance marketer a better indication of what’s working and what’s not, that may be an okay trade off for them to make.

Why some developers may skip MMPs

Some of the largest developers are speaking to some of the larger ad networks about: “So rather than using an MMP, can I get access to these postbacks then myself? Then I can define and manage the conversion value without going through a mobile measurement partner.” Some of the bigger companies might choose to go that route.

This isn’t a good time to break up with an MMP

I would encourage all developers to work with their MMP today. I think there’s too much uncertainty and you want strong partners at this point in time with all the turbulence going on, so I would encourage all developers to work with their MMP.

How to use probabilistic attribution for SKAdNetwork

Probabilistic attribution is the idea that you try to map installs into your application back to those campaigns that are driving specific conversion values. 

So essentially, what you’re trying to do is create a probability that every install into your application came from a specific campaign. And you do that because you’re looking at the conversion values of a user. So the two data sets you need to do this are your app data and the SKAdNetwork data.

Limitations of probabilistic attribution

The limitation of the probabilistic attribution model is that you would be limited to reporting maybe a D1 to D3 ROAS. I think the question is: how do you then extrapolate that to a D365 or a D180 ROAS?

So the probabilistic attribution model isn’t that challenging. The challenge is how do I then extrapolate that to longer term returns from this specific campaign. Because in this new paradigm, we don’t know how these ad networks are going to be able to optimize. It’s likely that their ability to, for example, find whales is going to be severely diminished. So the LTV curve is going to look very differently than it did before.

FULL TRANSCRIPT BELOW

Shamanth: I’m very excited to welcome Paul Bowen to the Mobile User Acquisition show. Paul, welcome to the show.

Paul: Hey, thanks Shamanth, thank you for inviting me. 

Shamanth: Absolutely. I feel like we’ve been in the same orbits for many, many years and have interacted digitally for a while, actually. I certainly look to and have learnt a lot from what you’ve written around the big transition that we’re going through. We did feature you in our book, Definitive Guide to Marketing in a Post-IDFA World as well, and for all of those reasons, I’m thrilled to have you today. 

We could start by talking about SKAdNetwork which a lot of folks have some familiarity with. So there’s this whole six-bit value framework that SKAdNetwork offers, and without getting too much into the 101 of that, what are some of the options for developers who want to decide what’s the best configuration of those values that they might incorporate for their app? How might they pick what model works the best?

Paul: Yeah, cool. So a very quick recap on just what we’re talking about. SKAdNetwork is the API from Apple that provides some form of attribution for users who don’t give permission to be tracked through the pop-up that I know you’ve covered on previous episodes. And so a key piece of that API—that Apple’s made available—is the conversion value. And as you mentioned, that’s a six-bit value. And really, that’s Apple’s attempt to give some indication to the advertiser about how well their campaign is performing.

Apple’s designed it in a way that they don’t want you to backward engineer that conversion value; it’s quite convoluted in the way that it’s built. But really, it’s meant to replace the ability to track post-install revenue, and post-install engagement data at the user level – to be able to track a campaign and an ad network directly to a user who’s now using your application. 

The six-bit value is a series of six ones and zeros – binary ones and zeros, and there are six of them. You can think of those six ones and zeros as potentially different events that a user can complete within the app. With a six-bit value, there are 64 different combinations of ones and zeros. So you can have 100000, 000001, and so on. And so if you really wanted, you could track each of them separately as one event and update that conversion value through the timer mechanism that Apple’s made available to track the user in the app.
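
For readers who want to see the bit arithmetic, here is a minimal Swift sketch of that idea: six boolean events packed into a value between 0 and 63 and reported through Apple’s SKAdNetwork API. The event names are purely illustrative, not a recommended schema.

```swift
import StoreKit

// A minimal sketch: treat each of the six bits as an in-app event flag,
// pack them into a 0–63 conversion value, and report it to SKAdNetwork.
// The specific events here are illustrative, not a recommended schema.
struct ConversionEvents {
    var finishedTutorial = false   // bit 0
    var reachedLevel5    = false   // bit 1
    var madeAnyPurchase  = false   // bit 2
    var spentOver10USD   = false   // bit 3
    var openedDay1       = false   // bit 4
    var subscribed       = false   // bit 5

    var conversionValue: Int {
        var value = 0
        if finishedTutorial { value |= 1 << 0 }
        if reachedLevel5    { value |= 1 << 1 }
        if madeAnyPurchase  { value |= 1 << 2 }
        if spentOver10USD   { value |= 1 << 3 }
        if openedDay1       { value |= 1 << 4 }
        if subscribed       { value |= 1 << 5 }
        return value // 64 possible combinations: 0...63
    }
}

func reportConversion(_ events: ConversionEvents) {
    if #available(iOS 14.0, *) {
        // Each call updates the value Apple will include in the postback.
        SKAdNetwork.updateConversionValue(events.conversionValue)
    }
}
```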

I think one of the key pieces about conversion value is that you really want to define it in a way that’s predictive of LTV. And I think that’s probably one of the hardest things for developers to understand: so what happens within my app that is predictive of LTV? There’s a couple of ways that you can think about it, depending on the business model of your application. 

For a mobile game, the best predictor of LTV is likely to be early purchase behavior. So a lot of mobile games see early purchases within their app. They normally see some proportion of their users buying in-app purchase bundles on day zero. And so defining the conversion value as Day Zero revenue makes sense in terms of understanding what the future paying behavior of those users are.

If you can just differentiate between non payers and payers on day zero, then you’re already in a good place to understand which users are going to be high value and which are not. 
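
As a concrete illustration of that day-zero revenue approach, here is a minimal Swift sketch that buckets cumulative day-zero revenue into the 0–63 conversion value range. The bucket boundaries are invented for illustration; a real schema would be tuned to the app’s own revenue distribution.

```swift
// A hedged sketch of a revenue-based conversion value: bucket cumulative
// day-zero revenue into the 0–63 range. The boundaries below are invented
// for illustration, not a recommendation.
func conversionValue(forDayZeroRevenueUSD revenue: Double) -> Int {
    switch revenue {
    case ..<0.01:  return 0   // non-payer
    case ..<1.0:   return 1
    case ..<5.0:   return 8
    case ..<10.0:  return 16
    case ..<25.0:  return 32
    case ..<50.0:  return 48
    default:       return 63  // highest spenders
    }
}
```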

However, it becomes a lot more challenging when you’re talking about apps that have no monetization on day zero, or even within the first week. A good example of that might be a subscription app, where you have a free trial and you spend seven days enjoying the app for free before you’re rolled into a subscription. It’s a lot more challenging there to understand what the future LTV of a user is because the best predictor of LTV is past spending behavior, past revenue behavior. 

Then you really need to use in-app engagement data to predict future LTV. And so the goal for an app developer is to try and understand which engagements, which events within the app that a user can complete, that may be causal to future spending behavior. And that’s a real challenge. It’s not something that most people have the capabilities to do. And I think there’s a lot of companies who are working on solving that problem. 

In terms of how you define conversion value—as I said, there’s a couple of ways—one is revenue. Especially if your app sees spending behavior from users early in their lifetime, defining the conversion value based on the revenue generated makes a lot of sense.

When you have no revenue events early in the user’s behavior, you’re going to have to use engagement events and use those to predict LTV. 

Ultimately, the optimal way to define this conversion value is to actually predict the LTV of that user, using all the data that you have about them. If you can use the revenue data, the in-app engagement data and then retention data, basically you have all the signals that you need to predict LTV. However, it’s challenging to do that initially when Apple is going to push out these changes because the conversion value needs to be sent by the app to the ad network. You actually would need to get the LTV of the user into the app. So you probably need to build some server side capabilities to make that LTV prediction, then send it to the app and do that all within a short amount of time. 

That’s the challenge there. Initially, I think we’re gonna see a lot of developers use a revenue model, engagements, or a combination of both as well.
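
For readers who want to picture the server-assisted flow Paul outlines, here is a hedged Swift sketch: the app requests a predicted LTV from a backend and maps the prediction into a conversion value. The endpoint URL, response shape, and bucket scaling are all hypothetical placeholders.

```swift
import Foundation
import StoreKit

// A sketch of the server-assisted flow described above: the app asks a
// backend for a predicted LTV, then maps that prediction into a 0–63
// conversion value. The endpoint and response shape are hypothetical.
struct LTVPrediction: Decodable {
    let predictedLTVUSD: Double
}

func fetchAndReportPredictedLTV(forHashedUserID userID: String) {
    // Hypothetical endpoint – replace with your own prediction service.
    guard let url = URL(string: "https://example.com/ltv?user=\(userID)") else { return }

    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard
            let data = data,
            let prediction = try? JSONDecoder().decode(LTVPrediction.self, from: data)
        else { return }

        // Map predicted LTV into one of 64 buckets (illustrative scaling).
        let bucket = max(0, min(63, Int(prediction.predictedLTVUSD / 2.0)))

        if #available(iOS 14.0, *) {
            DispatchQueue.main.async {
                SKAdNetwork.updateConversionValue(bucket)
            }
        }
    }.resume()
}
```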

Shamanth: Yeah. And I think the bigger challenge, as you pointed out, is that you need to get this within the first couple of days, because the way the timer works, you cannot have like a D7, or D14 signal coming in.

Paul:

You can wait seven days. The challenge is: is it worth you waiting seven days plus another day—actually eight days before you get that value back? Is it worth eight days after an install has happened to find out how well that install performs? For most performance marketers, that’s just too long.

So there’s a trade off between the accuracy of the signal that you’re getting back from the ad network, or giving to the ad network, plus the ability for the ad network to continue to optimize the campaign. And that’s the trade off that we as an industry need to work out.

Shamanth: Yeah. Which is why I would imagine Facebook has said: “Look, we’re going to operate on a 24-hour attribution window.” And they’ve said: “In the future, we could extend that but there’s always going to be a trade off with the accuracy of the signals you’re going to get if we have a longer time period; just not going to be nearly as accurate.” 

So considering that Facebook’s gonna start with a 24-hour time window, with their own custom SKAdNetwork implementation within Facebook Events Manager—obviously nothing’s announced at this point, it will be hopefully announced soon. Given that, what’s the utility of using a custom model, be it retention-based, engagement-based or revenue-based that isn’t tethered to what Facebook’s offering in terms of a 24-hour attribution window? 

Paul: There’s a couple of things there. One is what events you track, and two, is there an opportunity to wait more time—longer than 24 hours—before you send back the conversion value to Facebook. 

The interesting thing here is that Facebook is really setting the standard for the industry, as to what the definition of the conversion value should be. Because the conversion value definition is the same across all ad networks. So Facebook is saying we want this within 24 hours. If you want to run campaigns on both Facebook and Google, and you want to adhere to Facebook’s definition or requirements for conversion value, you’re going to need to send the conversion value back to Google within 24 hours as well. And that may not be Google’s preference in terms of how quickly they get that back. They may think that their users monetize later within a specific application. And they actually may prefer to receive that conversion value back after three days, because then they get more signal back around the performance of their campaigns. So it’s an interesting move for Facebook to set the standard there, but really it just means that every other ad network has to adhere to that.

Only allowing an advertiser 24 hours to send the conversion value limits the advertiser in a couple of ways. So the first thing is, if you want to do some form of probabilistic attribution, to try and attribute an install back to a specific SKAdNetwork campaign using the conversion value, you’re only seeing one day’s worth of user behavior. And that’s really challenging because a lot of users are going to display the same behavior on the first day of using the app. Say, for example, you have a tutorial finish as your final conversion value, you’re going to have a high percentage of users who’ve done that specific action. So in terms of how you might try to understand the underlying behavior of a SKAdNetwork campaign, it’s going to be more challenging to probabilistically attribute what that looks like. 

The other thing is within 24 hours, if you’re having to try and predict LTV, that’s challenging. Again, if the final conversion value is a tutorial finish, knowing whether a user is going to be a long term valuable user for you just on knowing that they’ve finished the tutorial is going to be really, really challenging. So you really need more engagement data or more revenue data within the application to get a better idea about what their LTV is. 

So they’re the two main challenges with Facebook’s definition of conversion value. I mean, there is the opportunity, and I’ve spoken with some developers who are gonna test this, but yeah,

some developers have said to me: “We’re actually just gonna wait a longer amount of time to send back the conversion value to Facebook. So we know that they want that within 24 hours, but it’s just not long enough for us to get a good indication of the user quality and therefore the campaign quality. So we’re gonna wait three days to do that.” And I think there’s gonna be some experimentation as to whether that impacts campaign performance, the ability of the ad network to optimize, but if it gives the performance marketer a better indication of what’s working and what’s not, that may be an okay trade off for them to make.

Shamanth: Sure, and for listeners who may not be super familiar, how might a developer bypass Facebook’s default 24-hour attribution window?

Paul: Facebook doesn’t control that; that’s controlled by the application. I would say that defining the conversion value and managing the conversion value are two pieces of this ecosystem that we still need to determine who’s going to own; it looks like the MMPs are going to own the management. However you need pretty sturdy data science capabilities to be able to understand what the optimal conversion value definition is. And right now, I think most of the MMPs are on level one, in terms of helping advertisers determine what that is. So yeah, this isn’t necessarily within Facebook’s hands, however, they can obviously state a policy that they want their advertisers to adhere to.

Shamanth: Right. So what you’re saying is Facebook is offering their default option of a 24-hour attribution window, but it’s not for them to lay down the law if you choose to bypass it; either by using an MMP to send different attribution windows, or set something up within your app, server side, to send a different attribution window. Is that an accurate understanding?

Paul: Yeah, so it’s not an attribution window; it’s when the conversion value is sent by the app to the ad network. So who knows what Facebook would do? This is just what developers have told me they’re interested in testing, but who knows how Facebook can enforce that 24-hour window? But there are developers who want to test it because they feel like they just don’t have enough information on their users after 24 hours to be able to tell Facebook: “This is a good user” or not.

Shamanth: Understood, understood. So they would use either MMP or the SKAdNetwork API to test this. Is that accurate? 

Paul: Well so, today the ad networks have only agreed to send the postback to the MMPs. Especially for some of the largest developers, I think they would like to get access to the SKAdNetwork postbacks directly from the ad networks. That’s something that I think

some of the largest developers are speaking to some of the larger ad networks about: “So rather than using an MMP, can I get access to these postbacks then myself? Then I can define and manage the conversion value without going through a mobile measurement partner.” Some of the bigger companies might choose to go that route.

Shamanth: What would you say is the hurdle just now? From your statement, it sounded like that’s not still fully clear as to how that would be possible?  

Paul: Firstly, Apple only sends the postback to the ad network—or rather, the app only sends a postback to the ad network. So that’s the first restriction there. The second is that Facebook, especially, has stated they will send the postbacks to the MMP for now. So if a large developer wanted to get access to them, they’d need to get them from Facebook.

Shamanth: Understood. Or via the MMP. Is that right?

Paul: Or via the MMP, yeah.

Shamanth: And if a developer would want to get access to these postbacks directly, who would have to authorize that?

Paul: Basically, Apple or the app sends the postbacks from the app to the ad network. So Apple would need to send those postbacks to the app developers directly from the app. 

Shamanth: Understood. As far as my understanding goes, Apple doesn’t really listen to people; even the biggest developers, Apple doesn’t necessarily listen to them. So what gives them the optimism that this is possible?

Paul: Well, the developers could get them from the ad network. So Apple sends it to Facebook, Facebook sends it to that developer, and then they don’t need the MMP in the middle. Then they can define the conversion value and manage the conversion value as they want to.

Shamanth: Understood. So they would be in the role that MMP is playing just now.

Paul: Yeah. I don’t know whether we’re gonna see too much of that, to be honest. I think some of the large companies would like to go that route.

Shamanth: Yeah. I’m also curious, because in the relatively early days of MMPs, I think there were at least a couple of companies that attempted to build an attribution stack in house, and that just didn’t last; I don’t think anybody ended up sustaining it. So would you say that something like this, where a large developer wants to manage SKAdNetwork postbacks in house, is fundamentally different from 7-8 years ago, in the early-ish days of MMPs?

Paul:

I would encourage all developers to work with their MMP today. I think there’s too much uncertainty and you want strong partners at this point in time with all the turbulence going on, so I would encourage all developers to work with their MMP. 

I think the interesting piece is what opt-in rate do we get with IDFA, and how meaningful is that. Facebook has just reversed their policy around collecting IDFA. If there’s a high opt-in rate there, you can’t track those users without an MMP anyway. So the point becomes less relevant if you have a really high percentage of users who share their IDFA. But overall, for now, I would say focus on working with your MMP.

Shamanth: Yeah. Eventually, if people figure out it’s less technologically complex, maybe they could take it in house. But I agree with you that there’s just far too many question marks, far too much uncertainty around there just now. 

I would love to speak about a phrase you mentioned a few minutes ago, which is probabilistic attribution. Tell us what that means, and why should anybody even consider that, rather than just rely on whatever they can make off of SKAdNetwork with Facebook’s 24-hour window or anything that an MMP might provide?

Paul: Yep. So Apple has given us SKAdNetwork; it reports conversion value at the campaign level. We don’t know who those users are who drive those conversion values. So

probabilistic attribution is the idea that you try to map installs into your application back to those campaigns that are driving specific conversion values. 

So essentially, what you’re trying to do is create a probability that every install into your application came from a specific campaign. And you do that because you’re looking at the conversion values of a user. So the two data sets you need to do this are your app data and the SKAdNetwork data.

You can use, for example, a hashed IDFV to continue to track users within your app; that’s fully within Apple’s terms of service. And so what you can do is essentially look for users who, for example, have completed the tutorial, and then define the conversion value as people who’ve completed a tutorial, and then try and find users who have the same conversion value back in the SKAdNetwork data. And so then you create a probability that every one of those installs that has that conversion value belongs to one of those campaigns. And so you’re essentially building a campaign membership model, but basically assigning a probability that every install comes from a campaign. 
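
The hashed IDFV Paul mentions can be as simple as a SHA-256 of identifierForVendor. A minimal sketch, assuming the app only needs a stable first-party key for its own event data:

```swift
import UIKit
import CryptoKit

// A minimal sketch of the hashed-IDFV idea: hash identifierForVendor so the
// app has a stable first-party key for its own event data without storing
// the raw identifier. (Check Apple's current policies before relying on this.)
func hashedIDFV() -> String? {
    guard let idfv = UIDevice.current.identifierForVendor?.uuidString else { return nil }
    let digest = SHA256.hash(data: Data(idfv.utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}
```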

Those probabilities go between 0% and 100%. It’s zero, if within a specific campaign, there are no installs with a specific conversion value that matches the install. And then it’s 100%, if we know the deterministic attribution of the user, because the MMP has tracked them. Then we know that the probability that that user came from the campaign is 100%. Because we know their IDFA: they shared it on the publisher app, they shared it on the advertiser app, we can track that user. 

What that probabilistic model does is it allows you to understand the underlying behavior of users within the campaign. And so what that allows you to do is update your understanding of the revenue from the installs that campaign drove. So once you have a probability of each install coming from each campaign, you can take the revenue that they’ve generated on day two, day three, day four, and allocate a portion of it to each campaign based on that probability. So it’s this idea that you are looking at the underlying behavior that makes up a campaign, and taking that into account as you learn more about the users who potentially could have come from that campaign.
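
Here is a simplified Swift sketch of that allocation logic, assuming you have SKAdNetwork install counts per campaign and conversion value, plus your own install records keyed by hashed IDFV. It illustrates the concept only; it is not AlgoLift’s actual model.

```swift
// A simplified sketch of the probabilistic attribution idea described above.
// Inputs: SKAdNetwork postback counts per (campaign, conversion value), and
// first-party installs (keyed by hashed IDFV) with their conversion value
// and later revenue. Output: revenue allocated to each campaign in
// proportion to the probability that each install came from it.
struct Install {
    let hashedIDFV: String
    let conversionValue: Int
    let revenueToDate: Double
}

/// skadCounts[campaign]?[conversionValue] = number of SKAdNetwork installs
/// reported with that conversion value for that campaign.
func allocateRevenue(
    installs: [Install],
    skadCounts: [String: [Int: Int]]
) -> [String: Double] {
    var allocated: [String: Double] = [:]

    for install in installs {
        // Total SKAdNetwork installs across campaigns with this conversion value.
        let total = skadCounts.values
            .map { $0[install.conversionValue] ?? 0 }
            .reduce(0, +)
        guard total > 0 else { continue }

        for (campaign, counts) in skadCounts {
            let count = counts[install.conversionValue] ?? 0
            guard count > 0 else { continue }
            // Probability this install belongs to this campaign.
            let probability = Double(count) / Double(total)
            allocated[campaign, default: 0] += probability * install.revenueToDate
        }
    }
    return allocated
}
```

Dividing each campaign’s allocated revenue by its spend then gives the relative predicted ROAS discussed next.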

Shamanth: Sure. So the IDFV-based revenue profile is deterministic, it’s known. So you’re basically mapping the LTV curve of each user in a known manner to the other signals from SKAdNetwork, and also perhaps the spend data, geo data, and so on and so forth.

Paul: Exactly, yeah. The spend data from the ad network will allow you to understand the predicted ROAS, because knowing the predicted revenue is great, but if you don’t know what it took for you to acquire that revenue, then it’s meaningless. So pulling spend data in from the ad network is key, because you can then build a campaign-level predicted ROAS model that allows you to measure campaigns relative to one another.

Shamanth: Sure. And if a developer had to build out or execute a probabilistic attribution model, what sort of resources should they look for? What sort of skill sets should they look to build within the teams, if they had to execute something like this?

Paul: You need some data science capabilities to build this model.

The limitation of the probabilistic attribution model is that you would be limited to reporting maybe a D1 to D3 ROAS. I think the question is: how do you then extrapolate that to a D365 or a D180 ROAS?

So most advertisers use a cohort model today to be able to do that. And there are important features, or dimensions, in that cohort model – for example, Facebook as a source, or a specific VO campaign as a campaign type. Those are not going to be available in this new paradigm. And so once you work out what the D1 ROAS or the D3 ROAS of these users is, how does that extrapolate to a D180 or D365 ROAS—if those are your targets?

So the probabilistic attribution model isn’t that challenging. The challenge is how do I then extrapolate that to longer term returns from this specific campaign. Because in this new paradigm, we don’t know how these ad networks are going to be able to optimize. It’s likely that their ability to, for example, find whales is going to be severely diminished. So the LTV curve is going to look very differently than it did before.

Facebook is very good at whale hunting because they know all the IDFAs of users; they’re not going to have that in the new paradigm. And so the cohort model you were using to extrapolate D7 ROAS to D180 just doesn’t work anymore.
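
For context, the cohort extrapolation Paul says will break is often just a historical multiplier applied to an early ROAS reading. A sketch, with an invented multiplier:

```swift
// A sketch of the cohort-multiplier extrapolation being questioned here:
// scale an observed D3 ROAS by a historical D3 -> D180 multiplier. The
// multiplier is invented, and the point of this passage is that historical
// multipliers may no longer hold once ad networks lose user-level signal.
func extrapolatedD180ROAS(d3ROAS: Double,
                          historicalD3ToD180Multiplier: Double = 3.5) -> Double {
    return d3ROAS * historicalD3ToD180Multiplier
}

// Example: 12% D3 ROAS * 3.5 ≈ 42% projected D180 ROAS under the old curve.
```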

Shamanth: Yeah. Right. And I think that’s a different paradigm altogether from the one we live and operate in today.

Paul, I know we’re coming up against time. This is incredibly instructive because there are so many questions around how SKAdNetwork works. I think it’s also so interesting to hear you talk about how, yes, this is everything you can do with SKAdNetwork, but there’s so much more you can do outside of it. And that’s very instructive to know and hear about, and I’m sure we will have a lot more answers in the next couple of weeks and months. But this is perhaps a good place for us to wrap. Before we do that, can you tell folks how they can find out more about you and everything you do?

Paul: Yeah, I work at a company called AlgoLift, which was recently acquired by Vungle. You can email me at paul [at] algolift if you want to learn more. We provide measurement services for SKAdNetwork campaigns. I’m also on LinkedIn, Paul Bowen, if you want to connect there as well.

Shamanth: Excellent, we will link all of that in the show notes, but for now thank you so much for taking the time to be on the Mobile User Acquisition Show. Thank you, Paul.

Paul: Thanks for having me. Cheers.

A REQUEST BEFORE YOU GO

I have a very important favor to ask, which as those of you who know me know I don’t do often. If you get any pleasure or inspiration from this episode, could you PLEASE leave a review on your favorite podcasting platform – be it iTunes, Overcast, Spotify or wherever you get your podcast fix. This podcast is very much a labor of love – and each episode takes many many hours to put together. When you write a review, it will not only be a great deal of encouragement to us, but it will also support getting the word out about the Mobile User Acquisition Show.

Constructive criticism and suggestions for improvement are welcome, whether on podcasting platforms – or by email to shamanth at rocketshiphq.com. We read all reviews & I want to make this podcast better.

Thank you – and I look forward to seeing you with the next episode!


WANT TO SCALE PROFITABLY IN A POST IDENTIFIER WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.