
Our guest today is Patrick Stuart-Constant, the CEO of Sociaaal, an AI operator for apps. Sociaaal buys apps and, through frontier data and generative AI, helps make them into great businesses. Under his leadership, Sociaaal has grown to a $13M run rate, achieved profitability, and scaled multiple apps through a lean model powered by frontier technology. I’m excited to speak with Patrick because I’ve known him for a very long time, and he and his team at Sociaaal have been at the forefront of adopting AI way before it was cool: they’ve tested massive volumes, iterated aggressively, and adopted frontier models and technology well before they became mainstream. In today’s interview, I’m excited to dive into his perspectives on how generative AI has completely transformed growth marketing and apps.



About Patrick: LinkedIn | Substack

Connect with Sociaaal: Webpage | LinkedIn | Substack

About Rocketship HQ: Website | LinkedIn | Newsletter | YouTube | Podcast Website

FULL TRANSCRIPT BELOW

Shamanth:  Quick note – some of you heard me shut down the Mobile UA Show rather publicly a few months ago in order to launch Intelligent Artifice as a new podcast. Turns out those should not have been two different things. I was trying to pivot when I should have been trying to evolve, so I’m back.

This is the Mobile UA Show. It’s reopened. Intelligent Artifice is our deep dive series on AI and mobile. Same me, same depth, one single feed.

Welcome and welcome back to the Mobile UA Show. Today’s episode is part of our series Intelligent Artifice, a deep dive on how AI is transforming performance marketing. Since 2018, across over 250 episodes, we’ve been deconstructing how the best performance marketers actually win. Every week I sit down with the operators shaping this space, or we tear apart the ad systems behind the highest-performing advertisers in the world. I’m Shamanth Rao, founder of RocketShip HQ. Let’s get into it.

Our guest today is Patrick Stuart-Constant, the CEO of Sociaaal, which is an AI operator for apps. Sociaaal buys apps, and through frontier data and generative AI, helps make them into great businesses. Under his leadership, Sociaaal has grown to a $13M run rate, achieved profitability, and scaled multiple apps through a lean model powered by frontier technology. I’m excited to speak with Patrick because I’ve known him for a very long time, and he and his team at Sociaaal have been at the forefront of adopting AI way before it was cool. They’ve tested massive volumes, iterated aggressively, and adopted frontier models well before they became mainstream. In today’s interview, I’m excited to dive into his perspectives on how generative AI has completely transformed growth marketing and apps.

I’m excited to welcome Patrick Stuart-Constant to Intelligent Artifice. Patrick, welcome to the show.

Patrick:  Lovely to be here.

Shamanth:  Excited to have you, Patrick. I’ve admired a lot of your work for a very long time. One reason among many is that you guys were really adopting AI and automation way before it was cool – way before it was easy, even. We’ll talk about a lot of your journey and your learnings as we go ahead, but let’s start at the beginning. What was your creative operation like when you guys really started doubling down on AI and automation, and what was the inspiration for integrating AI so early, given that AI wasn’t nearly as easy or as advanced as it is today?

Patrick:  Thanks for the question. The company is three years old. For the first year we were working with an agency and they were producing creatives the traditional way. We were also doing a lot of fixed CPI deals and things like that.

Then we started to internalize user acquisition, and so we also internalized creative production. That was about two and a half years ago now. We went straight for AI creatives. The first reason was to reinvent how an app operator functions in the world of generative AI – that was part of the premise from the very start. I was personally a relatively early adopter of gen AI. I was using LLMs like DaVinci from OpenAI before ChatGPT became mainstream, and I was closely following all the work different people at Google were doing. So I had been fascinated with LLMs and generative AI for many years.

The natural thing we did is go straight for gen AI creatives. The bet, which has quite a lot of things built upon it, is that if you can build the right systems around these foundational models to extract maximum value and really lean into the infinite potential of generative AI, then as models progress it’s a tide that lifts all boats. If you’ve got the right systems, skills, teams, processes, and ways of tracking data models in place, then as models progress, your whole user acquisition is lifted and it accelerates faster and faster.

Initially, when we started with generative AI ads, it was the Will Smith spaghetti era of generative AI – the quality wasn’t great. But our bet was: if we can produce something today that’s just about as good as using live creators, then in a few years we’ll be really outperforming live creators and actors. And that’s how we started with gen AI creatives.

Shamanth:  That’s an interesting pattern. I’ve noticed, when I’ve talked to folks who are really pushing the envelope of AI today, that they were comfortable starting when AI was pretty terrible – this was two or three years ago, when the output quality was not good. That’s something I’ve noticed with a lot of the folks on the leading edge: they stuck with it because there was a conviction that it would get better, and that when things did get better, they’d be at an advantage. Sounds like that was the case for you as well.

Switching gears to today β€” everyone talks about audiences getting tired of AI-generated creative. Talk to me about the ways in which AI fatigue is manifesting, and also what you think is the way to counteract that as a marketer or an entrepreneur.

Patrick:  I think a lot of marketers are worried about AI fatigue and don’t want to lean into generic ads for this reason. There are two main dimensions to gen AI fatigue in ads.

The first is visual. You get an ad that looks a tiny bit AI, and you get comments like “stop using AI in your ads,” etc. That dimension was very true six months ago – now we hardly get these comments under our videos because they’ve become nearly impossible to distinguish from reality. Within another six to 12 months, I don’t think you’d really be able to tell visually if an ad was filmed with a human actor or if it’s gen AI. So that dimension is less of a problem than it was, and it’s not going to be a problem for very long.

The other dimension, which is more fundamental, is that people might not even know it’s because of AI – but they’re tired of AI slop. AI slop is just basically mediocre content. That’s where you really have to inject a lot of creativity. You use gen AI to build, but you work on that with highly creative people to produce content that’s interesting and original.

The problem is that most people use similar prompts – “generate a script for this app,” etc. – and it converges towards mediocre content that doesn’t stand out. That’s the opposite of what marketing should be doing, which is to stand out from the crowd. You have to be careful how you use gen AI and maintain true creativity in your marketing.

Shamanth:  From what you’re saying, the problem isn’t so much AI itself – the problem is lazy marketing. The people who just take the AI output without double-checking. That is the root cause of AI slop. If you actually put in a lot of thought and discernment, it wouldn’t be AI slop. That’s a great point, and it’s something I’m noticing as well. If there’s enough thought and attention going into an ad, AI can produce very good quality ads. The slop happens when that thought and attention isn’t there.

Patrick:  Exactly. The problem in AI slop isn’t the AI part of it – it’s the slop.

Shamanth:  For sure. On your team, how do you ensure process-wise that there’s less slop? I would imagine if a marketer or a video editor is tasked with coming up with 10 ads, the natural temptation is to go with whatever’s easiest to make – copy-paste from a competitor, copy-paste what’s already working, or just take whatever the AI gives you. What are some of the ways you’ve thought about this on your own team?

Patrick:  You have to find the right balance between automating things and keeping humans in the loop – the right humans in the loop. The whole challenge is to have a high volume of high-quality creatives. It has been feasible for many years to make a few dozen good ads. The challenge is producing thousands of high-quality ads.

You need to create the right system. And a system is humans, tooling, workflows, processes. In our case, we’re very data-centric in how we operate, even for creatives. We’ve trained our own data models to allow us to see the signal amongst the noise relatively quickly and cheaply, using a Bayesian approach. It’s creating this system which allows you to produce high velocity, high quality, high quantity of creative.
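Patrick doesn’t detail the data models themselves, but the Bayesian idea he describes – finding the signal amongst the noise quickly and cheaply, with relatively few impressions per creative – can be sketched with a simple Beta-Bernoulli model of CTR. Everything below (the prior, the 2% baseline, the function name) is a hypothetical illustration, not Sociaaal’s actual system:

```python
import random

def prob_beats_baseline(clicks, impressions, baseline_ctr,
                        prior_a=1.0, prior_b=99.0, samples=20000):
    """Monte Carlo estimate of P(creative CTR > baseline) under a
    Beta-Bernoulli model. The Beta(1, 99) prior encodes an expected
    CTR of roughly 1%, so tiny samples aren't over-trusted."""
    a = prior_a + clicks
    b = prior_b + (impressions - clicks)
    rng = random.Random(0)  # fixed seed for reproducible estimates
    hits = sum(1 for _ in range(samples)
               if rng.betavariate(a, b) > baseline_ctr)
    return hits / samples

# A creative with 30 clicks on 1,000 impressions vs a 2% baseline:
p = prob_beats_baseline(30, 1000, 0.02)
```

With 30 clicks on 1,000 impressions against a 2% baseline, the posterior calls the creative a likely winner; the same 3% point estimate on only 10 impressions is treated as mostly noise – which is exactly the cheap kill-or-scale signal a high-volume testing pipeline needs.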

Shamanth:  And if I might drill down a bit on what you said – what would you say characterizes the system you’ve come up with versus a system that would basically result in AI slop? What are the big characteristics?

Patrick:  Keeping humans in to inject creativity – finding new marketing angles, new ad concepts, new video types. That’s one of the very important things. One of the departments in our company where we use AI the most is our creative department, and it’s also the department that is growing the most, because of what I’ve noticed as we scaled up from 100 to 500 to 1,000 to 2,000 ads a month. Now we’re close to 3,000, heading to 10,000 within the next few months. You have to have quite a lot of humans in there to make sure you keep this creativity, keep your ads refreshing, and keep innovating.

Shamanth:  You’re right – the more ads you have, the more quality checks you need, the more directing and guidance you need. So this definitely makes sense. I was also very intrigued when you said that as you used more AI, you actually needed more humans. Was that surprising to you upfront?

Patrick:  Yeah, it was. Initially, when we started with AI ads, before I really understood that you needed to maintain a high level of creativity, I was like, “This is amazing. With two people I can produce 2,000 ads a month. This is huge.” And it was – even two people producing 2,000 ads a month, especially back then, was impressive. We scaled quite quickly with only a two-person team to 2,000 ads a month. And it was good. We were beating market benchmarks, we were scaling, and it was doing okay.

But I had this feeling that we could do better. To do better, we basically needed to expand the team. So we multiplied the number of people in the creative team by three – we went from two to six – and we stayed at 2,000 ads a month for quite a few months. We saw performance, in terms of CTR etc., basically double.

It goes back to the point that having a high quantity and high velocity of ads is very important, but it’s also having a high quality and that human touch and creativity that takes your user acquisition to the next level.

Shamanth:  That is so interesting. I would love for you to take me back to the point in time when you had two creative team members producing 2,000 ads. What were you noticing in that setup that made you say, “Okay, I don’t want more ads per designer – I actually want to add more humans”? What was breaking at that point?

Patrick:  It was the multiplying of variations. We’ve got quite a systematic process where we test each variable in an ad – dozens of hooks, dozens of actors, etc., for the same winning ad. That does increase performance, but those videos also have a much shorter lifetime.

The ones that really outperformed – the ones that got to what we call internally “unicorn status,” ads we could run at incredibly low CPIs/CPAs at really high volumes of spend – those were coming from basically brand new ads we were building using AI tools. The idea was: what if we can build a high volume of brand new, completely different ads? That’s where we’d be able to really outperform.

It’s one of the operating principles of the company since the start: follow the data, and whatever is working, double down on it. As soon as we see something performing, we just double or triple the resources on it. And that’s how we’ve been growing the company.

Shamanth:  So the end goal isn’t to produce 2,000 ads – the end game is to come up with a small number of unicorn ads, and the 2,000 ads are in service of that. So if I understand correctly, you started noticing that the unicorn ads weren’t made with AI-driven iterations.

Patrick:  They were AI ads, but they were like side projects – brand new ads. “Oh, let’s do this, let’s try this.” They weren’t a variation of an existing marketing angle or concept.

Shamanth:  Right, right. Is there an example that comes to mind of a unicorn ad like this?

Patrick:  This goes back to the point that you can’t rely too much on LLMs to come up with the ad concepts themselves. They’re called artificial intelligence – not artificial creativity or artificial imagination. It’s not what these models are optimized for, and it’s not what they’re particularly good at. Ask an LLM to tell you a joke and the joke might be okay, but it will sound very familiar and just be a variation of some classic joke – not usually that funny. That’s actually one of the tests I do every time there’s a new model: I’ve got different prompts to generate jokes, and I track whether they’re getting any funnier. They’re getting a tiny bit funnier, but it’s not yet hilarious.

Anyway, an example: on one of our apps, we were getting a few reviews – maybe 1% of reviews – from people who couldn’t understand how to use the app. If we’d asked an LLM for script ideas, it would’ve probably told us to do a tutorial, to better explain or improve the onboarding and UX. What we decided to do instead was take all those reviews from people who didn’t understand how the app worked and – in a very troll-like, light-humor manner – essentially roast the people who didn’t understand it. Like, “how can’t they understand this? It’s the simplest thing.” And that ad really outperformed.

If you ask Claude to generate something like that, it won’t. And in a world where there’s nearly unlimited content being produced by LLMs, the way to stand out in your marketing is to produce things that an AI would never produce.

Shamanth:  You are so right – this is not the sort of thing that an AI would produce unless you see the idea first.

Patrick:  Exactly. That’s why you have to have a human in the loop in the right place. You do need to see the idea first – but once you see it, you can generate the scripts and things for it, and that can do quite well. Depending on which LLM and how you prompt it, but yeah.

Shamanth:  Very interesting.

Patrick:  I want to encourage all brands out there to go out insulting their users – well, it worked for one app of ours where that brand voice fits quite well. It’s quite a Gen Z app. And all the comments under that ad were “best ad ever.” People loved it because it felt refreshing. Even if it sounded like an AI voice, there wasn’t a single comment about AI slop, because it felt human in a way.

Shamanth:  Yeah. And obviously if it feels more human, it generates more engagement. If it generates more engagement, the algorithms love it and give it more reach. So that’s a virtuous cycle right there.

Patrick:  Yeah.

Shamanth:  To switch gears a bit – in the prep call for this recording, you mentioned that you’re working on something that could change how you decide what ads even get tested. Can you talk about that?

Patrick:  Some of the things we’re working on that really excite me: originally it was hard to scale efficiently to 5,000 ads, and we had to build internal software to manage, test, and deploy all those creatives on campaigns. That was the first challenge, but it was relatively manageable. Now we’re scaling up towards 10,000 ads. But what really excites me is: how can we scale up to 100,000 ads a month? When you’ve got brand new creative directions, you can’t just test all those ads on ad networks because of costs, campaign limits, and the sheer complexity of managing that level of creative testing.

So one thing we’re working on as a first filter is using LLMs, based on audience data, to imitate that audience and basically pre-rank ads before we even test them. You produce 100,000 different ads, and the LLM identifies the top 10,000 – those are the ones you actually test on the platform and see how they perform with real humans.
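Patrick hasn’t shared how their internal version works, so what follows is only a minimal sketch of the pre-ranking idea: a stubbed-out scoring function stands in for an LLM role-playing the target audience, and only the top fraction of ads survives to real in-platform testing. All names and the toy scoring heuristic are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    script: str

def llm_audience_score(ad: Ad, persona: str) -> float:
    """Hypothetical stand-in for an LLM call that role-plays the target
    audience (described by `persona`) and rates the ad from 0 to 1.
    A real version would prompt a model with the persona and the script."""
    # Toy heuristic so the pipeline runs end to end: reward direct address.
    return 1.0 if "you" in ad.script.lower() else 0.2

def prerank(ads, persona, keep_fraction=0.1):
    """Keep only the top fraction of ads by simulated-audience score;
    only these proceed to real (paid) creative testing."""
    scored = sorted(ads, key=lambda ad: llm_audience_score(ad, persona),
                    reverse=True)
    k = max(1, int(len(ads) * keep_fraction))
    return scored[:k]
```

The design point is that the LLM filter doesn’t need to be accurate in absolute terms; it only needs to rank well enough that the surviving 10% is richer in winners than a random 10% would be.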

The way I see ads and marketing is that it’s a huge, nearly infinite search space. LLMs help you explore a much wider part of that search space because you have many more attempts. But you have to be looking in the right areas. If you produce 10,000 awful ads in the totally wrong part of the search space, you won’t perform. If you produce 10 ads in the right part of the search space, you might get a few winners. If you’re producing 100,000 ads in the right part of the search space, at least 10,000 of them are going to be winners.

Shamanth:  You’re right – the search space is really infinite. And it sounds like you’re looking at using LLMs to narrow that down to what has a high probability of performing. And I think that will also let you explore even more ideas, because if you can rule out A, B, C, you can just stop testing X, Y, Z.

Patrick:  Exactly.

Shamanth:  And what’s the actual form you anticipate this will take? Is this an in-house agent your team talks to?

Patrick:  There have been a few academic papers on this topic. We’re working on an internal version, but there are a few companies that have launched focusing on this. Depending on how fast they go, we might just use their services. Our philosophy is: if something doesn’t exist, we build it; but if it does exist and there’s a good company doing it, we go with that company as long as the pricing is reasonable.

Shamanth:  Interesting. I’ve heard of a couple of services that call themselves synthetic focus groups. Haven’t seen the results, so I’d be curious to find out what you find.

Patrick:  We’re currently working on it β€” it’s going to be in production quite soon, hopefully. Some of the academic literature suggests these LLMs perform better than random. It’s not a perfect signal, but any signal that’s better than random and isn’t just noise is welcome.

Shamanth:  Certainly. Also, to zoom out a bit – you’ve scaled your creative production, you’ve scaled the number of humans on your team. How do you think about moats in an AI world, in a world where ads can be copied almost instantly? Now even apps can be copied almost instantly – I could drop a prompt into Claude Code and have something very functional in a couple of hours, sometimes sooner. What do you see as a moat? What will lead to sustainable advantage as you grow, or for any company?

Patrick:  I think generally quite a lot of the classic moats still work in an age of AI. Human networks, brands – brands may actually matter more than ever. Those things still work.

For our particular case, we’re scaling six apps at the moment. When you are a category leader – and a lot of our best apps are leaders in their particular niche – there’s no point copying the second or third app. What matters is mastering ads and continuing to lead in that category. That’s where creative velocity is very important. They can copy your ads, but we launch campaigns once or twice a week with new creatives, and as soon as they start working they get a lot of spend. So it will always be us showing that new ad concept first to the most relevant users. You maintain your lead by continuing to innovate.

One thing we do that I think is actually quite interesting for anyone with an app that’s a leader in its category: we hardly ever look at or copy what our direct competitors are doing. Instead, we look at what’s happening in completely different app categories and take inspiration from there. Human taste is usually needed to do that – we’ve tried to automate it, but human taste has worked best for us. Even things that aren’t apps at all – e-commerce, for example – taking inspiration from that for your own category has often worked very well for us.

Shamanth:  That’s interesting, and it’s not very common. Is there an example you can think of – something you’ve looked to in a completely different category for inspiration?

Patrick:  Yes, and if you want I can also go back to the moat question, because I didn’t fully finish it. But for an example of a completely different category – I don’t know if you’ve seen those viral videos from restaurants where it’s a completely unrelated video, then there’s a really cool, very sudden transition, and it’s like “Giuseppe Pizza” or “Tony’s Kebab Shop.” We did that for quite a few of our apps for a while, and those videos performed well. One was a guy falling from the roof of a house, and he lands on our App Store page.

Shamanth:  That’s crazy. And was that generated with AI?

Patrick:  Yes.

Shamanth:  That’s amazing. And I know we got sidetracked a bit – you were also going to go back to the moat question.

Patrick:  One of the things I’ve really liked about our apps is that we manage to generate very low CPIs for the market – around 50 cents, or maybe slightly higher, towards 60–70 cents, for the high-quality traffic we’re bidding on now. But it’s pretty low. So we get a lot of new users, which generates a lot of data, and we’re very data-centric. I think proprietary data is one of the things that will continue to yield moats.

The other thing, I would say, is not a moat itself, but it’s what gives you the opportunity to build a moat in the era of AI: velocity – moving very fast, always being on the leading edge of technology. It’s not a moat per se, but it gives you the opportunity to keep playing and actually develop one.

Shamanth:  And as you also pointed out, having enough humans in the loop – so it’s not just velocity, it’s velocity in the right direction.

Patrick:  Yes. That’s very important. Moving fast in the wrong direction is bad.

Shamanth:  Indeed. Excellent, Patrick – this has been incredible. There’s a lot I’ve learned, as I did when I read some of your writings. I’ll link to those in the show notes, and I’d highly recommend folks check them out. But this is perhaps a good place for us to wrap. Before we do that, could you tell us how people can find out more about you and everything you do?

Patrick:  LinkedIn is probably the best and easiest place. I also have a couple of Substacks – it’s very kind of you to link to them. One is my personal Substack, more about technology and philosophy, which has been my main passion for a long time. My other passion is linking the two and taking a more philosophical approach to technology, and to AI in particular. And then my company has its own Substack, Sociaaal, which is more focused on practical things for app developers, app studios, and roll-ups. Those are the best places, along with following our Sociaaal LinkedIn page.

Shamanth:  Wonderful. We’ll link to all of these – certainly your writings, which I believe are on your Substack. Excellent. For now, this is a good time to say thank you for being a guest on Intelligent Artifice.

Patrick:  Thank you very much for having me. It has been a pleasure.

Shamanth:  Wonderful.

WANT TO SCALE PROFITABLY IN A GENERATIVE AI WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know working on the hardest problems in growth.