
Our guest today is Mike Taylor, an AI and prompt engineering expert. In this episode, he dives into the fascinating world of AI, sharing insights on the principles of prompt engineering and how these techniques are revolutionizing the industry. From breaking down complex tasks to optimizing AI performance, Mike provides valuable tips and real-world applications that can help you harness the full potential of AI in your projects.





About Mike: LinkedIn | Prompt Engineering for Generative AI Book | Brightpool

ABOUT ROCKETSHIP HQ: Website | LinkedIn | Twitter | YouTube


KEY HIGHLIGHTS

⚖️ Early AI prompting required arranging words in specific ways to get desired results.

⏱ Dividing labor among AI tasks helps avoid conflicting instructions and improves outcomes.

💎 Prompt engineering is likened to early growth hacking, emphasizing scalable solutions.

🧷 Running multiple prompts and comparing results can optimize AI performance.

✂️ Using AI as a brainstorming partner is beneficial for generating creative ideas.

🔍 The accessibility of coding and AI tools has increased, making it easier for non-programmers.

🗝 AI tools can automate repetitive and tedious tasks, freeing up time for more creative work.

📚 Using Dreambooth for training AI models with client-specific data improves consistency in outputs.

🛒 AI-generated scripts can improve significantly with the addition of training examples.

📐 Techniques like chain of thought and giving AI thinking space enhance task performance.

FULL TRANSCRIPT BELOW

SHAMANTH RAO: 

I’m excited to welcome back Mike Taylor to the Mobile User Acquisition Show. Hey Mike, welcome back to the show.

MIKE TAYLOR: 

Good to be here. Thanks for having me back.

SHAMANTH RAO: 

I’m also excited to have you back because your career has taken a very interesting turn since the last book. I’ve also benefited a lot from just speaking with you one-on-one about the turn your career’s taken. Your work has focused on AI for a while now: you consult with and advise a number of companies, and your first book has come out, or is coming out.

MIKE TAYLOR: 

It is coming out very soon; it comes out on the 25th of June.

SHAMANTH RAO: 

Excellent! For all of those reasons, I’m excited to talk to you today and dive into this crazy, wild, brave new world of AI that we are in. 

AI is changing pretty much every day. And you know, even like what we did three, six months ago is obsolete. And you’ve written a book on prompt engineering. So, how do you think about AI’s obsolescence and everything you write about?

MIKE TAYLOR: 

Good question. That was the first thing that I talked to my co-author, James, about. Because we were like, should we even write a book about AI? You know, it might go out of date the day it’s printed. And so we approached the project specifically to make sure that that didn’t happen.

So what we did is we went back and looked at all the different things that we were doing with GPT. And then what changed when GPT-4 came out, and we kind of mapped, you know, what are the tips and the tricks and the hacks that you know basically faded away and were no longer necessary once the model got smarter. And what were the things that were still useful and didn’t get changed or removed from the toolbox that we’re using day to day to optimize AI applications.

So that was the approach that we took. And what we ended up on was five principles of prompting which we based the book on. We also based our Udemy course on that as well because we didn’t want to be updating it every, you know, every week as new things come out. And once you’re in print, obviously you can’t really update it. So the idea was to make it so that it was future-proof.

Like when GPT-5 comes out, I’m still fairly confident that these five principles will hold because they’re actually, the funny thing is, when you look at them, they are also things that would be useful for humans as well to manage other humans. So kind of that’s the interesting thread that came out of this is that as AI models get smarter, you’re starting to converge on basically management best practices, like how do you work with your coworkers? And I think that’s been a pretty interesting thread for me as someone who went from managing an agency of 50 people to now working on my own but managing, you know, 50 AI agents in the background. So yeah, it’s been an unusual and unexpected thread but it’s been quite welcome.

SHAMANTH RAO: 

Interesting how you mentioned how it helps in managing people because certainly, especially for a team that works asynchronously, that needs to communicate in writing, I think some of these principles very much are universally applicable. But can you tell folks what the five principles you noticed were?

MIKE TAYLOR: 

We kind of use this as a checklist and we go through these in order when we are working with different prompts. So, you know, this is, oh, by the way, I want to caveat this with you probably don’t need prompt engineering for most day-to-day usage of ChatGPT or Midjourney or whatever tools you’re using because these tools, you know, they work pretty well out of the box even if you type something relatively naive, they usually work pretty well. And I would only use prompt engineering when you’re going to be doing that task again and again.

So if you’re building a template that your team is going to use hundreds of times a week, or you know, if you’re building an AI application that’s going to automate some part of your work or something, maybe you’re even going to sell externally as a tool, then that’s when prompt engineering comes in, like when you’re running it hundreds or thousands of times.

So, when you’re doing that, the very first thing I do to improve the prompt is to give direction, and that’s like briefing your client, right? Briefing your team on what side of style you would like and what sort of persona the AI should adopt. Do you want this to act as a Silicon Valley product manager, or do you want it to act in the style of Steve Jobs or whatever it is you’re trying to emulate?

Then, we usually specify the format because when you’re automating a task, it’s really important what comes out of that prompt. What data structure is it? Is it in JSON? You can, as a developer, use that in the next step in the chain. And then that’s usually pretty straightforward now with today’s LLMs. There are some straightforward ways to make sure it returns the same format again and again. But that’s usually the very next thing we figure out.
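A minimal sketch of that format step in Python. `call_llm` here is a hypothetical stub standing in for whatever model client you use; the pattern is simply to ask for JSON explicitly, then parse and validate before passing the result down the chain:

```python
import json

def call_llm(prompt):
    # Hypothetical stand-in for a real model API call;
    # returns a canned response here for illustration.
    return '{"title": "Five Prompting Principles", "tags": ["ai", "prompts"]}'

def get_structured(prompt, required_keys, retries=2):
    """Ask for JSON and re-ask until the output parses with the keys we need."""
    instruction = (
        prompt
        + "\nRespond with only a JSON object containing the keys: "
        + ", ".join(required_keys)
    )
    for _ in range(retries + 1):
        raw = call_llm(instruction)
        try:
            data = json.loads(raw)
            if all(k in data for k in required_keys):
                return data
        except json.JSONDecodeError:
            pass  # malformed output: fall through and retry
    raise ValueError("Model never returned valid JSON")

result = get_structured("Suggest a blog title about prompting.", ["title", "tags"])
```

Newer APIs have built-in ways to force structured output, but the parse-and-retry wrapper is the general idea behind making format reliable in a chain.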

And then the third one, which makes the biggest difference but is also a lot of work, is to provide examples. So, take other times when you’ve done that task and put them into the prompt just to give it, you know, and show it essentially what all the different things that I value in a good answer are. Because, with a lot of these tasks, because AI is very fuzzy and you can do tasks that aren’t normally very easy to program, then providing an example of what good looks like can give the AI a lot more to go on. And a lot of my work as a prompt engineer is really just collecting good examples and then finding a way to evaluate them.
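The example-collection step above amounts to assembling a few-shot prompt. A toy sketch, with made-up example pairs standing in for the real past work you would collect:

```python
def build_few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot prompt: instruction, then worked examples, then the new case."""
    parts = [task]
    for ex_input, ex_output in examples:
        parts.append(f"Input: {ex_input}\nOutput: {ex_output}")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

# Illustrative examples; in practice these come from tasks you have already done well.
examples = [
    ("boring title about AI", "How AI Quietly Rewrote My Workflow"),
    ("post on A/B testing", "The A/B Test That Paid My Rent"),
]
prompt = build_few_shot_prompt(
    "Rewrite the input as a punchy blog title.", examples, "prompt engineering basics"
)
```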

So, evaluation is the fourth principle. What does good look like? You know, what does bad look like? What are some things we want to avoid? Then you define metrics for measuring those things programmatically. Because if you’re A/B testing these prompts just like you would A/B test a landing page on your website or, you know, A/B test an ad, making sure that you define the right performance metric is a really big thing. If you don’t have the right performance metrics, then you can’t make progress or prove that you’re making progress to your client.
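A programmatic metric can be as simple as a pass/fail function you run over every output. The criteria below are purely illustrative, not anything from the book:

```python
def is_good_title(title, banned=("ultimate guide", "in 2024")):
    """Toy quality gate for generated titles; the checks are illustrative only."""
    words = title.split()
    if not (4 <= len(words) <= 12):              # too short or too long
        return False
    if any(b in title.lower() for b in banned):  # clichés we want to avoid
        return False
    return title[0].isupper()                    # basic polish check
```

Once a function like this exists, "prompt A vs. prompt B" becomes a number you can compare rather than a vibe.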

Then, the fifth, the final one, is just to divide labor. So, split that task up into multiple tasks. Because once you have that evaluation metric, you start to realize that some parts of that task are doing well and others are not. And usually, you can’t complete a task with one prompt or one AI model. You want to split that up, so if it fails here, we carve that piece off and split that out as a separate task. Then, you can have a chain of events and bring all of them together in the end.
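Dividing labor looks like a pipeline of small, individually testable steps. In this sketch each function is a stub where a real system would make a separate, narrowly scoped prompt call:

```python
def extract_facts(text):
    # Step 1 (stub): in a real chain, a prompt that only pulls out key facts.
    return [line.strip() for line in text.splitlines() if line.strip()]

def draft_summary(facts):
    # Step 2 (stub): in a real chain, a prompt that only writes from given facts.
    return " ".join(facts)

def check_tone(summary):
    # Step 3 (stub): in a real chain, a prompt that only flags tone problems.
    return "ERROR" not in summary

def pipeline(text):
    """Each step can be evaluated and swapped out on its own when it fails."""
    facts = extract_facts(text)
    summary = draft_summary(facts)
    if not check_tone(summary):
        raise ValueError("tone check failed")
    return summary
```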

So, they’re pretty straightforward principles once you read them out. They make intuitive sense, but I’ve noticed that everyone else who’s working on LLMs are, you know, converging on these same sorts of principles like they actually map pretty closely to what OpenAI released as their guide as well. So there’s no kind of secret sauce there, but it is, I think, very useful to focus and kind of remind yourself like these are the basics, right? These are the fundamentals that you should be doing every single time.

SHAMANTH RAO:

 And you did say you sort of converged on some of these when you looked at what went away from GPT-3 to 3.5 to 4. Yeah. What were some things that became obsolete when you saw the evolution of these models?

MIKE TAYLOR: 

Good question. The second principle, specifying the format, has become much easier. We used to have to practically threaten the model to get it to return JSON data. The models from GPT-3.5 to 4 got a lot better at following instructions. And you know, there used to be many things that required arranging the words in a certain way or using specific types of templates; otherwise, it would go off the rails.

You had to pretend you were in a conversation; otherwise, it wouldn’t complete the task. A lot of that has gone away because these things have been trained to be helpful assistants now. And you don’t see a huge difference when you just change one word, right? There’s no magic hack where, if I put this word at the end, it improves things; that sort of thing is going away.

But you see that the principles, like giving clear direction in terms of the brief, are things that also work for humans, right? Providing examples of how a task should be done works for humans too. The LLMs are converging on the point where working with them is conventional, just how you would work with anyone.

SHAMANTH RAO: 

The principle that surprised me, even though I’ve read and heard it in other contexts, is to divide labor. And I think it surprised me because, in my mind, aren’t computers like omnipotent, if you will?

MIKE TAYLOR: 

Yes, they’re not these like magical genies. Yeah, they make mistakes too.

SHAMANTH RAO: 

Yeah. Yeah. Right. And it also struck me when I did a course on agents, and it said, oh, have specialized agents. And I was like, why? So, can you explain why specialization is necessary? Because if a computer has computing power, surely it should be able to figure out specialization or otherwise.

MIKE TAYLOR: 

Good question. So yeah, part of this is interpretability. As humans, we’re managing this system. We need to understand where it’s failing and where it’s succeeding. And it helps us if we split things out and say, okay, this agent is just doing this task, and this agent is just doing that task. It makes it easier for us to debug and see which agents are doing a good or a bad job. Right.

So, part of it is just our mental limitations. Same reason, you know, it’s, it’s easier for us, like if we, you know, you work in McDonald’s, you have one person flipping the burgers, you know, one person making the fries, you know, one person manning the till, you know, selling everything. So, specialization of labor is a useful way to organize complex systems and get a handle on them. Okay, where are the weak links in the chain? What do we need to improve?

But that said, I do see that when you get a pretty long prompt, these tools do start to get confused. The way I see it is, the more information you’re giving it, the more likely it is to pick up on something random in that information that conflicts with something else it has picked up. You might be giving it conflicting instructions without realizing it, or it might be paying too much attention to one part of the prompt and not enough to another. It’s similar for humans as well. So what you try to do is focus it down.

Once you split out the individual tasks, you can set individual evaluation metrics for each task. And that makes optimizing it much easier, because you can see that you’re not getting drift. One of the things I see a lot in projects is that when we make a change, it might improve one metric while harming another. Say you’re creating a blog post generator: I’ve very often seen changes that would increase the length of the blog posts but harm the quality of the blog posts. So the hard thing is to do both, right? Increasing the length and also having high quality. And that can be pretty hard to do.

So, if you’re not already doing this, a really simple, straightforward approach to a blog post generator is to ask it to generate an outline first and then generate each section in turn. Just by splitting that task out, you end up with much longer blog posts, and the quality is much higher than it would be if you asked it to do it all in one go.
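The outline-first approach Mike describes can be sketched in a few lines. `call_llm` is again a hypothetical stub returning canned text, so the control flow is the point, not the model calls:

```python
def call_llm(prompt):
    # Hypothetical model call; canned responses keyed on the request, for illustration.
    if prompt.startswith("Outline"):
        return "Intro\nWhy split tasks\nConclusion"
    return f"[~300 words expanding on: {prompt.split(': ', 1)[1]}]"

def generate_post(topic):
    """Ask for an outline first, then expand each heading in its own call."""
    outline = call_llm(f"Outline a blog post about {topic}, one heading per line.")
    sections = []
    for heading in outline.splitlines():
        sections.append(call_llm(f"Write the section titled: {heading}"))
    return "\n\n".join(sections)

post = generate_post("dividing labor among prompts")
```

Because each section gets its own call, the model’s full attention goes to one heading at a time instead of the whole post at once.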

SHAMANTH RAO: 

I imagine this is just because it hasn’t figured out the step-by-step approach itself. Interestingly enough.

MIKE TAYLOR:

So there’s a paper, actually a lot of papers, about this type of technique called chain of thought, where you ask it to lay out the steps before it answers. “Let’s think step by step” is one of the common things you put into the prompt. And that improves performance so much that if you ask ChatGPT to do a task, it will do the chain of thought internally. OpenAI has programmed it to always approach things with chain of thought; they put that in the prompt that they give it.

Similarly, someone recently found with Claude 3.5 Sonnet, the new one, that if you ask it to replace its special tags with dollar signs, it exposes its hidden formatting, and you find this tag where they give it some thinking space. They call it antThinking, like Anthropic thinking. So what they’ve programmed it to do is return this thinking first, before it answers you. Normally you don’t get to see that; they hide it away. But it’s happening because splitting that thinking task out leads to such a big boost in performance.
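The same pattern is easy to apply in your own prompts: ask for reasoning first, then parse out only the final answer, much like the hidden thinking sections Mike describes. A small sketch with an illustrative model output:

```python
def with_chain_of_thought(question):
    """Wrap a question so the model reasons before answering."""
    return (
        question
        + "\nLet's think step by step, then give the final answer on a line "
        + "starting with 'Answer:'."
    )

def extract_answer(model_output):
    # Keep the reasoning hidden from the user; surface only the final line.
    for line in reversed(model_output.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return model_output.strip()  # fall back to the raw output

# Illustrative model output for a word problem:
out = extract_answer("Step 1: 3 boxes of 4.\nStep 2: 3 * 4 = 12.\nAnswer: 12")
```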

SHAMANTH RAO: 

That makes sense. And, you know, I know something else you said was also that prompt engineering again. I’m sort of rephrasing what you said, but you said prompt engineering can be important when you’re sort of executing a lot of the models at scale rather than a one-off thing. Can you explain and elaborate what you mean by that? And I also ask because for the vast majority of people, they’re like, oh, I probably won’t do anything at scale. So why is that important at all?

MIKE TAYLOR: 

Yeah, good question. And it’s a key point, I think, because a lot of people call this stuff prompt engineering where they’re like, oh, use my prompt template. It’s got some magic words in it and it’s going to solve all your problems. And that actually reminds me a lot of the early days of growth hacking, where people were like, oh, I just have this one hack and it’s going to solve all your problems. Really, the best growth teams are the ones operating at scale. Like the Facebook growth team: they didn’t just translate their website, they built a tool that would allow people to translate it into their own languages, and that scaled massively, to billions of users. And you know yourself, when you run an A/B test, if you’re just changing small things like button colors, or if you’re not actually spending that much on ads, your tests are never going to be statistically significant. They’re just not going to conclude; you could be running them for years. It’s a similar analogy here.

Like with just prompting, if you’re just trying to get some ideas and you’re saying, okay, I just want a new title for this blog post. Right. And this is a task that you do maybe once a month or, you know, whatever. It’s not something you do very often. You can just use it as a brainstorming partner. And if it gives you like nine terrible ideas, but one really good idea that kind of sparks your interest, then it’s done its job and you don’t have to worry about it. But if you’re building like a blog title generator, you must think about that problem completely differently. You can’t have nine terrible ideas. So I think you need some system for filtering the ideas at the very least before you show them to the user. So you’ve already now got like two steps you have to optimize.

And you know, if I was building a blog title generator, I would be running it a hundred times with one prompt, running it a hundred times with another prompt, and then counting: okay, when I use prompt A, I get 17 percent good titles, and when I use prompt B, I get 27 percent good titles. So that’s made a big difference. That’s the type of thing you need to consider as a prompt engineer. And that’s why it ends up being so important to learn how to code, I think, to be a prompt engineer. Even though you don’t have to code to write prompts, you do tend to need at least a little bit of Python knowledge or JavaScript knowledge to run these things at scale, run it a hundred times and check the result.
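That counting loop is only a few lines of Python. Here the prompt variants and the judge are stubs, with hit rates contrived to echo the 17 percent vs. 27 percent figures from the conversation; in a real run each variant would be a model call and the judge would be your evaluation metric:

```python
def judge(title):
    # Stub judge: in a real system this is your metric function or a human label.
    return "good" in title

def run_variant(make_title, n=100):
    """Generate n titles with one prompt variant and count how many pass the judge."""
    wins = sum(judge(make_title(i)) for i in range(n))
    return wins / n

# Stub generators standing in for two prompt variants.
prompt_a = lambda i: "good title" if i % 100 < 17 else "weak title"
prompt_b = lambda i: "good title" if i % 100 < 27 else "weak title"

rate_a = run_variant(prompt_a)  # 0.17
rate_b = run_variant(prompt_b)  # 0.27
```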

SHAMANTH RAO: 

For any folks that are listening, coding doesn’t have to be this big, impossible thing to attain, because I use a number of custom GPTs and AI agents. And that scalability is so important, because a lot of this is so critical for our marketing. Like, look, if I want ten scripts and five of them are terrible, that custom GPT or agent is not usable. So, I don’t think people need specialized programming skills. You probably need some Python and stuff for an agent, but for a custom GPT, you don’t. But you still need to apply many of these principles, even if you’re not a programmer.

MIKE TAYLOR: 

I think having at least a little bit of technical knowledge, or at least not being afraid to run some code and see what happens, is really valuable, because in some respects ChatGPT and all these AI tools have made programming a lot more accessible. You can actually just ask it, how would I write a script to do this? I see tons of people who have never coded before who have written custom Google Sheets functions, where they just run it: I just did what ChatGPT told me and it worked, you know. And I think that should be applauded. Yeah, you’re going to make mistakes, but you can also feed the errors back into ChatGPT and ask it, what’s the problem here? That’s a good thing; people shouldn’t be afraid to run a bit of code at least.

And you know, you can also ask someone who knows how to code; people are pretty friendly in the industry. But yeah, I would say that being able to code, or at least run some of this code, gives you a real advantage. Right now you might be stuck using OpenAI’s GPTs implementation, which isn’t that great, to be honest. There are better ones out there. If you just know how to follow a tutorial and hit run, you can have a far more customizable bot that can talk to your documentation, or query all of your past experiments and give you ideas for new experiments to run. Whatever it is you’re trying to build, with just a little bit of technical knowledge you can customize these things and get much better results, because these tools have all just been built, right?

Like GPTs came out fairly recently, and the implementations aren’t great, but other people in the open source community have made much better implementations, and you can just steal from that and say, okay, I’m going to adopt the way they do this. It gives you that edge. A lot of these tools are not available to non-programmers initially. I remember when Stable Diffusion came out, it was free, and I couldn’t even get access to DALL-E 2 at the time, which looked cool and I wanted to try it. Stable Diffusion was free, and I struggled; it took me a couple of days to figure out how to run it. But once I figured it out, I had it running on my computer for free and could make whatever image I wanted. It wasn’t locked away behind not being in the right beta or whatever. So I think having a little bit of technical knowledge gives you access to these things ahead of other people. Otherwise you have to wait for a programmer who gets access to Stable Diffusion to go build a tool that lets you use it, you know?

SHAMANTH RAO: 

And I think I was reading this piece that I forget who said it, but they were like, the next big programming language will be English because that’s how accessible all of this is now, right? You know, and for a lot of performance marketers, what are some of the underappreciated aspects or applications of AI?

MIKE TAYLOR: 

Yeah, one that really stands out in my mind recently is an agency called Feature, based in Berlin; I’ve done a bunch of work with them in the past. And they were saying, okay, we’re using generative AI to generate new ideas for our ads. Actually, for them it was app store assets: icons and all the different kinds of images you need in an app store listing, but you could use this for ads as well. And they were like, it doesn’t look good enough. It’s not the same style as the client. No matter how we prompt it, we’re just not getting the same style. And I think that’s a pretty common problem, and so is getting the characters to be consistent, right? You can generate one image with a specific character, but it wouldn’t be the right character, not the one the client has signed off on. Or even from image to image, it doesn’t stay consistent.

So I showed them how to use Dreambooth, which is a training technique you can run. And again, this is one of those things that is freely accessible. If you know a little bit of code and can run a Google Colab notebook, you can run it, it takes 20 minutes, and you can train Stable Diffusion on any concept or object or character that you have. I did this in the past to make my Twitter profile picture, which is me in the world of “Into the Spider-Verse,” but you can use it for less nerdy reasons as well. So I showed them how to train a model on their client’s assets. We just uploaded 30 images of a specific character from that client, and by the end of it we had a model that could prompt flawlessly, basically, in that style. So now they’re generating assets which are much closer to production and much closer to being signed off, because the client’s like, okay, great, this is exactly how we imagined it to look.

SHAMANTH RAO: 

And that exact principle can also be applied to scripts, which is actually something we have done. You know, I’ve spoken about the custom GPTs. The first attempts at our building scripts were just terrible, right? It was like GPT-speak. But once we added a lot of training scripts, I think that gave us increasingly better quality output.

MIKE TAYLOR: 

Yeah, exactly. And there’s another: I did some work with WeDiscover, the PPC agency, on Google Ads. The reason I use this example is that it’s pretty typical of the type of task you can use AI for. There are lots of jobs in PPC where you need some intelligence, but it’s also kind of boring to an intelligent person. Doing search query reports, rewriting product headlines, these sorts of tasks where you might have a hundred thousand or a million ads that you need to rewrite. And that can be super boring. So we did this in a Google Sheet, where we coded a custom function you could just run, and then you don’t actually have to know how to code to use the tool. So I think that sort of thing is really automatable. And you can also figure out some tricks for making sure that the AI doesn’t do anything wrong.

So one of the clever things we did is we pulled the product description from the website into a cell. Then we had another function which would count all the different words used in the product description, and if the AI used any words that weren’t in the product description, it would count as a hallucination, and we would reject that. Say you have a company that sells cotton sheets, and the copy says polyester sheets: if the word polyester is not in the product description, that’s a hallucination, so now you can catch it. So there are these things you can do to make your life easier and avoid some of the downsides that come with AI as well. But you don’t necessarily have to be a great coder to write that function; I literally just asked ChatGPT to write it for me, so yeah.
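The hallucination check Mike describes fits in a few lines of Python. The word splitting here is a simplification (lowercase letters and apostrophes only), and in practice you would probably whitelist common filler words too:

```python
import re

def hallucinated_words(description, generated):
    """Return words in the generated copy that never appear in the source description."""
    def words(text):
        return set(re.findall(r"[a-z']+", text.lower()))
    return words(generated) - words(description)

desc = "Soft cotton sheets, machine washable, available in queen and king sizes."
copy = "Luxurious polyester sheets in queen sizes"
extras = hallucinated_words(desc, copy)  # flags words like 'polyester'
```

If `extras` is non-empty, the output gets rejected or regenerated rather than shipped.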

SHAMANTH RAO: 

And it worked.

MIKE TAYLOR: 

Exactly. And it’s astonishing how well. Yeah, I had to learn a lot of this stuff, which certainly wasn’t something I expected or anticipated.

SHAMANTH RAO: 

Mike, we could continue down this rabbit hole, but I want to respect your schedule. This may be a good place for us to wrap up. Of course, we will link to your book. We’ll also link to your Udemy course, which you briefly mentioned. I’ve taken it and highly recommend it. And that’s the best $9 anyone can invest in.

MIKE TAYLOR: 

We don’t have control over Udemy’s pricing, so I tell people just to wait until Udemy puts it on sale because, yeah, they jump around with the price, but it’s, yeah, we’ve had almost 100,000 people take the course now. It’s pretty crazy.

SHAMANTH RAO: 

I also like and appreciate that you guys updated. I went back to it a few months after I first went through it, and there was a ton of new stuff. So, I would highly recommend that people check that out.

For now, it’s a good time to wrap up. Mike, could you tell the folks how they can learn more about you and everything you do?

MIKE TAYLOR: 

The best place is Twitter. I’m hammer_MT on Twitter. I’m always on there; feel free to tweet at me, or X at me. https://brightpool.dev/ is the website for me and James, my book co-author. The book is on Amazon: Prompt Engineering for Generative AI. You can tell which one it is because it has a picture of an armadillo on the front. It’s a classic O’Reilly thing: they choose a different animal for each book, and ours is a screaming hairy armadillo, so you can’t miss it.

SHAMANTH RAO: 

Cool. We will link to all of that. And I’m excited to see what else happens in your life and the big world of AI as we move forward.

MIKE TAYLOR: 

Yeah. I appreciate you having me on again.

WANT TO SCALE PROFITABLY IN A GENERATIVE AI WORLD?

Get our free newsletter. The Mobile User Acquisition Show is a show by practitioners, for practitioners, featuring insights from the bleeding edge of growth. Our guests are some of the smartest folks we know, working on the hardest problems in growth.