Have you ever wondered how AI is revolutionizing the world of food content creation? 🤔 Let me introduce you to the story of how AI is helping designers and developers like William Sayer create mind-blowing, photorealistic images of the food he's going to share with his friends!
00:02:47 BigMealShare: a platform to share meals, with food photos by AI!
00:05:33 Training AI to be a prompt engineer
00:07:51 Research shows 7 key elements for image prompts
00:11:50 No-code tools
00:16:41 LEGO camera!
00:21:14 Social media reflects our desired personas visually
Follow up:
Instagram.com/will_straya
WillSayer.com
Instagram.com/bigmealshare
BigMealShare.com
Next mastermind:
To attend the next mastermind, go here: PromptEngineeringMastermind.com
Stay in touch on:
Youtube: youtube.com/@PromptEngineeringPodcast
Telegram: https://t.me/PromptEngineeringMastermind
LinkedIn: https://www.linkedin.com/groups/14231334/
Support the Show:
Prompts I sell on PromptBase: https://promptbase.com/profile/promptgreg (cover letter generator, event planner, etc)
If you sell ChatGPT prompts, get my analysis of all PromptBase's prompts: https://gregschwartz.gumroad.com Use coupon code "podcast" for 10% off!
Aspiring artists: my friend Graham created an awesome course called Instant Expert Artists Breakthrough. It'll teach you how to create amazing art with Midjourney. Sign up at http://AiArtForBeginners.com/greg to get 80% off.
Welcome to the Prompt Engineering Podcast, where we teach you the art of writing effective prompts for AI systems like ChatGPT, Midjourney, DALL-E, and more. Here's your host, Greg Schwartz. All right, hello, everyone. Will, go ahead and introduce yourself.
William Sayer: Hi, I'm William Sayer. I'm a designer and a low-code developer. I've been working in UX/UI since I studied it at university, and then I worked at a startup for a few years. Now I'm working on my own project, trying to bootstrap a variety of tools together using AI suites, particularly text-to-image generation. Most of my experience is creating images of meals, but I've been using it for a variety of other things, and I play around with it a lot. I have a little bit of experience, mostly from trial and error, I'd say.
So what got you into using AI?
William Sayer: Yeah, it's a great question, Greg. Originally it was the concept that you could produce content. Blogs are a huge way to draw traffic into any website, and knowing that it can produce credible content explaining a topic you may not even understand, summarize a variety of information out there, and distill it to a level you can follow, that hooked me. As a UX designer, similar to you, Greg, there might be a few concepts or framings I just didn't have my head around, and I felt GPT was an amazing tool I could use to expand or collapse any topic into more or less detail, faster than manually looking it up on the internet.
How do you use AI?
William Sayer: To go back to the blog sphere: I do freelance work, and I've done a couple of projects for clients where it's, oh, you need to create a blog so it attracts more visitors to your website. Okay, I don't even know how to structure this blog. What are maybe ten different points I could talk about? Bam, ChatGPT has spat out ten potential articles. And then under each of those you can say, expand on one of those points, or, that's too complicated, refine it so that an eighth grader could understand it.
So do you mostly do text generation?
William Sayer: Yeah, that's correct. In the early days it was just ChatGPT, and now it's progressed to image generation as well. I was definitely text-oriented first, and then it was, wow, there's so much more out there you can do than just text. A website I've been working on is called Big Meal Share. Basically, it's a platform to help you share meals with your friends and your family. Say you live in an apartment block; one of the use cases would be, hey, you can share food with your neighbors. The concept is, I'd come home late from work and think, oh, it's 7 p.m. and now I've got to cook dinner for myself. Wouldn't it be amazing if I could somehow sit at my neighbor's table, since I can smell their food while I'm walking up my apartment stairs? That's the project I've been working on recently. I'm sure there are a lot of legal implications and risks; I'm just trying to get an MVP out there. As part of that, having an image is crucial to attract people to the meal you're proposing on the platform, and a lot of the time people haven't yet cooked a particular meal and photographed it. So I've been using DALL-E to create four different images of how the meal could look, for the user to choose from.
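For listeners curious how that could look in practice, here's a minimal sketch of generating four candidate images, assuming the pre-1.0 OpenAI Python client from the DALL-E 2 era; the prompt suffix and function name are illustrative, not Big Meal Share's actual code:

```python
# Minimal sketch (pre-1.0 "openai" client, DALL-E 2 era).
# The photography suffix and function name are illustrative,
# not Big Meal Share's actual production code.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to come from env/config

def meal_image_options(meal_description: str) -> list[str]:
    """Return URLs of four AI-generated photos for the user to pick from."""
    prompt = (
        f"{meal_description}, food photography, studio lighting, "
        "shallow depth of field, award-winning photography"
    )
    response = openai.Image.create(prompt=prompt, n=4, size="512x512")
    return [item["url"] for item in response["data"]]

# e.g. meal_image_options("homemade mushroom risotto") -> [url1, ..., url4]
```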
Oh, interesting. Wait, I thought those were photographs of the food. You're saying the meal images are actually all generated by AI? Wow.
William Sayer: That's right, yeah. Some of the food images when I first started out using DALL-E 2, oh my gosh, they just looked like dog food, really. And it's, who's going to RSVP to this when it looks like that? Something that really helped improve my prompting in that regard is specifying studio lighting, defining a specific type of camera, maybe a Sony Alpha, defining a lens, just saying what the aperture is. For anyone who has a basic understanding of photography, that comes very simply: whether you want aperture blur, that bokeh in the background, or the image blurry or crisp. Just a basic understanding of photography goes a long way toward the final image and how it looks.
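As an illustrative contrast (these are not Will's exact prompts), the difference is the photography vocabulary appended to the same subject:

```python
# Illustrative contrast, not Will's exact prompts: the same dish
# described bare versus with photography terms appended.
bare_prompt = "spaghetti bolognese"

photo_prompt = (
    "spaghetti bolognese, food photography, shot on a Sony Alpha, "
    "50mm lens at f/1.8, soft studio lighting, shallow depth of field, "
    "bokeh background, crisp focus on the dish"
)
```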
How do you build those prompts?
William Sayer: This was in ChatGPT. The idea is that you're training it to take on the role of a prompt engineer. GPT wasn't really trained on data past, I think, 2021, so I believe there isn't a huge amount of information that helped it learn what a prompt engineer is. So this often takes a little bit of refining. For example, sometimes it won't spit the prompt out as a paragraph, but you can train it, because it's got memory within that chat. You basically say, this is the formula for a prompt, then, change the formula to include this, and, modify it to include that. I end up having a back-and-forth with ChatGPT a few times until I feel it's spitting out a reasonable prompt. Then, if you're in Midjourney or whatever, instead of just writing "teddy bear," you can go over to ChatGPT and ask for a prompt about a teddy bear, and it spits out a paragraph. Once you've got it set up in ChatGPT, for the same amount of time input you get really detailed prompts that can often really improve the results.
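A minimal sketch of that setup, again assuming the pre-1.0 OpenAI Python client; the "formula" text here is a reconstruction for illustration, not Will's exact instructions:

```python
# Sketch: seed ChatGPT with a prompt "formula," then reuse it for any
# subject. Pre-1.0 "openai" client; the formula text is illustrative.
import openai

FORMULA = (
    "Act as a prompt engineer for a text-to-image model. When I give you "
    "a subject, reply with one paragraph describing the scene, the camera "
    "and lens, the aperture, the lighting, and the mood. "
    "Output only that paragraph."
)

def build_image_prompt(subject: str) -> str:
    """Turn a bare subject like 'teddy bear' into a detailed image prompt."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": FORMULA},
            {"role": "user", "content": subject},
        ],
    )
    return response["choices"][0]["message"]["content"]

# e.g. build_image_prompt("teddy bear") -> a paragraph ready for Midjourney
```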
So those prompts use shot prompting to say, oh, use a Canon f/2.0 lens with a polarization filter, things like that.
William Sayer: I'm still testing and playing around. I do think it works better with shot prompting, like you suggested. However, I think there are scenarios where shot prompting can be a little too specific for the data the model was trained on. Maybe there aren't that many photographs on the internet taken with a Canon 5D Mark III using a 70mm lens at aperture f/5.6. Sure, the very popular cameras and lenses have a whole bunch of data to train on, but sometimes the specificity might take away from the result. I'm still trying to figure out when that's useful and when it's not.
What are the core parts of your image prompts? Are there any specific things you always think about?
William Sayer: I've done a little bit of research on that, and the understanding I have is that there are maybe seven things to touch on. First, the subject. Then the medium: is it a line drawing, is it a watercolor, is it a photograph? Then the environment, the lighting, the colors, the mood, and the composition. Sure, maybe mood and lighting overlap a little bit, but I feel having these words in there really helps, especially if you're going from ChatGPT into Midjourney. Take composition: I may say, give me an image of a pirate, and without having to rack my brain over what the composition of a pirate should be, maybe a tri-corner hat, a parrot on his shoulder, walking down a boardwalk, I can pass all of that over to a tool like ChatGPT. It comes up with the setting for you and spits out a paragraph you can put straight into Midjourney. The time to iteration is so much faster; you get so many more images, and you go, oh, I noticed one thing in this image that I love, and something else in that image that I love. Then, if you're creating, say, a hero image for a website, something that needs to be picture perfect, you've got all those bits of inspiration that can now come together.
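That seven-element checklist is easy to encode. Here's a hypothetical template, with element names taken from the conversation and sample values invented for illustration:

```python
# Hypothetical template for the seven-element checklist.
# Element names come from the conversation; sample values are invented.
SEVEN_ELEMENTS = ["subject", "medium", "environment", "lighting",
                  "colors", "mood", "composition"]

def assemble_prompt(parts: dict[str, str]) -> str:
    """Join whichever elements are filled in, in checklist order."""
    return ", ".join(parts[k] for k in SEVEN_ELEMENTS if k in parts)

prompt = assemble_prompt({
    "subject": "a weathered pirate",
    "medium": "photograph",
    "environment": "walking down a boardwalk at dusk",
    "lighting": "golden-hour side light",
    "colors": "muted teals and warm ochre",
    "mood": "adventurous, a little wistful",
    "composition": "full-body shot, rule of thirds, parrot on his shoulder",
})
```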
Got it, so it's describing all the parts. I'm curious: there's been a lot of controversy around people making images in the style of a popular artist. How do you feel about that?
William Sayer: It's very popular for people to say "in the style of" Monet or Van Gogh or whoever it may be, and I feel that brings up a whole new discussion: do we have their permission? There are famous, currently living artists who do digital art, and if you say "in their style," and they've put enough work out there, sure, Midjourney can replicate it. But then you've got to feel that maybe you should credit that artist if you're going to produce that piece of work. It's a blurred line; we're still in the very early days, and we don't really know how to address it just yet. But it's worth pointing out, and my feeling is maybe I'm not going to specify an artist by name to copy their style, because they've spent 20 years of their life figuring out how to draw in that particular style. Do I feel morally okay just copying that? That's something worth keeping in mind.
I definitely know what you mean. Sounds like you don't use artists' names in your prompts.
William Sayer: I've definitely done it, but only for my own personal curiosity. I feel like as soon as you start making money from something, you need that permission. I heard Grimes announced that if someone wanted to create an AI version of her music, she's happy for them to go ahead and do it, as long as they give her 50% of the money they make. So I feel that at least contacting the artist and saying what you're using it for is an important step if you're using it commercially.
Oh, interesting. Do you think that'll help or hurt her brand?
William Sayer: Good question. I'm sure you've seen or heard of the AI Drakes and the AI Kanyes, and it'll be really interesting what it does for their brands, whether they allow other people to create music, because it could potentially help them grow. But yeah, I don't know. It'll be really fascinating to see what happens.
So what AI tools do you wish existed?
William Sayer: I use a variety of no-code tools: Webflow is one of them, Figma is another, and I use Airtable. If people aren't familiar with them, that's fine; there's a whole suite of no-code builders available out there these days. Sometimes there's a very simple thing I just don't know how to do, and I would love to be able to describe to some sort of AI agent how to use a particular piece of software for me. How it gathers that data, I'm a little unsure; maybe screen recordings. Take Figma as an example, since a lot of people in the audience might know it. For those who don't, it's basically an online tool for drawing wireframes and digital designs for screens. There are so many users out there, and maybe you don't know how to use auto layout, or components, or something like that. If there were some sort of agent where you could ask, hey, how do I do this, and it could almost show you where the mouse is meant to go on the screen... How it gathers that data is another interesting question, but something like that would help people who are new. There are so many times I've been caught up on one little thing, and I know a professional would do it in ten seconds, but I'm sitting there for an hour trying to figure it out. I think one of the big benefits of AI is that it's democratizing a lot of these services. Anyone out there with access to the internet now has power that ten years ago was only available to big companies or agencies, so small indie hackers and builders can bootstrap a variety of tools together and make services addressing niches that maybe didn't have the market value to address before. So I feel we're going to see a whole bunch of these software services popping up to fix tiny little issues we haven't been able to address before, and hopefully a lot more services improving our lives, because now there are people out there to build them.
All right. So I want to talk about one of the prompts you shared with me. This is a little long, but I want to read through it. "Capture the essence of innocence and purity in a hyper-realistic portrait of a baby that transcends reality. Every minute detail is meticulously rendered, revealing the softness of the baby's delicate skin, the wisps of fine hair, and the sparkle in their wide, curious eyes. The lighting is masterfully employed, casting gentle shadows that enhance the three-dimensional quality of the image while highlighting the subtle contours of the baby's face. The color palette is expertly balanced, with a natural yet vibrant rendition that brings the portrait to life. The mood of the photograph is tender and captivating, evoking a sense of wonder and enchantment. The composition showcases the baby's captivating features as the focal point while incorporating elements that accentuate their vulnerability and beauty. This hyper-realistic portrait of a baby is a testament to the photographer's skill, artistry, and ability to capture the fleeting moments of early life with astonishing precision. Its remarkable attention to detail and emotional impact make it a strong contender for a prestigious photography award." Wow, that's a lot. I'm particularly impressed with a couple of pieces of that, like the "transcends reality" part. How did you come up with that?
William Sayer: Yeah, I think you're exactly right: that's exactly what came out of ChatGPT. After a few back-and-forths creating the Midjourney formula, all you have to do is put in "baby portrait" and it comes out with something like this. When it says "transcends reality," maybe there are little bits that get lost along the way; Midjourney won't quite capture everything in that poetic description, and it might be a little over the top. But it certainly saves the user time. I don't even know if I'm capable of writing something like that, to be honest. Some of it is probably lost on Midjourney, and you do want it to be a bit more concise than this. When I was reading through that prompt, I thought, maybe I need to refine the agent a little more so it takes out some of that language, but I think it goes to show an example of what it can do.
Wow, okay. It still seems long, and it has a lot of adjectives. Do you think that's something you should shorten?
William Sayer: Yeah, I think it totally is. Although a lot of those adjectives in there, like "meticulously rendered" and "softness" and "delicate," actually add some elements you wouldn't expect. There was a pretty interesting scenario I wouldn't have predicted. I was experimenting with these prompts and thought, what will happen if I say "Lego" as my input to ChatGPT? What it spat out described setting the aperture of the camera to maybe f/4, changing the shutter speed to this and that; a lot of the language in the prompt was about photography style. And the interesting thing Midjourney spat out was a Lego camera, with hands adjusting the settings of this Lego camera. It used a lot of that terminology and almost included it in the image itself. That's not what I was going for, but it gave me an unexpected image based on all that photography setup.
What other tips can you share to write better Midjourney prompts?
William Sayer: Something that really upped my game in Midjourney was learning about remix mode. Basically, when it spits out an image, you can hit Remix and it shows you the entire prompt. Then you can fine-tune a few little things in that prompt, and it'll regenerate using those changes. That really helps with Midjourney. Sometimes you can use ChatGPT to create a whole bunch of creative scenarios and run those a few times, but once you're happy with the direction you want to go and want to fine-tune something a little more, that's where you just change a few of the words in Midjourney itself.
Oh, interesting. Okay. How does remix work?
William Sayer: Normally you've got Upscale 1 through 4 and Variation 1 through 4. Remix mode adds a ninth button. When you click it, it uses the previous four images that came out, but instead of just giving you a variation or an upscale, it brings up a text input; whatever you type there gets considered and applied to those four images. I didn't learn about that until recently. The space is always changing, and it's really exciting to hop on Twitter or YouTube every now and then and watch tutorials other people have put out. Sometimes that's the best way to stay on top of what works.
Got it. So what projects have you shipped, a.k.a. released to the public?
William Sayer: Yeah, good question. The only product I've shipped is still in MVP mode: Big Meal Share, which I was describing before. It uses those prompts, four distinct prompts, to generate images of meals. Anyone can go on and describe a meal, put in your date, your location, yada yada, say who your friends are, and it'll send them the invite and come up with these images. They used to look like dog food, as I said, but now I'm including more of those photography terms. "Award-winning photography" is one of my favorite catchphrases to put at the end of prompts recently, plus studio lighting and all that sort of stuff. It really helps the result look professional grade, which almost sets a standard for what someone's meal needs to look like now. I don't know if that's a good thing or a bad thing, but it definitely gets people to RSVP.
So do you share the art you've created on social media? In fact, I could see posting the meals to Instagram or Pinterest to generate buzz for the site. Is that something you've thought of?
William Sayer: Good point. Honestly, I've never really posted photos of meals on Instagram; it's more just cool times in my life, though if a meal is really good, I will. But bringing that whole social element back into things: you can see how, if it's a cloudy day, you can remove the clouds from the back of an image, or if there are other people on the beach, you can just scrub them out. It's a really interesting space we're heading into with social media, because for the younger generations a lot of socializing is done through these platforms. There have been many times where I've met somebody, and if I don't have the same type of social media as them, we just haven't clicked as much. What you post visually shows who you are as a person, and the fact that we're able to change so many things now... is it leading us down a rabbit hole, taking us further and further from reality? That's one way to look at it. Or you're creating a cool internet persona, an artistic way to reflect how you want people to think of you. I think there are pros and cons to all of these things, and we're entering a whole new age of the unknown. So yeah, it'll be interesting what happens.
It's been awesome having you on the podcast. What are some ways people can see more of your art and follow up with you?
William Sayer: The Instagram handle I use is will underscore straya, S-T-R-A-Y-A, which is a bit of a play on "Australia." That's where I'm from, as you can probably tell. And I've recently been really getting into Twitter. Even though I don't have many followers, since I moved to the Bay Area I feel like that's where to get the news. I'd never really used it before, but I've been getting into it a lot recently, and I love it; it's so interesting. Sure, there are parts of both Instagram and Twitter that may be considered toxic, but if you use them in the right way, they can be a great source of information. My personal website is willsayer.com.
And how about the other project you mentioned, Big Meal Share, for the people who want to get together with their friends and share a meal?
William Sayer: That's bigmealshare.com. We're probably most active on Instagram, where the handle is simply bigmealshare. We've also got a Twitter, and I'll be working on a TikTok for it soon as well. Yeah, thanks for the plug.
Awesome. Thank you again for coming on; it's been a lot of fun having you. Thanks for listening to the Prompt Engineering Podcast, the podcast dedicated to helping you be a better prompt engineer. I also host masterminds where you can collaborate with me and 50 other people live on Zoom to improve your prompts. Join us at promptengineeringmastermind.com for the schedule of upcoming masterminds. Finally, please remember to like and subscribe. If you're listening to the audio podcast, rate us five stars; that helps us teach more people. And if you're listening to the audio, you might want to join us on YouTube so you can actually see the prompts: just go to youtube.com/@PromptEngineeringPodcast. See you next week.