This ain't your grandma's vibe coding. It's also not last week's vibe coding. If we're being real with each other, by the time you read this sentence, things will have already evolved again. Let's talk about different tiers of vibe coding and what they unlock for you as a developer!
As with all livestreams, I'm looking forward to answering YOUR questions! So join me live and ask in the chat, or you can comment now, and I can try to get it answered while I stream.
View Transcript
Just waiting for the streams to stream. We'll see where things are going here. No Instagram today cuz we're going to be coding, and I feel like Instagram's vertical video and code is probably the worst combination I could possibly imagine. So, we're not doing that. I do appreciate people joining. Friendly reminder: my live streams are always in AMA format. So, at literally any point, please just interrupt me. When I'm wrapping up a thought, I'll try my best to answer questions and stuff. This is how we do on these streams. This stream's going to be a little bit different. Periodically, I will do a coding live stream. I used to try and do multiple streams a week. One of them would be a coding one. The other one would very much be a sort of code-commute kind of
topic. Thanks for all the hearts on Substack. [laughter] Like, what's going on over there? There's all these hearts flying in. Thank you. I appreciate it. So we'll be coding today. The topic is going to be vibe coding. I wanted to talk about vibe coding and sort of the evolution of it. And vibe coding at this point, it's so funny to think about. [laughter] It hasn't even been that long, but the phrase itself is already so overloaded. So I want to talk about that. We're going to go through some code. Total disclaimer: because it's live coding and because it's going to be with AI tools and stuff like that, it's not going to go perfectly smoothly. I've been kind of, you know, talking myself up over the weekend and stuff, and
even today, I'm like, just prepare. It's not going to go as planned. And I hope you all know that. That's totally fine though. But we'll explore some things. And really my goal with this is to look at some of the different tools we have. By the time we're finished doing this, I'm sure there's going to be a whole new set of tools because things are moving so fast. But I wanted to talk about sort of the evolution of how these things are going. So with that said, folks on Substack, you already know where the newsletter is. For other folks, give me one sec. I've just got too many things pulled up for trying to do this stream, and my camera, it's very unfortunate, my camera blocks prime real estate for trying to dock windows. Pain in the
butt. But this is the newsletter I'm just going to put into the chat, and I'll quickly show you when I switch my screen over, as long as nothing breaks here. So this is the newsletter. And what I do in this newsletter... this one was written a little bit differently. Usually it kind of feels like a blog post with a bit of storytelling, so to speak. This one is really just me breaking down what I'm looking at: four high-level areas for, air quotes, vibe coding. And we'll quickly see that as we start talking about this, your definition of vibe coding versus someone else's versus where it started is going to be very, very, very different. And that's kind of the point that I want to get across here. So, this article talks about a bunch of
different parts of that. And it's linked in the chat, like I said, on Substack. You know where to go. But to go way back in time, to really not that long ago when people first started talking about vibe coding, the idea with vibe coding was really the least amount of software engineering that could go into building something. And I don't mean that to be a jerk or to be condescending to anyone. I mean, by definition, it was to not care, right? It's going with the vibes. So, whatever the LLM was telling you, you'd say, "Hey, LLM, go, you know, make me a bicycle." And the LLM is like, "Sure, here's a bicycle." And you go, "Cool." And then you go to try the bicycle and you're like, "Bicycle, no work." And then the LLM's like, "No problem. Here's
a different bicycle." And then you try the bicycle again and you're like, "Bicycle move, but bicycle uncomfortable." And the LLM is like, "No problem. I got you." And, like I said, it's the least amount of actual effort going into this. Now, for some people, and again, none of what I'm saying is intended to be condescending, right? For some people, this is groundbreaking, because for maybe the first time, they felt like, I can actually get software out of this, right? I'm someone who was interested in software, in building things, and at no point ever before have I been (can't speak!) able to take a concept and start getting something out of it. And truly, I mean this in a sincere way: that's pretty incredible for some people, to be
able to go from "I could not do anything with this before," or I felt that way, to "holy crap, you're telling me that I can get something out of this? I can, you know, talk back to this LLM and say, 'Hey, it's not working. Here's an error message,' and it keeps incrementally improving it?" That is pretty incredible. Okay. And then there's other folks that have some software development background, right? And this is where things start to diverge, because there's all ranges of experience and skill, going from people who had none of it before and said, you know, there's no way I could code or build anything in software, to individuals who had at least some experience going, hey, wait, you know, with this code that came out of this LLM, I can already see that there's some
problems here, right? Let me proactively do a little bit more with this. Hey, ChatGPT, because that's what we were all using in the beginning, you know, that's not quite right. I think you've got to go back and adjust this, right? So, already we're breaking the rules, you know, the rules of vibe coding, because we're giving a crap about what it's saying. It kind of sounds funny because obviously we want to give a crap about what it's saying. We're trying to build software. So the evolution, or the different stages I'm going to talk about here, are not meant to describe this in a way where level one is bad, and if you use level one and focus on level one then you must be dumb or you're a terrible developer or anything like that. I
want to talk about these in terms of what you put in and what you get out. And so we're going to start with GitHub Spark. I did a live stream on this before, and this was genuine. It was a real reaction when I first used GitHub Spark. I put a video on YouTube and it's a legitimate real reaction, and I told the editor: I actually thought, as I went through recording this video using GitHub Spark and we get to the end where it goes, "Hey, I'm done," I remember being like, well, guess I've got to throw this video out, or I guess I'm going to publish it and it's going to be a huge flop because it didn't work. But it did. It literally worked in like the last second, and it ended
up loading up the page and stuff. And we made a Pokédex. It was incredible. And the prompt was like, "Build me a Pokédex clone." And, you know, not quite that, but it was so simple. [laughter] I was like, there's no way it's going to build something that actually works. I didn't even know there was a public API for getting Pokémon data. I didn't even tell it to use a public API. It just did it, and it worked. So, with basically no input, we had a Pokédex. And so, we're going to use GitHub Spark. We're not going to do a Pokédex. I'm going to try to keep it mostly on the same theme as we go through this. We'll start with GitHub Spark. Then we're going to talk to ChatGPT after and say, "Hey, ChatGPT, if we want to go build something, try
to keep it on theme. What can we get? How does this look? How do we go back and forth with this?" And we'll try to drop it into Visual Studio and see what we can build. And then we're going to go to Visual Studio and use Copilot. And the whole idea that I want you to be thinking about as we go through this, again, is that we have different stages of what we're doing. So, if you haven't used GitHub Spark, I'm not sure if many people have, you don't put a lot in and you get something out, and it's kind of neat. Maybe it'll totally flop this time around, and then I'll direct you to YouTube where you can see the real [laughter] real fascinating experience. But then we'll see that with ChatGPT, we can go back and forth a little bit
more. There's a little bit more context. We can take snippets of code and give those to ChatGPT. And by the way, when I'm saying ChatGPT here, you can replace this with whatever your favorite LLM is in the browser, whatever. The point is it doesn't have full context of your code, because that's the next step, where we go from giving it snippets of code and saying, look at this pattern, to, hey, Cursor or Copilot in VS Code or Visual Studio: you're in my IDE, you have access to all my code. I can point you at some context, but I can also tell you how to find other context. And we'll see that that gets better and better. And we probably won't get to it today, but there's something that I use for my own development when I'm building BrandGhost,
because our codebase is split across two repositories. In GitHub there's Spaces, and in Spaces what you're able to do is say, I have these sources of data, and you can pick multiple repositories. By the way, you can do this with Cursor if you have the repositories cloned down. If folks are familiar with Claude and Cursor and you're like, hey, you can do that with multiple repositories in the browser with, like, Claude agents, just say so in the chat. I haven't done it, so I haven't seen it. And like I said, by the time this video is done, I'm sure we'll have different tools. But we won't get to that today. I use GitHub Spaces to blend a bunch of these things that we're going to talk about together. And that's kind of like the level four of the newsletter article, which is essentially: you can go back and forth with the agent, there is tons of context, you can put guardrails in place, send it off, and have it build stuff. Now, before we get into it, just a friendly reminder: I don't think that I am someone who has AI perfected. I never want to come across that way. I absolutely don't. So, we're probably going to see me prompting it and getting frustrated and being like, "What the heck? Why isn't this working?" But I still think it's important to keep trying to improve how we're prompting. One of the things that I learned over this weekend that I'm trying to do a little bit more of... actually, I don't know, folks that are on this stream right now, if you're in the chat, is anyone a
net developer? And if you're not a .NET developer, what is your language andor text stack of choice? I'm curious. We're going to focus on some C development here just because that's what I like to use. But over the weekend, I was saying I'm getting sick and tired of C-Pilot or Cursor or Claude. But it doesn't matter what tool. I'm getting sick and tired of having things explained in my my instructions file. I have a custom agent. I have documentation and my prompt. And I'm telling it, look at these other patterns that happen. Look at the patterns in the codebase. Look at this file. And it's still sort of breaking some of the rules that I have. And I'm like, I'm sick and tired of it. So, um, inn net we have something called Roslin and we have analyzers. So it's kind of like a llinter
but powered up. And what we're able to do is write custom analyzers. The whole idea that I'm trying out with this is that if I have patterns in the codebase that I can check with a Roslyn analyzer, I'm putting those in now. And that means that when I send Copilot or any other agent off to go do work and it's like, "I'm going to totally disregard all the instructions that Nick had, haha, I ignore patterns because that's what I like to do despite Nick saying it all the time," now it's going to do that, and then it's going to try to build the code, and these analyzers will kick in and say: actually, that violates the pattern. So then it's going to say, "Oh crap, I guess I can't move forward unless I fix the pattern." So this is TBD
in terms of how effective this is, but this is one of my new strategies going forward. I'm not the person that invented this by any means. I'm just going to be trying it out for myself. So, enough blabbing from Nick. Let's go over to Spark, and I'm going to switch my screen here. This is Spark, for folks that have not seen it. Tada. It's a chat window. And you can see, even from before, we did a Pokémon, uh, Pokédex, right? This was August 3rd. That's like a hundred years ago at this point. By the way, I just saw on YouTube, good morning to you as well. It's evening here, but that's okay. Good morning to you. I hope you have an awesome day, and thanks for being here for your morning. But this is GitHub Spark. I'm thinking, because I have
an example codebase checked out locally from one of my YouTube tutorials, where we did, sort of... how do I describe this? It's called lorebot. Okay. So I have a bunch of markdown files that represent a wiki, and what I wanted to do was use LLMs to be able to create more content for the wiki. So it's like a game knowledge base. Okay. So it's my lore knowledge base. That's why it's the lorebot. And the idea that I had was I wanted to do indexing, or do embeddings, across my knowledge base, and then I can ask it questions. Seems kind of neat to me. And the idea behind that is that if I go to enhance this, I can say, hey, look, I want you to go write a new short story. Right? So, the idea is
that, you know, I'm not as creative as I would like to be, and I get writer's block a lot, and if I want to spend time creating game content or something, I don't really have time for doing that. I would love to, and I can actually use AI to help me with that. But because it's lore in a game world, I can't just have it randomly creating things where the lore isn't consistent. So, I said, if I can basically do some indexing here and have the LLM reference some of the content, then it can try to stay on track with the lore that it's creating. I don't think it's going to be perfect, not without a lot of tuning, but can we do that? So, lorebot was doing embeddings, and then you could ask it questions. I never actually extended it
to go generate more lore, but that's the idea behind it. So I figured maybe in Spark we can do something like that. It's not going to be able to sort of work off the data. But how do we want to do this? I want to... Okay, so the idea with Spark is that it's going to do a rapid prototype for us. And I say rapid, I hope it's pretty rapid [laughter] or else we're all going to be sitting here waiting. But I'm going to ask it. It's not going to be quite the same, and that's kind of the point. We're going to use this to prototype something. It's going to be maybe not feature-rich, but that's okay. So, maybe what I want to do is I would like... Is my microphone hooked up? Can I talk like this? Yeah,
I can. Okay. So, I have a microphone and a foot pedal hooked up so I can just talk to the LLMs more easily. So: I would like a user interface that can browse the lore of a game world. So, I'd like to be able to browse the characters, the monsters, some of the historical events, and the different regions. I would like this to have hyperlinks between the different entities that exist, so I could check out the monsters for a particular region, or the characters that existed for particular events on the timeline. And then I think... I don't know what it's going to do with that, but I'm inclined to tell it that it needs a data source and it doesn't have one. We could send it off like this and see
what it does, but maybe I'll just tell it: use a mock data source that is formatted in markdown for the different entities that exist in the lore. I don't know. Point is, I bet you it's going to do something pretty interesting with this despite it being kind of a crappy prompt. If you were to take a prompt like this and give it to ChatGPT and say, "Go make me this in C#," [laughter] it's probably not going to do much that's useful. If you were to try and do something like that with, say, Claude on your desktop, or in Cursor, or in Copilot, even starting from scratch, it will probably do something. And whether or not you even get code that compiles and runs is a whole different story. So, the point is: pretty crappy prompt in this case, but I bet
you it's still going to get us to a point where we have something that we can click through. So, I'm going to send that off. You can attach an image; I didn't do that. Maybe if there was an example of something and I'm like, I want it to kind of look and feel like this, it would work. But the whole idea here is we're going to send this off, see what it does, and in the meantime, I can blab a little bit more while it's running. It's probably not going to be super exciting to watch. So, you can watch my face for now. If I see it doing anything crazy, I'll go back to it. I kind of want to be able to move some other stuff around as I'm talking. So, we have the lorebot there. Maybe
I will just go back to this. One sec. So, back to this article. You can see GitHub Spark just as an example of something with minimal input. This is really intended for being able to prototype and get quick results. You can do the same thing without GitHub Spark, right? I mean, there's other tools that can do this kind of thing. Point is, not a lot goes in, you get something out. Is that going to be an application you ship? Probably not. Could you, perhaps? But probably not. And are you going to be able to build on this as sort of a foundation for, you know, building something that you can eventually ship? Also, probably not, realistically. And I think this is where some folks that focus purely on the vibe coding as it originally was,
this is probably where they start to have some missteps, because they're seeing other people saying, "Hey, look, I just used AI and I built something, and I built 10 apps in a weekend." If someone who has no technical experience at all thinks they're going to have, you know, a bulletproof ride doing that? Probably not. Is it possible? Surely. Surely it is possible that you could have success doing that. I just don't think that it's the paved path forward. And I think that anyone who is doing that is probably mostly getting lucky. And again, that's not me trying to say, hey, look, you should never. I'm literally trying to show folks: we can go back to the lorebot that's being created here, and this is a good use
case for it. If I wanted to show all of you, "Hey, I have this cool idea," and I'm trying to describe it to you and you're like, "Nick, I don't know what you're talking about," then with not much time at all, hopefully we'll be able to see that this is something that you can go click through. I'm really banking on GitHub Spark making something that's tangible here. But that's the whole idea: with not a lot of time and effort, I could... (Downloading more RAM. That's awesome.) I could have something where I can demonstrate it to you and show you, and we can click through and talk about it and see what's going on, and that is valuable. But am I going to ship this? No. Because, odds are, for folks that haven't done
software development and shipping software, what's going to happen once I ship it? It's going to be, hey, can you make it also do this? Or, hey, this isn't working. You're going to have to make changes to it. And if I were using GitHub Spark to do this, I can't imagine that sort of developer experience of trying to re-ship things; it's not going to be okay, especially if I'm someone who does not have any experience trying to iterate on software like that. Okay? And again, I want to be very clear: I am not saying this to try and gatekeep. I'm just saying that if you want to be able to do things like that, like shipping software and iterating on it, you will probably have to learn a little bit more about shipping software and iterating on it. And
that's okay. I'm not saying you can't do it. One of the reasons I create content online is literally to explain to people that yes, you can, and yes, you can use AI and get a huge head start. I think that's totally awesome. I am not afraid of more people using AI and building software. I think that's super cool. But I would encourage people: once you get something going and you're trying things out, don't stop there and say, "I don't have to code." Literally right before I turned this stream on, I saw someone post on LinkedIn, I don't know who said it, something along the lines of, "I'm not technical. I don't know how to code, but I build software like a surgeon operates." And I was like, "You absolutely do not. [laughter]
No, you don't." And if you wanted to, that's okay. I bet you probably could, but you're going to have to spend some time building up the skills and working on that. By the way, this is still not doing what we need yet, but it's on its way. Okay, so let's break over to the next thing, which, if we go to level two, is conversational wiring to get guided outcomes. So, we're going to go to ChatGPT. Actually, if we were to go to Copilot, we'd have an even better experience. But the idea is that we can go have a conversation with an LLM and start talking about the pieces of the system that we want to go build. Okay. So, let me go over to ChatGPT, and do we still have the old prompt? Can I click this copy prompt? Cool. Okay. So,
and then I would say something like: I'd like help designing and architecting a service or system that can do this. I would like to use C# and Blazor. Okay. What else do I want to say? Let's start with a high-level overview of the different components that you think we would need for such a system. Okay. So in my mind I just want to start super high level, and the whole idea is I'm going to have this conversation where I'm getting feedback, right? I'm going to see what it has to say. I might have some ideas in my mind. The great part about this is, if you don't have any experience, instead of this you could say: give me suggestions about programming languages and tech stacks that I can use, and tell me why. Right? In my case,
I have experience with C#. Am I a Blazor expert? Absolutely not. Have I used it? Yes. I think I would be comfortable, because I don't really like using React, even though the BrandGhost front end is in, well, we've got Next.js and TypeScript and all sorts of things. But let's try this. Okay. Do I think this is the best prompt? Absolutely not. But we're going to see if we can start going down a path by chatting with it. And again, this isn't rocket surgery. I didn't invent this. I'm just trying to show you that we can start building on these different steps. Okay. So, core concepts: lore world consists of interconnected entities. So, yes, it got this right. Has a markdown file source of truth. Right, so in this case that's kind of like the prompt that I gave to GitHub Spark. Maybe we don't want that, right? So that might be something I question. Let's keep reading though, because maybe I don't want a markdown file for each thing. Maybe we should use a database for storing it? I don't know. Contains hyperlinks. Okay. Yep, it got some of that. Reference other entities by ID or name. So: parse markdown into structured models, maintain a graph of relationships. Okay. Presentation is in Blazor, business logic... So, for folks that are catching on to what's on the screen, by the way, can you guys read this? I don't know if I need to zoom in. Sorry, I didn't think about that. This is structuring things with a typical layered architecture: presentation, application, data. Okay. So, a tri-layered architecture. Do we want that? I don't know. If we have opinions or we have questions,
we should ask it, right? So, key components. Oh man, I don't know where it got that from. YAML. We're not doing YAML. Absolutely not. Markdown parser, okay. Link resolver, lore graph, okay. So far I'm not hating this. Interesting fact, too: the actual lorebot that I built has markdown files in a wiki format literally like this. So, we should ask it: should we do this? Should we have a database? Sort of the pros and cons. It has a bit of a structure for the markdown files. Okay, so it has a bunch of stuff. It even has some next steps. I don't want to get into the project structure yet, though. I want to ask it some questions first. Okay, so you can do this in a million different ways. We could ask it
a couple of questions and get some high-level... you know, build on the high-level ideas. We could start going down a particular path. I think for me there's a couple of questions I'd like to clear up in the beginning. So the first one is: is there a set of pros and cons that we should consider for using markdown files as the source of truth for the data, versus storing the wiki information for these different entities in markdown format but in a database? I would like to hear your analysis of the different pros and cons of each approach. Okay, so not an amazing prompt. Like I said, I'm not an expert on this. There's something else I was thinking about. YAML. I don't want to use YAML. Get out of here, Grammarly. Okay. I don't want to use YAML for
anything. Why do we need to use YAML? Explain the pros and cons of using YAML. And one more. Again, I find that if you're going to start piling in a bunch of stuff, it's going to be a little bit all over the place. I'm kind of down for that right now, because I don't know where I want the conversation to go necessarily. So: I noticed that you are using a layered architecture with presentation layer, application layer, and data access layer split out. Explain the pros and cons of using this architecture, and other variations of software architecture that we could use. So the reason, again, that I'm doing this is to have a bunch of different things for brainstorming. If you have in your mind that you want to focus more on one of these and go down a particular path, then, you know,
maybe keep a list of things that you want to ask separately and kind of dive into them one by one. Build up. I personally find... I don't know why, but I use ChatGPT for conversations like this, yet in GitHub Spaces, when it has the context of your repositories, conversations like this feel really good. I feel like it's because it's understanding what you have in place already, so when you're riffing back and forth, like, "Why would we pick this?", it's giving you, in my opinion and experience, really good answers about what's going on with what you already have. Let's send that off. While that's firing off, let's go back to here and see what we got. Look at Eldren Stormblade. Like, it made something, [laughter] right? This is cool. Monsters, shadow
fends. So, it kind of looks like... by the way, I don't know if this is... How do we collapse this? There we go. There's too much going on. Okay, so I zoomed in a little. Hopefully you can see a little more clearly. But, like, I went to events, and we got Battle of Crimson Fields, because it, you know, made some data for us to play with in the beginning. But if I press the Age of Unity... okay, nothing actually happens. [laughter] Okay. Can we go to regions? Sorry, there's not a lot of screen real estate. Let me zoom back out. I'm hoping that the text is legible. I'm sorry, folks. It's just that, yeah, there's not a lot to play with here. So, if I press these things, it doesn't feel like anything's actually changing. I was kind of hoping that
we would get... Am I just being silly? One sec. You can see that it says previous now, right? I don't know if you folks noticed that. I didn't notice it until just now. Okay, we can't press... You can't press it. But if I press events, for example... Oh man, there's a whole other part of the screen. It doesn't do anything. Okay, it changes the top. So, there's a lot about this I don't like. That's okay; that's the topic of today's stream. I don't know if you can see in the chat, maybe not. That's okay. We have this most recent newsletter article I put out. We're just talking about different sort of levels of vibe coding. And the idea, as I tried to say in the beginning, is that what people were calling vibe coding is not vibe coding anymore. People are using
AI tools. It's really AI-augmented development. People have different definitions of vibe coding. So we're just talking about different tools that we can use, and how, sort of, the more that you put in, the more you get out. So for this example we used GitHub Spark, a pretty lazy prompt, and it built a little UI for us. One of the things that I had asked it about, though, and maybe this works if I press this one: I said I want these things to be linked together so I can navigate, but it's not really working. Oh, holy crap. It loaded down here. Some of them aren't loading, though. Am I crazy? They are loading. Oh my goodness. Okay. Sorry, GitHub Spark. You're amazing. Okay. So, in this case, it's just a pretty terrible UX. So, I click Eldren Stormblade and you can't
see anything happen because it's all down here, but it put in some hyperlinks, and I can go click Shadow Fend Drake now. So, this did make a wiki that I can click through. I was talking smack, but it actually worked. Okay. It's just that this user experience is pretty terrible. If I zoom out... No. [laughter] So, you know, at this point, how do we get this back? Let's go back to some terrible prompts: make this mobile friendly. Okay. What does that mean? I don't know. But you can go back and forth with GitHub Spark and it will iterate. I find that you're really dealing with something for prototypes, though. Okay, if we go back over to here: this is ChatGPT. We asked ChatGPT some questions about what it was proposing, right?
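As an aside for readers: the "markdown parser plus link resolver plus lore graph" components that came up in that ChatGPT proposal can be made concrete with a minimal C# sketch. All names here (`LoreEntry`, `LoreGraph`, the sample entities) are hypothetical illustrations, not code from the stream; the assumption is that each lore entry is a markdown blob containing wiki-style `[[links]]`.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// One lore entry: a name plus its markdown body.
// Wiki-style [[links]] are extracted once at construction time.
public sealed record LoreEntry(string Name, string Markdown)
{
    public IReadOnlyList<string> Links { get; } =
        Regex.Matches(Markdown, @"\[\[([^\]]+)\]\]")
             .Select(m => m.Groups[1].Value)
             .ToList();
}

public static class LoreGraph
{
    // Builds name -> linked entry names, keeping only links that
    // resolve to an entry we actually have (broken links are dropped).
    public static Dictionary<string, List<string>> Build(IEnumerable<LoreEntry> entries)
    {
        var list = entries.ToList();
        var known = new HashSet<string>(list.Select(e => e.Name));
        return list.ToDictionary(
            e => e.Name,
            e => e.Links.Where(known.Contains).ToList());
    }
}

public static class Program
{
    public static void Main()
    {
        var entries = new[]
        {
            new LoreEntry("Eldren Stormblade", "A hero of the [[Battle of Crimson Fields]]."),
            new LoreEntry("Battle of Crimson Fields", "Fought in [[The Northreach]] against [[Shadowfen Drake]]."),
            new LoreEntry("The Northreach", "A cold region."),
        };

        var graph = LoreGraph.Build(entries);
        // "Shadowfen Drake" has no entry of its own, so that link is dropped.
        Console.WriteLine(string.Join(", ", graph["Battle of Crimson Fields"]));
        // prints "The Northreach"
    }
}
```

A Blazor page could then render each entry's resolved links as hyperlinks, which is essentially the navigation behavior the Spark prototype produced.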
So, I said, you're proposing markdown files, because I think that was probably in my original prompt a little bit, but what if we want to use a database, right? So you can treat your lore data either as markdown files on disk or as markdown stored in a database. And I said I want some pros and cons. So: simplicity, for markdown files; a database might be a little worse. Version control, for markdown files; you don't get that out of the box with a database, right? But you could if you're tracking it in Git. So, like, my actual lorebot uses a wiki that is checked into GitHub, so I do get source control, which is pretty cool. Portability: if you used a SQLite database, you might be able to get portability, right? So, just as an example, if we
were like this is the most important thing to us, what's more portable? Do you want to have to zip up a bunch of files to go move them? Or if you had it in a single SQLite database, maybe that is more portable. So we could go take this and we could go ask ChatGPT and say I want to focus on portability, right? Like how do we how do we do that with a SQLite database instead? Or do you have examples of databases where we can get portability, and see what it says? Maybe there's something other than SQLite. Um, Challenger 7379 says, "Is vibe coding a way?" Oh, my chat's on the screen. Sorry, folks. Cover my face a little bit. Um, question is, is vibe coding a way uh to practice coding in a stress-free manner? Uh, but people are just using AI to make
their coding experience less stressful by doing all of their work. Um, personally, um, I don't find working with AI stress-free. I find it's, uh, sometimes can be pretty stressful. And I think it really just depends on what you're after. So, the idea with this question, let's kind of do this. So I I think we're talking about a couple different things here and I I'm going to say my answer to your question is no, but I think you're touching on some interesting things. So vibe coding I would say is not necessarily a way to practice coding in a stress-free manner. I would say that vibe coding is at least by the you know sort of original definition is it's a way to get things built without having to pay much attention to the actual code. So that could be a helpful thing if you don't know
how to code. It could be a helpful thing if you don't care about the code. And that's kind of what I'm trying to demonstrate with GitHub Spark is like I do know how to code. I've been coding for a very long time. I actually don't care what the code looks like because I just wanted to show everyone here's a workable demo that we can click through, right? So, it's a different goal. So, it's not practicing coding, but it might be a pretty low stress way. Like, just to give you an example, I'm going to go back to this. If I had to go build this in TypeScript by hand, as someone who does not actively code in TypeScript or basically much front end at all, I would be having a terrible time. It would be significantly more stressful than me asking Copilot to do it.
But if I needed to go actually build a whole application like this and continue to build on it and feel confident when I'm building, I don't know. like I would probably need to spend a little bit more time with the code. So, it's kind of like it can be a shortcut, but ultimately it's going to be a long-term kind of challenge for me because I'm going to have to go learn and understand this to build on top of it. The other part to the question here though is people are using AI to make their coding experience less stressful by having uh coding by having it do all their work. So, um I think we're picking what things we want to stress about. We're picking the challenges we want to have is what I would say. So again, um the example I just gave, I don't
like doing front-end development. Personally, I like spending time in C#. I like doing back-end development. And because I have the flexibility to not be forced into other things, that's what I do. Now, if I needed to, I would abs like Brando has a front end that's in TypeScript. I can work in the front end. I can do it. I just I have other people that will do that instead. So I'm I'm kind of picking the battles I want to have, right? And I think we can do the same thing with AI. So for example, um I I personally like spending more time being creative and thinking about design and architecture. Like that's kind of exciting for me. So if instead I just told AI, you go do all the architecture, you go build all of it. I don't want to think about that, but once
it's in place, like I'm going like I don't think many people are like this. I'm just I want to go write the test for it, said no one ever. But um you know, if that's the thing I want to focus on, then I'll let AI go do the thing, right? So to your point about it being less stressful. Yes, I think that's one of the goals is that it lets us pick the things we want to focus on. Um, now I've had a bunch of experiences where it's not been less stressful and that's just because, you know, um, I need to give it better guard rails. I need to be better at prompting all sorts of things. But, um, great question. I hope that answers. Um, you feel good about that. Let's get I'm going to turn the the chat off again from here. I'll
turn it back on if we're going through stuff. I said, let's make it mobile friendly. Pretty uh, lazy prompt. Did it do it? kind of. Oh, kind of. Maybe not. I don't know. [laughter] This is kind of working, but uh you can't tell because you can't see my hand uh trying to scroll on the mouse wheel, but um it's it put this here, but I can't really navigate. So, anyway, I think you might get the idea, right? We can we can still go back and forth with it, but um for not a lot of effort, you can get a little bit out of it. Back to here, we asked it some stuff around the file formats and stuff like that. Uh I asked it about YAML, like why the heck are we using YAML? Uh it's often used for markdown files or front matter. Okay,
I don't like it. Um pros of it, cons of it. Cool. So, it explains, if you don't want it: simple markdown, comments you parse manually. So I guess it's trying to say like you can get a little bit more structure, but if you want more structure instead, you could do markdown files with JSON. Okay, interesting. And then the last thing that I I challenged it on was around the layered architecture because if we go back up it did propose a where is it? I can't find it now. Yeah, layered architecture. Okay, so I don't have anything necessarily against it, but like what else is there, right? If you this is if you're not familiar with this kind of stuff, my my recommendation is like use it to ask questions, right? It it's spat out we're going to do presentation layer, business logic, data access layer.
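Just to picture what that proposal actually means, a presentation / business logic / data access split in C# usually looks something like this. This is a minimal sketch with made-up names, not what ChatGPT output:

```csharp
using System;
using System.Collections.Generic;

// Data access layer: the only layer that knows where lore is stored.
public interface ILoreRepository
{
    string? GetEntry(string title);
}

public sealed class InMemoryLoreRepository : ILoreRepository
{
    private readonly Dictionary<string, string> _entries = new()
    {
        ["Elder Stormblade"] = "A legendary warrior of the northern reaches.",
    };

    public string? GetEntry(string title) =>
        _entries.TryGetValue(title, out var body) ? body : null;
}

// Business logic layer: rules and fallbacks live here, not in the UI.
public sealed class LoreService
{
    private readonly ILoreRepository _repository;

    public LoreService(ILoreRepository repository) => _repository = repository;

    public string Describe(string title) =>
        _repository.GetEntry(title) ?? $"No lore found for '{title}'.";
}

// Presentation layer: only talks to the service, never the repository.
public static class LorePage
{
    public static string Render(LoreService service, string title) =>
        $"# {title}\n\n{service.Describe(title)}";
}
```

The upside is that swapping markdown files for SQLite only touches the repository; the downside is exactly the boilerplate we're about to look at in the cons, three types where one might do for a small project.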
Why? It's not that it's wrong, but if you don't know why it's suggesting that, just ask it. Say explain, like give me pros and cons. Software engineering has an incredible amount of analysis that should be going into it so that we can make informed decisions. So if I just did what it said and like air quotes vibe coded it, I mean we're just talking about a high-level design right now, but if it was giving me code and it was structuring things in um a layered architecture, I might just blindly copy that. And the reality is like maybe that's not a good choice. Maybe that's just the first choice that it had. Maybe it's a great choice. Maybe it is the right choice for us. But if you're not making decisions like on purpose, I think that's where things become kind of risky. We want to
make informed decisions. So it gives us some pros and cons for layered architecture, right? So just to kind of let's look at the cons. More boilerplate. This is very true, right? We don't even we don't even really know what we're building yet. Do we really need to split into layers? Like what literally why can't I just have my user interface calling the database? That might totally work. It's usually not like, you know, it's not good practice. I don't know. Did I tell it we're shipping this to a million people with tons of daily active users? No. So, who cares? Maybe that's totally fine. But this is where you can start to give it context, right? If you're like, "Yeah, like why the heck do I want all these layers? It's just me and my my good buddy building video games. I don't care. I don't
need multiple layers. It's just going to be two of us randomly clicking around like it should be okay, right?" So, that's something to think about, right? Overengineering risk for a small lore browser. Direct calls from the UI. Okay. I I literally didn't even read this, but it's right there. Cognitive overhead developers must understand interfaces, dependency injection, abstractions. Sure. For me, I don't care about this. I understand this. I'm okay with that. So, not a deal breaker. Performance overhead, right? Again, I don't care for what we're building doesn't faze me. But this is an important thing because if you're reading this going, "Oh crap." Like, I should have told it, but you know, this is going to be the next Wikipedia. No, it's not. But, you know, if you're like, "This is going to be used by so many people. that's what I need to design
for. Then it might be like, hey, we want to like tell me more about this. Why is there performance overhead? And then we have some examples of alternatives. And again, my my recommendation for folks is like if you're going back and forth with AI like this, you know, ask for alternatives and ask for pros and cons. Ask for explanations of things. Literally have this conversation. And then what I do recommend is that you start to as things are kind of forming in your brain around like hey you know what you know I'm just going to give you an example like maybe maybe I kind of do want to just like think about a monolithic architecture maybe I do want that maybe when I was looking at some of the pros and cons like I really care about testability and flexibility but like performance I don't
really care about for this right start to make some notes around the things that you do care about and that way you can start to direct this conversation and really hone in on like what are you trying to build? Same with the technology choices, right? We said markdown files, but you know, maybe portability was a huge thing and you get to the point where you decide SQLite with markdown in the text columns is is what you want. Awesome. But start to have this conversation and form your opinion based on the conversation you're having with the LLM. Okay. So now it's like you want a system architecture diagram, component flow description. If you keep having this conversation and you get to the point, I'm gonna jump ahead because I would like I I try to do these in an hour and obviously I'm going way over
time here, but um if you were to get to this point where you're like, I would like you to Why am I typing this? I can just say it. I'd like you to give me the high-level most important code files for our Blazor application that would demonstrate this functionality. Give me a repository and one of the uh controllers or Razor pages that we could use to demonstrate some of this code. Right. This is probably a terrible idea because there's there is nothing to go build on. It's just going to It's probably going to give me garbage. It's okay. My point with showing you this. What did I lose internet? Oh man. Am I still here? Can someone in the chat let me know if I'm still alive? Okay, I'm alive apparently. And chat's delayed, so it's awkward. I'm going to get a message in like
20 seconds and someone will be like, "You're alive." Um, [laughter] okay. So, oh, weird. Maybe I'm not alive. I asked it for the code, right? I would like you to provide some code examples for a page for navigation as well as one of the repositories that we can use for accessing the data. Let's see what it has to say. So, just to kind of call it out, right, I don't recommend this approach, first of all, because unless you have no like unless you have no idea what's going on and you don't actually care, fine. Um, but I think that you probably like if we're doing this, we should start making informed decisions. Like, do you want this? For me personally, I I can't stand having my like models in a separate folder. I like having things grouped together by their use, right? Which is sometimes
why layered architectures are a little bit weird for me. I'm not a huge fan. Um I do layers within things. So, this might be something where I see immediately, hey, look, like it's doing something I'm not a huge fan of right now. Okay. Okay. And if that's the case and it's going to start to output code, right? You can see lore browser. It has a models name space. It has a model. I'm reading this and I'm like, oh man, like why does it have a class? Like we have positional records. Why would I ever use a class when I have positional records? This could be, you know, a line of code and now it's that many. That's disgusting. So if you're seeing things that you already don't like, tell it. Right? This is like this is the time to be going back and forth and
like kind of forming up what you want to have happen. And you'll probably relatively quickly realize some of the limitations of this. Okay. And it can be super cool depending on the scale of what you're building. Like if it's pretty small, you might be able to have a lot of uh context that it's keeping in the chat. Um ChatGPT has like projects and stuff so you can upload files. I haven't used the Codex feature, so I haven't actually wired up my codebases into OpenAI. So, personally, I don't have experience with that, so I can't comment on their tooling. But in this format, when you're chatting, it can be helpful, but you're going to start to reach some limitations because the amount of context that you're working with is just kind of awkward. But that's going to bring me to we're
going to wrap it up with like sort of level three of things. And so let me just get my myself settled in here. I got to switch over to Visual Studio. We'll get that pulled up. Um just double check in couple things. Okay. So if I switch over, we got code. Um do I need to move my head? Maybe. Move my dumb head. Give me one sec. Am I okay up here? Maybe down here. I'm gonna put me down here. Um, yeah, let's do that. If I am blocking stuff, I'm sorry. I just figured it's probably You know what? We're going back to We're going back over here. I'm just going to make myself smaller. Okay, I feel a little bit less awkward about that. I just want to make sure that we have some screen real estate. And I realize if I'm going to
be chatting, it's probably helpful to have that showing. So, this is the code that I have for Lorebot. And so, if we want to go run this, I realized before I was starting the stream, I accidentally clobbered the um [laughter] the Docker image uh that has my knowledge base. So, we'd have to go regenerate it, but then I have to get the Docker thing set up. I'm not really doing that. So, um this is Lorebot. And by the way, this is on GitHub if you want to check it out. Um, so it is on my Dev Leader repository on GitHub. Thank you, Jackson. I just saw your message on Substack. Thank you for letting me know I'm alive. I do appreciate that. So, Lorebot uses Semantic Kernel. Okay. Uh if you have if you don't know what Semantic Kernel is, you're not
familiar, I have some YouTube videos on it, but it's basically a convenient way for us as C# developers to interface with a lot of LLM functionality and we can add things like plugins. So we can give it's almost like I want to describe it kind of like MCP, but you can give it C# functions in Semantic Kernel and then the LLM can access those C# functions. Uh, I'm using Spectre.Console here. But if we wanted to, I could ask it. I could ask the lorebot questions or I could go look up information. The difference between them is when you ask, it will give you like a written answer. And when you ask to look up instead, it will give you sort of like uh more like detailed entity structure about the pieces of knowledge that it's going to go look for. Um, right now I
don't have the Docker image running. Okay. So, that's not going to be super helpful. So, what can we do with this? Right? What I wanted to do is basically show you that it runs because it's a real thing. And then the next thing is maybe think about some different functionality. So, because we're in Copilot inside of Visual Studio, it has the full context of my codebase. All right. I'm going to switch it to ask mode just for a moment. Do we have different models? We'll stick with Claude. I have I don't know why I feel like this is so silly. I have a Claude Max plan, but if I want to use an API key, it isn't tied to my account somehow. I don't understand how that works, but I'm paying for Max. Let me use an API key. Um anyway, I'll keep
it in ask mode and I'm going to say something like um let me type it with my voice. Currently, the lorebot is able to answer questions for us based on the lore that it has semantically indexed. I would like to add functionality to the lorebot so that it can produce additional lore in the form of markdown files. My idea is that I would like to be able to provide the lorebot a topic to write about and the type of content such as a character, a historical event or something about a location. And then I need to think about what else I want to ask it. I kind of want to do the same thing we did in ChatGPT and say like like what are your thoughts for integration points for this? Maybe that's all I have to say. What are your thoughts for integration
points for this feature? Okay. And because it's in ask mode, it's not going to go coding for us. But the great thing is if I wanted to do something like this with ChatGPT, what I am required to do if I want to have what's going on here? Yeah, confirm. Why do I have to confirm things? Oh, it's looking it's never run that tool. It's going to look for code samples. That's funny. Um, if we're using ChatGPT and I'm saying I want to build on what I currently have, I am required to if I want to get good results, I literally need to start copying and pasting context into ChatGPT. If I don't, then it doesn't know unless it's in the chat history. And that problem doesn't necessarily go away with Copilot, right? Like I still need to steer it. I can give
it examples, right? So, I'm using Visual Studio Insiders. I'm not sure if it's new in Insiders. VS Code gets all of the new features before the paid-for Visual Studio, but you can go to Copilot actions, add to chat, and then it adds the context. Pretty sure VS Code and Cursor have this kind of thing. So, you can give it the specific context to work with, but you could also say like go look for every file that ends in repository.cs and blah blah blah. But let's go look at what it has to say about this design. Okay, it it put a lot of stuff in here. So, great idea. The feature Sorry, the text is probably microscopic and I Oh, I can zoom in. Wow. Never mind. Um, great idea. This feature would transform your Lorebot from a read-only assistant, which it basically is. We ask
it questions into a creative content generator. So, it it got what I was trying to say based on your codebase, right? I didn't here's a difference here. I didn't tell it go look at the repository pattern we have. Go look at the Spectre.Console command system that we have. I didn't do that and we could follow up with that based on what it's saying. I could be like hey look like neat but I really need to understand better like how we're going to integrate this. So focus on and then I could give it like you know plus I could go files and then I could say I really need you to go focus on the lore repository because you know you're not thinking about how the data is structured and here's a reference to some code that I need you to look at just as
an example. So it's proposing a new generation service layer. Uh, by the way, Lorebot was entirely vibecoded. Air quotes vibe coded. So, what you're seeing, all of this application that's running is me using Claude. I'm pretty sure I use Claude to go build this. It's either Claude or Cursor. It's on YouTube if you want to check it out. Hello, Acon TXT, I think, on Twitch. Welcome. So this entire application's vibe coded and I kind of did that on purpose so that we could see what we could build with with vibes. Um but which version or level of vibe coding probably would have been closer to a level three I would say because I was giving it like a specification to go build off of. So I actually used pretty sure I used some chat for that. Built it up and had to go build it.
So it's prop this is co-pilot now proposing a new generation service. It's saying that to go generate some lore for us. It has a custom response. We'll look at that after um the topic that we want to work with the type of the lore because I I told it like you know maybe it's a historical event or a character or whatever. So has some idea of a type. So, we need to get the context that's relevant and context chunks. Oh, that's a count. Okay. Sorry. This is a Oh, man. You there's a a horizontal scroll bar that we can't see apparently. So, that's fun. Um, so we get the context, build a prompt. So, I'm curious to see what that prompt looks like. Maybe I'll have some input and I want it to be a configurable prompt. Then once we have the prompt, we're going
to go generate content somehow. So we have to go see what those look like. We have a generate lore response which is just it tracks the topic for us, the type of it, the actual content, and then the references where it was finding relevant content. So I think that's pretty neat. Um, it has a proposal for enums. For folks that aren't familiar, I have a like a strong opinion about enums. I don't like using enums for things [laughter] and that's because I've just mostly personal preference. I've spent a long a long time in my career [laughter] uh correcting code or fixing code where uh people took enums and just kept adding stuff to enums. Ideally, in what I would call a perfect world, you use an enum to represent a fixed set of things. Days of the week are a great example of enums. There's
seven of them. They're not going to change. [laughter] When you have an enum that is for a set of things like this, just as an example, what happens when I want monsters, right? What happens when I want um I want religions like I have to start or deities or um I don't know like if I start down this path it means I'm going to have to add more things to an enum. Is that the end of the world? No. I might not die on that hill. I don't like it. If I were being really picky, I would probably go back and say, "Ah, we got to change this. We should have this loaded up from a database, right? I want to be able to configure through data-driven solutions like what I have for my content types. Doesn't matter. Let's keep going. It's going to talk
about the semantic kernel integration. So I mentioned for folks that don't know about semantic kernel, this is how we would uh define a plug-in in semantic kernel. They're called functions. and kind of interesting. Like I said, I feel like it's very similar to MCP concepts, but you give descriptive definitions of your parameters because when the LLM is going to use it, it's kind of like what tools do I have available? It's like what what functions do I have? So, you describe it in a way that it can interpret them. Um, so some of these kind of read like your typical doc comments in a uh a documented public method which is you know this is topic and it's like the topic to write about. That's very descriptive and helpful. Thank you so much. Uh but is this a terrible API? No, I think that could
be helpful. Uh it's not giving us the implementation. That's fine. We're just chatting back and forth. An API endpoint because apparently I made this a browser-based thing too. Neat. Okay. I don't even remember doing that. I vibed so hard on this thing that I for [laughter] I forgot there's a web application. We should try running that and see if it works. Optional auto-ingestion. Okay, so add a method to optionally save generated lore. Okay, that's interesting. So again, I didn't tell it about this. So it's making a interesting suggestion for us, right? So we said we want to generate some lore and it's like, hey, what if we have an option that once it's generated, we put it back in. We'll save that to where the rest of your lore is. Um, and it knows, by the way, maybe not obvious for folks that are
watching this, but the I I said this earlier in this in this talk, but my lore is written out into a um a directory structure with files for GitHub. They're all markdown files that make a GitHub wiki. So, it knows that because it can see my codebase. I didn't tell it that. it went and figured that out. So, leverages existing infrastructure, right? Uses your current retriever for gathering uh the context, which is true, right? It didn't go define, oh, like you need to go look up lore. Well, we better go write some code for that. Like, nope, it knows that exists. We can move on. It knows that we're using semantic kernel. Again, I didn't tell it that. It knows that from the codebase. So these are all things maybe we should try this as a quick experiment. I don't know. There's so many directions
we could go with this. But if I were to go ask ChatGPT to go enhance this application, if I didn't tell it we were using Semantic Kernel, it might give us some suggestions that are totally out to lunch because it's like that's just what I would recommend. And then it's up to you. [laughter] It's up to you to give it the right context. If you don't, how is it ever supposed to know that there's Semantic Kernel? Right. Perfect example. I'm going to talk with my hands here just because it's exciting. The perfect example here is I literally didn't even realize that there's a web application in Lorebot. No idea. Maybe I knew at one point. We're talking this is back in August of this year. That's that's centuries ago, right? I forgot there's a web app. If I was trying to get this feature
built and talking to ChatGPT, I would need to give it the context of the things I want built. I probably accidentally would have built this thing and only had console support, which isn't so bad because that's all I thought existed. But then what would happen is I would start building and I go, "Oh crap, there's there's literally a whole web application that we didn't design for." Right? So again, the limitations of the previous sort of level that we were working at is that it's really up to us to make sure we're giving all the proper context. You can improve what you're doing working with Copilot by giving it more specific context. So, it's not like that problem goes away, but it's able to go explore and find more context based on what's in your repository. So, that's a it's a huge value add. Here's
another thing that I didn't mention. I haven't even said it once on this stream. Needler plug-in compatible. Does anyone even know what Needler is? Like, I do because it's literally it's literally a NuGet package that I put together for my own development. I didn't tell it that but it knows that from the codebase right so one of the things that's really frustrating for me even at you know um the level of working with Copilot like this or using Copilot with you know custom instructions and custom agents that literally tell it about Needler and how Needler works is it still makes mistakes with it. So that's why I said earlier on this live stream, I'm starting to put analyzers in place so that it can look for anti-patterns in the code. Okay, so they did a pretty good job of like talking about the integration points
and what I would recommend. You can see benefits to this design. Where's Okay, this one didn't say like next steps. Um, let's ask it. What would you recommend are the next steps for building out this functionality? Um, the reason I'm doing this, by the way, is because I just want to show you that if you don't know, ask. It doesn't mean that it's going to be the right answer, but if you literally don't know, instead of just saying, "Well, I saw someone else say this," be curious. What do I think is the next step? Personally, I think that we have to go back and forth a little bit more on the requirements because I don't think they're quite right. And then from there, once I am set on what I actually want, I would ask it, let's get a plan put together where we can
have this sort of carried out step by step in an order that let's incrementally deliver it. Because if you have some experience working with LLMs, building, you know, larger features, they go off the rails. right? They they start building something and then out of nowhere you're like what what is this thing even doing at this point and it's because it didn't have like a checklist to go through. So you'll see a lot more of the tools now doing a lot more checklist work. So they'll have like planning mode and they'll literally write a markdown file that has checkboxes. If you use GitHub Copilot, you know, in the the pull request it's making, it will go have like tasks to go do. So, this did way more than I was expecting that it was going to do. Uh what is this even like it wrote the
code? [laughter] So like this kind of this already went off the the deep end, but uh based on your codebase structure and the PRD pattern you're following, here's a recommended. Oh, did I leave that in the repository? Maybe. Maybe I did. Cool. I could ask it. So I mentioned a little bit earlier when I vibe like air quotes vibe coded this, I had it follow a spec. So maybe there's still a [laughter] a document. Um, so it put a lot of code together, which I wasn't expecting. You can see, uh, if you're paying very, very close attention, it's already changed this enum. [laughter] We didn't have mythology before. That didn't exist. Now it's on there. I don't think we had timeline. I think we have timeline and we have historical event. What's the difference? I don't know. But like, here's an enum. So again, before
getting it to go blast code, this is why I'm saying this phase is super important. I would tell it like I don't want this. Here's what I want instead. Um it has it. So it's it went ahead and did a bunch of code. I'm not a fan of um jumping to this phase right away. And the reason I say that is because let me again talk with my hands a little bit. For folks that have um been working in the industry for a little bit. If you've been on a pull request, okay, so someone's like, "Hey, review my code." And you're like, "No problem. I love reviewing your code. You're the best teammate ever." And you open this pull request and you go, "Oh my god, like what what is this?" And it's not because their code is bad. It's just that there's so much
of it, right? And if you're like, "Holy crap, wait a second." And like it's not that the code is bad, but I have so many questions about why you designed something this way or the decisions. What feels really crappy at that point is going through all of the bits of code and going uh like what if we changed this and what if we changed that and like what if you just redid the whole thing differently. That's a really awkward, crappy conversation to have because if you feel that way, you pro like it's, you know, rightfully so, you probably want to have some input on the design that's happening. And it probably feels extra crappy for the person that did it because they're pro, maybe not all the time, but they're probably like, I just spent all this time doing this and it works and like
I feel good about it and now you're you're going to be the person that's stopping them from making progress. But the reality is it's like it's no one's fault directly here. What probably should happen is that earlier in you know the development life cycle you're talking about the design of the feature that you're putting together. So instead of just landing this big pull request and someone's like where the heck do I start with this? You go hey like maybe this is something where you need a design dock for it. Maybe it's something that you need, maybe it's not even a formal design doc because you're like, "We don't even do that." Like, uh, go whiteboard it together. Maybe put up a draft pull request and just touch a couple of things and say like, "Here's what I'm going to be, you know, here's the areas
I'm going to focus on." There's so many ways you can do this, but the point is that the upfront earlier conversation where you're putting input and having a conversation is going to save you so much time down the line because now if I were to go tell Copilot, go build this and I just wasn't paying attention. It's the same thing of as not having a conversation early. It's going to go put these enum types in there and then I'm going to see it later and go oh co-pilot like please go change this and it's it will happily oblige and you know what's going to happen? It's going to go change it but it designed the original code for it to be there. So now you have one at least one layer of tech debt slapped on there and you're going to say, "Oh, co-pilot, um,
I'm just making this up." Like, I really don't want this auto-ingest feature at all. Like, that was a terrible idea. I hate it. And it's like, sure. And then it removes the auto-ingest feature happily for you, but then it forgets to remove the auto-ingest flag everywhere. So now you have this auto-ingest flag in your codebase and no one knows what it does, because it's not hooked up to anything, right? So the whole point is that trying to catch this stuff up front can smooth things out later. So I prefer to do that personally, spending more time up front. You can see, maybe it's not obvious, sorry, but this is a whole bunch of markdown for prompts that it's doing. This would be another thing where I'm like, it's a small project, it doesn't really matter, whatever. But I would love for this to
not be in the code. I would love for the prompt to be configurable. Technically, it's a small project, I can just change the code and recompile it, but I would love for it to be in a file so I don't have to recompile. Something about that feels a lot better to me. So I would tell it right now, like, let's not do that. But overall, that's fine. Whatever. And so it just has a lot of code that it put together. And like I said, not what I would want to have this early on, personally, but let's keep going, because I'm basically going to go ask it to implement this and we'll see if it can do anything. But there's a lot of code that it put together, right? Optional auto ingestion. So, like, it did it. Let's go ask it, in agent mode, to
go do it. Uh, actually, one more step. Um, let's get it to do a step-by-step task list. Awesome work, best friend. I would like you to put together an ordered task list so that an LLM agent can execute this work and do it with a high degree of accuracy. Make no mistakes. You have to include the "make no mistakes" part, right? I'm just memeing at this point. But, um, you know, this is probably not going to be great results, and I know it's not going to be great results. Not that the task list is going to be terrible, but I don't think it's going to be great results, because I didn't refine what I actually wanted early on. Okay. And unfortunately, with my crappy prompt, I didn't want it to go write out all of the code again, because
that's a complete waste of time. I should have been more clear and said, literally just give me a markdown output of the steps and don't include the code. If I knew it was going to do it, then I would have told it not to. But here we are. Let's let it finish, um, because I'm just going to ask it in agent mode and it's going to go actually implement this. You'll notice, again, if you're paying close attention, we don't have any tests in this repository, and maybe that's why it's not talking about tests at all. I don't know that for sure. But if I were reading this as someone who wants to develop amazing software, I would say, "Hey, wait a second, you didn't mention anything about tests, and tests are important. Where do the tests come into play?" And so I
would tell it, hey, we need tests. And then it might say back, okay, well, I'm just making this up, it would say like, great, we're going to use NUnit for writing tests and blah blah blah. And I go, wait a second, no, I use xUnit, right? So when it's telling you what it's going to do, don't just take it at face value. If it's not what you want, correct it early so that you can keep moving forward down a path you want. Because yes, you can change it later. It's just a bigger pain in the butt the later you're changing it. This is doing way too much. I just wanted... I'm going to stop it. Give me one sec. I just want a simple bulleted list of the steps for the agent to execute. I don't want
you to repeat all of the code and specific code changes. Keep it simple. Let's go. See if we can do it much better. This is what I was hoping for. Is this going to be enough? Um, probably not. If you were to take this out of the chat, like say you're like, "Hey, you know, Copilot was really good to have this conversation with, but I want Claude to go build this." Now, sometimes, um, I will go between Claude, Cursor, and Copilot. So sometimes I'll have, you know, Copilot doing some things like this, and I'll say, "Great, give it to me in a markdown file." Um, I'm going to ask it to go into agent mode now. Um, and... yeah, so that's coming. So yeah, what I would do is, sometimes I would ask for it in, like, a markdown file. Sometimes
I'll do this in ChatGPT, or even in Copilot in GitHub, and if I want to be building with the AI, then I might ask it for, give me the markdown, and I'll go give it to Cursor. But you need to consider what context it has. So as Jabon mentioned in the chat, which is not on my screen, by the way, sorry. There it is. Um, so Jabon was saying agent mode. Yeah. So we're going to do that now. And because I have the chat history. Boop. Boop. Now we can go ask the agent to go build it. Execute the to-do list. Write the best code ever. Again, I'm just memeing at this point. Um, use existing patterns in the code base. Make no mistakes and you'll receive a cookie. So now we watch it go build this and hope that it works. But
in my opinion, with how we've prompted this and the guidelines that it has, it's probably not going to do it with a high degree of success. And even if it works, I already told all of you that it has decisions baked in that I'm not a huge fan of, right? Like, I don't like the enum approach. I don't like that the prompts are, you know, hardcoded. Do I think that's going to stop it from building this? No. I think it will work. So it's fine. So that's why I'm just kind of letting it go. It's not something I'll keep in the code and commit up to GitHub. But just for the sake of demonstration, I think that it should be able to do it. Now, let's get this popped over, because I do have, in Visual Studio,
I have the plan mode enabled. I'm pretty sure you get this in VS Code. I realize most people probably use VS Code, but I've been using Visual Studio for, like, longer than some people watching this stream have been alive. Maybe not, but for a while. And so it's really a pain in the butt for me to switch to VS Code. I really don't like using it. I'll be the first to say it. Yes, I work at Microsoft. I don't like using VS Code. I like the original Visual Studio. As I mentioned earlier, Visual Studio is lagging a little bit behind on the AI features compared to VS Code. But we do have plan mode in Visual Studio. This is the insider build, so it's the 2026 edition, but it's building stuff, right? And I realized that my
zoom... I'm zoomed way, way out. So, you can see it's going, it's adding files. Can we see what they look like? There's the interface, doc comments and everything. Wow, it's beautiful. This is the templates file, right? So, this is what I said. If I were reviewing this code and I hadn't read what it was saying earlier, and I truly just vibed it and didn't pay attention, now I get to this point and I'm like, "Oh, wait. I see what it's doing." I'm like, "Ah, I don't want the prompts in here." Or maybe I'm looking at this and I'm like, "Okay, I can see that it's doing some text replacing." Right? So, we have some templated text with the content type. We have the topic. It makes sense, but I'm like, huh, I also want the word count
to be templated. Oops. Um, is that something that's going to be a deal breaker in terms of the design? Perhaps not. So maybe I let it finish and I make a note for myself. I'm like, hey, in order for us to finish this feature properly, I'm going to have to come back and say we need the word count to be parameterized as well. Right? So I would go through this kind of thing, but it'd be good to catch early. Uh, for folks that are familiar with GitHub Copilot in agent mode, when it's working on pull requests, this is, for me, a very common mode. So it will go write the code in the pull request, and basically, if I want to sit there and watch it, I can, but usually I'm in
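The text replacing being described, including the wish for a templated word count, boils down to something like this. A minimal sketch in Python (the project on stream is C#, but the mechanism is the same), with invented placeholder names rather than the ones the agent actually generated:

```python
def render_prompt(template: str, **values) -> str:
    """Naive placeholder substitution: every {name} in the template is
    replaced by the matching keyword argument, so templating the word
    count is just one more placeholder, not a code change."""
    for name, value in values.items():
        template = template.replace("{" + name + "}", str(value))
    return template

# Hypothetical usage with made-up placeholder names.
prompt = render_prompt(
    "Write {content_type} lore about {topic} in about {word_count} words.",
    content_type="item", topic="flaming sword of doom", word_count=200,
)
```

The point of catching this early is exactly that: adding `{word_count}` up front is trivial, while retrofitting it after the design has hardened is the awkward conversation described above.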
bed, and I'm, like, making GitHub issues for my own code and then assigning them to Copilot, and then I wake up in the morning and I'm like, it's Christmas, I go review the stuff. Most of the time, um, for the stuff that I'm asking it to do, front-end work, I find it does pretty well. Some of the back-end stuff, I'm like, I can't commit that. Like, it's not going to work, but it's not too far off. So I would do something like... I'm just going to use this method as an example. Say this is a code review I'm doing. Uh, Ru on cake, I see your message. Just give me one sec and I'll answer your question. Um, I would review this and, say, leave a comment, and I would say, you know, on this
line, so line 40, I'd say, hey, I also want this parameterized. And then I would personally go through the code that it wrote and be like, where do the other parameters come from? Right? The lore search response is the context. So then I'm going to kind of walk back and be like, "Okay, how did we get that context built?" And it's probably because there's some other, you know, request that came in. By the time we get here, we're using the response from the lore search. And so I would either need a new parameter passed in on the method, or I'd have to go add it onto this DTO. But I would go leave Copilot a comment and say, "Go change this." And it's interesting, because the sort of point I'm at with working with Copilot is, it gets really frustrating when it
does... this is going to sound ironic, but it does such a good job, and then there's, like, one thing like this, and I'm like, I literally could have just pressed the [laughter] could have pressed the, you know, the squash and merge button, but I can't, because it's got one thing it needs me to do. I had something like that this morning. What was it doing? It wasn't an analyzer. I was adding telemetry to some part of the Brand Ghost back end, and it did the code change spot on, how I would like it, and I'm like, cool. It wrote tests, and I looked at the tests, and it didn't follow any of the testing patterns. There are 3,000 tests [laughter]. There are 3,000 of them. And it decided it was going to use a
different testing approach. The Copilot instructions explain it. The testing documentation explains it. The custom agent has instructions for it. And it still wrote a different style of tests. So I have something where I could delete the tests and I could land it, but I want it to be done right, to my standard. So I have to leave it a comment. And now, it's not a deal breaker, but it's the end of the day for me, and that's something that I could have landed in the code when I woke up this morning, and it's still not landed. So in the context of, like, work, which is going to lead me to, uh, Ratu's question, I can lose a lot of potential productivity, potential code landing, from just simple things, which is why I'm spending a lot of time trying to put more guard rails
in place. Okay, one sec. I'm just going to go back to this question. This is from Ratu. So, do you find yourself using AI to code in your job? So, I don't, but only because in my role, I'm not writing code. Um, I'd use AI for writing code every single day outside of work. Um, and I'm going to wrap this stream up in just a moment, but on Code Commute, I filmed a video today talking about what I did on Saturday night, and I'm going to make a Dev Leader video kind of diving a little bit deeper into it, more polished. But on Saturday night, I was lying in bed and I talked to Copilot. I say talk; I was texting Copilot late at night, and, uh, I basically said, like, uh, I
wish I could show you. Can I... I don't know. One sec. Can I just show folks? I don't think I'm prepared on this stream to show it, because I don't know if I need to, like, redact any information. So I apologize. But, um, the chat mode in Copilot Spaces, I mentioned this earlier in the stream, it has access to my repositories. And so I said, it kind of feels like I'm talking to ChatGPT, but it's Copilot. And I said, I want to start adding analyzers into the code so that I can put guard rails in place. I said, I want you to look at the Copilot instructions. I want you to look at the custom agent text. I want you to look at the documentation, and then I want you to do, this is a really generic thing to say, but I want you
to do an analysis of the code. And I said, I want you to look for the patterns that we could theoretically enforce with analyzers, which, I'll repeat for folks that aren't familiar, think about a linter for C#. That's really powerful. So I want to write a custom analyzer. Give me a list of potential analyzers and sort of your score of impact, and why, like, why I should go do that. And it did a tremendous job with a pretty crappy prompt. Listed them all out, a bunch of different ideas, and I sat there in bed for, like, 30 minutes just making GitHub issues. I think I made eight or ten issues, assigned them to Copilot, and said, "Go build these analyzers." So I woke up in the morning, and I spent my Sunday... I knew it wasn't going to get
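For C#, the analyzers being described are Roslyn analyzers, which walk the compiler's syntax tree and report rule violations. The general shape of such a check can be sketched with Python's `ast` module; the rule below (flagging bare `except:` handlers) is an invented stand-in for whatever codebase pattern you would actually enforce:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Toy 'analyzer': return the line numbers of bare `except:` handlers.
    A real Roslyn analyzer does the equivalent for C#: visit nodes in the
    syntax tree and emit a diagnostic wherever a codebase rule is broken."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

The guard-rail value is that a check like this runs on every build, so an agent that drifts from the pattern gets a hard error instead of relying on someone noticing it in review.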
the analyzers right, but I spent my Sunday pulling down the pull requests, like, getting the branches locally and just working with it. And I think I got all but one, because there were, like, 500 errors, because I don't even follow the analyzer rules of my own codebase. So, um, that's a long way of me saying, no, I don't use Copilot and AI in my coding at work, but that's because I'm not really coding at work. Um, some of the tooling that I have at work, in terms of Azure DevOps, because we use Azure DevOps, not GitHub... at least in my org, I feel like Azure DevOps is a little bit behind, and so I feel a little bit spoiled using GitHub and then wanting to use Azure DevOps and being like, eh, it's not quite there. But there's probably, you know, in the
very near future, there's probably a lot where, even though I'm not coding at work, I could use AI to help my team with a bunch of little things, stuff that I don't want them to have to think about, you know, like package upgrades. And we had a brief conversation today, I was talking with one of my employees, and someone was copying a pattern from somewhere, and it was like, wait, that's the wrong pattern. You know, that's a spot in code someone should go fix up. So instead of me being like, hey, go fix that, and, like, where is that in the priority list? I don't know, we've got a million priorities. It's like, we could just go file a GitHub or an Azure DevOps issue, have it open up a pull request with Copilot, go
fix it. It's explainable enough to me, as someone who's not in the code, that as a C# developer, I could go understand how to go move it. Even though I've never, you know, used that part of the code, I'm pretty sure we could tell an agent to do it pretty easily. So I think there's a lot of little things like that where, as an engineering manager, I could take that off of people's plates and kind of help with productivity. So, um, yeah, long-winded answer. I don't use it at work, but that's why, just because I'm not coding. Let's do a quick check on how far we got with this. I think the answer is, uh, not very far, but let's see. Let me try from the git root. Is this still going? No. So, I think, uh, my
whole IDE is, like, frozen. I'm just going to press Keep so it stops lagging. I can't scroll. You probably can't tell, but I can't really scroll in this chat. There we go. Um, I think it just stopped. I don't know how many PowerShell terminals we need. Yeah, but it looks like... oh, I wonder, based on what I'm seeing, if it was trying to go launch, um, [laughter] it might have tried to go launch the actual application and needed Postgres or something. Anyway, that didn't quite work. But how far did we get? Lore controller, console UI. I should have looked at the results first, but does it build? That's always a good test. Okay, it builds. So it might have failed because it was trying to go run it. And it will run, unless it messed up. Oh, okay. So, generate new lore. It's not going
to work, because, I think, like I was calling out, it's actually trying to get the Postgres Docker image running. And good luck. Um, generate new lore. What topic would you like to write about? Um, monsters. Oh, so, awkward user experience, right? I should have maybe picked this first. That's kind of what I thought I was doing. Um, item, right? Um, so, kind of neat. It's not working because of this, which is really funny, because the username is postgres and the password's postgres, and that's what I have it configured as, and it's not working. But anyway, that's TBD. It actually seemed to have coded something. Is it going to work? Like, I don't know. I'm very curious. Can I get Postgres to launch properly? Maybe I was just being silly. Um, one sec. Let me see if I can. It'd be cool
if we can do it, but I also don't have the data ingested, so that might just be a terrible idea. I don't necessarily want to waste people's time, but let's delete that. I'm going to delete this as well. I think you can't see it. I'm not sharing my screen, just because I don't necessarily want things to accidentally pop up. You can look at my face in the chat this way. Um, but I'm going to try seeing if I can start up a new Postgres image and get the password to work. I don't know why it wouldn't, but we'll try again. Optional settings. Just want to double-check the port. One moment. Thanks for bearing with me. Okay, it says it's right. I don't trust it. Um, what is the Postgres... I never remember what this is. Uh, the Postgres environment variable for the password. What
is it? Anyone in the chat know? PGPASSWORD? I'm pretty sure I did this. Let's find out. Okay, we'll run that one now. It's not that. So the AI overview on Google lied to me. It's POSTGRES_PASSWORD. Now I've got to go remake this whole thing again. [laughter] Okay, one sec. We'll do one more attempt here. Thanks for bearing with me. And if this still doesn't work, then I don't know what to say. But that's the password. It's in there. It's in there. So, let's see. What's going on? Ready to accept. Okay, so I'm just watching the Docker image. So, generate new lore. Um, flaming sword of doom. And we're going to do item... ah, okay. Fixed the password. We don't have the schema. [laughter] There's a migration. So why didn't that work? Interesting. Anyway, um, the feature seems like it should mostly work. Obviously, it's missing. We're
not executing the really exciting part, but for the most part, I think it did okay. Um, do I think... like, am I happy with the design of it? Not necessarily. What's this function? This is a new one. Lore functions. So, what does this look like? Again, enum parsing. I'm just not big on enums. Overall, I don't know. I don't hate this. I'm really not a huge fan of commenting every line like this. I don't need a comment for that. Uh, but these are kind of, like, nitpicky things. I think overall that part looks okay. So, I don't know. I think really we need to see this running. And also, a thing that's missing, and I knew it would be missing, is that we don't have tests on this. And I think that's a
big deal for me when I'm building stuff like this is I want to make sure that we have test coverage because otherwise I don't know what we're actually getting ourselves into. And one of the tricky parts is that almost always almost always I have to iterate with the AI and I might have something that's working or mostly working but then I ask it to iterate and then it's breaking stuff. So I like having tests so that when we iterate I can see if it's still uh maintaining functional behavior on the other spots that I expect. So I think that's it. Let me just do a quick little spiel on the YouTube channels and stuff like that. So, let's jump back over to here. Um, I'm going to pull my face back up into normal spot. But this is the newsletter article that we were going
over, right? The last sort of section for, uh, you know, the fourth level, if you will, is really just getting to this point where you have enough guard rails in place. So you have your Copilot instructions or your, you know, your AGENTS.md file. You have, um, a custom agent, which, if you're not familiar with that, there are examples on GitHub; "awesome agents," I think, is the repository, but there are plenty of examples. They're just markdown files that explain behaviors, right? You have documentation. The thing that I mentioned that I'm doing more of is building analyzers for my code to really just kind of force the agent to not make mistakes, or, when it does, to get back on track. But all of this is enhanced more and more with your, you know, more effective prompting and context management. But one of
the really nice things about what I was trying to talk about at this level is that operating in an environment where you have full context is tremendously valuable. So I think about my Brand Ghost situation, where it's multi-repository. We have a front-end repository and a back-end repository, different tech stacks. You know what's really cool? Being able to talk to Copilot when it has full knowledge of both, and I'm saying, I need to go build a new feature end to end, not just from the API back, but, like, I need a user interface for this. I need to know what types of APIs I should build that will be effective for the user interface. It's not just, I built an API and then I give it to one of my teammates, and they're like, that's nice, but the way you built this, I have
to go call it a hundred times to populate this page, like, why didn't you do a bulk one? And I'm like, well, I didn't know how you were building it, right? So with context of both, you can get a lot better results, I find. But this is just coming back to, you know, more effective prompting, better context management, and then guard rails for your agents. And as you can start to put more of that in place, you can feel more comfortable letting agents go do stuff, and have a bit higher degree of confidence that it's not going to do a terrible job. Now, when it is doing a terrible job, if you're like me, you probably get frustrated, but I think what I'm trying to remind myself is, it's not going to be perfect. So, when it's going off the rails, what
is it doing? Can you figure out and understand why it got there? Pro tip: you can ask it. [laughter] It will probably tell you why it's making such decisions. So, I do recommend that you practice that, and I think that you can start to improve your success rate. Um, for example, you can say, hey, I wrote this in the instructions, you know, why did you choose to do that instead of following the instructions? Um, it will probably say, "You're absolutely right." Then you can say, how should I rewrite the instructions? How would you rewrite the instructions so that an LLM agent would follow along, you know, and perform this action as expected? And you might see that it's like, hey, you're talking about the right thing, but you didn't phrase it properly, or you didn't give a good
example, and it would really help do a better job if you had an example. So, I think there's lots of things to practice like that. Okay. Um, Dev Leader is my main YouTube channel for folks that aren't aware. Um, this is where my programming tutorials go, AI coding videos go. So, if you are interested, um, what do we have up here? Yeah. Um, I wanted to see if No, I don't have it up on the on the homepage here, but there's, you know, I did one on um, Claude. I've done one be angry with it. Yeah, [laughter] that's right. Sorry, I don't have the chat pulled up. Um, it's from Jabo. But yeah, the uh there's coding. Here it is. This one here. This one did pretty well. Um, this one was basically setting up Claude to go run multiple agents um with something called
Claude Flow. Since that video was put out only three months ago, Anthropic has, like, multiple agents running in parallel out of the box now, you know? So this stuff's moving so fast that all the stuff I'm complaining about today... in a year from now, we're going to have totally different problems to be worried about with agents and trying to make them work effectively. So it just moves incredibly fast. It's, I think, a really exciting time, but it's all the more reason to sort of pay attention to what's going on. So, um, I do make programming tutorials and videos on working with AI tooling on my Dev Leader YouTube channel. And if you're interested, if you like sort of my teaching style or whatever, if there are video topics you want covered and you're like, "Hey man, I
don't see a video on that." Message me. Leave a comment on a video and say, "Hey, could you make one on this?" I am happy to go make it. I'm always looking for ideas. It makes more sense to me if you're requesting a video, because then I know at least one person wants to watch it, so I'll go make it for you. Um, Dev Leader Path to Tech is where I do resume reviews. I'm a little bit behind, I have a couple sent in that I have to do, but if you're interested in having your resume reviewed, um, send it in. It's totally free. If you watch a video, I have the instructions for how to do that in one of the videos. It's in, like, the first minute, so don't worry, you don't have to sit there. But
I would recommend, if you're interested in a resume review, watch one of them. See what kind of things I'm going to go over. Um, you might even try to make improvements to your resume before sending it over. But like I said, it's free. Um, this is my way of trying to do a little bit of giving back. And it's not grilling, so I'm not roasting your resume or making fun of your resume. The information is redacted. So, you know, if you work at, like, a big tech company or, you know, a household name, I leave the name on the resume, but your information is redacted, that kind of stuff. So, nothing to worry about, but I'm happy to try and help review your resume. Code Commute is my YouTube channel where I do, like, basically five videos a
week. And so, I think, yeah, what am I up to? 383 vlog entries. This is a channel where it's basically Q&A, okay? So, you submit questions, I answer them as best I can. Software engineering, career development, that kind of stuff. Um, so if you want a question answered with a video response, just leave a comment on a video, or if you go to codemute.com, you can submit questions totally anonymously that way. Literally, it emails me and it says it's from me. So, I'm like, I don't, you know... if you forgot to leave or add some information, I can't respond to you, so I hope I get it right. Um, but yeah, I like making videos on this channel. Um, it's topics that I really enjoy talking about. If I don't have questions, I go to Reddit and then I pick topics off
of Reddit, usually from the Experienced Devs subreddit. So, that's a friendly reminder for folks that are watching this. Um, if you read Experienced Devs on Reddit, and there's stuff that you want me to talk about on Code Commute, tag my Reddit account. Or, if you watch Code Commute and you like it, share my videos back on Reddit. I think that would be super cool. Um, this channel is kind of sad with the number of subscribers for how many videos I have, so sharing is caring. I do really appreciate it. Uh, and then, last couple things: I do have courses on Dometrain. So, if you want to learn how to program in C#, I have the Getting Started and Deep Dive courses, but in addition to a couple of other C# courses, I also have some other
ones that are with Ryan Murphy. We partnered up on them. And these are more like, uh, soft skills, career development, promotions, that kind of stuff. So, again, if you enjoy the format of my teaching or discussions, um, there's more of that in a more polished way. It's less all over the place and a little bit more streamlined. So check that out on Dometrain. And then, finally, final thing: Brand Ghost is the product and service that I build outside of my day job. And I use Brand Ghost to post all of my social media content. So if you are here because you came across social media posts of mine, whether that's on LinkedIn, Twitter, TikTok, Instagram, wherever, um, everything that I post is through Brand Ghost. So Brand Ghost is... you know, I was talking a little bit earlier on this stream
about using AI agents to go build features, or to go build the analyzers that are going to go, you know, guide the other agents. Uh, I do a lot of AI development in support of Brand Ghost, but Brand Ghost is the platform that I build for posting content. It is absolutely free for people to cross-post and schedule. So if you're, uh, someone who's considering doing some learning in public and you're like, I don't want to pay attention to, like, what platforms to go post on and stuff, just drop it into Brand Ghost. We'll go post it for you for free. Um, if you are a small business, and you're finding that, you know, staying consistent on social media is a bit of a pain in the butt, we have features, uh, in the paid version
that help you stay more consistent. Again, this is what I use for all of my content. Um, pull up a quick screenshot. One sec. Or, not a screenshot, I can pull up the application. So, come on... my calendar [laughter] in Brand Ghost. Uh, I always love showing this because I think it's kind of fascinating. But you see how I'm going into January, February 2026, March, right? Here's March 17th. My face is in the way, but all of my content is already scheduled. Okay, this is why I love Brand Ghost. This is why I build Brand Ghost. I don't know if it's obvious just from me scrolling super quickly through here, but every single one of these blue scheduled lines is a post to a different social media platform. Right? So on March 17th, I currently have a short
form video scheduled that's going out to 1 2 3 4 5 6 7 8 9 10 11 12. If I count it properly, 12 different social media platforms already scheduled on March 17th. And if I keep clicking, I don't know if I have to prove it to you, but if I keep clicking through, it's already all scheduled for me. And it's not that um you know, it's not like all AI generated content or something and I'm just blasting noise out there. Those are short form videos that I've clipped. Now, will they repeat? Yes, absolutely. They will repeat after x number of weeks of me posting them. Um, I do have a brand ghost YouTube channel where I'm posting a little bit more sort of behind the scenes for people that are interested in content creation, but I was uh the most recent Brand Ghost video.
Can I even pull that up? One sec. The most recent one that I was doing, I was explaining... I had a couple of, like, semi-viral posts. Uh, not the right channel. And the idea was really like, I posted those videos before, so I'm not going to... I just wanted to show you the graphs, because it's kind of insane. Okay, here we go. Um, again, I'm trying to let you know that if you have, uh, like, a small business, or you just want help with being consistent with content creation... Um, can I move this out of the way? That tooltip is kind of annoying. Okay, perfect. So, here's, like, a screen grab of my LinkedIn. Okay. Um, forget the total number of impressions, doesn't matter. It's really this chart that I want to show you very
quickly. So, I had three posts on these dates. They did very well compared to the other posts. It's not like there was no traction elsewhere, but relatively speaking, a lot better. By the way, my LinkedIn doesn't do as well because I don't spend time commenting on LinkedIn, because I'd rather just be coding, to be honest with you. Um, but these did really well. All three of these spikes are posts that have gone out before. They've literally been posted before. One of them, this was the third time posting it. Okay, I'm going to go a little bit further ahead. You've already seen that. Okay, here's Twitter. [laughter] I don't know for folks that are here from Twitter, but I had a couple of posts right around November 2nd, as you might see, um, that are a little bit of an outlier compared to
the rest of the posts. You know what's unique about these posts? Aside from the fact that they performed really well compared to the rest, all of those posts I have also reposted before. In this video, I literally show the timestamps of when I added that content into Brand Ghost. It was in 2024 [laughter] and it's just reposted. I didn't manually schedule it. I didn't do anything. I just create the content, add the content, and let it do its thing. This is why I'm still able to go build software on the side, work my day job, do live streams, and not have to live my life on social media. My social media would in fact do significantly better if I spent more time commenting. I don't, because at some point I'm like, I don't want to live on social media. So
anyway, that's Brand Ghost. If you want any help with content creation, that kind of stuff, or you're interested in Brand Ghost, even the free version, right? It's free because we want to help people do it, and I mean it genuinely. So, if you have questions or you want to get started, just send me a message and ask about Brand Ghost. I'm happy to try and help you, okay? For people that don't know this, I started Dev Leader in 2013, and I started it as a blog for myself when I was trying to be a little bit more, like, I don't know, focused on, "Oh crap, I'm an engineering manager and I really don't know what I'm doing." I'm like, I'm going to write about this. And I did it for a couple of months and I was really excited about it. I'm like, no one
reads my blog. Like, why am I doing this? That was in 2013, and then I just gave up. That's, uh, that's over a decade ago. So, at the start of 2023, I said, I'm going to do this for real. I'm going to make content every single day. And I knew the one thing that was screwing me up before was that it's such a pain in the butt to go schedule and post it everywhere, because I know if I'm going to make the content, I want to share it as much as I can so people can see it. So that's why I built Brand Ghost. So I know what it's like to go create content and be like, "Ugh, why am I doing this? Like, no one's reading it." The fact that you're watching this right now is literally just a product of the fact that I
used something like Brand Ghost to just make sure that I can keep posting. I'm not special, right? I'm not special at all. I have lived experiences. I'm sharing them, and I am confident that you do, too. You might not believe it yet, but I'm sure you do. Even if you're just getting started, there's always someone who is a step or two right behind you. And if you start sharing your learning experiences, they will be going, "Oh man, like this is super cool." There's some stuff that I've done in my career, and if I go to talk about it, people are like, "I don't know. Like, I can't really relate to this guy. This guy's got a bald head and white in his beard. Like, he doesn't know what it's like to be a new software developer," because I've never done
that before. Right? I might not be relatable in that sense. But if you are new and you are getting started and you are learning about things, post about it. Share what you're learning, because I guarantee you, I literally guarantee you, you will inspire someone else who is like, "I also don't know what I'm doing, but this person's doing it," and they'll learn from you. So those are my words of encouragement. Thank you so much for watching. I hope to see you all at next week's live stream. Same time; the topic will be coming from Dev Leader Weekly. So, check out the newsletter. You don't have to subscribe, just head over to Dev Leader Weekly. And I'll see you next time. Take care. Thank you so much.