Keeping AI From Going Off The Rails - Principal Engineering Manager AMA
December 2, 2025
• 38 views
When the AI isn't working, is it immediately just 'Skill issues, bro'? Is it just because you suck at prompting?
Maybe. But the reality is, even as we structure clearer prompts and guidelines, there is always room for error. What tools do we have to keep agents more on track?
As with all livestreams, I'm looking forward to answering YOUR questions! So join me live and ask in the chat, or you can comment now, and I can try to get it answered while I stream.
Transcript
All right, we're just getting going here, folks. We'll wait for things to connect. I apologize. I have like a bit of an eye infection and it's disgusting. I was making YouTube videos on Sunday and I was literally wearing an eye patch. This is the worst. Uh, so I do apologize. Um, but that's kind of how life is right now. So, we're going to be coding today, which is always fun. Um, coding live is always this great opportunity to remind me um, how awkward it is to code in front of people because you get to see all the things I'm messing up and then I get to sit here and stress out about doing dumb things. Uh, but that's just again how it goes. Uh, is my chat not showing up? What the heck is going on? Weird. I don't know why that's happening. Sorry. Uh,
no one's talking in the chat yet, but my chat just isn't doing anything. If I flip over to here. Crazy. Yeah, Restream. Uh, chat just isn't showing up. I can see chat. So, I apologize, folks. I like having the chat on the screen so you can see the messages coming in. I don't have that. So, I'm going to try to pay extra close attention. Um, yeah, dope pictures like that, man. Just got home, uh, done at the gym. This is Devon. And now I'm, uh, wondering whether I sanitized enough. Yeah, like literally I think I get eye infections and stuff from the gym, and it's almost always from doing this with my shirt after sweating, like rubbing, just to wipe sweat off my face. And there we go. So, I have to deal with
that and hopefully it's gone right away. We're going to be we're going to be coding though. Um, I just wanted to before I jump into the actual code, I wanted to go to Substack and pull up a newsletter article. Um, just to make sure that if there's like some meta points before we dive into coding that I can touch on those. I originally had a very hard stop at the top of the hour, but uh, things kind of moved around, so I can go a little bit over. Uh, I still try to wrap it up, but um, I want to spend most of the time coding. So, let me pull this up here. Okay. Um, one of the things, let me flip over to this so you can see what's going on. Uh, yes, I used a thumbnail from a recent video and my video
editor, uh, I find the best thumbnails that he makes are the ones where I look ridiculous in them. So, that's not actually me in case you were wondering, but um, I'm approaching uh, this level of physique. So, um, soon. But, um, this newsletter article, I should put it into the chat so folks have it. I know folks on Substack, you already know where it is cuz that's where you're at. Um, I think that I tried to capture as much as I could around uh, do I want many viewers? Yes, let's block you. I guess you guys can't see the chat, so that's awkward. Um, but yeah, I get people spamming in the chat. So, this newsletter article and this topic again came from Code Commute. Um the things that I've been kind of focused on more recently in my own software development have been around
I use a lot of, uh, you know, agentic programming, uh, some vibe coding. I am leaning into the AIs. But, uh, no, genuinely, I've been trying to use it more and more, and one of the things that I've said on Code Commute over the past, I don't know, almost 12 months now is: some of these tools... I mean, everything's changing so fast, but some of these tools are working well for me, some aren't. And instead of just writing them off when they're not working, it's like trying to learn and understand more about them, right? Like maybe it's not a good fit right now, maybe things will evolve. Um, and then trying not to get totally blocked on what's going on. But one of the things I've been arriving at, and I've been kind of joking about this on social media
in different capacities. I'm sure for most people watching this, if you are on social media and you are in any tech space on social media, you will see people talking about AI, of course. But the other thing that you'll see a lot of is that you get people on completely different ends of the spectrum, for supporting AI or, uh, being against it. And from the people that support it so strongly, not only do you get the sort of posts that we're all totally burnt out from, like, oh, software developers won't exist in 5 months (it's been five months, it's been longer than five months, we're still here), not only do we get that, but we get people that will make kind of absurd claims about things. So they'll say like, "Oh, you know, I used one prompt and I wrote like 10 million lines of code, and I am a software AI god now." And it's like, dude, this technology has not been around long enough for you to claim the mastery level that you have. But what happens with this kind of stuff on social media is that there's a lot of almost superiority complex, where it's like, well, you must just be stupid if you can't do it. And I really hate that. I think that's not helpful at all. I think that, uh, people are embellishing things a lot, but I genuinely think if you have things that are working for you, like, awesome. I'm trying to get there myself. But, um, I think when people just jump to absolutes... it's really hard to have a conversation with anyone that's speaking
in absolutes. So I try to avoid it. I don't love it. But we keep seeing people that are like, "Oh, you just don't know what you're doing. You just can't prompt properly, or it's just your instructions file is wrong, or just your custom agent's wrong, or you're just using the wrong model." It's always like, you just must be a dummy. And I don't like that. And the more and more that I'm using this stuff, the more and more I'm learning, which is great. And I hope all of you are kind of, um, doing this, right? I hope that you are trying to improve on things, and that's great. But what I'm starting to notice is that even when I... you know, some of the examples that we're going to code today, I literally have lived examples where in my prompt I say do not do this. In my Copilot instructions, I say do not do this. In my custom agent, I say, "Do not do this." In the documentation, in the project, I say, "Do not do this." Guess what the agent does anyway? Like, how many times do I have to put the context in? You know, it's ridiculous. And so, I'm not saying that those things don't help, but what I am saying is, almost no matter how hard we try, there are still opportunities for things to go off the rails. And I want this to be sort of a meta point for you to think about. It's actually the same thing with people, right? We can put all these things in place. We can try to force people into the pit of success, but there will always be these opportunities where it's
like, oh, you know, we put all these automated tests in place and someone still managed to push code in and break the build. Like, how? There are still ways that this kind of stuff happens with people. So anyway, just want that to be a reminder. In the chat, uh, Baron Bite says, "Old school example of embellishment: people said we won't exist in 2000 because PCs." Yeah, Y2K, right? Because PCs wouldn't be able to handle all the zeros, yeah, when we wrap around from 1999 to 2000. That's it. Uh, Deon says, "I like how ThePrimeagen puts it: we're 28 months into AI taking our jobs in six months." Exactly. Don't get me wrong, things are changing. Things are rapidly evolving. Tooling's changing, new models, all this stuff. All of this stuff is moving and changing, and
you know... I've been joking in some of the videos I'm making that by the time I finish this sentence, the number of tools has already doubled. Things are moving fast, but, um, we stay on top of this stuff. So, what we're going to vibe code today is primarily for .NET developers. We're going to be looking at C# code, but I don't want that to scare people off if you're like, "Oh, I hate C#. These damn Microsoft guys and their C#." I used C# long before I worked at Microsoft. But, um, the concepts... we're going to look at Roslyn analyzers, and, um, the concepts are what I want you to take away. And that means that while we're making a Roslyn analyzer, if you don't know
what that is, it's something that will, uh, effectively give us static code analysis. So we can inspect the structure of the code with custom rules that we'll define, and we're going to vibe code them. That's why it's awesome. So we're going to vibe code analyzers that can check our rules in our code, and then we can enforce these things so that the code won't even compile if our rules are broken. And the goal is that when we put more guard rails like this in place, we can have agents stay on track. So if you're using a different language or, you know, technology stack and you're like, well, I don't use C#, we don't have Roslyn analyzers... uh, I have a linter. Awesome. Start with your linter, right? Make it so that you can't progress if the linting is off. So, you
can't, uh... if you're trying to do a release build of something and the linting is not working, no dice, you know. Uh, commit hooks and stuff like that: the linting's busted, no dice. Can you do more advanced things than just your standard linter? Like, I don't know, depending on what you're using, what other types of static analysis do you have where it can inspect the structure of your code? And so I would kind of recommend, if you're not a C# developer, looking at what things like that are available to you in your tech stack. Um, the two analyzers we're going to look at today are very simple, and I want to give you a heads up that I actually recorded YouTube videos for both of these, um, on my main channel, Dev Leader. Uh, they'll go out either this week or the week after, because I actually managed to record videos... the one weekend I'm like, I'm ready to go, and I have this stupid eye thing going on, and I'm like, I've got to be a pirate for my YouTube videos. But anyway, I got them recorded. Um, so if you kind of want to recap on this, or you just want to watch the edited-down version, um, you can check that out on my main YouTube channel, Dev Leader, when that's all set. Okay, enough blabbing. Who wants to see some code? I do. I do. Um, okay. Let me just clean some of this up, cuz before I share my... oh, I'm already sharing my screen. Don't look. Don't look. Don't look. Okay. Um, I just wanted to get rid of some of the sort of fixes that will spoil it. Um, because like I
said, I made some YouTube videos to go through this. Another reason, too, by the way, is that some of the AI stuff, when I'm making videos... because it's not perfectly deterministic, it's a bit of a pain in the butt when you go to make a video and you're like, "And... go." And then the agent does something completely different, and you're like, "Okay, how many times do I want to sit here and prompt this thing and try to do a take of it working?" So, I'm trying to structure my videos in a way that, um, the AI can go off the rails and we will work to bring it back. So, some of these videos are a little bit longer, too. Uh, Devon says in the chat, uh, that's one area where C# and Visual Studio makes me happy. Agent mode likes
to at least make sure everything compiles, and VS Code on a JavaScript stack just kind of yolos the changes. Yeah, some of the minimum-bar expectation stuff is kind of silly, because I can't wait for like a year from now, two years from now... I'm hopeful that all the stuff we're dealing with, where it's like, how is this thing literally not even trying to compile the code, is something that we're telling our kids and our grandkids in campfire stories. We're like, "Back in my day, my agent wouldn't even compile the code," and they'd go, "Oh my goodness." And they would say, "Tell us, Grandpa, what's code?" Because they don't even know. Um, but yeah, it's a bit of a pain in the butt. Um, you know...
I've talked recently about... I was trying out Cursor and agent mode. I really love the ability to kick off work from my phone and just, you know, fire off like 10 features and bug fixes before I go to bed or before I drive to the office, and then check them at the end of the day or when I wake up in the morning. Like, I love it. And, uh, you know, I'm kind of getting used to: it gets it 80% of the way there and I finish the rest. That's what I was doing before streaming here. And if we have time, I'll talk about that. But Cursor in agent mode doesn't seem to have... and maybe I can configure it somehow, but whatever containers it's using, you know, the images for it, there's no
build tools from Microsoft there. So it builds stuff, and I find Cursor does a really good job, but when it doesn't have that feedback loop to at least compile, it's like, come on, man. Think about it. Back to my analogy from earlier that I want you to keep in mind: a lot of this stuff, it's like, think about people. What would people do? Would you literally use a language that compiles, write code, and go, "I think it's good," not compile, like a complete lunatic, and then push it up for code review or just push it to the build system? Like, no. If you're in Visual Studio, you're pressing F6 or F5 at least once. You have to. It would be nuts to not do that. So when you have these agents building things, just going
off and doing whatever, and then they're not... when I say building, I mean writing code... and then they're not building it, they're not compiling it: what are we doing here? So anyway, um, that's a fun thing for sure. Jaban, good to see you. Code, code. Um, let's jump over to it. So, I'll flip back over here. Sorry, the text is going to be a little wonky. Um, and that's because usually when I am doing YouTube videos... this is a 48-inch monitor; I go down to 1080p, uh, for my resolution to record. Um, but when I'm streaming, I'm using this monitor as four monitors right now, so I can see what's going on at the same time. So, I apologize if some of the other text is very small. Uh, sincerely, I'm sorry. I don't have a better way to do this in my
streaming setup right now. Um, let's do a new chat. And what I'm going to walk through is that we have a very simple program. Um, the context of this is literally that I just wanted a class with some methods that have some boolean return values. By the way, in the chat, uh, let me know if the size is good, if you want me to zoom in or zoom out. Happy to do that. Um, I had some feedback on a YouTube video recently; someone said that the size was a little bit off, but it's always recorded at the same resolution, just different, uh, zoom levels, so let me know. Um, but yeah, basically kind of just some dumb code. Uh, this was literally just tab tab tab and Copilot putting in dumb stuff. It's totally cool, because we don't actually care
about what this class does. Uh, then I asked Copilot to go write tests for it. Again, it's a silly class. It wrote the tests. Um, I just want to call out something that it did, if you look at these two sets of tests. Again, for folks that aren't familiar: a Theory is an xUnit attribute, and a Theory is just a parameterized test. A Fact in xUnit is just a test without parameters. So you can see that this is a Theory, because it's marked as such and it has this int input; it has a parameter. What it did originally was basically combine these. So it's like IsNumberEven: there's two tests for IsNumberEven, one that returns true, uh, and one that returns false. And it basically just had true and false as parameters as well, and then used Assert.Equal. I want it to be Assert.True and Assert.False for the second half of this video, because I want us to build a different analyzer. So, um, you know what? Let's start with that. I'm just looking at the code now. Let's start with that analyzer, because otherwise, to demonstrate what I want to do, I'm going to need Copilot to rewrite the tests for the other analyzer. I don't want to do that. We're going to start backwards. So, that's what we are going to do, because all these tests are structured the same way. I'm not sure if folks are familiar with this, but you can see I have at least one failing test, because there's a red X here. If I go run these, it should fail. Sorry, I know this is the boring part, but I just want to set things up.
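To make the Fact/Theory distinction concrete, here's a minimal sketch of the two styles being described. The FunWithNumbers and IsNumberEven names come from the demo project on stream; treating the method as static is my assumption.

using Xunit;

public class FunWithNumbersTests
{
    // A Fact is a test with no parameters.
    [Fact]
    public void IsNumberEven_WithZero_ReturnsTrue()
    {
        Assert.True(FunWithNumbers.IsNumberEven(0), "expected 0 to be even");
    }

    // A Theory is a parameterized test; each InlineData row is one case.
    // This is the "combined" style described above, using Assert.Equal.
    [Theory]
    [InlineData(2, true)]
    [InlineData(3, false)]
    public void IsNumberEven_ReturnsExpected(int input, bool expected)
    {
        Assert.Equal(expected, FunWithNumbers.IsNumberEven(input));
    }
}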
Okay: two tests pass, one fails. Uh, sorry, the text is probably tiny. Um, but this is the thing that I cannot stand. This is a personal preference thing; you're welcome to disagree. That's totally cool. Uh, the goal with this is to show you that we can write analyzers and you don't have to know how to write them. So, when we have an Assert.True or Assert.False fail and you don't put a message... okay, take care, Devon. Um, Devon's got to leave. So when we have Assert.True or Assert.False like this, when it fails, the message is the most unhelpful thing ever: "Assert.True() Failure. Expected: True. Actual: False." Thank you. That's literally the only way that Assert.True can fail, um, with a simple test like this. And if you do try to have single assertions, uh, in your tests
(I certainly don't live by that rule), this probably feels like nonsense. But, um, when you have more complicated tests with more assertions and then you see one of these popping up, how the heck are we supposed to know what's going on? So, I personally, in my own projects, have an analyzer rule that I vibe coded that will force this behavior, right? So, it puts a message into here, and that means that when this fails, we'll get an actual helpful message. I mean, the one I wrote there is certainly not helpful, but hopefully you get the idea in just a second, right? So, we can get a message when it fails. And that way, when you're like, "Oh crap, my tests are failing," and you're looking at your CI/CD pipeline, you can see why, instead of going, "Okay, there's this test
and it got false and it wanted true." Like, what are we talking about? So, this is something I like to enforce, and we're going to vibe code it. So... oh man, I hope my microphone is working. One sec. Um, in my last YouTube video I recorded yesterday, my mic setup stopped working, which made it really fun. So, one sec. I'm going to try somewhere else, because I think it's taking from my clipboard. Testing. One, two, three. Oh, it works. Okay. When I was making the YouTube video, it was just pasting random things from my clipboard instead of transcribing what I was saying. So, bit of a pain in the butt. So, we're going to go over to agent mode. Let me get rid of this. And I'm going to ask... let's use GPT-5. You're welcome to use whatever you'd like, but I want to show
you that we can go make Roslyn analyzers with tests associated with them, and then have it fail our build when the rules are busted. Let's try it out. So, um, full disclaimer: I have 34 Roslyn analyzers in Brand Ghost. That is a product that I'm building, and I have vibe coded every single one of them, and they have all helped me tremendously with trying to keep my agents on track and enforce coding standards that I prefer, ones that aren't simple like naming; they're a little bit more advanced, and this is one of them. So, let's try it out. Let me get my foot on this pedal. I would like you to create a Roslyn analyzer and associated tests in a new project. The Roslyn analyzer should report errors for Assert.True and Assert.False when a message is not provided as the
second parameter into the method. We'll start with that. I want to read it back again. Text is probably tiny, so I'm very sorry. Uh, I'd like you to create a Roslyn analyzer and tests in a new project. The Roslyn analyzer should report errors for Assert.True and Assert.False. Okay. Um, when the analyzer reports an error, it should have a helpful message that tells the developer that it needs to provide a descriptive message as the second parameter for Assert.True and Assert.False. Um, a heads up on why I'm saying this: I noticed I had an agent that was listening to my analyzer, and it said, "Oh, you need me to put a message? No problem." And then it literally wrote in something like "unexpected value," and I'm like, "Come on, man." So, this is part of
the prompt that I really think you can tune up, right? I'm still being kind of generic. You can say what exactly you want it to be, but I'm not doing that. Hey, Kenneth, welcome from LinkedIn. Good to see you. Wonder if this chat's going to come back, ever. No, it's still... it says in, uh, in OBS that your chat is totally displaying. It's not. It ain't there. Um, can I like drag this onto the screen for a sec? Like, there's chat. It's real. Um, I promise. Okay, so we have this part of the prompt, and then the other thing that I want to do is enable this analyzer for the console app tests project. And then I want to double check this. Um, so what's the next thing I want to say? Basically, I want to make sure that
it's going to build. I feel like Visual Studio is a lot better now with, uh, their latest updates; the agent mode seems to try and build at least unprompted, but let's try it. When running MSBuild on the console app test project, once the analyzer is enabled, I expect to see compilation errors, because the Assert.True and Assert.False calls currently do not have a message parameter. So, we'll try that out. I'm going to fire this prompt off. Uh, for all of you that are prompt wizards and prompt sorcerers and level 99 prompters: sure, I could do a better job prompting. We'll get there. Um, let's see what the plan says. Create an analyzer project. That's right. Create analyzer implementation. Yes. Uh, rules. That's just building the analyzer. There's a diagnostic descriptor. If you're not familiar with what some of these terms mean, the
diagnostic descriptors are going to be the things that tell us, when we're building, what's going wrong and what it can do better. It knows it needs to make an analyzer test project. So that's right. Add the unit tests for the analyzer behavior. One opportunity that I see already to improve some of the prompting that we did: what tests do I expect? I said to add tests, but I can't be disappointed if it adds a test project and pretty useless tests. I didn't specify. So, that's, you know, a little lesson for us to reflect on. Reference the analyzer in the console app test project as an analyzer. That's an important note. Um, I don't want to forget to call this out, so I'm going to call it out now. Sorry, it's trying to build and it's flashing stuff up in front of me. Um,
reference the analyzer on the console test project as an analyzer. If you don't do this, then it doesn't apply the analysis properly. So, you have to add it a special way. There's just another flag that you have to put in. I'll show you that when it's done. And then run the build to validate it. So, this all looks good to me. I feel good about that. Hello as well on LinkedIn, Despina. I believe that's how to pronounce your name. And yes... sorry, language is not... I think you're saying good day, and I think the last word is December. So yes, we've just started December. Lovely. I remember when it was summer, and now it's dark at like 3:00 p.m. That's okay. It'll brighten up soon. Okay. Come on, Copilot. Is it done? Oh, I think it's done.
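For reference, the flag in question is OutputItemType="Analyzer" on the project reference. A sketch of what that looks like in the consuming test project's csproj (the project name and path here are illustrative):

<ItemGroup>
  <!-- Without OutputItemType="Analyzer", the analyzer project builds,
       but its rules never run against this project's code. -->
  <ProjectReference Include="..\Analyzers\Analyzers.csproj"
                    OutputItemType="Analyzer"
                    ReferenceOutputAssembly="false" />
</ItemGroup>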
I was expecting to see this last thing kind of finish off, but I guess it didn't update the plan; it looks like it's done running, though. So, um, it didn't quite work. And this happened when I was recording the other day. What it's doing right now... let me flip back just to make sure, when I open my file browser, that it doesn't go to the wrong spot. Um, what's happening is that it made the analyzer project, I'm pretty sure; we're going to confirm. Um, but it didn't refresh Visual Studio to add it into the solution, which is a bit of a problem, because we need it in there in order for the analysis to get picked up in Visual Studio. So, I'm just adding those two projects in. Give me one moment, because it made them. I see them both. Okay, back to here.
We'll get rid of this task list. So, we have the analyzers and analyzers tests. Let's have a look at the tests. Tests are always fun. Let me keep that. So, um, the tests don't pass. So, I mean, they don't compile. That's not a good start. Come on, Copilot. You had one job. Oh, there's so many compilation errors. Okay. And what's the analyzer look like? Let's just keep that so we can read it. So... ah, "feature 'target-typed object creation' is not available in C# 7.3. Please use language version 9 or greater." Interesting. So this is not using features that are recent enough. And, kind of funny, it's using .NET 8. I'm cheating a little bit: this isn't totally vibe coded. I'm just trying to see if we can speed some things up a little bit. Um, still not going to work. That's
because... we need to set the language version of the project. Let's vibe code that. So, I'm just going to tell Copilot: the analyzers and analyzers test projects that you created do not currently compile. Fix the compilation issues so that the analyzer works, and then run the analyzer tests. Another thing that I just realized is that, um, I did not explicitly tell Copilot that I have an existing test project that's using a particular version of xUnit. I didn't give it those details. Should I expect that it did the same thing? Like, I would love to, but I don't have a high degree of confidence. So, as an example, these are the console tests that I had. Uh, coincidentally, this is the leftover one from before. Let's get rid of that. I will keep that. Sorry, it's a little
messy, hard to read. But this is the analyzer. I'm just going to start pulling some of this down so you can read it all at once, but this is the new project that it added. It added the output item type as analyzer. "The speech to text is the ultimate vibes." Yes. Um, I prompt a lot, uh, via talking, and sometimes I catch myself, especially when I'm going back and forth, being frustrated because agents are doing dumb things. Uh, this is how bad it gets: I feel like I want to hold Shift and type in caps to yell at the agent, and I'm like, wait a second, I can use my foot pedal and yell at this thing. That's not really the right response to have. It should be: maybe you should prompt better. Maybe you should take
a break. But sometimes I'm sitting here like, why are you so dumb? Um, I think it feels good, so I do it. But I'm not suggesting that's how you live your life. So, uh, I do really enjoy having the speech-to-prompt. It's very nice. But yeah, you need to have this output item type as analyzer for that to take effect. But this one is .NET 10, xUnit v3, right? 3.2.1 for the version. What's this one that it added? So it switched the tests back to .NET 8. Why? I don't know. I didn't tell it to use .NET 10. It probably thinks that .NET 10 is in preview mode, based on my earlier conversations with it. So that's annoying. And then xUnit: completely different versions, right? I said it was 3.2.1. It's not. So is that going to matter?
It might not, right? These tests will run. But in a real project, what I should have done in the prompt was say to refer to the existing tests and use the same packages. We can use central package management, so we can define these in one spot. So that's a bit of a miss on my part. Um, currently Copilot is just going at it, and it's having a real difficult time. Let's see what it's doing here. It's adding in Verify... like, I don't even know what Verify is doing here. I guess whatever training data it picked up on this time is just... oh my goodness. Oh, look at this terrible nonsense. This is the third time that I'm doing this, by the way, and just for reference, this is by far the worst one that I have seen it
do. Um, this is reaching a point where I see so many things failing, and this test does not resemble anything useful. I would personally start a new chat, or I would flip the model to probably Sonnet 4.5, because that's what I have there, and I would just tell it to start again. Um, because this is terrible. This is pretty disappointing, in my opinion. And not only do I see all of this back and forth, like it's trying and failing, trying and failing, but another giveaway is when it starts slapping in all of these different, um, packages. Okay, at this point it's actually trimmed some of them out. That's great to see, but I don't have a high degree of confidence in the way this is going, cuz see how it's adding them back in?
Like, it kind of feels like it's just trying random stuff. And this is what I would probably call stupid mode. And I don't mean that to be condescending to Copilot; I use Copilot non-stop. Um, this is just a very terrible track that it's going down. We don't need Humanizer added into this at all, right? So what I'm going to do is stop this, because this is pretty embarrassing. Um, let's just keep these for a second. I'm going to keep everything. And then I'm going to delete the tests, because they're terrible. And then... let's take this. Actually, let's take the whole top. I'm just basically setting up this analyzer to not be in the worst shape ever. So, at this point, there we go. Um, it's set up to be the same, right?
There's no tests in it. So, I'm going to keep the same chat. I'm going to change the model. And then I'm going to say: add the tests for the assert message analyzer. Very generic still. I'm hoping that it can just add some test coverage. While it's going to do that, I just want to make sure it starts. I've seen far too many times now where we go to kick off an agent, or I think I am, and then I come back a couple hours later and it's like, "Do you want me to start?" And I'm like, I wanted you to start hours ago, man. Like, what are you doing? Um, okay. It's editing. Great. I wanted to go jump... we haven't even looked at the analyzer, right? That's the beauty of this.
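For context: analyzer tests are typically written against the Microsoft.CodeAnalysis.*.Testing packages. The stream doesn't show exactly what Copilot generated, but a hand-written version might look like this sketch, which pairs with the AssertMessageAnalyzer sketch shown a bit further down. The {|VC1:...|} markup marks where the diagnostic is expected, and the small Xunit stub keeps the sample compilable inside the test workspace without pulling in the real xunit package.

using System.Threading.Tasks;
using Xunit;
using VerifyCS = Microsoft.CodeAnalysis.CSharp.Testing.CSharpAnalyzerVerifier<
    AssertMessageAnalyzer,
    Microsoft.CodeAnalysis.Testing.DefaultVerifier>;

public class AssertMessageAnalyzerTests
{
    [Fact]
    public async Task AssertTrue_WithoutMessage_ReportsDiagnostic()
    {
        const string source = """
            using Xunit;

            namespace Xunit
            {
                public static class Assert
                {
                    public static void True(bool condition) { }
                    public static void True(bool condition, string message) { }
                }
            }

            public class Calculations
            {
                public void Check()
                {
                    {|VC1:Assert.True(1 == 1)|};
                }
            }
            """;

        await VerifyCS.VerifyAnalyzerAsync(source);
    }
}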
And in the YouTube videos that I put together, that's kind of the funny part: we go through the whole thing and it's like, we haven't even seen the analyzer code, and it was working. So, it's kind of cool to see. But this is a Roslyn analyzer. Um, I am very transparent about this: at this point in time, could I read what this analyzer is doing? Can I explain what's going on and make sense of it? For the most part, but there's a bunch that I can't. So, I know how descriptors are used. This is so that you get different error messages and stuff in MSBuild and in Visual Studio; it's how these definitions work. Um, there's more to it, because you can version them for NuGet packages, all sorts of stuff. By the way, I'm saying this to kind of quiz myself, just so
you know. Um, the entry points and the usage for analyzers are a little bit weird. I see that they almost always have SupportedDiagnostics and an Initialize method, and then this is really where all the logic is. And this is where it gets kind of weird for me, because with analyzers we can sort of navigate the syntax tree of our code, and that includes things like comments, which is kind of neat, right? So we can kind of see what's going on; we can traverse this tree of information. But I don't have experience with the different things that we have access to. So all of my experience with analyzers up to this point has been purely vibe coded, and now I'm in a position... I think I said earlier, I have like 34 of these analyzers in my codebase.
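Since that shape (a descriptor, SupportedDiagnostics, Initialize, and a callback with the logic) is the whole pattern, here's a minimal sketch of what an analyzer like the assert-message one can look like. This is a reconstruction, not the stream's actual code: the VC1 ID matches the diagnostic shown later, but the wording and the purely syntactic check for an identifier named Assert are assumptions.

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class AssertMessageAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "VC1",
        title: "Assert.True/Assert.False requires a message",
        messageFormat: "Provide a descriptive message as the second parameter when using {0}",
        category: "Testing",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;

        // Purely syntactic: looks for Assert.True(...) / Assert.False(...)
        // with fewer than two arguments. A real rule would consult the
        // semantic model to confirm this is actually xUnit's Assert type.
        if (invocation.Expression is not MemberAccessExpressionSyntax member)
            return;

        var name = member.Name.Identifier.Text;
        if ((name == "True" || name == "False")
            && member.Expression is IdentifierNameSyntax { Identifier.Text: "Assert" }
            && invocation.ArgumentList.Arguments.Count < 2)
        {
            context.ReportDiagnostic(Diagnostic.Create(
                Rule, invocation.GetLocation(), $"Assert.{name}"));
        }
    }
}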
I have a great excuse now to go understand them better. Um, as someone who creates content and stuff, that's something that I wanted more experience with. It's not coming up in my regular work, I have other things to make content on, so I'm just not getting around to it. So now I think I have that chance. Let's go check our tests. This is failing terribly again. This is very embarrassing. Um, "All the tests pass and the analyzer works as expected." Um, that's a really, uh, interesting claim, because the code doesn't compile. Okay. Um, I'm opening up a new chat, because I feel like that one's just poisoned. The assert message analyzer tests do not currently compile. Fix the compilation errors and run the tests. I'm, uh, purposefully not giving it a lot of information, because I have already had success multiple times with it
building these analyzers out and writing tests for them without much information. So, I'm just trying to demonstrate that it's doable. But one of the meta points that I was making in my YouTube videos on the weekend was that if you don't know what you're doing, do you see how painful this is? It's a pain in the butt. So, this is the kind of thing that like if you are finding yourself in a similar spot where you're like, man, like I spend so much time just like trying to fix dumb errors and what's going on, like understand it better. The better you understand it, the more that you can convey it to the LLM. So, like why is this stuff not working? My only guess right now is like it has the wrong packages. I have a feeling personally it has a lot to do with
this up here, with .NET 10. And the reason I think that is because when it's pulling in packages... I don't know if you recall from a little bit earlier, it pulled in older versions of xUnit. Why? Well, it did that because it was using an older version of .NET. It was using older versions of xUnit. Why? That's what it's trained on, and I didn't tell it explicitly. So, it's also trying to live a little bit in the past, and it's pulling in some older stuff for, uh, code analysis. It says it worked. Oh my goodness. Look at that. We have tests in here. It's beautiful. Can we run these tests? It said that they passed discovery. I see them. I see them. Let's run them. I want to talk a little bit more about the test that we're going
to see the result of... oh, they don't pass. Come on, man. What's going on with these tests? Did it say that it... oh, I don't think it ran the tests. It says that they build, and... oh, and it says it executed dotnet test. None of the tests pass. I'm just going to leave it at that and see if it will do something. Um, the analyzer might work, though, right? The analyzer might work. It might just be that these tests are pretty terrible, and that's okay. I want to go back over to this. Oh, come on, it jumped me to it. I'm gonna undo the changes in this file. It's very annoying when Copilot's doing that. But you'll notice something interesting, right? All of these now have, um, a red squiggly. And they have a red squiggly because the analyzer is running.
So, I'm sorry, because Copilot keeps flashing stuff up when it's trying to run things, and it's closing the tooltip and all that, but, um, on my screen (I realize the text is small) it says that the error is VC1: provide a descriptive message as the second parameter when using Assert.True. What I'm going to do, after Copilot finishes hijacking my IDE here, is use a different Copilot technique and ask it to go fix these up. And we're going to see if it genuinely writes descriptive messages into there or not. So, this analyzer is not super fancy. It's not really all that exciting. Point is, I just want to show you that you can do this, and we can vibe code a lot of stuff, and you will do even better if you know what's going on. Um, these tests
still don't pass. Um, I love, too, that Copilot's like, "I am GitHub Copilot." Thank you so much. Thank you so much for letting me know. I don't know who at Microsoft I need to talk to to say: take that out. We know it's GitHub Copilot. Um, fix analyzer test project reference. Cool. Updated that. Adjusted this one. Okay. Built successfully. Tests execute via dotnet test. Want me to... okay, that just doesn't make sense. So, um, I'm just going to say that all the ones that say "report error" don't pass. Actually, let's leave that, because we can see this analyzer going. That's just going to be a vibe coding circle of life. So, what I want to do is show you that we can highlight all of this. I'm going to right-click, Copilot actions... oh, no, that's not... where's the keyboard shortcut... it's not that. It's
Alt+/. Alt+/ is Ask Copilot. Is that the same thing that was up? Oh, it's at the top. My apologies. I'm just losing my mind. Okay. So, um, I'm just going to say it to it: the Assert.True and Assert.False methods in this class are failing with compilation issues. Address the errors. Very generic. Again, my goal with some of these prompts is to leave them generic and to see: are we putting guard rails in place that can help us with generic prompts? Because if you can keep refining your prompts and doing a better job, that's great. But what I said earlier was: you can have really good prompts and AI will still mess up. How do we keep it on track? Let's go. Uh, when we do the second analyzer, it should go faster, I hope. Uh, we might
skip the tests on that one just to save time, because apparently we're having a bad time with vibe coding tests today. But I want to show you, if it works like my YouTube video did, where we watch Copilot write it wrong the first time: it will try to build, and it goes, "Oh, I need to change that," because the analyzer is catching it. So, this I feel is pretty descriptive. We're gonna tab through and read through the changes, but it went through, and for every single one of these: "expected IsNumberEven input to be true but was false." That's pretty descriptive to me. So, let's go run this test and see how much better that feels for us, right? I know it's in the test name. I know this is a silly example, but when this test fails with "expected IsNumberEven(43)
to be true but was false," now I know exactly what's going on, right? If you think about some of the tests that you have in your projects, this might be something fun to explore. You might have other things that are a pain in the butt for you, and you can go write analyzers like this. So, we're going to do one more, and I'm going to show you what that looks like. So, we'll start a new chat. We'll keep it. Thank you so much, Copilot. You're the best. Um, okay. I'm going to ask this thing to write tests for us following the arrange-act-assert pattern. Okay, this is my favorite. So, um, it's pretty common to write tests with arrange-act-assert. If you haven't heard that format before: the beginning part of a test is where you're setting your test up, then you act, so
you are exercising the system under test, and then you assert: you've done something, so what are we checking afterwards? Okay, so: arrange, act, assert. What I really don't like (personal preference; if you like doing this, totally cool, I'm not stopping you, unless you're writing code in my codebase, because I have an analyzer) is when we have comments for arrange, act, assert. I don't need the comments. I think they're polluting. I think they're a distraction from what's going on in the test. I know that it's following arrange-act-assert, because every single test that I've written in memory has been arrange-act-assert. The comments aren't adding anything for me personally. You're welcome to do anything you would like.
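As a concrete picture of that preference: the same arrange-act-assert structure reads fine with blank lines instead of comment markers. A sketch, again assuming the FunWithNumbers demo class and a static method:

[Fact]
public void IsNumberEven_WithOddInput_ReturnsFalse()
{
    var input = 43;

    var result = FunWithNumbers.IsNumberEven(input);

    Assert.False(result, $"expected IsNumberEven({input}) to be false");
}

The three phases are still obvious (setup, the call, the check); the blank lines do the separating that the comments were doing.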
So what I'm going to do is say: I would like you to add tests for the FunWithNumbers class following the arrange-act-assert pattern. Okay. And you might say, "Well, Nick, that was a dumb prompt, because you just told it to follow that pattern and you didn't tell it that you don't want comments. Like, that's your fault." You are right. But what I said earlier in this livestream, if you were on at the beginning, uh, or you're watching the recorded video, um, what I said was: in my own development, I put it into the prompt. I put it into the Copilot instructions. I put it into my custom agent. It's in the documentation of my codebase. None of the other tests have comments that say arrange, act, assert, and it will still sometimes do it. I don't know. So, I prevent it with an analyzer. So, we're going to hopefully see that it writes tests. Where did this thing go? Aha,
there we go. You can see that it did what I don't like. Uh, arrange, act, assert on every single thing. Uh, also, I said this earlier, but you can see it's using theories with two parameters, which is what it did for me the very first time I put all this together. So we get arrange-act-assert, arrange-act-assert... like, I know it's arrange-act-assert. I don't want the comments. In fact, I want to prevent them from ever happening. So let's do that with an analyzer. Where's my foot pedal here? Um, let's try it out. I would like you to add a new analyzer that looks for arrange-act-assert comments inside of tests and marks them as an error. We do not want any tests to have arrange-act-assert comments. See what that looks like first. Um, I feel
okay about that prompt so far. Um, I'll be a little bit more explicit: add the new analyzer to the existing analyzer project. You do not need to fix the FunWithNumbers tests. And we should see that the arrange-act-assert comments that are present are marked as errors. Cool. I'm just trying to constrain it a little bit so that we can go clean that up ourselves and see it working. So, um, the follow-up, when we see that that's all working, is that now that we have the analyzer, I'm going to copy and paste the exact same prompt we used when it wrote the arrange-act-assert comments, and we'll see if we can repeat what I saw yesterday, which is: it puts arrange-act-assert comments in, and then it catches itself thanks to the analyzer. By the way, chat went
very quiet. Do folks have questions about what's going on, or do you want to see follow-ups? This is a good opportunity, because if I don't have time to walk through it in this, uh, livestream, I will make YouTube videos all about it. I'm happy to try my best and, you know, be the guinea pig, try things out, and report back and have a video for you. So, um, just let me know. Ah, I think it's working. I can see some fun information in the output window. Okay, I think it's going to report that it's done. Ready? Where are my tests at? Aha, there we go. So, this is cool. Um, we got arrange-act-assert comments getting flagged. Can I put something else here? Nice. Can I put something here? Can I remove the comment? Nice. Can I do this? Oh, Copilot. You missed
one. So, actually, the first implementation I had from the weekend did this properly, but that was one of the things I checked for. So: assert message tests, arrange-act-assert comment analyzer tests. Um, if we go back to here... so, this is a good example, because I want to show you this. The tests are a little bit hard to read. Um, let's get this cleaned up. Simplify stuff. Let's do it in the whole project. Sorry, it's just there's a lot on the screen. Um, it's especially hard to read because sometimes it writes these examples of code on like one line, which is silly. Because if you're trying to work through this and understand it, that doesn't help. Like, you don't write code like this as a human, I hope, because it's a pain in the butt to read. But I don't
want my tests to look like that either. But that's just Copilot being Copilot. We could ask Copilot to go change these, right? But one of the things that I would recommend: when you're trying out your analyzer and you're like, "Wait a second, I wanted this to be caught. It should flag this as an error, because it's also a comment"... what we don't want to have happen, just as an example, is for Copilot to go, "Oh, no problem. You see what I'm doing here, right? You don't want me to use the double-slash comment? No problem, I won't," and you're getting arrange-act-assert comments anyway. We don't want that, right? So we need more guard rails. So I would say to Copilot... um, let's do this. I'm going to add this
to the context: Copilot actions, add to chat. We need the analyzer to also catch arrange-act-assert comments in block comment format, not just with the double-slash comment style. Fix this in the analyzer and add tests to cover this behavior. So, we'll go do that. We'll send that off to Copilot. And again, most of this is being vibe coded right before your very eyes. My message is not that this is the best approach. I do highly recommend you spend time going to learn how these things work. I personally plan to do more of that. I'm going to keep reiterating that in the rest of this video. But, um, I figured it would be fun to kind of show people that, you know, as someone who's been using C# for a very, very long time, I don't know how analyzers work, because
I haven't made time for it, and I'm building with them, and I'm eventually going to learn them. I have to. So, here you go. So, we added more test coverage. We can see that, um, these actually are failing now. So not that, not that. There's too many things pulled up now. This is the actual analyzer itself. We'll keep that. This is the test. So... oh, this is the wrong one. No, it's the... yeah, this is the same one. Yeah. And then "reports error for block AAA"... AAA is the arrange-act-assert, in case you were wondering what that was. But, um, it says here, right, report errors, and then we look here: arrange, act, and assert are included as block comments. So it did include a nice little example there, and we can clearly see that... um, where was it... there you go, now it's being caught.
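Here's a rough sketch of what an analyzer like this can look like; again, a reconstruction rather than the generated code. The VC2 ID, the regex, and the choice to scan every file rather than only test files are all simplifications.

using System.Collections.Immutable;
using System.Text.RegularExpressions;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class AaaCommentAnalyzer : DiagnosticAnalyzer
{
    private static readonly Regex AaaComment =
        new(@"^(arrange|act|assert)$", RegexOptions.IgnoreCase);

    private static readonly DiagnosticDescriptor Rule = new(
        id: "VC2",
        title: "No arrange/act/assert comments",
        messageFormat: "Remove the '{0}' comment; the test structure should speak for itself",
        category: "Testing",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxTreeAction(AnalyzeTree);
    }

    private static void AnalyzeTree(SyntaxTreeAnalysisContext context)
    {
        var root = context.Tree.GetRoot(context.CancellationToken);
        foreach (var trivia in root.DescendantTrivia())
        {
            // // comments and /* */ comments are different trivia kinds;
            // missing the second one is exactly the gap hit on stream.
            if (!trivia.IsKind(SyntaxKind.SingleLineCommentTrivia) &&
                !trivia.IsKind(SyntaxKind.MultiLineCommentTrivia))
                continue;

            var text = trivia.ToString().Trim('/', '*', ' ');
            if (AaaComment.IsMatch(text))
                context.ReportDiagnostic(
                    Diagnostic.Create(Rule, trivia.GetLocation(), text));
        }
    }
}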
Okay, so we're going to do one more final thing here. We'll delete them all again. Okay. And I'm going to do a new chat. We'll keep it, continue. Um, but I want to go back to get my prompt. Not this one. Um, did I go... oh, I went... the list is in the opposite order than I thought. So, not that. Is it this one? Yes, this. Let's take this prompt. New thread. Paste it. So, all that we're saying is: I want you to add tests that follow the arrange-act-assert pattern. And we're going to watch it, because I'm very curious if it's like the YouTube video I made. We'll watch it put the comments in, and then we'll watch it undo it, because it realizes it's not allowed to. So I think that when you can prompt more effectively, then you can sort
of take preventative measures early on to keep the agents on track. So that's ideal, right? It's not wasting any cycles. But ultimately we need some guard rails in place. So you can see arrange-act-assert comments were put in here. We have the analyzer firing and saying, "Oh, you can't have that." It's reading the errors now. Boom. Trims them all out. Look how cool that is. Look how cool that is. I think it's cool. Um, so that's just an example of putting a guard rail in place. These are both sort of trivial examples. You might say, I don't care about that; I'm fine with the arrange-act-assert comments. That's not the point. Uh, the point is to show you that if there are different things that you want enforced in your codebase, then you can very much do that, right? There's all sorts of
like, you know... one of the best things to do is warnings as errors. As a .NET developer, get that turned on, um, from the start. If you're in an existing project, you don't really have a choice, but you can start turning on rules incrementally, right? Keep chipping away at them. Get your warnings as errors... how do I say this? You can't ship the code if you have errors; it's not going to compile. So you keep trying to drive your warning numbers down until you get to the point where you're turning on so many rules. Um, and then you might say, okay, make everything that's a warning an error, and then you have a bit of an allow list for some things that you don't care about ever.
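In csproj terms, that progression is just MSBuild properties. A sketch (the property names are real; which warning codes you escalate or exempt is up to you):

<PropertyGroup>
  <!-- Starting fresh: every warning is an error from day one. -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>

  <!-- The "allow list" mentioned above: warnings you never want escalated. -->
  <WarningsNotAsErrors>CS1591</WarningsNotAsErrors>

  <!-- Existing project: escalate specific warnings incrementally instead. -->
  <!-- <WarningsAsErrors>CS8602;CS8604</WarningsAsErrors> -->
</PropertyGroup>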
If you're starting from scratch, it's a lot easier to turn it on at the start and then abide by that. But the whole idea is guard rails, right? I think that's the biggest lesson here: try to get guard rails in place. Um, do we got time? We got a couple minutes. One sec. Let's see. Let's see if I can show what I was working on. Um, what is a good way to do this? Ah, okay. Got to pull some stuff around here. Pull this window up. Sorry, there's too many windows. Um, I'm going to pin this. I just want to kind of walk through something that's kind of fun today. Um, yeah, DJ Neil, getting rid of the comments is super necessary. I agree. Um, okay, full screen. Boom. This is Brand Ghost. Um, in particular, some of the tests that I was playing with a little bit
earlier. Um, I ran into a bit of a scary snag, and I'm glad that I approached things this way. So, the YouTube videos that are coming out over the next couple weeks... I already mentioned a little bit about them. Two of them are going to be kind of the content we just covered. And, um, two of them are going to be about... like, literally real examples of using GitHub Copilot to go create features for Brand Ghost. And I showed one of them: we go from having a chat with GitHub Copilot to translating that into a GitHub issue. I feel like it had pretty good structure. And then the second video was going through the code that it produced and seeing, like, did we do an okay job with that prompt? And spoiler alert: it did. It did a pretty
damn good job. Um, there's a couple of follow-up things that I asked for where it started screwing up; that's not in the video. But, you know, it honestly got like 90% of the way there, and then I got greedy, because I said I want more tests, because I'm not confident yet. So, what you're looking at on my screen is another feature that I was basically doing the same way, and I want to give you a little bit of context and explain why I was pretty nervous about this change. The idea with Brand Ghost, if you're not familiar: Brand Ghost is a social media cross-posting and scheduling platform that I'm building. I use this for posting all of my social media content across every platform. So if you came here from somewhere on social media, like Twitter, LinkedIn, Instagram, Facebook, TikTok,
where else? Uh, Bluesky, Threads, you name it. I post there. Uh, and I'm only able to post there because I use Brand Ghost to do it all for me. I write the content, Brand Ghost sends it out. One of my philosophies with Brand Ghost is... you can make the argument that if you want to have highly effective content, you need to tailor it to the platform. I don't disagree with that, but I don't consider myself someone who's a full-time content creator. That's not what I'm going to do for the rest of my life. I like making content, but that's not how I identify, if that makes sense. I build software. That's how I identify. So content's a side thing for me. I don't want to spend more time custom-tailoring posts and stuff for each
platform. I want to write my content once and then say, "Hey, Brand Ghost, put this out everywhere for me." I don't want to think about: oh, do I have to resize this picture? Oh, the text is a little bit too long for this platform? No. Uh, oh, it's a PNG, but this platform only takes JPEGs, or I had a WebP file and it's not a PNG? I don't want anyone to ever have to worry about that when they use Brand Ghost. And one of the things that came up recently was, I had someone who tried to post a really big picture, and they said, "Hey, it didn't go out." And I had to go investigate to see why. They didn't have their notifications on to tell them, uh, what the reason was. And I was looking at this thing going, that's kind
of bizarre. Like, why did it work... it was a huge picture. Why did that work for a couple of platforms and not others? It worked for a couple of platforms because those platforms have constraints on, uh, the resolution. So what we did was resize it for the resolution, and the resulting file size was obviously much smaller then. So it worked on those platforms. The platforms where it didn't work don't actually have resolution constraints. They just don't... or they do and they're not published, and we haven't had anyone hitting that issue. But because they don't have a resolution constraint, we didn't try to resize the resolution. We just said that picture is too big. Sorry, like, the file size is too big. So, I said, you know what, we have a gap. We need to make sure that when there are file size limitations
on these platforms, even if we don't need to, uh, do a resolution scale, we need to change the size of that media. We need to do it, I think, because I don't want anyone using the platform and having to think about those constraints. Let's figure it out. So, I told Copilot: hey, I've got this code that exists already; it does the resolution scaling. And I actually found some code... I didn't realize I had this, but from talking with Copilot: it does resize pictures that are JPEGs in a very specific situation. But it found the example code. And I said, "This is perfect. We have to put these things together, because then we can essentially get the resolution scaling, we can get the image format conversion, and we can get some limitations in terms of the file size. We get the
trifecta. This is great." So I said, "I need you to go add this image compression stuff in." And the first attempt was terrible. It was absolutely terrible. It was one of those ones that was so bad where I'm like, "Oh my god, I have to start this again." I thought... and I don't think I'm going to make a YouTube video on this one, I might... but I thought the prompt was very good. It had lots of structure. I say the prompt... the, um, the GitHub issue... but it was just simply too much. It was so much content. And in hindsight, I'm like, yeah, that's stupid. It probably lost the context, like, you know, an hour into reading how big this GitHub issue was. So, I said, "Let's focus on a core part of it. Let's deliver the core change, and then we'll hook it
up to the consuming parts later. I actually feel more comfortable about that. Let's do it that way." So, it did it, and it gave me all these tests. The code looked good, but it's more complicated than I'm comfortable with. There's a bunch of different paths for different types of image resizing, different strategies based on the format. Resizing a JPEG is very easy with this library; if you want to have a size limit, awesome, it's like one line of code. If you want to do it for some of the other formats, it doesn't work so well, because it doesn't have a parameter where you can set an upper limit. So we have to build that, and Copilot did. So it built all this stuff, and then it wrote some tests for me. And I said, "Excellent. This is great. There's tests."
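To give a rough picture of the kind of code being described, here's an illustration sketched with Magick.NET, the ImageMagick wrapper mentioned near the end. This is not the actual Brand Ghost implementation; the 10-point quality step and the byte budget are made-up parameters.

using ImageMagick;

public static class ImageCompressor
{
    // Re-encode at progressively lower JPEG quality until the output
    // fits under maxBytes. Deterministic for a given input.
    public static (byte[] Data, int QualityIterations, uint FinalQuality)
        CompressToLimit(byte[] original, long maxBytes)
    {
        using var image = new MagickImage(original);
        uint quality = 100; // Quality is uint in recent Magick.NET; older versions use int
        var iterations = 0;

        var encoded = Encode(image, quality);
        while (encoded.Length > maxBytes && quality > 10)
        {
            quality -= 10;
            iterations++;
            encoded = Encode(image, quality);
        }

        return (encoded, iterations, quality);
    }

    private static byte[] Encode(MagickImage image, uint quality)
    {
        image.Quality = quality;
        image.Format = MagickFormat.Jpeg;
        return image.ToByteArray();
    }
}

Because nothing in a loop like that is random, a test can assert the exact iteration count and final quality for a known input, which is where the "greater than or equal to one" assertions coming up next fall short.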
And the tests all passed. Awesome news. What wasn't awesome news is that a lot of the tests had... um, let me collapse this for a second. So this is a big test file. A lot of the tests said something kind of ridiculous. Let me find an example. Um, here's one, right? Assert true that the compression result's quality iterations is greater than or equal to one. This is actually better than what it was, because it used to say, right at the top (you can still see it in the name, "quality iteration count is returned")... the quality iteration count has to be returned. It's part of the return object. It's not not going to get returned. So, kind of a crappy name. And then it was basically looking like this: it was saying make sure that there's at least zero, you know, greater than or
equal to zero. And I'm like, that's not that's not okay. That you're not testing anything. You tested that this code didn't crash, but we didn't test anything uh useful. So, there are some examples like this. This one's still in pretty rough shape. It's passing. But um how many like what I'm trying to figure out how to convey this to co-pilot properly is like how many quality iterations do you expect this to take? Because the answer isn't supposed to be greater than or equal to one or greater than or equal to zero. It should be five iterations. It should be six iterations. Give me an exact result. So I'm glad that I was reading through that. Um let me scroll down a little lower. I'm trying to find some more examples. Some of them have been touched up though. Um, this one was better because it
was just saying make sure that it's fitting within the limit. That's that is a fair assertion. Uh, what was another sort of ridiculous one? Uh, there was a handful that were like the number of iterations and the quality values that it was checking were just like greater than or equal to zero. There was one that was like make sure the quality is less uh less than or equal to 100. It's a scale from zero to 100. So like by definition it must be right. It's a completely useless uh assertion. So I've been catching a bunch of stuff like that and I got to the point where I was like, you know what, I've I looked at the code. It seems like roughly it's doing what I would expect. I feel okay about that. Um it's a little bit complicated, but some of the stuff is
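To make the complaint concrete, here's the pattern in xUnit form. The `CompressionResult` type and the numbers are stand-ins I made up so the sketch compiles on its own; the point is the gap between an assertion that is true by construction and one that pins down real behavior.

```csharp
using Xunit;

// Stand-in for the real result type so this sketch is self-contained.
public sealed record CompressionResult(int QualityIterations, int FinalQuality);

public class AssertionQualitySketch
{
    [Fact]
    public void Compress_KnownInput_PinsExactExpectations()
    {
        // Pretend this came from compressing a known fixture image.
        var result = new CompressionResult(QualityIterations: 5, FinalQuality: 60);

        // The kind of assertion I kept finding: true by construction,
        // so it only proves the code didn't crash.
        Assert.True(result.QualityIterations >= 0);
        Assert.True(result.FinalQuality <= 100); // 0-100 scale, so always true

        // What I actually want: exact values for a known input, so that
        // any behavior change makes the test fail loudly.
        Assert.Equal(5, result.QualityIterations);
        Assert.Equal(60, result.FinalQuality);
    }
}
```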
I got to the point where I said, you know what, I've looked at the code, and it seems to be roughly doing what I would expect. I feel okay about that. It's a little bit complicated, but some of this stuff is a little bit complicated, so I can buy that. I was happy with the test scenarios it added, for the most part; that's offering some confidence. But I still had some failing tests, and in particular, some failing tests around quality. Let me scroll back up to find one. Some of the tests do something and then have a quality check somewhere. If I search for "quality" rather than just "quality iterations," there's literally a check on the quality value. There's one that says quality iterations greater than or equal to two, so that's good, but why are we doing ranges? Just make it an absolute value. And then this one right here says the final quality should be 100%. That test makes sense if we think about it; you don't have to understand the code, I'm not showing you that yet. It says: try to compress this, starting at maximum quality. If we're compressing something that starts at maximum quality and the size limit is very generous, we want it to stay at maximum quality. Making up numbers: if this file is 10 kilobytes, 10K, and we're checking this quality against a size limit of 500K, it's like, hey, that's well within limits, we don't have to do anything. So you'd expect we start with max quality, we didn't even resize it, give us a max quality image back.
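In sketch form, with the compressor stubbed out and the 10K/500K numbers from that made-up example (again, not the real code, just the shape of the expectation):

```csharp
using Xunit;

public class GenerousLimitSketch
{
    // Stand-in result and compressor so the sketch is self-contained:
    // if the input already fits under the cap, return it untouched.
    private sealed record Result(int FinalQuality, long SizeBytes);

    private static Result Compress(long sizeBytes, long maxBytes, int startQuality)
        => sizeBytes <= maxBytes
            ? new Result(startQuality, sizeBytes)      // nothing to do
            : new Result(startQuality - 10, maxBytes); // fake "one step down"

    [Fact]
    public void AlreadyUnderLimit_StaysAtMaxQuality()
    {
        // A 10 KB file against a 500 KB cap is well within limits.
        var result = Compress(sizeBytes: 10_000, maxBytes: 500_000, startQuality: 100);

        // Exact assertions, no ranges: nothing should have changed.
        Assert.Equal(100, result.FinalQuality);
        Assert.Equal(10_000, result.SizeBytes);
    }
}
```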
I found a problem, though, and it's not obvious from the code, so I'm going to jump in and show you. DJ in the chat is saying that's the quality of AI testing; even the AI doesn't want to write tests. Yeah, it's tricky, right? It's very easy to look at a big wall of tests and go, it did it. But did it? Because it did something. Just to show you what was going wrong: take this "create an uncompressed result" path. It takes a MagickImage in; the library I'm using is ImageMagick. This is the original image, and it's doing two things wrong, by the way. It writes the image to a new stream, because the stream is the result we pass back; we want callers to have a stream to go do things with. But there are two problems here. One is that the final quality it reports is the original image's quality. In this example, it should be fine, because whatever quality we started with is what we should end up with. But notice this image variable: we never made a new image, we just wrote the bytes to a stream. That's technically misleading. If we wanted the actual final quality, what we'd technically need to do is load that stream into an image object and check the quality there. I say that because other parts of this class do a similar thing: they do some work, resave things out, and then still report the original image's quality. I don't think that's correct. So that's one thing. The other interesting thing is the tooltip on this property. It says: "Gets or sets the JPEG, MIFF, and PNG compression level." I don't even know what a MIFF is. And by the way, that's super confusing: the property is called quality but it's described as a compression level, and those seem like inverses. But what do I know? It says the default is 75. Okay. When I went debugging this, I said, this code is doing what I expect, right? I'm following along, I'm stepping through, and I don't get why it's failing. It looks like it's doing exactly what I expect. And then: the quality of the original image coming in was just zero, which is interesting. I don't know enough about the library, but it was just saying, yep, I'm zero. So even though we're using the original image and writing it back out, we're still reporting the original image's quality right here, and that quality was zero. And I said, I don't know what to do about that, because it seems like we can't trust this property the way we expected.
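(MIFF, for what it's worth, is ImageMagick's own native file format.) If we do want that property to mean something, the more honest version of the code looks roughly like this, assuming Magick.NET; the method and its name are my own sketch of the direction of the fix, not the code from the repo:

```csharp
using System.IO;
using ImageMagick;

public static class FinalQualitySketch
{
    // Sketch of the fix, assuming Magick.NET: measure the quality of what
    // we actually wrote instead of echoing the input image's property.
    public static int WriteAndMeasureQuality(MagickImage image, MemoryStream output)
    {
        image.Write(output);

        // Misleading version: report the input image's quality. In the
        // failing test this came back as 0, because the library couldn't
        // determine a quality for the source image at all.
        // var finalQuality = image.Quality;

        // More honest version: rewind the stream, reload the bytes we just
        // wrote, and ask what quality the new file actually carries.
        output.Seek(0, SeekOrigin.Begin);
        using var reloaded = new MagickImage(output);
        return (int)reloaded.Quality;
    }
}
```

Whether the reloaded quality is trustworthy for every format is exactly the open question here, so treat this as the shape of the fix rather than a guarantee.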
So anyway, the moral of the story is that there are times when AI writes a feature and tests for me, and it's concise enough, I'm familiar enough with it, and I feel good enough to check it out in GitHub, do a little code review, sign off, send it, I'm good. This was an example where I started to feel uneasy about what it was doing, and I trusted my gut: I don't feel comfortable doing this in GitHub. I don't want to sit there and hope. I need to be able to run this. I need to step through it. I want to investigate with my familiar tools and see what's going on. And I wanted to share that with you because it's a real example, literally from right before this live stream, where I said: this is getting a little too complicated, and working with AI right now is slowing me down. I'm very glad it got me to where it got me. I think the code structure is correct. I think the tests are good tests; there are a couple of improvements we need to make to some of those assertions, but I feel very good about it now. I have it committed locally, and I want to make sure I can do some iterations and say: if this quality property doesn't work, when doesn't it work? Can we use it at all? Do we have to redesign what this looks like? The good news is at least we have tests now, right? So I want to see if I can push some of those a little further. It's been a really unique experience on this feature, for sure. Kind of challenging to navigate. So that's that, folks. I probably should go share some stuff that's going on.
I think there's a big sale on Dometrain. What's going on on Dometrain? Oh, yeah. Black Friday sale. One sec. Nice. Cool. For folks that don't know, I make courses on Dometrain. So if you like the content I put out and you'd like to try out some courses, there's a big sale, a Black Friday one in particular, and there's also a "get 40% off forever" one. There's got to be... oh, BLACKFRIDAY25 is not 25% off. My bad, the 25 is the year. Duh. It's 40% off everything. So that's cool. I'm really bad at sales and marketing, if you haven't guessed. I have courses on getting started in C# and a deep dive into C#. Most people do those back-to-back; it's 11 and a half hours of C# material. And then I have a couple of other courses that I paired up with Ryan Murphy on, so you can check those out: behavioral interviews, career management, and then reflection and refactoring for C# devs. And I'll keep dropping spoilers: if you liked this video, then you may like the next course I'm putting together. That's all I'll say for now, but you can stay tuned for that.
For folks that don't know, I do have other YouTube channels and such. Give me one sec; I'm just going to pull all these things up in my browser. There we go. Dev Leader is my main YouTube channel. This is where I used to do the live stream as well, and it's where I have programming tutorials, mostly in C#, plus things like using AI tools to program. So you can check that out. Then there is the Dev Leader podcast, if I could spell. I usually have all these tabs open, by the way, and I was a little flustered before I started streaming. You can see I'm live right now. We put out this one last week and then this one yesterday. Unfortunately, it only got four views, and I feel like this channel is doing rough. I was instructed by a YouTube coach to split out my material so it's not so confusing to YouTube when it's serving content. But man, some of these videos are sitting at, you know, 16 views three weeks ago. Oh my goodness. Feels kind of sad coming from my main channel, but that's okay.
Code Commute is the other one I'm going to promote right now. Thanks, folks, for joining from Code Commute; it means a lot. I really like the Code Commute community. It's cool. If you haven't seen it, Code Commute is a different format from my other YouTube channels, because as you can see, it's usually me commuting, and I do Q&A-style videos. So if you have questions you want answered, you can leave a video or leave a comment on any Code Commute video. You can send me a message; some people message me on LinkedIn or Twitter. Or you can go to codecommute.com and submit totally anonymously that way. It's a lot of fun. I really enjoy making Code Commute videos, and if you look at the number of videos compared to the number of subs, it's kind of unfortunate, but I don't edit the videos, I don't make thumbnails, and I really enjoy all the topics we go through. There have been times where people say, oh, you know, if you're feeling overwhelmed, it's okay, we'll wait on Code Commute. And I'm like, no, man, these are the videos I like to make. It's a lot more about general software engineering. DJ says, "It's more confusing for the user. I have to hunt them down. The podcast I listen to on Spotify for simplicity." Yeah, that's a good point; thanks for bringing that up. There's a Spotify URL. It is just spotify.codecommute.com.
Boop. And sorry, DJ, you might have been mid-message there about the podcast you listen to on Spotify. Yeah. So, you can see that I have Code Commute here, but I also have the podcast. Spotify lags a little behind YouTube, so I don't have the latest podcast up here yet, but you can check both of those out if you go to Spotify and search for Dev Leader or Code Commute, either one. I try to put content up there as well. Especially with these types of videos, people say they just leave them on in the background, and that's awesome. Please do that; watch time still accumulates if you do it that way, and that's the number one metric for us on YouTube.
And then I'm just going to do a final little shout-out for Brand Ghost, because we got to see some code for Brand Ghost today. Brand Ghost is the platform that I am building for posting content to all the social media platforms. We're doing a little rebrand. We haven't updated our logo on the site yet, but the colors are mostly going to stay the same, the logo is going to change a little, and it's still the ghost, because we're Brand Ghost. This is the platform I build, so I like reminding people that it's totally free to use if you just want to cross-post and schedule content. So if you're trying to get into some content creation, or you want to do some learning in public, totally cool. It's free. Just use it: cross-post, schedule, great. If you have a small business, or you're at the point where you're making money from your content creation and you're like, hey, I would like more regular content posting, I'd like to make sure I'm easily posting to multiple platforms, that kind of thing, our paid tiers have more advanced features.
I'll just do one more. I love showing my content calendar because I think it's kind of funny. This is the thing I like to show people when I say Brand Ghost really helps me with my content. This is December, by the way, and the little numbers you see are the number of posts that go out in a day.
And I don't just mean, what's a good way to say this, that one post going to nine platforms counts as nine posts. I mean, and sorry, I know my head's in the way, here we go, I'm at the bottom now: this is one day of posting. Okay. So that's one day of posting, and that's cool. That's December. But here's January, and here's February. And here's March. And here's April. Do you see a pattern? I didn't manually schedule any of this. I put my content into Brand Ghost, and it automatically schedules it for me. And there are some features we should be turning on very soon; I probably took a little too long just playing around with them. If you have content that you've written, just as an example, these are all different posts, and if you have content in Brand Ghost and you're like, I want more content like this, the more stuff you have in Brand Ghost, the more we can try to emulate your voice. So, if you want, you can use our ghostwriting tool to generate more content for when you've got writer's block or you don't know what to write about. I feel like with AI the writing is never perfect, but we've been getting pretty good results, I would say, and it gets me to the point where I'm just touching things up. Like I always tell people: I'm not against em dashes, I just don't use em dashes normally in my writing. I just don't do that. Because I'm typing naturally, if I want an em dash, I type two dashes. So when AI is writing my content and it gives me a first draft, I'm like, I don't want em dashes. It's just not how I type. Little things like that. But overall, it's a lot of fun, and yeah, Brand Ghost has definitely saved my butt for content creation.
So that's all I got this week, folks. I should be on next week, same time and everything. Probably not vibe coding again; I don't know what the topic's going to be. Maybe it will be more coding stuff. We'll see. Thank you so much for watching. I hope to see you all next week. If you want to know what the topic is going to be, go to weekly.devleader.ca and check out the newsletter. You don't have to subscribe; you can just check it out on a Saturday or a Sunday, and that's going to give you a heads-up on what the live stream is. It's either that newsletter topic, or you can click the link, or subscribe to the YouTube channel and you'll see that it's scheduled.
Those are all different ways to go check it out. DJ says, "I'm seriously going to need to pony up and buy the pro sub. Scheduling posts is time-consuming." Yeah, it's a pain, I see why. "It's why I don't post a lot." Yeah, exactly. I always remind people that I started Dev Leader in 2013. I started this over a decade ago, and then I gave up for 10 years. I just gave up. And I literally gave up because I was writing blog articles and thinking, I need more views; how do I get more views? I will put them on social media. And then, not every day, but constantly, I was posting these blog articles to all these platforms, and no one was reading them, and I was burying myself. Like, no one's reading my stuff, this is the worst, why am I doing this? And I gave up. Could you imagine if I had kept going for the 10 years between then and now? I kick myself in the ass sometimes for that, but that's okay, because we've got Brand Ghost now, and that's what matters. So again, thanks, folks, for watching. I really appreciate it, and I hope to see you all next week. If you can't wait, tune in to Code Commute; you get to hear my beautiful voice. I'm sure that's why you're tuning in. Okay, see you folks next time. Take care.
Frequently Asked Questions
What is the main focus of the live coding session?
In this session, I'm focusing on coding live, specifically working with Roslyn analyzers in C#. We're going to create analyzers that enforce coding standards and help keep our code on track.
How do you handle situations when AI tools don't perform as expected?
When AI tools like Copilot don't perform as expected, I try to learn from the experience rather than getting frustrated. I analyze what went wrong and adjust my prompts or approach accordingly to improve the outcomes.
What are some tips for using AI tools effectively in software development?
To use AI tools effectively, I recommend understanding the underlying concepts of what you're working on, refining your prompts for clarity, and setting up guardrails in your codebase to ensure quality and consistency.
These FAQs were generated by AI from the video transcript.
