We've previously looked at vibe coding Roslyn analyzers but what about helpful tools to fix up those analyzer errors? That's where code fixup and source generators come into play! Let's vibe code one of these!
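To make the pattern concrete before the transcript begins, here's a small xUnit sketch of what the analyzer flags and what the code fix produces. The class, test, and variable names are placeholders of my own, not code from the video:

```csharp
using Xunit;

public class CalculatorTests
{
    [Theory]
    [InlineData(2, 2, 4)]
    public void Add_ReturnsExpectedSum(int a, int b, int expected)
    {
        bool result = a + b == expected;

        // Before the fix, the analyzer warns:
        // "Provide a descriptive message as the second parameter when using Assert.True."
        // Assert.True(result);

        // After applying the code fix, a nameof-based message satisfies the analyzer:
        Assert.True(result, $"Expected to get true for {nameof(result)}");
    }
}
```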
Transcript
In this video, we're going to look at vibe coding another source generator for code fixup, based on analyzers that we've built in previous videos. In the last video, we vibe coded a source generator for one of the analyzers. This time we're going to jump over to Visual Studio and look at one more, for Assert.True and Assert.False. If I type Assert.True in my solution right now, something like this, it will trigger one of our analyzers, which says: provide a descriptive message as the second parameter when using Assert.True. And the same thing happens if I put Assert.False. So what our analyzer expects is that we have another parameter here with some message. As long as the message is provided... actually, this is going to complain at me because a different analyzer is triggering. But you can see that as long as I have the second parameter, it satisfies our analyzer. Right? So what we want to do, just like in the last video, is provide code fixup. That means that instead of having to go type all these messages ourselves, we can have a quick action that puts a little message in there for us. Instead of going through our entire codebase thinking, okay, I have to type in some message here to satisfy the analyzer, and another message there to satisfy the analyzer, the code fixup lets us hover over the diagnostic and get a little tooltip where we can say: fix it right here, or fix it across the project, the solution, etc. That means we're going
to vibe code this, and we have to explain to Copilot what we want. But this one's a little interesting, because in my opinion we don't have a great way to produce a really helpful message that's applicable as a fixup pattern. What I mean is that even this Assert.True example is kind of silly. This test is really doing Assert.Equal, because we're using parameterized tests, right? It's an xUnit theory, so we can pass in the parameters, which means we can compare, so we're using Assert.Equal. We wouldn't use Assert.True and Assert.False like this. So if I wanted, just as an example, to have Assert.True like this, what should the message be? How can we have a code fixup pattern that's applicable sort of across the board to fix things up? One way to do this, and it's not perfect, because there isn't a great generic way to produce a descriptive message for every sort of Assert.True, is to get the name of whatever we're trying to assert. That means there are probably two cases like this, probably more, that we'd have to consider, so I'm going to leave you with a bit of a thought afterwards. But something to consider here: if we have Assert.True and we have a variable called expected, what should this message be? I think one option is something that looks like this: "Expected to get false for" and then we'd give it the name of the variable. Okay? So nameof, and then we could do something like this and put it in quotes. This isn't perfect, but it's one example of something you could ask for in your code fixup. That way, if we wanted Assert.False, it would be "Expected to get true for" this, and it means we can enforce that we have a message and get a quick way to apply that to all of the use cases. But if you're thinking about this a little further ahead, not every time we have Assert.True do we have a variable. You might have a method name, so that might be something to consider. And then, what are the other things you could have in here? Right? Could you do this? [laughter] If you have a different expression
or something like that, how do you get the name of it? Right? We need some way to get the name of this. So I'm going to try to express that to Copilot. The thing I'll leave you with is that this isn't going to be a perfect solution, but I want to show you that you can go back and forth building things with Copilot this way and tune them to your liking, just by having a back and forth with it. Okay? Building AI agents is a lot of fun. Building AI agents with AI is even more fun. But that also means we have to be a lot more careful, because we've all experienced this kind of thing, right? We're building with these tools that let us move so much faster using AI, and that means we can go from ideas to having something rapidly put together. It also means we can often skip over some of the steps that are truly important, things like security, tokens, and authentication. We've all seen the memes. We've seen the posts on Twitter where people have accidentally leaked their credentials into production. That's where Auth0 for AI agents comes into play. It's the complete solution to balance that speed with security. You can use features like user authentication and fine-grained authorization to ensure that your agents can only access the information that users intend them to. Their Token Vault functionality simplifies connecting to external third-party tools, so you don't have to manage all of that yourself. And for more advanced workflows, you can leverage asynchronous authorization, which means you can keep a human in the loop when you need to. If you're serious about building secure, scalable AI agents, I highly recommend Auth0 for AI agents. Check out the link in the description to start building with Auth0 today. So, let's try things out. I'm going to get my foot pedal and my mic set up here so I can speak for context. I'm using Opus 4.5. As I mentioned in the previous video, I've been going between Opus 4.5 and GPT-5. Some of the other models, like 5.1, aren't working properly in agent mode for me; it's more like they're stuck in ask mode permanently. Kind of weird. So, just a heads up, that's what I'm using. By the time you're watching this, there might be new models out, so use what you're comfortable with. "I would like a new source generator for code fixup that addresses our analyzer that handles Assert.True and Assert.False in order to have
dedicated messages as the second parameter." I'm just going to look at what that looks like for now. Okay, casing and things like that don't really matter; that shouldn't be an issue. What's the next thing I want to say? I'm thinking I want to give it explicit instructions around three different scenarios. Okay: "We will need to handle the following three scenarios." And I want to type this part out, because I might want to give it some examples. "The first parameter is a variable, so we want to use a message template such as..." and I'm saying "such as", but you might want to be a lot more prescriptive. I'm just going to copy that in. I like putting code in triple backticks. I think I mentioned in the previous video that I don't have data to show this is necessary, but when I'm mixing general prose with code, I like putting the code in triple backticks most of the time. I like to think it gives the model a little more context that this is actual code, but I'm not actually sure that's the case. For the next one, let's copy that and say: "The first parameter is a named method, so we want to use a message template such as: expected to get false for nameof..." and I shouldn't say "expected" here; this is kind of confusing. As I'm recording this, I realize I originally just took the variable name, and we don't want "expected" in there. So let me say nameof(variable), and then nameof(method). Then: "The first parameter is an expression," and then
I'm going to give it an example, some code again in triple backticks: x == 1. "So we want to use a message template such as: expected to get false for..." Now, this is not a super helpful message, right? Expected to get false for the expression. Which expression? But if you're about to attempt a bulk code fixup across your codebase, that might not be so bad. My perspective, though, is that this is a great opportunity for you to play around with it. Okay, so we're going to get Copilot to build this for us, and we're going to try out the code fixup that it creates. Let that be a reminder: if you're thinking, "Hey, I don't like that," cool, you can tune what we're doing here to be whatever you'd like. Right? Maybe you don't even want to support the case where an expression is being evaluated like this. In fact, I have an analyzer in my own projects that prevents this, and the reason is that if you're doing this kind of thing, you probably want Assert.Equal. [laughter] So, just as a heads up, you can tweak and tune these things however you want. The whole reason I'm making this series is, one, so you can see how these are vibe coded, and two, so you can see how to put up guardrails the way that you like. Okay, so let's run this and get Copilot to build it for us. If you haven't seen the previous video, one of the things I mentioned is that the first source generator for code fixup is a bit more of a pain in the butt, because you have to get the project reference and make sure the project is added to the solution while Copilot is generating it. We already have that in place now. So if this is the first one of these videos you've watched, go watch that previous one; you can fast forward, but it will show you that we need to get the project added to the solution, that we need to reference the generator project wherever you're referencing the analyzer, and you'll see that we go from the fix not being discoverable in Visual Studio to it being discovered in the menu. So once this finishes, we'll have to restart Visual Studio so that it detects the new generator and it shows up in our menu. But I suspect this one will go pretty quickly. Okay, looks like we're all done. Copilot has said that we have these scenarios covered here, right? Including an expression. It figured that out, too. This was a bit of a test for Copilot: I gave it examples, but I only gave it Assert.True or Assert.False, not both, and it actually parameterized that information here. So we have that. I'm going to close Visual Studio, reopen it, and see how it works. Okay, I've reopened Visual Studio. I just wanted to show you that we do have our code fixup provider. There's a bunch of code here, and you can see the different scenarios, but there are actually more than we asked for: variable, method invocation, and member
access, which could be a property or a field. We didn't talk about that, but Copilot ended up adding a scenario for it. Again, this is my reminder: if you're thinking, "Oh, wait, we didn't add yet another one," if there's another case you're dealing with where you're not getting a message added for a particular code fixup, or it's doing it incorrectly, we can go look through the code and see where this stuff is coming from. You could explicitly say to Copilot, "Hey, we're missing case number five, and here's the scenario, so go add a new case into this method called CreateMessageExpression." Sorry, I'm just scrolling a little lower to see if there's anything else interesting, but I think I want to try this out. We had these already fixed, right? So let's see if we get code fixup now. I just press Alt+Enter, which is the same as hovering over and pressing the light bulb with the red circle and the X. Then you can see we have a new menu entry here: "Add message parameter to Assert.True". And the preview even says "Expected to get true for the expression". And there we go, it gets added in, right? Let me change this one so that it's input... oh, I can't do that. Sorry. Let's say result. My apologies. So you can see this is going to complain: "Add message parameter to Assert.False", and then you can see in the preview "Expected to get false for" nameof(result). Pretty cool, right? If we had another method in here, private bool DoesSomething, returning false, sure. And then I write Assert.True. You can see Copilot is being smart; it wants to add that message in for me because that's the pattern. Yeah, okay, thanks Copilot, but we're going to ignore that for a second. What happens when we do this, right? You can see that it's going to do nameof(DoesSomething). So that is working. One more, and this is one I didn't call out, but Copilot seemed to pick up on it. Private, uh, sure, let's do readonly bool someField = false. Thank you so much, Copilot. What happens if we do Assert.False, not "fail false", [laughter] typing is very difficult, on someField? Right, what do we get? "Add message", and it does nameof(someField). So this is something that you could do through Copilot, just like we did, where it's completely vibe coded. And if you're not happy with how
it's performing, or with the code fixup it's doing, you can go back and forth. And like I said, if particular scenarios were missed or Copilot didn't include them, you can always go back and say, "Copilot, here's one more scenario." Try to give it an example of what you're looking for and an example of what you want it to be fixed up into, so it has a before and after to work with. But just to show you, we can actually do this more broadly. This isn't specific to our analyzer and fixup; it's just one of the cool things you get with these code fixups in general. I can fix it across the document, and you can see that it's going to do it for all of our... oh, I have two analyzers, one for Assert.True and one for Assert.False. So it did all of the Assert.True calls, which made my example a lot less exciting. Sorry. Then I'll do it for Assert.False, and you can fix them up in bulk this way. So, if you introduce an analyzer like this, one of the things I've mentioned before is that yes, you can ask your LLM, you can ask Copilot, to go through your solution and fix up all of these build errors, and it will probably be able to do it, especially if you gave your analyzer a nice descriptive error message. However, I've noticed that when it's doing a lot of files in bulk, unless you're babysitting it and checking up on it, it can start with really simple changes like this and then become destructive or go off the rails. I've seen it delete other comments; I've seen it delete entire tests, and it's like, why are you doing that? So if you have a code fixup pattern like this, especially for really simple things, it can be really nice, because the fix is applied programmatically and you're not relying on an LLM to fix up everything. So with all of that said: you get analyzers that can detect this stuff, which keeps your agent on track a little more. You can have descriptive messages so that as your LLM agent works through things and makes mistakes, it can use the descriptive error messages from your analyzers to get back on track, so it knows how to fix the issue. And then you have this option, right? You can say,
I have a new set of rules; I want AI to fix it up for me, or I want to use code fixup with source generators so I can fix it programmatically. I think having all of these together as tools in your toolbox can be really powerful for cleaning up code and working with agents to keep the code a little more on track. And at the end of the day, if you don't know how something works, you should be asking the LLM to help you understand it better. Right? So if you're going through this and thinking, "I don't know what any of this does," right: Copilot actions, add to chat, switch over to ask mode, "Explain what this code does. I am very new to working with source generators. Explain it simply." You can do this kind of thing, and if you're still not understanding certain parts, ask it for more clarity. Okay, I highly recommend you do this, because the more code you add to your codebase that you don't understand, the more likely that when things stop working, they go from being nice, helpful magic to a pain in the butt. You don't want to be stuck there thinking, "I guess I have to turn this entire thing off." Try to take some time to understand what's going on. So, thank you so much for watching. I hope that helps. And I would love to hear from you: if you're building analyzers, code fixups, or source generators, what are you building, and what kinds of things are you trying to get your agent to stay on track with? Thanks for watching. I'll see you next time.
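As a closing sketch of the scenarios walked through above, here is roughly what the message-template selection inside such a code fix provider could look like. To be clear, this is not the Copilot-generated source from the video; the method name CreateMessageExpression is borrowed from it, but everything else here, including the omission of the actual Roslyn plumbing (the CodeFixProvider registration, document rewriting, and so on), is my own assumption:

```csharp
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Rough sketch of message-template selection for the Assert.True/Assert.False fix.
// Returns the C# source text of the message argument to insert as the second parameter,
// following the previews shown in the video's demo.
internal static class AssertMessageBuilder
{
    public static string CreateMessageExpression(ExpressionSyntax condition, bool assertTrue)
    {
        string expected = assertTrue ? "true" : "false";

        return condition switch
        {
            // Scenario 1: a variable, e.g. Assert.True(result)
            IdentifierNameSyntax id =>
                $"$\"Expected to get {expected} for {{nameof({id.Identifier.Text})}}\"",

            // Scenario 2: a method call, e.g. Assert.True(DoesSomething());
            // nameof takes the method group, not the invocation
            InvocationExpressionSyntax inv =>
                $"$\"Expected to get {expected} for {{nameof({inv.Expression})}}\"",

            // The extra scenario Copilot added: member access (a property or field),
            // e.g. Assert.False(this.someField)
            MemberAccessExpressionSyntax member =>
                $"$\"Expected to get {expected} for {{nameof({member})}}\"",

            // Fallback: an arbitrary expression such as x == 1, where nameof
            // isn't available, so we settle for a generic message
            _ => $"\"Expected to get {expected} for the expression\"",
        };
    }
}
```

With a helper shaped like this, the fix provider would parse the returned text into an argument syntax node and append it to the Assert call; again, treat the names and structure as illustrative rather than as the generated code.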