BrandGhost

How To Leverage Roslyn Analyzers For Agentic Guardrails

If you're like me, then you're getting tired of your agent making silly mistakes in coding patterns. You've got your instructions file, you've got your custom agent, and you're not sure how many times you can hear sKiLl IsSuE from Internet strangers. Let's see if Roslyn analyzers can keep our C# agents on track.
Did it just do it super quick? [laughter] Way better than expected. I guess more and more of us are vibe coding. But of course, the more we're doing this, the more we're running into agents doing some pretty weird stuff despite all of our efforts to keep them on track. If you're familiar with using things like your Copilot instructions or having a custom agent, you're giving all the guidance and instructions upfront to hopefully point your agent at the right things to go do. But you've still probably encountered it going a little bit off the rails. So the more I've been working with this kind of stuff, the more I'm trying to see what other guardrails I can put in place. In this video, we're going to go over the concept of leveraging analyzers in our C# applications so that we can try to keep agents on track.

Let's jump over to Visual Studio. On my screen, I have a badge repository inside of BrandGhost, which is an application that I'm building. This follows a repository pattern; the details of the queries aren't really important. What I wanted to talk about is a common practice across my entire codebase: everything that deals with the database is wrapped in a tracer call. Why am I doing this? Because I want some data on how long queries are taking. Yes, there are lots of different ways we could do this, but this is the pattern I have in the codebase. With that said, I do have things in my agent instructions like, "Hey, Copilot, or hey, agent, whatever: make sure that when you're following the repository pattern, you wrap a tracer around the database call." The other thing I have in my instructions is to follow similar patterns.
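The pattern being described looks roughly like the sketch below. This is not the actual BrandGhost code: `ITracer`, `WithTracing`, `IDbConnectionFactory`, `BadgeDefinition`, and the table name are all assumed names for illustration, and the Dapper query is just a plausible example.

```csharp
using System;
using System.Threading.Tasks;
using Dapper; // Dapper is mentioned later in the video; the exact usage here is assumed

public sealed class BadgeRepository
{
    private readonly IDbConnectionFactory _connectionFactory; // hypothetical connection factory
    private readonly ITracer _tracer;                         // hypothetical tracing abstraction

    public BadgeRepository(IDbConnectionFactory connectionFactory, ITracer tracer)
    {
        _connectionFactory = connectionFactory;
        _tracer = tracer;
    }

    public Task<BadgeDefinition?> GetBadgeDefinitionAsync(Guid badgeId) =>
        // The entire database call is wrapped so query timings are always captured.
        _tracer.WithTracing(nameof(GetBadgeDefinitionAsync), async () =>
        {
            using var connection = _connectionFactory.Create();
            const string sql = "SELECT * FROM BadgeDefinitions WHERE Id = @Id;";
            return await connection.QuerySingleOrDefaultAsync<BadgeDefinition>(sql, new { Id = badgeId });
        });
}
```

The key point is that the wrapper is a team convention, not a compiler requirement: leaving it out still compiles, which is exactly the gap an analyzer can close.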
I basically also put this into every prompt: have the agent follow similar patterns. My custom agent is essentially one of the example agents from the awesome agents GitHub repository, tweaked a little bit, and I try to make sure it knows about patterns like this tracer wrapping that you see right here on line 20, wrapped around the entire body of the method. I suggest to the agent to do this. Now, with that said, I might go spin up a new repository or have an agent working through some new feature, and lo and behold, it does not actually do that. The reality is that when I'm not wrapping something in a tracer call, the code is still valid. It still compiles. It still functions. It still does what it might need to do. And if the agent is writing tests and such for this new feature, they will still pass and everything looks okay. But if I'm not paying close attention, it ends up being something I notice later: "Oh crap, that wasn't caught." If I can catch it in a code review, that's great; there are all of these opportunities to catch it, but the reality is I'm a human and the agent is only an agent.

This is just one very simple example of an agent going off on a different path, and it's just a tracer call wrapping a database query. We can extrapolate this idea to all sorts of different coding patterns. What I have been doing is getting a little bit sick and tired and fed up of people saying, "Hey, just a skill issue in your prompt, right? Get better." Look, I understand that I can keep getting better, and I understand that agents and models will keep getting better, but the reality is that at some point I feel like I'm dumping so much context and instruction for the agent to follow every single possible pattern that it starts to feel unrealistic.
So I said, hold on, there's got to be a better way to do this, and certainly I'm not the person to invent it: we have analyzers in .NET. Up until this point I've been saying to myself, I want to use Roslyn analyzers, I want to use code generation, I want to start making videos showing this kind of stuff, but I hadn't yet gone and experimented with it myself. This has been the first time I've started leveraging it. If you have a quick look in my Solution Explorer, I have a Nexus AI analyzers project here with a whole bunch of different analyzers that we've added in. These are just patterns that I'm starting to add to our analyzers project because I keep doing these code reviews with agents and going, "Oh crap, it missed this again."

A quick look over here: a strongly typed ID analyzer. This is because we had different types of identifiers being put into the codebase and the agent was not marking them as strongly typed IDs. I also want strongly typed IDs excluded from code coverage, so I want that attribute on them; the agent was not following that either. Despite there being plenty of examples and instructions, it's just one of those things that it ends up missing. I have some other things for needler registration, my dependency injection framework. What happens, very expectedly I suppose, is that I have agents going to build out functionality and thinking, okay, the typical pattern in .NET is to register things onto the service collection, so I'll make sure to do that. But my dependency injection scanner will register those things automatically for us anyway, so I don't actually need it to write dependency injection code. I have that in my documentation. I have it in my Copilot instructions. It will still go do it. The idea is that I'm starting to leverage analyzers to literally force the agent to correct itself. Now, it's still early in this process for me.
So I wanted to make this video to talk through an example in this badge repository, but it's still kind of early for me. I want to show you that hopefully we get Copilot to generate some code that does what I'm expecting it to. I did a quick test before recording this, and it seemed to do what I wanted, so I copied the prompt. We're going to try it again, but obviously it might not do the exact same thing. The reason I say this is still early is because with things like analyzers, the agent is able to do what a human would do. That's one of the meta points I want you to think about with this video. Forget AI for a moment. If we just had a new developer working in our codebase (and personally, this is the philosophy I've always had), I would want people to feel comfortable coming in and writing code. This is going to sound like a funny saying, I guess, but I want people to be forced into the pit of success. They need to fall into it. I want the paved path they're following to make them do the right thing, so that it's harder and harder for them to screw up, harder and harder for them to follow the wrong pattern. This is the same thing I want with my agents. I want them to be able to write code and go, "Oh crap, that's not the right way; this is the right pattern to follow. Let me go do that," and just have them do the right thing more often. And when they don't do the right thing, what do we do with a human? We remind them. We say, "Hey, that's not the pattern. This is the pattern." I just want this to be more and more automatic. And if you think about that, if you're a human (and if you're watching this, I hope you're a human)...
If you're a human and you're a developer in a codebase, what better way to tell you you're not following the right pattern than having Visual Studio literally tell you, the moment you type it, "Hey, that's not the right thing. This is the right thing instead." We can get that kind of functionality with analyzers. What I'm going to do is add an update badge definition method. I'm going to start writing it because I want to show you one more thing that's not a custom analyzer, and I will make another video on this. If you're interested, just let me know in the comments if you want to see more stuff like this. Again, it's not going to be groundbreaking; I'm just trying to show you things we could start doing more of in terms of guiding agents.

Okay, so first thing: public async... that's not what I want, Copilot. I'm always curious when I start typing whether Copilot is just going to know what I want. Sometimes it does a really good job; other times there's certainly not enough context, like right now. So: public async, it's a Task, I wanted to call it UpdateBadgeDefinitionAsync, and we'll let Copilot do it now. Cool, it does some stuff. Let me pause on that for a moment and forget the Copilot part. See how I already have red squigglies in the IDE? Some people might say, "Oh, this is annoying," but I want to show you that this is not a custom analyzer. This is just from my globalconfig and .editorconfig settings. It says: remove unused parameter badgeId if it's not part of the shipped public API. So I have a rule turned on, IDE0060, and I have it set to be an error. If I'm not using a parameter, I want the IDE or MSBuild to tell me, "Hey, look, no one's using that. You should not have that included." And I'm turning more and more of these things on to start enforcing better patterns.
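The rule described here is a built-in .NET code-style diagnostic, and escalating it is standard `.editorconfig` configuration (the syntax below is real, though his exact file may differ):

```ini
# .editorconfig: turn "remove unused parameter" (IDE0060) into a build error
[*.cs]
dotnet_diagnostic.IDE0060.severity = error
```

For code-style rules like this one to fail the actual build rather than just light up the IDE, the project also needs `<EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>` in its .csproj.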
And this might seem like a silly one, and I know we're not talking about the custom analyzer yet, but just to give you a super quick example: I was recently refactoring some code in a test file that Copilot had written for me, and I noticed that a parameter was passed into a method it was using for asserting data, and the method wasn't even using the parameter. Should I have caught this in a code review? Yes, of course. But this is the thing: when we have more and more code being generated by agents, there's more and more opportunity for us to miss this kind of thing. We're reading more files. We're reviewing more files. I want more things enforcing this automatically. So this is one such rule in my globalconfig and .editorconfig that makes sure this kind of thing gets caught.

What I'm going to do now is have Copilot build this API out for us, and what I want to see is whether it will properly include the tracer call. If I were to write this myself, it would look something along the lines of using var connection... okay, thanks, Copilot. So I have my DB connection factory. I'm not an Entity Framework Core user; I prefer having my SQL queries in the code. That's why I said don't pay too much attention to the details here. If you're an Entity Framework user, that's totally cool; you do you. I like having my queries here. So if I have this connection and I want to make a Dapper call, connection and then ExecuteAsync, let's see if Copilot will do something. Lovely. Now let's put an await in front too. See how there's a red squiggly over this as well? This one is using the custom analyzer I have, and it says: database operation ExecuteAsync must be wrapped in a tracer WithTracing call. So this is one of the first examples where this is a custom analyzer.
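A custom analyzer like the one firing here can be surprisingly small. Below is a sketch of the idea, not his actual NexusAI analyzer: the diagnostic ID `NX0001`, the message text, and the `WithTracing` method name are all assumptions, and the check is a naive syntax-only walk that matches method names rather than resolved symbols.

```csharp
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class TracerWrappingAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "NX0001", // assumed rule ID
        title: "Database operation must be traced",
        messageFormat: "Database operation '{0}' must be wrapped in a tracer WithTracing call",
        category: "Reliability",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;

        // Only care about Dapper-style ExecuteAsync calls.
        if (invocation.Expression is not MemberAccessExpressionSyntax member ||
            member.Name.Identifier.Text != "ExecuteAsync")
            return;

        // Is this call nested anywhere inside a WithTracing(...) invocation?
        var isTraced = invocation.Ancestors()
            .OfType<InvocationExpressionSyntax>()
            .Any(a => a.Expression is MemberAccessExpressionSyntax m &&
                      m.Name.Identifier.Text == "WithTracing");

        if (!isTraced)
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, invocation.GetLocation(), "ExecuteAsync"));
    }
}
```

A production version would consult the semantic model to confirm the receiver really is a database connection instead of matching on names alone, but even this shape is enough to turn a missed convention into a compile error.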
This code is technically correct, but it needs to have what you see up here, this tracer call wrapping it. So I'm going to have Copilot build this method for us, and like I said, hopefully it does the right thing. The idea is that it should give us something like this. Again, I'm not paying too much mind to the actual query or the syntax; I don't care right at this moment if the query itself is wrong. I want to see that it's following some of my rules, and hopefully we get it to do that.

Okay, let's get rid of that. I'm going to pop open the chat. I'm just going to be using GPT-5 for this. And I still have the prompt on my clipboard, excellent. I'm just saying: add an UpdateBadgeDefinitionAsync (I should have said "method") to the badge repository. It should allow callers to update a badge definition using Dapper. Now, is this prompt amazing? Not really. Do I want it to write tests? Do I want it to follow the other patterns in here? Usually, yes, I would. I'm just not so interested in the tests right now, because I want to show you that hopefully it will add the tracer in. And if it doesn't, hopefully when it tries to build, it will hit a build error and then put the tracer in. It might just get it right the first time and not even try to build, in which case we got lucky and it didn't even need to rely on the analyzer. We'll see as we press run on this.

Okay, at this point it's going and adding the call in here, and if I have a quick scroll through, it did already add the tracer. So that's good news. Did it just do it super quick? Okay. [laughter] Way better than expected, I guess.
I was kind of hoping to see it do something kind of silly, but it just got it perfectly the first time, so awesome. Let's add that in. It has caching in here as well; I didn't even tell it about caching. I have FusionCache that I'm using in my repository. But just to give you a quick preview: let's take the tracer call out for a second. If it had written code like this (and it clearly didn't, but if it had), you can see it's saying down here that the tracer call needs to be wrapping this Dapper method. This is a safeguard that we have, and if it tried to build, it would see this error. Like I said, we got kind of lucky this time that it just did it right the first time, which is excellent.

So there are multiple layers to this. We can have the instructions up front. I didn't say it in the prompt, but if I have my Copilot instructions set up properly, and if I have my custom agent running with instructions in there as well that say things like "wrap it in the tracer call," these are all things along the way that we should be doing to give it better guidance. And then, failing that at some point, because there are enough patterns, or the context is going off the rails because it's doing a lot of work, or we forgot to explicitly say, "Hey, you'd better wrap this thing in a tracer call,"
we have this safeguard at the end: when it tries to build, it will say, "You can't even compile, man; you're missing this thing here." Like I said at the beginning, you don't need a tracer call wrapping a database method for the code to be correct, or for it to compile, or for it to do the right thing. But this is a pattern I have in my codebase that I want to enforce, along with a bunch of the other ones in my analyzers project. So I just wanted to give you a quick example of that.

The other thing I wanted to leave you thinking about: I also mentioned that I use FusionCache in my repository. I didn't tell it to add this caching, this removal of the cache entry right at the bottom. It's not adding the item to the cache either, but it's at least doing an invalidation, and I didn't tell it to do that anywhere. If we pop open the prompt again, it just says add this method in here. So either it came from the Copilot instructions, because I do have some information like that, or it saw the documentation, although I didn't actually see it saying anything about reading the documentation in here, which is kind of interesting. My assumption right now is that it was able to see the other relevant patterns in this file, because it's using that as the context. If I go back here, it's using the active document. So my assumption is that it was using the existing patterns in here, and maybe that includes the tracer as well, which is why it got it right right away. But if I want to enforce that caching is happening, or that we're invalidating the cache, that any new method needs to play nicely with the caching patterns, what I should do is have that in my Copilot instructions, and in my custom agent, if that's a pattern I want to keep enforcing.
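Putting the pieces together, the generated method plausibly looked something like the sketch below. The SQL, the cache key format, and the `_tracer`/`_cache` member names are guesses for illustration; only the shape (tracer wrapping the Dapper call, then a cache invalidation) is what the video describes. `RemoveAsync` is a real FusionCache method.

```csharp
public async Task UpdateBadgeDefinitionAsync(BadgeDefinition badge)
{
    // Satisfies the custom analyzer: the Dapper call sits inside the tracer wrapper.
    await _tracer.WithTracing(nameof(UpdateBadgeDefinitionAsync), async () =>
    {
        using var connection = _connectionFactory.Create();
        const string sql = @"UPDATE BadgeDefinitions
                             SET Name = @Name, Description = @Description
                             WHERE Id = @Id;";
        await connection.ExecuteAsync(sql, badge);
    });

    // Invalidate rather than update: the next read repopulates the cache.
    await _cache.RemoveAsync($"badge-definition:{badge.Id}");
}
```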
But one of the things that I am experimenting with now, and what this video is all about, is that maybe I should have a custom Roslyn analyzer that will look for this type of thing and say, "Hey, look, you have a database method call here in a repository pattern alongside other caching, and you're not doing anything with the cache. You probably need some invalidation or updating of the cache in your method as well." So that's something I want you to think about: could we write a custom analyzer for this? What would it look like?

I'm not going to go through the details of how to implement this analyzer; that will be a future video. I personally want to make sure that I'm more experienced with building custom analyzers first, because I'll let you in on a little secret: every single analyzer that you see in my project here was coded by Copilot. I said, "Here are patterns that I want you to use, and I want you to write an analyzer and write tests for it." Then I pulled down the code (I was doing this through GitHub Copilot), ran it, and played around to see, if I structure the code this way, will the analyzer pick it up? I went back and forth and vibe coded all of these analyzers. So they're probably not perfect; they can probably be improved, and I will continue to improve them. But in future videos I will go through how we can make analyzers, how we can vibe code some analyzers, and I'll show you how I did that.

I just wanted to leave you with that thought. This is how I'm starting to guide more of my agents with a little bit more structure, and I hope that gives you some insight into how you can start guiding your agents a little bit better than just, "Skill issue, bro. Get better at prompting." Thank you so much for watching, and I will see you in the next video.
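For anyone wanting to try the same setup, a custom analyzers project is typically consumed via a `ProjectReference` marked as an analyzer, so it runs at build time without becoming a runtime dependency of your application. The project path below is illustrative, not his actual solution layout; the `OutputItemType` and `ReferenceOutputAssembly` attributes are standard MSBuild.

```xml
<!-- Consuming project's .csproj -->
<ItemGroup>
  <ProjectReference Include="..\NexusAI.Analyzers\NexusAI.Analyzers.csproj"
                    OutputItemType="Analyzer"
                    ReferenceOutputAssembly="false" />
</ItemGroup>
```

With that in place, every violation the analyzer reports at `error` severity fails the build, which is exactly the feedback loop that forces an agent to go back and correct itself.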

Frequently Asked Questions

What are Roslyn analyzers and how can they help in coding?

Roslyn analyzers are tools that help enforce coding standards and patterns in your codebase. They can automatically check your code for specific rules and patterns, alerting you when something doesn't comply. This helps ensure that agents or developers follow the right practices, making it harder for them to make mistakes.

How do I implement custom analyzers in my project?

To implement custom analyzers, you can start by defining the patterns you want to enforce in your code. I used GitHub Copilot to help generate the analyzers based on the patterns I provided. Once you have your analyzers set up, you can test them in your codebase to see if they catch the issues you want to address.

What should I do if my agent doesn't follow the coding patterns I've set?

If your agent isn't following the coding patterns, it's important to provide clear instructions in your prompts. Additionally, using analyzers can serve as a safety net to catch any deviations from the expected patterns. If the agent still misses something, you can refine your instructions or add more analyzers to cover those gaps.

These FAQs were generated by AI from the video transcript.