This AI Migration Saved Me Months of Work
January 26, 2026
• 391 views
agentic coding, claude code, ai agents, agentic ai, roslyn, copilot, visual studio copilot, agentic development, vibe coding, generative ai, ai agent, ai coding, artificial intelligence, ai automation, ai agents use cases, software development, dotnet, csharp, C#, how to use copilot, how to use an ai agent, copilot instructions, dev leader, nick cosentino, gpt5, ai assisted coding, programming, coding, github copilot, github copilot spaces, software engineering, auth0
In this video, I walk through how I used both Claude AND Copilot CLI to migrate my C# dependency injection framework from reflection to source generation. Did it do it perfectly or did I need to take the wheel? Let's find out!
View Transcript
All right, I wanted to make this video to walk through some more real examples of using AI tools for building software. In this video, we're going to look at how Claude and then Copilot CLI actually helped me migrate my Needler dependency injection framework from reflection-based to source generated, which is something I really wanted to do and had no idea how to do, at least not in practice. So let's jump over and look at the change set here. I just want to give you an idea of what I was able to accomplish over the past couple days, because we had a long weekend here. My goal is not to say, "Oh, look how many commits were made," because you can write something that makes a bunch of commits really fast and
that's not impressive. I'm not going to show you the lines of code that were changed and say, "Look how many lines that AI wrote." These are all pretty meaningless metrics. What I want to talk about is that I was able to take this library I put together (and if you're not familiar with it, no one really is). It's a combination of some dependency injection patterns that I really like. There's the built-in .NET stuff for dependency injection using IServiceProvider and IServiceCollection. There's something called Scrutor, if you're not familiar with that, which does assembly scanning so you can go look for things to inject into your dependency container. And there's Autofac, which is historically something that I've really liked for building plugin-based systems. I really got to the point where I
said, I don't want dependencies on these other things. The built-in .NET stuff is great, but I want some of my own scanning because I do a lot of plugin development. Now, this is all reflection based, and it works that way because that's how I've always built software. I don't have experience with source generators, or only very, very minimal experience recently from doing some Roslyn analyzers and things like that. So the idea was to take this whole dependency scanning, dependency injection framework that I built and migrate it from something I'm very familiar with, reflection, over to source generation. The reason that's interesting to me, and why I wanted to share it with you, is because I actually didn't know how to do it and AI got us there. So, the part that I have highlighted on here is from my last
alpha release. And then this one here, where it says type filtering and zero reflection parity tests, is about as far as I got with Claude. And it's not because Claude was being stupid or anything like that. If I jump over to Claude, I'll show you the problem I have with Claude, and that is if I say something like this... oh, there we go. It's because I hit my limit. For some reason, I wanted to use Claude in particular for putting this together. I think I was interested in having the planning mode in Claude. If you haven't used that, by the way, it's awesome. But as I got to learn, even Copilot CLI, which we'll jump over to in a second, can still walk through planning and have good conversations that way. So, the Claude planning
I liked because it would ask you questions back. It's almost like using some of the research options in ChatGPT or other tools: you ask it something and it says, before I start, I've got a couple questions for you, right? Claude's planning mode does that. I like that. It got us off to a good start, but we just ran out of steam. The reason I'm not showing you all the history is that I had actually planned to make this video over the weekend and I was so excited. Then I restarted my computer to do updates and, stupid me, terminal sessions are not just saved like that. So, I lost all of it. Otherwise, I would be scrolling through the chat history to show you. But what I did in Claude to go from this part, my alpha.19 release,
to as far as it did get was basically going into planning mode. I don't know if it will let me do it now, but we'll try it just to see. Okay, I can type plan at least, and I'm in planning mode. I'm not going to be able to get it to do any actual planning because I'm at my limit. But in planning mode, I basically said to it: I have this Needler dependency injection framework. You can see right here, I'm in the repository. And I said, I want you to do a comprehensive analysis of how we're using reflection to do assembly scanning to look for types and how they get registered, and I want to have support for source generating all of this. But then I was very clear and I said I don't want
to do this all in one shot. The reason is that in some of my previous attempts (if you go to the repository on GitHub, if I haven't deleted them, you can see some of the pull requests and issues that were made), I tried giving this to Copilot, even with a big spec and everything, and no dice. So I said, I think I'm going to have to do this incrementally. I told that to Claude in the planning here, got it to think through and come up with a plan, and then I went back and forth with it. In some of the other videos I've made, I talk about different modes of working with AI. And I still use AI in chat-only mode, right? I use ChatGPT to
program. I actually really enjoy using ChatGPT to go back and forth and just brainstorm and think through things. I've also said in the past that I like using GitHub Copilot Spaces, because it's like ChatGPT but it can see multiple repositories that I have, and that works really well because I can do the ChatGPT-style thing except it has a lot more context, right? So that's all I was really doing in planning mode: going back and forth until it said, here's a plan, we're going to do it incrementally, here are the different phases. And I remember thinking, I feel like I got this far with Copilot before when coming up with a plan, and it really didn't deliver, twice. And this was
GitHub Copilot in the browser with the cloud agents. And I said, look, I'm sitting at the computer, I can babysit this thing, right? I can steer it if it's going off. Let's just try it. And that's what happened; that's what these commits are. So this initial source generation commit was its first attempt at getting some of the way there, right? The type registry generator. If you're paying close attention in here, there's this structure that says Needler generators, right? So it's actually making the generators. This code in here is actually not a generator; it's just support for it. This one is the source generator. Let me see if I can make this a little bigger. So in here, it's actually going to write code out. [laughter] It's ridiculous, right? But basically it
was able to go create these generators incrementally as it went through this. It got up to here, I believe, making small changes as it went, and then I got to a point, I think around here (it's probably not obvious to you watching; that's what I'm trying to explain), a few commits before, where I said, look, I'm getting kind of nervous about this. And when I say nervous, I mean it says it's working and I just don't believe it. Even though it's writing tests and stuff along the way, I've seen AI do this dumb kind of thing before where it says, oh, I finished what I'm doing, all the tests pass, and the tests don't
even compile. That's absolutely a lie. So I wanted to make sure that I could have what I'm calling parity tests, so that we can exercise the same sort of setup using reflection, the legacy way, and then using source generators, the new way I'm building (or having Claude build, in this case). Maybe I'll pull one of these parity tests up. Yeah, let's see here. What is this one? Both factories discover same plugins. Right. So in here it's arranging a reflection factory and a generated factory, then getting plugins via reflection, getting plugins via the source generated path, and comparing them. So I have a bunch of tests that say: do this thing with reflection, then do it with the source generated path, is it the same? I built a bunch
of tests like this. In the beginning it was kind of silly, and I might not have even committed it because it was so silly, and I don't see it in here, which is good. But there was some code where, as predicted, the tests were all set up in a way that compares like that, and then it showed me and left a comment saying, "This is simulating what the source generated code would do." And I'm like, "No, we're not simulating the source generated code. Use the real source generator." So I got it to build some parity tests, and then got to the point where I had one more thing. It was actually really good timing that Claude ran out of steam in terms of the limit. But you'll notice that I have this trimmed
AOT example. I'm going to pull up Visual Studio now. Oh my goodness, that is not helpful for anyone. Okay, so I'm going to talk with my hands a little bit here. One of the things with source generation versus reflection is that when we have things trimmed, as with AOT applications, at compilation time there's a lot of stuff that is literally trimmed out of the assembly. It's just not part of it. And that means that if you're trying to use reflection on something that's been trimmed out, that information is no longer available for you to use at runtime. I know a lot of people have disliked reflection in the past for various reasons: performance, or people just completely misusing it to do ridiculous things. Don't do that. But one of the more modern reasons
to not like reflection is that if you're trimming, you run into a lot of problems and it just won't work. If you need to trim, reflection is probably going to be a pain, if not impossible, for you. I had gotten to this point working with Claude before it gave out, and it seemed like it had set everything up how it's supposed to be. I have tests that show both paths are working, but I'm still not confident. So if I go to this test folder (things have changed quite a bit since), I had these integration tests in this parity folder, right? Kind of like what we just looked at in Git Extensions. I also have these example projects. So, if you want to see how Needler works,
you can go open up one of these. I said, "Look, we have to do it with the real thing. I need to have something that is AOT. It needs to be trimmed." So, I had it go put one of these together. And when I say "it", this is the point where I switched over to Copilot, so I just want to explain this and then I'll talk about my move over to Copilot. This is going to be a lengthier video just because I want to talk through how I was building this, and I wish I could have scrolled through and shown you the conversation as it was happening. This is basically set up so that I can have a trimmed AOT application. If I go into Program.cs, really this is the Needler part. Some of this stuff isn't even necessary; it's just because
in these example applications I'm trying to show it. Right here we're telling our Syringe, which does all of the discovery of types for us, that we want source generation. Okay. And then in this case we're doing it for a web application. This final line builds it. These couple lines here are just extra for the example, but really this builds a web application. It has a bunch of minimal APIs. It has a plugin defined in a whole other assembly. Sorry, not that one. It's over here. Right. So, it has a bunch of code in here, and you can see that down here, it's actually hooking up some additional minimal APIs. And this is in another assembly. Unless you think I'm lying to you, there's no wiring up of that other plugin explicitly. There's no code to do it
because Needler does it. So, it built this application in here, and then it has a publishing profile where we can actually have it publish, and that output trims it. All of that to say: I got to this point and I said, I need this application to truly prove that the source generation worked, because I really need almost an integration test or an end-to-end test to show what's happening. And at this point, I noticed that yes, it did a really good job, because this thing would run, but when it would trim things out, there were still some code paths that could fall back to reflection. I got kind of lucky in that, the way things were set up, it wasn't falling back. But when I was running this and investigating it, I realized, okay, Claude did a really
good job. It got source generation to work, it maintained reflection, and on top of that, it inferred a default behavior: try source generation, then fall back to reflection. So originally, with this source gen call here, you had to opt into source generation, but it would always do reflection for you, no matter what, if source generation couldn't find what it needed. We changed that, and this is where Copilot came in. Again, I'm just explaining that part so you understand how far I got with Claude. It did a really good job. The plan it came up with worked. It's just that once I got to the end of that plan, I needed more to prove the source generation worked as I expected. Let's jump back over here. So, there's a lot more stuff that
Copilot did. So, I'm just going to show you that what I did with Copilot, working back and forth, was pretty significant. There was a point where I was just adding extra stuff because Copilot was doing a good job, but honestly, it got to about here; that much was me getting Needler to the point where I feel it had done a really good job at what I set out to do, which is to have a source generation code path and not rely on reflection. If we have a quick scan over what's going on here, you can see removing some reflection fallback. I had a couple of examples where I was pulling some code out to show Copilot, like, this is what I want to accomplish. But I did the same thing. I went into Copilot.
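As a quick aside, the end state we landed on for mode selection can be sketched roughly like this. The type and method names below are hypothetical stand-ins, not Needler's real API; the point is only the behavior change from "silent reflection fallback" to "pick a strategy explicitly":

```csharp
using System;

// Hypothetical fluent builder to illustrate the behavior change;
// these are NOT Needler's real type or method names.
public enum DiscoveryMode { SourceGeneration, Reflection }

public sealed class ContainerBuilder
{
    private DiscoveryMode? _mode;

    // After the change: picking a discovery strategy is a mandatory, explicit step.
    public ContainerBuilder UseSourceGeneration() { _mode = DiscoveryMode.SourceGeneration; return this; }
    public ContainerBuilder UseReflection() { _mode = DiscoveryMode.Reflection; return this; }

    public void Build()
    {
        // Fail fast instead of silently falling back to reflection,
        // which is what the original default behavior would have done.
        if (_mode is null)
            throw new InvalidOperationException("Pick source generation or reflection explicitly.");
        Console.WriteLine($"Building container with {_mode} discovery.");
    }
}

public static class Example
{
    public static void Main()
    {
        // A trimmed/AOT app wants this guarantee: no reflection fallback possible.
        new ContainerBuilder().UseSourceGeneration().Build();
    }
}
```

The design point is that for trimmed AOT apps, a fallback that quietly reflects is worse than an exception, because the reflection path simply cannot work once metadata is trimmed away.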
If you're not familiar, like I said at the beginning, there is a Copilot CLI, very much like Claude Code. It doesn't have the cool little graphic; that's okay. But when you're in here, you can just talk to it like you talk with Claude. I made another video about model selection, because I struggled a lot when I first switched over to Copilot after Claude ran out. I almost gave up, but really I was just struggling with the model selection. I was trying some of these newer GPT-5 Codex models and they were just not working for me. 5.1's fine. Opus 4.5 is my preference at this time of recording. And once I switched over, I could use Copilot on the CLI just like I use Claude. I'd say one of the biggest pain points, aside from model selection, was that the approval process for commands was terrible. At every single step it was: approve it, approve it. I'm like, why don't I just code it myself at this point? But because you can have it remember approvals for the session, it got a lot easier as things went on, and it became rarer that I encountered a command I needed to approve. Anyway, I had Copilot go do planning. Now, at this time of recording there is no slash plan command like Claude has, but I said something along the lines of (let me speak into it): I need to work with you on an architectural plan for removing some of the fallback paths that we have for reflection. Recently, we've migrated the Needler
projects to move from reflection-based as the default to source generation. The current code keeps reflection as the fallback from source generation. We need to make sure that we can completely isolate and separate these so users can select source generation or select reflection. Right? Okay. So something like this. And then I added on a little bit more to give it some context in terms of where the code is, and then you can go back and forth with it. What I really liked is that, I mean, Claude, like I said, is going to ask you back some questions, but Copilot got into it too, and I could still hash out the details before continuing. And then I moved along. We got reflection as the fallback removed. Let me just expand this a little bit. We also got some changes here, because
there's this really weird issue where I have an ASP.NET Core specific Needler bundle, and even though I started pulling back the reflection-as-fallback parts, this thing still required both source generation and reflection. So, I had to work with Copilot to pull these apart even more. But the really cool part with all of this is that once Claude got the basis down, I could work with Copilot and start refactoring things. And one of the reasons I wanted to make this video is not just to say, hey, it's pretty cool how advanced this stuff is getting and how powerful it is. I wanted to talk about the benefits of still knowing how to write code, and software engineering in general, and building things, because I think this keeps getting buried.
You know, everyone's got their own opinion online. Everyone's freaking out that if you're a programmer, you don't need to exist anymore, all this kind of stuff. If I had just relied on Claude or Copilot to do what they were saying, this wouldn't have worked. It never would have worked. The reason we were able to make progress (we as in me and Claude, and then me and Copilot) is that when it was going down certain paths, I could stop it. I know that as a user of Needler, if you want the source generation path for your AOT application, it can't rely on reflection at all. It can't, because if your application depends on reflection to work, it simply will not work
for your use case. Claude and Copilot, as LLMs, don't really understand that out of the box. You have to really articulate it as you're working through it. And that means sometimes it thinks it's optimizing code to preserve behavior: like, oh, I'm not going to make that decision, because if I do, I'm going to break existing callers of this code, right? From its perspective, going to having no reflection fallback is dangerous; it's not going to work. And I'm like, that's exactly what I want to have happen. I need that to happen. There were things like that. There were certain things where, if I just jump into this, this one is the right spot, okay: there were some calling patterns where originally it was basically making callers type all of this stuff out. And so my
paradigm with this fluent builder syntax that I have was that I had tried to make more convenience methods so you can literally do something as simple as this. So now, I made one more step that is mandatory: you do have to pick source generation or reflection (I can talk about the bundle mode in one sec), but then you can either build a service provider or build a web application, right? This will scan everything for you. But as it was doing the migration, just to jump back to this code, in every spot it started making callers do all this, and I said, "No, no, no. We need these convenience methods. They exist for a reason." And it got to the point where, even with the bundle, the bundle package that I have lets you
actually do source generation with reflection as a fallback; you have to opt into it, though. As a result, it trimmed down this builder syntax a lot. It makes a difference when you understand how to build things, because you can think about how the code's going to be consumed, and you have your own preferences for the direction you want to go. I wouldn't have those preferences otherwise. And maybe these are bad examples; there are probably more technical things I could use as examples here. But if I hadn't spent time building software, then earlier, when Claude was like, "Hey, the tests pass, I have the parity tests," would I have asked: do they actually work? I had to go build an AOT trimmed project to prove that it actually works. Okay. So, I think it's incredibly important that you still spend time learning,
understanding, and building software. I think that's still going to be a skill for a very long time, and I think it only helps when you're building stuff with agents this way. Let me talk about one more quick thing before I end this video, because I want to show you that it wasn't perfect even after all of this. One of the things I thought was pretty cool was that I got to this point working with Copilot and I said, "Hey, what else should we do here?" So, we got into doing some Roslyn analyzers to help guide people toward the right patterns with Needler, and then benchmarks. Claude, sorry, Copilot in this case, had picked up on the fact that we're doing source generation versus reflection. So source generation should be faster, right? Let's get
some benchmarks to prove it. So, we did this, but like I said, if you don't have a background in building software, if you've never built benchmarks before and you're just kind of vibe coding them in, the first couple of passes that Copilot did were just wrong. And what do I mean by wrong? That's a pretty generic thing to say. The benchmarks passed, the benchmarks had results, the benchmarks were on the things I was interested in, and it followed the general instructions, but some of the benchmarks were ridiculous. I don't know where the right one is, but there are some changes in here where these benchmark methods are actually single lines now, which is good; that's what I wanted. Before it was committed, the benchmark code was doing a lot of extra setup inside the
benchmark. So, it was giving me these results and I'm like, those results don't make any sense. And when I looked, this benchmark code wasn't "do the thing on one line"; it was doing a whole lot of things plus the method we're interested in benchmarking. Given that I have built benchmarks before and I understand how BenchmarkDotNet works, I could guide it to do the right thing, because on the surface it's succeeding for the LLM, but it's truly not the right behavior. Benchmarks were a great idea, though, because once I got far enough and had actually working benchmarks, they called out some performance problems. There still is a situation where some code paths for source generation can be slower, and I realized this is the sort of culmination of
having AI go through and just churn through this codebase. So when you hear people talking about tech debt just being added on and things like that: yeah, we can use AI to write code faster, we can use it to review code, we can use it to do lots of things. I'm just speaking from my own experience on this particular project, this whole repository: AI did a lot of awesome stuff here, but there are some things that, systematically or fundamentally as part of my architecture, don't make sense anymore. They don't make sense. And I wouldn't have figured that out without Copilot starting to recommend some of the benchmark stuff, because then I could see the issue. With the generated code
using source generation, we know all the assemblies at the time we're generating. That's the whole point of source generation. But a lot of the APIs in the code allow you to pass in assemblies, because with reflection we need to go ask those assemblies for their types. That doesn't make sense when it's source generated; we know the assemblies, we were generated from them. There were some code paths that were slower, and it's because we had this IEnumerable of assemblies being passed in, and the AI, rightfully so I feel, said: hey, look, if you're giving us a set of assemblies, maybe you're trying to restrict to a subset of the assemblies we were generated from, and you only want types from that subset. Could be interesting. I think that's a valid use case that
we could have. But when you do that in these code paths, it is so much slower. It's slower than the reflection code path, and I think it's probably because it's doing some silly stuff, getting the assembly name and whatever else off of it. It's not in the diff, but it was so much slower than actually reflecting and loading up the types. If I put in this enhancement, where in the source generated path we don't need to pass in the assemblies, then we get to see the real behavior, and it's orders of magnitude faster with source generation. Without my understanding of software development, working with AI as it pushed me in this direction, I wouldn't have encountered that. The final thing I want to talk about, and this is going to segue into the next video because it deserves its own video
to talk through, is that I got to the point where I said, okay, great, let's go publish this thing, because I have a bunch of passing tests and some example projects. I feel good about this. Not good enough to give to everyone on the internet and say, go use this, it's perfect, but good enough that I want to start consuming it for real, because in BrandGhost I use Needler and I'd love to switch it over to the source generated version. So it was time for publishing over to NuGet. I already had a tool that does that for me. I worked with Copilot in this case on basically a changelog generator, because some of this stuff here is basically just me re-releasing, and some of the changelog stuff was kind of ridiculous. I said build
a tool to go do it. No problem. We specced out this tool, it built it, and then it just made its own changelog anyway, without running the tool, and it was wrong because it was using random stuff from its history that didn't exist. Anyway, we got the version bumped, and then I started consuming it in BrandGhost. But it does not work, and it doesn't work because there are some use cases, some scenarios, that I just was not privy to purely from looking at it from the perspective of this repository. So, if this seems cool, in the next video I'm going to show you for real (because I lost all of my Copilot chat history) how I get Copilot to help me fix up the gaps that we still have. And some of those
gaps, like I said, aren't that the code doesn't compile. It's that we have some use cases we just do not support anymore, and I need to make sure we can find a way to bring those back. I'm pretty confident Copilot can do it. So, when that video is ready, you can check it out next. And I hope you get to try out GitHub Copilot in the CLI. Thanks, and I'll see you next time.
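One concrete takeaway from the benchmark portion above: with BenchmarkDotNet, setup work belongs in a `[GlobalSetup]` method so the `[Benchmark]` body measures only the operation of interest. Here's a generic sketch of the mistake described in the video (these are illustrative benchmarks, not the actual Needler ones):

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Generic BenchmarkDotNet sketch (not the actual Needler benchmarks)
// showing why doing setup inside the benchmark method skews results.
public class DiscoveryBenchmarks
{
    private List<int> _data = new();

    [GlobalSetup]
    public void Setup()
    {
        // Setup runs outside the measured region, once per benchmark case.
        _data = Enumerable.Range(0, 10_000).ToList();
    }

    [Benchmark(Baseline = true)]
    public int MeasuresOnlyTheWorkWeCareAbout()
    {
        // Good: the method body is just the operation under test.
        return _data.Sum();
    }

    [Benchmark]
    public int AccidentallyMeasuresSetupToo()
    {
        // Bad: rebuilding the data inside the benchmark means the timing
        // mostly reflects allocation, not the Sum() we wanted to compare.
        var data = Enumerable.Range(0, 10_000).ToList();
        return data.Sum();
    }
}

public static class BenchmarkProgram
{
    public static void Main() => BenchmarkRunner.Run<DiscoveryBenchmarks>();
}
```

Running this requires the BenchmarkDotNet NuGet package and a Release build; the library itself will warn you if you try to benchmark in Debug mode.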
Frequently Asked Questions
What AI tools did you use for the migration of your dependency injection framework?
I used Claude and Copilot CLI to help with the migration of my Needler dependency injection framework from a reflection-based approach to a source-generated one.
Why did you choose to use AI for this migration instead of doing it manually?
I chose to use AI because I had minimal experience with source generators and didn't know how to approach the migration practically. AI helped me come up with a plan and execute it incrementally.
What challenges did you face while using AI tools during the migration process?
One challenge was that sometimes the AI would suggest code that didn't align with my requirements, like maintaining reflection as a fallback when I wanted to eliminate it entirely. I had to guide the AI and make decisions based on my understanding of the project.
These FAQs were generated by AI from the video transcript.
