In this video, I'll go through my first impressions of launching VS 2026 and opening one of my big C# projects. We'll play around with some simple Copilot features and see how things work!
Transcript
Okay, interesting. I've never done this before. All right, folks, we're going to jump into Visual Studio 2026 here. I just installed it, and this is essentially the first launch. I haven't opened any solution yet, so I figured why not start with the BrandGhost solution, which has hundreds and hundreds of projects, because I like building things with plugins and splitting everything out by project. Let's try opening it, see what it's like, and then I'll poke through some of the features they've listed and see if anything really stands out. I'm going to do follow-up videos on more specific features, like the benchmarking project templates that are now built in and how some of the Copilot features work. So for now we'll just poke through this, and then the follow-ups will dive into those other features in more depth.

So, let's see what happens. So far it does not feel snappy, of course, and because this is the first run, everything's in a spot I don't really like, so let's start moving some things around. Git Changes... I actually like having Copilot (my goodness, where's my cursor?) docked right here, but we don't need that at this moment. Then I'll just pull the Git Changes window over. I'm still not personally using Git inside Visual Studio. I probably should, but I don't find myself gravitating toward it; I use Git Extensions. Maybe I'm weird or old school, I don't know. But anyway, this loaded up, and you
can see all these projects in here. It did take a little while to load, but like I said, this is the very first launch. I know some people were reporting ridiculous speed. I don't actually know how many projects I have. If we try to build it, let's see if that goes quickly; I imagine it should, because I should have build artifacts from before.

What else is there? They talk about adaptive paste; I can pull this over for a moment. Adaptive paste, and code actions at your fingertips. Some of these I'm interested in. We can bring up the context menu and ask Copilot to explain things or generate comments. I've actually been doing the comment-generation kind of thing a bit more recently. Not that I document my code with comments, necessarily, but I have some SDK-style methods and classes where, when I'm building things, I'm trying to see if the LLMs do a better job when those have public XML doc comments on them. So for some common, reusable areas I've been having Copilot, or Claude, add those, and now that it's right there in a context menu, I'd like to try it out. Same with generating tests. I've made videos before about tests being hit-and-miss with Copilot, and with LLMs in general, along with some strategies for trying to improve that. But I'm very curious, now that this is a first-class feature: is it smarter? Is it more effective? I'm not a huge fan of asking an LLM to make tests and having it confidently say it has tested something, and then you read the test and it isn't even testing the real code. So we'll see how that goes. I don't do a lot with Mermaid charts, to be honest. Maybe I can play around with having the LLM make Mermaid charts and then actually view them; that might be kind of cool. Some of these editor controls aren't going to be that big of a deal for me, so not that interesting. Same with file exclusions and search settings. To be honest, I don't have a strong use for this yet, but we'll see. I'm not saying it's not useful at all; I'm just
saying that for me it's not a big deal. This next one, though, I want to make a follow-up video on, where we build an application and I point it at one of my repositories. I know that my public repository for Needler doesn't have any data in the training sets of the models Copilot will be using, so I want to see if I can share the public URL and have it make sense of what's going on. This, for me, is a huge value-add. It's a seemingly simple thing, but there's stuff where I'm trying to integrate with, for example, social media APIs for BrandGhost, and I've noticed that if you ask Copilot, regardless of the model, "hey, go create some code for me that interacts with this," it will generate something that's either made up or based on some API version that isn't relevant anymore. And it does it confidently, so you think, "hey, this is cool, it worked," except it doesn't work at all. Because I run into this scenario all the time, I'm hoping I can say, "look, I found the documentation, here you go; go wire this up for me." I see this one personally being a huge value-add despite seeming pretty simple. If I use ChatGPT, for example, that's outside of my IDE, which makes it a pain in the butt, but it does this quite well if I give it a URL. Sometimes I just have to be a little more instructive or clear in my prompt, because even with a URL it sometimes won't actually go search or open the page.

Next: better Copilot responses. I'm not sure how I'll validate this one in particular aside from just playing with things, so I don't know if I can make a video proving it, but I'm excited to use Copilot and have it feel more intelligent, or less dumb. It's not just a Copilot thing, either; I've seen the same in Cursor and in Claude. When they can improve the workflow and how context is maintained, we all win, so I'm really looking forward to that. The external-symbol thing, again, I'm not really sure is going to be a super
important use case for me. And I already hinted at this, but there are BenchmarkDotNet project templates now, and we're going to make some videos on that. I want to play with the profiler as well (oops, didn't mean to collapse that). What we should be able to do, in a combination video or maybe back to back, is build the benchmarks and then ask the profiling agent in Copilot to work through some optimizations based on the benchmark results. I've tried this sort of workflow in Claude before, where I tell Claude: let's build some benchmarks; now run the benchmarks; now use a tracer; now look through the data and suggest improvements. It's been hit-and-miss with Claude, but I'm very curious to see if we can do it with Copilot, especially since they've built it as a first-class workflow for us to use. They also have some Git tooling. Like I said, maybe I'll try to move over to doing more Git work in Visual Studio; out of habit, I just use the external tools I have, I don't know why. The debugging and diagnostics stuff I'm pretty interested in diving into, and the Git tooling is more something for me personally to try to shift over to.

"Did you mean" could be interesting; I'll have to see how it works. It says "Did you mean" works seamlessly with all-in-one search, so basically, if your search isn't perfect, it might suggest
something like, "did you mean this instead?" I don't know; I'll see. Code coverage: it says code coverage is now available in Visual Studio Community and Professional editions for the first time. This doesn't really affect me because I have Enterprise, but for all of you, I think this is awesome. If you haven't used code coverage with the highlighting, it's such a great feature to have at your fingertips, where you can see what your tests have actually covered in your code. So I do highly recommend using it. I've noticed in the past (I haven't tried it in 2026, since you just saw me launch it for the first time) that this type of highlighting can bog down the IDE a little, so hopefully there are some performance improvements. And maybe the organization is a little better now, because I've also noticed the code coverage results can be a bit of a pain in the butt to navigate. But overall, if you haven't used the code coverage highlighting before, it is awesome; highly recommended.

New look and feel, okay. And then, obviously, I'll cover this more in some future videos: .NET 10 and C# 14. I'm not going to go over all of those features in this video. Hot reload and Razor editor improvements: not a huge focus for me personally, just because it's not something I'm actively doing. But let's jump back over to Visual Studio and poke around a little. It might not be totally obvious,
but I can tell the UI feels like it's built with completely different controls. It's different, so I'm not sure if I'm a fan yet. The other thing is that to record this video I'm using a resolution that isn't right for this 48-inch monitor, so it looks a little silly to me, but I think it comes through better in the recording. If we pop open something like the SDK project, just to pull open some files here, I want to see some of these context menus. Copilot Actions, right? So: generate comments. I just want to try this out. For example, do we even have to highlight anything? Can I just do this? Copilot Actions, Generate Comments... oh, interesting.

If you've tried this before (maybe not in here, necessarily), here's how I used to do it. Let me get Copilot anchored here; I'll minimize that for a second. I would highlight the code, bring up the inline menu (I think it's Alt+/), and say something like "add helpful doc comments for developers," then pick a model; generally now I have access to GPT-5, so I'd probably use that, or Opus. It would then add in a comment, and I find that the GPT models will do it inline through this control, while Opus will basically answer almost in chat form, which is a little weird, and then I have to press Apply. So you see this chat on the left, this Copilot panel? If I picked an Opus model, the answer would show up over here and I'd have to press Apply. I think they're pushing things toward this chat window now. If I press Apply over here, you can see it write it all out, and then we can accept that. I'm not opposed to this if it's consistent. It sometimes feels a little odd to me that they animate the changes; it feels flashy when it's writing out line by line and you watch it go. Just for transparency: I can already see the code I want, so it probably would have been faster to copy and paste it over top myself, versus pressing Apply and watching it run through all the lines. For bigger changes, maybe the animation is more effective, but the animated replacing of text feels a little weird to me. I don't need that; just change the code so I can move on. But overall, that's a handy feature.

What else is here? Copilot Actions: I could explain this. "Here's a breakdown of the code," and it writes out a ton of stuff. Let me see. I know this code because I wrote it. So yes, it's an extension method. The purpose: it creates a structured-logging scope. If you're reading the code on my screen and you're not familiar with it: we can create a scope for our logger so that when you call logging methods
on it, the scope's information is attached as well. That way, when we're looking at our structured logs, every log that comes in under the scope has that information associated with it, and you don't have to write all of those values out in every log message. You can just log "hello," and the actual log entry will carry the data that's part of the scope. It's pretty handy. It says it's useful for tracing and debugging authentication flows; okay, that's because it's an auth scope. Parameters: an object containing authentication details (I guess it's pulling that from what it sees right here). Returns an IDisposable; yes, that's accurate. Why is this useful? It tells you how to use it, which is kind of what I was saying: you can begin the auth scope, and then (let me drag this out) if you were to log this line, a LogInformation call, you would still have all of this other data associated with that log line. So this is very specific to structured logging. It even flags a gotcha: if you forget to dispose the returned IDisposable by not using a using statement, the logging scope may persist longer than intended. That's helpful, right? If you saw someone in the codebase using BeginScope and thought, "I don't really understand why someone would do this; why wouldn't the logger just have this information," this will tell you why. And it gave a genuinely useful tip: make sure you're calling Dispose on this (not just because it's disposable; you should have a using statement regardless), and it tells you what can happen if you don't use it properly. Structured logging helps correlate log entries via scope context, and the context is automatically cleaned up when the scope ends, preventing accidental leakage. I think that's pretty helpful. This is maybe a crappy example because I'm very familiar with this code and could just explain it to all of you myself, but it's a really helpful thing if you're navigating a codebase and don't know what something does, which is pretty typical if you're building software in a team and exploring something new. You can just say, "okay, I don't know what this code does," and you saw that all I had to do was click on it, go to Copilot Actions, and hit Explain. So that's pretty cool.

Let's see if I just do Add to Chat. Let me highlight some text here; I'm curious. Copilot Actions, Add to Chat... right, it picks up the actual lines of text, the context we want to add. I think that's pretty cool. It's handy because what I was doing before (and maybe there was already a way to do this that I didn't know about) was highlighting the text, going over to the plus sign, and choosing Selection there. Now I can just do it all in one spot. It doesn't look like there are any keyboard shortcuts for these Copilot Actions yet, but maybe that will come. And I want to see Generate Tests, but
maybe I'll do a quick example of it now. It's going to be a really crappy example just looking at this code, but let's go ahead and try it. So: Copilot Actions, Generate Tests. What I'm finding interesting so far is that it's running in Ask mode, so instead of operating in Agent mode, where it would already be changing this code for you, it's writing the tests, but in the chat. A couple of things here. This might be a bad example; in this codebase I don't usually create mocks for everything like this. But it's a very simple kind of test, so maybe it's a good fit for a unit test with mocks. Still, it's all in the chat. I'm curious: if I change this to Agent mode and do the same thing, will that make a difference? You can see this maintained Agent mode. Okay, interesting. Again, I've never done this before, but if you noticed, I had the chat left in Ask mode before, so it was basically writing the tests in the chat window; then I switched the chat to Agent and did the same thing, and with the chat already in Agent mode, it went ahead and made the changes. To me that's a little confusing, because if I'm using Copilot Actions here and wasn't paying attention, and it popped open a new chat to go do the work because the chat happened to be in Agent mode, maybe that's not what I want. So it's a little confusing. But you can see now, because it's in Agent mode, it's iterating on doing the right thing, so to speak; it's trying to figure out what's up. We'll let that finish.

Now I'm curious, and maybe one of the last things I'll try here: I'll do the same thing, right-clicking to generate doc comments again, but leave it in Agent mode and see if it goes through. Furthermore, I'm going to switch the model, because I want to see whether it will actually do what I expect: use Agent mode, use the right model, and update the text. As I said, if I use Claude Opus or something like that, it seems to want to do it inline in chat, and that's not really what I want. So we'll see. This one is still trying its best; let's see what it's actually complaining about. Ah, yeah, there we go: I'm using strongly typed IDs, which come from a NuGet package, and it was trying to put raw strings into those strongly typed IDs. But you can see it went ahead and made tests for this. One thing I can't stand: I guess it's been trained on enough data where people literally write "arrange, act, assert" comments. I have an instruction in my Copilot instructions not to do this, and it still does it. It's a pain in the butt; I don't need these comments. But overall, it seemed like it did the
right thing, so that's fine. But let's go back to the chat. I'll do a new chat, leave it in Agent mode, pick Sonnet 4, and then get rid of this. So I'm just going to right-click and choose Generate Comments, and I'm hoping that because I've put the chat window into Agent mode and switched it to Sonnet, it will actually just go update things. These are really simple things, but when you're doing them on repeat, finding ways to save yourself some time can be super helpful. So, look, this is interesting. I can't stop it; I don't know why it's going off the rails on this, but you can see: no compilation errors; however, it "can see some issues that need to be fixed based on the coding standards. The test file violates the coding standards by using arrange/act/assert comments." Why did it write them the first time, then? Why does it figure that out now? This is not doing what I asked at all. I'm pretty sure I said generate comments; "/doc the code in here" is the actual command that was run. So this completely failed to do what I wanted. Completely failed. It updated the test and got rid of the arrange/act/assert comments, thank you very much, but it completely failed to do what I asked. So let's try one more time. Okay: Agent mode, Claude Sonnet 4. I'm going to go Copilot Actions, Generate Comments once more. Okay, this is just clearly not working. I wonder if I go to GPT-5 and generate comments; I'm trying to figure out whether it's the model, the prompt itself (which we've already seen be successful), or the fact that it's in Agent mode. Fortunately, this one is at least going to the Copilot instructions; I don't know why the other one decided not to use the coding standards. "Can you clarify what you need?" Very interesting. I wonder if I need to highlight more of the code. In my opinion, this clearly still needs some work, because this is pretty simple. I'm going to try one more time after this, going from Agent mode to Ask mode, and see. It seems like it doesn't know how to use /doc in Agent mode. See, this is doing... my goodness, no, you've got to stop that. Completely kind of crazy here. Okay, so if I put it
back to Ask mode: let's do a new chat. Ask mode... can I pick? There's Opus right there. Okay, let's see if this makes a difference. This should be similar to what I had in the beginning: Copilot Actions, Generate Comments. So yes, we can get comments generated in chat, and we seemingly cannot get them at all in Agent mode, which is kind of silly. But I could probably do this instead, right? So how did this work? Highlight, Copilot Actions, Add to Chat, then "add the doc comments to the method." Oh, it switched back to 4.1. I guess in this case it's because the model wasn't available for Agent mode; I had it on Opus, and we don't have Opus there. But anyway, that worked. I feel like that flow needs to be improved, though, because in my opinion there's no reason why "/doc the code," with some reference to the code, shouldn't work in Agent mode. I'm sure this will get improved, but that's good feedback for the VS team, I think.

Overall, I guess this is one of those things. I've made other videos about using AI tools in general, and it's the same with my IDE: this is going to be the new Visual Studio that I need to get familiar with. So, as long as it's not going to inhibit my ability to do work too much, I'll switch over to it and get more comfortable with it, and that way, as things evolve, I'm using the latest cutting-edge tools. That's what I've been doing with LLMs as much as possible: even when things are kind of weird and not really working the way I'd like, I try to use them and leverage them in ways I can make effective. Because all of this is changing so rapidly, I'm trying to stay at the forefront as much as possible; that's seemingly impossible to do, but at least I'm picking up the latest changes as I work through stuff.

So, overall, maybe not the best first experience with 2026. There are a handful of features I'm excited about, and like I said, we'll do some videos following this one, looking at benchmarking and then using the profiling agent; I'm very excited to see how that works. I'm going to see if there's some stuff in BrandGhost that I can benchmark and tune up, or I might go to something else I have and see if we can do some optimizations on that. So, thank you so much for watching. Feel free to go try out VS 2026. Like I said, the Community edition does now have the code coverage highlighting, which, if you haven't used it before, is extremely helpful, so give it a shot. I'd be curious to see if your experience is at least a little better than mine, and depending on when you watch this, maybe some of this stuff is already fixed up. So, thanks for watching, and I'll see you in the next video.
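For readers following along in text, the logging-scope pattern discussed in the transcript can be sketched roughly like this. This is a minimal sketch assuming `Microsoft.Extensions.Logging`; the extension-method name, scope keys, and usage are illustrative stand-ins, not the actual code shown on screen.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// Illustrative extension method: opens a structured-logging scope so that
// every log entry written while the scope is alive carries these key/value
// pairs, without repeating them in each message template.
public static class AuthLoggerExtensions
{
    public static IDisposable? BeginAuthScope(
        this ILogger logger,
        string provider,
        string accountId)
    {
        // BeginScope returns an IDisposable; dispose it (ideally via a
        // `using` statement) or the scope can outlive its intended region.
        return logger.BeginScope(new Dictionary<string, object>
        {
            ["AuthProvider"] = provider,
            ["AccountId"] = accountId,
        });
    }
}

// Usage sketch: the "Publishing post" entry is automatically enriched with
// AuthProvider and AccountId by any structured sink.
//
// using (logger.BeginAuthScope("twitter", accountId))
// {
//     logger.LogInformation("Publishing post");
// }
```

This is the mechanism the Explain action was describing: the scope correlates log entries, and disposing it tears the context down so it can't leak into unrelated logs.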
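Similarly, on the strongly typed IDs that tripped up the generated tests: here is a hand-rolled minimal sketch of the pattern. The project in the video gets this from a NuGet package; `UserId` here is a hypothetical example type, not one from that codebase.

```csharp
using System;

// A minimal strongly typed ID: wraps a Guid so that, for example, a UserId
// can never be passed where a different ID type is expected. Library
// packages generate this boilerplate; this hand-rolled version just shows
// the shape, and why tests that pass raw strings into it fail to compile.
public readonly record struct UserId(Guid Value)
{
    public static UserId New() => new(Guid.NewGuid());
    public override string ToString() => Value.ToString();
}

public static class Demo
{
    public static void Main()
    {
        var id = UserId.New();
        // Compile-time safety: `UserId other = "abc";` does not compile,
        // which is the kind of error the agent had to iterate on.
        Console.WriteLine(id);
    }
}
```

Because `record struct` gives value equality for free, two IDs wrapping the same `Guid` compare equal while distinct IDs do not, which is what makes the wrapper safe to use as a dictionary key or in assertions.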