xUnit v3 is packed with awesome features and takes full advantage of Microsoft Testing Platform. If you're like me... you didn't even know it was coming out (even though you use xUnit every day).
Check out what's new in xUnit v3 and enhance your C# tests!
Transcript
If you're like me, then for about a million years now, you've been using xUnit to test all of your C# code. But like many things we end up using out of habit, we forget that they get new additions, new features, and changes we can start exploring. Hi, my name is Nick Cosentino and I'm a Principal Software Engineering Manager at Microsoft. I've started making some videos looking at MSTest and at the Microsoft Testing Platform, wanting to compare the different testing frameworks we have access to. And when I started putting together the content for the xUnit version of this, I realized, hey, there's probably a lot of stuff in xUnit v3 worth covering before we get into the Microsoft Testing Platform. In this video, we're going to walk through some of the new additions and make sure we're on the same page, and then in the follow-up video, I'll explain some of the differences in code between the two. If that sounds interesting, just a reminder: subscribe to the channel and check out the pinned comment for my courses on Dometrain. Let's jump over to the docs and see what's new.

All right, so to kick things off: new templates you can get installed. Personally, I have not used these yet. I did run the command, so I have access to the templates, but I haven't used them in Visual Studio. That was a good reminder for me, because in some of my other code I've already gone ahead and started using the NuGet package, but I never went and grabbed the template. So it'll be interesting to see what they have there. But
essentially, one thing to point out that's really important if you're asking, "Okay, what even is xUnit v3?" When you go looking at the NuGet packages, they actually have a different name. If you were installing the NuGet packages for xUnit v2, you'll notice the version was something like 2.9.x and climbing. But when they went to version 3, it became an entirely different package. It's not the same package bumped to 3.1.0 or something like that; it is a whole new NuGet package. By not paying attention to things like social media or looking up other news, I didn't even realize that xUnit v3 was coming. I just assumed I could keep getting new updates through the existing package, but that's not the case.
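To make that concrete, here's roughly what the swap looks like in a test project's .csproj. The package IDs reflect my understanding of the xunit.net docs, and the version numbers are purely illustrative, so check NuGet for current ones:

```xml
<ItemGroup>
  <!-- v2: the classic package ID, versioned 2.x -->
  <!-- <PackageReference Include="xunit" Version="2.9.3" /> -->

  <!-- v3: a brand-new package ID, not a 3.x bump of the old one -->
  <PackageReference Include="xunit.v3" Version="1.0.0" />
</ItemGroup>
```

One related detail worth knowing, as I understand it: v3 test projects build as executables, so the project file typically sets `<OutputType>Exe</OutputType>` as well.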
One thing we'll end up seeing is that xUnit v3 is built on the Microsoft Testing Platform. Like I said, I'll have a follow-up video where we can actually see the benefits of that in code. I think it's pretty cool. In some situations, you might say, "Ah, that's not really that beneficial," but I think that over the long term, getting in the habit of doing things this new way is going to be pretty powerful.

Let's keep going down a little lower. There are lots of libraries that add different assertion types and things like that, and xUnit has added a few new ones. You can now skip tests at runtime, which I think is pretty interesting. I don't think I've personally come up with a use case for this, but if you're not familiar: historically, you've been able to add a Skip attribute on your test methods, which means that when xUnit does reflection to look at which tests to run, it can figure out which tests to skip. The challenge with attributes is that this is static, constant information; you sort of have to know at compile time that Skip is true, so that when reflection runs, it already knows you want to skip it. With a skip inside code, we can now execute logic during test execution: we could check the environment, or look at parameters in the configuration or in our CI/CD pipeline, and say, oh, we should skip this, so skipping becomes dynamic. I think that's pretty cool; I just don't actually have a use case. But I can imagine that for more complex projects and more complex pipelines, especially with different environments to run in, this could be a pretty powerful thing, where you're no longer constrained or having to jump through hoops to make it happen.

There are also a few more assertions for some older frameworks, which isn't super interesting to me. Dynamically skippable tests we just talked about; you can see the docs mention situations involving the runtime environment, which to me seems like one of the primary use cases. But I'd be curious, if you're watching this and listening through: do you have situations where you'd really benefit from that? And if so, what does that scenario look like? I'd be very curious to hear.
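As a minimal sketch of the runtime-skip idea just described (Assert.Skip is the v3 API as I understand it from the docs; the environment variable name here is purely hypothetical):

```csharp
using System;
using Xunit;

public class SkippableTests
{
    [Fact]
    public void Runs_only_when_an_integration_environment_exists()
    {
        // Decided during execution, not at compile time via an attribute:
        if (Environment.GetEnvironmentVariable("INTEGRATION_URL") is null)
            Assert.Skip("No integration environment configured for this run");

        // ... the real test body would go here
    }
}
```

If I recall the docs correctly, there are also conditional helpers along the lines of `Assert.SkipWhen` and `Assert.SkipUnless` for expressing the condition in one call.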
Explicit tests seem like almost the inverse. The docs say these are tests which are normally not run, unless it's requested that all explicit tests be run. Kind of interesting that you can now turn this property on. Again, I can't really think of a good use case for it in my own scenarios, but the option is there. I always like reflecting on this kind of stuff when people are building huge platforms for others to leverage, like xUnit. As someone building infrastructure like this, a testing framework, there are going to be so many different scenarios that you're not going to know about; you're developing for yourself or for scenarios you're aware of. And the more people that adopt it, and xUnit is used quite extensively, the more scenarios show up where you might say, "I never thought of that." When I see the skippable tests and explicit tests, in my mind that's kind of what's happening: I didn't realize people really had this need, but I think it makes sense.

Test filtering expressions. This is pretty cool; it's like a query filter language. Personally, I have had scenarios like this. Just to give you an example: when I was working at a digital forensics company, we had tests that were essentially UI tests that would click through things, we had unit tests, and we had slower functional or integration tests. We had different scenarios in our CI/CD pipeline where we said, hey, look, some of these tests are slow but valuable, and that slowness isn't worth running on every single build. I'm making this up because I can't remember the exact details, but it might have been: every build, we'd run the unit tests, and I think the integration tests that weren't slow as well. Then we had UI tests that might run once a day or once a week; I think they were daily. And then we had these really, really long-running tests, because as a digital forensics company we were building software that did searching, and we'd have to do enormous searches and comparisons to validate that our software still worked. We would not run those every single build. We had a subset that ran nightly, and another subset that ran weekly, because they were significantly larger. Being able to have a filtering syntax like this is really cool, because I can remember the way we had to make this work was a little bit tricky. There are conditions you can check for, like categories of tests, but having a first-class query filter language would be super helpful; I'm just remembering some of the pain around that. So that's pretty cool.
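To sketch the kind of bucketing I'm describing: traits have long been how you categorize tests, and you can already slice on them from the command line; v3's filter expressions build further on this idea. The trait name and value below are just examples:

```csharp
using Xunit;

public class ForensicsSearchTests
{
    // Tagging the slow suite so a pipeline stage can opt in or out of it.
    [Fact]
    [Trait("Category", "Nightly")]
    public void Enormous_search_and_comparison_run()
    {
        // ... long-running validation would go here
    }
}
```

With traits in place, something like `dotnet test --filter "Category=Nightly"` selects just that bucket; v3's query filter language is a richer syntax on top of ideas like this, so check the docs for the exact expressions.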
Test context. This is interesting, because I hadn't even thought about it before, and like I said, I've been using xUnit for ages now. When I upgraded to v3, I noticed there were more Roslyn analyzer rules running, and a lot of code in my tests was being highlighted with "you should change this." And I'm going, what the heck is happening here? These tests have been around for like two years; why am I suddenly getting this recommendation? Going to v3, it was flagging that I was basically using a default cancellation token. A lot of methods are asynchronous, and in tests I'd think, whatever, I'm not hooking this up to some button that needs to cancel, or a web request that needs to chain cancellation through, so I'll just use CancellationToken.None. Now we have the ability to get one from the test context, which you can access either the static way, through TestContext.Current, or through ITestContextAccessor (which is really difficult to say) via dependency injection. Either way, you can get the current test context and ask it for the cancellation token. This seems really simple, but I think it's really interesting if you need to do something like stopping your tests. I'm not sure if you've ever run a test suite with thousands of tests, pressed stop, and then just sat there waiting while things seem a little hung. Having access to cancellation tokens naturally through your tests is not only good practice (in my case, I was already passing in a None token anyway), but if we can use a real one, we get that responsiveness when stopping, which would be super cool. It's a small detail, but these are the types of things that, over time, as you build up your test suite or work across many projects in your career, could end up saving you a bunch of time. I think that's actually very helpful.
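Here's a minimal sketch of flowing the runner's token instead of CancellationToken.None (the awaited operation is a hypothetical stand-in for real code under test):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public class CancellationAwareTests
{
    [Fact]
    public async Task Flows_the_runners_cancellation_token()
    {
        // The runner's token: it cancels when you stop the test run.
        CancellationToken token = TestContext.Current.CancellationToken;

        await DoWorkAsync(token);
    }

    // Hypothetical async operation standing in for the code under test.
    private static Task DoWorkAsync(CancellationToken ct) => Task.Delay(10, ct);
}
```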
There are a few more things here that I haven't played with yet, and I think they might be pretty interesting. SendDiagnosticMessage: I'm not sure how I'd use this yet; in my traditional tests, maybe not. Key-value storage is also pretty interesting, because the docs say it can be used to communicate between different stages of the test pipeline. Again, I don't have a scenario yet. Now, in one of my products that I'm building, called Brand Ghost, I do have tests using Testcontainers; they spin up a Docker instance for the SQL database. And I'm curious, maybe I have some situations there where something about the test pipeline would be helpful to figure out, but I'm not totally sure yet, so I want to explore this a little more. AddAttachment could be quite interesting: attachments to the test results that will be recorded into the various report formats. I don't have fancy reporting in Brand Ghost, but thinking back to a previous product I was building, called Meal Coach, there could have been something interesting there, because I had sets of tests that were basically performance or benchmark tests. Adding an attachment with the benchmark results, for comparison, could have been super cool. I'm curious about using that more, though in my current setup I don't really have a good use case for it. And AddWarning adds a warning to test results, which will be reported to the test runner as well. For me, this one doesn't really stand out, but I do have some helper methods on some things, and I wonder if AddWarning is something I could leverage more from a test-infrastructure perspective, rather than from the currently running test. So, there's a handful of things on this test context that I think are pretty interesting; the only one I've really used is the cancellation token right now.
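As a sketch of that attachment idea: I haven't used this API yet, so the method name and shape below are my reading of the docs described above, and the JSON payload is invented; verify the exact overloads before relying on this.

```csharp
using Xunit;

public class BenchmarkishTests
{
    [Fact]
    public void Records_benchmark_results_as_an_attachment()
    {
        // Stand-in results; imagine real benchmark numbers here.
        string resultsJson = "{ \"opsPerSecond\": 12345 }";

        // Attach so the data lands in the generated report formats.
        TestContext.Current.AddAttachment("benchmark-results.json", resultsJson);
    }
}
```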
So, if we move down to theory data rows and metadata, we get some more options for theory data, which is pretty helpful. We can see an example here: they've introduced a new base class, TheoryDataRow, for untyped data rows, and one-to-ten type argument generic versions of TheoryDataRow for strongly typed data rows. You can use a property-setting pattern for these data rows, which is pretty interesting, as well as a fluent construction pattern. I like the fluent-style syntax in general, and it's nice to see we have some options for that. I haven't used MatrixTheoryData yet. Off the top of my head, I can't think of a test I have right now in something like Brand Ghost, but I know I have written tests like this, where I really want a matrix of data and have it multiplied out. You can see the syntax: if we look at these five entries, we have three values in the first row and two in the second, and because it's a matrix of combinations, we get the product of the two, so six entries down here. That's pretty cool, because I know I have written tests like this, where I've used inline data, or theory data on a separate class or method, to try and manually come up with all the combinations. And then what ultimately happens is that at some point in the future I go, hm, I need to add one more into this matrix, and now I'm manually blowing out that whole matrix again. This is really cool. Like I said, off the top of my head I don't have a test in mind for this, but I need to keep it in mind, because I know I do write tests like this, and when it comes up I need to stop myself from manually writing out all of the combinations; I really just need to use the matrix. So, I think that's pretty cool.
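A small sketch of that product-of-rows idea, along the lines of the 3 × 2 example above (constructor shape per my reading of the docs; the values are invented):

```csharp
using Xunit;

public class MatrixExamples
{
    // 3 widths x 2 cultures = 6 generated test cases,
    // without hand-writing every combination.
    public static MatrixTheoryData<int, string> Combinations = new(
        new[] { 80, 120, 200 },
        new[] { "en-US", "fr-FR" });

    [Theory]
    [MemberData(nameof(Combinations))]
    public void Renders_at_width_for_culture(int width, string culture)
    {
        Assert.True(width > 0);
        Assert.NotEmpty(culture);
    }
}
```

Adding a fourth width later means one new array element, not rewriting every combination by hand.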
They say that in v2, output calls to Console, Debug, or Trace are not captured. I have historically noticed this. It's not something I rely on heavily, but there are situations where, if I'm trying to debug something like a race condition, putting a breakpoint in means it might not occur. So there are times where I'll do the classic "let me add some print statements to debug," and I've noticed in the past, using stuff like this, I'm going, crap, it's not working the way I want; I'm not seeing the output where I expect it. We can see here that in v3 they've added two assembly-level attributes you can use to capture output and redirect it as though you had written it through the test output mechanism, which enables capture for all tests in the assembly: CaptureConsole and CaptureTrace. That's pretty cool, being able to treat that as a first-class thing. I'll want to keep it in mind. Like I said, it's not a super common case for me, but I can absolutely remember scenarios where I was trying to get some logging in place to the console to debug, and it just wasn't showing up where I was expecting.
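Enabling that capture looks like a one-liner pair at the assembly level (attribute names as the docs describe them; the file name is just a convention):

```csharp
// e.g. in AssemblyAttributes.cs anywhere in the test project
using Xunit;

[assembly: CaptureConsole] // route Console output into test output
[assembly: CaptureTrace]   // route Debug/Trace output into test output
```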
Test pipeline startup. A new test pipeline startup capability has been added, which allows unit test developers to run startup and cleanup code very early in the test pipeline. This is interesting: it differs from an assembly-level fixture because of how early it runs; it runs for both discovery and execution, whereas fixtures only run during execution. Even for me, I have not done assembly-level fixtures to set things up. The intention with this hook is to perform global initialization work that's needed for both discovery and execution. To give you an example: like I mentioned with Brand Ghost, I have Testcontainers set up that spin up Docker instances for a MySQL database, and I'm currently looking to optimize that. One of the challenges is that when you run tests in parallel, the way it's historically run is across different processes, so you can't share static state. You can't keep a static reference to your test containers, because these things are running in different processes. The side effect is that when too many things run in parallel, I don't want to spin up a hundred Docker containers, so I need to limit the concurrency, or come up with creative ways to have only one or a few instances of these running. So for me, I'm curious whether there's some global initialization I can do that would help with that. I'll be exploring this a little further.
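For illustration only, here's roughly what I understand the hook to look like. I haven't used this yet, so treat the interface name, method signatures, and registration attribute as assumptions to verify against the v3 docs:

```csharp
using System.Threading.Tasks;
using Xunit.Sdk;
using Xunit.v3;

// Registration is assembly-level (attribute name assumed from the docs):
[assembly: TestPipelineStartup(typeof(GlobalPipelineStartup))]

public sealed class GlobalPipelineStartup : ITestPipelineStartup
{
    public ValueTask StartAsync(IMessageSink diagnosticMessageSink)
    {
        // Runs very early, for both discovery and execution:
        // one-time bootstrap work (e.g., shared container setup) could go here.
        return default;
    }

    public ValueTask StopAsync()
    {
        // Matching cleanup at the end of the pipeline.
        return default;
    }
}
```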
Culture override. This is something I don't think enough people talk about, or think about in general, when they're building software. It's a lesson I had to learn super early in my career, working in digital forensics. What we ended up doing, a lot of the time, was parsing unstructured data; the majority of the job, in a nutshell, was taking data that's not structured and trying to structure it, so that someone, whether that's a human or a machine, can do more with it. Now, many of us are used to thinking about the data we're currently working with in our current culture, because that's most of what we see. But we may not realize that other cultures might use a different separator. In English, a decimal number might be written 1.234, but in France you might see 1,234, because they use a comma instead of a period as the decimal separator. When you're trying to parse this stuff and the culture isn't matched up, you get big problems. So culture, in my experience, has been very important: understanding the culture of the data we're working with and the culture of the thread that's running it. Now, this says that in v2, the culture your unit tests ran with was the default culture of your PC. For the most part that's going to be consistent, and it isn't an xUnit-specific thing, but they say that in v3 the default is the same, except you can override it in your configuration file. That's super cool. It might mean you could have not only consistency across your development environment, but maybe something for your CI/CD pipeline, where you change the culture of your entire test infrastructure. Again, thinking back to the digital forensics days: if the investigators or examiners running our software are on a different platform, can we change the culture of our testing infrastructure and make sure the program still behaves correctly? I think there could be some really interesting things that come up with this.
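As a sketch, the override lives in the runner configuration file; the key name here is my reading of the docs, and fr-FR is just an example. For a feel of why it matters: under .NET's default parsing rules, "1,234" parses as one thousand two hundred thirty-four in en-US, but as 1.234 in fr-FR, where the comma is the decimal separator.

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "culture": "fr-FR"
}
```

This would typically go in an xunit.runner.json in the test project, set to copy to the output directory.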
I'm just spending a little extra time on this one because I wanted to draw attention to the fact that I don't think a lot of people think about culture.

Repeatable randomization. Randomization in tests, what's going on with that? The docs say that randomization of test cases in v3 is stable until you rebuild, and then the order may change; and, in an attempt to help developers track down issues related to a particular random order of specific test cases, they will print the randomization seed when diagnostic messages are enabled. Very interesting, right? If you've ever run into this issue with concurrency in tests, it's something that will suck up tons of time and energy. You run all your tests in Visual Studio, you see green check mark after green check mark, and you're like, heck yeah. Then later on you think, I should run these tests again, and one test is red, and you're like, oh, that's weird, I thought they were all green. You run it again, now it's green, and you didn't change anything; maybe a new one is red, and you're going, oh no, there's a race condition, something is going on. So this is super helpful: having that randomization seed, and some stability in terms of the order. I think that's really cool. I don't know if, in practice, this part will be as helpful as I'm imagining, but I think it's a step in the right direction. At the least, I'm feeling for the people who have had to deal with that kind of race-condition issue in their tests. It sucks.
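To actually see that seed, diagnostic messages need to be turned on; that's another runner configuration switch (key name per the runner config docs as I recall):

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true
}
```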
Okay, some of the rest of the stuff in here, it's not that it's not important, but I think we'd start getting into the weeds, and I'm not entirely sure the primary audience will find it that valuable. I do recommend, if you're interested: I'll have the link in the description, and you can see it in the title bar; it's just the xunit.net docs, and they have a getting-started guide for v3. This is super cool. You can see one of the last call-outs here, and like I said, I'm going to make a follow-up video on it, because it's what inspired this video when I started making this content: support for the Microsoft Testing Platform. That's going to be the focus of the next video in this series. But overall, I was able to take Brand Ghost, which is the primary thing I'm building on the side, and switch it over, and this is how simple it was: search and replace all of the xUnit references to point to the new NuGet package, and then I could actually remove two other references. So, if that sounds interesting, I'll explain more of that in the next video. But I think that's all for this one. Would love to hear your thoughts: have you already started using xUnit v3? Are you holding off? Did you not even know about it? Would love to hear from you. Thanks so much for watching, and I'll see you in the next one. Take care.