Testing C# Source Generators: A Practical Guide
If you've built a C# source generator, you already know how powerful they can be. Source generators run at compile time, inspect your code, and produce new source files -- all without any runtime overhead. But once the generator is written, you face a challenge that most .NET developers haven't dealt with before: testing C# source generators requires a fundamentally different approach compared to testing ordinary runtime code.
This guide walks you through everything you need to get solid test coverage on your generators. You'll learn how to use Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing, write assertion-based unit tests, verify that your generator emits the right diagnostics, and adopt snapshot testing with the Verify library to keep your test suite maintainable as your generator evolves. All code examples target .NET 9. The differences for .NET 10 are noted in callouts where they apply (TargetFramework, reference assemblies package, and Roslyn package version).
Why Testing Source Generators Is Different
Unit testing a regular service means calling a method, checking a return value, and asserting on side effects. Testing a source generator is categorically different. Your generator produces compilation output -- new C# source files that are fed back into the compiler. The thing you're testing isn't runtime behavior; it's whether the compiler, fed a specific input source, produces exactly the source code you intended.
This distinction matters for concrete reasons. A generator that produces subtly wrong code may compile fine but cause runtime failures in consumer projects -- and those failures are often hard to trace back to the generator. Inputs are syntax trees, not simple method arguments. Whitespace, indentation, and line endings all matter, and the testing framework needs to normalize comparisons or tests will be brittle across machines.
Diagnostics are also first-class output. A well-written source generator reports errors and warnings when inputs are malformed, and you need to verify those diagnostics as rigorously as the generated source. The good news is that the Microsoft.CodeAnalysis.Analyzer.Testing package family gives you a purpose-built harness that handles all of this automatically.
Testing Approaches Overview
There are two primary approaches to testing C# source generators in modern .NET, and they complement each other well.
Assertion-based unit tests use CSharpSourceGeneratorTest<TGenerator, TVerifier> from the Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing.XUnit package. You declare your input source and your expected generated output, run the test, and the framework compares them character by character (with line ending normalization). This approach is explicit -- every character of your expected output is spelled out in the test, making failures immediately obvious.
Snapshot testing with Verify takes a different angle. Rather than specifying expected output upfront, you run the generator, capture everything it produces, and persist that output to disk as "verified" snapshot files. On subsequent runs, the framework compares the new output to the persisted snapshot. If they match, the test passes. If they differ -- because you changed your generator -- the test fails and you review the diff before approving the new snapshot.
Both approaches are valid and often used together. Assertion-based tests are best for critical, stable parts of your generator's output. Snapshot tests shine when you're iterating quickly or when your generator produces substantial output that would be tedious to type out in full.
Setup: The Test Project
Create a separate *.Tests project. Note that while the generator project itself typically targets netstandard2.0, the test project should target a modern TFM -- source generator test infrastructure works cleanly with .NET 8, 9, and 10.
Here's the project file you'll need for both assertion-based and snapshot testing with xUnit:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net9.0</TargetFramework>
<IsPackable>false</IsPackable>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>
<ItemGroup>
<!-- Core Roslyn packages -->
<PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.9.2" />
<PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.4" PrivateAssets="all" />
<!-- Source generator testing harness -->
<PackageReference Include="Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing.XUnit" Version="1.1.2" />
<!-- BCL reference assemblies for test compilations -->
<PackageReference Include="Basic.Reference.Assemblies.Net90" Version="1.7.9" />
<!-- xUnit -->
<PackageReference Include="xunit" Version="2.9.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.8.2" PrivateAssets="all" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.11.0" />
<!-- Snapshot testing -->
<PackageReference Include="Verify.Xunit" Version="26.4.4" />
<PackageReference Include="Verify.SourceGenerators" Version="2.3.0" />
</ItemGroup>
<!-- Reference your generator project -->
<ItemGroup>
<ProjectReference Include="..\MyGenerators\MyGenerators.csproj" />
</ItemGroup>
</Project>
For .NET 10: use net10.0, Basic.Reference.Assemblies.Net100 (a separate NuGet package), and Roslyn 4.12.0.
A few things worth noting. The Basic.Reference.Assemblies.Net90 package provides the .NET BCL reference assemblies that the test compilation needs to resolve types like System.String and System.Console. Without these, your test compilation will fail to resolve standard types and your generator tests will report spurious errors rather than testing your actual logic. This is the single most common setup mistake when getting started with testing C# source generators.
The Microsoft.CodeAnalysis.Analyzers package is marked PrivateAssets="all" because it's a build-time dependency only. The Verify packages are optional if you're only doing assertion-based tests, but adding them upfront is worthwhile given how useful snapshot testing becomes over time.
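One related knob worth knowing about: the CSharpSourceGeneratorTest harness resolves BCL references through its own ReferenceAssemblies property, separately from the Basic.Reference.Assemblies package (which the snapshot TestHelper later in this guide uses). A hedged sketch -- exactly which TFM members are available depends on your Microsoft.CodeAnalysis.Testing version, so verify before relying on it:

```csharp
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;

// Pin the BCL surface the harness compiles test inputs against.
// ReferenceAssemblies.Net.Net80 is shown here; newer versions of the
// testing package expose newer TFMs -- check what your version provides.
var test = new CSharpSourceGeneratorTest<ToStringGenerator, XUnitVerifier>
{
    ReferenceAssemblies = ReferenceAssemblies.Net.Net80,
};
```

The harness resolves these reference assemblies from NuGet at test run time, so the first run may need network access or a populated package cache.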
Unit Testing with Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing
The CSharpSourceGeneratorTest<TGenerator, TVerifier> class is the foundation of assertion-based unit testing for C# source generators. You instantiate it, populate TestState.Sources with your input, populate TestState.GeneratedSources with the expected output, and call RunAsync().
The framework handles everything in between: creates a CSharpCompilation from your inputs, instantiates your IIncrementalGenerator, runs the generator driver, collects all generated source files, normalizes line endings, and compares against your expected outputs. When outputs diverge it reports a clean diff. This is far more reliable than hand-rolling the compilation setup or testing by running your actual application and checking the obj folder.
Setting Up Input and Expected Output
The TestState.Sources collection accepts raw C# source strings. The TestState.GeneratedSources collection accepts tuples of (typeof(TGenerator), "filename.g.cs", "expected content"). The filename must match exactly what your generator passes to context.AddSource(hintName, content). One useful ergonomic detail: the framework normalizes line endings, so CRLF on Windows and LF on Linux will both match the same expected string.
Writing Your First Generator Unit Test
Let's work through a complete example. Imagine a generator that reads classes marked with a [GenerateToString] attribute and emits a ToString() override for each one. Testing C# source generators like this means verifying that given a specific annotated class, the generator produces exactly the override you expect.
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;
using Xunit;
namespace MyGenerators.Tests;
public sealed class ToStringGeneratorTests
{
[Fact]
public async Task GenerateToString_WithMarkerAttribute_GeneratesCorrectCode()
{
var test = new CSharpSourceGeneratorTest<ToStringGenerator, XUnitVerifier>
{
TestState =
{
Sources =
{
"""
using MyGenerators;
[GenerateToString]
public partial class Person
{
public string Name { get; set; } = "";
public int Age { get; set; }
}
"""
},
GeneratedSources =
{
(typeof(ToStringGenerator), "Person.g.cs",
"""
// <auto-generated/>
public partial class Person
{
public override string ToString() =>
$"Person {{ Name = {Name}, Age = {Age} }}";
}
""")
}
}
};
await test.RunAsync();
}
}
Notice a few things. The input source is the exact C# code that would appear in a consumer project. The expected output is the exact code your generator should produce -- auto-generated header comment, formatting, and all. The (typeof(ToStringGenerator), "Person.g.cs", ...) tuple must match the hintName parameter passed to context.AddSource inside your generator. If the hint name doesn't match, the test will report an unexpected generated source -- a clear signal to check the name alignment.
When actual generated code differs from expected, the test runner shows a side-by-side diff. This is especially valuable when testing generators that automate common patterns. If you're building generators that automate factory method registration, you want to verify that the generated registration code matches exactly what you'd write by hand. If your generator produces builder pattern boilerplate, a unit test catching any deviation before it reaches consumers is invaluable.
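For context, here is a minimal sketch of what the generator side of this contract might look like. The attribute metadata name, the predicate, and the BuildSource helper are illustrative assumptions, not the article's actual implementation:

```csharp
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

namespace MyGenerators;

[Generator]
public sealed class ToStringGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Find classes annotated with the marker attribute.
        var classes = context.SyntaxProvider.ForAttributeWithMetadataName(
            "MyGenerators.GenerateToStringAttribute",
            predicate: static (_, _) => true,
            transform: static (ctx, _) => (INamedTypeSymbol)ctx.TargetSymbol);

        context.RegisterSourceOutput(classes, static (spc, symbol) =>
            // This hint name is exactly what the test's GeneratedSources
            // tuple must match.
            spc.AddSource($"{symbol.Name}.g.cs",
                SourceText.From(BuildSource(symbol), Encoding.UTF8)));
    }

    // Hypothetical helper that renders the ToString() override.
    private static string BuildSource(INamedTypeSymbol symbol) =>
        $$"""
        // <auto-generated/>
        public partial class {{symbol.Name}}
        {
            // ... generated ToString() override ...
        }
        """;
}
```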
Testing Generated Diagnostics
A well-designed source generator doesn't just produce code -- it also reports meaningful errors and warnings when its inputs are invalid. If a developer applies your [GenerateToString] attribute to a class that isn't marked partial, your generator should emit a diagnostic telling them what went wrong, not silently produce no output.
Testing diagnostics follows the same CSharpSourceGeneratorTest pattern, but instead of populating GeneratedSources, you populate ExpectedDiagnostics. The framework verifies that the generator produces exactly the diagnostics you declared -- no more, no fewer.
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;
using Xunit;
namespace MyGenerators.Tests;
public sealed class ToStringGeneratorDiagnosticTests
{
[Fact]
public async Task GenerateToString_WithNonPartialClass_ReportsDiagnostic()
{
var test = new CSharpSourceGeneratorTest<ToStringGenerator, XUnitVerifier>
{
TestState =
{
Sources =
{
"""
using MyGenerators;
[GenerateToString]
public class Person // Missing 'partial' -- should trigger diagnostic
{
public string Name { get; set; } = "";
}
"""
}
}
};
// Expect our custom diagnostic TSG001 at the class name position
test.TestState.ExpectedDiagnostics.Add(
new DiagnosticResult("TSG001", DiagnosticSeverity.Error)
.WithSpan(4, 14, 4, 20)
.WithArguments("Person"));
// Generator should bail out on invalid input -- no generated sources expected
await test.RunAsync();
}
}
The .WithSpan(line, startColumn, endLine, endColumn) call pins the diagnostic to a specific source location. You can omit this if your generator only reports diagnostics without precise positions -- just call new DiagnosticResult("TSG001", DiagnosticSeverity.Error) and the test will match on the diagnostic ID and severity alone.
Note: DiagnosticResult.CompilerError is an alias for error severity and technically works for any diagnostic ID, but new DiagnosticResult(id, severity) is clearer for custom generator diagnostics because it makes the intent explicit rather than implying a Roslyn CS-prefixed compiler error.
Diagnostic testing is especially important for generators that enforce architectural constraints. If you're building a generator that validates strategy dispatch patterns or enforces singleton semantics at compile time, your diagnostic tests become the specification for what the generator considers valid input.
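On the generator side, a diagnostic like TSG001 is typically backed by a DiagnosticDescriptor and reported through the source production context. A sketch -- the title, message, and category strings here are assumptions for illustration:

```csharp
using Microsoft.CodeAnalysis;

namespace MyGenerators;

public static class GeneratorDiagnostics
{
    // Hypothetical descriptor matching the TSG001 test above. The '{0}'
    // placeholder lines up with the .WithArguments("Person") call in the test.
    public static readonly DiagnosticDescriptor ClassMustBePartial = new(
        id: "TSG001",
        title: "Class must be partial",
        messageFormat: "Type '{0}' must be declared partial to use [GenerateToString]",
        category: "ToStringGenerator",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);
}

// Reported from inside RegisterSourceOutput, for example:
// spc.ReportDiagnostic(Diagnostic.Create(
//     GeneratorDiagnostics.ClassMustBePartial,
//     classDeclaration.Identifier.GetLocation(),
//     symbol.Name));
```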
Snapshot Testing with Verify
Assertion-based tests are explicit and reliable, but they come with a maintenance cost. Every time you update your generator's output format -- even trivially, like changing indentation or adding a license header comment -- you need to update every test that asserts on that output. For generators with large or complex outputs, this gets tedious fast.
Snapshot testing with the Verify library solves this. The workflow is simple: the first time you run a snapshot test, it writes the generator's output to a .verified.txt file alongside your test. That file becomes the approved baseline. On every subsequent run, Verify compares the current output to the approved file. If they match, the test passes. If they differ, the test fails and shows you exactly what changed -- and you decide whether to approve the change or fix the generator.
This approach makes testing C# source generators much faster to maintain during active development. You're not manually typing expected outputs. You're reviewing diffs and approving them when intentional, which is a much lower-friction workflow when iterating on generator output.
Setting Up Snapshot Tests
With Verify.Xunit and Verify.SourceGenerators installed, the snapshot test pattern is straightforward. The [UsesVerify] attribute on your test class wires up the Verify infrastructure for xUnit.
using Xunit;
namespace MyGenerators.Tests;
[UsesVerify]
public sealed class ToStringGeneratorSnapshotTests
{
[Fact]
public Task GenerateToString_WithMarkerAttribute_MatchesSnapshot()
{
var source = """
using MyGenerators;
[GenerateToString]
public partial class Person
{
public string Name { get; set; } = "";
public int Age { get; set; }
}
""";
return TestHelper.Verify<ToStringGenerator>(source);
}
}
The shared TestHelper class builds the compilation and runs the generator driver, then hands the result to Verify:
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
namespace MyGenerators.Tests;
public static class TestHelper
{
public static Task Verify<TGenerator>(string source)
where TGenerator : IIncrementalGenerator, new()
{
var syntaxTree = CSharpSyntaxTree.ParseText(source);
// Build a compilation with BCL references so type resolution works correctly
var compilation = CSharpCompilation.Create(
assemblyName: "GeneratorTests",
syntaxTrees: [syntaxTree],
references: Basic.Reference.Assemblies.Net90.References.All,
options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));
var generator = new TGenerator();
// Create and run the incremental generator driver
var driver = CSharpGeneratorDriver.Create(generator);
driver = (CSharpGeneratorDriver)driver.RunGenerators(compilation);
// Verify snapshots all generated sources and any diagnostics
return Verifier.Verify(driver);
}
}
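One wiring detail the snippets above assume: Verify.SourceGenerators needs a one-time initializer so Verify knows how to serialize a GeneratorDriver. The exact call has varied across versions, so check the README of the version you install -- a sketch:

```csharp
using System.Runtime.CompilerServices;

namespace MyGenerators.Tests;

public static class ModuleInit
{
    [ModuleInitializer]
    public static void Init() =>
        // Registers Verify's converters for GeneratorDriver results.
        // Older 1.x versions of Verify.SourceGenerators exposed this as
        // Enable() instead, and the hosting namespace has moved between
        // versions -- adjust to match your installed package.
        VerifySourceGenerators.Initialize();
}
```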
The first time you run this, the test "fails" and opens your configured diff tool showing the received output. Review it -- if it looks correct, approve the snapshot (either copy .received.txt to .verified.txt or use the Verify diff tool's accept button). Every subsequent run compares against that verified file and passes silently unless something changes.
This pattern works especially well for generators that implement structural automation, such as plugin discovery generation or decorator pattern boilerplate. Snapshot testing lets you lock in the shape of the generated output and catch any drift as your generator evolves -- without manually maintaining a wall of expected-output strings.
Testing Edge Cases
Thorough testing for C# source generators means going well beyond the happy path. Cover these scenarios to catch the bugs that only show up in real codebases.
Empty input: What happens when no types match your generator's filter? Your generator should produce no output and no diagnostics. This guards against NullReferenceException errors in your collection handling logic that surface as confusing build errors rather than clean diagnostics.
Multiple types in one compilation: If multiple types carry your attribute, the generator should produce one output file per type without interference. Duplicate hint names are a common source of bugs here -- if two types share a name across different namespaces, your hint names must still be unique (for example, by folding the namespace into the file name), or the second AddSource call with the same hint name will fail at generation time.
Invalid or partially valid input: Test what happens when a class has the attribute but is missing something your generator requires. Your tests should lock in whether the generator emits a diagnostic, skips the type gracefully, or bails out entirely -- whichever is correct for your contract.
Generic types: If your generator should handle generics, verify that <T> tokens and type constraints are preserved in the generated source. Forgetting to carry constraints through is a subtle bug that tests against concrete types will never catch.
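As a concrete example, the empty-input case can be locked in with a harness test that declares no generated sources and no diagnostics. This assumes the generator emits nothing at all when no types match -- adjust if yours always emits, say, a marker attribute file:

```csharp
[Fact]
public async Task GenerateToString_WithNoAnnotatedTypes_ProducesNothing()
{
    var test = new CSharpSourceGeneratorTest<ToStringGenerator, XUnitVerifier>
    {
        TestState =
        {
            Sources =
            {
                """
                public class Plain
                {
                    public string Name { get; set; } = "";
                }
                """
            }
            // No GeneratedSources and no ExpectedDiagnostics: the harness
            // fails the test if the generator produces either one.
        }
    };
    await test.RunAsync();
}
```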
Adding these edge case tests pays dividends quickly. They represent exactly the scenarios that only surface when your generator is used in real, messy codebases -- not the clean toy example you built it against.
CI/CD Integration
For assertion-based tests, no special CI configuration is needed beyond a standard dotnet test invocation. Snapshot tests have two additional requirements.
The first is the snapshot files themselves: commit all .verified.txt files to source control. The second is CI mode: the pipeline should run with Verify treating unverified output as a test failure rather than prompting for approval.
- name: Run tests
run: dotnet test --configuration Release --no-build
env:
CI: true # Verify auto-detects this and disables interactive approval
With CI=true, Verify fails immediately if the generator's output doesn't match the committed snapshot. GitHub Actions and CircleCI set CI=true automatically on hosted runners, so you often don't need to set it explicitly on those platforms. Azure Pipelines sets TF_BUILD=True instead -- add CI: true to your pipeline env: block explicitly when using Azure Pipelines.
One final consideration: pin your Roslyn and Basic.Reference.Assemblies package versions explicitly. A floating Version="*" can cause subtle test failures when a new Roslyn release changes whitespace handling in syntax trees -- a frustrating debugging session for what is effectively an upstream version bump.
Frequently Asked Questions
What is the difference between ISourceGenerator and IIncrementalGenerator for testing purposes?
ISourceGenerator is the original interface introduced with .NET 5. IIncrementalGenerator was introduced in Roslyn 4.0, which shipped with the .NET 6 SDK and Visual Studio 2022, and is the recommended approach for generators used with .NET 8, 9, and 10 toolchains. If you're writing a new generator, use IIncrementalGenerator -- it has better incremental compilation performance (avoiding redundant work on each keystroke in the IDE) and the testing tooling is better aligned with its design. The assertion-based CSharpSourceGeneratorTest<TGenerator, TVerifier> pattern works with either interface. The TestHelper.Verify<TGenerator> snapshot pattern as written requires IIncrementalGenerator; to adapt it for ISourceGenerator, change the type constraint and use the ISourceGenerator overload of CSharpGeneratorDriver.Create.
How do I add BCL types like List or Task to my test compilation?
Pass Basic.Reference.Assemblies.Net90.References.All (or the .Net80 or .Net100 variant) to CSharpCompilation.Create as the references parameter. This collection provides all standard BCL reference assemblies, giving your test compilation access to System.Collections.Generic.List<T>, System.Threading.Tasks.Task, System.Console, and everything else in the .NET runtime. Without this, the compilation will fail to resolve standard types and your generator won't run correctly against the test input.
Each variant is a separate NuGet package. If you're targeting .NET 10, add <PackageReference Include="Basic.Reference.Assemblies.Net100" Version="..." /> to your test project alongside (or in place of) the Net90 package, then reference Basic.Reference.Assemblies.Net100.References.All in your TestHelper.
Do I need to define the generator's marker attribute in the test source?
That depends on where the attribute lives. If your attribute ships as a pre-compiled assembly referenced by consumers, add a <PackageReference> or <ProjectReference> to that assembly in your test .csproj. If your generator emits the attribute itself as a separate source file (a common pattern for self-contained generators), add a TestState.GeneratedSources entry for the attribute file and treat it as part of the expected output. The simplest fallback for quick tests is to inline the attribute definition directly in TestState.Sources alongside your test input.
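That inline fallback can look like the following -- the attribute shape here is a hypothetical stand-in for whatever marker your generator actually looks for:

```csharp
// Added alongside the test input in TestState.Sources:
test.TestState.Sources.Add("""
    namespace MyGenerators
    {
        [System.AttributeUsage(System.AttributeTargets.Class)]
        public sealed class GenerateToStringAttribute : System.Attribute
        {
        }
    }
    """);
```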
How does Verify handle line-ending differences between Windows and Linux?
Verify normalizes all line endings to LF before comparing snapshots. This means .verified.txt files committed on a Windows developer machine will match output produced by a Linux CI runner without any special Git .gitattributes configuration for snapshot files. The normalization is applied both to the received output and to the verified file on disk, so the comparison is always against a consistent baseline.
Can I test multiple source generators interacting with each other in the same test?
Yes, though the CSharpSourceGeneratorTest<TGenerator, TVerifier> class is designed for a single generator at a time, and a single driver pass never feeds one generator's output into another -- every generator sees only the original compilation. To test a scenario where Generator A produces output that Generator B consumes, use the TestHelper pattern with the raw CSharpGeneratorDriver API and chain the runs: run Generator A with RunGeneratorsAndUpdateCompilation, then run Generator B against the updated compilation that now contains A's output. (For independent generators, you can still register both at once via CSharpGeneratorDriver.Create(generatorA, generatorB).) This lets you verify end-to-end scenarios where your generators cooperate.
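A sketch of running two generators back to back, with GeneratorA and GeneratorB as hypothetical stand-ins for your own incremental generators. Note that within a single driver pass, generators only see the original compilation, so chaining a consumed output requires two runs:

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

// Assumes 'compilation' was built as in TestHelper, and that GeneratorA and
// GeneratorB are hypothetical IIncrementalGenerator implementations.
var driverA = CSharpGeneratorDriver.Create(new GeneratorA());
driverA.RunGeneratorsAndUpdateCompilation(
    compilation, out var afterA, out var diagnosticsA);

// 'afterA' now includes Generator A's output as ordinary syntax trees,
// so Generator B can match against it like any hand-written source.
var driverB = CSharpGeneratorDriver.Create(new GeneratorB());
var ranB = driverB.RunGeneratorsAndUpdateCompilation(
    afterA, out var finalCompilation, out var diagnosticsB);

var resultB = ranB.GetRunResult(); // per-generator outputs for assertions
```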
How should I structure snapshot files in source control?
By default, Verify places .verified.txt files in the same directory as the test file that generated them. The file name includes the test class name and test method name, making them easy to associate with their test. Commit all .verified.txt files as part of your normal source control workflow. You can configure Verify to use a different snapshot directory via VerifierSettings.UseDirectory("snapshots") in a ModuleInitializer if you prefer to keep them centralized rather than co-located with test files.
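If you prefer per-test control over a global setting, a VerifySettings instance also exposes UseDirectory. A sketch adapted to the TestHelper shown earlier:

```csharp
// Inside TestHelper.Verify<TGenerator>, replacing the final return line:
var settings = new VerifySettings();
settings.UseDirectory("snapshots"); // resolved relative to the test source file
return Verifier.Verify(driver, settings);
```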
What is the fastest way to debug when a source generator test produces no output at all?
When your GeneratedSources assertion fails with "no generated sources were found," the most common causes are a filter mismatch in your IIncrementalGenerator.Initialize method or a compilation error in the test input that prevents your syntax provider from running. Add test.TestState.ExpectedDiagnostics assertions for any compilation errors in your input, then check whether your generator's SyntaxValueProvider.ForAttributeWithMetadataName (or equivalent) correctly matches the attribute applied in the test. Running the test with the debugger attached is reliable here -- breakpoints inside your generator's transform lambda will hit normally since the test runs in-process.
Conclusion
Testing C# source generators is not as mysterious as it might seem once you have the right tooling in place. The core insight is that you're testing compilation artifacts -- source code produced by your generator -- rather than runtime behavior. Shifting your mental model to "I'm testing a code transformation" makes the entire testing strategy click into place.
The Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing.XUnit package gives you a purpose-built harness: it creates compilations, runs generator drivers, and compares outputs with normalized diff reporting. Add Verify snapshot tests for generators with larger outputs, and you have a suite that's both rigorous and maintainable -- reviewing diffs instead of maintaining walls of expected-output strings.
Start by setting up your test project with the right NuGet packages and BCL reference assemblies. Write assertion-based tests for your core generation logic and diagnostic reporting. Layer in snapshot tests for complex outputs, and cover the edge cases -- empty input, multiple types, invalid input, generics -- because those are the scenarios that surface bugs in real consumer projects. With these practices in place, testing C# source generators becomes a routine part of development, and your consumers get a generator they can depend on.

