Every time you save a file in Visual Studio, your source generators run. Every CI pipeline build runs them again. If your generators are slow, that cost multiplies across every developer on your team, every branch push, and every incremental build throughout the day. C# source generator performance is one of those topics that developers rarely think about until the build timeline suddenly shows a 10-second penalty for a generator that should take milliseconds.
This article covers how to actually measure that penalty, identify what is causing it, and apply targeted optimizations using the incremental source generator model introduced with Roslyn 4.0 (.NET 6) and extended in Roslyn 4.3 (Visual Studio 17.3) with ForAttributeWithMetadataName. Whether you are writing a DI container source generator, an auto-mapper, or anything in between, the patterns here apply directly.
Why Source Generator Performance Matters
Build time and developer experience are not abstract concerns. Every second added to an incremental build slows the inner development loop -- the cycle of write, compile, test that developers repeat hundreds of times per day. Source generators that perform poorly create friction that compounds across teams.
There are two distinct surfaces where generator performance shows up: the build and the IDE. The build surface is easier to measure -- you run dotnet build, look at the output, and either see timing data or profile it with tooling. The IDE surface is more insidious. A slow source generator means IntelliSense lags, completion suggestions stall, and the red-squiggle feedback loop that developers depend on becomes sluggish. In a codebase where a generator produces navigation properties, service registrations, or serialization code, IDE responsiveness is directly tied to generator throughput.
The good news is that the Roslyn team designed the incremental source generator API specifically to address these problems. The incremental source generator model is not just a stylistic improvement over the original ISourceGenerator interface -- it is a fundamentally different execution model that trades raw execution speed for smart caching. Understanding that model is the key to writing source generators that scale.
The Two Performance Axes: Build Time and Runtime
Before profiling anything, it helps to be precise about what you are actually measuring.
Build-time performance refers to how long your generator runs as part of the compilation pipeline. This is the axis where optimization effort pays off. Every millisecond spent here is real time taken from developers' lives.
Runtime performance is, for practical purposes, not a concern for source generators at all. Generated code is compiled code -- it runs at exactly the same speed as hand-written code. There is no interpreter, no reflection, no dynamic dispatch added by the generation process itself. If your generated code has runtime performance issues, that is a code quality problem in the templates, not in the generator infrastructure. This distinction matters because it tells you where to focus: all of your optimization effort belongs in the compilation phase.
The incremental model also introduces a third consideration that sits between these two axes: design-time (IDE) performance. This is the cost of running your source generator on every keystroke change in the editor. It overlaps with build-time performance in terms of the techniques you use, but the stakes are different -- a user will tolerate a 500ms build overhead that they rarely notice, but they will immediately feel a 200ms IntelliSense lag on every keystroke.
| Axis | When it runs | How users feel it | Primary optimization lever |
|---|---|---|---|
| Build-time | Every `dotnet build` or CI run | Seconds added to pipeline and local builds | Caching, equatable models |
| Design-time (IDE) | Every file save or edit trigger | IntelliSense lag, slow completions | `RegisterImplementationSourceOutput`, fast predicates |
| Runtime | Deployed application execution | Not applicable -- generated code is compiled code | Generated code quality, not generator infrastructure |
How the Incremental Source Generator Model Solves Performance
The original ISourceGenerator interface had a fundamental problem: it ran the entire generator pipeline on every compilation, even when nothing changed. The incremental source generator model -- IIncrementalGenerator -- solves this with a caching pipeline backed by value equality.
The pipeline is composed of provider nodes. Each node transforms an input (syntax trees, compilation symbols, additional files, build properties) into an output. Roslyn caches the output of each node and only re-runs downstream computation when the cached value changes. The "change" detection relies entirely on value equality: if the output of a transform step produces an object that compares equal to the previous output, Roslyn skips everything downstream of that node.
This is why IEquatable implementation on your data models is not optional -- it is the mechanism by which the cache works. If your model does not implement equality correctly, Roslyn treats every run as a cache miss and re-generates everything every time.
The pipeline stages are:
- Predicate -- a cheap boolean filter run against the raw syntax tree. This executes on every keystroke.
- Transform -- extracts structured data from matched nodes. Runs when the predicate passes.
- Source output -- generates the final string output. Runs only when the transformed data changes.
The key insight is that work moves as far left in the pipeline as possible. Cheap checks happen in the predicate, expensive semantic work happens in the transform, and code generation -- the most expensive work -- only happens at the end when the transform output actually differs from its cached predecessor.
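This left-shifting cache behavior can be sketched in plain C#. The `CachedNode` type below is an illustrative stand-in for what Roslyn does internally with each provider node -- it is not a Roslyn API -- but it shows why value equality, not raw execution speed, decides whether downstream work runs:

```csharp
using System;

// Illustrative stand-in for one incremental pipeline node (NOT a Roslyn API):
// the transform always runs, but downstream work is skipped whenever the new
// output compares equal to the cached one.
int downstreamRuns = 0;
void Downstream(int value) => downstreamRuns++;

var node = new CachedNode<string, int>(static source => source.Length);

if (node.Update("class Foo {}", out var v1)) Downstream(v1); // cache miss: downstream runs
if (node.Update("class Bar {}", out var v2)) Downstream(v2); // input changed, output equal: skipped
if (node.Update("class FooBar {}", out var v3)) Downstream(v3); // output changed: downstream runs

Console.WriteLine($"Downstream ran {downstreamRuns} times"); // 2, not 3

sealed class CachedNode<TIn, TOut> where TOut : IEquatable<TOut>
{
    private readonly Func<TIn, TOut> _transform;
    private TOut _cached = default!;
    private bool _hasValue;

    public CachedNode(Func<TIn, TOut> transform) => _transform = transform;

    // Returns true when the output changed, i.e. downstream must re-run.
    public bool Update(TIn input, out TOut output)
    {
        output = _transform(input);
        bool changed = !_hasValue || !_cached.Equals(output);
        (_cached, _hasValue) = (output, true);
        return changed;
    }
}
```

Note that the transform itself still executes on the second update; what the equality check saves is everything downstream of it -- which in a real generator is the expensive code-generation stage.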
Measuring Source Generator Build Performance
You cannot optimize what you cannot measure. There are three main tools for measuring C# source generator performance during builds.
MSBuild Binary Log
The `-bl` switch tells dotnet build to write a full binary log of the build:

```shell
dotnet build -bl:build.binlog
# Open with the MSBuild Structured Log Viewer GUI:
# https://github.com/KirillOsenkov/MSBuildStructuredLog/releases
```
The MSBuild Structured Log Viewer renders the binary log as a searchable tree. You can search for your generator's assembly name and see exactly how much wall-clock time the Csc task -- which includes generator execution -- takes. You can also compare before-and-after binlogs to verify that an optimization actually reduced time.
The ReportAnalyzer Property
The most targeted option for generator-specific timing is the ReportAnalyzer MSBuild property. Add it to your consuming project's .csproj:
```xml
<!-- In your .csproj to see per-generator timing -->
<PropertyGroup>
  <ReportAnalyzer>true</ReportAnalyzer>
</PropertyGroup>
```
With this property set, the compiler outputs a timing summary for every analyzer and generator that ran during compilation. You will see output lines like:
```text
Total analyzer execution time: 1.23 seconds.
   MyCompany.MyGenerator.MyIncrementalGenerator: 0.87s
```
This immediately shows you which generator is the bottleneck without requiring you to open an external tool. It is the fastest way to get a baseline measurement.
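Because the timing summary is plain text, a small script can turn it into a CI gate. The sketch below parses output shaped like the sample above with a regex; the exact console format varies between SDK versions, so the pattern, the sample text, and the budget threshold are assumptions to adapt rather than a stable contract:

```csharp
using System;
using System.Globalization;
using System.Linq;
using System.Text.RegularExpressions;

// Sketch: extract per-analyzer timings from captured `dotnet build` output when
// ReportAnalyzer is enabled. The sample and regex below are illustrative --
// adapt them to the format your SDK version actually prints.
string buildOutput = """
Total analyzer execution time: 1.23 seconds.
   MyCompany.MyGenerator.MyIncrementalGenerator: 0.87s
   SomeOther.Analyzer: 0.05s
""";

var timings = Regex
    .Matches(buildOutput, @"^\s*(?<name>[\w.]+):\s*(?<secs>[\d.]+)\s*s\s*$", RegexOptions.Multiline)
    .Select(m => (Name: m.Groups["name"].Value,
                  Seconds: double.Parse(m.Groups["secs"].Value, CultureInfo.InvariantCulture)))
    .OrderByDescending(t => t.Seconds)
    .ToList();

foreach (var (name, seconds) in timings)
    Console.WriteLine($"{seconds,6:F2}s  {name}");

// A CI gate can then fail the build when any generator exceeds a budget.
// The 1.0s budget here is an arbitrary example value.
const double budgetSeconds = 1.0;
bool overBudget = timings.Any(t => t.Seconds > budgetSeconds);
Console.WriteLine(overBudget ? "FAIL: generator over budget" : "OK: all generators within budget");
```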
dotnet-trace for Deep Profiling
When the binlog tells you a generator is slow but you need to know which part of the generator is slow, dotnet-trace lets you capture a CPU profile of the build process:
```shell
dotnet trace collect --profile cpu-sampling -- dotnet build
```
The resulting .nettrace file can be opened in PerfView or SpeedScope to show a flame graph of CPU time spent inside your generator code. The --profile cpu-sampling flag collects actual CPU call-stack samples, which is what produces a usable flame graph. You can optionally add --providers "Microsoft-Roslyn-Compiler" alongside it to also capture Roslyn-specific structured lifecycle events in the same trace.
The Cache Hit Rate: The Most Important Metric
Once you have timing data, the next question is: how often is Roslyn actually skipping work because of a cache hit? A source generator that runs fast on every invocation is worse than a source generator that occasionally runs slowly but caches aggressively.
The cache works by comparing the output of each pipeline stage to its previously cached output using value equality. This means your data model types -- the records or classes that flow through the pipeline -- must implement IEquatable<T> correctly. If they do not, equality falls back to reference equality, which always returns false, which means every pipeline run is treated as a change and all downstream stages re-execute.
```csharp
// The model that flows through the pipeline: a sealed record with value equality.
internal sealed record GeneratorModel(
    string Namespace,
    string ClassName,
    EquatableArray<PropertyModel> Properties);

// EquatableArray<T>: a wrapper that implements element-wise equality for ImmutableArray<T>.
// Primary constructor syntax requires C# 12+ (set LangVersion accordingly on older TFMs).
internal readonly struct EquatableArray<T>(ImmutableArray<T> array) : IEquatable<EquatableArray<T>>
    where T : IEquatable<T>
{
    private readonly ImmutableArray<T> _array = array;

    public bool Equals(EquatableArray<T> other) =>
        _array.IsDefault
            ? other._array.IsDefault
            : !other._array.IsDefault && _array.SequenceEqual(other._array);

    public override bool Equals(object? obj) => obj is EquatableArray<T> other && Equals(other);

    public override int GetHashCode() =>
        _array.IsDefault ? 0 : _array.Aggregate(0, (hash, item) => HashCode.Combine(hash, item.GetHashCode()));
}
```
ImmutableArray<T> is a common trap here. It does not implement IEquatable<ImmutableArray<T>> in a way that compares elements -- it uses reference equality on the underlying array. Any model that contains an ImmutableArray<T> directly will always fail equality checks unless wrapped. The EquatableArray<T> pattern above is the standard solution used in the Roslyn team's own generators.
The PropertyModel type in the example also needs to implement IEquatable<PropertyModel>. Records do this automatically for simple value types, but only if all their members also implement proper equality.
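The trap is easy to demonstrate in isolation. This runnable sketch re-declares a minimal `EquatableArray<T>` (same shape as above) and shows a bare `ImmutableArray<T>` failing equality while the wrapped version, inside a record, compares equal:

```csharp
using System;
using System.Collections.Immutable;
using System.Linq;

// Two ImmutableArrays with identical contents are NOT equal: their Equals
// compares the underlying array reference, so a record containing one breaks caching.
var a = ImmutableArray.Create("Id", "Name");
var b = ImmutableArray.Create("Id", "Name");
Console.WriteLine(a.Equals(b)); // False -- different backing arrays

// Wrapped in an element-wise-equal struct, the same contents compare equal,
// and a record containing the wrapper gets correct value equality for free.
var m1 = new Model("Customer", new EquatableArray<string>(a));
var m2 = new Model("Customer", new EquatableArray<string>(b));
Console.WriteLine(m1.Equals(m2)); // True -- a cache hit in a real pipeline

internal sealed record Model(string ClassName, EquatableArray<string> Properties);

// Minimal version of the EquatableArray<T> pattern shown above.
internal readonly struct EquatableArray<T>(ImmutableArray<T> array) : IEquatable<EquatableArray<T>>
    where T : IEquatable<T>
{
    private readonly ImmutableArray<T> _array = array;

    public bool Equals(EquatableArray<T> other) =>
        _array.IsDefault ? other._array.IsDefault
                         : !other._array.IsDefault && _array.SequenceEqual(other._array);

    public override bool Equals(object? obj) => obj is EquatableArray<T> other && Equals(other);

    public override int GetHashCode() =>
        _array.IsDefault ? 0 : _array.Aggregate(0, (h, item) => HashCode.Combine(h, item.GetHashCode()));
}
```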
Common Source Generator Performance Anti-Patterns
When incremental source generators underperform, the root cause almost always falls into one of a handful of predictable categories. The patterns below are the ones most commonly seen in performance reviews, and each one has a clear, mechanical fix. Recognizing them early -- before they compound across a large codebase -- saves significant profiling time later.
| Anti-pattern | Root cause | Symptom |
|---|---|---|
| Capturing the Compilation object | Compilation changes on every edit | Cache miss on every keystroke |
| Expensive predicate logic | Predicate runs against every syntax node | Constant CPU overhead during typing |
| Missing `IEquatable<T>` on models | Equality falls back to reference equality | All downstream stages re-execute every run |
| Using `CreateSyntaxProvider` for attribute targets | No built-in pre-filtering | Unnecessary semantic work on every change |
Capturing the Compilation Object
The single most damaging thing you can do in an incremental source generator is capture the Compilation object in a transform or pipeline step. The Compilation object changes on every single edit because it contains the full semantic model of the entire project. Any pipeline node that depends on Compilation directly will re-run on every keystroke.
Instead of passing Compilation downstream, extract exactly the data you need -- a string, a type name, a flag -- and pass that downstream. The semantic model should be touched as early as possible and then discarded.
Expensive Logic in the Syntax Predicate
The predicate runs on every syntax node, on every edit. It must be as cheap as a type check and nothing more. Anything expensive -- attribute lookup, symbol resolution, string parsing -- belongs in the transform, not the predicate.
Missing IEquatable on Models
As described above, any model type without correct value equality causes permanent cache misses for all downstream stages.
CreateSyntaxProvider When ForAttributeWithMetadataName Is Available
One of the most impactful switches you can make in any attribute-driven generator is replacing CreateSyntaxProvider with ForAttributeWithMetadataName. The latter method was added to the Roslyn API specifically because filtering by attribute in the predicate is both the most common pattern and one of the most expensive. The internal implementation pre-filters syntax nodes by attribute metadata name before your predicate even executes, which eliminates the bulk of unnecessary invocations for most generators.
Note: `ForAttributeWithMetadataName` requires Roslyn 4.3.1+, which ships with Visual Studio 17.3+ or .NET SDK 6.0.400+.
```csharp
// ❌ SLOW: a fully general syntax provider with work in the predicate
var slow = context.SyntaxProvider.CreateSyntaxProvider(
    predicate: static (node, ct) =>
    {
        // This runs on EVERY syntax change -- must be extremely cheap
        if (node is not ClassDeclarationSyntax cls) return false;
        // ❌ Never access the SemanticModel in the predicate
        return true; // Keep the predicate fast
    },
    transform: ...);

// ✅ FAST: cheap syntax check in predicate, semantic check in transform
var fast = context.SyntaxProvider.ForAttributeWithMetadataName(
    "MyNamespace.GenerateAttribute",
    predicate: static (node, _) => node is ClassDeclarationSyntax, // ✅ Just a type check
    transform: static (ctx, ct) =>
    {
        // ✅ Semantic work belongs here; it runs far less often
        var symbol = (INamedTypeSymbol)ctx.TargetSymbol;
        return ExtractModel(symbol, ct);
    });
```
If your source generator targets types marked with a specific attribute -- which covers the vast majority of source generators, from factory pattern generation to builder pattern generation -- ForAttributeWithMetadataName should be your default choice. Its one constraint is that each call targets a single attribute by full metadata name, so register the provider once per attribute if you support several.
Source Generator Optimization Techniques
The anti-patterns above describe what not to do. The techniques below describe what to do instead -- each one maps directly to a specific part of the incremental source generator pipeline and compounds with the others when applied together. Applying even two or three of these consistently will move most source generators from "noticeable overhead" to "effectively invisible" in build timelines.
Fast Predicate Filtering
Even with ForAttributeWithMetadataName, predicates benefit from the "cheap first" rule. If your generator only targets partial classes, the predicate should check for the partial modifier before checking anything else:
```csharp
predicate: static (node, _) => node is ClassDeclarationSyntax { Modifiers: var mods }
    && mods.Any(SyntaxKind.PartialKeyword),
```
This eliminates non-partial classes in the predicate itself, before the transform allocates any objects.
Extracting Only the Data You Need
The transform step should extract a minimal, equatable snapshot of the symbol -- not a reference to the symbol itself. Symbols are tied to the compilation and cannot be cached safely. A transform that returns a symbol reference will cause the same permanent-cache-miss problem as capturing Compilation.
The pattern for decorator generation or singleton detection is the same: extract strings, flags, and simple value types from the symbol, wrap collections in an equatable wrapper, and return a sealed record.
RegisterImplementationSourceOutput for Non-IDE Code Paths
This is an underused optimization with significant impact on IDE responsiveness.
Note: `RegisterImplementationSourceOutput` is part of the incremental generator API introduced with Roslyn 4.0 (.NET 6 SDK).
```csharp
// RegisterSourceOutput runs during both IDE analysis AND build.
// Use it for code the IDE needs to know about (public APIs, types).
context.RegisterSourceOutput(models, static (spc, model) =>
{
    spc.AddSource($"{model.ClassName}.g.cs", GeneratePublicCode(model));
});

// RegisterImplementationSourceOutput runs ONLY during an actual build (not IDE/design-time).
// Use it for implementation details the IDE does not need to analyze.
context.RegisterImplementationSourceOutput(models, static (spc, model) =>
{
    spc.AddSource($"{model.ClassName}.impl.g.cs", GenerateImplementationCode(model));
});
```
RegisterImplementationSourceOutput tells Roslyn that this output is an implementation detail. The IDE does not need it for IntelliSense, completion, or navigation. The generator skips this output entirely during design-time analysis, which directly reduces the latency you feel while typing. For generators used in plugin architecture systems or cross-cutting decorator pipelines, the implementation bodies are typically large and expensive to generate -- exactly where this separation pays off.
Use static Lambdas Throughout
Mark all predicate and transform lambdas as static. This prevents accidental capture of outer variables, makes the no-capture intent explicit, and eliminates a class of bugs where captured state inadvertently creates hidden dependencies on the compilation.
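The caching side effect is observable in plain C#. Current Roslyn compilers cache non-capturing lambda delegates in a hidden static field, while a capturing lambda allocates a fresh closure on every call. This is an implementation detail rather than a language guarantee, so treat the sketch below as illustrative:

```csharp
using System;

// `static` on a lambda is a compile-time guarantee that the body cannot touch
// locals, parameters, or `this`. A pleasant side effect is that the compiler
// can cache the delegate instead of allocating one per call.
Func<int, int> MakeStatic() => static x => x + 1;
Func<int, int> MakeCapturing(int offset) => x => x + offset;

bool staticCached = ReferenceEquals(MakeStatic(), MakeStatic());
bool capturingCached = ReferenceEquals(MakeCapturing(1), MakeCapturing(1));

Console.WriteLine($"static lambda cached:    {staticCached}");    // True on current Roslyn
Console.WriteLine($"capturing lambda cached: {capturingCached}"); // False: new closure per call
```

In a generator pipeline, the same rule keeps predicates and transforms both allocation-free and, more importantly, capture-free.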
Source Generator IDE Responsiveness
The IDE runs your source generator on every compilation trigger, which in practice means on every file save and sometimes on every edit. The incremental source generator model caches aggressively, but there is still a minimum cost per run: the predicate must execute against every syntax node that could potentially match.
To minimize IDE impact:
- Keep predicates to a single `is` pattern match and one or two modifier checks.
- Use `ForAttributeWithMetadataName` to pre-filter by attribute name before your code runs.
- Use `RegisterImplementationSourceOutput` for any generated code that is not needed for symbol resolution.
- Avoid calling `GetSemanticModel()` anywhere in the pipeline except inside transform steps.
One underappreciated factor is allocation pressure. Every object allocated during a generator run that is not cached adds GC pressure during design-time analysis. Prefer static methods, value types where appropriate, and pre-computed string templates over interpolated strings for hot code paths.
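As a sketch of the "pre-computed template" idea: hoist the constant fragments of the generated file into fields computed once per process, and pay only for the varying parts on each run. All names here are illustrative:

```csharp
using System;
using System.Text;

// Hoist constant fragments of the generated file so each run only pays for the
// varying parts. The template shape here is an illustrative example.
Console.WriteLine(Templates.Render("Customer", "    public int Id { get; set; }\n"));

static class Templates
{
    // Constant pieces: materialized once, reused on every generator run.
    public const string Header = "// <auto-generated/>\n#nullable enable\n";
    private const string ClassOpen = "partial class ";
    private const string BodyOpen = "\n{\n";
    private const string BodyClose = "}\n";

    public static string Render(string className, string members)
    {
        // Pre-sizing the builder keeps the hot path to a single allocation pass.
        var sb = new StringBuilder(Header.Length + ClassOpen.Length + className.Length
                                   + BodyOpen.Length + members.Length + BodyClose.Length);
        return sb.Append(Header).Append(ClassOpen).Append(className)
                 .Append(BodyOpen).Append(members).Append(BodyClose)
                 .ToString();
    }
}
```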
Benchmarking Your Source Generator Changes
Micro-optimizations in source generator code are hard to evaluate without a repeatable benchmark. The recommended approach is to create a small but representative consuming project -- one that has enough syntax nodes to exercise the predicate at scale -- and measure it with both ReportAnalyzer and a binlog before and after each change.
A useful benchmark project contains:
- 50 to 200 classes that do NOT match your generator's criteria (to stress-test predicate rejection)
- 5 to 20 classes that DO match (to stress-test the transform and output paths)
- A `Directory.Build.props` that sets `<ReportAnalyzer>true</ReportAnalyzer>`
Run dotnet build three times and average the generator timing. The first run is always slow due to JIT warmup. The second and third runs with no source changes should show near-zero time for the generator if caching is working correctly -- this is the most important number to track.
For CI integration, compare binlogs between a baseline branch and a feature branch. The MSBuild Structured Log Viewer can be automated via MSBuild.StructuredLogger as a .NET library, making it possible to write a test that asserts your source generator's measured time stays below a threshold.
FAQ
The questions below cover the decisions and edge cases that come up most often when developers start measuring and optimizing incremental source generators. Each answer connects back to the core principle: reduce unnecessary work at every pipeline stage and keep the Roslyn cache hit rate as high as possible.
How do I know if my incremental source generator is actually using the cache?
Add <ReportAnalyzer>true</ReportAnalyzer> to your consuming project, build once to warm up, make a change to a file your generator does not care about, and build again. If the second build shows near-zero time for your source generator, the cache is working. If it shows the same time as the first build, you have a cache miss and likely a missing or broken IEquatable<T> implementation on one of your model types.
What is the difference between CreateSyntaxProvider and ForAttributeWithMetadataName?
ForAttributeWithMetadataName requires Roslyn 4.3.1+ (Visual Studio 17.3+ / .NET SDK 6.0.400+). CreateSyntaxProvider gives you a fully general callback that runs against every syntax node. ForAttributeWithMetadataName is a specialized method that uses internal Roslyn optimizations to pre-filter by attribute metadata name before calling your predicate. For attribute-driven generators in real-world codebases, ForAttributeWithMetadataName is almost always the faster option. Use CreateSyntaxProvider when your match criteria are not attribute-driven.
Can I profile a source generator running inside Visual Studio?
Yes, with effort. The most practical approach is to set MSBUILDDISABLENODEREUSE=1 and launch a separate dotnet build process you can attach a profiler to. For IDE-specific investigation, Roslyn 4.4+ exposes GeneratorDriverOptions { TrackIncrementalGeneratorSteps = true } in test harnesses -- it lets you assert which pipeline steps were cached (reason Cached) vs. re-executed (reason Modified or New) on each driver run. This is the right tool for verifying your caching model in a unit test, though it does not report wall-clock time. For most developers, the ReportAnalyzer property and a binlog provide enough signal without requiring a full profiler setup.
Should I use records for my source generator data models?
Yes. C# records automatically implement IEquatable<T> based on all their declared members. This makes them the best default choice for source generator data models because you get correct equality for free for simple types. The caveat is that any member that is itself an ImmutableArray<T>, a collection, or a type without value equality will break the record's equality. Use the EquatableArray<T> wrapper pattern for any collection members.
What is RegisterImplementationSourceOutput and when should I use it?
RegisterImplementationSourceOutput registers a source output that only runs during actual builds, not during IDE design-time analysis. Use it for generated code that is implementation-only -- method bodies, private helpers, internal infrastructure -- that the IDE does not need to resolve symbols or provide IntelliSense. The public-facing generated code (types, public members, interface implementations) should still use RegisterSourceOutput so the IDE can see and index it.
How do I test that my source generator is fast enough?
Build a benchmark consuming project with a representative mix of matching and non-matching types. Add <ReportAnalyzer>true</ReportAnalyzer> and measure baseline timing. Then set a threshold -- for example, no generator should add more than 100ms to an incremental build when nothing it cares about has changed -- and automate that check in CI using MSBuild binary log analysis.
Does source generator performance matter for small projects?
It matters less in absolute terms but still matters relative to expectations. A generator that adds 500ms to a cold build is more forgivable in a large project than in a project with 20 files, where that overhead is the dominant build cost. More importantly, habits established on small projects carry to large ones. Writing source generators with correct IEquatable<T> models and fast predicates from the start means you never have to retrofit performance later.
Conclusion
C# source generator performance comes down to one principle: teach Roslyn what has changed and what has not, and let the cache handle the rest. Every technique in this article serves that principle -- fast predicates minimize unnecessary work, equatable data models enable cache hits, and RegisterImplementationSourceOutput narrows the set of outputs the IDE needs to track.
The tooling to measure this exists today and is straightforward to use. A binlog and the ReportAnalyzer property give you enough data to identify problems without a specialized profiler. Once you know where time is going, the fix is almost always one of the patterns above.
Write source generators with the same performance discipline you apply to runtime code. Your team's build timeline and their IntelliSense response time are worth it.

