Build a Multi-Agent Analysis System with GitHub Copilot SDK in C#

When you build a multi-agent system with GitHub Copilot SDK in C#, the core insight is that one CopilotClient can power multiple independent CopilotSession instances -- each with its own specialist system prompt and zero memory of the others. In this article, I'll walk through a console app that runs three specialist agents (Code Review, Documentation, Testing) sequentially against a C# source file, then assembles their outputs into a unified markdown report.

If you're new to the SDK, the GitHub Copilot SDK for .NET: Complete Developer Guide covers the foundational patterns -- CopilotClient, CopilotSession, streaming, and configuration. This article assumes you've got the basics and are ready to compose multiple agents into a pipeline.

What We're Building: a Multi-Agent Analysis System with GitHub Copilot SDK in C#

The goal is a console app that takes a C# source file path as input, runs it through three specialist agents in sequence, and saves the combined output next to the source file as a .analysis.md report. Here's what the pipeline does:

  • Reads a C# source file from disk
  • Runs CodeReviewAgent -- finds bugs, SOLID violations, and performance issues
  • Runs DocumentationAgent -- generates XML doc comments and usage examples
  • Runs TestingAgent -- writes xUnit v3 test cases with AAA structure
  • AgentPipeline collects all three outputs and assembles them into one markdown report

The folder structure is clean and purpose-driven:

ai-multi-agent/
  ai-multi-agent.csproj
  appsettings.json
  Program.cs
  Configuration/
    MultiAgentConfig.cs
  Agents/
    AgentBase.cs               (abstract: protected RunAsync)
    AgentPipeline.cs           (sequential orchestration)
    CodeReviewAgent.cs
    DocumentationAgent.cs
    TestingAgent.cs

Each agent lives in its own file and does one thing. AgentBase handles all SDK interaction. AgentPipeline handles orchestration. The agents stay focused purely on their domain.

Full source: devleader/copilot-sdk-examples

The AgentBase Pattern

AgentBase is the architectural center of this multi-agent system with GitHub Copilot SDK in C#. Every specialist agent inherits from it, which encapsulates the complete SDK ceremony: session creation, event handling, streaming accumulation, the TaskCompletionSource sync mechanism, and disposal. Derived agents provide only their system prompt and user message.

Here's the full AgentBase.cs:

using GitHub.Copilot.SDK;

namespace AiMultiAgent.Agents;

public abstract class AgentBase
{
    protected readonly CopilotClient Client;

    protected AgentBase(CopilotClient client)
    {
        Client = client;
    }

    protected async Task<string> RunAsync(
        string systemPrompt,
        string userMessage,
        string agentLabel,
        CancellationToken ct = default)
    {
        Console.ForegroundColor = ConsoleColor.Cyan;
        Console.WriteLine($"\n[{agentLabel}] Starting...");
        Console.ResetColor();

        var reply = new System.Text.StringBuilder();
        var tcs = new TaskCompletionSource(TaskCreationOptions.RunContinuationsAsynchronously);

        await using var session = await Client.CreateSessionAsync(new SessionConfig
        {
            Streaming = true,
            SystemMessage = new SystemMessageConfig
            {
                // Replace ensures each agent has only its own persona -- no inherited context
                Mode = SystemMessageMode.Replace,
                Content = systemPrompt
            }
        });

        session.On(evt =>
        {
            switch (evt)
            {
                case AssistantMessageDeltaEvent delta:
                    Console.Write(delta.Data.DeltaContent);
                    reply.Append(delta.Data.DeltaContent);
                    break;

                case AssistantMessageEvent msg:
                    Console.Write(msg.Data.Content);
                    reply.Append(msg.Data.Content);
                    break;

                case SessionIdleEvent:
                    Console.WriteLine();
                    tcs.TrySetResult();
                    break;

                case SessionErrorEvent err:
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine($"\n[{agentLabel} Error] {err.Data.ErrorType}: {err.Data.Message}");
                    Console.ResetColor();
                    tcs.TrySetException(new Exception(err.Data.Message));
                    break;
            }
        });

        // Register cancellation before sending so a token cancelled mid-request is still observed
        using var reg = ct.Register(() => tcs.TrySetCanceled(ct));
        await session.SendAsync(new MessageOptions { Prompt = userMessage });
        await tcs.Task;

        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine($"[{agentLabel}] Complete.");
        Console.ResetColor();

        return reply.ToString();
    }
}

All SDK complexity lives here -- every derived agent inherits complete event handling, streaming, and cleanup without writing a single line of SDK boilerplate itself.

await using var session means the session is disposed as soon as RunAsync returns, freeing resources immediately. You create the session, send the message, wait for SessionIdleEvent, then the using block cleans up. No resource leaks.

agentLabel keeps multi-agent terminal output readable. When three agents run in sequence, you see clear [Code Review Agent] Starting... and [Code Review Agent] Complete. boundaries in your console. Without it, streaming output from different agents would blur together.

The TaskCompletionSource + SessionIdleEvent pattern is the same one used throughout the advanced GitHub Copilot SDK patterns for multi-agent systems -- it's the standard synchronization mechanism for Copilot SDK sessions. SessionIdleEvent fires when the model has finished producing output. At that point, tcs.TrySetResult() unblocks the await tcs.Task and RunAsync returns the accumulated reply string.

Why SystemMessageMode.Replace for Agent Isolation

Every specialist agent needs its own persona. The Code Review agent should think and write like a code reviewer -- not like a general Copilot assistant. The Documentation agent should focus on XML doc comments, not on reviewing bugs. The Testing agent should write xUnit tests, not documentation.

SystemMessageMode.Replace completely overrides the default Copilot system prompt for that session. When a session is disposed and a new one is created, the new session has no knowledge of the previous session's system prompt. The Code Review agent's "you are an expert code reviewer" persona does not bleed into the Documentation agent's session.

The alternative is Append, which would keep both the default Copilot behavior AND your custom instructions active simultaneously. This makes each agent less focused and produces less deterministic output. You'd get a code reviewer that still behaves like a general assistant, mixed with the reviewer persona -- which is the opposite of what a specialist agent should be.

With Replace, each CopilotSession gets a clean slate. The session knows exactly one identity: the one you gave it in systemPrompt. Combined with await using var session disposal between agents, you get full isolation -- no state, no context, no cross-contamination between agents.
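To make the contrast concrete, here are the two modes side by side, using the same SessionConfig shape as AgentBase. The Append variant is shown only for comparison; the pipeline in this article always uses Replace:

```csharp
// Replace: the session's only identity is the prompt you provide.
var isolated = await client.CreateSessionAsync(new SessionConfig
{
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Replace,
        Content = "You are an expert C# code reviewer."
    }
});

// Append (for comparison only): your instructions are layered on top of the
// default Copilot system prompt -- both personas are active at once.
var layered = await client.CreateSessionAsync(new SessionConfig
{
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Append,
        Content = "You are an expert C# code reviewer."
    }
});
```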

The Three Specialist Agents

Each agent is 15-20 lines of code because AgentBase handles everything else. The derived class only defines what the agent does -- the system prompt specifies the persona, the user message specifies the task.

CodeReviewAgent.cs:

using GitHub.Copilot.SDK;

namespace AiMultiAgent.Agents;

public sealed class CodeReviewAgent : AgentBase
{
    public CodeReviewAgent(CopilotClient client) : base(client) { }

    public Task<string> ReviewAsync(string fileName, string sourceCode, CancellationToken ct = default) =>
        RunAsync(
            systemPrompt: """
                You are an expert C# code reviewer with deep knowledge of .NET best practices.
                Review code for: correctness, performance, SOLID principles, naming conventions,
                error handling, async patterns, and security concerns.
                Be specific and actionable. Use Markdown with severity labels:
                - **Critical**: bugs or security issues that must be fixed
                - **Major**: significant design or performance concerns
                - **Minor**: style or minor improvements
                """,
            userMessage: $"""
                Review this C# file: `{fileName}`

                ```csharp
                {sourceCode}
                ```

                Provide a structured code review with specific observations.
                Group findings by severity (Critical / Major / Minor).
                """,
            agentLabel: "Code Review Agent",
            ct: ct);
}

DocumentationAgent.cs:

public sealed class DocumentationAgent : AgentBase
{
    public DocumentationAgent(CopilotClient client) : base(client) { }

    public Task<string> GenerateAsync(string fileName, string sourceCode, CancellationToken ct = default) =>
        RunAsync(
            systemPrompt: """
                You are a technical documentation specialist for C# and .NET.
                Generate clear, accurate XML documentation comments for public members.
                Focus on WHAT the code does -- not HOW it does it internally.
                Format output as Markdown containing ready-to-use XML doc comment snippets.
                """,
            userMessage: $"""
                Generate documentation for: `{fileName}`

                ```csharp
                {sourceCode}
                ```

                Provide:
                1. A high-level summary of what this file/class does
                2. XML <summary>, <param>, and <returns> comments for all public members
                3. A usage example showing the typical calling pattern
                """,
            agentLabel: "Documentation Agent",
            ct: ct);
}

TestingAgent.cs:

public sealed class TestingAgent : AgentBase
{
    public TestingAgent(CopilotClient client) : base(client) { }

    public Task<string> SuggestAsync(string fileName, string sourceCode, CancellationToken ct = default) =>
        RunAsync(
            systemPrompt: """
                You are an expert in .NET testing with xUnit v3 and Moq.
                Write complete, compilable xUnit test methods following the AAA pattern
                (Arrange-Act-Assert). Use the Given_When_Then naming convention.
                Cover: happy paths, boundary values, null inputs, and exception scenarios.
                """,
            userMessage: $"""
                Write unit tests for: `{fileName}`

                ```csharp
                {sourceCode}
                ```

                Produce complete xUnit test class(es) with:
                - All necessary using statements
                - Mock setup where dependencies exist
                - At least one test per public method
                - Edge cases and error condition tests
                """,
            agentLabel: "Testing Agent",
            ct: ct);
}

Notice that none of these agents import event types, create TaskCompletionSource instances, or touch session lifecycle. They delegate everything to AgentBase.RunAsync. Adding a fourth agent -- say, a SecurityAgent or ArchitectureAgent -- means writing 15 more lines with a new system prompt and method name. The plumbing is already done.
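As a sketch of that extension point, here is what a hypothetical SecurityAgent could look like. The class name, method name, and prompt text are illustrative, not part of the sample repository:

```csharp
using GitHub.Copilot.SDK;

namespace AiMultiAgent.Agents;

// Hypothetical fourth agent -- same shape as the three above.
public sealed class SecurityAgent : AgentBase
{
    public SecurityAgent(CopilotClient client) : base(client) { }

    public Task<string> AuditAsync(string fileName, string sourceCode, CancellationToken ct = default) =>
        RunAsync(
            systemPrompt: """
                You are a .NET application security specialist.
                Audit code for injection risks, secrets handling, unsafe deserialization,
                and missing input validation. Report findings in Markdown by severity.
                """,
            userMessage: $"Audit this C# file `{fileName}`:\n\n{sourceCode}",
            agentLabel: "Security Agent",
            ct: ct);
}
```

Wiring it in is one more sequential await in AgentPipeline.RunAsync plus a fourth section in BuildReport -- no changes to any existing agent.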

The AgentPipeline

AgentPipeline owns the orchestration for the multi-agent analysis system with GitHub Copilot SDK in C#. It creates agents, runs them in sequence, and assembles the report. Here's the full implementation:

using System.Text;
using GitHub.Copilot.SDK;

namespace AiMultiAgent.Agents;

public sealed class AgentPipeline
{
    private readonly CopilotClient _client;

    public AgentPipeline(CopilotClient client)
    {
        _client = client;
    }

    public async Task<string> RunAsync(
        string fileName,
        string sourceCode,
        CancellationToken ct = default)
    {
        // Each agent runs sequentially and independently with its own session
        var review = await new CodeReviewAgent(_client).ReviewAsync(fileName, sourceCode, ct);
        var docs = await new DocumentationAgent(_client).GenerateAsync(fileName, sourceCode, ct);
        var tests = await new TestingAgent(_client).SuggestAsync(fileName, sourceCode, ct);

        return BuildReport(fileName, review, docs, tests);
    }

    private static string BuildReport(
        string fileName,
        string codeReview,
        string documentation,
        string tests)
    {
        var sb = new StringBuilder();

        sb.AppendLine($"# Multi-Agent Analysis: `{fileName}`");
        sb.AppendLine();
        sb.AppendLine($"_Generated: {DateTimeOffset.UtcNow:yyyy-MM-dd HH:mm:ss} UTC_");
        sb.AppendLine();

        sb.AppendLine("---");
        sb.AppendLine();
        sb.AppendLine("## Code Review");
        sb.AppendLine();
        sb.AppendLine(codeReview);
        sb.AppendLine();

        sb.AppendLine("---");
        sb.AppendLine();
        sb.AppendLine("## Documentation");
        sb.AppendLine();
        sb.AppendLine(documentation);
        sb.AppendLine();

        sb.AppendLine("---");
        sb.AppendLine();
        sb.AppendLine("## Suggested Tests");
        sb.AppendLine();
        sb.AppendLine(tests);

        return sb.ToString();
    }
}

Three sequential await calls -- Code Review completes, then Documentation starts, then Testing. There's no interleaving, no race conditions, no shared state between agents.

BuildReport is pure string concatenation. It doesn't know anything about the SDK -- it just combines three strings with markdown headers. This separation means you can test BuildReport independently, swap out the output format, or add a fourth section without touching any agent logic.
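For example, if you promote BuildReport from private to internal (and expose it to the test project via InternalsVisibleTo -- an adjustment to the sample, not part of it), you can pin its structure down with a plain xUnit test, no SDK and no network:

```csharp
using System;
using Xunit;
using AiMultiAgent.Agents;

public class AgentPipelineReportTests
{
    [Fact]
    public void BuildReport_ContainsAllThreeSections_InOrder()
    {
        // Assumes BuildReport has been made internal and visible to this project.
        var report = AgentPipeline.BuildReport("Foo.cs", "review body", "docs body", "tests body");

        var reviewIndex = report.IndexOf("## Code Review", StringComparison.Ordinal);
        var docsIndex = report.IndexOf("## Documentation", StringComparison.Ordinal);
        var testsIndex = report.IndexOf("## Suggested Tests", StringComparison.Ordinal);

        Assert.True(reviewIndex >= 0, "missing Code Review section");
        Assert.True(docsIndex > reviewIndex, "Documentation should follow Code Review");
        Assert.True(testsIndex > docsIndex, "Suggested Tests should come last");
        Assert.Contains("# Multi-Agent Analysis: `Foo.cs`", report);
    }
}
```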

new CodeReviewAgent(_client) passes the shared CopilotClient -- agents share the client infrastructure, not session state. The client handles connection, authentication, and transport. Each agent creates its own CopilotSession independently through that client.

The pipeline is the only class that knows about all three agents. The agents themselves are fully decoupled -- CodeReviewAgent doesn't know DocumentationAgent exists.

Sequential vs Parallel: Why Sequential Wins Here

You could run all three agents with Task.WhenAll. On paper, that sounds like a 3x speedup. In practice, it creates complexity you don't need for this use case.

The GitHub Copilot SDK's concurrency behavior for multiple simultaneous sessions from one client isn't documented. Sequential execution is safer, simpler, and easier to debug. If DocumentationAgent fails, you still have the Code Review output already captured. You can log it, save the partial result, or retry just the failed agent -- none of which is straightforward when all three are running in parallel.

Sequential execution also provides natural rate limiting. The Copilot API has rate limits. Running three simultaneous API sessions can hit them. One at a time keeps things predictable and avoids retry complexity.

The latency tradeoff is real but acceptable for a batch analysis tool. You're analyzing a file and saving a report -- 30-60 seconds total is fine. If you genuinely need low-latency parallel execution, each agent would need its own CopilotClient instance and you'd need to verify the SDK's concurrency guarantees. That's a different architecture for a different use case.
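If you do go parallel, a sketch of that alternative architecture looks like the following. Note the one-client-per-agent assumption -- which is exactly the complexity sequential execution lets you avoid -- and that three streaming sessions writing to one console would interleave their output, so you would also want to buffer instead of streaming to the terminal:

```csharp
// Parallel variant (sketch): one CopilotClient per agent, since the SDK's
// concurrency guarantees for a shared client are not documented.
await using var reviewClient = new CopilotClient(clientOptions);
await using var docsClient = new CopilotClient(clientOptions);
await using var testsClient = new CopilotClient(clientOptions);
await Task.WhenAll(reviewClient.StartAsync(), docsClient.StartAsync(), testsClient.StartAsync());

var reviewTask = new CodeReviewAgent(reviewClient).ReviewAsync(fileName, sourceCode, ct);
var docsTask = new DocumentationAgent(docsClient).GenerateAsync(fileName, sourceCode, ct);
var testsTask = new TestingAgent(testsClient).SuggestAsync(fileName, sourceCode, ct);

// Any failure surfaces here; recovering partial results means
// inspecting each task's status individually.
await Task.WhenAll(reviewTask, docsTask, testsTask);
var review = await reviewTask;
var docs = await docsTask;
var tests = await testsTask;
// Then combine the three strings as BuildReport does in the sequential pipeline.
```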

This design tradeoff is also relevant when comparing orchestration frameworks. The GitHub Copilot SDK vs Semantic Kernel comparison digs into how the two frameworks approach multi-agent coordination and where each one fits.

The Entry Point

Program.cs stays thin. It handles I/O, configuration, and wires everything together:

var sourceCode = await File.ReadAllTextAsync(targetFile);
var fileName = Path.GetFileName(targetFile);

await using var client = new CopilotClient(clientOptions);
await client.StartAsync();

var pipeline = new AgentPipeline(client);
var report = await pipeline.RunAsync(fileName, sourceCode);

var reportPath = Path.ChangeExtension(targetFile, ".analysis.md");
await File.WriteAllTextAsync(reportPath, report);

Clean separation throughout: Program.cs handles I/O and client setup. AgentPipeline handles orchestration. AgentBase handles SDK interaction. Each derived agent handles only its domain expertise. No class is doing more than one job.
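The snippet above assumes targetFile and clientOptions are already in scope. A minimal way to wire up the file argument -- the usage message and exit code here are illustrative, and clientOptions still comes from appsettings.json via MultiAgentConfig as in the original Program.cs:

```csharp
// Illustrative argument handling for a top-level Program.cs.
if (args.Length != 1 || !File.Exists(args[0]))
{
    Console.Error.WriteLine("Usage: ai-multi-agent <path-to-csharp-file>");
    return 1;
}

var targetFile = args[0];
```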

Sample Output Structure

Here's what the saved report looks like after running the pipeline against an OrderProcessor.cs file (abbreviated):

# Multi-Agent Analysis: `OrderProcessor.cs`

_Generated: 2026-03-31 14:22:05 UTC_

---

## Code Review

### Critical
- **Missing null check on line 42**: `customer.Email` is accessed without null check...

### Major
- **ProcessAsync lacks timeout handling**: The method awaits external calls without...

---

## Documentation

`OrderProcessor` is a service that coordinates order placement and inventory updates.

```csharp
/// <summary>
/// Processes an order and returns the confirmation result.
/// </summary>
/// <param name="order">The order to process. Cannot be null.</param>
/// <returns>An <see cref="OrderResult"/> indicating success or failure.</returns>
public async Task<OrderResult> ProcessAsync(Order order)
```

---

## Suggested Tests

```csharp
public class OrderProcessorTests
{
    [Fact]
    public async Task ProcessAsync_ValidOrder_ReturnsSuccessResult()
    {
        // Arrange
        var sut = new OrderProcessor(Mock.Of<IInventoryService>());
        var order = new Order { CustomerId = 1, Items = [new OrderItem { ProductId = 42 }] };

        // Act
        var result = await sut.ProcessAsync(order);

        // Assert
        Assert.True(result.Success);
    }
}
```

Three agents, one file, one report. The output is immediately usable -- copy the XML comments into your source, create the test file from the test suggestions, and prioritize fixes from the code review severity labels.

Key Discoveries

Building a multi-agent analysis system with GitHub Copilot SDK in C# reveals several patterns that apply broadly to agent architectures:

  • One CopilotClient handles multiple sequential sessions cleanly -- you don't need a separate client instance per agent
  • AgentBase eliminates the repetitive SDK ceremony that would otherwise appear in every agent implementation
  • await using var session inside RunAsync guarantees session disposal immediately after each agent completes
  • SystemMessageMode.Replace is the right choice for specialist agents -- Append would leave default Copilot behaviors active, blurring the agent's focus
  • Sequential pipelines are simpler, more predictable, and naturally rate-limited compared to parallel alternatives

If you want to see how an alternative framework handles multi-agent coordination, Multi-Agent Orchestration with Semantic Kernel in C# shows the AgentGroupChat model -- where agents can communicate bidirectionally instead of running in a linear pipeline.

Frequently Asked Questions

Does each agent need its own CopilotClient?

No. One CopilotClient drives the entire pipeline. The client handles connection, authentication, and transport. Each agent creates its own CopilotSession independently through the shared client. Session state is fully isolated between agents -- the shared client provides infrastructure, not shared context.

Can I run agents in parallel instead of sequentially?

You can, but it requires careful design. The GitHub Copilot SDK's concurrency behavior for simultaneous sessions from one client isn't documented. For safe parallel execution, each agent should use its own CopilotClient instance. Sequential execution is the recommended default -- it's simpler, more predictable, and provides natural rate limiting.

What is SystemMessageMode.Replace and why use it for multi-agent systems?

SystemMessageMode.Replace completely overrides the default Copilot system prompt for a given session. This gives each agent a clean, focused persona with zero bleed-through from the default assistant behavior. With Append, the default Copilot behavior stays active alongside your custom instructions -- producing less focused, less deterministic output. For specialist agents in a multi-agent system with GitHub Copilot SDK in C#, Replace is the correct choice.

How do I add more specialist agents to the pipeline?

Create a new class that inherits from AgentBase, define your system prompt and user message in an appropriately named method, then add a sequential await call for it in AgentPipeline.RunAsync. The new agent gets full SDK support -- streaming, error handling, session lifecycle -- without writing any plumbing. Fifteen to twenty lines of code is typical for a new specialist agent.

Does the AgentBase pattern work with non-streaming sessions?

Yes, without modification. The AgentBase.RunAsync switch statement already handles both AssistantMessageDeltaEvent (streaming) and the full AssistantMessageEvent, and the TaskCompletionSource + SessionIdleEvent pattern is identical in both modes -- a non-streaming session simply delivers its reply through the AssistantMessageEvent branch instead of the delta branch.
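If you want to opt out of streaming explicitly, the only change is in the session config, shown here against the same SessionConfig shape used in AgentBase:

```csharp
// Non-streaming variant: the reply arrives as one AssistantMessageEvent
// instead of a series of AssistantMessageDeltaEvents.
await using var session = await Client.CreateSessionAsync(new SessionConfig
{
    Streaming = false,
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Replace,
        Content = systemPrompt
    }
});
```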

How large can the source file be before hitting context window limits?

Large files can hit the Copilot API's context window. If you're analyzing files over a few hundred lines, consider chunking the source into logical sections (methods, classes) and running agents per section. The pipeline architecture makes this straightforward -- each agent already receives sourceCode as a string you can slice before passing it in. You can also summarize the file first using a dedicated chunking agent before passing to the specialist agents.
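A simple starting point for chunking is to split on a line budget while keeping whole lines together. This naive splitter ignores syntactic boundaries, so a production version might split on class or method boundaries instead:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Naive chunker: splits source into pieces of at most maxLines lines each.
static IEnumerable<string> ChunkByLines(string source, int maxLines)
{
    var lines = source.Split('\n');
    for (var i = 0; i < lines.Length; i += maxLines)
    {
        yield return string.Join('\n', lines.Skip(i).Take(maxLines));
    }
}

// Usage: run each chunk through the pipeline and concatenate the reports.
// foreach (var chunk in ChunkByLines(sourceCode, 300)) { ... }
```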

Can I use AgentPipeline with different AI models per agent?

If SessionConfig supports a model preference (see the Advanced GitHub Copilot SDK patterns), you can configure it per session inside each agent's call to RunAsync. The AgentBase pattern accommodates this -- extend the RunAsync signature with an optional model parameter and pass it through to SessionConfig. Different agents could then use different models based on their task requirements.
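Sketching that extension -- this assumes SessionConfig exposes a Model property, which you should verify against the SDK version you're using before relying on it:

```csharp
protected async Task<string> RunAsync(
    string systemPrompt,
    string userMessage,
    string agentLabel,
    string? model = null,          // hypothetical: only useful if the SDK supports it
    CancellationToken ct = default)
{
    await using var session = await Client.CreateSessionAsync(new SessionConfig
    {
        Streaming = true,
        Model = model,             // assumption: property name may differ in your SDK version
        SystemMessage = new SystemMessageConfig
        {
            Mode = SystemMessageMode.Replace,
            Content = systemPrompt
        }
    });
    // ...rest identical to the original RunAsync...
}
```

Each agent could then pass its preferred model, e.g. a cheaper model for documentation and a stronger one for code review.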

What to Explore Next

Build an Interactive Coding Agent with GitHub Copilot SDK in C#

Build an interactive coding agent with GitHub Copilot SDK in C#. Learn persistent CopilotSession management, file read/write tools, run_dotnet_build feedback, and the REPL agent loop.

Multi-Agent Orchestration in Microsoft Agent Framework in C#

Master multi-agent orchestration in Microsoft Agent Framework in C# with a real research app. Revision loops, quality gates, and context passing explained.

Build a Repository Analysis Bot with GitHub Copilot SDK in C#

Build a repository analysis bot with GitHub Copilot SDK in C#. Learn SystemMessageMode.Replace for agent personas, read-only tools, dual console and StringBuilder output, and batch processing.
