Advanced GitHub Copilot SDK in C#: Tools, Hooks, Multi-Model, and Multi-Agent Patterns

Once you've mastered the basics of the GitHub Copilot SDK in C# and built your first applications, a world of advanced patterns opens up that transforms how you architect production AI systems. The techniques I'll share in this guide come from months of working with these patterns in real .NET projects, and the leap from simple chat sessions to multi-agent orchestration, custom tool ecosystems, and intelligent model routing has fundamentally changed how I approach AI integration. In this guide, I'll walk you through the advanced techniques that separate prototype demos from production-ready AI applications: comprehensive tool design with AIFunctionFactory, session lifecycle hooks for observability, multi-model strategies, and multi-agent coordination patterns for building sophisticated AI systems in C#.

Custom Tools with AIFunctionFactory in the GitHub Copilot SDK

The real power of the GitHub Copilot SDK C# emerges when you extend AI sessions with custom tools that ground the language model in your application's domain and capabilities. Tool calling is the mechanism that allows the AI to invoke functions you define, transforming a simple chat interface into an intelligent agent that can interact with APIs, databases, file systems, or any C# code you expose. The AIFunctionFactory class in the Microsoft.Extensions.AI namespace provides the bridge between your .NET methods and the AI's tool invocation system.

When you create a session with tools, the model receives metadata about each tool including its name, description, and parameter schema. During conversation, the model analyzes user requests and determines when to invoke your tools, generating structured arguments that the SDK automatically deserializes and passes to your methods. The tool execution happens transparently, with results being fed back to the model as context for generating its final response. This flow enables incredibly powerful patterns where the AI orchestrates complex workflows by chaining multiple tool calls together.

Here's a complete example showing how to register multiple tools with different signatures and use them in a session:

using Microsoft.Extensions.AI;
using GitHub.Copilot.SDK;

await using var client = new CopilotClient();
await client.StartAsync();

string GetWeather(string city) => $"72°F and sunny in {city}";
string GetTime(string timezone = "UTC")
{
    // Invalid timezone IDs (e.g. "PST" on some platforms) throw, so return an
    // error string the model can explain rather than crashing the session.
    try { return TimeZoneInfo.ConvertTimeBySystemTimeZoneId(DateTime.UtcNow, timezone).ToString("HH:mm"); }
    catch (TimeZoneNotFoundException) { return $"Error: Unknown timezone ID '{timezone}'"; }
}
async Task<string> SearchWebAsync(string query) { await Task.Delay(100); return $"Top result for '{query}': example.com"; }

var tools = new[]
{
    AIFunctionFactory.Create(GetWeather, "get_weather", "Gets current weather for a city"),
    AIFunctionFactory.Create(GetTime, "get_time", "Gets current time in a timezone"),
    AIFunctionFactory.Create(SearchWebAsync, "search_web", "Searches the web for a query")
};

await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    Tools = tools
});

await session.SendAsync(new MessageOptions
{
    Prompt = "What's the weather in Seattle and what time is it in PST?"
});

Notice how AIFunctionFactory.Create() accepts synchronous methods, asynchronous tasks, and methods with optional parameters, automatically inferring the parameter schema from each method signature. The model will intelligently invoke GetWeather for Seattle and GetTime for the PST timezone, combining both results into a natural language response. This pattern scales beautifully as you add more tools, with the model learning to orchestrate increasingly complex workflows.

Tool Design Best Practices

Designing effective tools requires thinking carefully about how the AI model will interpret and invoke your functions, and I've learned through trial and error that following consistent patterns makes a dramatic difference in reliability and maintainability.

First, use descriptive names and rich descriptions. The tool name should be snake_case and clearly indicate the action, like calculate_shipping_cost rather than calc. The description should explain what the tool does, what parameters it expects, and what it returns. The model uses these descriptions to decide when and how to call your tools, so clarity directly impacts accuracy. I always include units for numeric parameters like "price in USD" or "distance in kilometers" to eliminate ambiguity.
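
As a quick sketch of this convention, here's a hypothetical shipping tool (the function, name, and rates are illustrative, not a real API):

```csharp
using Microsoft.Extensions.AI;

// Illustrative only: note the snake_case name and the units (kg, km, USD)
// spelled out in the description so the model can't guess wrong.
decimal CalculateShippingCost(decimal weightKg, int distanceKm) =>
    Math.Round(2.50m + (weightKg * 0.75m) + (distanceKm * 0.02m), 2);

var shippingTool = AIFunctionFactory.Create(
    CalculateShippingCost,
    "calculate_shipping_cost",
    "Calculates the shipping cost in USD given the package weight in kilograms and the shipping distance in kilometers");
```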

Second, return serializable types that the model can easily parse as context. Simple strings work great for most cases, but you can also return objects that serialize to JSON. Avoid returning complex types with circular references or non-serializable fields. If you need to return structured data, create a simple DTO class with public properties.
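
A flat DTO like this serializes cleanly for the model (the class and values are illustrative placeholders):

```csharp
// A simple DTO with public properties serializes to predictable JSON.
// No circular references, no non-serializable fields.
OrderSummary GetOrderSummary(string orderId) => new()
{
    OrderId = orderId,
    TotalUsd = 42.99m,   // hard-coded here; a real tool would query a store
    Status = "shipped"
};

public sealed class OrderSummary
{
    public string OrderId { get; set; } = "";
    public decimal TotalUsd { get; set; }
    public string Status { get; set; } = "";
}
```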

Third, make your tools async when they perform I/O operations like database queries, HTTP requests, or file system access. The GitHub Copilot SDK C# handles async tools seamlessly, and you avoid blocking threads during tool execution. Even a 100ms HTTP call should be async to maintain scalability when you're running multiple concurrent sessions.

Fourth, implement comprehensive error handling within your tools. If a tool throws an unhandled exception, the session may terminate unexpectedly. Instead, catch exceptions and return error messages as strings that the model can interpret and explain to the user. For example, return "Error: User not found with ID 12345" rather than throwing a KeyNotFoundException.
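
Here's a minimal sketch of that pattern -- the in-memory dictionary stands in for whatever user store you actually query:

```csharp
// In-memory stand-in for a real user store (illustrative only).
var users = new Dictionary<string, string> { ["42"] = "ada@example.com" };

async Task<string> GetUserEmailAsync(string userId)
{
    try
    {
        await Task.Yield(); // placeholder for a real async lookup
        return users.TryGetValue(userId, out var email)
            ? email
            : $"Error: User not found with ID {userId}"; // model can relay this
    }
    catch (Exception ex)
    {
        // Never let the exception escape the tool boundary.
        return $"Error: Could not look up user {userId} -- {ex.Message}";
    }
}
```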

Finally, design tools to be idempotent when possible. The model might retry tool calls if it doesn't understand the result, so tools that create resources or perform state changes should handle duplicate invocations gracefully. Use unique identifiers or check for existing resources before creating new ones.
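
A minimal sketch of the check-before-create approach, with a dictionary standing in for your persistence layer:

```csharp
// Illustrative ticket store: ticketKey -> ticketId.
var tickets = new Dictionary<string, string>();

string CreateTicket(string ticketKey, string title)
{
    // A retried tool call with the same key is harmless: return the
    // existing resource instead of creating a duplicate.
    if (tickets.TryGetValue(ticketKey, out var existingId))
        return $"Ticket already exists: {existingId}";

    var newId = Guid.NewGuid().ToString("N")[..8];
    tickets[ticketKey] = newId;
    return $"Created ticket {newId}: {title}";
}
```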

Session Hooks and Event Lifecycle

Understanding and leveraging the session event lifecycle is crucial for building observable, debuggable AI applications that you can monitor and optimize in production. Session hooks allow you to intercept every event that flows through a Copilot session, from individual token deltas to completed messages, tool invocations, errors, and lifecycle state changes. I use hooks extensively for structured logging, telemetry collection, response filtering, and real-time UI updates.

The session.On(...) method registers an event handler that receives all session events as a discriminated union. You pattern match on the event type to handle each case appropriately. The SDK defines several event types: AssistantMessageDeltaEvent for streaming tokens, AssistantMessageEvent for completed messages, ToolExecutionStartEvent when the model invokes tools, SessionIdleEvent when the session finishes processing, and SessionErrorEvent for errors. By handling all these events comprehensively, you gain complete visibility into what's happening inside your AI sessions.

Preview Note: The GitHub Copilot SDK is in active development. Hook/event contracts and model support lists may change between releases. Always verify against the current SDK documentation.

Here's a complete example showing comprehensive event logging that I use as a foundation in most projects:

session.On(evt =>
{
    switch (evt)
    {
        case AssistantMessageDeltaEvent delta:
            logger.LogDebug("Token received: {Token}", delta.Data.DeltaContent);
            break;
        case AssistantMessageEvent msg:
            logger.LogInformation("Message complete: {Length} chars", msg.Data.Content?.Length);
            break;
        case ToolExecutionStartEvent toolCall:
            logger.LogInformation("Tool called: {ToolName} with args: {Args}", 
                toolCall.Data.ToolName, toolCall.Data.Arguments);
            break;
        case SessionIdleEvent:
            logger.LogInformation("Session idle -- response complete");
            break;
        case SessionErrorEvent err:
            logger.LogError("Session error: {Code} -- {Message}", err.Data.Code, err.Data.Message);
            break;
    }
});

This pattern integrates perfectly with the [observer pattern](https://www.devleader.ca/2023/11/17/examples-of-the-observer-pattern-in-c-how-to-simplify-event-management) you might already use in .NET applications. You can register multiple handlers, chain them together, or use them to update UI state in real-time Blazor or WPF applications. For production systems, I typically extract event handling into dedicated services that publish metrics to Application Insights or DataDog, making it easy to track token usage, latency percentiles, and error rates across all sessions.
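
Here's a sketch of that extraction. The IMetricSink interface and metric names are my own placeholders -- adapt them to Application Insights, DataDog, or whatever sink you use:

```csharp
using GitHub.Copilot.SDK;

public interface IMetricSink
{
    void Count(string name, int value = 1);
    void Duration(string name, TimeSpan elapsed);
}

// Dedicated telemetry service keeps session wiring thin.
public sealed class SessionTelemetry
{
    private readonly IMetricSink _metrics;
    private DateTime _started;

    public SessionTelemetry(IMetricSink metrics) => _metrics = metrics;

    public void Attach(CopilotSession session)
    {
        _started = DateTime.UtcNow;
        session.On(evt =>
        {
            switch (evt)
            {
                case AssistantMessageDeltaEvent: _metrics.Count("tokens"); break;
                case ToolExecutionStartEvent: _metrics.Count("tool_calls"); break;
                case SessionIdleEvent: _metrics.Duration("response_time", DateTime.UtcNow - _started); break;
                case SessionErrorEvent: _metrics.Count("errors"); break;
            }
        });
    }
}
```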

Multi-Model Routing

One of the most powerful yet underutilized features of the GitHub Copilot SDK C# is the ability to route different sessions to different models based on your specific use case requirements. Not every conversation needs the most capable and expensive model, and intelligent model selection can dramatically reduce costs while maintaining or even improving user experience by using faster models where appropriate. A well-designed multi-model strategy can significantly cut inference costs in typical applications.

The model selection happens at session creation time through the SessionConfig object's Model property. You can specify any model that your Copilot CLI has access to, including GPT-5.2, GPT-5, GPT-4.1, Claude models, and others (subject to change -- verify with official documentation). Different models have different strengths: GPT-5.2 excels at complex reasoning and code generation, GPT-4.1 offers a great balance of speed and capability for most tasks, and smaller models can handle simple classification or extraction tasks at a fraction of the cost.

The key is implementing a [strategy pattern](https://www.devleader.ca/2023/11/22/how-to-implement-the-strategy-pattern-in-c-for-improved-code-flexibility) for model selection based on request characteristics. For example, use GPT-5.2 for code review or architecture questions, GPT-4.1 for general chat and summarization, and Claude for long-context document analysis. Here's how you might implement side-by-side model comparison to empirically determine which model performs best for your specific use case:

using System.Text;
using GitHub.Copilot.SDK;

await using var client = new CopilotClient();
await client.StartAsync();

var testPrompt = "Explain async/await in C# in 50 words";

// Test with GPT-5
await using var session1 = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5"
});

var result1 = new StringBuilder();
session1.On(evt => {
    if (evt is AssistantMessageDeltaEvent delta) result1.Append(delta.Data.DeltaContent);
});
var start1 = DateTime.UtcNow;
await session1.SendAsync(new MessageOptions { Prompt = testPrompt });
await Task.Delay(5000); // Simplified wait; listen for SessionIdleEvent in production
var duration1 = DateTime.UtcNow - start1;
Console.WriteLine($"GPT-5 ({duration1.TotalSeconds:F1}s): {result1}");

// Test with GPT-4.1
await using var session2 = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1"
});

var result2 = new StringBuilder();
session2.On(evt => {
    if (evt is AssistantMessageDeltaEvent delta) result2.Append(delta.Data.DeltaContent);
});
var start2 = DateTime.UtcNow;
await session2.SendAsync(new MessageOptions { Prompt = testPrompt });
await Task.Delay(5000); // Simplified wait; listen for SessionIdleEvent in production
var duration2 = DateTime.UtcNow - start2;
Console.WriteLine($"GPT-4.1 ({duration2.TotalSeconds:F1}s): {result2}");

Run experiments like this against your actual prompts and data to build an empirical model selection strategy. You'll often find that faster, cheaper models produce identical or better results for specific categories of requests, and this data justifies the engineering effort to implement dynamic routing.
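
Once you have that data, the routing itself can be a small strategy function. The categories below are illustrative -- derive your own from your measurements:

```csharp
// Map request categories to model IDs based on empirical comparison data.
// Categories and the mapping are illustrative, not prescriptive.
static string SelectModel(string category) => category switch
{
    "code-review" or "architecture" => "gpt-5",   // deeper reasoning
    _ => "gpt-4.1"                                // faster, cheaper default
};

// Usage: route at session creation time, e.g.
// await using var session = await client.CreateSessionAsync(
//     new SessionConfig { Model = SelectModel("code-review") });
Console.WriteLine(SelectModel("code-review"));
Console.WriteLine(SelectModel("summarization"));
```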

BYOK: Bring Your Own Key Patterns

While the GitHub Copilot SDK C# defaults to using credentials from the Copilot CLI, production scenarios often require bringing your own keys to Azure OpenAI, OpenAI directly, or custom model endpoints. This pattern gives you complete control over billing, rate limits, model versions, and deployment regions. I use BYOK patterns when deploying to enterprise environments where centralized Azure OpenAI instances with private endpoints and managed identity are required for compliance.

The SDK supports BYOK via the SessionConfig.Provider property. You can configure custom providers using ProviderConfig:

await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4",
    Provider = new ProviderConfig
    {
        Type = "openai",
        BaseUrl = "https://your-azure-openai-endpoint.openai.azure.com/",
        ApiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
    }
});

This pattern works for Azure OpenAI, standard OpenAI, or any OpenAI-compatible endpoint. For Azure OpenAI specifically, set the Type to "azure-openai" and provide your deployment endpoint URL and API key.

BYOK makes sense when you need guaranteed capacity through provisioned throughput, when you need to deploy models to specific Azure regions for data residency requirements, when you want to implement custom rate limiting and quota management across your organization, or when you're building commercial applications that need to separate customer usage from your development credentials. The trade-off is additional configuration complexity and the responsibility of managing keys and connection strings securely.

For hybrid scenarios, you can maintain two code paths using the [strategy pattern](https://www.devleader.ca/2023/11/22/how-to-implement-the-strategy-pattern-in-c-for-improved-code-flexibility) again, selecting the appropriate client factory based on environment configuration. Development environments use Copilot CLI credentials for zero-config setup, while production uses Azure OpenAI with managed identity authentication.
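
A minimal sketch of that selection, assuming the environment variable names shown (they're placeholders, as is the ProviderConfig shape -- verify against current SDK docs):

```csharp
using GitHub.Copilot.SDK;

// Dev: no BYOK variables set, so fall back to Copilot CLI credentials.
// Prod: both variables present, so build a BYOK provider config.
static SessionConfig BuildSessionConfig(string model)
{
    var byokKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
    var byokUrl = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");

    if (string.IsNullOrEmpty(byokKey) || string.IsNullOrEmpty(byokUrl))
        return new SessionConfig { Model = model }; // zero-config dev path

    return new SessionConfig
    {
        Model = model,
        Provider = new ProviderConfig
        {
            Type = "azure-openai",
            BaseUrl = byokUrl,
            ApiKey = byokKey
        }
    };
}
```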

Microsoft Agent Framework Integration

Note: The AsAIAgent() integration bridge is anticipated functionality that is not yet present in the official SDK README at time of writing. The pattern described here reflects the planned integration direction -- verify current availability in the official repository before implementing.

The GitHub Copilot SDK C# integrates seamlessly with the Microsoft Agent Framework through the client.AsAIAgent() extension method, which exposes your Copilot client as an IEmbeddedAgent that can participate in multi-agent orchestration scenarios. This bridge is powerful because it lets you combine Copilot sessions with Semantic Kernel agents, custom agents, and the rich agent collaboration patterns that the Microsoft Agent Framework provides.

When you call AsAIAgent(), the Copilot client becomes a first-class agent that can receive messages, maintain conversation history, and participate in agent-to-agent communication. This enables patterns where a Semantic Kernel agent acts as a coordinator that delegates specific tasks to specialized Copilot sessions, or where multiple Copilot sessions with different tool sets communicate through a shared message bus managed by the Agent Framework.

Here's a brief example showing the bridge between the GitHub Copilot SDK C# and the Agent Framework:

using GitHub.Copilot.SDK;
using Microsoft.Extensions.AI;

await using var client = new CopilotClient();
await client.StartAsync();

// Convert the Copilot client to an agent
// var copilotAgent = client.AsAIAgent(); // NOTE: verify this API in SDK docs - may not yet be available

// Now this agent can participate in Agent Framework orchestration
// alongside Semantic Kernel agents or other IEmbeddedAgent implementations

This integration is particularly valuable if you're already invested in the [Semantic Kernel agents ecosystem](https://www.devleader.ca/2026/02/28/semantic-kernel-agents-csharp-complete-guide) and want to add Copilot-powered capabilities without rewriting your existing agent infrastructure. You get the benefits of Copilot's conversational capabilities and the Agent Framework's sophisticated collaboration patterns in a single cohesive system.

Multi-Agent Patterns with Multiple Sessions

Building truly sophisticated AI systems often requires coordinating multiple specialized agents that each handle different aspects of a complex workflow. The GitHub Copilot SDK C# makes this pattern possible by letting you create multiple concurrent sessions with different configurations, though this is an emerging capability without formal production-readiness guarantees. I've used multi-agent patterns extensively for systems where one agent acts as a coordinator that plans and decomposes tasks while specialist agents execute specific steps using domain-specific tools.

The coordinator-specialist pattern is my go-to architecture for complex AI applications. The coordinator uses a powerful reasoning model like GPT-5.2 and has no tools, focusing purely on planning, decomposition, and orchestration. Specialist agents use faster models like GPT-4.1 and have narrow, focused tool sets for specific domains like database access, API integration, or file system operations. The coordinator generates a plan, delegates steps to appropriate specialists, and synthesizes the results into a coherent response.

Here's a complete example showing this pattern in action:

await using var client = new CopilotClient();
await client.StartAsync();

// Coordinator uses a powerful model for planning
await using var coordinator = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5"
});

// Specialist uses a faster model for execution
await using var specialist = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    Tools = [AIFunctionFactory.Create(SearchWebAsync, "search_web", "Searches the web")]
});

// Coordinator creates a plan
var plan = await GetResponseAsync(coordinator, "Break this task into 3 steps: research and summarize async patterns in C#");
Console.WriteLine($"Plan: {plan}");

// Specialist executes
var result = await GetResponseAsync(specialist, $"Execute this plan step by step: {plan}");
Console.WriteLine($"Result: {result}");

static async Task<string> GetResponseAsync(CopilotSession session, string prompt)
{
    var tcs = new TaskCompletionSource<string>();
    var sb = new System.Text.StringBuilder();
    session.On(evt =>
    {
        switch (evt)
        {
            case AssistantMessageEvent msg: sb.Append(msg.Data.Content); break;
            case SessionIdleEvent: tcs.TrySetResult(sb.ToString()); break;
            case SessionErrorEvent err: tcs.TrySetException(new Exception(err.Data.Message)); break;
        }
    });
    await session.SendAsync(new MessageOptions { Prompt = prompt });
    return await tcs.Task;
}

async Task<string> SearchWebAsync(string query) { await Task.Delay(100); return $"Top result for '{query}': example.com"; }

This pattern scales beautifully. You can add multiple specialists with different capabilities, implement routing logic that selects the appropriate specialist for each subtask, and even create specialist hierarchies where mid-level agents coordinate teams of low-level execution agents. The key insight is that each session is lightweight and cheap to create, so you can spin up dozens of concurrent agents without significant overhead.

Performance: Concurrent Sessions and Throughput

One of the most pleasant surprises when working with the GitHub Copilot SDK C# is how well it handles concurrent sessions, making it practical to run dozens or even hundreds of parallel AI conversations in a single application. Sessions are designed to be lightweight, with most of the heavy lifting happening in the remote model API, so the local memory and CPU overhead is minimal. This architecture enables powerful patterns for batch processing, parallel evaluation, and high-throughput AI services.

For batch processing scenarios like analyzing thousands of documents or evaluating multiple prompt variations, you can create sessions on-demand and run them concurrently with careful rate limiting. The bottleneck is usually the remote API's rate limits, not your local application. I typically use a SemaphoreSlim to control concurrency and respect rate limits while maximizing throughput:

var documents = LoadDocuments(); // Returns 1000 documents
var maxConcurrency = 10;
var semaphore = new SemaphoreSlim(maxConcurrency);
var tasks = new List<Task<string>>();

foreach (var doc in documents)
{
    await semaphore.WaitAsync();
    tasks.Add(Task.Run(async () =>
    {
        try
        {
            await using var session = await client.CreateSessionAsync(new SessionConfig
            {
                Model = "gpt-4.1"
            });
            return await GetResponseAsync(session, $"Summarize this document: {doc}");
        }
        finally
        {
            semaphore.Release();
        }
    }));
}

var results = await Task.WhenAll(tasks);

This pattern lets you process large batches efficiently while respecting rate limits and avoiding overwhelming the API. Adjust maxConcurrency based on your API tier and rate limits. For Azure OpenAI with provisioned throughput, you can often run 50-100 concurrent sessions. For shared endpoints, stay conservative at 5-10 concurrent sessions.

The session design also enables sophisticated caching and session pooling patterns. Since sessions maintain conversation history, you can keep long-lived sessions around for multi-turn conversations and reuse them across requests. For stateless request-response patterns, create sessions on-demand and dispose them immediately. The async disposal is fast and ensures clean shutdown.
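
Here's a minimal pooling sketch for the stateless case. Note it is deliberately bare -- no health checks or eviction -- and a rented session still carries any history from earlier turns, so only pool sessions where that's acceptable:

```csharp
using System.Collections.Concurrent;
using GitHub.Copilot.SDK;

// Reuse idle sessions instead of paying creation cost per request.
public sealed class SessionPool : IAsyncDisposable
{
    private readonly CopilotClient _client;
    private readonly string _model;
    private readonly ConcurrentBag<CopilotSession> _idle = new();

    public SessionPool(CopilotClient client, string model) =>
        (_client, _model) = (client, model);

    public async Task<CopilotSession> RentAsync() =>
        _idle.TryTake(out var session)
            ? session
            : await _client.CreateSessionAsync(new SessionConfig { Model = _model });

    public void Return(CopilotSession session) => _idle.Add(session);

    public async ValueTask DisposeAsync()
    {
        while (_idle.TryTake(out var session))
            await session.DisposeAsync();
    }
}
```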

Conclusion

Mastering these advanced patterns in the GitHub Copilot SDK C# transforms how you architect production AI applications, moving from simple chat interfaces to sophisticated multi-agent systems with custom tools, intelligent model routing, and comprehensive observability. I've covered the core techniques that I use daily in real .NET projects, from AIFunctionFactory tool design to session lifecycle hooks, multi-model strategies, and coordinator-specialist agent patterns.

Frequently Asked Questions About Advanced GitHub Copilot SDK

What's the difference between the basic GitHub Copilot SDK and advanced patterns?

The basic GitHub Copilot SDK in C# focuses on simple chat sessions and message passing, while advanced patterns add custom tools via AIFunctionFactory, session lifecycle hooks for observability, multi-model routing for cost optimization, and multi-agent orchestration for complex workflows. Advanced patterns are essential for production applications that need reliability, observability, and sophisticated AI capabilities.

How do I decide which model to use for different sessions?

Use GPT-5.2 for complex reasoning, code generation, and architectural decisions where quality matters most. Use GPT-4.1 for general-purpose tasks like chat, summarization, and standard tool orchestration where speed and cost are important. Use specialized models like Claude for long-context document analysis. Implement A/B testing with your actual prompts to build an empirical model selection strategy.

Can I run multiple Copilot sessions concurrently?

Yes, the advanced GitHub Copilot SDK in C# is designed for concurrent sessions. Sessions are lightweight, and you can run dozens or hundreds in parallel with proper rate limiting using SemaphoreSlim. The bottleneck is usually the remote API rate limits, not local resources. This enables powerful batch processing and multi-agent patterns.

What are the best practices for designing tools with AIFunctionFactory?

Use descriptive snake_case names and rich descriptions that explain parameters and return values. Return serializable types like strings or simple DTOs. Make tools async for I/O operations. Implement comprehensive error handling to avoid session termination. Design for idempotency when tools perform state changes. The model uses tool descriptions to decide when and how to invoke them, so clarity is critical.

How do I integrate the GitHub Copilot SDK with Semantic Kernel agents?

Use the client.AsAIAgent() extension method to convert your Copilot client into an IEmbeddedAgent that participates in Microsoft Agent Framework orchestration. This bridges the advanced GitHub Copilot SDK in C# with Semantic Kernel's multi-agent collaboration patterns, letting you combine Copilot sessions with SK agents in unified workflows. Note: The AsAIAgent() integration bridge is anticipated functionality that is not yet present in the official SDK README at time of writing -- verify current availability in the official repository before implementing.

Next Steps

To continue deepening your expertise, I recommend exploring these related topics in detail. For foundational knowledge, review the GitHub Copilot SDK .NET complete guide and the getting started guide if you need to refresh the basics. For practical application patterns, check out building real apps with the GitHub Copilot SDK.

To go deeper on multi-agent orchestration, the Semantic Kernel agents guide covers the Microsoft Agent Framework integration in detail. For understanding the design patterns used throughout this guide, explore the strategy pattern in C# and the observer pattern in C#. Each of these topics deserves its own deep dive, and I'll be publishing dedicated articles on tool design patterns, production observability strategies, and advanced multi-agent architectures in the coming weeks. The patterns you've learned here are the foundation for building AI applications that scale to production workloads and deliver real business value.
