If you're a .NET developer looking to integrate AI capabilities directly into your C# applications, getting started with the GitHub Copilot SDK in C# brings the power of GitHub Copilot beyond your IDE and into your code. The SDK lets you programmatically access powerful language models through a clean, event-driven .NET API, so you can build AI-powered features right into your .NET applications. In this getting started guide, I'll walk you through everything you need to set up the GitHub Copilot SDK C# in your projects: establishing your first connection with CopilotClient, creating sessions, handling streaming responses, and building your first multi-turn conversation. Whether you're building a chatbot, adding AI assistance to an existing application, or experimenting with large language models in .NET, this article covers the fundamentals. It's part of my complete guide to the GitHub Copilot SDK for .NET and serves as the foundation for the more advanced topics covered in subsequent articles.
Prerequisites: What You Need Before Getting Started with GitHub Copilot SDK in C#
Before diving into the GitHub Copilot SDK C# implementation, you need to ensure your development environment is properly configured. The SDK has a few key requirements that must be satisfied for it to work correctly in your .NET applications.
First and foremost, you'll need .NET 8 or later installed on your machine. The GitHub Copilot SDK is built to leverage the latest features and performance improvements in modern .NET, so make sure you have an up-to-date runtime and SDK. You can verify your installation by running dotnet --version in your terminal.
The most critical prerequisite is having the GitHub Copilot CLI installed and properly configured on your system. The SDK doesn't communicate directly with GitHub's services -- instead, it acts as a wrapper around the GitHub Copilot CLI, starting it as a child process and communicating via standard input/output. This means the CLI must be installed, accessible in your system's PATH, and authenticated with a valid GitHub Copilot subscription. You can find detailed installation instructions in the official GitHub Copilot CLI documentation.
Finally, you'll need an active GitHub Copilot subscription. The SDK leverages your existing Copilot subscription to access the language models, so make sure your account is properly configured and has the necessary permissions. Once you have these prerequisites in place, you're ready to start building with the GitHub Copilot SDK C#.
Installing the GitHub Copilot SDK C#
Getting the GitHub Copilot SDK C# into your project is straightforward and follows the standard NuGet package installation process. The SDK is distributed as a NuGet package that you can add to any .NET project, whether it's a console application, web API, or desktop application.
To install the core SDK package, open your terminal in your project directory and run:
dotnet add package GitHub.Copilot.SDK
This command adds the main SDK package to your project, which includes all the core types you need: CopilotClient, CopilotSession, the event model, and all related configuration classes. The package is designed to have minimal dependencies and integrates seamlessly with modern .NET applications.
If you're planning to use tools and function calling, you'll also want to install the Microsoft.Extensions.AI package for the AIFunctionFactory integration:
dotnet add package Microsoft.Extensions.AI
This additional package provides the AIFunctionFactory class for registering custom tools and functions that the AI can invoke during conversations.
To verify your installation was successful, open your .csproj file and confirm that you see package references similar to these:
<ItemGroup>
<PackageReference Include="GitHub.Copilot.SDK" />
</ItemGroup>
Since the SDK is in technical preview, it's recommended to let NuGet use the latest available version rather than pinning to a specific version number.
Your First CopilotClient Connection
The CopilotClient is your entry point into the GitHub Copilot SDK C# and represents the connection to the underlying GitHub Copilot CLI process. Creating and initializing a client is the first step in any application that uses the SDK, and understanding how this connection works is fundamental to building reliable AI-powered applications.
Creating a CopilotClient instance is simple, but there's an important detail to understand: the client implements IAsyncDisposable, which means you should use it with the await using statement to ensure proper cleanup. This pattern is crucial because the client spawns a child process for the GitHub Copilot CLI, and you want to make sure that process is properly terminated when you're done.
Here's the basic pattern for creating and starting a client:
await using var client = new CopilotClient();
await client.StartAsync();
Console.WriteLine("Connected to GitHub Copilot CLI");
When you call StartAsync(), the SDK does several things under the hood. First, it locates the GitHub Copilot CLI executable in your system's PATH. Then it spawns the CLI process with appropriate parameters to enable the API mode. Finally, it establishes bidirectional communication channels using standard input/output streams, setting up the infrastructure for sending requests and receiving responses. If the CLI isn't installed or isn't properly authenticated, you'll receive an exception at this stage.
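Since StartAsync() is where most setup problems surface, it's worth wrapping it defensively. The SDK may throw a more specific exception type than shown here, so this sketch catches broadly and reports the failure:

```csharp
using GitHub.Copilot.SDK;

await using var client = new CopilotClient();
try
{
    await client.StartAsync();
    Console.WriteLine("Connected to GitHub Copilot CLI");
}
catch (Exception ex)
{
    // Typical causes: CLI not on PATH, or the CLI isn't authenticated.
    Console.Error.WriteLine($"Failed to start the Copilot CLI: {ex.Message}");
    Console.Error.WriteLine("Check that the GitHub Copilot CLI is installed and signed in.");
    return;
}
```

Failing fast with a clear message here saves a lot of confusion later, because every subsequent SDK call depends on this connection.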
Let's look at a complete "hello world" example that creates a client, establishes a session, sends a message, and receives a response:
using GitHub.Copilot.SDK;
await using var client = new CopilotClient();
await client.StartAsync();
Console.WriteLine("Connected to GitHub Copilot CLI");
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5"
});
var tcs = new TaskCompletionSource();
var response = new System.Text.StringBuilder();
session.On(evt =>
{
switch (evt)
{
case AssistantMessageEvent msg:
response.Append(msg.Data.Content);
break;
case SessionIdleEvent:
tcs.TrySetResult();
break;
case SessionErrorEvent err:
tcs.TrySetException(new Exception(err.Data.Message));
break;
}
});
await session.SendAsync(new MessageOptions { Prompt = "Hello! What can you help me with?" });
await tcs.Task;
Console.WriteLine(response.ToString());
This example demonstrates the complete flow: create the client, start it, create a session, register event handlers, send a message, and wait for the response. The response you receive will be the AI's reply to your greeting, typically explaining the various ways it can assist you. This pattern forms the foundation for all interactions with the GitHub Copilot SDK C#, and you'll see variations of it throughout your work with the SDK.
Understanding how CopilotClient manages the CLI process is important for troubleshooting and resource management. If you encounter connection issues, check that the CLI is installed correctly and that your GitHub Copilot subscription is active. If the application crashes or is terminated unexpectedly, orphaned CLI processes might remain running -- this is why proper disposal with await using is so critical.
Understanding CopilotSession
While CopilotClient represents your connection to the GitHub Copilot CLI, CopilotSession represents an individual conversation context with the AI model. Sessions are where the actual AI interactions happen, and understanding how they work is key to building effective applications with the GitHub Copilot SDK C#.
A session maintains the conversation history between you and the AI model, which means that each message you send is understood in the context of previous messages in that same session. This is what enables natural, multi-turn conversations where the AI can reference earlier parts of the discussion. Each session is isolated, so you can have multiple concurrent sessions with different conversation contexts if needed.
You create a session by calling CreateSessionAsync() on your CopilotClient instance and providing a SessionConfig object. The configuration allows you to specify which model to use, whether to enable streaming, and other parameters that control the session's behavior:
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5",
Streaming = false
});
The SessionConfig offers several important options. The Model property lets you specify which language model to use -- options typically include various GPT models depending on your Copilot subscription. The Streaming property controls whether responses are delivered as complete messages or streamed token-by-token, which I'll cover in detail in a later section. The Temperature parameter controls the randomness of the AI's responses, with lower values producing more deterministic outputs.
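Pulling these options together, a fuller configuration might look like the following sketch (the SDK is in technical preview, so exact property availability may vary by version):

```csharp
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",   // which language model handles this session
    Streaming = true,  // deliver responses token-by-token
    Temperature = 0.2  // lower values produce more deterministic output
});
```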
The session uses an event-driven API rather than a traditional request-response pattern. Instead of making a call and getting a return value, you send messages using SendAsync() and receive responses through event handlers that you register with the On() method. This design choice enables the streaming capabilities and makes it easier to handle long-running AI operations without blocking your application. For .NET developers familiar with async/await patterns, this event-driven approach might feel slightly different at first, but it provides powerful flexibility once you understand how it works.
Like CopilotClient, sessions implement IAsyncDisposable and should be properly disposed when you're finished with them. A typical pattern is to nest the await using statements for both the client and session, ensuring that resources are cleaned up in the correct order when your code exits.
The Event Model: On(), SessionIdleEvent, Errors
The GitHub Copilot SDK C# uses an event-driven architecture for handling AI responses, which is fundamental to understanding how to work with the SDK effectively. Instead of blocking and waiting for a complete response, you register event handlers that are invoked as the SDK processes various stages of the AI interaction.
The primary method for registering event handlers is the On() method on a CopilotSession instance. You pass this method a callback function that receives event objects, and you typically use a switch expression or statement to handle different event types. The SDK provides several event types that you need to understand to build robust applications.
The AssistantMessageEvent is raised when the AI has completed generating a response (in non-streaming mode). This event contains the full text of the AI's reply in its Data.Content property. This is the primary event you'll use for simple, non-streaming interactions where you want to receive the complete response at once.
When streaming is enabled, you'll instead receive AssistantMessageDeltaEvent instances, which contain small chunks of the response as they're generated. Each delta event includes a Data.DeltaContent property with a fragment of the response, typically a few tokens or words. By handling these events as they arrive, you can provide real-time feedback to users, showing the AI's response as it's being generated.
The SessionIdleEvent is particularly important -- it signals that the AI has finished processing and the session is ready for the next message. This is your indication that a complete response has been delivered and that it's safe to send another message or conclude your interaction. Many examples use a TaskCompletionSource in conjunction with this event to bridge between the event-driven SDK and traditional async/await patterns.
The SessionErrorEvent is raised when something goes wrong during the AI interaction. This could be a network issue, an authentication problem, rate limiting, or various other errors. The event includes a Data.Message property with details about the error. You should always handle this event to ensure your application can respond gracefully to failures.
Finally, the ToolExecutionStartEvent is raised when the AI decides to invoke a tool or function. This is part of the function calling capabilities of the SDK, which allows the AI to interact with your application's functions during the conversation. While we won't cover tool calling in depth in this getting started article, it's important to know this event exists.
Here's a complete example showing the switch pattern for handling all these event types:
using GitHub.Copilot.SDK;
await using var client = new CopilotClient();
await client.StartAsync();
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5"
});
var tcs = new TaskCompletionSource();
var response = new System.Text.StringBuilder();
session.On(evt =>
{
switch (evt)
{
case AssistantMessageEvent msg:
response.Append(msg.Data.Content);
Console.WriteLine($"Complete message received: {msg.Data.Content}");
break;
case AssistantMessageDeltaEvent delta:
response.Append(delta.Data.DeltaContent);
Console.Write(delta.Data.DeltaContent);
break;
case SessionIdleEvent:
Console.WriteLine("\nSession is idle and ready for next message");
tcs.TrySetResult();
break;
case SessionErrorEvent err:
Console.WriteLine($"Error occurred: {err.Data.Message}");
tcs.TrySetException(new Exception(err.Data.Message));
break;
case ToolExecutionStartEvent tool:
Console.WriteLine($"AI requested tool: {tool.Data.Name}");
break;
default:
Console.WriteLine($"Received unknown event type: {evt.GetType().Name}");
break;
}
});
await session.SendAsync(new MessageOptions
{
Prompt = "Explain dependency injection in C# in one paragraph."
});
await tcs.Task;
This pattern of using a switch statement to handle different event types is the idiomatic way to work with the GitHub Copilot SDK C#. You'll see variations of this pattern throughout all SDK usage, and understanding how these events flow through your application is crucial for building reliable AI-powered features.
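One refinement worth adding to this pattern: if the session never reaches idle (a hung CLI process, for example), await tcs.Task will wait forever. Task.WaitAsync, available since .NET 6, gives you a simple timeout around the same TaskCompletionSource:

```csharp
try
{
    // Give up if no SessionIdleEvent arrives within two minutes.
    await tcs.Task.WaitAsync(TimeSpan.FromMinutes(2));
}
catch (TimeoutException)
{
    Console.Error.WriteLine("The session did not become idle in time.");
}
```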
Streaming: Real-Time Token Output
Streaming is one of the most powerful features of the GitHub Copilot SDK C#, enabling you to display AI responses as they're being generated rather than waiting for the complete response. This creates a much better user experience, especially for longer responses, as users can start reading the output immediately while the AI continues generating.
To enable streaming, you set the Streaming property to true in your SessionConfig when creating a session. When streaming is enabled, instead of receiving a single AssistantMessageEvent with the complete response, you'll receive multiple AssistantMessageDeltaEvent instances, each containing a small fragment of the response. When you're getting started with GitHub Copilot SDK in C#, understanding streaming is crucial for building responsive AI applications.
Each AssistantMessageDeltaEvent includes a Data.DeltaContent property that contains a few tokens of text -- typically a word or two, sometimes even individual characters. The SDK delivers these events as quickly as the underlying AI model generates them, which is usually quite fast but depends on the model and the complexity of the response.
The key to implementing streaming effectively is handling the delta events by immediately displaying or processing each fragment as it arrives. Here's a complete example that streams an AI response directly to the console:
using GitHub.Copilot.SDK;
await using var client = new CopilotClient();
await client.StartAsync();
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5",
Streaming = true
});
var tcs = new TaskCompletionSource();
session.On(evt =>
{
switch (evt)
{
case AssistantMessageDeltaEvent delta:
Console.Write(delta.Data.DeltaContent);
break;
case SessionIdleEvent:
Console.WriteLine();
tcs.TrySetResult();
break;
case SessionErrorEvent err:
tcs.TrySetException(new Exception(err.Data.Message));
break;
}
});
await session.SendAsync(new MessageOptions
{
Prompt = "Write a .NET method that reads a file asynchronously."
});
await tcs.Task;
Notice that in streaming mode, we use Console.Write() rather than Console.WriteLine() for the delta events, so that each fragment is appended to the same line rather than starting a new line. We only write a newline when we receive the SessionIdleEvent, which signals that streaming is complete.
If you need to collect the full response while still showing streaming output, you can use a StringBuilder to accumulate the fragments:
var fullResponse = new System.Text.StringBuilder();
session.On(evt =>
{
switch (evt)
{
case AssistantMessageDeltaEvent delta:
string fragment = delta.Data.DeltaContent;
fullResponse.Append(fragment);
Console.Write(fragment);
break;
// ... other cases
}
});
Streaming is particularly valuable in user-facing applications where perceived performance matters. Even though the total time to generate the complete response is roughly the same whether streaming is enabled or not, users perceive streaming responses as much faster because they can start reading immediately. This is the same technique that ChatGPT and other AI chat interfaces use to create their responsive feel.
One important consideration with streaming is error handling. If an error occurs mid-stream, you'll receive a SessionErrorEvent, but you may have already displayed partial output to your users. Your application should be designed to handle this gracefully, perhaps by showing an error message and potentially offering to retry the request.
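One way to handle mid-stream failures gracefully is to track how much output has already been shown, so the error path can tell the user the response is incomplete rather than leaving it silently truncated. A sketch:

```csharp
var shown = new System.Text.StringBuilder();
var tcs = new TaskCompletionSource();
session.On(evt =>
{
    switch (evt)
    {
        case AssistantMessageDeltaEvent delta:
            shown.Append(delta.Data.DeltaContent);
            Console.Write(delta.Data.DeltaContent);
            break;
        case SessionIdleEvent:
            tcs.TrySetResult();
            break;
        case SessionErrorEvent err:
            // Partial output has already been printed; make that explicit.
            Console.WriteLine();
            Console.Error.WriteLine(
                $"[response interrupted after {shown.Length} characters: {err.Data.Message}]");
            tcs.TrySetException(new Exception(err.Data.Message));
            break;
    }
});
```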
Session Lifecycle: Starting, Using, and Disposing
Proper resource management is critical when working with the GitHub Copilot SDK C#, as both CopilotClient and CopilotSession manage unmanaged resources -- specifically, the child process running the GitHub Copilot CLI. Understanding the lifecycle of these objects and implementing proper disposal patterns is essential for building reliable, leak-free applications.
Both CopilotClient and CopilotSession implement the IAsyncDisposable interface, which means they have cleanup work that needs to happen when you're done with them. For the client, disposal involves terminating the CLI child process and cleaning up the communication channels. For a session, disposal involves properly closing the conversation context and notifying the CLI that the session is complete.
The await using statement is your primary tool for ensuring proper disposal. This language feature, introduced in C# 8, automatically calls DisposeAsync() when the variable goes out of scope, even if an exception is thrown. Here's the proper pattern for nesting client and session disposal:
await using var client = new CopilotClient();
await client.StartAsync();
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5"
});
// Use the session...
await session.SendAsync(new MessageOptions { Prompt = "Hello!" });
// Disposal happens automatically in reverse order:
// 1. session is disposed when leaving its scope
// 2. client is disposed when leaving its scope
The nesting order is important: you want to dispose of the session before disposing of the client, because the session needs the client's connection to properly close. The await using statements handle this automatically by disposing in reverse order of declaration.
If you forget to properly dispose of clients or sessions, you'll leak resources. The most visible symptom is orphaned CLI processes that continue running even after your application exits. You can verify this by checking your system's process list for github-copilot-cli processes. If you see multiple instances after your application has closed, you have a disposal leak.
In applications using dependency injection, you need to carefully consider the lifetime of your CopilotClient. Because the client manages a long-lived process, it's typically best registered as a singleton or scoped service rather than transient:
// In your service registration code
services.AddSingleton&lt;ICopilotClient&gt;(provider =>
{
var client = new CopilotClient();
// Blocking is tolerable here because registration runs once at startup;
// prefer an IHostedService if you want fully async initialization.
client.StartAsync().GetAwaiter().GetResult();
return client;
});
Sessions, on the other hand, are typically created on-demand and disposed when the conversation is complete, so they're usually not registered in the DI container directly.
One subtle but important detail: if you're building a long-running application like a web API or service, you should start your CopilotClient once at application startup and reuse it for the lifetime of the application, creating and disposing sessions as needed. Creating a new client for every request is wasteful and will cause performance problems.
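In a generic-host or ASP.NET Core application, one way to follow that advice is a hosted service that owns the client's lifecycle. This is a sketch using the standard IHostedService from Microsoft.Extensions.Hosting; it assumes StartAsync needs no extra arguments, and it sidesteps the blocking call in the registration example above:

```csharp
using GitHub.Copilot.SDK;
using Microsoft.Extensions.Hosting;

// Starts the shared CopilotClient once at startup and disposes it at shutdown.
public sealed class CopilotClientLifetime(CopilotClient client) : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken) => client.StartAsync();

    public async Task StopAsync(CancellationToken cancellationToken) =>
        await client.DisposeAsync();
}

// Registration:
// services.AddSingleton<CopilotClient>();
// services.AddHostedService<CopilotClientLifetime>();
```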
Multi-Turn Conversations
One of the most powerful features of the GitHub Copilot SDK C# is its support for multi-turn conversations, where each message is understood in the context of previous messages in the same session. This enables natural, contextual interactions where the AI can reference earlier parts of the discussion, follow complex instructions across multiple exchanges, and maintain state throughout the conversation.
The key to multi-turn conversations is session persistence -- as long as you keep the same CopilotSession instance active, the AI retains memory of all previous exchanges. You don't need to manually track or resend conversation history; the session handles this automatically. Each time you call SendAsync() with a new message, it's appended to the session's history, and the AI's response is also added to that history.
Building a multi-turn conversation involves creating a reusable pattern for sending messages and waiting for responses. Here's a complete example that demonstrates a three-turn conversation where each exchange builds on the previous ones:
using GitHub.Copilot.SDK;
using System.Text;
await using var client = new CopilotClient();
await client.StartAsync();
await using var session = await client.CreateSessionAsync(new SessionConfig
{
Model = "gpt-5"
});
// Register a single handler up front; calling session.On() inside the
// helper would add a new handler on every turn.
TaskCompletionSource&lt;string&gt;? pending = null;
StringBuilder? buffer = null;
session.On(evt =>
{
switch (evt)
{
case AssistantMessageEvent msg:
buffer?.Append(msg.Data.Content);
break;
case SessionIdleEvent:
pending?.TrySetResult(buffer?.ToString() ?? string.Empty);
break;
case SessionErrorEvent err:
pending?.TrySetException(new Exception(err.Data.Message));
break;
}
});
async Task&lt;string&gt; AskAsync(string prompt)
{
buffer = new StringBuilder();
pending = new TaskCompletionSource&lt;string&gt;();
await session.SendAsync(new MessageOptions { Prompt = prompt });
return await pending.Task;
}
var turn1 = await AskAsync("What is the Builder pattern in C#?");
Console.WriteLine($"Turn 1: {turn1}\n");
var turn2 = await AskAsync("Can you show me a code example of that?");
Console.WriteLine($"Turn 2: {turn2}\n");
var turn3 = await AskAsync("Now make it use dependency injection.");
Console.WriteLine($"Turn 3: {turn3}\n");
In this example, the helper method AskAsync encapsulates the pattern of sending a message, waiting for the complete response, and returning it as a string. Notice how the second and third prompts use pronouns like "that" and "it" -- the AI understands these references because it has the full conversation history from the session.
This is particularly powerful when combined with the Builder design pattern or other patterns that benefit from iterative refinement. You can ask the AI to create something, then progressively refine it through multiple turns, with each turn building on the previous state.
One important consideration with multi-turn conversations is token limits. Language models have a maximum context window -- the total number of tokens they can process in a single request, including both the conversation history and the new prompt. By default, the GitHub Copilot SDK uses infinite sessions which automatically manage context window limits through background compaction. You can configure this via InfiniteSessionConfig in SessionConfig.InfiniteSessions if you need custom behavior. For applications with specific context management requirements, you can disable infinite sessions and implement your own logic to manage conversation length.
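As a sketch, opting into custom behavior might look like the following; this article doesn't cover the individual InfiniteSessionConfig properties, so treat the initializer body as a placeholder:

```csharp
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    // Configure background compaction of the conversation history here;
    // the available properties depend on the SDK version.
    InfiniteSessions = new InfiniteSessionConfig()
});
```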
Session state is only maintained within a single session instance. If you dispose of a session and create a new one, the conversation history is lost. If you need to persist conversations across application restarts or share them between users, you'll need to implement your own storage mechanism for the message history and potentially reconstruct sessions from saved state.
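The SDK doesn't prescribe a storage format, so one minimal approach is to record each exchange yourself and replay saved prompts into a fresh session after a restart. A sketch using the AskAsync helper from the earlier example (the file name and shape of the saved data are illustrative):

```csharp
using System.Text.Json;

// Record each exchange yourself; nothing here is persisted by the SDK.
var history = new List<string[]>();

var question = "What is the Builder pattern in C#?";
var answer = await AskAsync(question);
history.Add(new[] { question, answer });

// Persist across restarts.
await File.WriteAllTextAsync("history.json", JsonSerializer.Serialize(history));

// After a restart, deserialize the file and replay the prompts (or a
// summary of them) into a brand-new session to rebuild context -- at the
// cost of extra tokens per replayed message.
```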
Frequently Asked Questions: Getting Started with GitHub Copilot SDK in C#
Do I need a separate API key for getting started with GitHub Copilot SDK in C#?
No, the GitHub Copilot SDK C# uses your existing GitHub Copilot subscription through the GitHub Copilot CLI. You don't need to manage separate API keys or authentication tokens. As long as your CLI is authenticated and your Copilot subscription is active, the SDK will work automatically.
Can I use the GitHub Copilot SDK C# in production applications?
Yes, the GitHub Copilot SDK C# is designed for production use. However, you should implement proper error handling, resource management with await using statements, and consider your application's specific requirements for conversation persistence, token limits, and rate limiting. Always test thoroughly before deploying AI-powered features.
What's the difference between streaming and non-streaming mode?
In non-streaming mode, you receive the complete AI response in a single AssistantMessageEvent after the model finishes generating it. In streaming mode with Streaming = true, you receive multiple AssistantMessageDeltaEvent instances as the response is generated, allowing you to display partial results in real-time. Streaming provides better perceived performance but requires slightly more complex event handling code.
How do I handle errors in the GitHub Copilot SDK C#?
Always register a handler for SessionErrorEvent in your event processing code. This event is raised whenever something goes wrong during AI interactions. You can extract error details from evt.Data.Message and use a TaskCompletionSource to convert the error into an exception that can be caught and handled with standard try-catch blocks.
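To make that concrete, here's a sketch of the pattern described above -- the error event is funneled into a TaskCompletionSource so ordinary try-catch code can handle it:

```csharp
var tcs = new TaskCompletionSource();
session.On(evt =>
{
    switch (evt)
    {
        case SessionIdleEvent:
            tcs.TrySetResult();
            break;
        case SessionErrorEvent err:
            // Convert the event into an exception the awaiting code can catch.
            tcs.TrySetException(new Exception(err.Data.Message));
            break;
    }
});

try
{
    await session.SendAsync(new MessageOptions { Prompt = "Hello" });
    await tcs.Task;
}
catch (Exception ex)
{
    Console.Error.WriteLine($"AI request failed: {ex.Message}");
}
```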
Can I have multiple concurrent sessions with the GitHub Copilot SDK C#?
Yes, you can create multiple CopilotSession instances from a single CopilotClient and use them concurrently. Each session maintains its own independent conversation history and can be used for different conversations or contexts simultaneously. Just ensure you properly dispose of all sessions when you're done with them. This is an important pattern to understand when getting started with GitHub Copilot SDK in C#.
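As a sketch of that answer, two sessions can share one client while keeping separate conversation histories (event handlers are omitted here for brevity):

```csharp
using GitHub.Copilot.SDK;

await using var client = new CopilotClient();
await client.StartAsync();

// Each session has its own isolated conversation context.
await using var supportSession = await client.CreateSessionAsync(
    new SessionConfig { Model = "gpt-5" });
await using var reviewSession = await client.CreateSessionAsync(
    new SessionConfig { Model = "gpt-5" });

// The two conversations can proceed concurrently without sharing state.
await Task.WhenAll(
    supportSession.SendAsync(new MessageOptions { Prompt = "Summarize our product FAQ." }),
    reviewSession.SendAsync(new MessageOptions { Prompt = "Review this C# snippet for bugs." }));
```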
Wrapping Up: Your Foundation with the GitHub Copilot SDK C#
You've now learned the fundamentals of getting started with GitHub Copilot SDK in C# -- from installation and setup through creating clients and sessions, handling events and streaming, managing resource lifecycles, and building multi-turn conversations. These core concepts form the foundation for everything you can build with the SDK, whether you're adding AI assistance to existing applications, building chatbots, or experimenting with novel AI-powered features in .NET.
The GitHub Copilot SDK C# brings enterprise-grade AI capabilities directly into your .NET applications with a straightforward, event-driven API that feels natural to C# developers. By leveraging your existing GitHub Copilot subscription and the battle-tested Copilot CLI, you get access to powerful language models without needing to manage API keys, implement custom authentication, or deal with the complexities of direct API integration.
As you continue your journey with the SDK, you'll want to explore more advanced topics. In my next articles in this series, I'll dive deeper into specific capabilities:
- Using tools and function calling to let the AI interact with your application's functions
- Implementing advanced conversation patterns including conversation persistence and history management
- Integrating with Microsoft.Extensions.AI for maximum flexibility and testability
- Performance optimization and production deployment considerations
All of these topics build on the foundation you've established in this article. I encourage you to experiment with the code examples, try building your own simple AI-powered features, and explore how the GitHub Copilot SDK C# can enhance your .NET applications. The best way to learn is by building, and the patterns you've learned here will serve you well as you tackle more complex scenarios.
For the complete picture of everything the SDK can do, check out my comprehensive GitHub Copilot SDK for .NET guide, which serves as the hub for this entire article series. Happy coding, and welcome to the exciting world of AI-powered .NET development!
