Choosing the right AI model for your application can make or break your user experience. The multi-model support in GitHub Copilot SDK gives .NET developers the flexibility to switch between GPT-5, Claude, Gemini, and other models without rewriting application logic. I've been building C# applications with the Copilot SDK, and the ability to swap models based on task requirements, cost constraints, or response characteristics has proven invaluable. This guide walks through configuring different models, comparing their strengths for .NET development scenarios, and building applications that remain agnostic to the underlying AI provider.
Note: The GitHub Copilot SDK for .NET is currently in technical preview. Model availability, supported model identifiers, and API patterns may change as the SDK evolves toward general availability.
How Multi-Model Support Works in the Copilot SDK
The GitHub Copilot SDK offers multi-model support through a unified API surface that abstracts model selection and remains consistent regardless of which AI model powers your application. When you initialize a conversation session, the SDK uses the Model property on the SessionConfig object to route requests to the appropriate backend service. This design means your core application logic -- prompt engineering, response parsing, tool definitions, and conversation flow -- stays identical whether you're using GPT-5, Claude Sonnet, or any other supported model.
The SDK handles the complexity of different model APIs, authentication mechanisms, and response formats behind a consistent interface. You work with CopilotSession and CopilotAgent abstractions that don't leak model-specific details into your business logic. The actual model selection happens at configuration time through dependency injection or runtime factory patterns. This architectural decision gives you enormous flexibility to A/B test models, implement fallback strategies when one model service experiences issues, or route different user segments to different models based on subscription tiers or usage patterns.
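To make the fallback idea concrete, here is a minimal sketch that is deliberately independent of the SDK's types: the `Func<string, string>` delegates stand in for whatever completion call your provider abstraction exposes, and `CompleteWithFallback` is a hypothetical helper name, not an SDK API.

```csharp
using System;

// Hypothetical helper: try the primary model first, and fall back to a
// secondary model when the primary call fails. The delegates stand in for
// any completion call your provider abstraction exposes.
static string CompleteWithFallback(
    Func<string, string> primary,
    Func<string, string> secondary,
    string prompt)
{
    try
    {
        return primary(prompt);
    }
    catch (Exception)
    {
        // Primary model service failed; route the request to the secondary.
        return secondary(prompt);
    }
}

Func<string, string> flaky = _ => throw new InvalidOperationException("primary down");
Func<string, string> backup = p => $"[claude] {p}";

Console.WriteLine(CompleteWithFallback(flaky, backup, "Summarize this PR"));
```

In a real application the delegates would wrap session calls for two different model IDs, and you would likely add retry limits and logging before falling back.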
Multi-Model Support: Supported Models and Model IDs
The GitHub Copilot SDK provides multi-model support through specific model identifiers that you pass during session configuration. Understanding which models are available and their capabilities helps you make informed decisions for your C# applications. The SDK provides access to OpenAI's GPT family including GPT-5 and GPT-4 variants, Anthropic's Claude models including Sonnet, Opus, and Haiku, and Google's Gemini models. Each model brings different strengths in reasoning capability, response speed, context window size, and cost efficiency.
When configuring your session, you specify the model using string identifiers that follow a predictable naming convention. Here's how the major models map to their SDK identifiers and key characteristics:
| Model Family | Model ID | Context Window | Best For | Confirmed in SDK |
|---|---|---|---|---|
| GPT-5 | gpt-5 | 400k tokens | Complex reasoning, code generation | Yes |
| Claude Sonnet 4.5 | claude-sonnet-4.5 | 200k tokens | Balanced tasks, writing | Yes |
Note: The table above shows models confirmed in the SDK README. Other AI service models like gpt-4-turbo, claude-opus-4, claude-haiku-4, and gemini-pro may be supported but are not explicitly listed in the current SDK documentation. Check the GitHub Copilot SDK repository for the most up-to-date list of supported models.
The model ID strings are used directly in your SessionConfig when initializing conversations. The SDK documentation at the GitHub Copilot SDK repository provides the definitive list of supported models, which expands as new models become available.
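Since model identifiers are plain strings, a typo fails only when the backend rejects the request. A small guard that validates IDs against the confirmed set before creating a session fails fast with a clearer message. The set below mirrors the table above; `ValidateModelId` is a hypothetical helper, not part of the SDK.

```csharp
using System;
using System.Collections.Generic;

// Model IDs confirmed in the SDK README (see the table above).
var confirmedModels = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
    "gpt-5",
    "claude-sonnet-4.5"
};

// Hypothetical guard: reject unknown model IDs before a session is created,
// so typos surface immediately instead of as opaque backend errors.
static string ValidateModelId(ISet<string> known, string modelId) =>
    known.Contains(modelId)
        ? modelId
        : throw new ArgumentException($"Unknown model ID: {modelId}", nameof(modelId));

Console.WriteLine(ValidateModelId(confirmedModels, "gpt-5"));
```

As the SDK adds models, the set can be loaded from configuration rather than hard-coded.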
Configuring GPT-5 in Your Application
Configuring GPT-5 in your C# application starts with setting up the SessionConfig with the appropriate model identifier and any model-specific parameters. GPT-5 excels at complex reasoning tasks, code generation with extensive context, and maintaining coherent conversations across long interaction chains. For .NET development scenarios like architectural analysis, refactoring suggestions, or generating comprehensive documentation, GPT-5's advanced reasoning capabilities make it a strong default choice despite its higher cost per token.
Here's a complete example showing GPT-5 configuration with the Copilot SDK:
```csharp
using GitHub.Copilot.SDK;

// Initialize the Copilot client
using var client = new CopilotClient(new CopilotClientOptions
{
    GithubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
});

// Create a GPT-5 session
await using var gptSession = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5"
});

// Subscribe to session events
gptSession.On(evt =>
{
    if (evt is AssistantMessageEvent msg)
        Console.WriteLine(msg.Data.Content);
    if (evt is SessionIdleEvent)
        Console.WriteLine();
});

// Send a message
await gptSession.SendAsync(new MessageOptions
{
    Prompt = "Explain the SOLID principles with C# examples"
});
```
GPT-5 demonstrates strong performance characteristics for C# use cases that require deep technical understanding. I've found it particularly effective when asking for architectural recommendations, explaining complex framework internals, or generating boilerplate code for patterns like Repository or Mediator. The model's training includes extensive .NET documentation and code samples, which translates to responses that follow C# conventions and idiomatic patterns. The tradeoff is cost -- for high-volume scenarios or simple questions, you might want to route to a smaller model.
Using Claude Models with the Copilot SDK
Claude models from Anthropic offer a compelling alternative to GPT models, particularly for tasks that emphasize writing quality, instruction following, and nuanced analysis. Configuring Claude through the Copilot SDK follows the same pattern as GPT-5, but you substitute the model identifier and potentially adjust parameters like temperature based on Claude's response characteristics. Claude Sonnet strikes an excellent balance between capability and cost, while Claude Haiku provides blazing fast responses for simpler queries, and Claude Opus delivers the highest quality output for critical tasks.
Here's how you configure Claude Sonnet in your C# application:
```csharp
using GitHub.Copilot.SDK;

// Initialize the Copilot client
using var client = new CopilotClient(new CopilotClientOptions
{
    GithubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
});

// Create a Claude Sonnet session
await using var claudeSession = await client.CreateSessionAsync(new SessionConfig
{
    Model = "claude-sonnet-4.5"
});

// Subscribe to session events
claudeSession.On(evt =>
{
    if (evt is AssistantMessageEvent msg)
        Console.WriteLine(msg.Data.Content);
    if (evt is SessionIdleEvent)
        Console.WriteLine();
});

// Send a message
await claudeSession.SendAsync(new MessageOptions
{
    Prompt = "Review this C# code for potential issues and suggest improvements:\n" +
             "public class UserService { public void ProcessUser(int id) { var user = db.Users.Find(id); user.LastLogin = DateTime.Now; db.SaveChanges(); } }"
});
```
Claude excels in scenarios where you need careful code review, detailed explanations of subtle bugs, or comprehensive documentation generation. I've used Claude Sonnet for generating XML documentation comments, creating README files, and analyzing exception handling patterns in existing codebases. Claude's training emphasizes safety and helpfulness, which means it tends to provide more caveats and edge case considerations compared to GPT models. As a concrete comparison: when I asked both models to review a complex async/await pattern, Claude provided a more thorough analysis of potential deadlock scenarios and race conditions, while GPT-5 focused more on performance optimizations and alternative implementations.
Building a Model-Agnostic Application
Building applications that don't hard-code model selection gives you flexibility to experiment, optimize costs, and adapt to changing model availability without redeploying code. The pattern I use involves creating an abstraction layer that encapsulates model configuration and provides a factory mechanism for runtime model selection. This approach works naturally with dependency injection in ASP.NET Core and allows you to configure different models for different features or user tiers through configuration rather than code changes.
The abstraction starts with an interface that defines the contract your application code depends on:
```csharp
public interface IAIModelProvider
{
    Task<string> CompleteAsync(
        string prompt,
        CancellationToken cancellationToken = default);

    string ModelIdentifier { get; }
}
```
Next, implement concrete providers for each model you want to support:
```csharp
public class GPT5ModelProvider : IAIModelProvider
{
    private readonly CopilotClient _client;

    public string ModelIdentifier => "gpt-5";

    public GPT5ModelProvider(IConfiguration configuration)
    {
        _client = new CopilotClient(new CopilotClientOptions
        {
            GithubToken = configuration["GitHub:Token"]
        });
    }

    public async Task<string> CompleteAsync(
        string prompt,
        CancellationToken cancellationToken = default)
    {
        await using var session = await _client.CreateSessionAsync(new SessionConfig
        {
            Model = ModelIdentifier
        });

        var responseBuilder = new StringBuilder();
        var idle = new TaskCompletionSource();

        session.On(evt =>
        {
            if (evt is AssistantMessageEvent msg)
                responseBuilder.Append(msg.Data.Content);
            if (evt is SessionIdleEvent)
                idle.TrySetResult();
        });

        await session.SendAsync(new MessageOptions { Prompt = prompt });

        // Wait for the session to signal it has finished responding rather
        // than sleeping for an arbitrary interval.
        await idle.Task.WaitAsync(cancellationToken);
        return responseBuilder.ToString();
    }
}

public class ClaudeModelProvider : IAIModelProvider
{
    private readonly CopilotClient _client;

    public string ModelIdentifier => "claude-sonnet-4.5";

    public ClaudeModelProvider(IConfiguration configuration)
    {
        _client = new CopilotClient(new CopilotClientOptions
        {
            GithubToken = configuration["GitHub:Token"]
        });
    }

    public async Task<string> CompleteAsync(
        string prompt,
        CancellationToken cancellationToken = default)
    {
        await using var session = await _client.CreateSessionAsync(new SessionConfig
        {
            Model = ModelIdentifier
        });

        var responseBuilder = new StringBuilder();
        var idle = new TaskCompletionSource();

        session.On(evt =>
        {
            if (evt is AssistantMessageEvent msg)
                responseBuilder.Append(msg.Data.Content);
            if (evt is SessionIdleEvent)
                idle.TrySetResult();
        });

        await session.SendAsync(new MessageOptions { Prompt = prompt });

        // Wait for the session to signal it has finished responding rather
        // than sleeping for an arbitrary interval.
        await idle.Task.WaitAsync(cancellationToken);
        return responseBuilder.ToString();
    }
}
```
Finally, create a factory that selects the appropriate provider based on configuration or runtime criteria:
```csharp
public class AIModelFactory
{
    private readonly IConfiguration _configuration;
    private readonly Dictionary<string, Func<IAIModelProvider>> _providers;

    public AIModelFactory(IConfiguration configuration)
    {
        _configuration = configuration;
        _providers = new Dictionary<string, Func<IAIModelProvider>>
        {
            ["gpt5"] = () => new GPT5ModelProvider(configuration),
            ["claude"] = () => new ClaudeModelProvider(configuration)
        };
    }

    public IAIModelProvider CreateProvider(string modelKey)
    {
        if (!_providers.TryGetValue(modelKey, out var factory))
        {
            throw new ArgumentException(
                $"Unknown model key: {modelKey}",
                nameof(modelKey));
        }
        return factory();
    }

    public IAIModelProvider CreateDefaultProvider()
    {
        var defaultModel = _configuration["AI:DefaultModel"] ?? "claude";
        return CreateProvider(defaultModel);
    }
}
```
This pattern allows your application code to work against the IAIModelProvider interface without knowing which model is actually being used. You can configure the default model in appsettings.json, select models based on user preferences, or implement A/B testing by randomly selecting models for different request cohorts. The factory approach also makes unit testing straightforward since you can inject mock implementations of IAIModelProvider without dealing with actual AI service calls.
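For the A/B testing case, cohort assignment should be deterministic so a given user always sees the same model across requests. One approach is to hash the user ID with a stable hash (note that `string.GetHashCode` is randomized per process in modern .NET, so it cannot be used here) and route a fixed percentage to the experimental model. The model keys match the factory above; the hash choice and split percentage are illustrative.

```csharp
using System;

// FNV-1a: a simple deterministic string hash, stable across processes.
static uint Fnv1a(string s)
{
    uint hash = 2166136261;
    foreach (char c in s)
    {
        unchecked
        {
            hash ^= c;
            hash *= 16777619;
        }
    }
    return hash;
}

// Route experimentPercent% of users to the experimental model key.
// The keys correspond to the AIModelFactory registrations above.
static string SelectModelKey(string userId, int experimentPercent) =>
    Fnv1a(userId) % 100 < experimentPercent ? "gpt5" : "claude";

Console.WriteLine(SelectModelKey("user-42", 10));
```

Because the assignment depends only on the user ID, you can change the experiment percentage in configuration without shuffling users who are already in a cohort below the new threshold.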
Comparing GPT-5 vs Claude for C# Development Use Cases
Understanding which model performs best for specific C# development tasks helps you optimize both quality and cost in your applications. I've tested both GPT-5 and Claude Sonnet across common developer scenarios to identify their relative strengths. The comparison isn't about declaring one model universally better, but rather understanding which tool fits which job based on your specific requirements.
Here's a head-to-head comparison based on real-world .NET development tasks:
| Task Category | GPT-5 | Claude Sonnet 4.5 | Winner |
|---|---|---|---|
| Code Generation (complex) | Excellent architectural patterns, handles nuance well | Very good, more conservative, adds safety checks | GPT-5 |
| Code Generation (simple) | Good quality | Accurate for straightforward tasks | Tie |
| Documentation Writing | Good technical accuracy | Superior writing quality, better structure | Claude |
| Bug Analysis | Identifies issues quickly | More thorough, considers edge cases | Claude |
| API Design | Strong REST conventions | Excellent design rationale | Tie |
| Data Extraction | Very accurate with complex JSON | Accurate, better error handling | Claude |
| Long Conversations | Maintains context well | Maintains context well | Tie |
Note: Actual response speeds and costs vary by deployment, usage patterns, and current pricing. For current pricing, see platform.openai.com and docs.anthropic.com.
The cost versus quality tradeoff becomes significant at scale. For a high-volume documentation generation pipeline, Claude Sonnet provides excellent quality at lower cost. That math works strongly in Claude's favor. However, for complex architectural analysis where you need the model to reason through multiple layers of abstraction and make sophisticated tradeoffs, GPT-5's superior reasoning capability may justify the premium. I've settled on a hybrid approach: use Claude Sonnet as the default for most tasks, and route to GPT-5 for complex reasoning, architectural decisions, and scenarios where I need the absolute best response quality.
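The hybrid approach above reduces to a small lookup: default to Claude Sonnet and escalate named "complex" task categories to GPT-5. The category names and the mapping below are illustrative, not part of the SDK; the returned strings are the model IDs from the table earlier.

```csharp
using System;
using System.Collections.Generic;

// Task categories that justify the premium for GPT-5's deeper reasoning.
// The names here are illustrative; define your own taxonomy.
var complexTasks = new HashSet<string>
{
    "architecture-review",
    "refactoring-plan",
    "state-machine-design"
};

// Default to Claude Sonnet; escalate complex categories to GPT-5.
string RouteModel(string taskCategory) =>
    complexTasks.Contains(taskCategory) ? "gpt-5" : "claude-sonnet-4.5";

Console.WriteLine(RouteModel("architecture-review"));
Console.WriteLine(RouteModel("doc-comments"));
```

Keeping the category set in configuration rather than code lets you rebalance cost and quality without a redeploy.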
BYOK and Enterprise Model Configuration
Bring-your-own-key patterns allow enterprises to use their existing Azure OpenAI or Anthropic API credentials rather than routing through GitHub's hosted Copilot service. This configuration provides better cost visibility, allows custom rate limiting, and gives you direct access to Azure features like private endpoints, managed identities, and compliance certifications. The Copilot SDK supports BYOK through custom provider configurations.
For Azure OpenAI integration, you configure the session with a custom provider:
```csharp
using GitHub.Copilot.SDK;

using var client = new CopilotClient(new CopilotClientOptions
{
    GithubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
});

await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    Provider = new ProviderConfig
    {
        Type = "openai",
        BaseUrl = "https://your-azure-openai-resource.openai.azure.com/",
        ApiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
    }
});
```
This approach routes requests directly to your Azure OpenAI resource, giving you full control over billing, monitoring, and compliance. Azure OpenAI supports managed identity authentication, which eliminates API keys from your configuration entirely in Azure-hosted applications. The pattern works similarly for Claude through Anthropic's API or through Azure's Claude offerings where available.
Enterprise deployment considerations extend beyond just API configuration. You need to think about rate limiting strategies to prevent runaway costs, caching mechanisms to avoid redundant API calls for identical prompts, and fallback logic when your primary model service experiences outages. I've implemented a circuit breaker pattern that automatically routes to a secondary model when the primary model's error rate exceeds a threshold. Additionally, consider implementing prompt sanitization to prevent sensitive data from being sent to external AI services, and audit logging to track which users are generating which prompts for compliance and cost allocation purposes.
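The circuit breaker I mentioned can be sketched in a few lines of state: track successes and failures for the primary model, and once the observed error rate crosses a threshold, route subsequent requests to the secondary model. The thresholds, minimum sample count, and lack of a time window or half-open reset here are deliberate simplifications.

```csharp
using System;

// Minimal circuit-breaker sketch: route away from the primary model when
// its observed error rate exceeds a threshold. Window management and
// half-open recovery are omitted for brevity.
int failures = 0, total = 0;
const double threshold = 0.5;
const int minSamples = 4;

void Record(bool success)
{
    total++;
    if (!success) failures++;
}

string ChooseModel() =>
    total >= minSamples && (double)failures / total > threshold
        ? "claude-sonnet-4.5"   // secondary model
        : "gpt-5";              // primary model

Record(false); Record(false); Record(false); Record(true);
Console.WriteLine(ChooseModel()); // prints "claude-sonnet-4.5": 3/4 failures tripped the breaker
```

Production implementations typically use a sliding window and a cool-down period before retrying the primary; libraries like Polly provide these policies off the shelf.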
Frequently Asked Questions
Which AI models does the GitHub Copilot SDK support?
The GitHub Copilot SDK provides access to multiple AI model families through a unified interface. Supported models include OpenAI's GPT-5 and GPT-4 variants, Anthropic's Claude family including Opus, Sonnet, and Haiku, and Google's Gemini models. Each model is accessed through a specific model identifier string that you pass during session configuration. The SDK abstracts the underlying API differences so your application code remains consistent regardless of which model you select. The definitive list of supported models and their identifiers is maintained in the official Copilot SDK documentation and expands as new models become available.
Can I switch models mid-conversation in the Copilot SDK?
The model selection happens at the session level when you initialize your CopilotSession or IChatClient, and changing models mid-conversation requires creating a new session with a different model configuration. This design reflects the reality that different models have different context windows, token counting mechanisms, and conversation state management. If you need to switch models within a logical conversation flow, you can transfer the conversation history by replaying previous messages to the new session. For most use cases, I recommend designing your application to select the appropriate model upfront based on the task type rather than switching mid-conversation, as this provides better performance and avoids the complexity of state transfer.
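The history transfer mentioned above can be as simple as folding prior turns into the first prompt of the new session. This sketch is SDK-independent; the role labels and prompt framing are illustrative, since the SDK does not prescribe a replay format.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Fold prior conversation turns into a single context prompt for the first
// message of a new session on a different model. The framing is illustrative.
static string BuildReplayPrompt(
    IEnumerable<(string Role, string Content)> history, string newPrompt)
{
    var sb = new StringBuilder();
    sb.AppendLine("Previous conversation:");
    foreach (var (role, content) in history)
        sb.AppendLine($"{role}: {content}");
    sb.AppendLine();
    sb.Append(newPrompt);
    return sb.ToString();
}

var history = new List<(string, string)>
{
    ("user", "What is a record type in C#?"),
    ("assistant", "A record is a reference type with value-based equality.")
};

Console.WriteLine(BuildReplayPrompt(history, "Show an example with inheritance."));
```

Keep in mind the combined transcript must fit within the target model's context window, which is one reason up-front model selection is usually simpler.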
How does BYOK work with the GitHub Copilot SDK?
Bring-your-own-key implementations allow you to use your own API credentials for Azure OpenAI, Anthropic, or other providers rather than routing through GitHub's Copilot infrastructure. The SDK supports this through custom provider configurations in the SessionConfig. With BYOK, you configure the endpoint URL and authentication directly in your application, giving you control over billing, rate limits, and regional deployment. This approach makes sense for enterprises that need usage attribution, compliance certifications like HIPAA or SOC2, or integration with existing Azure infrastructure using managed identities. The tradeoff is additional configuration complexity and the need to manage API keys or identity credentials yourself rather than relying on GitHub's token infrastructure.
Is GPT-5 or Claude better for .NET code generation?
Both models excel at .NET code generation but with different strengths. GPT-5 demonstrates superior performance for complex architectural patterns, framework-specific implementations, and scenarios requiring deep reasoning about design tradeoffs. It generates code that tends to be more feature-complete but occasionally more complex than necessary. Claude Sonnet produces code that emphasizes safety, includes more defensive checks, and provides excellent explanatory comments. For straightforward CRUD operations, data access patterns, and common framework usage, Claude Sonnet delivers comparable quality to GPT-5 at lower cost and faster response times. I use GPT-5 when generating complex state machines, implementing custom middleware, or designing new architectural patterns, and Claude Sonnet for everyday coding tasks, documentation generation, and code reviews.
Conclusion
Multi-model support in the GitHub Copilot SDK empowers .NET developers to choose the right AI model for each specific task rather than committing to a single provider. I've shown you how to configure both GPT-5 and Claude models, build model-agnostic applications through abstraction patterns, and make informed decisions about which model fits your use case based on performance, cost, and capability tradeoffs. The flexibility to switch models without rewriting application logic means you can optimize both quality and cost as new models emerge and pricing evolves.
For comprehensive coverage of the Copilot SDK ecosystem, see the GitHub Copilot SDK for .NET: Complete Developer Guide. To understand the foundational concepts, review CopilotClient and CopilotSession: Core Concepts in C#. For advanced capabilities like custom tools and multi-agent systems, check out Advanced GitHub Copilot SDK: Tools, Hooks, and Multi-Agent and Custom AI Tools with AIFunctionFactory in GitHub Copilot SDK. If you're exploring broader AI orchestration patterns in .NET, the Semantic Kernel in C#: Complete AI Orchestration Guide provides complementary context for enterprise AI application architecture.

