Semantic Kernel in C#: Complete AI Orchestration Guide

If you're a .NET developer looking to integrate large language models into your applications, Semantic Kernel in C# is one of the most powerful tools available. Microsoft's open-source AI orchestration SDK lets you build enterprise-grade AI applications with plugins, multi-agent workflows, and retrieval-augmented generation -- all within the familiar .NET ecosystem.

In this complete guide, I'll walk you through everything you need to know about Semantic Kernel in C#: what it is, how it works, and how to use its core pillars -- plugins, agents, and memory/RAG -- to ship real AI-powered .NET applications.

What Is Semantic Kernel?

Semantic Kernel (SK) is an open-source SDK from Microsoft that enables .NET developers to integrate AI models like OpenAI's GPT series, Azure OpenAI, and other LLMs directly into their applications. It provides a structured framework for orchestrating prompts, functions, memory, and agents in a way that is modular, testable, and enterprise-ready.

As of early 2026, the latest stable release is Microsoft.SemanticKernel v1.71.0, targeting .NET 8.0 and .NET Standard 2.0. The SDK has matured significantly from its earlier preview versions, and with the addition of the Agent Framework and first-class Model Context Protocol (MCP) support, it's now a complete platform for production AI development.

The official documentation lives at learn.microsoft.com/en-us/semantic-kernel, and the source code is available on GitHub.

Why Semantic Kernel Over Raw API Calls?

You could call the OpenAI API directly using HttpClient or the official OpenAI .NET SDK. So why add Semantic Kernel to the mix?

  • Abstraction layer: Swap between OpenAI, Azure OpenAI, HuggingFace, and local models with minimal code changes
  • Plugin system: Expose your existing C# methods to LLMs as callable tools, enabling function calling patterns
  • Agent framework: Build autonomous agents that plan, use tools, and collaborate with other agents
  • Memory and RAG: First-class support for vector stores, embeddings, and retrieval-augmented generation
  • Observability: Built-in OpenTelemetry hooks, structured logging, and safety features
  • Enterprise patterns: Supports dependency injection, configuration, and the extension patterns your team already uses

If you've been exploring getting started with AI coding tools, Semantic Kernel in C# is the natural next step for building production-grade applications.

Getting Started with Semantic Kernel in C#

Setting up Semantic Kernel in C# is straightforward. You'll need .NET 8 or later, an API key from OpenAI or Azure OpenAI, and about five minutes to have your first AI-orchestrated application running. The following steps walk you through installation, kernel configuration, and your first prompt invocation.

Installation

Install the core NuGet package into your .NET 8+ project:

dotnet add package Microsoft.SemanticKernel

For Azure OpenAI support specifically, add the Azure connector:

dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI

Your First Kernel

The Kernel class is the heart of Semantic Kernel. You build it using Kernel.CreateBuilder() and configure your AI service connectors:

using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var kernel = builder.Build();

// Invoke a simple prompt
var result = await kernel.InvokePromptAsync("Explain dependency injection in one sentence.");
Console.WriteLine(result);

The Kernel object is your central orchestration hub. It manages AI service connections, registered plugins, and execution context for all AI operations.

Dependency Injection Integration

Semantic Kernel integrates naturally with .NET's dependency injection container, which is a pattern you're already familiar with if you've worked with IServiceCollection for dependency injection in C#:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;

var services = new ServiceCollection();
services.AddKernel()
    .AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var provider = services.BuildServiceProvider();
var kernel = provider.GetRequiredService<Kernel>();

This makes it straightforward to register SK as a service in ASP.NET Core applications and share it across your application layers.
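In an ASP.NET Core app, the same registration happens on the web host's service collection. Here's a minimal sketch of what that wiring might look like -- the `/ask` endpoint, the `AskRequest` record, and the `OpenAI:ApiKey` configuration key are illustrative names, not SK conventions:

```csharp
// Program.cs sketch for a minimal ASP.NET Core API using Semantic Kernel.
// Assumes the Microsoft.SemanticKernel NuGet package is installed and an
// OpenAI API key is available via configuration.
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);

// AddKernel registers a Kernel plus the shared services it depends on.
builder.Services.AddKernel()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o",
        apiKey: builder.Configuration["OpenAI:ApiKey"]!);

var app = builder.Build();

// The Kernel is resolved from the container per request, like any other service.
app.MapPost("/ask", async (Kernel kernel, AskRequest request) =>
{
    var result = await kernel.InvokePromptAsync(request.Question);
    return Results.Ok(result.ToString());
});

app.Run();

public record AskRequest(string Question);
```

Because the kernel participates in DI like any other service, you can also inject it into controllers, minimal API handlers, or background services without any SK-specific plumbing.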

Pillar 1: Plugins

Plugins are the core extension mechanism in Semantic Kernel. They expose C# methods to LLMs as callable functions -- enabling AI agents to interact with your application logic, external APIs, and databases.

Native Function Plugins

The simplest type of plugin is a native function plugin: a C# class with methods decorated with [KernelFunction] and [Description] attributes.

using Microsoft.SemanticKernel;
using System.ComponentModel;

public class WeatherPlugin
{
    [KernelFunction, Description("Gets the current temperature for a given city")]
    public async Task<string> GetTemperatureAsync(
        [Description("The city name, e.g. Seattle")] string city)
    {
        // In a real implementation, call a weather API here
        await Task.Delay(100); // Simulate async work
        return $"The current temperature in {city} is 18°C.";
    }

    [KernelFunction, Description("Gets the current UTC date and time")]
    public string GetCurrentTime() => DateTime.UtcNow.ToString("O");
}

The [Description] attributes are critical -- they tell the LLM what each function does and when to call it. Always write clear, complete descriptions.

Register and use the plugin:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI; // for OpenAIPromptExecutionSettings

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", apiKey);
builder.Plugins.AddFromType<WeatherPlugin>();
var kernel = builder.Build();

// The LLM can now call WeatherPlugin functions automatically
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "What's the current weather in Seattle?",
    new KernelArguments(settings));

Console.WriteLine(result);

This plugin architecture approach maps directly to patterns you may already use for modular software design. The difference is that instead of just your application consuming the plugin, the LLM can decide when and how to invoke it.

OpenAPI Plugins

Semantic Kernel can import any REST API described by an OpenAPI spec as a plugin, making it trivial to give AI agents access to external services:

var plugin = await kernel.ImportPluginFromOpenApiAsync(
    "petstore",
    new Uri("https://petstore3.swagger.io/api/v3/openapi.json"));

// The LLM can now call Petstore API endpoints as functions

Full OpenAPI plugin documentation is available at Microsoft Learn.

MCP Plugin Support

Semantic Kernel 1.x includes first-class support for the Model Context Protocol (MCP) -- the emerging standard for connecting AI models to external tools and data sources. This means your SK agents can consume any MCP-compatible server as a plugin.
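The wiring typically goes: start an MCP client against a server, list its tools, and register them on the kernel as functions. The sketch below follows the pattern from the SK team's MCP announcement and assumes the ModelContextProtocol NuGet package; the server name and command are placeholders:

```csharp
// Sketch: consuming an MCP server as an SK plugin.
// Assumes: ModelContextProtocol NuGet package; an existing `kernel` instance;
// the server name/command below are placeholders for a real MCP server.
#pragma warning disable SKEXP0001 // AsKernelFunction is experimental
using Microsoft.SemanticKernel;
using ModelContextProtocol.Client;

var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new StdioClientTransportOptions
    {
        Name = "EverythingServer", // placeholder server name
        Command = "npx",
        Arguments = ["-y", "@modelcontextprotocol/server-everything"],
    }));

// Discover the server's tools and expose them to the kernel as callable functions.
var tools = await mcpClient.ListToolsAsync();
kernel.Plugins.AddFromFunctions(
    "McpTools",
    tools.Select(t => t.AsKernelFunction()));
```

From here, `FunctionChoiceBehavior.Auto()` lets the LLM invoke MCP tools exactly the way it invokes native plugins.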

Pillar 2: Agents and the Agent Framework

The Agent Framework in Semantic Kernel provides the building blocks for autonomous AI systems that can plan, use tools, and collaborate with each other.

ChatCompletionAgent

ChatCompletionAgent is the foundational agent type. It wraps a chat completion model with a system prompt and plugins:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

var agent = new ChatCompletionAgent
{
    Name = "CodeReviewer",
    Instructions = @"You are an expert C# code reviewer. 
        Review the provided code and provide actionable feedback on:
        - Code quality and readability
        - Potential bugs or edge cases
        - Performance considerations
        - SOLID principle adherence",
    Kernel = kernel
};

// Create a thread to maintain conversation history
var thread = new ChatHistoryAgentThread();

// Invoke the agent
await foreach (var response in agent.InvokeAsync("Review this code: ...", thread))
{
    Console.Write(response.Content);
}

Multi-Agent Orchestration

One of the most compelling features of Semantic Kernel in C# is multi-agent orchestration. You can create teams of specialized agents that collaborate on complex tasks. The Agent Framework supports several orchestration patterns documented at learn.microsoft.com:

  • Sequential: Agents execute in a pipeline, each building on the previous agent's output
  • Concurrent: Multiple agents work in parallel on independent subtasks
  • Group Chat: Agents participate in a shared conversation and can respond to each other
  • Handoff: One agent transfers control to another based on conditions

A minimal group chat looks like this:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;
using Microsoft.SemanticKernel.ChatCompletion;

// Define specialized agents
var researcher = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = "You research topics and gather factual information.",
    Kernel = kernel
};

var writer = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = "You write clear, engaging content based on research.",
    Kernel = kernel
};

// Create a group chat with a termination condition
var groupChat = new AgentGroupChat(researcher, writer)
{
    ExecutionSettings = new AgentGroupChatSettings
    {
        // ApprovalTerminationStrategy is a custom TerminationStrategy subclass
        // (see the SK samples); it ends the chat once a message signals approval
        TerminationStrategy = new ApprovalTerminationStrategy()
    }
};

groupChat.AddChatMessage(new ChatMessageContent(AuthorRole.User,
    "Write a summary of Semantic Kernel's plugin system."));

await foreach (var response in groupChat.InvokeAsync())
{
    Console.WriteLine($"[{response.AuthorName}]: {response.Content}");
}

Microsoft Agent Framework

SK has converged with AutoGen via the Microsoft Agent Framework, announced in early 2026. This allows AutoGen-based agents and SK agents to interoperate in the same multi-agent system -- a significant milestone for building complex AI workflows. You can read about this integration at the Semantic Kernel Dev Blog.

Pillar 3: Memory and RAG

Retrieval-augmented generation (RAG) allows AI agents to ground their responses in your own data -- reducing hallucinations and enabling domain-specific knowledge. Semantic Kernel in C# provides a complete RAG pipeline through its vector store abstractions.
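Before any retrieval can happen, source documents are usually split into overlapping chunks so each embedding covers a coherent slice of text. SK doesn't mandate a chunking strategy; a minimal sliding-window chunker in plain C# (a hypothetical helper, not an SK API) might look like:

```csharp
using System;
using System.Collections.Generic;

public static class TextChunker
{
    // Splits text into chunks of at most chunkSize characters, overlapping
    // by `overlap` characters so context isn't lost at chunk boundaries.
    public static List<string> Chunk(string text, int chunkSize, int overlap)
    {
        if (chunkSize <= overlap)
            throw new ArgumentException("chunkSize must exceed overlap");

        var chunks = new List<string>();
        int step = chunkSize - overlap;
        for (int start = 0; start < text.Length; start += step)
        {
            int length = Math.Min(chunkSize, text.Length - start);
            chunks.Add(text.Substring(start, length));
            if (start + length >= text.Length) break; // reached end of text
        }
        return chunks;
    }
}
```

Each chunk is then embedded and upserted into a vector store; at query time, the chunks nearest to the user's question are injected into the prompt as grounding context.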

Vector Store Connectors

SK ships with connectors for all major vector stores, so you can prototype locally and promote to a managed cloud store with minimal code changes. Full connector documentation is available at Microsoft Learn:

Store                  NuGet Package
In-Memory (dev/test)   Microsoft.SemanticKernel.Connectors.InMemory
Azure AI Search        Microsoft.SemanticKernel.Connectors.AzureAISearch
Qdrant                 Microsoft.SemanticKernel.Connectors.Qdrant
Chroma                 Microsoft.SemanticKernel.Connectors.Chroma

Adding RAG to an Agent

The TextSearchProvider API wires a vector store into an agent thread, enabling automatic retrieval at query time:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Connectors.InMemory;
using Microsoft.SemanticKernel.Data;      // TextSearchStore, TextSearchProvider
using Microsoft.SemanticKernel.Embeddings;

// Setup embedding generation
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", apiKey);
builder.AddOpenAITextEmbeddingGeneration("text-embedding-3-small", apiKey);
var kernel = builder.Build();

// Create in-memory vector store (use Qdrant or Azure AI Search in production)
var vectorStore = new InMemoryVectorStore(new InMemoryVectorStoreOptions
{
    EmbeddingGenerator = kernel.GetRequiredService<ITextEmbeddingGenerationService>()
        .AsEmbeddingGenerator(1536)
});

// Note: the TextSearchStore and TextSearchProvider APIs below are experimental and may change

// Upsert documents
var store = new TextSearchStore<string>(vectorStore, "KnowledgeBase", 1536);
await store.UpsertTextAsync(new[]
{
    "Semantic Kernel is Microsoft's AI orchestration SDK for .NET.",
    "Plugins in Semantic Kernel expose C# methods to LLMs as callable functions.",
    "The Agent Framework enables multi-agent AI workflows."
});

// Wire RAG into an agent thread
var agent = new ChatCompletionAgent
{
    Name = "KnowledgeAgent",
    Instructions = "Answer questions using only the provided context.",
    Kernel = kernel
};

var thread = new ChatHistoryAgentThread();
thread.AIContextProviders.Add(new TextSearchProvider(store));

await foreach (var response in agent.InvokeAsync("What is Semantic Kernel?", thread))
{
    Console.WriteLine(response.Content);
}

This pattern is explained in detail in the official SK Agent RAG guide.

Observability and Production Considerations

Semantic Kernel in C# includes OpenTelemetry integration for production observability. You can trace every AI call, plugin invocation, and agent action:

using OpenTelemetry;
using OpenTelemetry.Trace;

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*")
    .AddConsoleExporter()
    .Build();

For production applications, you should also consider:

  • Prompt injection defense: Validate and sanitize user inputs before passing them to the kernel
  • Token budget management: Set MaxTokens on execution settings to prevent runaway costs
  • Error handling and retries: Use the Polly resilience library for transient fault handling around AI API calls
  • Responsible AI: Filter outputs using content safety APIs where appropriate

These concerns align with keeping AI behavior controlled and predictable, which is critical in any production deployment.
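As a concrete illustration of the prompt injection point above, a lightweight input guard can reject obviously suspicious input before it ever reaches the kernel. The deny-list below is illustrative only -- real defenses layer checks like this with length limits, output filtering, and content safety APIs:

```csharp
using System;

public static class PromptGuard
{
    // Naive deny-list of phrases commonly seen in prompt injection attempts.
    // Illustrative only: a deny-list alone is not a complete defense.
    private static readonly string[] SuspiciousPhrases =
    {
        "ignore previous instructions",
        "ignore all previous instructions",
        "disregard your instructions",
        "reveal your system prompt",
    };

    public static bool IsSuspicious(string input)
    {
        foreach (var phrase in SuspiciousPhrases)
        {
            if (input.Contains(phrase, StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }
}
```

You'd call `PromptGuard.IsSuspicious(userInput)` before invoking the kernel and return a refusal (or route to human review) when it trips, while still relying on server-side safety filters as the real backstop.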

Semantic Kernel vs. Direct LLM APIs

Choosing between Semantic Kernel in C# and direct LLM API calls comes down to complexity, scale, and how much you value a structured abstraction layer. The right answer depends on your specific application requirements -- here's a clear framework for deciding.

When should you use Semantic Kernel in C# versus calling LLM APIs directly?

Use Semantic Kernel when you:

  • Need to switch between multiple AI providers
  • Are building multi-agent workflows
  • Want plugin/function calling with automatic invocation
  • Need RAG with vector stores
  • Are building enterprise applications that need observability and testing

Use direct API calls when you:

  • Have a simple, single-purpose AI integration
  • Need full control over request/response without an abstraction layer
  • Are exploring or prototyping quickly

For most production .NET applications, Semantic Kernel provides enough value through its abstractions and patterns to justify the dependency.

FAQ

The following questions come up most frequently when developers start evaluating or using Semantic Kernel in C#. If your question isn't answered here, the official documentation and the GitHub Discussions are excellent resources.

What is Semantic Kernel in C# used for?

Semantic Kernel in C# is Microsoft's open-source AI orchestration SDK for .NET. It's used for building AI-powered applications that need to integrate LLMs (OpenAI, Azure OpenAI, etc.) with plugins, multi-agent workflows, and retrieval-augmented generation.

Is Semantic Kernel free to use?

Yes -- Semantic Kernel itself is open-source (MIT license) and free. You pay for the underlying AI services you connect to (OpenAI API, Azure OpenAI, etc.), but the SDK has no cost.

What .NET version does Semantic Kernel support?

As of v1.71.0, Semantic Kernel targets .NET 8.0 and .NET Standard 2.0. This means it works with .NET 8, .NET 9, and anywhere .NET Standard 2.0 is supported.

How does Semantic Kernel compare to LangChain?

Semantic Kernel is Microsoft's first-party SDK for .NET (and Python/Java), with deep integration into the .NET ecosystem, dependency injection, and Azure services. LangChain is Python-first and more community-driven. For .NET developers, Semantic Kernel provides a more native experience with better tooling integration.

Does Semantic Kernel support local LLMs?

Yes -- SK supports Ollama and other local model providers through community connectors, allowing you to run AI applications without sending data to external APIs.

How do Semantic Kernel plugins work?

Plugins are C# classes with methods decorated with [KernelFunction] and [Description] attributes. You register them on the kernel, and when you enable FunctionChoiceBehavior.Auto(), the LLM can decide when and how to call your functions based on user input.

What is the difference between ChatCompletionAgent and AssistantAgent in Semantic Kernel?

ChatCompletionAgent uses chat completion APIs (stateless by default, history managed in your code). AssistantAgent integrates with the OpenAI Assistants API, which manages conversation threads and tool state server-side. Most C# applications start with ChatCompletionAgent for simplicity and control.

Wrapping Up

Semantic Kernel in C# is a mature, production-ready framework for building AI-orchestrated .NET applications. Its three core pillars -- plugins (exposing your code to LLMs), agents (autonomous AI workflows), and memory/RAG (grounding AI in your data) -- cover the full spectrum of what most AI applications need.

Throughout this series, we'll go deep on each of these pillars. Next up: a complete guide to the Semantic Kernel Plugin system, including native functions, prompt functions, OpenAPI integration, and MCP support.

If you're new to dependency injection patterns in .NET, the automatic dependency injection guide for .NET is a great foundation before diving deeper into SK's DI-first design.
