Semantic Kernel Agents in C#: Complete Guide to AI Agents

Building intelligent applications with AI is becoming essential for modern software development, and Semantic Kernel agents in C# provide a powerful abstraction for creating autonomous AI-powered components. Rather than manually managing chat completions and context, agents encapsulate identity, instructions, and capabilities into reusable units that can work independently or collaborate in multi-agent systems. I've found that understanding how to build these agents can significantly influence how we architect AI applications -- moving from simple request-response patterns to sophisticated orchestrations where multiple AI personalities collaborate to solve complex problems. In this guide, I'll walk you through everything you need to know, from basic ChatCompletionAgent patterns to advanced multi-agent orchestration with AgentGroupChat, with practical code examples you can use in your .NET projects.

What Are Semantic Kernel Agents?

Note: The Semantic Kernel agent functionality is currently experimental -- as an experimental feature, breaking changes may occur between versions without prior notice. Install the agent package with the --prerelease flag:

dotnet add package Microsoft.SemanticKernel.Agents.Core --prerelease

In Semantic Kernel, an agent represents a specialized AI entity with a distinct identity, purpose, and set of capabilities. Unlike basic chat completions where you send prompts and receive responses, Semantic Kernel agents in C# maintain their own persona through instructions, have a name that identifies them in conversations, and can be equipped with plugins that extend their functionality. I think of Semantic Kernel agents as cohesive units that combine a kernel instance with specific instructions and execution settings to create a consistent AI personality.

The core difference between using agents and calling chat completions directly lies in abstraction and orchestration. When you work with raw chat completion APIs, you're responsible for managing system prompts, maintaining conversation history, and coordinating multiple AI interactions. Semantic Kernel agents in C# handle much of this complexity for you -- each agent knows who it is through its Name property, what it should do through its Instructions property, and what tools it can use through the Kernel and ExecutionSettings. This abstraction becomes particularly powerful when you need multiple AI entities working together, which is where the Semantic Kernel agent framework truly shines.

The agent abstraction in Semantic Kernel follows a clear pattern where every agent consists of three fundamental elements: a Kernel instance that provides the underlying AI model and plugins, Instructions that define the agent's behavior and persona through a system prompt, and an identity expressed through the Name property. This combination creates a reusable AI component that maintains consistency across interactions and can participate in complex multi-agent scenarios that would be difficult to orchestrate with direct API calls.

ChatCompletionAgent: Building Your First Semantic Kernel Agent in C#

The ChatCompletionAgent is the primary agent type you'll use in Semantic Kernel for most scenarios. It wraps an LLM chat completion model with agent-specific properties and provides a clean interface for conversational AI. I've found that building Semantic Kernel agents in C# with ChatCompletionAgent strikes the right balance between simplicity and power -- it's straightforward enough for basic use cases but supports advanced features like automatic tool calling and multi-turn conversations.

Creating your first agent requires just a few key properties. The Name identifies the agent in conversations and logs, the Instructions property contains the system prompt that defines the agent's persona and behavior, and the Kernel provides the underlying AI model and any registered plugins. You can also supply prompt execution settings through the Arguments property to control behavior like temperature, token limits, and tool calling policies.

Here's a complete working example that demonstrates creating and using a ChatCompletionAgent:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

var agent = new ChatCompletionAgent
{
    Name = "TechWriter",
    Instructions = "You are an expert technical writer. Explain complex concepts clearly and concisely for .NET developers.",
    Kernel = kernel,
    Arguments = new KernelArguments(new OpenAIPromptExecutionSettings
    {
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
    })
};

var thread = new ChatHistoryAgentThread();
await foreach (var response in agent.InvokeAsync("Explain what dependency injection is.", thread))
{
    Console.WriteLine(response.Content);
}

The ChatHistoryAgentThread in this example manages the conversation state between you and the agent. When you invoke the agent with a message, it adds your message to the thread's history, sends the entire context to the underlying LLM, and streams back the response. Each invocation maintains the full conversation history, allowing the agent to reference previous exchanges and maintain context across multiple turns. This pattern is particularly useful when building chatbots, virtual assistants, or any scenario where conversational context matters.
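To see that context retention in action, you can continue the conversation on the same thread. This sketch assumes the agent and thread variables from the example above; because the thread already contains the dependency injection exchange, the agent can resolve "it" without being told again:

```csharp
// Continuing the previous example: a follow-up question on the same thread.
// The thread carries the earlier exchange, so the agent understands what
// "it" refers to and can build on its prior answer.
await foreach (var response in agent.InvokeAsync(
    "Can you show a short example of it in ASP.NET Core?", thread))
{
    Console.WriteLine(response.Content);
}
```

Each additional invocation appends to the same history, so the thread is the unit you keep alive for the lifetime of a conversation.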

Agent Instructions and Personas

The Instructions property of an agent is where you define its personality, expertise, and behavioral guidelines through what's effectively a system prompt. I've learned that well-crafted instructions are crucial for agent reliability -- they set the tone, establish boundaries, and guide the model toward producing consistent outputs that align with your application's needs. The quality of your agent's instructions directly impacts how useful and predictable it will be in production scenarios.

When crafting agent instructions, I focus on three key elements: identity, expertise, and constraints. The identity defines who the agent is in a conversational sense, like "You are a senior software architect specializing in .NET microservices." The expertise section outlines what the agent knows and should focus on, such as design patterns, best practices, or domain-specific knowledge. Constraints specify what the agent should and shouldn't do, including output format requirements, tone guidelines, and any prohibited behaviors or topics.

The instructions you provide become part of the system message that frames every interaction with the underlying LLM. When a user sends a message to your agent, Semantic Kernel constructs the full prompt by combining your instructions, any relevant conversation history, and the user's message. This means your instructions persist across the entire conversation, continuously shaping how the agent interprets questions and formulates responses.

Here are some best practices I've developed for writing effective agent instructions:

Start with a clear identity statement that defines the agent's role and perspective. Be specific about the agent's expertise areas and the types of questions it's well-suited to answer. Include constraints about response length, format, or style when consistency matters for your application. Avoid overly complex or contradictory instructions that might confuse the model. Test your instructions with various inputs to ensure the agent behaves as expected across different scenarios.

The difference between a generic AI assistant and a valuable specialized agent often comes down to instruction quality. I've seen agents become dramatically more useful when their instructions are refined through iteration and testing. If you're building an agent for code review, for example, instructions like "You are a code reviewer focused on security vulnerabilities and performance issues in C# applications" will produce much more relevant feedback than generic instructions about being a helpful assistant.
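Putting the identity, expertise, and constraints structure together, instructions for a code-review agent might look like the sketch below. The agent name and wording are illustrative, not from any official sample:

```csharp
// A hypothetical review agent whose instructions follow the
// identity / expertise / constraints structure described above.
var reviewerAgent = new ChatCompletionAgent
{
    Name = "SecurityReviewer",
    Instructions =
        // Identity: who the agent is.
        "You are a senior code reviewer specializing in C# and .NET. " +
        // Expertise: what it should focus on.
        "Focus on security vulnerabilities (injection, hard-coded secrets, " +
        "unsafe deserialization) and clear performance issues. " +
        // Constraints: output format and boundaries.
        "Respond with a bulleted list of findings, each with a severity and " +
        "a suggested fix. If the code looks fine, say so briefly. " +
        "Do not comment on formatting or naming style.",
    Kernel = kernel
};
```

Keeping the three parts explicit like this makes instructions easier to review and iterate on as you test the agent against real inputs.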

Multi-Agent Orchestration with AgentGroupChat

Note: AgentGroupChat orchestration is currently experimental per the official Semantic Kernel documentation -- as an experimental feature, breaking changes may occur between versions without prior notice.

AgentGroupChat enables multiple agents to collaborate on tasks through structured conversations where each agent contributes based on its specialized instructions. This is where Semantic Kernel agents in C# become truly powerful -- instead of coordinating multiple LLM calls yourself and managing which AI personality responds when, you define the agents and let AgentGroupChat handle the orchestration. I've used this pattern for scenarios like content creation where a researcher agent gathers information and a writer agent transforms that research into polished prose.

The orchestration mechanics revolve around two key strategies: SelectionStrategy and TerminationStrategy. The SelectionStrategy determines which agent speaks next in the conversation -- options include SequentialSelectionStrategy where agents take turns in order, or more sophisticated strategies that analyze the conversation and pick the most appropriate agent for the next response. The TerminationStrategy defines when the group chat should stop, which might be after a maximum number of iterations, when a specific agent signals completion, or when the conversation reaches a natural conclusion.

Here's a complete example demonstrating multi-agent orchestration with two specialized agents:

var researcherAgent = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = "You are a researcher. Gather relevant information and present key facts.",
    Kernel = kernel
};

var writerAgent = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = "You are a technical writer. Take researched facts and write clear, engaging content.",
    Kernel = kernel
};

#pragma warning disable SKEXP0110
var groupChat = new AgentGroupChat(researcherAgent, writerAgent)
{
    ExecutionSettings = new AgentGroupChatSettings
    {
        SelectionStrategy = new SequentialSelectionStrategy()
    }
};

groupChat.ExecutionSettings.TerminationStrategy.MaximumIterations = 6;
#pragma warning restore SKEXP0110

groupChat.AddChatMessage(new ChatMessageContent(AuthorRole.User, "Write a paragraph about async/await in C#."));

await foreach (var message in groupChat.InvokeAsync())
{
    Console.WriteLine($"[{message.AuthorName}]: {message.Content}");
}

In this example, the SequentialSelectionStrategy means the Researcher agent will respond first, followed by the Writer agent, then back to the Researcher, and so on. The default termination strategy with MaximumIterations set to 6 ensures the conversation doesn't run indefinitely -- after six total agent responses, the group chat concludes. You can see how each agent's specialized instructions guide its contributions, with the researcher focusing on facts and the writer transforming those facts into readable content.

The real power of AgentGroupChat emerges when you need multiple perspectives or stages in a workflow. I've built systems where agents for requirements analysis, architecture design, and code review collaborate to evaluate proposed features. Each agent applies its specialized lens to the problem, and the resulting conversation captures insights that would be difficult to obtain from a single generalized prompt. This multi-agent approach mirrors how human teams work together (this is an architectural analogy; actual performance depends on model capabilities and task design), with each specialist contributing their expertise toward a common goal.

Microsoft Agent Framework

The Microsoft Agent Framework represents Microsoft's broader vision for building agentic AI systems across their ecosystem, providing standards and abstractions that work beyond just Semantic Kernel (currently in active development -- check Microsoft's documentation for stability status). While Semantic Kernel offers its own agent implementations like ChatCompletionAgent, these integrate with the Microsoft Agent Framework through common interfaces and patterns. I think of the Agent Framework as the foundation layer that enables interoperability between different agent implementations, including those in the Copilot SDK and other Microsoft AI tools.

At the core of this integration is the concept that agents should be composable and reusable across different contexts. The Microsoft Agent Framework defines common abstractions that allow agents built in one system to be embedded and used in another. For Semantic Kernel developers, this means agents you create with ChatCompletionAgent can potentially participate in larger ecosystems, like being surfaced in Microsoft 365 Copilot experiences or integrated with other enterprise AI systems built on Microsoft's agent platform.

The practical implication for us as .NET developers is that when we build agents using Semantic Kernel, we're not creating isolated components -- we're creating building blocks that align with Microsoft's strategic direction for enterprise AI. The Agent Framework provides common concepts around agent identity, capabilities, and communication protocols that make our Semantic Kernel agents first-class citizens in the broader Microsoft AI ecosystem. This architectural alignment means investments we make in Semantic Kernel agent development have longer-term value as Microsoft's agent story continues to evolve.

While you don't need to deeply understand the Microsoft Agent Framework internals to build effective Semantic Kernel agents, it's worth knowing that the design choices in Semantic Kernel's agent API reflect this broader framework thinking. Properties like Name and Instructions aren't arbitrary -- they map to agent framework concepts that enable discoverability and orchestration at the platform level. As the agent framework matures, I expect we'll see more explicit bridges and tools for taking Semantic Kernel agents and deploying them into enterprise agent orchestration platforms.

Plugins in Agents

Agents become dramatically more capable when equipped with plugins that extend their functionality beyond pure language generation. When you register plugins on the Kernel instance that an agent uses, that agent can invoke those plugin functions as tools during its reasoning process. I've found this integration between agents and plugins to be one of the most powerful features in Semantic Kernel -- it transforms agents from conversational interfaces into action-takers that can query databases, call APIs, perform calculations, or interact with any external system you've wrapped in a plugin.

The tool calling mechanism is configured through the execution settings on your agent, specifically the FunctionChoiceBehavior property. When set to FunctionChoiceBehavior.Auto(), the agent automatically detects when it needs to use a plugin function to answer a query, invokes that function, and incorporates the results into its response. This happens transparently during the agent's invocation -- the underlying LLM identifies that it needs additional information, makes a tool call to your plugin, and uses the returned data to formulate a complete answer.

Here's an example showing how an agent with plugins is configured:

var agent = new ChatCompletionAgent
{
    Name = "DataAssistant",
    Instructions = "You are a data assistant that can query and summarize database records.",
    Kernel = kernel,  // kernel has DatabasePlugin registered
    Arguments = new KernelArguments(new OpenAIPromptExecutionSettings
    {
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
    })
};

In this configuration, the kernel has a DatabasePlugin already registered with functions for querying data. When a user asks the DataAssistant agent a question that requires database information, the agent automatically invokes the appropriate plugin function, retrieves the data, and uses it in the response. This pattern is particularly valuable because it keeps your agent instructions focused on behavior and persona while plugins handle the technical integration details.
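The DatabasePlugin referenced above is hypothetical. As a sketch, such a plugin could be defined with [KernelFunction] attributes and registered on the kernel builder before the agent is created -- the class name, method, and stubbed query logic here are all illustrative:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin: the Description attributes tell the model
// what each function does, so it knows when a tool call is relevant.
public sealed class DatabasePlugin
{
    [KernelFunction, Description("Returns the number of orders placed by a customer.")]
    public int GetOrderCount([Description("The customer's ID")] string customerId)
    {
        // Illustrative stub; a real implementation would query a database.
        return customerId == "C-001" ? 42 : 0;
    }
}

// Registration happens on the kernel the agent will use:
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o",
    Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<DatabasePlugin>();
var kernel = builder.Build();
```

With the plugin registered and FunctionChoiceBehavior.Auto() enabled, a question like "How many orders has customer C-001 placed?" triggers the tool call automatically.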

The combination of agent instructions and plugins creates a powerful division of responsibilities. Instructions define how the agent thinks and communicates, while plugins define what the agent can do. I've built agents where the instructions establish expertise in financial analysis, and plugins provide access to market data APIs -- the agent knows how to interpret and explain financial information, while the plugins know how to fetch it. If you want to learn more about creating and using plugins effectively, check out the Semantic Kernel plugins guide.

Agent Thread and History Management

The ChatHistoryAgentThread is the mechanism that manages conversation state between your application and an agent, maintaining the history of messages exchanged and providing that context on each agent invocation. While it might seem similar to the ChatHistory class you use with direct chat completion calls, ChatHistoryAgentThread is specifically designed for agent scenarios and includes additional metadata about agent identities and message attribution. I've found that understanding thread management is crucial for building stateful conversational experiences where agents remember previous exchanges.

When you create a ChatHistoryAgentThread and pass it to an agent's InvokeAsync method, that thread accumulates all messages from the conversation. Each user message you send gets added to the thread, and each agent response gets added as well, attributed to the agent by name. On subsequent invocations with the same thread, the agent has access to the entire history, allowing it to reference earlier parts of the conversation, maintain context about topics discussed, and provide coherent multi-turn interactions.

The difference between ChatHistoryAgentThread and direct ChatHistory usage becomes most apparent in multi-agent scenarios. In an AgentGroupChat, the thread contains messages from multiple different agents, each identified by their Name property. The thread maintains not just what was said, but who said it, which becomes critical when agents need to reference or respond to each other's contributions. This metadata-rich history enables sophisticated orchestration patterns where agent selection strategies can analyze the conversation flow and make intelligent decisions about which agent should speak next.

One important consideration for production applications is thread lifecycle management. Threads accumulate messages over time, and long-running conversations can grow quite large, potentially hitting token limits or causing performance issues. I typically implement strategies for thread management like summarizing older messages, trimming conversation history beyond a certain depth, or persisting threads to storage and loading only recent context. The specific approach depends on your use case -- a customer service chatbot might need full conversation history, while a code assistant might only need the last few exchanges.
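One simple trimming approach is to keep the conversation in a ChatHistory you own and rebuild the thread once it grows past a budget. This is a sketch under the assumption that ChatHistoryAgentThread can be seeded from an existing history -- verify the available constructors for your package version, and note that the message budget is arbitrary:

```csharp
using System.Linq;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// Sketch: cap a conversation at its most recent messages before
// handing it back to an agent. MaxMessages is an arbitrary budget.
const int MaxMessages = 20;

ChatHistoryAgentThread TrimThread(ChatHistory history)
{
    if (history.Count > MaxMessages)
    {
        // Keep only the most recent messages; a production system might
        // instead summarize the dropped messages into a single entry.
        var trimmed = new ChatHistory();
        foreach (var message in history.Skip(history.Count - MaxMessages))
        {
            trimmed.Add(message);
        }
        history = trimmed;
    }

    return new ChatHistoryAgentThread(history);
}
```

The same hook is a natural place to plug in summarization: replace the dropped messages with a single system message that condenses them, so long conversations stay under token limits without losing all earlier context.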

For scenarios requiring persistence, you can serialize the conversation history from a ChatHistoryAgentThread and store it in a database, then reconstitute it when the user returns. This pattern is essential for applications where conversations span multiple sessions, like support tickets or ongoing projects. The thread abstraction makes this relatively straightforward since it encapsulates all the state needed to resume a conversation where you left off, similar to how the observer pattern enables event-driven state management in traditional applications.

When to Use Agents vs Direct Chat

Choosing between Semantic Kernel agents and direct chat completion calls depends on the complexity and structure of your AI interactions. For simple one-off queries where you just need to send a prompt and receive a response, direct chat completion through the IChatCompletionService is often sufficient and involves less overhead. However, when you need consistent persona, reusable AI components, or orchestration of multiple AI entities, agents provide crucial abstractions that make your code cleaner and more maintainable.
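For comparison, a one-off direct call through IChatCompletionService looks like this sketch, assuming the same kernel setup as the earlier examples. There is no name, no persistent instructions, and no thread -- just a single request and response:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// A single request-response exchange with no agent abstraction.
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var reply = await chatService.GetChatMessageContentAsync(
    "Summarize the SOLID principles in one sentence each.");
Console.WriteLine(reply.Content);
```

If you find yourself wrapping calls like this in a class that carries a system prompt and accumulated history, that's usually the point where switching to a ChatCompletionAgent pays off.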

I reach for agents when building conversational experiences that benefit from a defined identity and consistent behavior across multiple interactions. If your AI component needs a name, a specific area of expertise, and instructions that should persist across invocations, that's a strong signal that an agent makes sense. Agents also shine when you're integrating plugins and want automatic tool calling behavior -- the ExecutionSettings on an agent encapsulate this configuration in a reusable way that's cleaner than managing it on every individual chat completion call.

The multi-agent use case is where agents become essential rather than just convenient. If your application involves multiple AI entities collaborating, debating, or handling different aspects of a workflow, AgentGroupChat and the agent abstraction are purpose-built for this scenario. Trying to implement the same orchestration with direct chat calls means you're essentially rebuilding what the agent framework provides -- message routing, identity management, conversation flow control, and termination logic.

Direct chat completion calls remain the right choice for scenarios like one-time content generation, simple prompt engineering experiments, or cases where the overhead of defining an agent doesn't add value. If you're generating a single report, translating text, or performing a standalone analysis task that doesn't require conversation context or identity, calling the chat completion service directly is more straightforward. Think of direct calls as procedural and agents as object-oriented -- sometimes you just need a function, and sometimes you need a class with state and behavior.

The decision also connects to broader architectural patterns in your application. If you're building a system where AI capabilities are first-class entities that can be discovered, configured, and composed dynamically, agents provide the structure you need. This aligns with patterns like chain of responsibility where requests flow through a pipeline of handlers -- agents can form similar pipelines where each agent in a group chat handles its specialized aspect of the request.

Conclusion

Semantic Kernel agents in C# provide a powerful abstraction for building sophisticated AI-powered applications where identity, instructions, and capabilities are encapsulated into reusable components. From single ChatCompletionAgent instances that provide consistent persona-driven interactions, to complex AgentGroupChat orchestrations where multiple specialized agents collaborate, the agent framework in Semantic Kernel can significantly influence how we architect AI systems. I've shown you the core patterns -- creating agents with instructions and plugins, managing conversation state through threads, and orchestrating multi-agent workflows -- that form the foundation for building production-ready agentic AI applications in .NET.

The examples and concepts in this guide connect to the broader Semantic Kernel in C# complete guide that covers the entire framework. For deeper dives into specific agent topics, I'll be publishing additional guides that explore advanced multi-agent patterns, agent monitoring and observability, and real-world agent architectures. If you're looking to enhance your development workflow with AI assistance while building these agent systems, the getting started with AI coding tools guide can help you leverage AI in your development process.

Building with Semantic Kernel agents represents a significant step forward from traditional prompt engineering and direct API calls. The agent abstraction aligns with how we naturally think about specialized AI entities working together, and as the Microsoft Agent Framework continues to evolve, these patterns will become increasingly important for enterprise AI development. I encourage you to experiment with the code examples in this guide, explore multi-agent orchestrations in your own projects, and discover how agents can simplify the AI integration challenges you're facing in your .NET applications.

