If you've been working with Semantic Kernel, you've probably used some of the built-in connectors and capabilities. But at some point, you'll need to extend the framework with your own functionality -- and that's where custom plugins for Semantic Kernel in C# come into play. Custom plugins let you expose your application's domain logic, APIs, database queries, and business rules as callable functions that LLMs can invoke during conversations. Instead of relying solely on what Semantic Kernel provides out of the box, you can build plugins tailored to your exact needs. In this tutorial, I'll walk you through every step of creating custom plugins for Semantic Kernel in C#, from the anatomy of a plugin to registration, invocation, dependency injection, and testing.
The Plugin Anatomy: KernelFunction and Description
Every custom plugin in Semantic Kernel is just a plain C# class with methods decorated with specific attributes. The two most important attributes you'll use are [KernelFunction] and [Description]. The [KernelFunction] attribute marks a method as callable by the kernel, essentially saying "this method is a function the LLM can invoke." Without this attribute, the kernel won't recognize your method as a plugin function, even if it's public. You can optionally pass a name to [KernelFunction] like [KernelFunction("read_file")] to give it a specific identifier. If you omit the name, Semantic Kernel uses the method name by convention.
The [Description] attribute is equally critical, especially when you're enabling automatic function calling with LLMs. When you use function calling features, the LLM receives a list of available tools along with their descriptions. A clear, detailed description helps the LLM decide when and how to call your plugin. For example, [Description("Reads the contents of a text file at the given path")] tells the LLM exactly what this function does. You should also use [Description] on individual parameters to explain what each one expects. Without good descriptions, the LLM might never call your function or might pass incorrect arguments.
Here's what happens behind the scenes: when you register a plugin, Semantic Kernel uses reflection to discover all methods marked with [KernelFunction]. It reads the [Description] attributes and builds a metadata model that can be serialized into the function calling schema your LLM expects. If you skip descriptions, the LLM sees only the function name and parameter names, which often isn't enough context to make smart decisions.
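You can see this metadata model for yourself by enumerating the kernel's plugin collection after registration. Here's a minimal sketch, assuming a recent Semantic Kernel package and the FileSystemPlugin defined later in this tutorial:

```csharp
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.Plugins.AddFromType<FileSystemPlugin>();
var kernel = builder.Build();

// Print the metadata the kernel extracted via reflection -- this is
// essentially what gets serialized into the LLM's tool-calling schema
foreach (var plugin in kernel.Plugins)
{
    foreach (var function in plugin)
    {
        Console.WriteLine($"{plugin.Name}.{function.Name}: {function.Description}");
        foreach (var parameter in function.Metadata.Parameters)
            Console.WriteLine($"  {parameter.Name}: {parameter.Description}");
    }
}
```

Running something like this during development is a quick way to verify that every function and parameter has a meaningful description before the LLM ever sees them.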
Step 1: Create a Plugin Class
Creating a custom plugin starts with defining a plain C# class. You don't need to inherit from any base class or implement any interfaces -- Semantic Kernel uses a convention-based approach powered by reflection. I recommend naming your plugin class with a "Plugin" suffix like FileSystemPlugin or OrderPlugin to make its purpose obvious in your codebase. Keep the single responsibility principle in mind: each plugin class should represent one cohesive capability or domain area.
Here's a minimal plugin class structure:
using Microsoft.SemanticKernel;
using System.ComponentModel;
public class CalculatorPlugin
{
// Plugin methods will go here
}
That's it. No special configuration, no boilerplate. The class can be public or internal, though public makes it easier to reuse across projects. You can add private fields, constructor parameters, and helper methods -- the kernel only cares about methods marked with [KernelFunction]. This simplicity is one of the reasons I love working with Semantic Kernel plugins. You're just writing normal C# code that happens to be callable by an LLM.
Step 2: Add KernelFunction Methods
Now let's add the actual plugin functions. Each method you want to expose must have the [KernelFunction] attribute. The method can be synchronous or asynchronous (returning Task or ValueTask), and it can accept multiple parameters of various types. Semantic Kernel supports primitive types, strings, collections, and even complex objects, though I recommend sticking with simple types when possible because LLMs work best with straightforward inputs.
Here's a complete example with two functions in a file system plugin:
using Microsoft.SemanticKernel;
using System.ComponentModel;
public class FileSystemPlugin
{
[KernelFunction("read_file")]
[Description("Reads the contents of a text file at the given path")]
public async Task<string> ReadFileAsync(
[Description("The absolute path to the text file to read")] string filePath)
{
if (!File.Exists(filePath))
return $"Error: File not found at '{filePath}'";
return await File.ReadAllTextAsync(filePath);
}
[KernelFunction("list_files")]
[Description("Lists all files in a directory with their sizes")]
public string ListFiles(
[Description("The directory path to list files in")] string directoryPath,
[Description("File extension filter, e.g. '*.cs'. Use '*.*' for all files")] string pattern = "*.*")
{
if (!Directory.Exists(directoryPath))
return $"Error: Directory not found at '{directoryPath}'";
var files = Directory.GetFiles(directoryPath, pattern)
.Select(f => new FileInfo(f))
.Select(fi => $"{fi.Name} ({fi.Length:N0} bytes)");
return string.Join(Environment.NewLine, files);
}
}
Notice how I've decorated both the methods and their parameters with [Description]. The ReadFileAsync method is async and returns a Task<string>, while ListFiles is synchronous and returns a plain string. Both are perfectly valid. The ListFiles function also demonstrates optional parameters -- the pattern parameter has a default value of "*.*", which the LLM can omit if it wants to list all files.
The return types you choose matter. Semantic Kernel can handle string, int, bool, Task<T>, ValueTask<T>, and even void (though void functions don't provide feedback to the LLM). I almost always return string because it's the most flexible for LLM consumption. You can format JSON, XML, plain text, or error messages -- whatever makes sense for your use case.
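As an example of that flexibility, a function can serialize a small object into a JSON string: the return type stays string, but the LLM receives structured fields it can reason over. The plugin below is my own illustration, not part of the FileSystemPlugin above:

```csharp
using System.ComponentModel;
using System.IO;
using System.Text.Json;
using Microsoft.SemanticKernel;

public class DiskPlugin
{
    [KernelFunction("get_disk_usage")]
    [Description("Reports total and free space for a drive as JSON")]
    public string GetDiskUsage(
        [Description("The drive name, e.g. 'C'")] string drive)
    {
        var info = new DriveInfo(drive);
        // Returning a JSON string keeps the signature simple while still
        // giving the LLM named fields to work with
        return JsonSerializer.Serialize(new
        {
            drive = info.Name,
            totalBytes = info.TotalSize,
            freeBytes = info.AvailableFreeSpace
        });
    }
}
```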
Step 3: Register the Plugin
Once you've written your plugin class, you need to register it with the Semantic Kernel so it knows about your functions. There are three main registration methods, and which one you use depends on your scenario and whether you're using dependency injection. The most common approach is builder.Plugins.AddFromType<T>(), which you call during kernel setup:
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<FileSystemPlugin>();
var kernel = builder.Build();
This method tells the kernel to create an instance of FileSystemPlugin when it needs to invoke one of its functions. If your plugin has constructor parameters, Semantic Kernel resolves them from the IServiceProvider you've configured (more on that in the dependency injection section).
The second approach is kernel.Plugins.AddFromObject(), which you use when you already have an instance of your plugin:
var filePlugin = new FileSystemPlugin();
kernel.Plugins.AddFromObject(filePlugin, "FileSystemPlugin");
This is useful when you need to manually configure the plugin instance or when you're not using a full dependency injection container. A third option is KernelPluginFactory.CreateFromType&lt;T&gt;() (or the kernel.CreatePluginFromType&lt;T&gt;() extension), which creates a KernelPlugin object without registering it, so you can inspect or customize its metadata before adding it to the kernel yourself. Note that kernel.ImportPluginFromType&lt;T&gt;() both adds the plugin and returns the KernelPlugin, so it's not suitable if you want to modify the plugin before registration. I rarely use these unless I need to customize the plugin metadata programmatically.
For most scenarios, I stick with AddFromType<T>() because it integrates cleanly with the dependency injection pattern and keeps my kernel setup code concise.
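For completeness, here's what the "create first, add later" flow looks like -- a sketch assuming a recent Semantic Kernel version where KernelPluginFactory is available:

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder().Build();

// Create the plugin without registering it, so you can inspect or rename it first
KernelPlugin plugin = KernelPluginFactory.CreateFromType<FileSystemPlugin>("Files");
foreach (var function in plugin)
    Console.WriteLine($"{plugin.Name}.{function.Name}");

// Only now does the kernel know about it
kernel.Plugins.Add(plugin);
```

Passing "Files" as the plugin name here overrides the default class-name-based naming, which can be handy when you want shorter names in the LLM's tool schema.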
Step 4: Invoke the Plugin
After registration, you can invoke your plugin functions in two ways: directly through the kernel API or automatically via LLM function calling. Direct invocation looks like this:
var result = await kernel.InvokeAsync("FileSystemPlugin", "list_files", new KernelArguments
{
{ "directoryPath", @"C:\Projects\MyApp\src" },
{ "pattern", "*.cs" }
});
Console.WriteLine(result.GetValue<string>());
The first argument is the plugin name (by default, the class name -- so FileSystemPlugin is registered as FileSystemPlugin unless you pass a different name during registration), the second is the function name, and the third is a KernelArguments dictionary containing the parameters. This approach is great for testing or when you want to call a specific function deterministically.
The more powerful approach is automatic invocation through chat completions. When you enable FunctionChoiceBehavior.Auto(), the LLM can decide which plugins to call based on the conversation:
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
var settings = new OpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage(@"List the C# files in C:\Projects\MyApp\src");
var response = await chatService.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(response.Content);
Behind the scenes, the LLM sees your plugin descriptions, decides list_files is the right tool for the job, and calls it with the appropriate parameters. The result is fed back to the LLM, which then formulates a natural language response to the user. This is where good descriptions really shine. If you want to know which functions were called, you can inspect the response.Metadata or check the history after the call completes.
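One way to see exactly which tools were invoked is to scan the chat history after the call completes. This sketch assumes a recent Semantic Kernel version where auto function calling appends the intermediate tool-call messages to the ChatHistory and exposes them as FunctionCallContent items:

```csharp
using System.Linq;
using Microsoft.SemanticKernel;

// After GetChatMessageContentAsync with FunctionChoiceBehavior.Auto(),
// the intermediate assistant tool-call messages live in the history
foreach (var message in history)
{
    foreach (var call in message.Items.OfType<FunctionCallContent>())
        Console.WriteLine($"LLM called {call.PluginName}.{call.FunctionName}");
}
```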
Multi-Function Plugins
I mentioned earlier that each plugin class should represent one cohesive capability, but that doesn't mean one function per class. In fact, grouping related functions into a single plugin class is a best practice. The FileSystemPlugin example above has two functions: read_file and list_files. Both are file system operations, so they belong together. This mental model of "plugin equals capability group" helps organize your code and makes it easier for LLMs to discover related functions.
When you design multi-function plugins, think about naming and scoping. If you're building a DatabasePlugin, you might have functions like execute_query, get_table_schema, and count_rows. Each function does one specific thing, but together they form a complete database capability. The LLM can chain these functions in a conversation, using one to gather information before calling another.
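Here's a sketch of what that DatabasePlugin could look like. IDbQueryRunner is a hypothetical abstraction over your real data access code, named purely for illustration:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical data-access abstraction -- substitute your own
public interface IDbQueryRunner
{
    Task<string> DescribeTableAsync(string tableName);
    Task<long> CountAsync(string tableName);
}

public class DatabasePlugin
{
    private readonly IDbQueryRunner _runner;

    public DatabasePlugin(IDbQueryRunner runner) => _runner = runner;

    [KernelFunction("get_table_schema")]
    [Description("Returns the column names and types for a database table")]
    public Task<string> GetTableSchemaAsync(
        [Description("The table name")] string tableName)
        => _runner.DescribeTableAsync(tableName);

    [KernelFunction("count_rows")]
    [Description("Counts the rows in a database table")]
    public async Task<string> CountRowsAsync(
        [Description("The table name")] string tableName)
        => (await _runner.CountAsync(tableName)).ToString();
}
```

In a conversation, the LLM might call get_table_schema first to learn a table's structure, then count_rows -- exactly the kind of chaining described above.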
Here's another example from an e-commerce domain:
public class OrderPlugin
{
[KernelFunction("get_order")]
[Description("Retrieves an order by its ID")]
public async Task<string> GetOrderAsync(
[Description("The unique order identifier")] string orderId)
{
// Implementation
}
[KernelFunction("cancel_order")]
[Description("Cancels an order if it has not yet shipped")]
public async Task<string> CancelOrderAsync(
[Description("The order ID to cancel")] string orderId)
{
// Implementation
}
[KernelFunction("get_order_status")]
[Description("Gets the current status of an order")]
public async Task<string> GetOrderStatusAsync(
[Description("The order ID to check")] string orderId)
{
// Implementation
}
}
All three functions deal with orders, so they live in OrderPlugin. This keeps related functionality together and reduces the number of plugin classes you need to manage. If you've read my detailed guide on plugin architecture in C# for improved software design, the same architectural principles apply here and will make your Semantic Kernel plugins more maintainable.
Async Plugins
Most real-world plugins need to perform I/O operations: reading files, querying databases, calling HTTP APIs, or interacting with external services. All of these are async operations in modern .NET, and Semantic Kernel fully supports async plugin methods. Just return Task<T> or ValueTask<T> from your function and use the async/await pattern as you normally would.
Here's an example of a plugin that calls an external API:
public class WeatherPlugin
{
private readonly HttpClient _httpClient;
public WeatherPlugin(HttpClient httpClient)
{
_httpClient = httpClient;
}
[KernelFunction("get_current_weather")]
[Description("Gets the current weather conditions for a city")]
public async Task<string> GetCurrentWeatherAsync(
[Description("The city name, e.g. 'Seattle'")] string city)
{
var response = await _httpClient.GetAsync($"https://api.weather.example.com/current?city={Uri.EscapeDataString(city)}");
response.EnsureSuccessStatusCode();
var json = await response.Content.ReadAsStringAsync();
return json;
}
}
The kernel awaits your async method just like any other async call in C#. If you're not familiar with async patterns, I have a guide on async/await in C# with 3 beginner tips you need to know that covers the fundamentals.
One thing to watch out for: avoid blocking calls like .Result or .Wait() in your plugin methods. The kernel's execution pipeline is fully async, and blocking can cause deadlocks or performance issues. Stick with async/await throughout.
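Related to async hygiene: Semantic Kernel recognizes CancellationToken parameters on kernel functions and passes through the token supplied to the invocation, so long-running I/O can be cancelled cleanly. Here's a sketch extending the weather example (the forecast endpoint URL is illustrative, like the one above):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    private readonly HttpClient _httpClient;

    public WeatherPlugin(HttpClient httpClient) => _httpClient = httpClient;

    [KernelFunction("get_forecast")]
    [Description("Gets the multi-day weather forecast for a city")]
    public async Task<string> GetForecastAsync(
        [Description("The city name, e.g. 'Seattle'")] string city,
        CancellationToken cancellationToken = default) // supplied by the kernel, never by the LLM
    {
        var response = await _httpClient.GetAsync(
            $"https://api.weather.example.com/forecast?city={Uri.EscapeDataString(city)}",
            cancellationToken);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(cancellationToken);
    }
}
```

The token parameter never shows up in the LLM's tool schema; it exists purely so your I/O respects cancellation.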
Plugins with Dependencies
Real plugins often need access to services like repositories, loggers, HTTP clients, or configuration. Semantic Kernel integrates seamlessly with .NET's dependency injection system, so you can inject dependencies into your plugin constructors just like you would with ASP.NET Core controllers or services. When you call builder.Plugins.AddFromType<T>(), the kernel uses the IServiceProvider from the builder to resolve constructor parameters.
Here's a complete example with dependency injection:
public class OrderPlugin
{
private readonly IOrderRepository _orders;
private readonly ILogger<OrderPlugin> _logger;
public OrderPlugin(IOrderRepository orders, ILogger<OrderPlugin> logger)
{
_orders = orders;
_logger = logger;
}
[KernelFunction("get_order")]
[Description("Retrieves an order by its ID")]
public async Task<string> GetOrderAsync(
[Description("The unique order identifier")] string orderId)
{
_logger.LogInformation("Fetching order {OrderId}", orderId);
var order = await _orders.GetByIdAsync(orderId);
return order is null ? $"Order '{orderId}' not found" : $"Order {order.Id}: {order.Status}, {order.Items.Count} items, ${order.Total}";
}
}
During kernel setup, you register your services with the builder's service collection and then add the plugin:
var builder = Kernel.CreateBuilder();
builder.Services.AddScoped<IOrderRepository, SqlOrderRepository>();
builder.Services.AddLogging();
builder.Plugins.AddFromType<OrderPlugin>(); // SK resolves IOrderRepository and ILogger
var kernel = builder.Build();
Semantic Kernel calls the service provider to create an instance of OrderPlugin, automatically injecting IOrderRepository and ILogger<OrderPlugin>. This is the same pattern you use everywhere else in .NET, so it feels natural if you're coming from ASP.NET Core or other frameworks. If you need a refresher on dependency injection, check out my guide on IServiceCollection in C# -- simplified beginner's guide for dependency injection.
One important note about lifetimes: when you register a plugin with AddFromType&lt;T&gt;(), Semantic Kernel creates the plugin instance from the service provider and reuses it across invocations, while the dependencies you inject follow whatever lifetime (singleton, scoped, or transient) you registered them with. Most of my plugins are stateless, which makes that sharing safe; if a plugin needs per-request state, keep it in an injected scoped service rather than in plugin fields.
Testing Your Plugin
One of the benefits of Semantic Kernel's plugin design is that plugins are just plain C# classes, which makes them easy to unit test. You don't need to spin up a full kernel instance to test a plugin function. Just instantiate the plugin class, mock or stub its dependencies, and call the method directly.
Here's an example using xUnit and NSubstitute:
using Xunit;
using Microsoft.Extensions.Logging.Abstractions;
using NSubstitute;
public class OrderPluginTests
{
[Fact]
public async Task GetOrderAsync_ReturnsOrderDetails_WhenOrderExists()
{
// Arrange -- no Kernel needed, just the plugin class
var mockOrders = Substitute.For<IOrderRepository>();
mockOrders.GetByIdAsync("ORD-001").Returns(new Order
{
Id = "ORD-001",
Status = "Shipped",
Items = [new OrderItem("Widget", 2)],
Total = 29.99m
});
var plugin = new OrderPlugin(mockOrders, NullLogger<OrderPlugin>.Instance);
// Act -- call the method directly, not through the kernel
var result = await plugin.GetOrderAsync("ORD-001");
// Assert
Assert.Contains("ORD-001", result);
Assert.Contains("Shipped", result);
}
}
This test verifies the plugin logic without involving the kernel, LLM, or any other Semantic Kernel infrastructure. You're testing your business logic in isolation, which is exactly what unit tests should do. If you want to test the integration between your plugin and the kernel (for example, verifying that function calling works correctly), you can write integration tests that build a full kernel and invoke plugins through the kernel API. But for most scenarios, unit testing the plugin class directly is faster and more focused.
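When you do want that integration-level coverage, a test can build a minimal kernel with no chat model at all and invoke the function through the kernel API. A sketch using xUnit and the FileSystemPlugin from earlier (Directory.CreateTempSubdirectory assumes .NET 7 or later):

```csharp
using Microsoft.SemanticKernel;
using Xunit;

public class FileSystemPluginIntegrationTests
{
    [Fact]
    public async Task ListFiles_IsInvocableThroughTheKernel()
    {
        // No chat completion service is needed for direct invocation
        var builder = Kernel.CreateBuilder();
        builder.Plugins.AddFromType<FileSystemPlugin>();
        var kernel = builder.Build();

        var tempDir = Directory.CreateTempSubdirectory().FullName;
        await File.WriteAllTextAsync(Path.Combine(tempDir, "sample.cs"), "// test");

        var result = await kernel.InvokeAsync("FileSystemPlugin", "list_files",
            new KernelArguments
            {
                { "directoryPath", tempDir },
                { "pattern", "*.cs" }
            });

        Assert.Contains("sample.cs", result.GetValue<string>());
    }
}
```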
FAQ
Here are some common questions I get about creating custom plugins for Semantic Kernel in C#. These cover return types, state management, optional parameters, and other practical concerns you'll encounter when building plugins.
What return types are supported for plugin functions?
Semantic Kernel supports most common return types: string, int, bool, double, Task<T>, ValueTask<T>, and even void. For LLM consumption, I recommend string because it gives you maximum flexibility to return formatted data, error messages, or JSON. The kernel serializes complex objects to JSON automatically if needed, but keeping it simple with strings avoids surprises.
Can a plugin have state or should it be stateless?
Plugins can have state if you need it. You can store fields in your plugin class and mutate them across function calls. However, be aware of the plugin's lifetime. If the plugin is registered as transient, each invocation gets a new instance, so state won't persist. If it's a singleton, state persists across all invocations. For most use cases, I prefer stateless plugins that rely on injected services (like repositories) for any state management.
Can I have optional parameters in my plugin functions?
Yes. Just use C# optional parameters with default values, like string pattern = "*.*". The LLM can omit optional parameters when calling the function. Make sure your [Description] attribute on the parameter explains what the default behavior is so the LLM understands when it's safe to omit the parameter.
Do plugin classes need to be public?
The plugin class itself can be internal if it's only used within a single assembly -- Semantic Kernel resolves it through the service provider, so class visibility is flexible. The methods you mark with [KernelFunction] should be public, though, because the kernel discovers functions by reflecting over public methods. I usually make the class public as well for reusability across projects.
Can I use generic methods in plugins?
Not directly. Semantic Kernel doesn't support generic [KernelFunction] methods because it can't determine the concrete types at runtime. If you need generic logic, wrap it in a non-generic plugin method that calls your generic helper internally.
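As a sketch of that workaround, the public kernel function below is non-generic and simply forwards to a private generic helper at a concrete call site (SerializationPlugin is an illustrative name, not a Semantic Kernel type):

```csharp
using System.ComponentModel;
using System.Text.Json;
using Microsoft.SemanticKernel;

public class SerializationPlugin
{
    // Generic helper -- invisible to the kernel because it has no [KernelFunction]
    private static string ToJson<T>(T value) => JsonSerializer.Serialize(value);

    [KernelFunction("format_order_summary")]
    [Description("Formats an order summary as a JSON string")]
    public string FormatOrderSummary(
        [Description("The order ID")] string orderId,
        [Description("The order status")] string status)
        => ToJson(new { orderId, status }); // the concrete call site closes the generic
}
```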
Conclusion
Creating custom plugins for Semantic Kernel in C# is straightforward once you understand the core concepts: decorate methods with [KernelFunction] and [Description], register the plugin with the kernel, and invoke it either directly or through LLM function calling. Multi-function plugins let you group related capabilities, async methods work seamlessly for I/O operations, and dependency injection makes it easy to integrate with your existing .NET services. Testing plugins is simple because they're just plain C# classes that you can unit test without the full kernel infrastructure.
If you want a broader overview of Semantic Kernel plugins, including prompt plugins and how plugins fit into the overall architecture, check out the Semantic Kernel plugins guide. For a complete introduction to the framework, see my Semantic Kernel C# complete guide. Now go build some custom plugins and extend your Semantic Kernel applications with your own domain logic!

