Semantic Kernel Plugins in C#: The Complete Guide

If you've been working with AI in .NET applications, you've probably heard about Microsoft's Semantic Kernel. But here's the thing that makes Semantic Kernel truly powerful: plugins. Semantic Kernel plugins in C# are your primary mechanism for extending AI capabilities with custom functionality, and they're the bridge between your existing code and large language models. Whether you're exposing APIs to AI agents, creating reusable prompt templates, or integrating third-party services, understanding plugins is essential for building production-ready AI applications. In this complete guide, I'll walk you through everything you need to know about creating and using plugins in Semantic Kernel.

What Are Semantic Kernel Plugins?

Semantic Kernel plugins are the primary extension point for adding functionality to your AI applications. At their core, plugins are simply collections of functions that the AI can discover and invoke to accomplish tasks. Think of them as tools in an AI agent's toolbox -- each plugin provides specific capabilities that help the model interact with your application, external APIs, or data sources.

The beauty of the plugin architecture in Semantic Kernel is that it's based on standard C# code. You mark methods with the [KernelFunction] attribute, and Semantic Kernel automatically makes them available to the AI. The [Description] attribute provides metadata that helps the AI understand when and how to use each function.

Why do plugins matter so much? Because they enable AI orchestration. Modern AI applications aren't just about generating text -- they need to retrieve data, call APIs, perform calculations, and interact with external systems. Plugins provide the structured way to expose these capabilities while maintaining type safety and leveraging your existing C# codebase. When you combine plugins with auto-invocation features, the AI can autonomously decide which functions to call based on user intent, creating truly intelligent applications.

Native Function Plugins

Native function plugins are C# classes where you decorate methods with special attributes to expose them to Semantic Kernel. This is the most common and powerful type of plugin because you're writing regular C# code with full access to the .NET ecosystem.

Here's a complete working example of a native function plugin:

using Microsoft.SemanticKernel;
using System.ComponentModel;

public class WeatherPlugin
{
    [KernelFunction("get_weather")]
    [Description("Gets the current weather for a given city")]
    public string GetWeather(
        [Description("The city to get weather for")] string city)
    {
        // In production, call a real weather API
        return $"The weather in {city} is 72°F and sunny.";
    }
}

// Registration
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<WeatherPlugin>();
var kernel = builder.Build();

The [KernelFunction] attribute is what makes a method discoverable by Semantic Kernel. The string parameter "get_weather" becomes the function's identifier that the AI uses when calling it. The [Description] attributes on both the method and parameters are crucial -- they're passed to the AI model as part of the function schema, helping it understand when and how to use your function.

When you register a plugin using builder.Plugins.AddFromType<WeatherPlugin>(), Semantic Kernel scans the class for all methods decorated with [KernelFunction] and makes them available. You can have multiple kernel functions in a single plugin class, which is great for grouping related functionality together.

Native function plugins support both synchronous and async/await methods, complex parameter types (which get serialized to JSON), and dependency injection. This makes them incredibly flexible for real-world scenarios where you need to interact with databases, external APIs, or complex business logic.
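As a sketch of those capabilities, here is a hypothetical plugin whose function is async and accepts a complex parameter type; the OrderQuery record and the lookup logic are assumptions for illustration, not part of any real API:

```csharp
using Microsoft.SemanticKernel;
using System.ComponentModel;

// Hypothetical record; complex parameters are serialized to/from JSON for the model.
public record OrderQuery(string CustomerId, int MaxResults);

public class OrderPlugin
{
    [KernelFunction("find_orders")]
    [Description("Finds recent orders for a customer")]
    public async Task<string> FindOrdersAsync(
        [Description("The query describing which orders to find")] OrderQuery query)
    {
        // Stand-in for an async data-store call; in production, query a real database.
        await Task.Delay(10);
        return $"Found {query.MaxResults} orders for customer {query.CustomerId}.";
    }
}
```

Registered with builder.Plugins.AddFromType&lt;OrderPlugin&gt;(), this works exactly like the string-only example above; the model sees a JSON schema describing the OrderQuery fields.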

Prompt Function Plugins

While native functions give you full C# control, sometimes you just need to define a reusable prompt template. That's where prompt function plugins come in. These are functions defined entirely by prompt text and parameters, without any C# logic.

Here's how you create a prompt function plugin:

// Prompt function plugin
var summarizeFunction = KernelFunctionFactory.CreateFromPrompt(
    """
    Summarize the following text in 3 bullet points:

    {{$input}}
    """,
    functionName: "summarize",
    description: "Summarizes text into 3 bullet points");

kernel.Plugins.AddFromFunctions("TextPlugin", [summarizeFunction]);

var result = await kernel.InvokeAsync(summarizeFunction, new KernelArguments
{
    { "input", "Semantic Kernel is a lightweight SDK that integrates AI models with conventional programming languages." }
});
Console.WriteLine(result.GetValue<string>());

The default Semantic Kernel template syntax uses placeholders like {{$input}} for parameters (a separate Handlebars template format is also available). When you invoke the function, you pass arguments through the KernelArguments dictionary, and Semantic Kernel substitutes them into the template before sending it to the AI model.

Prompt functions are perfect for creating reusable prompt patterns. For example, you might have a collection of prompt functions for different writing styles, analysis templates, or structured output formats. By packaging them as plugins, you make them discoverable to AI orchestration -- the model can choose to use your summarization function when a user asks for a summary, without you explicitly calling it.

You can also create more complex prompt functions with multiple parameters:

var translateFunction = KernelFunctionFactory.CreateFromPrompt(
    """
    Translate the following {{$language_from}} text to {{$language_to}}:

    {{$text}}
    """,
    functionName: "translate",
    description: "Translates text from one language to another");

The key advantage of prompt functions over native functions is simplicity when you don't need C# logic. They're also easier for non-developers to create and maintain, making them great for teams where prompt engineers work alongside developers.
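Invoking it follows the same KernelArguments pattern as before, assuming the kernel and translateFunction from the snippets above; the argument values are just illustrative:

```csharp
var result = await kernel.InvokeAsync(translateFunction, new KernelArguments
{
    { "language_from", "English" },
    { "language_to", "French" },
    { "text", "Hello, world!" }
});
Console.WriteLine(result.GetValue<string>());
```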

File-Based Prompt Plugins

For more complex prompt plugins that you want to version control and organize separately from your code, Semantic Kernel supports loading plugins from a directory structure of prompt files. This approach is especially useful when you have multiple related prompt functions that form a cohesive plugin.

The structure looks like this:

Plugins/
  WriterPlugin/
    Summarize/
      config.json
      skprompt.txt
    Rewrite/
      config.json
      skprompt.txt

The plugin takes its name from the root directory (WriterPlugin), and each subdirectory becomes one function -- there is no separate plugin-level metadata file.

Each function subdirectory contains a config.json with function-specific settings:

{
  "schema": 1,
  "description": "Summarizes text into a concise format",
  "input_variables": [
    {
      "name": "input",
      "description": "The text to summarize",
      "is_required": true
    },
    {
      "name": "length",
      "description": "The desired length (short, medium, long)",
      "default": "medium"
    }
  ],
  "execution_settings": {
    "default": {
      "temperature": 0.7,
      "max_tokens": 500
    }
  }
}

And the skprompt.txt file contains the actual prompt template:

Summarize the following text in {{$length}} format:

{{$input}}

Provide a clear and concise summary that captures the main points.

You load these plugin directories using:

var pluginDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Plugins", "WriterPlugin");
var plugin = kernel.ImportPluginFromPromptDirectory(pluginDirectory);
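Once imported, you invoke a function from the plugin by name, passing template variables through KernelArguments; this continues from the loading snippet above, and the argument values are illustrative:

```csharp
var summary = await kernel.InvokeAsync(plugin["Summarize"], new KernelArguments
{
    { "input", "Semantic Kernel is a lightweight SDK for integrating AI models into .NET applications." },
    { "length", "short" }
});
Console.WriteLine(summary.GetValue<string>());
```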

This approach gives you several benefits. First, non-developers can modify prompts without touching C# code. Second, you get clear versioning of prompt changes through your source control system. Third, you can organize related prompts into logical plugins. If you prefer a single-file format, the Microsoft.SemanticKernel.Yaml package also supports defining a prompt function in one YAML file and loading it with kernel.CreateFunctionFromPromptYaml. I find the directory approach particularly useful when building AI applications where prompt engineering is an ongoing process and you want to separate prompt iteration from code deployment.

OpenAPI Plugin Integration

One of the most powerful features of Semantic Kernel plugins is the ability to automatically import entire APIs as plugins using OpenAPI specifications. This means you can give your AI access to any REST API that has an OpenAPI (Swagger) definition without writing integration code.

Here's how you import an OpenAPI plugin:

// Requires the Microsoft.SemanticKernel.Plugins.OpenApi package
using Microsoft.SemanticKernel.Plugins.OpenApi;

var openApiPlugin = await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "WeatherApi",
    uri: new Uri("https://api.weather.example.com/openapi.json"),
    executionParameters: new OpenApiFunctionExecutionParameters
    {
        AuthCallback = (request, cancellationToken) =>
        {
            request.Headers.Add("X-API-Key", Environment.GetEnvironmentVariable("WEATHER_API_KEY"));
            return Task.CompletedTask;
        }
    });

Semantic Kernel parses the OpenAPI specification, creates a kernel function for each endpoint, and maps the API's request/response schemas to function parameters and return types. The AI can then call these functions just like any other plugin function.

The executionParameters let you customize the HTTP requests, add authentication headers, modify endpoints, or handle specific API requirements. This is crucial for production scenarios where you need API keys, OAuth tokens, or custom headers.

When should you use OpenAPI plugins versus writing native function wrappers? Use OpenAPI plugins when you're integrating third-party APIs that already have OpenAPI specs and don't need complex pre-processing or post-processing logic. Write native functions when you need to transform data, combine multiple API calls, implement caching, or add business logic around the API interaction. I often use both -- OpenAPI plugins for quick integration and prototyping, then refactor to native functions when requirements get more complex.
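You can also invoke an imported OpenAPI function directly rather than waiting for the AI to call it; this continues from the import snippet above, and "getCurrentWeather" is a hypothetical operationId from the spec:

```csharp
var weather = await kernel.InvokeAsync(openApiPlugin["getCurrentWeather"], new KernelArguments
{
    { "city", "Seattle" }
});
Console.WriteLine(weather.ToString());
```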

Auto-Invocation and Tool Calling

Here's where Semantic Kernel plugins really shine: automatic function invocation based on natural language input. Instead of manually parsing user intent and calling specific functions, you can let the AI decide which functions to invoke based on the conversation context.

The magic happens through FunctionChoiceBehavior:

using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var executionSettings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("What's the weather in Seattle?");

var response = await chatService.GetChatMessageContentAsync(
    history, 
    executionSettings, 
    kernel);
Console.WriteLine(response.Content);

When you set FunctionChoiceBehavior.Auto(), here's what happens behind the scenes. First, Semantic Kernel sends the user's message along with descriptions of all available plugin functions to the AI model. The model analyzes the request and decides which functions (if any) are needed. It responds with tool call instructions including function names and parameters. Semantic Kernel automatically invokes those functions, gathers the results, and sends them back to the model. Finally, the model incorporates the function results into its response to the user.

This creates an autonomous agent pattern where the AI can use your plugins as tools to accomplish tasks. For example, if you have plugins for database queries, sending emails, and calendar management, the AI can coordinate multiple function calls to complete a complex request like "Email my team the project status update."

You can also control the behavior more granularly. FunctionChoiceBehavior.Required() forces the model to call one of the supplied functions instead of answering directly, while FunctionChoiceBehavior.None() advertises functions without allowing calls. To handle invocation yourself, use FunctionChoiceBehavior.Auto(autoInvoke: false) -- the model still returns tool call requests, but your code executes them, which is useful when you need to add approval steps or validation before invoking sensitive functions.
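Where you need that control, a minimal sketch of the manual invocation loop looks like this, assuming kernel is already configured with plugins and a chat model; the approval comment marks where your own checks would go:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var settings = new OpenAIPromptExecutionSettings
{
    // Advertise functions to the model, but do not auto-invoke them.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(autoInvoke: false)
};

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("What's the weather in Seattle?");

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
history.Add(reply);

// Inspect and execute each requested tool call ourselves.
foreach (var call in FunctionCallContent.GetFunctionCalls(reply))
{
    // Insert approval/validation logic here before invoking.
    var functionResult = await call.InvokeAsync(kernel);
    history.Add(functionResult.ToChatMessage());
}

// Ask the model to produce the final answer from the tool results.
var final = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(final.Content);
```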

Dependency Injection for Plugins

Real-world plugins often need access to services like loggers, database connections, HTTP clients, or configuration. Semantic Kernel integrates beautifully with .NET's dependency injection system, making it easy to build plugins that leverage IServiceCollection DI.

Here's a plugin that uses dependency injection:

using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class DatabasePlugin
{
    private readonly ILogger<DatabasePlugin> _logger;
    
    public DatabasePlugin(ILogger<DatabasePlugin> logger)
    {
        _logger = logger;
    }
    
    [KernelFunction("query_records")]
    [Description("Queries the database for records matching the given filter")]
    public async Task<string> QueryRecordsAsync(
        [Description("The filter criteria for the query")] string filter)
    {
        _logger.LogInformation("Querying records with filter: {Filter}", filter);
        // Database query logic here
        return $"Found 5 records matching '{filter}'";
    }
}

When you register the plugin, Semantic Kernel uses the service provider to resolve dependencies:

var builder = Kernel.CreateBuilder();
builder.Services.AddLogging(logging => logging.AddConsole());
builder.Services.AddSingleton<IDatabaseConnection, DatabaseConnection>();
builder.Plugins.AddFromType<DatabasePlugin>();
var kernel = builder.Build();

The plugin's constructor dependencies are automatically resolved from the service collection. This means you can inject any registered service -- loggers, database contexts, HTTP clients, configuration objects, or your own custom services.

You can also use constructor injection for plugin-level dependencies and the [FromKernelServices] attribute for function-level dependencies that vary per invocation. This is less common but useful for scenarios where the service instance needs to be resolved at function call time rather than plugin instantiation time.
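A sketch of function-level injection with [FromKernelServices]; the INotificationService type is a hypothetical service assumed to be registered with the kernel's service collection:

```csharp
using Microsoft.SemanticKernel;
using System.ComponentModel;

public interface INotificationService
{
    Task NotifyAsync(string message);
}

public class NotificationPlugin
{
    [KernelFunction("send_notification")]
    [Description("Sends a notification message to the operations team")]
    public async Task<string> SendNotificationAsync(
        [Description("The message to send")] string message,
        // Resolved from the kernel's services at invocation time;
        // not exposed to the AI as a function parameter.
        [FromKernelServices] INotificationService notifier)
    {
        await notifier.NotifyAsync(message);
        return "Notification sent.";
    }
}
```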

The DI integration becomes even more powerful when combined with Scrutor for convention-based plugin registration. You can automatically register all plugins in an assembly, making your plugin architecture more maintainable as your application grows.

Best Practices for Semantic Kernel Plugins in C#

Building effective Semantic Kernel plugins requires balancing flexibility with maintainability and ensuring the AI can successfully discover and use your functions. Here are the patterns I've found most effective when building production AI applications.

When designing Semantic Kernel plugins in C#, the quality of your function descriptions directly impacts how well the AI can use them. The model relies entirely on these descriptions to determine which functions to call and what parameters to pass. I've seen many cases where poorly described functions are never invoked by the AI, even when they would be perfect for the task at hand.

Follow these best practices when creating plugins:

  • Write descriptive function and parameter descriptions -- The AI uses these to decide when to call your functions, so be specific about what each function does and what each parameter means. Vague descriptions lead to incorrect function calls.

  • Keep plugins focused on single responsibilities -- Create separate plugins for different domains like "DatabasePlugin," "EmailPlugin," "CalendarPlugin" rather than one giant "UtilityPlugin." This makes functions easier to discover and maintain.

  • Return string or easily serializable types -- The AI model receives function results as text, so return types that serialize cleanly to JSON or strings. Avoid returning complex object graphs that lose meaning when serialized.

  • Make functions idempotent when possible -- Since the AI might call functions multiple times or retry on failure, design functions to be safe to call repeatedly with the same parameters without causing unintended side effects.

Error handling and observability are equally critical when working with Semantic Kernel plugins in C#. Since the AI is making autonomous decisions about function invocation, you need robust logging and exception handling to understand what's happening in production. Without proper logging, debugging why an AI agent behaved a certain way becomes nearly impossible.

Additional considerations for production-ready plugins:

  • Use async/await for I/O operations -- Most plugin functions will call external APIs, databases, or perform file operations. Make these methods async to avoid blocking threads and improve scalability.

  • Validate parameters within functions -- Don't assume the AI will always pass valid parameters. Add validation logic and return meaningful error messages that the AI can use to retry with corrected parameters.

  • Log function invocations -- Inject ILogger and log when functions are called, with what parameters, and whether they succeed or fail. This is crucial for debugging AI agent behavior in production.

  • Handle exceptions gracefully -- Wrap function logic in try-catch blocks and return error descriptions rather than letting exceptions bubble up. The AI can often recover from errors if you provide clear error messages.
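Putting several of these practices together, here is a sketch of a function that validates its input, logs the invocation, and converts failures into messages the model can act on; the stock lookup itself is a stand-in:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class InventoryPlugin
{
    private readonly ILogger<InventoryPlugin> _logger;

    public InventoryPlugin(ILogger<InventoryPlugin> logger) => _logger = logger;

    [KernelFunction("get_stock_level")]
    [Description("Gets the current stock level for a product SKU")]
    public async Task<string> GetStockLevelAsync(
        [Description("The product SKU, e.g. 'ABC-123'")] string sku)
    {
        // Validate instead of trusting the model's arguments.
        if (string.IsNullOrWhiteSpace(sku))
            return "Error: 'sku' must be a non-empty product code. Please retry with a valid SKU.";

        _logger.LogInformation("Looking up stock for SKU {Sku}", sku);
        try
        {
            await Task.Delay(10); // Stand-in for a real inventory lookup.
            return $"SKU {sku} has 42 units in stock.";
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Stock lookup failed for SKU {Sku}", sku);
            // Return a description the model can recover from, instead of throwing.
            return $"Error: stock lookup for {sku} failed ({ex.Message}). Try again later.";
        }
    }
}
```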

Performance and versioning considerations matter more than you might think. As your AI application scales, certain Semantic Kernel plugins in C# might become bottlenecks if they make expensive API calls or perform complex computations. Planning for these scenarios early saves significant refactoring work later.

Final best practices to consider:

  • Consider function execution costs -- Some plugin functions might be expensive (API calls, complex computations). Document these costs and consider implementing rate limiting or caching strategies.

  • Version your plugins -- As your plugin interfaces evolve, maintain backward compatibility or use versioning strategies to avoid breaking existing AI agent behaviors that depend on specific function signatures.
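For expensive functions, one caching option is to wrap the call in IMemoryCache from Microsoft.Extensions.Caching.Memory; the exchange-rate lookup and five-minute TTL below are illustrative assumptions:

```csharp
using Microsoft.Extensions.Caching.Memory;
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class ExchangeRatePlugin
{
    private readonly IMemoryCache _cache;

    public ExchangeRatePlugin(IMemoryCache cache) => _cache = cache;

    [KernelFunction("get_exchange_rate")]
    [Description("Gets the exchange rate between two currencies")]
    public async Task<string> GetRateAsync(
        [Description("Currency pair, e.g. 'USD/EUR'")] string pair)
    {
        // Cache results so repeated tool calls don't hit the upstream API.
        return await _cache.GetOrCreateAsync(pair, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            await Task.Delay(10); // Stand-in for the real (expensive) API call.
            return $"Rate for {pair}: 0.92";
        }) ?? "Error: rate unavailable.";
    }
}
```

Register IMemoryCache with builder.Services.AddMemoryCache() and the plugin resolves it through the same DI mechanism shown earlier.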

Conclusion

Semantic Kernel plugins in C# are your gateway to building truly intelligent AI applications that go beyond simple text generation. Whether you're creating native function plugins with full C# capabilities, prompt function plugins for reusable templates, file-based plugin directories for team collaboration, or OpenAPI integrations for third-party services, you now have the foundation to extend Semantic Kernel with custom functionality. The combination of auto-invocation and dependency injection makes it possible to build production-ready AI agents that can autonomously orchestrate complex workflows while leveraging your existing .NET infrastructure.

This complete guide has covered the fundamentals, but there's much more depth to explore in each area. To dive deeper into specific aspects of Semantic Kernel, check out these related articles:

  • Creating Custom Native Functions -- Learn advanced patterns for building robust native function plugins with complex parameter types, streaming responses, and error handling strategies
  • Prompt Engineering for Semantic Kernel -- Master the art of writing effective prompt functions, template composition, and using Handlebars helpers for dynamic prompts
  • OpenAPI Plugin Integration Patterns -- Explore advanced techniques for integrating REST APIs, handling authentication, request transformation, and response caching
  • AI Orchestration and Auto-Invocation -- Deep dive into tool calling behavior, multi-step agent workflows, function sequencing, and building conversational AI agents

For a broader overview of Semantic Kernel's capabilities, refer back to the Semantic Kernel complete guide as your central resource for everything Semantic Kernel in C#.

Start with native function plugins to get comfortable with the basics, then experiment with prompt functions and auto-invocation to see the real power of AI orchestration. The best way to learn is by building -- pick a simple use case in your current project and create a plugin for it. You'll quickly discover how plugins transform AI from a text generator into a capable agent that can interact with your entire application ecosystem.
