BrandGhost
Semantic Kernel OpenAPI Plugin Integration in C#: Connect Any REST API as an AI Tool

If you've been working with Semantic Kernel in C#, you've probably written native plugins to give your AI agents specific capabilities. But what if I told you that you could skip writing code entirely and turn any existing REST API into a plugin? That's exactly what Semantic Kernel OpenAPI plugin integration in C# enables. By simply pointing Semantic Kernel at an OpenAPI specification, you can automatically generate kernel functions for every endpoint in that API. Your LLM can then call those endpoints as if they were native functions, all without you writing a single line of glue code. This is transformative for integrating third-party services, internal microservices, or any API that exposes an OpenAPI spec.

In this guide, I'll walk you through everything you need to know about Semantic Kernel OpenAPI plugin integration in C#, from loading specs to configuring authentication and filtering operations. If you're building AI agents that need to interact with REST APIs, this is the pattern you need to master.

What Is Semantic Kernel OpenAPI Plugin Integration?

When you import an OpenAPI plugin into Semantic Kernel, the framework reads the OpenAPI specification file and automatically generates KernelFunction instances for each operation defined in that spec. Each HTTP operation (GET, POST, PUT, DELETE, etc.) becomes a function that the AI model can discover and invoke.

The beauty of this approach is that Semantic Kernel handles all the heavy lifting. It parses the spec to understand parameters, request bodies, response types, and even authentication requirements. The LLM receives function descriptions derived from the OpenAPI operation summaries and descriptions, allowing it to intelligently select which API calls to make based on the user's intent.

This is essentially the facade pattern applied to REST APIs. Instead of dealing with raw HTTP requests, the AI works with strongly-typed function calls that Semantic Kernel translates into HTTP operations behind the scenes.

For context on how plugins fit into the broader Semantic Kernel ecosystem, check out my complete guide to Semantic Kernel plugins. Understanding function calling patterns is also essential before diving deep into OpenAPI integration.

Prerequisites for Semantic Kernel OpenAPI Plugin Integration in C#

Before you can use Semantic Kernel OpenAPI plugin integration in C#, you need a few things in place. First and foremost, the REST API you want to integrate must expose an OpenAPI specification. This is typically a JSON or YAML file that describes all the endpoints, parameters, request/response schemas, and authentication methods. Many modern APIs provide this automatically at endpoints like /swagger.json or /openapi.json.

If you're working with a third-party API, check their developer documentation for links to their OpenAPI spec. Services like Stripe, GitHub, and countless others provide OpenAPI specs. If you're integrating an internal API that doesn't have a spec, you'll need to generate one using tools like Swashbuckle (for ASP.NET Core) or NSwag.

On the Semantic Kernel side, you need the Microsoft.SemanticKernel.Plugins.OpenApi NuGet package. This package provides the extension methods and classes required for Semantic Kernel OpenAPI plugin integration in C#. Install it alongside your core Semantic Kernel packages:

dotnet add package Microsoft.SemanticKernel.Plugins.OpenApi

You'll also need a configured kernel with a chat completion service, as outlined in my Semantic Kernel C# complete guide. The OpenAPI plugin mechanism works seamlessly with any LLM provider Semantic Kernel supports.

Loading an OpenAPI Plugin from a URL

The simplest way to load a Semantic Kernel OpenAPI plugin in C# is by pointing it at a URL where the OpenAPI spec is hosted. The ImportPluginFromOpenApiAsync method handles fetching and parsing the spec in one call.

Let me show you a complete working example using the public Petstore API, which is a well-known test API with a public OpenAPI spec:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.OpenApi;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

// Import from a public OpenAPI spec
var plugin = await kernel.ImportPluginFromOpenApiAsync(
    "PetStore",
    new Uri("https://petstore3.swagger.io/api/v3/openapi.json"));

Console.WriteLine($"Imported {plugin.Count()} functions from OpenAPI spec");
foreach (var function in plugin)
{
    Console.WriteLine($"  {function.Name}: {function.Description}");
}

When you run this code, Semantic Kernel downloads the OpenAPI spec from the URL, parses it, and creates kernel functions for each operation. The plugin name "PetStore" becomes the prefix for all function names, helping you organize multiple plugins in the same kernel.

The output shows you exactly what functions were created. For the Petstore API, you'll see functions like addPet, getPetById, updatePet, and so on. Each function name corresponds to the operationId field in the OpenAPI spec, and the description comes from the operation's summary or description field.

This is the foundation of OpenAPI plugin integration. From here, you can start invoking these functions either manually or by letting the LLM choose them automatically during conversations.

Loading from a Local File

While loading from a URL is convenient for public APIs, you'll often work with local OpenAPI spec files during development or when integrating internal APIs. Semantic Kernel supports loading specs from local files just as easily.

Here's how to load an OpenAPI spec from a local JSON file:

await using var fileStream = File.OpenRead(@"C:\specs\my-api-openapi.json");
var plugin = await kernel.ImportPluginFromOpenApiAsync(
    "LocalApi",
    fileStream,
    new OpenApiFunctionExecutionParameters { ServerUrlOverride = new Uri("https://localhost:7001") });

Notice the ServerUrlOverride parameter. OpenAPI specs contain server URL definitions that tell clients where to send requests. When loading a spec from a local file, especially during development, the server URLs in the spec might not match where your API is actually running. The ServerUrlOverride parameter lets you redirect all requests to a different base URL.

This is particularly useful when you're developing an API locally and want to test SK integration before deploying to production. You can use the production OpenAPI spec but override the server URL to point to your local development environment running on localhost.

The same approach works for YAML files. Just open the file stream and pass it to ImportPluginFromOpenApiAsync. Semantic Kernel detects the format and parses accordingly.
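As a sketch, the YAML variant looks nearly identical (the spec path and local port here are placeholders, not part of the original example):

```csharp
// Load a YAML OpenAPI spec from disk; Semantic Kernel detects the format.
// The path and ServerUrlOverride below are illustrative placeholders.
await using var yamlStream = File.OpenRead(@"C:\specs\my-api-openapi.yaml");

var yamlPlugin = await kernel.ImportPluginFromOpenApiAsync(
    "LocalYamlApi",
    yamlStream,
    new OpenApiFunctionExecutionParameters { ServerUrlOverride = new Uri("https://localhost:7001") });
```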

Configuring Authentication

Most real-world APIs require authentication, and the Semantic Kernel OpenAPI plugin C# implementation makes this straightforward through the OpenApiFunctionExecutionParameters class. You have complete control over how HTTP requests are executed, including adding authentication headers and customizing the HTTP client.

The most common pattern is passing a configured HttpClient with authentication headers:

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = 
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", "your-api-token-here");

var executionParams = new OpenApiFunctionExecutionParameters
{
    HttpClient = httpClient,
    IgnoreNonCompliantErrors = true  // useful for specs that don't fully comply with OpenAPI standard
};

var plugin = await kernel.ImportPluginFromOpenApiAsync(
    "MySecureApi",
    new Uri("https://api.myservice.com/openapi.json"),
    executionParams);

This approach works for bearer tokens, API keys in headers, and any other header-based authentication scheme. Just configure your HttpClient exactly as you would for making manual HTTP requests, and Semantic Kernel will use that client for all plugin function invocations.

The IgnoreNonCompliantErrors flag is worth noting. Some APIs have OpenAPI specs that don't perfectly conform to the standard, perhaps with missing required fields or slightly malformed definitions. Setting this to true tells Semantic Kernel to be more lenient during parsing, which can save you headaches when working with imperfect specs.

For more complex authentication scenarios, like OAuth flows or custom token refresh logic, you can implement a custom DelegatingHandler and add it to your HttpClient. This gives you full control over the authentication pipeline while keeping your plugin code clean.
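For illustration, a minimal sketch of such a handler might look like the following. The token-acquisition delegate is a placeholder for whatever OAuth library or token cache you actually use; it is not part of Semantic Kernel:

```csharp
using System.Net.Http.Headers;

// Sketch of a token-attaching handler. The acquireTokenAsync delegate is a
// placeholder for your own OAuth flow or token-cache logic.
public sealed class BearerTokenHandler : DelegatingHandler
{
    private readonly Func<Task<string>> _acquireTokenAsync;

    public BearerTokenHandler(Func<Task<string>> acquireTokenAsync, HttpMessageHandler innerHandler)
        : base(innerHandler)
    {
        _acquireTokenAsync = acquireTokenAsync;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Fetch (or refresh) the token immediately before each outgoing call.
        string token = await _acquireTokenAsync();
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        return await base.SendAsync(request, cancellationToken);
    }
}

// Usage: wrap the handler in the HttpClient you pass to the execution parameters.
// var httpClient = new HttpClient(new BearerTokenHandler(AcquireTokenAsync, new HttpClientHandler()));
```

Because the handler runs on every request, expired tokens get refreshed transparently, and the plugin import code never sees authentication details.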

Filtering Operations

One of the biggest challenges with OpenAPI plugins is that comprehensive APIs can expose dozens or even hundreds of operations. If you import all of them, you'll overwhelm the LLM's context window and make function selection less reliable. The model performs better when it has a focused set of tools to choose from.

Semantic Kernel provides filtering mechanisms through the OpenApiFunctionExecutionParameters class to control exactly which operations get imported. The recommended approach is to use OperationSelectionPredicate, which allows you to define custom logic for including or excluding operations:

// Filter operations using OperationSelectionPredicate
var executionParams = new OpenApiFunctionExecutionParameters
{
    OperationSelectionPredicate = (operation) => 
    {
        // Include only GET operations
        return operation.Method == "GET";
    }
};

// Or filter by operation ID patterns
var readOnlyParams = new OpenApiFunctionExecutionParameters
{
    OperationSelectionPredicate = (operation) => 
    {
        var allowedOps = new[] { "getUser", "listProducts", "searchOrders" };
        return allowedOps.Contains(operation.Id);
    }
};

var plugin = await kernel.ImportPluginFromOpenApiAsync(
    "ReadOnlyApi",
    new Uri("https://api.myservice.com/openapi.json"),
    readOnlyParams);

Note: The OperationsToExclude property is marked as [Obsolete] and should be replaced with OperationSelectionPredicate. This provides more flexibility and will be the supported approach going forward.

The predicate-based approach is more powerful than the deprecated allow/deny lists because you can implement complex filtering logic. You can filter by HTTP method, operation ID patterns, tags, or any other property of the operation metadata. This is critical for safety -- you don't want an AI agent accidentally calling destructive endpoints like deleteAllUsers or resetDatabase.

This is my preferred approach when building production AI agents because it follows the principle of least privilege. Only expose exactly what the agent needs to accomplish its tasks.
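A defensive predicate combining both ideas might look like this sketch (the keyword list is illustrative, not a Semantic Kernel feature):

```csharp
// Sketch: allow only read operations, and reject anything whose operationId
// looks destructive. The keyword list here is purely illustrative.
var safeParams = new OpenApiFunctionExecutionParameters
{
    OperationSelectionPredicate = operation =>
    {
        var destructiveWords = new[] { "delete", "reset", "purge" };

        bool isReadOnly = string.Equals(operation.Method, "GET", StringComparison.OrdinalIgnoreCase);
        bool looksDestructive = destructiveWords.Any(word =>
            operation.Id?.Contains(word, StringComparison.OrdinalIgnoreCase) == true);

        return isReadOnly && !looksDestructive;
    }
};
```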

Invoking OpenAPI Functions

Once you've imported an OpenAPI plugin, invoking the functions works exactly like invoking native kernel functions. The most powerful pattern is auto-invocation during chat, where the LLM decides which API calls to make based on the conversation:

using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Find a cat pet for sale in the pet store");

var response = await chatService.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(response.Content);

This is the magic of OpenAPI plugins. The entire HTTP layer becomes transparent. The AI treats REST API calls as first-class function invocations, and you don't write any HTTP client code.

You can also invoke OpenAPI functions directly by name if you need more control:

var result = await kernel.InvokeAsync(
    "PetStore", 
    "getPetById", 
    new KernelArguments { ["petId"] = "123" });

Console.WriteLine(result.GetValue<string>());

Parameters from the OpenAPI spec map directly to KernelArguments. Path parameters, query parameters, and request body fields all become function parameters that you can pass as key-value pairs.
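As an example, assuming the Petstore spec's findPetsByStatus operation, a query parameter is passed exactly the same way as the path parameter above:

```csharp
// Query parameters map just like path parameters. In the Petstore spec,
// findPetsByStatus takes a "status" query parameter.
var available = await kernel.InvokeAsync(
    "PetStore",
    "findPetsByStatus",
    new KernelArguments { ["status"] = "available" });

Console.WriteLine(available.GetValue<string>());
```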

Creating Your Own API for SK Integration

If you're building a new ASP.NET Core API specifically for Semantic Kernel integration, designing it with OpenAPI in mind from the start will make your life much easier. The good news is that modern ASP.NET Core projects generate OpenAPI specs automatically using Swashbuckle.

The key is making sure your API controllers have rich descriptions that translate into useful function descriptions for the LLM. Use XML documentation comments liberally:

/// <summary>
/// Retrieves order details by order ID
/// </summary>
/// <param name="orderId">The unique identifier for the order</param>
/// <returns>Complete order details including items, status, and shipping information</returns>
[HttpGet("{orderId}")]
public async Task<ActionResult<OrderDetails>> GetOrder(string orderId)
{
    // implementation
}

Configure Swashbuckle in your Program.cs to include XML comments:

builder.Services.AddSwaggerGen(options =>
{
    var xmlFilename = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFilename));
});

Make sure your project file generates XML documentation:

<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>

These descriptions become the function descriptions that the LLM uses to decide when to call your API. Clear, specific descriptions lead to better function selection. Avoid generic descriptions like "Gets data" -- instead use "Retrieves customer order history for the specified customer ID including order dates, amounts, and delivery status."

For guidance on setting up ASP.NET Core APIs effectively, my article on ASP.NET Core with Needlr covers modern setup patterns that work great with OpenAPI generation.

Keep your API design focused. Each endpoint should do one thing well. This translates into focused functions that the LLM can reason about effectively. Avoid catch-all endpoints that do too much based on complex parameter combinations.

When to Use OpenAPI vs Native Plugins

Now that you understand both OpenAPI plugins and native plugins (covered in my function calling guide), when should you choose each approach?

Use OpenAPI plugins when you're integrating existing REST APIs, especially third-party services or internal microservices that you don't control. If an API already exists and has an OpenAPI spec, importing it as a plugin is faster than writing a native plugin wrapper. You get automatic updates whenever the API changes -- just reload the spec. This is perfect for integrating services like payment processors, CRM systems, or internal business APIs.

OpenAPI plugins also shine when you need to expose many endpoints. Writing native functions for 50 different API operations would be tedious and error-prone. Let Semantic Kernel generate them automatically from the spec.

Choose native plugins when you're implementing new logic that doesn't correspond to an existing API. Native plugins give you type safety, compile-time checking, and better testability. If you're implementing business logic, data transformations, or integrations with non-HTTP systems like databases or message queues, native plugins are the better choice.

Native plugins also give you more control over error handling and retry logic. With OpenAPI plugins, you're limited to what Semantic Kernel provides. Native plugins let you implement sophisticated error handling, circuit breakers, or custom retry policies.

In practice, most production Semantic Kernel applications use both. OpenAPI plugins for integrating external services, and native plugins for custom logic specific to your application. They work together seamlessly in the same kernel.
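A rough sketch of that combination, assuming a hypothetical native plugin class alongside the Petstore import:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A hypothetical native plugin for custom business logic.
public class OrderMathPlugin
{
    [KernelFunction, Description("Applies a percentage discount to a price")]
    public decimal ApplyDiscount(decimal price, decimal percent) => price * (1 - percent / 100m);
}

// During kernel setup, both plugin types coexist:
// kernel.ImportPluginFromType<OrderMathPlugin>("OrderMath");
// await kernel.ImportPluginFromOpenApiAsync(
//     "PetStore", new Uri("https://petstore3.swagger.io/api/v3/openapi.json"));
```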

FAQ

Can I use OpenAPI plugins with APIs that require OAuth authentication?

Yes, but you need to handle the OAuth flow yourself before importing the plugin. Obtain your access token using a standard OAuth library, then configure your HttpClient with that token as shown in the authentication section. For long-running applications, implement token refresh logic in a custom DelegatingHandler attached to the HTTP client. Semantic Kernel doesn't manage OAuth flows directly, but it works with any authenticated HttpClient.

What happens if an API call fails during auto-invocation?

When Semantic Kernel invokes an OpenAPI function and receives an HTTP error response, it reports that error back to the LLM. The model can then decide how to handle it -- retry with different parameters, inform the user, or try an alternative approach. You can customize error handling by implementing custom middleware in your HttpClient pipeline or by setting parameters like IgnoreNonCompliantErrors on the execution parameters.

Can I modify OpenAPI plugin functions after importing them?

Not directly. OpenAPI plugins are generated from the spec at import time. If you need to modify behavior, you have a few options. You can create wrapper functions using native plugins that call the OpenAPI functions with modified parameters. You can also use a custom HttpClient with delegating handlers to intercept and modify requests and responses. For extensive modifications, consider creating a native plugin that wraps your API instead of using the OpenAPI import.

How do I handle API versioning with OpenAPI plugins?

Import different versions as separate plugins with different names, like "MyApiV1" and "MyApiV2". Each plugin instance is independent. You can have both loaded simultaneously and let the LLM choose, or you can programmatically select which plugin to include in the kernel based on your application's needs. Some APIs include version information in the base URL, which you can override using ServerUrlOverride in the execution parameters.
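A sketch of side-by-side version imports (the spec URLs and override host are placeholders):

```csharp
// Hypothetical spec URLs; each version becomes an independent plugin.
var v1 = await kernel.ImportPluginFromOpenApiAsync(
    "MyApiV1", new Uri("https://api.myservice.com/v1/openapi.json"));

var v2 = await kernel.ImportPluginFromOpenApiAsync(
    "MyApiV2", new Uri("https://api.myservice.com/v2/openapi.json"),
    new OpenApiFunctionExecutionParameters
    {
        // Redirect v2 calls if the spec's servers entry is stale.
        ServerUrlOverride = new Uri("https://api-v2.myservice.com")
    });
```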

Conclusion

Semantic Kernel OpenAPI plugin integration in C# can significantly change how we build AI agents. Instead of writing integration code for every API endpoint, we simply point Semantic Kernel at an OpenAPI spec and get automatic function generation. The LLM can then interact with those APIs as naturally as calling native functions.

I've shown you how to load specs from URLs and local files, configure authentication with custom HTTP clients, filter operations to keep your agent focused and safe, and invoke API functions both automatically and manually. You've also seen how to design your own APIs for optimal SK integration using ASP.NET Core and Swashbuckle.

The key insight is that OpenAPI plugins act as a bridge between the AI world and the REST API world. You don't need to choose between the flexibility of AI agents and the structure of REST APIs -- you get both. Start with the patterns I've covered here, experiment with public APIs like Petstore, and then apply these techniques to your own services. The combination of Semantic Kernel's plugin system and OpenAPI's standardization is powerful, and it's only getting better as both ecosystems mature.

For more on building complete AI applications with Semantic Kernel, check out my Semantic Kernel C# complete guide and the plugins hub for comprehensive coverage of the plugin ecosystem.

