BrandGhost
MCP Tool Integration in Microsoft Agent Framework in C#

If you are building AI agents in .NET, at some point you will want your agent to do more than just chat -- you will want it to take action. MCP tool integration in Microsoft Agent Framework (MAF) is the capability that bridges that gap: it lets your agent call structured C# functions at runtime based on what the user actually needs. This article walks through a real working application, covering everything from tool creation to agent execution.

What is MCP and Why Does It Matter for AI Agents?

The Model Context Protocol (MCP) is an open standard designed to give AI models a consistent way to interact with external tools and data sources. Instead of hard-coding how each tool is invoked, MCP defines a protocol so that any compliant server can expose tools that any compliant client -- including AI agents -- can discover and call.

For .NET developers, this is significant because it means your agent can interact with file systems, databases, APIs, and more without you having to wire up each integration by hand. The agent runtime handles the conversation loop, decides when a tool is needed, calls it, and feeds the result back into the model context.

The ModelContextProtocol NuGet package (at version 0.9.0-preview.1 in the rc1 release) is the official .NET implementation of this protocol. Its API is still evolving, but the core pattern for creating tools is stable enough to build real applications against.

If you are coming from the Semantic Kernel world, the concept will feel familiar -- Semantic Kernel plugins in C# work on a similar idea of exposing callable functions to the model. MAF takes a more lightweight approach, leaning on Microsoft.Extensions.AI abstractions directly.

The Project Setup

The demo application targets .NET 10 and uses the following NuGet packages:

<PackageReference Include="Microsoft.Agents.AI" Version="1.0.0-rc1" />
<PackageReference Include="Microsoft.Extensions.AI" Version="10.3.0" />
<PackageReference Include="Microsoft.Extensions.AI.OpenAI" Version="10.3.0" />
<PackageReference Include="Azure.AI.OpenAI" Version="2.1.0" />
<PackageReference Include="ModelContextProtocol" Version="0.9.0-preview.1" />
<PackageReference Include="Microsoft.Extensions.Hosting" Version="10.0.3" />

Microsoft.Agents.AI provides the AIAgent, ChatClientAgent, and related types. Microsoft.Extensions.AI provides the IChatClient abstraction and AIFunctionFactory. The ModelContextProtocol package is included because the app is designed to eventually connect to a real MCP server via stdio transport -- more on that at the end.

Configuration is handled via appsettings.json and supports both OpenAI and Azure OpenAI providers, along with an McpServer section that specifies how to launch an external MCP server:

{
  "AIProvider": {
    "Type": "openai",
    "ModelId": "gpt-4o-mini",
    "ApiKey": "",
    "Endpoint": ""
  },
  "McpServer": {
    "Type": "stdio",
    "Command": "npx",
    "Args": [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "{directory}"
    ]
  }
}

The {directory} placeholder in Args is replaced at runtime with the actual target directory. This is a clean way to parameterize the MCP server invocation without hardcoding paths.
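As a minimal sketch of that substitution (variable names are mine, mirroring the config above):

```csharp
using System;
using System.Linq;

// Hypothetical sketch of the {directory} substitution described above.
string[] mcpArgs = { "-y", "@modelcontextprotocol/server-filesystem", "{directory}" };
string directoryPath = "/home/user/projects/demo";

// Replace the placeholder in every argument before launching the server
string[] processedArgs = mcpArgs
    .Select(arg => arg.Replace("{directory}", directoryPath))
    .ToArray();

Console.WriteLine(string.Join(" ", processedArgs));
// -y @modelcontextprotocol/server-filesystem /home/user/projects/demo
```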

MCP Tool Integration in Microsoft Agent Framework: AIFunctionFactory

The core of this integration is AIFunctionFactory.Create() from Microsoft.Extensions.AI. This factory method turns a regular C# delegate or lambda into an AIFunction object that the agent can call. MCP tool integration in Microsoft Agent Framework relies on this API because the function's name and description are surfaced to the LLM, which uses them to decide when and how to invoke each tool.

Here is the DiscoverMcpToolsAsync method from McpAgentService.cs, which creates two filesystem tools:

private async Task<List<AIFunction>> DiscoverMcpToolsAsync(
    string command,
    string[] args)
{
    // command/args describe the external MCP server; this demo builds
    // in-process fallback tools instead of launching that process.
    var tools = new List<AIFunction>();

    try
    {
        _logger.LogInformation("Attempting to discover MCP tools...");

        var fileReadFunction = AIFunctionFactory.Create(
            async (string filePath) =>
            {
                if (!File.Exists(filePath))
                    return $"File not found: {filePath}";

                try
                {
                    var content = await File.ReadAllTextAsync(filePath);
                    return content.Length > 1000
                        ? content[..1000] + $"\n... (truncated, total {content.Length} chars)"
                        : content;
                }
                catch (Exception ex)
                {
                    return $"Error reading file: {ex.Message}";
                }
            },
            name: "read_file",
            description: "Reads the content of a file from the filesystem");

        var listDirectoryFunction = AIFunctionFactory.Create(
            (string directoryPath) =>
            {
                if (!Directory.Exists(directoryPath))
                    return $"Directory not found: {directoryPath}";

                try
                {
                    var files = Directory.GetFiles(directoryPath);
                    var directories = Directory.GetDirectories(directoryPath);

                    var result = "Files:\n" +
                        string.Join("\n", files.Select(f => $"  - {Path.GetFileName(f)}")) +
                        "\n\nDirectories:\n" +
                        string.Join("\n", directories.Select(d => $"  - {Path.GetFileName(d)}"));

                    return result;
                }
                catch (Exception ex)
                {
                    return $"Error listing directory: {ex.Message}";
                }
            },
            name: "list_directory",
            description: "Lists files and directories in the specified path");

        tools.Add(fileReadFunction);
        tools.Add(listDirectoryFunction);

        _logger.LogInformation("Created {ToolCount} filesystem tools", tools.Count);
    }
    catch (Exception ex)
    {
        _logger.LogWarning(ex, "Could not connect to MCP server, using fallback tools");
    }

    return tools;
}

A few things are worth noting here. Each tool gets an explicit name and description that the LLM uses when deciding which tool to invoke. The read_file function truncates content at 1000 characters to avoid flooding the model context. The list_directory function returns a formatted string with files and subdirectories separated into sections. Both functions return strings -- MCP tools communicate through text content that the model reads and reasons about.
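The truncation guard is worth isolating, since every string-returning tool needs something like it. Here is a standalone sketch of the same pattern (the helper name TruncateForContext is mine, not from the demo):

```csharp
using System;

// Illustrative helper mirroring read_file's truncation guard; keeps large
// tool results from flooding the model context.
static string TruncateForContext(string content, int maxChars = 1000) =>
    content.Length > maxChars
        ? content[..maxChars] + $"\n... (truncated, total {content.Length} chars)"
        : content;

string small = TruncateForContext("hello");
string big = TruncateForContext(new string('x', 2500));

Console.WriteLine(small);                                   // hello
Console.WriteLine(big.Length > 1000 && big.Length < 1100);  // True
```

The suffix deliberately reports the original length, so the model knows it saw a partial file and can decide whether to ask for more.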

This same pattern is what real MCP servers expose. When you connect to @modelcontextprotocol/server-filesystem via stdio, you get tools with exactly these kinds of signatures. The factory approach in this demo mimics that interface so you can test locally before wiring up an external process.

For comparison, this is conceptually similar to Semantic Kernel agents in C#, where plugins expose functions the kernel can call. The key difference is that MAF's tool model is lighter weight and built directly on Microsoft.Extensions.AI abstractions.

Building the Agent with McpAgentService

The McpAgentService class is responsible for assembling the agent. It takes an IChatClient (which could be OpenAI or Azure OpenAI), discovers the tools, and returns a ready-to-use AIAgent:

public async Task<(AIAgent Agent, Process? McpProcess)> CreateAgentWithMcpToolsAsync(
    IChatClient chatClient,
    string directoryPath)
{
    try
    {
        var mcpCommand = _configuration["McpServer:Command"] ?? "npx";
        var mcpArgs = _configuration.GetSection("McpServer:Args").Get<string[]>() ?? [];

        var processedArgs = mcpArgs.Select(arg =>
            arg.Replace("{directory}", directoryPath)).ToArray();

        _logger.LogInformation(
            "Starting MCP server: {Command} {Args}",
            mcpCommand,
            string.Join(" ", processedArgs));

        var tools = await DiscoverMcpToolsAsync(mcpCommand, processedArgs);

        var instructions =
            "You are a file system analyst. Use the available MCP tools to explore and " +
            "summarize files in the directory. When analyzing files, be thorough and provide " +
            "clear summaries of what you find.";

        // Pass the discovered tools so the model can actually call them
        var agent = new ChatClientAgent(chatClient, instructions, tools: [.. tools]);

        _logger.LogInformation(
            "Agent created with {ToolCount} MCP tools and instructions: {Instructions}",
            tools.Count, instructions);

        return (agent, null);
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Failed to create agent with MCP tools");
        throw;
    }
}

ChatClientAgent is MAF's concrete agent implementation that wraps an IChatClient with a system prompt. When connecting tools directly via the AsAIAgent extension method pattern, you can pass them inline:

// Alternative: using AsAIAgent extension with tools array
var agent = chatClient.AsAIAgent(
    instructions: "You are a file system analyst...",
    tools: new[]
    {
        AIFunctionFactory.Create(readFileFunc, name: "read_file", description: "..."),
        AIFunctionFactory.Create(listDirFunc, name: "list_directory", description: "...")
    });

Both ChatClientAgent and AsAIAgent produce an AIAgent instance. The AsAIAgent extension approach is more concise when you want to pass tools at construction time. The ChatClientAgent constructor approach is useful when you need a named class you can inject or subclass. Either way, AIAgent.RunAsync drives the execution loop.

Running the Agent: Single-Shot and Interactive

The AgentRunner service handles two execution modes -- a single-shot analysis and an interactive REPL loop.

public async Task RunSingleAnalysisAsync(
    AIAgent agent,
    string directoryPath)
{
    Console.WriteLine($"MCP Tool Agent - Analyzing directory: {directoryPath}");
    Console.WriteLine("Using MCP filesystem tools to explore...\n");

    var prompt = $"Please analyze the files in {directoryPath} and provide a comprehensive summary. " +
                 "List the files, describe their types, and give an overview of what this directory contains.";

    var response = await agent.RunAsync(prompt);

    Console.WriteLine("Summary:");
    Console.WriteLine(response.ToString());
}

public async Task RunInteractiveModeAsync(AIAgent agent)
{
    Console.WriteLine("MCP Tool Agent - Interactive Mode");
    Console.WriteLine("Ask questions about files (type 'quit' to exit)\n");

    while (true)
    {
        Console.Write("You: ");
        var input = Console.ReadLine();

        if (string.IsNullOrWhiteSpace(input)) continue;
        if (input.Trim().Equals("quit", StringComparison.OrdinalIgnoreCase))
        {
            Console.WriteLine("Goodbye!");
            break;
        }

        var response = await agent.RunAsync(input);
        Console.WriteLine($"\nAgent: {response}\n");
    }
}

agent.RunAsync(prompt) is the core call. It submits the user message to the LLM, receives back either a text response or a tool call request, executes any tool calls, and continues the loop until the model returns a final text response. All of that happens inside RunAsync -- you do not manage the tool execution loop manually.

The single-shot mode is great for scripted analysis pipelines where you invoke the agent from CI or a background job. The interactive mode is better for exploratory sessions where you want to ask follow-up questions about what the agent found. Both use the same AIAgent instance.

If you want multi-turn conversation memory across invocations, you can use an AgentSession:

// Multi-turn with persistent session context
AgentSession session = await agent.CreateSessionAsync();
AgentResponse r1 = await agent.RunAsync("List files in C:\\src", session);
AgentResponse r2 = await agent.RunAsync("Now read the README", session);

The session maintains the conversation history so the model has context from previous turns. This is important for agents doing multi-step analysis where later questions reference earlier results.

The Full Tool Execution Workflow

When a user submits a query like "What files are in my project?", the execution flow looks like this:

  1. agent.RunAsync(prompt) sends the user message to the LLM along with the tool definitions
  2. The LLM sees the list_directory and read_file tool descriptions and decides list_directory is needed
  3. MAF receives the tool call request and invokes the list_directory function with the path
  4. The function returns a formatted string with the directory contents
  5. MAF sends the tool result back to the LLM as a new message in the conversation
  6. The LLM may call additional tools (e.g., read_file on specific files) or generate the final response
  7. RunAsync returns the final AgentResponse containing the model's text output

This is the reason tool descriptions matter so much. When you write description: "Lists files and directories in the specified path", you are writing documentation for the LLM's decision-making process. A vague description leads to the model not calling the tool when it should, or calling it with wrong arguments. Clear, specific descriptions are the difference between a tool that works reliably and one that gets ignored.
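To make that concrete, the tool metadata reaches the model as a JSON schema roughly like the following (the exact shape varies by provider; the parameter name comes straight from the lambda's parameter):

```json
{
  "name": "list_directory",
  "description": "Lists files and directories in the specified path",
  "parameters": {
    "type": "object",
    "properties": {
      "directoryPath": { "type": "string" }
    },
    "required": ["directoryPath"]
  }
}
```

This is why meaningful C# parameter names matter too -- directoryPath tells the model far more than path or p would.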

The same principle applies to Semantic Kernel in C# -- how you describe your plugins determines how well the model uses them. MAF and SK both rely on the LLM reading function metadata to decide what to call.

Wiring It All Together with Dependency Injection

Program.cs sets up the DI container, resolves the AI provider from configuration, and coordinates the two services:

var builder = Host.CreateApplicationBuilder(args);

builder.Configuration
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json", optional: true);

builder.Services.AddSingleton<McpAgentService>();
builder.Services.AddSingleton<AgentRunner>();

var host = builder.Build();

// Resolve IChatClient based on config
var aiProviderType = builder.Configuration["AIProvider:Type"]?.ToLowerInvariant() ?? "openai";
var modelId = builder.Configuration["AIProvider:ModelId"] ?? "gpt-4o-mini";
var apiKey = builder.Configuration["AIProvider:ApiKey"] ?? "";
var endpoint = builder.Configuration["AIProvider:Endpoint"] ?? "";

IChatClient chatClient;
if (aiProviderType == "azureopenai")
{
    var azureClient = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey));
    chatClient = azureClient.GetChatClient(modelId).AsIChatClient();
}
else
{
    var openAIClient = new OpenAIClient(apiKey);
    chatClient = openAIClient.GetChatClient(modelId).AsIChatClient();
}

var mcpAgentService = host.Services.GetRequiredService<McpAgentService>();
var agentRunner = host.Services.GetRequiredService<AgentRunner>();

var (agent, _) = await mcpAgentService.CreateAgentWithMcpToolsAsync(chatClient, directoryPath);

// directoryArg holds the index of "--directory" in args (-1 when absent)
if (directoryArg >= 0)
    await agentRunner.RunSingleAnalysisAsync(agent, directoryPath);
else
    await agentRunner.RunInteractiveModeAsync(agent);

The IChatClient interface from Microsoft.Extensions.AI is the abstraction that makes this provider-agnostic. Swapping from OpenAI to Azure OpenAI only requires a config change -- the agent and runner code does not care which provider is underneath. If you want to read more about structuring dependency injection in .NET applications, Automatic Dependency Injection in C#: The Complete Guide to Needlr covers the patterns in depth.

The --directory command-line argument controls which path is analyzed. If omitted, the current working directory is used. This makes the tool useful as both a standalone CLI and a component in larger automation pipelines.
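The demo's actual argument-parsing code is not shown in the article, but the behavior described above can be sketched like this (the ResolveDirectory helper is hypothetical):

```csharp
using System;
using System.IO;

// Hypothetical --directory parsing; falls back to the current working
// directory when the flag is absent.
static (int Index, string Path) ResolveDirectory(string[] args)
{
    int idx = Array.IndexOf(args, "--directory");
    string path = idx >= 0 && idx + 1 < args.Length
        ? args[idx + 1]
        : Directory.GetCurrentDirectory();
    return (idx, path);
}

var (directoryArg, directoryPath) = ResolveDirectory(new[] { "--directory", "/tmp/demo" });
// directoryArg >= 0 selects single-shot analysis; -1 selects interactive mode.
Console.WriteLine(directoryPath);   // /tmp/demo
```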

Connecting to a Real MCP Server via stdio

The appsettings.json configuration already has everything needed to launch the official MCP filesystem server:

"McpServer": {
  "Type": "stdio",
  "Command": "npx",
  "Args": ["-y", "@modelcontextprotocol/server-filesystem", "{directory}"]
}

With the ModelContextProtocol NuGet package (0.9.0-preview.1) included in the project, you can launch this external process, connect to it via stdio transport, and use the tools it exposes instead of the in-process implementations. The protocol defines a handshake where the client asks the server to list its available tools, then routes tool calls through the stdio channel.
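Under the hood this is plain JSON-RPC 2.0 over the child process's stdin/stdout. After the initialize handshake, tool discovery looks roughly like this (abridged and illustrative; the field names follow the MCP specification):

```json
{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "read_file",
        "description": "Reads the contents of a file",
        "inputSchema": { "type": "object", "properties": { "path": { "type": "string" } } }
      }
    ]
  }
}
```

Subsequent tools/call requests and their results flow over the same channel, so the agent-side routing is identical regardless of which language the server is written in.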

The in-process tools in DiscoverMcpToolsAsync are a clean fallback for when no MCP server is running -- or for local testing when you do not want a Node.js dependency. The real value of the stdio approach is that your agent can connect to any compliant MCP server regardless of what language it is written in. A Python server, a Rust server, or a Node.js server all look the same to your C# agent because they all speak the same protocol.

This is the direction the ecosystem is heading -- standardized tool interfaces that cross language and runtime boundaries. The ModelContextProtocol package's API is still evolving rapidly as the spec matures, so expect API surface changes in upcoming preview releases. Pinning to 0.9.0-preview.1 and tracking the official .NET MCP SDK repository is a good approach for staying current.

For a related perspective on grounding AI responses in external data sources, RAG with Semantic Kernel in C# covers vector-based retrieval which complements tool use nicely -- tools handle action, RAG handles knowledge retrieval.

MAF vs. Semantic Kernel for Tool Integration

It is worth briefly comparing the two main .NET options for agent tool integration. Semantic Kernel in C# is a more complete orchestration framework with built-in planners, memory connectors, and a plugin model. MAF is lower level and more focused -- it gives you AIAgent with tool support and session management without the broader orchestration layer.

If you are building a focused application like a file analysis tool, a code review agent, or a single-purpose assistant, MAF's lightweight model is a good fit. If you need complex multi-agent orchestration, planner-driven workflows, or deep integration with Azure AI services, Semantic Kernel is the better starting point. The two are not mutually exclusive either -- MAF is built on Microsoft.Extensions.AI abstractions that SK also targets, so future interoperability is likely.

Getting Started with GitHub Copilot SDK in C# is another angle worth knowing about -- it provides a GitHub-specific chat client that also implements IChatClient, meaning you could swap in a Copilot-backed model behind the same MAF agent code without changing your tool definitions.


Frequently Asked Questions

What is AIFunctionFactory.Create() and how does it work?

AIFunctionFactory.Create() is a factory method in Microsoft.Extensions.AI that converts a C# delegate into an AIFunction object. The function's parameter names, types, and the description you provide are surfaced to the LLM as tool metadata. When the model decides a tool is needed, the framework invokes the underlying delegate and returns the result as a string back into the model's context.

How does the agent know when to call a tool?

The LLM decides when to call a tool based on the tool's name and description relative to the user's query. If the user asks "what files are in this folder?" and a list_directory tool with a matching description is available, the model will generate a tool call request rather than a text response. MAF's RunAsync detects this, executes the tool, and loops back to the model with the result.

What is the difference between ChatClientAgent and AsAIAgent()?

ChatClientAgent is a concrete class in Microsoft.Agents.AI that you instantiate directly with a new call, passing IChatClient and a system prompt string. AsAIAgent() is an extension method on IChatClient that creates the same type of agent in a more fluent, inline style -- and it also supports passing a tools array directly at construction time. Both produce an AIAgent and behave the same way at runtime.

Do I need an external MCP server to use MAF tools?

No. You can create tools entirely in-process using AIFunctionFactory.Create() with regular C# lambdas. The external MCP server (via stdio transport) is a more advanced pattern that lets you leverage MCP-compliant servers written in any language. The app in this article uses in-process fallback tools while keeping the configuration ready for an external server connection.

Is the ModelContextProtocol NuGet package stable?

As of version 0.9.0-preview.1, the package is still in preview and the API surface is actively evolving. It is suitable for experimental and prototype use, but production applications should pin to a specific version and monitor release notes for breaking changes before upgrading.

Can I use MAF tools with Azure OpenAI instead of OpenAI?

Yes. The IChatClient abstraction decouples your agent and tool code from the underlying provider. Switching from OpenAI to Azure OpenAI is a configuration change -- you provide an endpoint and use AzureOpenAIClient instead of OpenAIClient. Your tool definitions, agent instructions, and runner logic stay exactly the same.

How does session management work for multi-turn tool conversations?

You call agent.CreateSessionAsync() to get an AgentSession object, then pass that session to each agent.RunAsync() call. The session maintains the conversation history including any tool calls and their results from previous turns. This allows the agent to reference earlier findings in later messages -- critical for multi-step analysis tasks where each question builds on prior tool output.


Wrapping Up

MCP tool integration in Microsoft Agent Framework gives you a practical path from a simple chat client to a tool-using agent without a lot of ceremony. The key pieces are:

  • AIFunctionFactory.Create() to define your tools with names and descriptions the LLM can understand
  • ChatClientAgent or AsAIAgent() to wrap an IChatClient with instructions and tools
  • agent.RunAsync() to execute the full tool invocation loop automatically
  • AgentSession for multi-turn conversations that retain context

The appsettings.json pattern for configuring both the AI provider and the MCP server command makes it straightforward to swap providers or point at different MCP servers without touching code. The ModelContextProtocol package gives you a path to real MCP server integration when you are ready for it.

If you want to explore related agent patterns, Semantic Kernel Agents in C# and Semantic Kernel Plugins in C# cover how the SK ecosystem approaches the same problems. MAF and SK each have their place -- knowing both gives you the right tool for each job.
