Custom AI Tools with AIFunctionFactory in GitHub Copilot SDK for C#

AI models become exponentially more useful when they can interact with your code and data through custom tools. I've been building custom AI tools with AIFunctionFactory in GitHub Copilot SDK for C#, and the results have been transformative for creating context-aware AI features. The AIFunctionFactory class bridges the gap between standard C# methods and AI tool calls, letting you expose any function as a tool the model can invoke when it needs to retrieve data, perform calculations, or interact with external systems.

In this guide, I'll show you exactly how to build, register, and use custom tools with AIFunctionFactory. You'll see working code examples, learn the registration patterns, and understand how tool calls flow through the SDK to create powerful AI-driven applications.

What Is AIFunctionFactory and Why It Matters

AIFunctionFactory is the core mechanism in the GitHub Copilot SDK that transforms your C# methods into custom AI tools that AI models can invoke. When you register a method with AIFunctionFactory, the SDK automatically generates the JSON schema that describes the function to the language model, including parameter names, types, and descriptions. The model can then decide to call your tool when it determines that invoking the function would help answer the user's question or complete a task.

The flow works like this: you send a prompt to the model through a CopilotSession, and if the model decides it needs information from a tool, it returns a tool call request instead of a text response. The SDK intercepts this, invokes your registered C# method with the parameters the model specified, and sends the result back to the model. The model then incorporates that result into its reasoning and generates the final response.

This matters because it lets you build AI applications that aren't limited to the model's training data. I can create a tool that queries my database, searches documentation, calls external APIs, or performs any computation in C#. The model orchestrates when to use these tools, but my code controls what they actually do. Real-world use cases include document search tools, data retrieval functions, calculation engines, and system integration points that give the AI access to live data and business logic.

Setting Up Your First Custom AI Tools with AIFunctionFactory

Getting started with custom AI tools requires the GitHub Copilot SDK NuGet package. I install it with dotnet add package GitHub.Copilot.SDK, which brings in all the necessary types, including AIFunctionFactory. Once installed, creating a simple tool is straightforward.

Here's a basic weather tool that demonstrates the pattern. This example shows how to define a method, add metadata with attributes, and register it with AIFunctionFactory:

using GitHub.Copilot.SDK;
using System.ComponentModel;

public class WeatherTools
{
    [Description("Get the current weather for a specified location")]
    public string GetWeather(
        [Description("The city and state, e.g. San Francisco, CA")]
        string location)
    {
        // In production, this would call a real weather API
        return $"The weather in {location} is sunny, 72°F";
    }
}

// Register the tool
var weatherTools = new WeatherTools();
var tool = AIFunctionFactory.Create(
    weatherTools.GetWeather,
    name: "get_weather");

The Description attributes are crucial here. AIFunctionFactory uses reflection to read these attributes and generates the JSON schema that tells the model what the function does and what parameters it accepts. The model uses this information to decide when to call the tool and what arguments to pass.
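To make that concrete, here's roughly what the generated function description sent to the model looks like for the GetWeather method above. This is an illustrative sketch of the standard function-calling schema shape; the exact JSON the SDK emits may differ in structure and property names:

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a specified location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      }
    },
    "required": ["location"]
  }
}
```

Every piece of this schema comes straight from the method signature and the Description attributes, which is why those descriptions deserve real attention.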

When you register this with a session, the model can invoke it automatically. I'll show you session configuration in the next section, but the key point is that this pattern works for any method: define it, add descriptions, and create a tool from it with AIFunctionFactory.

Tool Registration and the Session Configuration

Tools don't do anything until you register them with a CopilotSession through the SessionConfig. The SessionConfig.Tools collection holds all available tools for that session, and you populate it during session creation. Here's how I set up a session with multiple tools:

var client = new CopilotClient(new CopilotClientOptions
{
    GithubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
});

var weatherTools = new WeatherTools();
var searchTools = new SearchTools();

var config = new SessionConfig
{
    Model = "gpt-5",
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Append,
        Content = "You are a helpful assistant with access to weather and search tools."
    },
    Tools = new List<AIFunction>
    {
        AIFunctionFactory.Create(weatherTools.GetWeather, name: "get_weather"),
        AIFunctionFactory.Create(searchTools.SearchDocumentation, name: "search_docs")
    }
};

var session = await client.CreateSessionAsync(config);

Once the session is created with these tools, every message sent through that session has access to them. When the model generates a tool call, the SDK handles the invocation automatically if you're using the higher-level APIs, or you can handle tool calls manually for more control.

The SDK passes tool results back to the model by adding a tool result message to the conversation. This happens behind the scenes when using session.SendMessageAsync(), but if you're working with the lower-level APIs, you'll construct a tool result message yourself. The model receives the tool output and continues reasoning with that information available.

Multiple tool registration follows the same pattern. I create an AIFunction for each method I want to expose, add them all to the Tools collection, and the model can call any of them based on the user's request. The model decides which tool to use based on the descriptions and the context of the conversation.

Building a Practical Search Tool

Let me show you a complete, practical example: a documentation search tool that demonstrates async operations, parameter handling, and return values. This tool searches through markdown files in a directory and returns relevant content:

public class DocumentationTools
{
    private readonly string _docsPath;

    public DocumentationTools(string docsPath)
    {
        _docsPath = docsPath;
    }

    [Description("Search documentation files for information about a specific topic")]
    public async Task<string> SearchDocumentation(
        [Description("The topic or keyword to search for")]
        string query,
        [Description("Maximum number of results to return")]
        int maxResults = 3)
    {
        var results = new List<string>();
        var files = Directory.GetFiles(_docsPath, "*.md", SearchOption.AllDirectories);

        foreach (var file in files.Take(20))
        {
            var content = await File.ReadAllTextAsync(file);
            if (content.Contains(query, StringComparison.OrdinalIgnoreCase))
            {
                var excerpt = GetExcerpt(content, query, 200);
                results.Add($"**{Path.GetFileName(file)}**\n{excerpt}");

                if (results.Count >= maxResults)
                    break;
            }
        }

        return results.Any()
            ? string.Join("\n\n---\n\n", results)
            : $"No documentation found for '{query}'";
    }

    private string GetExcerpt(string content, string query, int length)
    {
        var index = content.IndexOf(query, StringComparison.OrdinalIgnoreCase);
        if (index < 0) return content.Substring(0, Math.Min(length, content.Length));

        var start = Math.Max(0, index - length / 2);
        var excerpt = content.Substring(start, Math.Min(length, content.Length - start));
        return $"...{excerpt}...";
    }
}

// Registration
var docTools = new DocumentationTools(@"C:\docs");
var searchTool = AIFunctionFactory.Create(
    docTools.SearchDocumentation,
    name: "search_documentation");

The complete request/response cycle looks like this: I send a message asking "What are the best practices for async/await?", the model sees the search_documentation tool is available and generates a tool call with the query "async await best practices", the SDK invokes my SearchDocumentation method, my method returns matching documentation excerpts, and the model synthesizes those excerpts into a coherent answer for the user.

This pattern works for any async operation. The SDK handles awaiting the Task and extracting the result automatically. I can make database queries, call external APIs, read files, or perform any async operation, and AIFunctionFactory wraps it correctly for the model to invoke.
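As an illustration, here's a sketch of an async tool that calls an external HTTP API. The ExchangeRateTools class and the api.example.com endpoint are hypothetical stand-ins; substitute a real service in production:

```csharp
using System.ComponentModel;
using GitHub.Copilot.SDK;

public class ExchangeRateTools
{
    private static readonly HttpClient _http = new();

    [Description("Get the latest exchange rate between two currencies")]
    public async Task<string> GetExchangeRate(
        [Description("The base currency code, e.g. USD")]
        string from,
        [Description("The target currency code, e.g. EUR")]
        string to)
    {
        try
        {
            // Hypothetical endpoint -- replace with a real exchange rate API
            var json = await _http.GetStringAsync(
                $"https://api.example.com/rates?from={from}&to={to}");
            return $"Latest {from} to {to} rate data: {json}";
        }
        catch (HttpRequestException ex)
        {
            // Return the failure as text so the conversation can continue
            return $"Error fetching exchange rate: {ex.Message}";
        }
    }
}

// Registered exactly like a synchronous tool
var rateTool = AIFunctionFactory.Create(
    new ExchangeRateTools().GetExchangeRate,
    name: "get_exchange_rate");
```

The registration call is identical to the synchronous case; the async nature of the method is handled entirely by the factory and the SDK's invocation pipeline.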

Handling Tool Parameters and Return Types

AIFunctionFactory generates JSON schema automatically based on your method signatures, but understanding how types map helps you design effective tools. The SDK supports primitive types like string, int, bool, double, as well as arrays, enums, and complex objects. Each parameter type becomes a JSON schema property with appropriate validation rules.
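Assuming the schema generation follows the usual .NET conventions, an enum parameter becomes a string property with an allowed-values list, and a complex object parameter becomes a nested object schema built from its properties. A sketch (ForecastTools, ForecastRequest, and TemperatureUnit are illustrative names, not SDK types):

```csharp
using System.ComponentModel;

public enum TemperatureUnit
{
    Celsius,
    Fahrenheit
}

public class ForecastRequest
{
    [Description("The city to forecast")]
    public string City { get; set; } = "";

    [Description("Number of days ahead, 1-7")]
    public int Days { get; set; }
}

public class ForecastTools
{
    [Description("Get a multi-day weather forecast")]
    public string GetForecast(
        [Description("The forecast request details")]
        ForecastRequest request,
        [Description("The temperature unit to report")]
        TemperatureUnit unit = TemperatureUnit.Celsius)
    {
        // Stub implementation for illustration only
        return $"{request.Days}-day forecast for {request.City} in {unit}";
    }
}
```

Defaulted parameters like unit above typically surface as optional properties in the schema, so the model can omit them.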

Parameter descriptions are mandatory for good tool design. The model uses these descriptions to understand what values to pass, so I make them clear and specific. Instead of "location parameter", I write "The city and state, e.g. San Francisco, CA". Instead of "query string", I write "The search query or keywords to find in documentation". These descriptions directly impact how well the model uses your tools.

Return types should be string or serializable objects. The SDK converts the return value to JSON and passes it to the model. I prefer returning structured strings with markdown formatting because the model handles those well for generating responses. Complex objects work too, but the model sees the JSON serialization, so I keep return values focused on information the model needs.
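Both styles look like this in practice. The OrderTools class and its hard-coded values are illustrative; the point is the contrast between a serialized object and a formatted string:

```csharp
using System.ComponentModel;

// A serializable record: the SDK converts this to JSON for the model
public record OrderSummary(int OrderId, decimal Total, string Status);

public class OrderTools
{
    [Description("Get a summary of an order as structured data")]
    public OrderSummary GetOrderSummary(
        [Description("The order ID")]
        int orderId)
    {
        // The model sees the JSON serialization of this object
        return new OrderSummary(orderId, 129.99m, "Shipped");
    }

    [Description("Get a summary of an order as formatted text")]
    public string GetOrderSummaryText(
        [Description("The order ID")]
        int orderId)
    {
        // A markdown string the model can quote directly in its answer
        return $"**Order {orderId}**\nTotal: $129.99\nStatus: Shipped";
    }
}
```

I reach for the string version when the output is destined for a user-facing answer, and the object version when a later tool call or programmatic consumer needs the fields.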

Error handling in tools is critical because exceptions during tool execution can break the conversation flow. I wrap tool bodies in try-catch blocks and return meaningful error messages:

[Description("Calculate factorial of a number")]
public string CalculateFactorial(
    [Description("The number to calculate factorial for (must be 0-20)")]
    int number)
{
    try
    {
        if (number < 0 || number > 20)
            return "Error: Number must be between 0 and 20";

        long result = 1;
        for (int i = 2; i <= number; i++)
            result *= i;

        return $"The factorial of {number} is {result}";
    }
    catch (Exception ex)
    {
        return $"Error calculating factorial: {ex.Message}";
    }
}

This approach keeps the conversation going even when tools fail. The model receives the error message as the tool result and can inform the user or try a different approach.

Chaining Tools and Multi-Step Reasoning

One of the most powerful patterns I use is letting the model chain multiple tools together to solve complex problems. The model can call a tool, examine the result, and decide to call another tool based on what it learned. The SDK supports this through the natural request/response cycle of the conversation.

Here's how it works: I register several related tools, and the model orchestrates them. For example, I might have search_user, get_user_orders, and calculate_refund tools. When asked "What refund does user [email protected] qualify for?", the model first calls search_user with the email, uses the returned user ID to call get_user_orders, and then calls calculate_refund with the order details. Each tool call happens sequentially, with the model reasoning between calls.

Tool call loops happen when the model needs to gather information iteratively. I've seen the model call a search tool multiple times with different queries, refining its search based on previous results. The SDK handles this by continuing the conversation: each tool result becomes a message in the conversation history, and the model can request another tool call in its next response.

The practical pattern for building agent-like systems with tools involves several key principles. First, I keep tools focused and single-purpose rather than creating monolithic functions. Second, I provide clear descriptions that help the model understand when each tool is appropriate. Third, I design tools that return enough context for the model to make the next decision. Here's a complete example:

public class CustomerServiceTools
{
    private readonly ICustomerRepository _customers;
    private readonly IOrderRepository _orders;

    public CustomerServiceTools(ICustomerRepository customers, IOrderRepository orders)
    {
        _customers = customers;
        _orders = orders;
    }

    [Description("Find a customer by email address")]
    public async Task<string> FindCustomer(
        [Description("The customer's email address")]
        string email)
    {
        var customer = await _customers.FindByEmailAsync(email);
        if (customer == null)
            return "Customer not found";

        return $"Customer ID: {customer.Id}, Name: {customer.Name}, Status: {customer.Status}";
    }

    [Description("Get recent orders for a customer")]
    public async Task<string> GetCustomerOrders(
        [Description("The customer ID")]
        int customerId,
        [Description("Number of recent orders to retrieve")]
        int count = 5)
    {
        var orders = await _orders.GetRecentOrdersAsync(customerId, count);
        if (!orders.Any())
            return "No orders found";

        var orderList = orders.Select(o =>
            $"Order {o.Id}: ${o.Total}, Status: {o.Status}, Date: {o.OrderDate:yyyy-MM-dd}");

        return string.Join("\n", orderList);
    }
}

When these tools are registered together, the model chains them automatically. A query like "Show me recent orders for [email protected]" results in the model calling FindCustomer first, extracting the customer ID from the result, and then calling GetCustomerOrders with that ID. I don't orchestrate this -- the model figures it out from the tool descriptions and the conversation context.

AIFunctionFactory vs Manual Tool Implementation

The GitHub Copilot SDK offers two approaches to tool creation: using AIFunctionFactory with method reflection and attributes, or manually implementing the AIFunction interface and providing your own JSON schema. I've used both, and each has clear use cases.

AIFunctionFactory with attributes is the fastest path to working tools. I add Description attributes to methods and parameters, call AIFunctionFactory.Create(), and I'm done. The SDK handles schema generation, parameter mapping, and invocation. This works great for straightforward tools where the C# type system maps cleanly to JSON schema. Most of my tools use this approach because it's maintainable and the code is self-documenting.

Manual implementation gives me complete control over the JSON schema and invocation logic. I implement AIFunction myself when I need schema features that don't map to C# types, when I want to handle parameters dynamically, or when I'm wrapping legacy code that doesn't fit the attribute pattern. Here's what manual implementation looks like:

public class CustomToolFunction : AIFunction
{
    public override string Name => "custom_tool";

    public override string Description => "A manually implemented tool with custom schema";

    public override JsonElement Schema => JsonDocument.Parse("""
    {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query"
            },
            "filters": {
                "type": "array",
                "items": { "type": "string" },
                "description": "Optional filters to apply"
            }
        },
        "required": ["query"]
    }
    """).RootElement;

    public override Task<object> InvokeAsync(JsonElement parameters)
    {
        var query = parameters.GetProperty("query").GetString();
        var filters = parameters.TryGetProperty("filters", out var filtersElement)
            ? filtersElement.EnumerateArray().Select(e => e.GetString()!).ToList()
            : new List<string>();

        // Implement tool logic here; await real async work and return its result
        return Task.FromResult<object>("Tool result");
    }
}

The trade-offs are clear. AIFunctionFactory is faster to write and easier to maintain, but manual implementation offers flexibility for complex schemas, dynamic parameter handling, and integration with existing systems. I start with AIFunctionFactory and only switch to manual implementation when I hit a limitation.

For most .NET developers building tools for the first time, AIFunctionFactory is the right choice. You can learn more about the overall SDK patterns in my GitHub Copilot SDK for .NET: Complete Developer Guide and see how tools fit into the broader architecture in Advanced GitHub Copilot SDK: Tools, Hooks, and Multi-Agent.

Frequently Asked Questions

What is AIFunctionFactory in the GitHub Copilot SDK?

AIFunctionFactory is a utility class in the GitHub Copilot SDK that converts C# methods into AI-callable tools. It uses reflection to read method signatures and Description attributes, generates the JSON schema that describes the function to the language model, and creates an AIFunction instance that the SDK can invoke when the model requests a tool call. You pass a method reference and optional name to AIFunctionFactory.Create(), and it returns a fully configured tool ready for registration with a CopilotSession.

How many tools can I register in a single session?

The SDK doesn't impose a hard limit on tool count, but practical constraints exist. Language models have context window limits that include tool schemas, so registering hundreds of tools will consume significant context space and may degrade performance. I typically register 5-15 tools per session, grouped by functionality. For applications needing many tools, I create separate sessions for different domains or implement dynamic tool selection where I register only relevant tools based on the conversation context.
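A minimal sketch of that dynamic-selection idea, reusing the tool classes from earlier. The grouping scheme here is my own convention, not an SDK feature:

```csharp
using GitHub.Copilot.SDK;

var weatherTools = new WeatherTools();
var docTools = new DocumentationTools(@"C:\docs");

// Group tools by domain so each session only carries what it needs
var toolGroups = new Dictionary<string, List<AIFunction>>
{
    ["weather"] = new()
    {
        AIFunctionFactory.Create(weatherTools.GetWeather, name: "get_weather")
    },
    ["docs"] = new()
    {
        AIFunctionFactory.Create(docTools.SearchDocumentation, name: "search_documentation")
    }
};

SessionConfig BuildConfig(string domain) => new SessionConfig
{
    Model = "gpt-5",
    Tools = toolGroups.TryGetValue(domain, out var tools)
        ? tools
        : new List<AIFunction>()
};

// Only the docs tools consume context space in this session
var session = await client.CreateSessionAsync(BuildConfig("docs"));
```

Keeping each session's tool list small both saves context tokens and makes the model's tool choice more reliable.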

Can tools be async in the GitHub Copilot SDK?

Yes, tools can be async methods returning Task&lt;string&gt; or another Task&lt;T&gt;. AIFunctionFactory handles async methods automatically, and the SDK awaits them during invocation. I use async tools extensively for database queries, API calls, file operations, and any I/O-bound work. The pattern is identical to sync tools -- add Description attributes, return Task&lt;string&gt; instead of string, and AIFunctionFactory wraps it correctly. This is essential for production applications where tools interact with external systems or perform operations that shouldn't block.

How does AIFunctionFactory compare to Semantic Kernel plugins?

AIFunctionFactory in the GitHub Copilot SDK and Semantic Kernel plugins solve similar problems with different philosophies. Semantic Kernel provides a comprehensive plugin system with dependency injection, prompt templates, and multi-model orchestration, while AIFunctionFactory focuses specifically on tool calling for GitHub Copilot models. AIFunctionFactory is simpler and more lightweight, making it easier to add tools to existing .NET applications. Semantic Kernel offers more features for complex AI workflows but requires adopting its patterns and abstractions. I choose AIFunctionFactory when building Copilot-specific integrations and Semantic Kernel for broader AI applications needing multi-model support and advanced orchestration.

Conclusion

Building custom AI tools with AIFunctionFactory in the GitHub Copilot SDK for C# transforms static AI interactions into dynamic, data-driven applications. I've shown you how to create tools from ordinary methods, register them with sessions, handle parameters and return values, and chain tools for multi-step reasoning. The pattern is straightforward: define methods with clear descriptions, register them with AIFunctionFactory, and let the model orchestrate when to use them.

Start with simple tools like the weather or search examples, then expand to your domain-specific needs. The SDK handles the complexity of JSON schema generation and tool invocation, letting you focus on implementing useful functionality. If you're new to the SDK, check out Getting Started with GitHub Copilot SDK in C# for foundation concepts, and explore CopilotClient and CopilotSession: Core Concepts in C# to understand how tools integrate with session management. For handling responses efficiently, see Streaming Responses with GitHub Copilot SDK in C#.

The official GitHub Copilot SDK repository contains source code and examples you can reference. Microsoft's documentation on building AI applications with .NET provides broader context on AI development patterns. The GitHub Copilot documentation covers platform capabilities beyond the SDK. For OpenAI's function calling concepts that underpin this functionality, see the OpenAI function calling guide.

Custom tools unlock the full potential of AI in your applications by connecting models to real data and business logic. Start building your tools and see how AIFunctionFactory makes AI-driven features practical and maintainable in C#.
