Build an AI Code Review Bot with Semantic Kernel in C#

Code review is one of those things every team knows they should do consistently but rarely manages to do well. Time pressure, reviewer fatigue, inconsistent standards -- the result is that bugs slip through, security issues go unnoticed, and code quality erodes slowly over time. After spending the last few weeks writing about Semantic Kernel in C#, I wanted to put everything together into something real and useful. So I built an AI code review bot with Semantic Kernel in C# that takes a file or folder of C# code and produces a structured markdown report covering bugs, security, performance, and style -- all orchestrated by a ChatCompletionAgent with four specialized plugins.

In this walkthrough I'll show you exactly how to build an AI code review bot with Semantic Kernel in C# from scratch -- the plugin design, the agent orchestration, and the configuration setup. By the end you'll have a working tool you can point at any C# project.

The full source is on GitHub: ncosentino/DevLeader -- ai-code-review-bot

In this article I'll walk through every part of the build, show you the actual code, and share what happened when I pointed the bot at its own source files.

How the AI Code Review Bot Uses Semantic Kernel

Before diving into code, it's worth understanding why Semantic Kernel is the right choice here rather than calling the OpenAI API directly. The ChatCompletionAgent with FunctionChoiceBehavior.Auto() means you don't have to manually orchestrate four separate AI calls and then stitch the results together. Instead, you describe what each plugin does (via [Description] attributes), configure the agent to use all available tools, and the LLM decides how and when to call each one autonomously. You get synthesis -- not just concatenation -- because the agent sees all four plugin results before writing its summary.

The plugin pattern also enforces separation of concerns in a way that manual API calls don't. Each plugin has a focused system prompt tuned for one review dimension. Mixing all four concerns into a single prompt would produce worse results -- the model would try to balance them and often skip the less prominent ones. Separate plugins with clear descriptions give each dimension full attention.

What We're Building

The AI code review bot with Semantic Kernel is a .NET 9 console application with this architecture:

  • Four [KernelFunction] plugins -- each focused on one review dimension: bugs, security, performance, and style. Each plugin uses Kernel parameter injection to call IChatCompletionService with a specialized system prompt.
  • One ChatCompletionAgent orchestrator -- a ChatCompletionAgent with FunctionChoiceBehavior.Auto() that autonomously calls all four plugins and synthesizes the results into a structured report.
  • Configurable AI provider -- supports both OpenAI and Azure OpenAI via appsettings.json, so you can run it against whatever backend you have access to.
  • File or folder input -- point it at a single .cs file or a directory, and it reviews everything, skipping bin/ and obj/ folders automatically.

The output is a markdown report you can pipe to a file (--output review.md) or read directly in the terminal.

This project deliberately exercises the SK concepts covered in the recent articles in this series -- Semantic Kernel plugins, custom plugin creation, ChatCompletionAgent, and function calling -- in a single working application.

Prerequisites

Before you start, make sure you have the following in place. The application targets .NET 9 and uses the latest SK Agents API, so an up-to-date SDK is required:

  • .NET 9 SDK
  • An OpenAI API key or an Azure OpenAI deployment (GPT-4o or GPT-4.1 recommended)
  • Basic familiarity with Semantic Kernel -- if you're new to SK, start with the Semantic Kernel complete guide

Project Setup

Create a .NET 9 console app and add these packages:

dotnet new console -n ai-code-review-bot -f net9.0
cd ai-code-review-bot

dotnet add package Microsoft.SemanticKernel --version 1.72.0
dotnet add package Microsoft.SemanticKernel.Agents.Core --version 1.72.0
dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI --version 1.72.0
dotnet add package Microsoft.Extensions.Configuration.Json --version 9.0.0
dotnet add package Microsoft.Extensions.Configuration.Binder --version 9.0.0
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables --version 9.0.0

One important note: all three SK packages must be pinned to the same version. If you use --prerelease for Agents.Core, it may resolve to a different version than your other SK packages, causing a NU1605 downgrade conflict. Pin all three explicitly. Also add <NoWarn>$(NoWarn);SKEXP0001;SKEXP0110</NoWarn> to your .csproj to suppress experimental feature warnings.
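For reference, the relevant portion of the .csproj might look like this (versions match the commands above; the exact numbers are whatever matching versions you install):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net9.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <!-- Suppress experimental-feature warnings from the SK Agents API -->
    <NoWarn>$(NoWarn);SKEXP0001;SKEXP0110</NoWarn>
  </PropertyGroup>
  <ItemGroup>
    <!-- All SK packages pinned to the same version to avoid NU1605 downgrade conflicts -->
    <PackageReference Include="Microsoft.SemanticKernel" Version="1.72.0" />
    <PackageReference Include="Microsoft.SemanticKernel.Agents.Core" Version="1.72.0" />
    <PackageReference Include="Microsoft.SemanticKernel.Connectors.AzureOpenAI" Version="1.72.0" />
  </ItemGroup>
</Project>
```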

Configuration

The bot uses IConfiguration with the standard appsettings.json + environment variable pattern, so you can override credentials without touching source files. This also makes it easy to run in CI by setting environment variables instead of committing secrets. Create this file:

{
  "AIProvider": {
    "Type": "openai",
    "ModelId": "gpt-4o",
    "ApiKey": "",
    "Endpoint": ""
  }
}

For local development, copy this to appsettings.Development.json (add it to .gitignore) and fill in your credentials. For Azure OpenAI:

{
  "AIProvider": {
    "Type": "azureopenai",
    "ModelId": "your-deployment-name",
    "Endpoint": "https://your-resource.openai.azure.com/",
    "ApiKey": "your-key"
  }
}
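Because environment variables are layered on top of appsettings.json, you can override any nested key at runtime using .NET's standard double-underscore separator for section nesting -- handy for CI pipelines where you never want credentials in a file (the values below are placeholders):

```shell
# AIProvider__ApiKey maps to the AIProvider:ApiKey config key
export AIProvider__Type="azureopenai"
export AIProvider__ApiKey="<your-key>"
export AIProvider__Endpoint="https://your-resource.openai.azure.com/"
dotnet run -- --path src/
```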

The corresponding config class is straightforward:

// Configuration/AIProviderConfig.cs
namespace AiCodeReviewBot.Configuration;

public sealed class AIProviderConfig
{
    public string Type { get; set; } = "openai";
    public string ModelId { get; set; } = "gpt-4o";
    public string? Endpoint { get; set; }
    public string ApiKey { get; set; } = string.Empty;
}

Building the Review Plugins

Each plugin is a class with a single [KernelFunction] method. The key SK pattern here is Kernel parameter injection -- SK automatically injects the current Kernel into any plugin function that declares it as a parameter. This gives the plugin access to IChatCompletionService without constructor dependency injection.

Here's the BugDetectionPlugin -- the others follow the same structure with different system prompts:

// Plugins/BugDetectionPlugin.cs
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

namespace AiCodeReviewBot.Plugins;

public sealed class BugDetectionPlugin
{
    [KernelFunction]
    [Description("Reviews C# code for potential bugs including null reference exceptions, " +
                 "off-by-one errors, resource leaks, async/await misuse, and logic errors")]
    public async Task<string> ReviewForBugsAsync(
        Kernel kernel,  // SK injects this automatically
        [Description("The C# source code to analyze for bugs")] string code)
    {
        var chat = kernel.GetRequiredService<IChatCompletionService>();
        var history = new ChatHistory(
            """
            You are an expert C# code reviewer specializing in bug detection.
            Analyze the provided code for: null reference exceptions, resource leaks, async/await misuse,
            off-by-one errors, incorrect exception handling, race conditions, and logic errors.
            Be specific -- reference line numbers or code constructs when possible.
            Format your response as markdown. Use severity labels: 🔴 High, 🟡 Medium, 🟢 Low.
            If no bugs are found, state that clearly.
            """);
        history.AddUserMessage(
            $"""
            Review this C# code for bugs:

            ```csharp
            {code}
            ```
            """);

        var result = await chat.GetChatMessageContentAsync(history);
        return result.Content ?? "No response received.";
    }
}

The [Description] attribute on both the method and the code parameter is critical -- it's what the LLM uses to decide when to call this function and what to pass as code. Without clear descriptions, the orchestrator agent might skip plugins or pass the wrong arguments. This is a core plugin best practice worth remembering.

The other three plugins follow the exact same pattern with different system prompts targeting security vulnerabilities, performance issues, and C# style/naming conventions respectively. See the full source on GitHub for all four.

Building the ReviewOrchestrator

The ReviewOrchestrator wraps a ChatCompletionAgent that autonomously calls all four plugins and synthesizes the results. The agent's Instructions are explicit -- it must call all four tools, not just the ones it thinks are relevant:

// Agents/ReviewOrchestrator.cs
using System.Text;
using AiCodeReviewBot.Models;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Connectors.OpenAI;

namespace AiCodeReviewBot.Agents;

public sealed class ReviewOrchestrator
{
    private readonly ChatCompletionAgent _agent;

    public ReviewOrchestrator(Kernel kernel)
    {
        _agent = new ChatCompletionAgent
        {
            Name = "CodeReviewer",
            Instructions =
                """
                You are a senior C# code reviewer. When given code to review, you MUST:

                1. Call ReviewForBugsAsync to check for bugs and defects
                2. Call ReviewForSecurityAsync to check for security vulnerabilities
                3. Call ReviewForPerformanceAsync to check for performance issues
                4. Call ReviewForStyleAsync to check for style and best practice violations

                After collecting all results, synthesize them into a comprehensive markdown report with:
                ## Executive Summary, ## Bugs, ## Security, ## Performance, ## Style & Best Practices,
                and ## Overall Recommendation (✅ Approved / ⚠️ Needs Minor Changes / ❌ Major Revision Required)
                """,
            Kernel = kernel,
            Arguments = new KernelArguments(new OpenAIPromptExecutionSettings
            {
                FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
            })
        };
    }

    public async Task<ReviewResult> ReviewCodeAsync(
        string code,
        string fileName,
        CancellationToken cancellationToken = default)
    {
        var thread = new ChatHistoryAgentThread();
        var prompt =
            $"""
            Please review the following C# code from '{fileName}':

            ```csharp
            {code}
            ```
            """;
        var sb = new StringBuilder();

        try
        {
            await foreach (var response in _agent.InvokeAsync(prompt, thread, cancellationToken: cancellationToken))
            {
                // SK 1.71.0+: InvokeAsync returns AgentResponseItem<ChatMessageContent>
                // Access the underlying message content via .Message
                var content = response.Message?.Content;
                if (!string.IsNullOrEmpty(content))
                    sb.Append(content);
            }

            return new ReviewResult { FileName = fileName, Review = sb.ToString(), Success = true };
        }
        catch (Exception ex)
        {
            return new ReviewResult { FileName = fileName, Success = false, ErrorMessage = ex.Message };
        }
    }
}

A few things worth noting here. First, FunctionChoiceBehavior.Auto() means the LLM decides when to call each tool -- but the agent instructions explicitly tell it to call all four, so it does. If you used FunctionChoiceBehavior.Required() instead, you'd have more control over which functions are mandatory. For this use case, Auto() with clear instructions works well.

Second, in SK 1.71.0 and later, InvokeAsync returns IAsyncEnumerable<AgentResponseItem<ChatMessageContent>> -- the ChatMessageContent is wrapped in an AgentResponseItem<T>. You access it via .Message. This was one of those breaking changes from earlier SK versions where InvokeAsync returned IAsyncEnumerable<ChatMessageContent> directly. Always check the return type when upgrading SK versions. For a deeper look at when to use ChatCompletionAgent vs AssistantAgent, see ChatCompletionAgent vs AssistantAgent in Semantic Kernel.
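One thing the snippet above doesn't show is the ReviewResult model. Based on how the orchestrator constructs it, a minimal version would look something like this -- a sketch consistent with the usage above; the actual class in the repo may carry additional metadata:

```csharp
// Models/ReviewResult.cs -- minimal sketch consistent with the orchestrator's usage
namespace AiCodeReviewBot.Models;

public sealed class ReviewResult
{
    public required string FileName { get; init; }
    public string Review { get; init; } = string.Empty;
    public bool Success { get; init; }
    public string? ErrorMessage { get; init; }
}
```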

Wiring It All Together

Program.cs handles argument parsing, kernel setup, file discovery, and report generation. The kernel setup is where the configurable provider logic lives:

// Program.cs (kernel setup section)
var builder = Kernel.CreateBuilder();

if (providerConfig.Type.Equals("azureopenai", StringComparison.OrdinalIgnoreCase))
{
    builder.AddAzureOpenAIChatCompletion(
        deploymentName: providerConfig.ModelId,
        endpoint: providerConfig.Endpoint!,
        apiKey: providerConfig.ApiKey);
}
else
{
    builder.AddOpenAIChatCompletion(
        modelId: providerConfig.ModelId,
        apiKey: providerConfig.ApiKey);
}

// Register all four review plugins
builder.Plugins.AddFromType<BugDetectionPlugin>();
builder.Plugins.AddFromType<SecurityPlugin>();
builder.Plugins.AddFromType<PerformancePlugin>();
builder.Plugins.AddFromType<StylePlugin>();

var kernel = builder.Build();

The AddFromType<T>() pattern for plugin registration is covered in detail in the Semantic Kernel agents guide. All four plugins are registered on the kernel before it's built, which makes them available to the ChatCompletionAgent when it decides which tools to call.

The file discovery logic skips bin/ and obj/ directories so you can safely point the bot at a project root:

string[] files = File.Exists(filePath)
    ? [filePath]
    : Directory.GetFiles(filePath, "*.cs", SearchOption.AllDirectories)
        .Where(f =>
            !f.Contains($"{Path.DirectorySeparatorChar}bin{Path.DirectorySeparatorChar}") &&
            !f.Contains($"{Path.DirectorySeparatorChar}obj{Path.DirectorySeparatorChar}"))
        .OrderBy(f => f)
        .ToArray();

The full Program.cs is on GitHub.

Running the Bot

Once your credentials are configured, running the bot is straightforward. The CLI supports three modes: single file review, folder review (recursive), and folder review with output to a file. The folder mode is the most practical for day-to-day use:

# Review a single file
dotnet run -- --path src/MyService.cs

# Review all .cs files in a folder
dotnet run -- --path src/

# Save the report to a file
dotnet run -- --path src/ --output review-report.md
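The repo's Program.cs handles the argument parsing; a minimal sketch of the --path/--output handling might look like the helper below (the helper name and shape are illustrative, not the repo's exact code):

```csharp
// Hypothetical sketch of CLI parsing for --path and --output;
// the real Program.cs on GitHub is the reference implementation.
static (string Path, string? Output) ParseArgs(string[] args)
{
    string? path = null;
    string? output = null;

    // Scan flag/value pairs; the value follows its flag
    for (var i = 0; i < args.Length - 1; i++)
    {
        if (args[i] == "--path") path = args[i + 1];
        else if (args[i] == "--output") output = args[i + 1];
    }

    return path is null
        ? throw new ArgumentException("--path is required")
        : (path, output);
}
```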

The output looks like this:

AI Code Review Bot -- 1 file(s) to review
Provider: azureopenai | Model: gpt-4.1
------------------------------------------------------------
  Reviewing BugDetectionPlugin.cs... ✓

# AI Code Review Report

**Generated:** 2026-02-19 10:43
**Files reviewed:** 1 (1 succeeded, 0 failed)
**Provider:** azureopenai / gpt-4.1

What the Bot Found When It Reviewed Itself

One of the best tests of any tool is turning it on itself. I ran the bot against BugDetectionPlugin.cs -- one of its own source files -- and it found several legitimate issues:

Bugs: Missing null checks on kernel and code parameters (🟡 Medium). Both parameters are used without validation. If either is null, you get a runtime NullReferenceException with no helpful message.

Security: No input validation on the code parameter (🔴 High). A prompt injection attack is possible if user-supplied code contains carefully crafted instructions that hijack the reviewer's behavior. Not critical for an internal dev tool, but worth noting.

Performance: No CancellationToken forwarding (🟡 Medium). The plugin accepts no cancellation token, meaning long-running AI calls can't be cancelled. The .ConfigureAwait(false) pattern is also missing on the await call.

Style: No XML documentation on the public class and method (🟢 Suggested). For a plugin that's part of a library, this is a reasonable ask.

These are all valid observations. For a production plugin library you'd want all of them addressed. For a blog demo, I left them as-is to give the bot something real to find -- and now they're documented in the review output. The bot's overall verdict was "⚠️ Needs Minor Changes", which is accurate.

This is a genuinely useful capability. The bot isn't replacing a human reviewer, but it will catch things that slip through when reviewers are tired or rushed -- and it's consistent every time.

FAQ

What model should I use with the AI code review bot Semantic Kernel C# app?

GPT-4o or GPT-4.1 work well for this use case. GPT-4o-mini is significantly cheaper per token but produces noticeably less detailed security and performance analysis. For personal or team use, start with GPT-4o and drop to mini if cost becomes a concern.

Why use FunctionChoiceBehavior.Auto() instead of calling each plugin directly?

You could call each plugin directly in sequence without an agent. The reason to use FunctionChoiceBehavior.Auto() with a ChatCompletionAgent is that the agent can synthesize results from all four plugins into a coherent report with executive summary and overall recommendation -- not just a concatenation of four raw outputs. The agent understands context across all four reviews when writing its summary.

Can I add my own review plugin?

Yes. Create a new class with a [KernelFunction] method following the same pattern as BugDetectionPlugin, then register it with builder.Plugins.AddFromType<YourPlugin>() and update the agent's Instructions to include it. The custom plugins guide covers the plugin authoring patterns in depth.

Does this work on files other than C#?

The plugins are all prompted for C# specifically. You could adapt them for other languages by changing the system prompts, but you'd also want to update the file discovery logic to look for .py, .ts, etc. instead of .cs.

Why does the bot need Configuration.Binder as a separate package?

IConfigurationSection.Get<T>() is an extension method from Microsoft.Extensions.Configuration.Binder. Even though Microsoft.SemanticKernel transitively depends on it, the extension method won't resolve at compile time without an explicit package reference. This is a common gotcha in .NET projects that use IConfiguration -- always add Configuration.Binder explicitly.

What happens if a file is too large for the model's context?

The bot warns you if a file exceeds 50,000 characters and truncates it before sending. Real-world C# files rarely hit this limit (a 50K character file is roughly 1,500 lines), but very large generated files or files with large string literals can. For production use, consider splitting large files or reviewing individual classes.
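The truncation check itself is simple string handling; a sketch under the same 50,000-character assumption (the helper name is illustrative, not the repo's exact code):

```csharp
// Hypothetical sketch: guard against blowing the model's context window.
// The 50,000-character threshold matches the limit described above.
static string PrepareForReview(string code, int maxChars = 50_000)
{
    if (code.Length <= maxChars)
        return code;

    // Warn, then truncate to the limit before sending to the model
    Console.Error.WriteLine(
        $"Warning: file truncated from {code.Length} to {maxChars} characters.");
    return code[..maxChars];
}
```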

What's next -- can this review GitHub PRs or Azure DevOps PRs?

That's the natural extension for both platforms. For GitHub, the approach is to call GET /repos/{owner}/{repo}/pulls/{pull_number} with the diff Accept header, pass the changed .cs hunks through the same four plugins, and post the result back with POST /repos/{owner}/{repo}/issues/{pull_number}/comments. For Azure DevOps, the REST API uses PR iteration endpoints to retrieve changed files and a threads POST to publish the review. Both follow the same structure: an IDiffProvider interface with platform-specific implementations, and an ICommentPoster that posts the markdown report. I've scoped that as a separate project and have the architecture detailed in the vibe-coding log.
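To make those abstractions concrete, a sketch of the shape they might take -- names and signatures are speculative placeholders for the follow-up project, and DiffFile is a hypothetical record, not something in the current repo:

```csharp
// Hypothetical platform-agnostic abstractions for the PR-review follow-up.
// DiffFile is an illustrative record; none of this is the final design.
public sealed record DiffFile(string Path, string Content);

public interface IDiffProvider
{
    // Fetch the changed .cs files for a pull request
    Task<IReadOnlyList<DiffFile>> GetChangedFilesAsync(
        string repo, int pullRequestId, CancellationToken ct = default);
}

public interface ICommentPoster
{
    // Post the synthesized markdown report back to the PR
    Task PostReviewAsync(
        string repo, int pullRequestId, string markdownReport, CancellationToken ct = default);
}
```

The review core would only ever see these two interfaces, so swapping GitHub for Azure DevOps becomes a composition-root decision.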

Conclusion

Building an AI code review bot with Semantic Kernel in C# is a natural fit for the SK plugin + agent pattern. The four-plugin design keeps each review dimension focused, the ChatCompletionAgent with FunctionChoiceBehavior.Auto() handles orchestration without manual sequencing, and the IConfiguration abstraction lets you switch between OpenAI and Azure OpenAI without code changes. The bot reviewed its own source code and found legitimate issues -- that's the kind of consistent, tireless analysis that's hard to get from human-only review.

If you want to extend this, the most valuable addition is PR integration -- both GitHub and Azure DevOps. For GitHub: fetch the PR diff via GET /repos/{owner}/{repo}/pulls/{pull_number} with the diff Accept header, run each changed file through the plugins, and post back via the Issues comments endpoint. For Azure DevOps: use the PR iterations API to get changed files, then the diffs/commits endpoint to reconstruct what changed. Both platforms need an IDiffProvider abstraction with separate implementations so the review core stays platform-agnostic. I have the full architecture plan in the vibe-coding log and it'll be the follow-up article.

The full source is at github.com/ncosentino/DevLeader/tree/master/semantic-kernel-examples/ai-code-review-bot. For more on the SK patterns used here, see the Semantic Kernel complete guide; for more advanced patterns, see the multi-agent orchestration guide.

