Build an AI Task Planner with Semantic Kernel in C#

Yesterday I walked through building an AI Code Review Bot with Semantic Kernel using ChatCompletionAgent and autonomous tool dispatch. Today the pattern is deliberately different. Building an AI task planner with Semantic Kernel in C# means we want a fixed, ordered workflow -- analyze a goal, break it into phases, prioritize with effort estimates -- and that's exactly where sequential pipelines outperform agents.

This post walks through the complete implementation: KernelFunctionFactory.CreateFromPrompt() for inline prompt authoring, Kernel.InvokeAsync() for direct function invocation, and KernelArguments to carry typed context between steps. The full source code is available at github.com/devleader/semantic-kernel-examples in the ai-task-planner subfolder.

Why a Sequential Pipeline Instead of an Agent?

The AI Code Review Bot used ChatCompletionAgent with FunctionChoiceBehavior.Auto() -- the LLM autonomously decided which review plugins to call and in what order. That's the right call when the workflow is open-ended.

Task planning is different. The workflow is fully known in advance:

  1. Analyze the goal -- extract scope, constraints, and success criteria
  2. Break the goal into phases and concrete tasks
  3. Assign priorities and effort estimates to each task

There's no reason to let the LLM decide whether to skip step two. A sequential pipeline is more predictable, easier to debug, and makes the data flow explicit in code. When something goes wrong, you know exactly which step failed and what input it received.

This is a core design decision when building AI agents with Semantic Kernel in C# -- agents are for open-ended reasoning, pipelines are for known workflows. The Semantic Kernel in C# complete guide covers both patterns at a higher level.

Project Setup

The app is a .NET 9 console project with only two SK packages -- no Agents.Core required:

<PackageReference Include="Microsoft.SemanticKernel" Version="1.72.0" />
<PackageReference Include="Microsoft.SemanticKernel.Connectors.AzureOpenAI" Version="1.72.0" />

The project layout keeps responsibilities clean:

ai-task-planner/
├── Configuration/
│   └── AIProviderConfig.cs         (Type, ModelId, Endpoint, ApiKey)
├── Models/
│   ├── GoalAnalysis.cs             (Scope, Constraints[], SuccessCriteria[])
│   ├── TaskBreakdown.cs            (Phases[{Name, Tasks[]}])
│   └── TaskPlan.cs                 (Phases[{Name, Tasks[{Name, Priority, EstimatedHours}]}])
├── Planning/
│   └── TaskPlannerPipeline.cs      (3 inline prompt functions + orchestration)
└── Program.cs                      (arg parsing, kernel setup, report output)
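The model types listed above map naturally to small records. Here is a sketch of plausible shapes -- the property names are inferred from the layout annotations, so treat them as assumptions rather than the repo's exact definitions:

```csharp
using System.Collections.Generic;

// Hypothetical model shapes inferred from the project layout above.
public sealed record GoalAnalysis
{
    public string Scope { get; init; } = "";
    public string[] Constraints { get; init; } = [];
    public string[] SuccessCriteria { get; init; } = [];
}

public sealed record TaskBreakdown
{
    public List<Phase> Phases { get; init; } = [];
    public sealed record Phase(string Name, List<string> Tasks);
}

public sealed record TaskPlan
{
    public List<PlanPhase> Phases { get; init; } = [];
    public sealed record PlanPhase(string Name, List<PlanTask> Tasks);
    public sealed record PlanTask(string Name, string Priority, double EstimatedHours);
}
```

Keeping these as plain records with init-only properties means JsonSerializer can hydrate them directly from the LLM's JSON output with no custom converters.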

The appsettings.json / appsettings.Development.json pattern is the same as the code review bot -- the development file is gitignored and holds the actual API key:

{
  "AIProvider": {
    "Type": "azureopenai",
    "ModelId": "gpt-4o",
    "Endpoint": "https://your-resource.openai.azure.com/",
    "ApiKey": "your-key-here"
  }
}

Defining Prompt Functions Inline with KernelFunctionFactory

In How to Create Custom Plugins for Semantic Kernel in C#, you saw how to author plugins as C# classes with [KernelFunction] attributes. KernelFunctionFactory.CreateFromPrompt() is the lighter-weight alternative for functions that are pure prompt logic with no C# business code.

Here is the goal analysis function:

_analyzeGoalFn = KernelFunctionFactory.CreateFromPrompt(
    """
    Analyze the following project goal and return a JSON object with this exact structure:
    {
      "scope": "one sentence describing what the project covers",
      "constraints": ["constraint1", "constraint2"],
      "successCriteria": ["criteria1", "criteria2"]
    }

    Goal: {{$goal}}

    Return only valid JSON. No markdown fences, no explanation.
    """,
    functionName: "AnalyzeGoal",
    description: "Analyzes a project goal and extracts scope, constraints, and success criteria");

The {{$goal}} syntax is SK's template variable notation -- at runtime, KernelArguments provides the value. The functionName and description matter: SK uses them for logging and, if you were using tool-calling mode, for auto-dispatch. Here they serve as documentation.

The same pattern applies to the task breakdown and prioritization steps. Each function has its own prompt, its own input variables, and its own expected JSON schema. When you define all three in the constructor, any configuration errors surface at startup rather than partway through a three-step pipeline run.
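For reference, the second function might look something like this -- the prompt wording and the _breakdownFn field name are my assumptions, not the repo's exact text, but the shape mirrors the AnalyzeGoal function above:

```csharp
// Sketch only -- prompt wording and field name are assumptions.
_breakdownFn = KernelFunctionFactory.CreateFromPrompt(
    """
    Break the following goal into phases, each with concrete tasks.
    Return a JSON object with this exact structure:
    { "phases": [ { "name": "phase name", "tasks": ["task1", "task2"] } ] }

    Goal: {{$goal}}
    Scope: {{$scope}}
    Constraints: {{$constraints}}

    Return only valid JSON. No markdown fences, no explanation.
    """,
    functionName: "GenerateTaskBreakdown",
    description: "Breaks a goal into phases and concrete tasks");
```

Note that the template variables match the KernelArguments keys the pipeline method supplies later -- goal, scope, and constraints.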

When to Use CreateFromPrompt vs a Plugin Class

Use KernelFunctionFactory.CreateFromPrompt() when:

  • The function's entire logic is a prompt
  • No C# code needs to execute alongside the LLM call
  • You want to keep the prompt collocated with the code that uses it

Use a C# plugin class with [KernelFunction] when:

  • The function calls an external API, database, or file system
  • You need to validate or transform inputs before they reach the LLM
  • You want the function to be reusable across multiple kernels or agents

For the task planner, all three steps are pure prompt logic -- CreateFromPrompt() is the right choice. For the AI Code Review Bot, plugin classes made sense because each review function needs to receive C# source code from disk and return structured review output.

Invoking Functions with Kernel.InvokeAsync and KernelArguments

The TaskPlannerPipeline constructor stores the three KernelFunction instances. The pipeline methods invoke them directly using Kernel.InvokeAsync():

public async Task<GoalAnalysis> AnalyzeGoalAsync(
    string goal,
    CancellationToken cancellationToken = default)
{
    var settings = new OpenAIPromptExecutionSettings
    {
        ResponseFormat = "json_object",
        FunctionChoiceBehavior = FunctionChoiceBehavior.None()
    };

    var args = new KernelArguments(settings) { ["goal"] = goal };

    var result = await _kernel.InvokeAsync(_analyzeGoalFn, args, cancellationToken);
    var json = result.GetValue<string>() ?? "{}";

    return JsonSerializer.Deserialize<GoalAnalysis>(json, JsonOptions)
        ?? new GoalAnalysis { Scope = goal };
}

Two OpenAIPromptExecutionSettings choices drive the reliability of this pipeline:

ResponseFormat = "json_object" is the most important setting. Without it, even a prompt that explicitly says "return only valid JSON" will occasionally produce JSON wrapped in a markdown code fence or prefixed with "Here is the JSON object:" -- both of which cause JsonSerializer.Deserialize to throw. With json_object mode enabled, the model API enforces valid JSON at the response level, not just via prompt instruction.

FunctionChoiceBehavior.None() disables automatic tool calling. In this pipeline, you're calling functions directly -- there's no tool dispatch happening. Setting None() makes this explicit in the code and prevents any unexpected behavior if you later register plugins on the kernel. The Semantic Kernel function calling article covers the full range of FunctionChoiceBehavior options.

The KernelArguments Constructor Overload

Notice that OpenAIPromptExecutionSettings is passed directly to the KernelArguments constructor rather than being set separately. This constructor overload attaches the execution settings to every function call made with those arguments -- you don't need a separate settings registration step.

Passing Context Between Pipeline Steps

The second and third steps receive output from earlier steps as KernelArguments values. Step 2 takes scope and constraints extracted from GoalAnalysis:

public async Task<TaskBreakdown> GenerateTaskBreakdownAsync(
    string goal,
    GoalAnalysis analysis,
    CancellationToken cancellationToken = default)
{
    var settings = new OpenAIPromptExecutionSettings
    {
        ResponseFormat = "json_object",
        FunctionChoiceBehavior = FunctionChoiceBehavior.None()
    };

    var args = new KernelArguments(settings)
    {
        ["goal"] = goal,
        ["scope"] = analysis.Scope,
        ["constraints"] = string.Join("; ", analysis.Constraints)
    };

    var result = await _kernel.InvokeAsync(_breakdownFn, args, cancellationToken);
    var json = result.GetValue<string>() ?? "{}";

    return JsonSerializer.Deserialize<TaskBreakdown>(json, JsonOptions)
        ?? new TaskBreakdown();
}

Step 3 serializes the TaskBreakdown back to JSON and passes it as a single string argument:

var breakdownJson = JsonSerializer.Serialize(breakdown, JsonOptions);

var args = new KernelArguments(settings)
{
    ["goal"] = goal,
    ["breakdown"] = breakdownJson
};

This approach has a deliberate design benefit: each step receives only the data it actually needs. There's no shared state object accumulating context across the pipeline. If you need to add a fourth step, you add a new method that takes the previous step's output as a parameter -- there's nothing to hunt down in shared mutable state.

The FunctionResult.GetValue<string>() call extracts the raw text from SK's result object. This is the standard approach for prompt-only functions -- if you were invoking a native C# function, you'd use the matching generic type.

Orchestrating the Three-Step Pipeline

Program.cs calls the three pipeline methods in sequence:

var pipeline = new TaskPlannerPipeline(kernel);

Console.Write("Step 1/3: Analyzing goal...");
var analysis = await pipeline.AnalyzeGoalAsync(goal);
Console.WriteLine(" ✓");

Console.Write("Step 2/3: Breaking down into tasks...");
var breakdown = await pipeline.GenerateTaskBreakdownAsync(goal, analysis);
Console.WriteLine(" ✓");

Console.Write("Step 3/3: Prioritizing and estimating effort...");
var plan = await pipeline.PrioritizeAndEstimateAsync(goal, breakdown);
Console.WriteLine(" ✓");

There is no agent loop, no function selection, no retry logic baked in. The pipeline is sequential and deterministic. If any step fails (network error, JSON parse failure), the exception propagates immediately and the outer try/catch in Program.cs catches it with a clean error message.

The finished plan is a strongly typed TaskPlan object with phases, task names, priorities (High/Medium/Low), and hour estimates. The report formatter in Program.cs writes it as markdown to stdout or a file via --output.
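The formatter itself is plain string building once the plan is strongly typed. A minimal sketch of the grouping and effort totaling, using named tuples as a stand-in for the real TaskPlan model:

```csharp
using System;
using System.Linq;
using System.Text;

// Stand-in data for a TaskPlan: (phase, task, priority, hours) tuples replace
// the real strongly typed model purely for illustration.
var tasks = new[]
{
    (Phase: "Planning & Design", Name: "Define API endpoints", Priority: "High", Hours: 4),
    (Phase: "Planning & Design", Name: "Design database schema", Priority: "High", Hours: 6),
    (Phase: "Implementation", Name: "Implement auth endpoints", Priority: "Medium", Hours: 8),
};

// Group tasks by phase and emit the markdown sections shown in the sample output.
var report = new StringBuilder();
foreach (var phase in tasks.GroupBy(t => t.Phase))
{
    report.AppendLine($"## {phase.Key}");
    foreach (var t in phase)
        report.AppendLine($"- [{t.Priority}] {t.Name} -- {t.Hours}h");
    report.AppendLine();
}

var totalHours = tasks.Sum(t => t.Hours);
report.AppendLine($"**Total estimated effort:** {totalHours}h");
Console.WriteLine(report.ToString());
```

Writing to a file for --output is then just a File.WriteAllText call over the same string.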

Running the AI Task Planner

The planner accepts any free-form goal string. Clone the repo, drop your credentials into appsettings.Development.json, and run it against any project description:

git clone https://github.com/devleader/semantic-kernel-examples
cd semantic-kernel-examples/ai-task-planner

# Create appsettings.Development.json with your API key (see .Development.json.example)

dotnet run -- --goal "Build a REST API for a blog platform with authentication and CRUD"
dotnet run -- --goal "Migrate a legacy WinForms app to .NET 9" --output migration-plan.md

Running the first example produces output like:

AI Task Planner
Goal: Build a REST API for a blog platform with authentication and CRUD
Provider: azureopenai | Model: gpt-4o
------------------------------------------------------------
Step 1/3: Analyzing goal... ✓
Step 2/3: Breaking down into tasks... ✓
Step 3/3: Prioritizing and estimating effort... ✓

# Task Plan: Build a REST API for a blog platform...

## Planning & Design
- [High] Define API endpoints for authentication, posts, and comments -- 4h
- [High] Design database schema for users, posts, and comments -- 6h
...
**Total estimated effort:** 80h

Sequential Pipelines vs Agents -- Which Should You Choose?

Understanding when to use each pattern is one of the more practical skills you develop working with SK. The ChatCompletionAgent vs AssistantAgent article covers the agent side in depth.

For the pipeline vs agent decision, the clearest rule is: if you can enumerate the steps in advance, use a pipeline. Agents add value when the number of steps, the step order, or the selection of tools can't be determined until the LLM processes the input.

For the AI task planner, we always want goal analysis before task breakdown, and task breakdown before prioritization. An agent that could skip or reorder those steps would produce worse results, not better ones. The sequential pipeline enforces the right workflow while keeping the code readable.

For more on how SK's function selection mechanisms work, the Semantic Kernel Plugin Best Practices article covers when to prefer explicit invocation over auto-dispatch.

Frequently Asked Questions

How is Kernel.InvokeAsync different from using a ChatCompletionAgent?

Kernel.InvokeAsync() calls a single KernelFunction directly and returns a FunctionResult. You control exactly what gets called, in what order, with what arguments. ChatCompletionAgent runs a conversation loop where the LLM decides which plugins to call -- you set up the tools and let the model orchestrate. For deterministic, ordered workflows, InvokeAsync is simpler and easier to reason about. For open-ended tasks where the model needs to decide what tools to use, agents are more appropriate.

When should I use KernelFunctionFactory.CreateFromPrompt instead of a plugin class?

Use CreateFromPrompt() when the function's entire logic is a prompt template and there's no C# code that needs to run alongside it. Use a plugin class with [KernelFunction] when the function calls external services, reads files, queries databases, or needs to combine C# computation with LLM output. The inline prompt approach is faster to author and keeps the prompt collocated with the code that invokes it -- this makes it easy to iterate on the prompt without changing any class structure.

What does ResponseFormat = "json_object" actually do in Semantic Kernel?

It passes the response_format: { "type": "json_object" } parameter directly to the underlying OpenAI or Azure OpenAI API. This instructs the model to always return a valid JSON object -- the constraint is enforced at the API response level, not via prompt instruction alone. It eliminates the most common cause of JSON parse failures: the model wrapping the JSON in a markdown code fence or adding explanatory text around it. Note that your prompt should still describe the expected schema; json_object ensures valid JSON but not necessarily the shape you want.

Can I add retry logic or error handling between pipeline steps?

Yes -- because each step is a separate method call, you can wrap individual steps in retry policies using Polly or your own try/catch logic. A common pattern is to retry the step on a JsonException (JSON parse failure) with a prompt adjustment that explicitly asks the model to correct the previous output. For the task planner, a single retry with the original prompt usually resolves transient parse failures. The Semantic Kernel Agents in C# article covers error handling in longer-running agent loops, which is more complex but uses some of the same principles.
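The retry itself doesn't need Polly for the simple case. A sketch of a generic wrapper that retries a step on JsonException -- the helper name is mine, and the demo uses a fake step in place of a real pipeline call:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;

// Minimal retry-on-parse-failure wrapper, sketching the pattern described
// above without Polly. maxAttempts = 2 means one retry.
async Task<T> WithJsonRetryAsync<T>(Func<Task<T>> step, int maxAttempts = 2)
{
    for (var attempt = 1; ; attempt++)
    {
        try { return await step(); }
        catch (JsonException) when (attempt < maxAttempts)
        {
            // Fall through and retry with the original prompt; transient
            // parse failures usually resolve on the second attempt.
        }
    }
}

// Demo: a fake step that produces unparseable output once, then succeeds.
var calls = 0;
var value = await WithJsonRetryAsync(() =>
{
    calls++;
    if (calls == 1) throw new JsonException("model returned fenced JSON");
    return Task.FromResult(42);
});
```

In the real pipeline, `step` would be something like `() => pipeline.AnalyzeGoalAsync(goal)`; exceptions other than JsonException still propagate immediately.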

Does FunctionChoiceBehavior.None() affect performance?

No -- it doesn't change the model or add latency. When set to None(), SK omits tool definitions from the API request payload, which marginally reduces request token usage. More importantly, it guarantees the model won't emit a tool call that your code would then have to satisfy before getting a final answer -- an extra round trip that pure prompt functions don't need. For pipeline steps like these, None() is the right choice.

How do I test a Semantic Kernel pipeline in unit tests?

The cleanest seam is the chat completion service: Kernel itself is sealed, but you can build a test kernel that registers a fake IChatCompletionService returning canned JSON, which exercises the pipeline methods end to end without network calls. In practice, many teams also run the pipeline against a real (but cheap) model with short prompts in integration tests, and keep the report formatting and argument-passing logic in pure unit tests. The Semantic Kernel plugin best practices article includes a testing section with concrete examples.

Can I combine this pipeline pattern with SK plugins that call real APIs?

Absolutely -- KernelArguments can carry any string value, including results from a prior step that called a real API. A practical extension of the task planner would be a fourth step that calls a ProjectManagementPlugin (with a [KernelFunction] method that talks to Jira or GitHub Issues) to actually create the tasks. The Semantic Kernel OpenAPI Plugin Integration article covers connecting real REST APIs as SK tools, which you can integrate into any pipeline or agent.
