How To Use Semantic Kernel Plugins In C#
January 2, 2025
What's cooler than using Semantic Kernel in C#?
Mixing in my favorite set of building blocks: Plugins!
This video tutorial will get us up and running with plugins in Semantic Kernel!
View Transcript
Semantic Kernel gives us all sorts of awesome powers that we can leverage AI for in our C# applications, and in this video I wanted to walk you through some of the basics of working with plugins in Semantic Kernel. Hi, my name is Nick Cosentino, and I'm a principal software engineering manager at Microsoft. If you haven't watched the previous video, you can check it out right up here — it walks you through how to get the very basics of Semantic Kernel set up in a C# project. But if you're already familiar with how that works, then this is going to be a great next step for getting a simple plugin to work with Semantic Kernel. If that sounds interesting, just a reminder to subscribe to the channel and check out that pinned comment for my courses on Dometrain. Now let's jump over to Visual Studio and talk about Semantic Kernel plugins.

On my screen I have the project from the previous video, which walks us through how to get set up with Semantic Kernel, but I figured this would be a good opportunity to talk about what plugins are with respect to Semantic Kernel. In the previous video — and in lots of other examples we've all been seeing online, and what we're familiar with when we're talking to things like ChatGPT or other LLMs — we have this idea of a conversation: we send messages to the LLM, it interprets those messages, and then it gives us a result back based on what we sent it. As time has gone on, we've seen more and more fancy features get built into these conversational AI tools. With something like Semantic Kernel, we can start integrating different types of code plugins into the kernel, and that way, when we're using things like chat completions, the AI is able to start interacting with those plugins.

As we step through this, I'll briefly walk through the setup for Semantic Kernel, and then we're going to see how we can start leveraging the plugin. Again, if you haven't watched the previous video, you're going to need to have your endpoint set up. This is currently leveraging Azure OpenAI, which you can see right down here where we add the Azure OpenAI chat completion. If you're using something else with Semantic Kernel, there are other connectors you can go ahead and get set up, but that's a prerequisite before we move on. This is the current setup: we use the builder pattern, we build the kernel, and then we ask the kernel for the chat completion service. Once we have that, on line 28, we can start working with it. But there's a little bit more we have to do, because if we just go on from here, we can do like in the previous video where we just start chatting with the AI and it's able to do the chat completions — which is still really impressive, it still blows my mind that we can do this — but it's not able to interact with anything, and that's where plugins come into play. We're going to be borrowing the lights plugin example from Microsoft's documentation, so you can see on line 30 that what I'm now doing is saying kernel.Plugins.AddFromType, adding the LightsPlugin and calling it "Lights".
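Putting those pieces together, a minimal sketch of the setup might look something like this. This is an illustration, not the exact code on screen — the deployment name, endpoint, and key are placeholders you'd replace with your own Azure OpenAI values:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Azure OpenAI connector -- swap in a different connector
// if you're using another provider.
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "your-deployment",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key");

var kernel = builder.Build();

// Register the lights plugin under the name "Lights".
kernel.Plugins.AddFromType<LightsPlugin>("Lights");

// Ask the kernel for the chat completion service to start chatting.
var chatService = kernel.GetRequiredService<IChatCompletionService>();
```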
That's cool, but what the heck is the LightsPlugin? Well, this is the plugin for this tutorial — we're going to see how it looks, and you should be able to take the concepts you see here and make something completely different. If you stay to the end of this video, I'll have another one where we see how we can interact with YouTube and Semantic Kernel all at once.

To start things off, I'm going to jump down to the plugin definition at the bottom of the file. We can see that we have this LightsPlugin — again, it's very contrived, borrowed from the Microsoft example — but it has a few things going on. First, it has some state, and because this is just a contrived example, the state is simply stored in memory. In theory you could have this state persisted in a database, written to a file, or fetched from a service — which is what we'll see when we do the YouTube example afterward — but for now it's state stored in memory, quite literally in this list right here called lights. This example has three different light models in it: a table lamp, a porch light, and a chandelier, and we can see that they each have some default state associated with them and an ID. This is just an example — if you want to play around with this code, you could add new lights, remove these ones, rename them, whatever you want. It's just something to work with.

The really interesting part is the methods down here, which are annotated with lots of extra information. If we look at something like GetLightsAsync, it's incredibly simple — it just returns the lights collection. The version that came from Microsoft was marked as async already, returning a Task; the reality is that it doesn't need to be asynchronous based on what we're doing here, but I'm keeping the signature basically the same. We'll see the same thing with ChangeStateAsync — the code itself isn't doing anything that requires being asynchronous, but that's not really the point. The point is all of this extra information up here.

Why might we want extra information when we're thinking about code and an LLM working together? Well, we're all familiar with the fact that LLMs really like verbose information to work with. If you're experienced with providing prompts to LLMs, you know that if you're very short and succinct and don't provide a lot of detail, you're going to get a variation of results, and a lot of the time it might not be something you're happy with — you'll end up saying, "oh crap, how do I give it more information to do what I need?" That's the whole idea behind the annotations we have here. You can see this Description annotation — this is just called an attribute in C#, so it's the Description attribute — and we can even put it on the return value, and we should be able to put it on the parameters too. We don't have them down here yet, but we can add them onto the method parameters as well. So I can add a Description on both of these: on one I can say "the identifier of the light", and on the other, "true if the light is on, false if the light is off". That way these are annotated too, and the whole idea is that we're providing extra information — extra verbosity — to the LLM, so that when it's trying to figure out how to work with our plugins, it can see what's available to it and understand how to interact with it. Again, if we just had the variable names — like "id"... the ID of what, right? ChangeStateAsync takes an ID, but the ID of what?

As humans, as developers, we can probably infer from the fact that this is a lights plugin that it's going to be the ID of the light, but for the LLM, the more context you give it, the more accurately it's able to operate. So we want to provide all of this extra information: on the method itself, and on the return value — this one says the updated state of the light will return null if the light does not exist, giving it extra information to understand how to interpret that result as well — and then, like we just did, on the method parameters too. Lots of extra information so the LLM can work with it.

This is just a quick interruption from this video's sponsor, Packt Publishing. Packt has sent me over this book from Mark J. Price, "C# 13 and .NET 9 – Modern Cross-Platform Development Fundamentals", and I have the previous edition sitting on my bookshelf as well. Now I have this one, which is super exciting, because this book is packed with tons of awesome examples that guide you through the different functionalities of the language. I think it's an awesome reference guide — you'll have tons of things to walk through from the very beginning to learn C# — but one of the best parts is that it's not just limited to the language: you'll get to see things like ASP.NET Core, Entity Framework, and Blazor, all in action with practical examples. I personally cannot recommend Mark J. Price's books enough — like I said, he's got tons of awesome examples, and he has other books from Packt as well that I highly recommend you check out. If you're interested, there's a link in the description, and I'll have a link up here as well. Thanks, and now back to the video.

We also give these methods the KernelFunction attribute, to note that they are methods that the kernel — Semantic Kernel — can actually leverage with the LLM, and we give them names as well. This name here, "get_lights", does not have to be the same as the method name — this is just what came out of the box from the Microsoft example. A quick look at the body of ChangeStateAsync: nothing too interesting — it goes and finds the light by its ID, and if it exists, it sets the IsOn property to whatever the LLM passes in, and then returns that light. That's an example of what a plugin looks like. I should also point out that this does not have to inherit from anything — it's not like we had to implement some IPlugin interface or inherit from an abstract class and override things. We literally just have this class, with no other dependency, and we add methods marked as public, annotate them with KernelFunction, and then add extra information with these attributes. Pretty simple — and pretty cool that all we're doing is taking some code that might be very simple and annotating it so that the LLM can understand what to do with it.
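Here's a condensed sketch of what a plugin like this can look like, following the shape of the Microsoft lights example described above. The exact property names and description strings are illustrative:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class LightModel
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public bool IsOn { get; set; }
}

public class LightsPlugin
{
    // Contrived in-memory state -- in a real plugin this could be
    // persisted in a database, a file, or fetched from a service.
    private readonly List<LightModel> lights = new()
    {
        new LightModel { Id = 1, Name = "Table Lamp", IsOn = false },
        new LightModel { Id = 2, Name = "Porch Light", IsOn = false },
        new LightModel { Id = 3, Name = "Chandelier", IsOn = true },
    };

    [KernelFunction("get_lights")]
    [Description("Gets a list of lights and their current state")]
    public Task<List<LightModel>> GetLightsAsync()
    {
        // Nothing here actually needs to be async; the Task signature
        // is kept to match the original example.
        return Task.FromResult(lights);
    }

    [KernelFunction("change_state")]
    [Description("Changes the state of the light")]
    [return: Description("The updated state of the light; will return null if the light does not exist")]
    public Task<LightModel?> ChangeStateAsync(
        [Description("The identifier of the light")] int id,
        [Description("True if the light is on, false if the light is off")] bool isOn)
    {
        var light = lights.FirstOrDefault(l => l.Id == id);
        if (light is not null)
        {
            light.IsOn = isOn;
        }
        return Task.FromResult(light);
    }
}
```

Note that the class inherits from nothing and implements no interface — the KernelFunction and Description attributes are all Semantic Kernel needs.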
Scrolling back up: when we're working with the kernel, we add the LightsPlugin, called "Lights", right onto the collection of plugins — again, very simple. What we want to do from there — and I touched on this in the previous video — is configure these OpenAI prompt execution settings. This is set to Auto, but if I put my cursor over it, we get this enormous tooltip, and I'm going to read through some of it because there are different options you can play around with. To disable function calling — which is not what we want in this video — and have the model only generate a user-facing message, set the property to null, which is the default. So by default, when you're using Semantic Kernel with these chat completions, it's not going to be calling any functions. To allow the model to decide whether to call a function, and if so which ones to call, set the property to an instance returned by FunctionChoiceBehavior.Auto — that's what we're using. The next one says: to force the model to always call one or more functions, set the property to an instance returned by FunctionChoiceBehavior.Required — we're not using that. And to instruct the model not to call any functions and only generate a user-facing message, set the property to an instance returned by FunctionChoiceBehavior.None. That's what you can configure here. Like I said, we're going to use Auto so it can figure out whether it needs to call functions. Depending on how you want to use Semantic Kernel, you might say, "based on how I'm configuring this, it always needs to call something" — great, maybe you want to change what's going on here. But for us, because it's still set up as a simple conversation, we can just leave it set to Auto.

For this next part, this is the loop we had in the previous video — a simple loop to ask for user input, send it to the LLM, and get the response back. When I was putting this video together, I was thinking, oh, we'll just clean it up, make it call the plugin directly, and have that go.
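A sketch of those execution settings plus the chat loop might look like this, assuming the kernel and chatService variables from the setup earlier in the video:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Auto lets the model decide whether (and which) plugin functions to call.
OpenAIPromptExecutionSettings settings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var history = new ChatHistory();

// Simple console chat loop: an empty message ends the conversation.
while (true)
{
    Console.Write("Write your message to the AI bot: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;

    history.AddUserMessage(input);

    // Passing the kernel here is what gives the model access to plugins.
    var reply = await chatService.GetChatMessageContentAsync(
        history, settings, kernel);

    Console.WriteLine(reply.Content);
    history.AddAssistantMessage(reply.Content ?? "");
}
```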
But instead, I figured it might be cool if we could just have a conversation with it like normal, and then ask it to do something with the lights. I wanted to prove to you, though, that it really is calling the function — that's why I wanted to sort of strip out the conversation part, do the conversation behind the scenes, and then ask for the information off of the plugin. I think we'll still do that, but we can also put a breakpoint in and see that we're not calling the code inside of the LightsPlugin: the LLM is. And that's pretty cool too.

So let's run this code and ask the AI to turn off the chandelier, because if we go have a look, we can see that the chandelier starts as on. Let's try it and see what happens. "Write your message to the AI bot": I would like you to turn off the chandelier. "The chandelier has been turned off." Okay — did we actually need a plugin to do this? Is this just the AI making stuff up and us believing it? I don't know, so let's ask it a different question, and then we'll start dropping in some breakpoints to see what's actually happening behind the scenes. Instead of asking whether the chandelier is off now — because even without a plugin it might figure, "I just turned it off, so it's probably off" — let's ask it what the ID of another light is, because we haven't told it that, and I don't remember what they are. Let me jump back to Visual Studio and ask what the ID of the porch light is. It says the ID of the porch light is 2. Interesting — I never told it that, so how did it get that information? Lucky guess, or is it using the Semantic Kernel plugin? Let's see how many lights there are: "There are three lights in total." Okay, interesting. What are the names of the lights? "Table lamp, porch light, and chandelier." Kind of spooky that it would know all that without a plugin, right?

So let's see whether we can hit a breakpoint if I put my cursor down in here. We know the chandelier was on and is now off; I'm going to ask it to turn the porch light on — and we hit our breakpoint. The LLM was able to call this: it knows the ID of the porch light is 2 — I didn't tell it that at any point; it knows it from the plugin information — and isOn is set to true because I told it to turn the porch light on. If I step through this, it finds the light and returns it, and when I press F5 it says the porch light has been turned on. We got to see that it was actually calling the code in the plugin. Now, I did say I also wanted to show you how we can confirm this without putting a breakpoint in place, so I want to take a quick moment to show you how you can invoke the plugin functions yourself.
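A rough sketch of that direct invocation, assuming the kernel from the earlier setup and the "Lights"/"get_lights" names used in this example:

```csharp
using Microsoft.SemanticKernel;

// Look up the kernel function by plugin name and function name --
// these are the names given in AddFromType and in the KernelFunction
// attribute, not the C# method name.
KernelFunction getLights = kernel.Plugins
    .GetFunction(pluginName: "Lights", functionName: "get_lights");

// Invoke it ourselves, the same way the LLM does behind the scenes.
FunctionResult result = await getLights.InvokeAsync(kernel);

// FunctionResult wraps more than just the return value; pull out the
// typed value to inspect the lights and their current state.
var lights = result.GetValue<List<LightModel>>();
```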
That way, if you want to play around with this and don't necessarily want console input and typed messages, you can still see what the state is. Again, you might implement plugins in a completely different way — maybe this information is written to and read from a database — but since we're dealing with it in memory in this case, let's call that function directly. I'm going to run this again, but I've added some code here on lines 60 to 63: I ask the kernel for its plugins, and I say get the function from the "Lights" plugin called "get_lights". That's the name of the kernel function we added in the attribute, not the name of the method itself. If I scroll down, we can see KernelFunction here with "get_lights" — I'm using that same name — and "Lights" comes from when we added the plugin onto the plugins collection and gave it that name. So we're saying: from that Lights plugin, use this kernel function, and then we invoke it, passing the kernel as the parameter into the method call.

Let's run this. We'll have a very brief conversation, and then when I'm done and press Enter with an empty message, we should hit that breakpoint. "Please turn off the chandelier" — we hit our breakpoint, excellent. Now when I press Enter, it ends the chat loop, and we should have called this get_lights function. When I put my cursor over the result, it's a little hard to see because it's small, but we have all of this stuff here — it's not just the return value, there's other information too. If I go to the non-public members, we can see a count of three inside the Value property, and if I expand it, we see our three light models. The table lamp — is it on? No, but it's off by default, so nothing surprising. Number one, the porch light, has IsOn set to false — again, the default, not very interesting. But if we go to the last one, the chandelier — by default it is on, so what is it now? It is now set to false. If you look at the very bottom of my Visual Studio, on line 72, the chandelier's IsOn defaults to true; we asked the LLM to turn it off, and now, when we invoke this function ourselves by calling get_lights, we can read that data back. Writing this out to the console won't actually print anything very pretty — it's not very helpful — I just wanted a breakpoint at the end of the file so we could hit it in the debugger. Now we've seen that we can call this directly if we need to, and it's the same type of thing the LLM is doing behind the scenes. A quick recap:
To set things up, you need to make sure you have everything configured in your Azure portal — if you're using Azure, which is what we're doing in this example on line 22 here. From there, we need to make sure we're adding the plugins of interest onto the plugins collection on the kernel. We also need to make sure we're allowing functions to be called — I'm using Auto in this case, to let the model automatically determine whether it needs to call a function. Then we looked at the plugin implementation, where we needed to annotate things: we have the KernelFunction attribute, and the Description attribute that we can add onto the methods, the return types, and the parameters, so that we're giving lots of extra context to the LLM. Then we saw that in action, and finally we got to see that we can also invoke those methods ourselves. If you thought this was cool and you want to see more examples of working with plugins, you can check out this video next to see how we can start interacting with something like YouTube. Thanks, and I'll see you next time!
Frequently Asked Questions
What are semantic kernel plugins and how do they work in C#?
Semantic Kernel plugins allow us to integrate different types of code into the kernel, enabling the AI to interact with those plugins during chat completions. In this video, I walk you through how to set up and use a simple lights plugin as an example.
Do I need to watch the previous video before this one?
Yes, I recommend watching the previous video to get the basics of setting up semantic kernel in a C# project, as this video builds on that foundation.
How can I provide more context to the AI when using plugins?
You can provide more context by annotating your methods with extra information, such as descriptions for parameters and return types. This verbosity helps the AI understand how to interact with the plugins more accurately.

These FAQs were generated by AI from the video transcript.
