Adding an IMemoryCache to EF Core Repository Pattern
March 7, 2025
In prior videos, we wrapped a repository pattern around our EF Core calls -- and yes, this drives some people nuts. In this video, we'll look at enhancing our repository by adding a cache with IMemoryCache!
Transcript
Have you ever wondered how you could add caching to your simple repository pattern in C#? Hi, my name is Nick Cosentino and I'm a principal software engineering manager at Microsoft. In this video we're going to look at adding simple caching, using IMemoryCache, to a repository pattern built on Entity Framework Core. I realize it's a mouthful, but this is part of a video series where we're looking at the repository pattern and different implementations of it, and now we're going to start layering in caching. In the previous videos I made on this, which I'll link up here, we talked about how we're using Entity Framework Core with a repository pattern. Many people aren't a big fan of this; they feel Entity Framework Core already gives them what they need, and the repository pattern is just a bit too much abstraction on top of that. But in this video we're going to leverage that pattern and incorporate caching as well. The caching here will use the simple IMemoryCache in C#, and in follow-up videos in this playlist we'll explore different cache implementations. If that sounds interesting, a reminder to subscribe to the channel and check out the pinned comment for my courses on Dometrain. Let's jump over to Visual Studio and look at some code.

Okay, this application is based on the previous videos, so if you haven't watched them, we'll have a link up here so you can check them out and understand the basics of Entity Framework Core and the repository pattern. To kick things off, on line four we're adding dependency injection for our repository. There's going to be a difference, though, because this line, the setup call for the generic Entity Framework CRUD repository, was giving us a repository using Entity Framework Core with simple create, read, update, and delete operations (that's what CRUD is), and that backs all of these methods we have on our minimal APIs here. A disclaimer for folks who haven't watched the other videos: this is not how I would recommend building a CRUD app. It's a bunch of GET methods, which I don't recommend, but I'm going to show you this in the browser and I wanted to make it very easy to hit the routes we have here.

To kick things off, we're going to explore a different setup. I'm jumping into what we had previously and we're going to look at this variation instead. It's very, very similar: you can see that between here and here this part is identical, just adding our DbContext factory. Yes, we're going to be using SQLite, if I didn't mention that already, same as in the previous videos. You'll notice that this is the only difference: I'm adding in a different implementation. If I quickly jump over to the generic implementation we had before, this is just a repository pattern. We have this IRepository interface wrapped around our database context from Entity Framework Core, so in all of these method calls we're getting a DbContext and doing something with Entity Framework Core.
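For readers following along without the earlier videos, that plain generic repository might look roughly like this. This is a minimal sketch, not the video's exact code: the `IEntity` interface, type names, and method shapes are all assumptions, and it requires the Microsoft.EntityFrameworkCore package.

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical entity constraint: our repository assumes entities expose an ID.
public interface IEntity
{
    int Id { get; }
}

// Hypothetical generic repository interface wrapping EF Core CRUD operations.
public interface IRepository<TEntity> where TEntity : class, IEntity
{
    Task<TEntity?> GetAsync(int id, CancellationToken ct = default);
    Task<IReadOnlyList<TEntity>> GetAllAsync(CancellationToken ct = default);
    Task CreateAsync(TEntity entity, CancellationToken ct = default);
    Task UpdateAsync(TEntity entity, CancellationToken ct = default);
    Task DeleteAsync(int id, CancellationToken ct = default);
}

public sealed class EfCoreRepository<TEntity, TContext> : IRepository<TEntity>
    where TEntity : class, IEntity
    where TContext : DbContext
{
    private readonly IDbContextFactory<TContext> _contextFactory;

    public EfCoreRepository(IDbContextFactory<TContext> contextFactory)
        => _contextFactory = contextFactory;

    public async Task<TEntity?> GetAsync(int id, CancellationToken ct = default)
    {
        // Each call creates a short-lived DbContext from the factory.
        await using var context = await _contextFactory.CreateDbContextAsync(ct);
        return await context.Set<TEntity>().FindAsync(new object[] { id }, ct);
    }

    // ...the remaining CRUD methods follow the same create-context-then-act pattern.
}
```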
So really it's just one more level of abstraction, and if you're looking at these thinking they're too simple and that you would just call Entity Framework Core directly, that's totally okay. I'm not here to convince you that you have to use a repository; I say this in every video. I'm just showing you how you can implement these things if you'd like to. But if we jump back, we're going to look at a different implementation. It's very, very similar, but it also takes in an IMemoryCache. If we jump in, we see that like the other one it takes in this IDbContextFactory, but we also have an IMemoryCache, so between the constructor and these fields getting set, this enables us to start adding caching to all of the different API calls we have.

To start things off, if we look at our get method, it has a little bit of extra code in here, just because we have the caching operation wrapped around what we had in the previous implementation. Right in here, these four lines of code are essentially what was in the previous implementation; everything else inside this method is just to support caching. We say to the memory cache that we would like to get or create an entry asynchronously. The next part is that we need a key: when you're creating a cache, you need to think about how you're going to key the individual elements in your cache. Because our repository is very simple and just uses entity types that have an ID on them, we're going to use the entity's ID as the key. That way, if we want to get a particular entity and it's already in our cache, we can fetch it very quickly without going to the database.

Before we move on, a reminder that I do have courses available on Dometrain if you want to level up your C# programming. Over on Dometrain you can see I have a course bundle with my getting-started and deep-dive courses on C#. Between the two of those, that's 11 hours of programming in the C# language, taking you from absolutely no programming experience to being able to build basic applications. You'll learn everything about variables, loops, a bit of async programming, and object-oriented programming as well. Make sure to check it out.
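Putting the pieces described above together, the cached read path might look roughly like this. Again, names and field layout are assumptions (the video's code isn't reproduced verbatim), and it needs the Microsoft.Extensions.Caching.Memory package alongside EF Core.

```csharp
// Sketch of a cached GetAsync using IMemoryCache.GetOrCreateAsync.
// _memoryCache and _contextFactory are assumed constructor-injected fields.
public async Task<TEntity?> GetAsync(int id, CancellationToken ct = default)
{
    // The entity's ID doubles as the cache key, so repeat reads for the same
    // ID are served from memory without touching the database.
    // Note: a bare int key collides across entity types sharing one cache;
    // a composite key such as (typeof(TEntity), id) avoids that.
    return await _memoryCache.GetOrCreateAsync(id, async entry =>
    {
        // Optional: expire the entry so stale data eventually falls out.
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);

        // These lines are essentially the original (uncached) repository body.
        await using var context = await _contextFactory.CreateDbContextAsync(ct);
        return await context.Set<TEntity>().FindAsync(new object[] { id }, ct);
    });
}
```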
If we look at the other part here, this is just to illustrate that we can add an expiration. This line is not necessary, but if you want to play around with this kind of thing you can set different expirations, and there are some other features you can enable and experiment with on the memory cache. So just a heads up: not needed, but something to explore if you're interested.

I want to spend a moment on the idea behind get-or-create. It's not specific to IMemoryCache, but think about what it does. If you approached this in a very simple way, you might say, "hey cache, do you have an entry for this ID?" The cache would check and come back with yes or no. If it had it, you'd say, "great, thanks cache, give me that entry." So you're doing a check and then a get. The other possibility is the cache says, "no, I don't have that entry," and you go, "great, one sec, let me go to the database," and when you get the record back from the database you say, "hey cache, I'd like you to store this." Again, that's breaking it into multiple operations. This get-or-create API is a common pattern you'll see when working with caches; the idea is that it's structured to combine those operations for you.

That way, based on the API, it should be able to offer things like stampede protection. From my usage of the IMemoryCache implementation, though, I have not seen stampede protection built in. If you're wondering what stampede protection is, it's the idea that if you have multiple things in the same process coming into the cache and asking "do you have this entry for me?", say a hundred requests arriving in parallel, the cache could allow only one of them through to the downstream service, in this case the database, to fetch that record, instead of a hundred callers stampeding the downstream resource.

Now, in distributed systems, something like this won't work unless you're doing some type of synchronization across nodes. But imagine, just for simplicity, that you had 10 servers each receiving 100 requests, for a total of a thousand. Each of those 10 servers might let one through if the requests arrived in parallel, so 10 requests get through instead of all 1,000. That's some improvement, just not a guarantee. With IMemoryCache, again from my experience at the time of recording, I have not seen that it actually offers stampede protection. There are other cache implementations we'll look at in future videos that do offer this, and we can play around with those as well.
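The naive check-then-fetch-then-set flow described above could be sketched like this (field names are assumptions). It makes the stampede window visible: between the miss and the set, any number of parallel callers can all go to the database.

```csharp
// Naive multi-step cache usage: check, fetch from the database, then store.
public async Task<TEntity?> GetNaiveAsync(int id, CancellationToken ct = default)
{
    // Step 1: "hey cache, do you have an entry for this ID?"
    if (_memoryCache.TryGetValue(id, out TEntity? cached))
    {
        // Step 2: "great, give me that entry."
        return cached;
    }

    // Step 3: cache miss, so go to the database.
    await using var context = await _contextFactory.CreateDbContextAsync(ct);
    var entity = await context.Set<TEntity>().FindAsync(new object[] { id }, ct);

    // Step 4: "hey cache, store this for next time."
    if (entity is not null)
    {
        _memoryCache.Set(id, entity);
    }

    return entity;
}
```

GetOrCreateAsync folds steps 1 through 4 into one call, but as noted in the video, IMemoryCache doesn't guarantee only one caller runs the factory; a per-key lock (for example, a SemaphoreSlim keyed by ID) is one common way to add that yourself.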
So I just wanted to mention that's why we generally see an API like this: it combines the check with either getting the entry back or setting it in the cache. Overall it's pretty simple, though, because this part here is the exact same code that was in the other repository.

I'll come back to get-all in just a moment; first I'd like to go through the other methods on our repository. If we look at the create method, this part here is exactly what we had in the other repository, but you'll notice I have two lines commented out. When we're creating an entry, we could decide to invalidate the cache: if there was something there, go remove it. But the odds are that if you're creating something and that ID is already in the cache, that's probably a problem, so on the create call path this is probably not something that would ever really make sense. The other thing we have down here is the ability to set the entry: when we're writing something into the database, we can decide whether we also want to write it to the cache at the same time. This might be expensive in some cases, depending on your caching mechanism and how much you're writing, but it's a design decision you have to think about. If you go back up to our get method, every time we read something from the database we store it in the cache. Do you also need it cached every time you write, or only when you read it back? You have to think about how your data flows through your system, whether you're read-heavy or write-heavy, and other caching considerations. Because this is a very simple example, I'm not going to include writing to the cache when we write to the database. And like I was saying, the remove on the cache doesn't make much sense when we're creating a new entry; there shouldn't be anything in the cache anyway. But just for consistency, and to make sure any existing entry is invalidated, we can go ahead and remove it from the memory cache. Realistically, this is probably not going to do anything effective for us.
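The create path with both options discussed above might be sketched like this (again, names are assumptions, and the cache lines stay commented out just as in the video's walkthrough):

```csharp
// Sketch of CreateAsync: the EF Core write plus the two optional cache moves.
public async Task CreateAsync(TEntity entity, CancellationToken ct = default)
{
    // Same database write as the plain (uncached) repository.
    await using var context = await _contextFactory.CreateDbContextAsync(ct);
    context.Set<TEntity>().Add(entity);
    await context.SaveChangesAsync(ct);

    // Option A (usually pointless on create, as discussed): invalidate any
    // existing entry for this ID. A brand new entity shouldn't have one.
    //_memoryCache.Remove(entity.Id);

    // Option B: eagerly cache the newly written entity, paying write-path
    // cost for a warm cache. Whether that helps depends on read patterns.
    //_memoryCache.Set(entity.Id, entity);
}
```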
Now let's explore something a little more interesting: this UpdateAsync method. It's going to be a bit different from the create one. All the code you see in here that isn't commented out (ignore line 70 for just a moment) is the same code we have in the original repository. We have the same caching question we looked at with create, but the difference is that if we are in fact updating the entity, we can invalidate the cache, like so. This means that if we're writing a new value for this particular entity, we say: if there was something in the cache, just blow it away. That's one option. The other option, like we discussed with create, is that once you update the entry you could set it in the cache. Depending on your needs and your caching implementation, you could say that every time you update something it's very likely you'll be reading it back soon, so you do want it cached. That's not for me to decide; I'm just showing you that you have different options here. In this example I'm going to leave it out: we'll invalidate the cache when we update an entry, so the next read for that particular entry (again, we're using the ID as the key) goes to the database one more time and then caches the result.

What we're seeing so far in all of these examples is that the get puts something into the cache, and any time we do some type of write, we invalidate or remove the entry from the memory cache. Finally, the last method, delete, is very similar, but there's nothing to update; we're removing something entirely. In this case we just have this single remove line to remove a possible entry from the cache, and the rest of the code in here is the same as the other implementation. Hopefully that makes sense. But there's one method we skipped over, and that's GetAllAsync.
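The write-path invalidation just described, update and delete both ending in a cache remove, might look roughly like this sketch (names assumed):

```csharp
// Sketch of write-path invalidation: write to the database, drop the cache entry.
public async Task UpdateAsync(TEntity entity, CancellationToken ct = default)
{
    await using var context = await _contextFactory.CreateDbContextAsync(ct);
    context.Set<TEntity>().Update(entity);
    await context.SaveChangesAsync(ct);

    // Invalidate so the next read for this ID goes to the database
    // and re-caches the fresh value.
    _memoryCache.Remove(entity.Id);
}

public async Task DeleteAsync(int id, CancellationToken ct = default)
{
    await using var context = await _contextFactory.CreateDbContextAsync(ct);
    var entity = await context.Set<TEntity>().FindAsync(new object[] { id }, ct);
    if (entity is not null)
    {
        context.Set<TEntity>().Remove(entity);
        await context.SaveChangesAsync(ct);
    }

    // Nothing to update here; just drop any cached copy of the deleted entity.
    _memoryCache.Remove(id);
}
```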
GetAllAsync is something I wanted to pause on, because it's one of those opportunities where, when people say "hey, I just want to copy this code" or "can you push this up so I can paste it into my own stuff," I don't recommend doing that, ever, because I want you to understand these things. You can pause the video and type out the code and see how it flows, but if you're just going to copy and paste it, I don't think there's much value, especially if I had filled this one out.

Let's think about this for a second. I don't know what kind of application you're building; I have literally no idea. Suppose you decide this generic repository with caching is a really good fit, but you have a million products in a database. Then we talk about caching these things, I implement it, you read back all of the entries, and you put every single one into the cache. What's going to happen? It's the same story from the previous videos where I talked about this being a materialized collection: if you call GetAllAsync, pull in a million records from your product database, and put them all into some read-only collection, it's not going to be a good time. So instead of me having a blanket dummy implementation here that caches everything, I just want you to think about what your needs are.

If you're dealing with very small repositories, maybe you have blog posts, and you know your site will only ever have 100 of them, sure, maybe you cache them all; you have enough memory to do so, and that's cool. Maybe the blog entities are small because they don't even contain the full content of the posts, however it's designed. If you only have a small number of these things, maybe it makes sense to pull them all into memory, all cached, so any read is very effective. I literally cannot say, because I don't know what you're building. So I'd rather take this opportunity, instead of giving you blanket code to copy and paste, to take a break and say: please think about what you're caching, because it will have a memory footprint.

I hope that makes sense; sorry there's no code called out here for caching. If you wanted to cache entries here, you'd need to think about how they're going to be retrieved, because every other entry was keyed by its ID, and this method isn't keyed by ID at all. If we wanted to use the cache when reading everything, is keying by ID really going to help us? Would we have another cache entry keyed by some special "all" key and just look at that? Would we use another cache altogether? Would we read back everything and put each individual item into the cache by its ID? That would mean every call to get-all still hits the database, but any individual get is fast. Do we do something even more complicated, where get-all stores all the entries into the cache and sets a flag saying the cache holds everything, and then we have to dirty that flag if someone writes something? This can get ridiculously complicated very fast, depending on what you want to do.

So in this video I've shown you the very simple cases across the other methods, but you'll notice that as soon as you start to deviate a little, you'll start to say, "I really need to think more about what I'm doing here, because this will have consequences." Again, I just wanted to take this moment to pause so that you think a little more deeply about what you're trying to do, because copying and pasting stuff just because some bald guy on the internet said it was a good idea is not it. Do think about how you would like to cache this stuff. I'll give you one quick example: the Blazor blog engine from Steven Giesel that I'm using actually has a very similar implementation to all of this, but we are not putting any caching on the get-all calls. We've optimized the other call paths, and this one is okay taking a little longer; it's very rare that we call it to get everything for a particular table out of the database. Just a couple of variations. But enough blabbing; you probably want to see this thing run.
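For illustration only, one of the options floated above, a dedicated "all" key, might be sketched like this. The video deliberately leaves this unimplemented, so treat the `AllKey` name, the expiration, and the whole approach as assumptions to weigh against your data size and invalidation needs, not a recommendation:

```csharp
// Sketch of the "special 'all' key" option discussed (not the video's code).
private const string AllKey = "__all__";

public async Task<IReadOnlyList<TEntity>> GetAllAsync(CancellationToken ct = default)
{
    var all = await _memoryCache.GetOrCreateAsync(AllKey, async entry =>
    {
        // Short expiration bounds staleness for the whole-table entry.
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(1);

        await using var context = await _contextFactory.CreateDbContextAsync(ct);
        return (IReadOnlyList<TEntity>)await context.Set<TEntity>().ToListAsync(ct);
    });
    return all ?? Array.Empty<TEntity>();
}

// Every write path (create, update, delete) would then also need
// _memoryCache.Remove(AllKey), and a large table makes this single
// entry's memory footprint a real problem, exactly as discussed above.
```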
Let's go over to our Program.cs and make sure we set up the caching one. Boom, we have that in place. One more thing before we run this: we've set things up to use our caching Entity Framework Core repository, but we still need to add in the memory cache. We can use builder.Services.AddMemoryCache(), and that's the dependency injection line we need to get the memory cache registered. Again, this is just the built-in one; it's not a third-party library or anything like that. Nothing too crazy, but let's go ahead and run this and try it out.
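The wiring just described might look roughly like this in Program.cs. AddMemoryCache is the real ASP.NET Core registration; the repository registration line is illustrative, since the video's exact setup method isn't shown here.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register the built-in in-process memory cache with dependency injection.
builder.Services.AddMemoryCache();

// Register the caching repository variant in place of the plain EF Core one.
// This open-generic registration is a hypothetical stand-in for the video's
// setup extension method.
builder.Services.AddScoped(typeof(IRepository<>), typeof(CachingEfCoreRepository<>));

var app = builder.Build();
app.Run();
```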
Okay, I have things running, and I have SQLite Expert pulled up here. We're going to look at this record down here that ends in 7930, and I'm going to ask for it from the repository. You can see that we got the record back, and it has "test" as the value. Watch what happens when I press enter, though. It might be hard to see, but notice how fast it comes back. The other thing that might not be obvious is that if I look down here inside Visual Studio and highlight in the console at the bottom, you can see that it did a select, something like SELECT e.Id, e.Value FROM Entities. But every other time I make the call, it's not issuing the SQL command anymore. If instead I pick another one, this one ends in 31, let's run that and watch the SQL at the bottom: you can see it went and did another SQL command. If I press enter on this one now, it's hitting the cache; it's not going to the database anymore. Let's go back to the first one: still cached. You don't see any more SQL being written to the console. That's pretty handy.

The create call is going to call remove on the cache, which isn't helpful for us because there's nothing to remove for a brand new entry. Then we can try update: when we update this entry, the next fetch should have to hit the database again. If I press enter now, it's still not going to the database; but if I do an update and set the value to "Dev Leader," we see it ran some SQL to update it. Now if I fetch it again, this runs more SQL to hit the database. There we go: it pulled it back, and it says "Dev Leader." The next time I press enter, no more trips to the database.

This was a very simple example of a repository pattern wrapping Entity Framework Core and using IMemoryCache. But there's something interesting: if you were paying attention as I went through this, you might have noticed I kept saying, "hey, look, this code is the same as the other repository." In the next video, which I'll put up here and add to the playlist, you can see how we can use the decorator pattern to do this a little more easily. Thanks, and I'll see you in the next video.
Frequently Asked Questions
What is the purpose of using IMemoryCache in the repository pattern with Entity Framework Core?
The purpose of using IMemoryCache in the repository pattern is to add a caching layer that improves the performance of data retrieval operations. By caching frequently accessed data, I can reduce the number of database calls, which can lead to faster response times for my application.
How do I handle cache invalidation when creating or updating records?
When creating or updating records, I typically invalidate the cache for the affected entries. For example, after a create operation, I can choose to remove the entry from the cache if it exists. For updates, I can invalidate the cache entry to ensure that the next read fetches the updated data from the database.
What should I consider when deciding what to cache in my application?
When deciding what to cache, I recommend considering the size of the data and the access patterns of your application. If you have a small dataset that is frequently accessed, caching it can be beneficial. However, if you're dealing with a large dataset, like millions of records, caching everything may not be practical due to memory constraints. It's essential to think about your specific use case and how caching will impact performance.
These FAQs were generated by AI from the video transcript.