Job Keys and Bulk Scheduling - Quartz.NET Best Practices
August 2, 2024
Job scheduling systems like Quartz.NET are powerful, but by the nature of what they do, they come with real complexity.
Fortunately, the Quartz team has offered some guidance!
This video explores two best practices from the Quartz documentation:
- Using static JobKeys for reusability
- Scheduling jobs in bulk
What are your tips for using Quartz effectively?
Transcript
Quartz.NET is a powerful job scheduling framework that we have available to us in C#, but of course, with really powerful frameworks there's a lot of complexity and some best practices we should try to follow. Hi, my name is Nick Cosentino, and I'm a Principal Software Engineering Manager at Microsoft. In this video I'm going to walk through a couple of tips that come from the Quartz.NET documentation on how to use the framework a little more elegantly. I made a previous video, which I'll link up here if you haven't seen it; there are a couple of extra tips there that you can go check out. Again, there are so many ways you can use Quartz.NET, and it's not necessarily an exactly-right-or-wrong thing, but these are some tips that should help you get a little more out of the framework in a consistent way.
If that sounds interesting, remember to subscribe to the channel and check out the pinned comment for my courses on Dometrain. With that said, let's jump over to Visual Studio and look at some tips for Quartz.NET.

The first tip we're going to look at comes right from the documentation, where they talk about the usage of job keys. Job keys are a way to give an identity to a job that you want to run in Quartz, and if we look at the components of a job key, we can see it has a name, and the second part of the key is a group, so you have a two-part identity for the job. Triggers are actually very similar: you can see right here on line 38 where it says WithIdentity, the first part is the name and the second part is the group. Now, to help with consistency, you'll notice that on line 48 it says WithIdentity with the job key, on line 39 it says ForJob and we pass in the job key, and on line 36 I have the job key defined. We need some consistency in how we pass this parameter around, because it's the identity of the job we're referring to. This is already a bit of an improvement over having that information duplicated in two spots, which is prone to issues if you rename it in one place and forget the other, but they suggest going a little further than having just this variable defined here. If we get rid of this job key entirely, they suggest having a static property, or in this case a field; their documentation literally shows a public static field. So you could have a JobKey just like this, directly on the type of the job you're interested in. If I jump over to the test job we have, you can see that all I've done is declare that job key directly on it. This is of course a really simple example, but one of their pieces of advice is to have a static JobKey on the job type itself, and that way it's very consistent when you want to refer
to it. You don't need to redeclare these things or have variables scattered around just to build the job key, and if you want to change it, hey, it's in one spot. It's a very small tip, but you've probably seen this concept come up in all sorts of situations in programming: if we can put common information in one spot, that can really help prevent bugs down the road, especially since it's so easy to rename things in the wrong way. The next tip is a little more advanced, but it's really just about scheduling jobs in bulk. Let's look at the code from line 36 down to line 52.
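The static JobKey tip above can be sketched like this. This is a minimal illustration, not the video's actual project: the class name, key name, and group here are all assumed for the example.

```csharp
using System;
using System.Threading.Tasks;
using Quartz;

// A minimal sketch of the "static JobKey on the job type" tip.
public class TestJob : IJob
{
    // One well-known identity for this job, declared exactly once.
    public static readonly JobKey Key = new JobKey("test-job", "default-group");

    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("TestJob ran.");
        return Task.CompletedTask;
    }
}

public static class JobKeyUsage
{
    // Anywhere you build or reference the job, use TestJob.Key instead of
    // re-declaring the name/group strings in multiple places.
    public static (IJobDetail, ITrigger) Build()
    {
        IJobDetail job = JobBuilder.Create<TestJob>()
            .WithIdentity(TestJob.Key)
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("test-trigger", "default-group")
            .ForJob(TestJob.Key) // same single source of truth for the identity
            .StartNow()
            .Build();

        return (job, trigger);
    }
}
```

Renaming the job now means touching only the one `Key` field, rather than hunting down every string literal.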
This is getting a job trigger and a job detail, and then on lines 57 and 58 we're adding some extra information and scheduling the job. But if we wanted to do this in bulk and schedule a whole bunch of these jobs, say 10 of them, they don't suggest adding them individually with ScheduleJob; they have a bulk API for that. If I expand this bit of code from line 62 down to 84, it's pretty much right from their documentation; I've just changed up the individual trigger and job that we're talking about.

This is just a brief interruption to remind you that I do have courses available on Dometrain focused on C#. Whether you're interested in getting started in C#, looking for a more intermediate course focused on object-oriented programming and some async programming, or just looking to update your refactoring skills and see some examples we can walk through together, you can check them out by visiting the links in the description and the comment below. Thanks, and back to the video.

To start things off, on line 60 they have a dictionary they're going to be using, where the key is the job detail and the value is a collection of triggers. If we look inside this loop (again, it's going to make 10 jobs that we'll trigger), you can see I'm just using a collection for triggers, and it has one thing inside of it, so it's just one trigger per job. I've borrowed it from up top; it's very similar to the code above, but I've put it inside the loop. You'll see I have WithIdentity here, and there's going to be some job data: this is the message from the trigger. In fact, I don't think I want to use it exactly like this, because if you watched the previous videos on how we can access job information, the trigger's data will override the job's when the properties are merged, and they do suggest you use the merged properties. So just a quick note: if you have the same key, like I have "message" here and "message" here, the trigger's value will win, so I want to make sure that when we look at that data, we can see which job we're talking about.
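The merge behavior described above can be sketched as follows. The job class and the "message" key are illustrative assumptions for this example.

```csharp
using System;
using System.Threading.Tasks;
using Quartz;

// Sketch of Quartz.NET's data-map merge behavior: when the same key exists
// in both the job's JobDataMap and the trigger's JobDataMap, the trigger's
// value wins in the merged view.
public class MessageJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // MergedJobDataMap combines job data and trigger data; for duplicate
        // keys, the trigger's entry overrides the job's entry.
        string message = context.MergedJobDataMap.GetString("message");
        Console.WriteLine(message);
        return Task.CompletedTask;
    }
}
```

This is why, in the loop, putting a distinct "message" value on each trigger is what makes the per-job output distinguishable.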
Okay, so that's the collection of triggers, and in this case it's just one trigger per job. Then you'll notice I'm creating the job down here, and I say it's using this data. This is going to be overridden, so technically deleting that would give the same behavior we'll see in just a moment. Then all I'm doing is adding that job into the dictionary with the collection of triggers, which, as I mentioned, is just one trigger in this example, because that's the shape the ScheduleJobs API wants. If you look at the IntelliSense tooltip I have shown here, the first parameter is an IReadOnlyDictionary where the key is an IJobDetail and the value is an IReadOnlyCollection of ITriggers, and that's exactly what we defined up here: the collection of triggers is the value and the job is the key.

I'm going to talk about this replace part in just a moment, but I wanted to run this example so you can see that I am scheduling 10 jobs in bulk. If we jump over to the console (I'm scrolling up a bit because this ran pretty quickly), we can see the output that comes from our job. I didn't show it, but if I scroll down here you can see all these Console.WriteLine calls; well, in this case it's a dependency I passed in, and all it does is write to the console. You can see the first job running, the second, the third, and so on; it does it 10 times. So that did schedule 10 jobs in bulk and run them all, and you'll notice the output is in order: it's not like the 10th job ran first; it truly ran them in the order they were scheduled.

Okay, so that's a very brief look at the bulk scheduling API, but I wanted to comment on this replace parameter, because if you're moving between this kind of setup, where we have a trigger and a job and we're using identities, there's an easy mistake to make, and I wanted to highlight it. Notice we have this WithIdentity part. If you instead took the same identity for every trigger, so each trigger in the loop has the exact same identity, then whenever the key for a trigger is the same, it gets replaced. Just to show you what I mean, if I run this, you'll notice we only see the 10th job. Instead of all that information being printed out, all I did was change the identity of the trigger so it's the same for every loop iteration, which means every time we add one of these to the dictionary, it's identified the same way. As a result, what this replace parameter does is say: hey, if you come across the same key, just overwrite what's there. So we end up with only the last trigger firing. I wanted to mention that because if it's not what you're expecting, you either need to give each trigger a unique identity or toggle this replace part.

So let's stop this. If you set replace to false, it's going to try to protect you from reusing the same identity as well. Just to show you: if we run this now, again with 10 iterations, every single one with the same trigger name, and we're saying not to replace, it's going to protect you. And I mean, an exception doesn't really feel like protection, but what I mean is that it won't allow you to schedule the same thing 10 times if that's not what you were expecting to do. So you do want to consider giving the identity a unique name up here, which is why in this case I gave it one like this. I'd consider this replace part a safety mechanism: if the way you set up your code allows a bunch of triggers to come in from different sources, maybe not defined in exactly one spot like this, you may want that protection. If anyone passes the same trigger identity, you either don't allow it, or, with replace being true, you're saying that's totally acceptable: take the last one that comes in, and that will be the trigger that gets fired. Different use cases, and you can pick and choose what you want there.

That's the schedule-jobs-in-bulk API, and the main idea is that instead of calling ScheduleJob, like we see on line 58, 10 times, we can create this dictionary and do it in bulk. So those are some quick tips for Quartz.NET based on the best practices in their documentation. If you thought that was interesting, you can check out this video next for job listeners and other cool stuff. Thanks, and I'll see you next time!
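The bulk-scheduling pattern walked through in the transcript might look roughly like this sketch. It assumes an already-started IScheduler; the job type, key names, groups, and the "message" key are all illustrative rather than taken from the video's project.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Quartz;

public class TestJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Trigger data overrides job data in the merged map, so this reads
        // the per-trigger message set below.
        Console.WriteLine(context.MergedJobDataMap.GetString("message"));
        return Task.CompletedTask;
    }
}

public static class BulkSchedulingExample
{
    public static async Task ScheduleTenJobs(IScheduler scheduler)
    {
        // Key: job detail; value: that job's triggers (one each here),
        // matching the shape ScheduleJobs expects.
        var triggersAndJobs = new Dictionary<IJobDetail, IReadOnlyCollection<ITrigger>>();

        for (int i = 0; i < 10; i++)
        {
            // Each identity must be unique, or entries collide (see the
            // replace note below).
            ITrigger trigger = TriggerBuilder.Create()
                .WithIdentity($"bulk-trigger-{i}", "bulk-group")
                .UsingJobData("message", $"Hello from job {i}!")
                .StartNow()
                .Build();

            IJobDetail job = JobBuilder.Create<TestJob>()
                .WithIdentity($"bulk-job-{i}", "bulk-group")
                .Build();

            triggersAndJobs.Add(job, new[] { trigger });
        }

        // replace: true silently overwrites any existing job/trigger with the
        // same key; replace: false throws instead, acting as a safety net
        // against accidental duplicate identities.
        await scheduler.ScheduleJobs(triggersAndJobs, replace: true);
    }
}
```

One `ScheduleJobs` call replaces ten individual `ScheduleJob` calls, and the `replace` flag lets you choose between last-write-wins and fail-fast behavior for duplicate identities.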
Frequently Asked Questions
What are job keys in Quartz.NET and why are they important?
Job keys give an identity to a job you want to run in Quartz.NET. Each job key consists of a name and a group, which together provide a two-part identity for the job. This matters for consistency when referring to jobs: it helps prevent bugs that arise from duplicating job key information in multiple places.
How can I schedule multiple jobs at once using Quartz.NET?
To schedule multiple jobs at once, use the bulk scheduling API instead of adding each job individually. Create a dictionary where each key is a job detail and the value is a collection of triggers, then pass it to the bulk scheduling method. This schedules multiple jobs in a single call rather than one by one.
What should I be aware of regarding the replace parameter when scheduling jobs?
The replace parameter matters whenever a job or trigger identity already exists. If it is set to true, an existing entry with the same identity is silently overwritten, which might not be what you want. To guard against unintended overwrites, either give each trigger a unique identity or set the replace parameter to false, in which case scheduling a duplicate fails with an exception instead of replacing what is there.
These FAQs were generated by AI from the video transcript.