I have a scheduling problem where new jobs (sets of sequentially connected tasks) arrive every few seconds.
Each job requires some resources to be allocated at known intervals.
For example:
Job j1
is a set of tasks for which we reserve resources {r1, r2, r3}
on a known scheduling pattern:
r1:[t0 .. t1=t0+td1],
r2:[t2=t1+td2+i2 .. t3=t2+td3]
In the example, a new job j2 is being scheduled right after j1 execution has started.
The earliest start time for j2 is t1.
A job may take several minutes to execute, most of which is spent waiting.
I have a scheduler that looks at the current reservation table, determines the earliest possible start time for a new job (assuming fixed allocation times and waiting periods), and makes the reservations accordingly.
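The "start as soon as possible" step could be sketched roughly as below. This is a hypothetical illustration, not the actual scheduler: the names (`Reservation`, `Requirement`, `earliestStart`) and the integer-seconds time model are assumptions, and a job's tasks are modeled as fixed offsets from its start time.

```java
import java.util.List;

// Illustrative sketch of an earliest-possible-start scheduler against a
// reservation table. All names and the time model are assumptions.
public class EarliestStartScheduler {
    // One existing booking on a resource, occupying [start, end)
    record Reservation(String resource, int start, int end) {}

    // One requirement of the new job, as a fixed offset from the job's
    // (unknown) start: it occupies [jobStart + offset, jobStart + offset + duration)
    record Requirement(String resource, int offset, int duration) {}

    // Returns the earliest jobStart >= notBefore at which none of the job's
    // requirements overlaps an existing reservation.
    static int earliestStart(List<Reservation> table,
                             List<Requirement> job, int notBefore) {
        int start = notBefore;
        boolean moved = true;
        while (moved) {                 // keep pushing start past conflicts
            moved = false;
            for (Requirement req : job) {
                int s = start + req.offset();
                int e = s + req.duration();
                for (Reservation res : table) {
                    if (res.resource().equals(req.resource())
                            && s < res.end() && res.start() < e) {
                        // conflict: shift the job so this task begins
                        // when the blocking reservation ends
                        start = res.end() - req.offset();
                        moved = true;
                    }
                }
            }
        }
        return start;
    }

    public static void main(String[] args) {
        // j1 already holds r1 during [0, 10) and r2 during [15, 25)
        List<Reservation> table = List.of(
            new Reservation("r1", 0, 10),
            new Reservation("r2", 15, 25));
        // j2 needs r1 for 10s at offset 0 and r2 for 10s at offset 15
        List<Requirement> j2 = List.of(
            new Requirement("r1", 0, 10),
            new Requirement("r2", 15, 10));
        System.out.println(earliestStart(table, j2, 0)); // prints 10
    }
}
```

Each conflict strictly pushes the candidate start time forward, so with a finite reservation table the loop terminates.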
(In reality, the waiting period doesn't need to be exactly fixed; it may vary within some percentage (maybe 5%). There may also be alternative resources: for example, if resource r3.1 is booked, then r3.2 may be used instead to achieve the same thing.)
However, if the scheduler is required (yes, it has been suggested) to dynamically adjust all existing schedule allocations when a new job arrives, in order to maximize the total work done in a day by exploiting the flexible waiting times and the possibility of parallel execution on duplicate resources (r3.1/r3.2), then I'd be looking at a completely different scheduling scheme from my current start-as-soon-as-possible approach.
As for your question regarding "alternatives to resource usage":
The pattern most commonly implemented to tackle that sort of problem is the Object Pool pattern; the most widely known example is probably the ThreadPool.
I suggest you implement a ResourcePool class with an int GetResource(ResourceType type, int durationInSeconds) method.
The return value indicates when the next resource of the given ResourceType will be available.
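A minimal sketch of that idea might look like the following. It assumes GetResource both reports the next availability and books the slot; the pool sizes, the seconds-from-epoch time model, and the enum names are all illustrative, not part of the original suggestion. Duplicate resources (r3.1/r3.2) are simply multiple slots of the same type.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the suggested ResourcePool. getResource books the
// earliest-available instance of a type and returns the time (seconds from
// some epoch) at which that booking begins.
public class ResourcePool {
    public enum ResourceType { R1, R2, R3 }

    // next free time per pooled instance of each type (e.g. r3.1 / r3.2)
    private final Map<ResourceType, int[]> nextFree = new HashMap<>();

    public ResourcePool(Map<ResourceType, Integer> poolSizes) {
        poolSizes.forEach((type, n) -> nextFree.put(type, new int[n]));
    }

    public int getResource(ResourceType type, int durationInSeconds) {
        int[] slots = nextFree.get(type);
        int best = 0;                         // find earliest-free instance
        for (int i = 1; i < slots.length; i++)
            if (slots[i] < slots[best]) best = i;
        int start = slots[best];
        slots[best] = start + durationInSeconds;   // reserve it
        return start;
    }

    public static void main(String[] args) {
        // two interchangeable instances of R3 (r3.1 and r3.2)
        ResourcePool pool = new ResourcePool(Map.of(ResourceType.R3, 2));
        System.out.println(pool.getResource(ResourceType.R3, 60)); // 0 (r3.1)
        System.out.println(pool.getResource(ResourceType.R3, 60)); // 0 (r3.2)
        System.out.println(pool.getResource(ResourceType.R3, 60)); // 60
    }
}
```

The third request returns 60 because both instances are booked until then, which is exactly the "return value indicates when the next resource will be available" behavior described above.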