Add Preemption Checks
The scheduling logic so far selects the next job to be scheduled using EDF priorities whenever the scheduler is invoked. However, the scheduler is not automatically invoked when a new job is released. Thus, it does not yet implement preemptive EDF scheduling. In this step, we are going to rectify this by adding a preemption check callback to the real-time domain local_queues embedded in struct demo_cpu_state.
Preemption Check Callback
The preemption check callback is invoked by the rt_domain_t code whenever a job is transferred from the release queue to the ready queue (i.e., when a future release is processed). Since the callback is invoked from within the rt_domain_t code, the calling thread already holds the ready queue lock.
static int demo_check_for_preemption_on_release(rt_domain_t *local_queues)
{
        struct demo_cpu_state *state =
                container_of(local_queues, struct demo_cpu_state,
                             local_queues);

        /* Because this is a callback from rt_domain_t we already hold
         * the necessary lock for the ready queue.
         */

        if (edf_preemption_needed(local_queues, state->scheduled)) {
                preempt_if_preemptable(state->scheduled, state->cpu);
                return 1;
        } else
                return 0;
}
The preemption check simply extracts the containing struct demo_cpu_state from the rt_domain_t pointer using Linux's standard container_of() macro. It then checks whether the ready queue contains a job with higher priority (i.e., an earlier deadline) than the currently scheduled job (if any). If so, an invocation of the scheduler is triggered with the preempt_if_preemptable() helper function. This LITMUS^RT helper is a wrapper around Linux's preemption mechanism and works transparently for both remote cores and the local core.
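To illustrate the pointer arithmetic behind container_of(), here is a minimal, self-contained userspace sketch (with hypothetical struct names; not part of the plugin code):

#include <stddef.h>
#include <stdio.h>

/* A hypothetical outer struct embedding an inner member, mirroring how
 * struct demo_cpu_state embeds the rt_domain_t local_queues. */
struct inner { int dummy; };
struct outer {
        int id;
        struct inner member;
};

/* Userspace equivalent of the kernel macro: subtract the member's
 * offset from the member pointer to recover the enclosing struct. */
#define container_of(ptr, type, field) \
        ((type *)((char *)(ptr) - offsetof(type, field)))

int main(void)
{
        struct outer o = { .id = 42 };
        struct inner *p = &o.member; /* only the member pointer is known */
        struct outer *back = container_of(p, struct outer, member);
        printf("recovered id = %d\n", back->id); /* prints 42 */
        return 0;
}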
Note that state->scheduled may be NULL; this case is transparently handled by preempt_if_preemptable().
(The ...if_preemptable() suffix of the function refers to non-preemptive section support and is of no relevance to this tutorial.)
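For intuition, the effect of preempt_if_preemptable() amounts to roughly the following (a paraphrase, not the verbatim LITMUS^RT implementation in litmus/preempt.c, which also handles the non-preemptive sections mentioned above):

/* Paraphrased behavior of preempt_if_preemptable(t, cpu). */
static void demo_preempt_sketch(struct task_struct *t, int cpu)
{
        if (!t)
                /* No real-time task is scheduled on cpu: background
                 * work may always be preempted. */
                litmus_reschedule(cpu);
        else if (!is_np(t))
                /* t is preemptable; litmus_reschedule() transparently
                 * handles both the local-CPU and the remote-CPU case. */
                litmus_reschedule(cpu);
        /* else: t is in a non-preemptive section (not used in this
         * tutorial). */
}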
Updated Initialization
The preemption check callback must be given to edf_domain_init() during plugin initialization. The updated initialization code looks as follows.
static long demo_activate_plugin(void)
{
        int cpu;
        struct demo_cpu_state *state;

        for_each_online_cpu(cpu) {
                TRACE("Initializing CPU%d...\n", cpu);

                state = cpu_state_for(cpu);

                state->cpu = cpu;
                state->scheduled = NULL;
                edf_domain_init(&state->local_queues,
                                demo_check_for_preemption_on_release,
                                NULL);
        }

        return 0;
}
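For reference, the declarations involved have roughly the following shape (paraphrased from LITMUS^RT's rt_domain.h and edf_common.h; consult the headers for the authoritative versions). The second argument of edf_domain_init() is the preemption-check callback; passing NULL as the third argument selects the default release behavior:

typedef int  (*check_resched_needed_t)(struct _rt_domain *rt);
typedef void (*release_jobs_t)(struct _rt_domain *rt, struct bheap *tasks);

void edf_domain_init(rt_domain_t *rt,
                     check_resched_needed_t resched,
                     release_jobs_t release);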
Ready Queue Updates
Additional preemption checks are required whenever the ready queue may change because a task resumes or a new task arrives. For instance, when a higher-priority task resumes, demo_schedule() should be invoked immediately if the currently scheduled task has lower priority (or if no real-time task is scheduled at all).
To ensure the scheduler is invoked when required, we add an explicit preemption check to demo_task_resume(). The updated code looks as follows.
 1 static void demo_task_resume(struct task_struct *tsk)
 2 {
 3         unsigned long flags; /* needed to store the IRQ flags */
 4         struct demo_cpu_state *state = cpu_state_for(get_partition(tsk));
 5         lt_t now;
 6
 7         TRACE_TASK(tsk, "wake_up at %llu\n", litmus_clock());
 8
 9         /* acquire the lock protecting the state and disable interrupts */
10         raw_spin_lock_irqsave(&state->local_queues.ready_lock, flags);
11
12         now = litmus_clock();
13
14         if (is_sporadic(tsk) && is_tardy(tsk, now)) {
15                 /* This sporadic task was gone for a "long" time and woke up past
16                  * its deadline. Give it a new budget by triggering a job
17                  * release. */
18                 release_at(tsk, now);
19         }
20
21         /* This check is required to avoid races with tasks that resume before
22          * the scheduler "noticed" that it resumed. That is, the wake up may
23          * race with the call to schedule(). */
24         if (state->scheduled != tsk)
25         {
26                 demo_requeue(tsk, state);
27                 if (edf_preemption_needed(&state->local_queues, state->scheduled))
28                         preempt_if_preemptable(state->scheduled, state->cpu);
29         }
30
31         raw_spin_unlock_irqrestore(&state->local_queues.ready_lock, flags);
32 }
Note the additional check in lines 27-28.
An equivalent check is added to demo_task_new().
 1 static void demo_task_new(struct task_struct *tsk, int on_runqueue,
 2                           int is_running)
 3 {
 4         unsigned long flags; /* needed to store the IRQ flags */
 5         struct demo_cpu_state *state = cpu_state_for(get_partition(tsk));
 6         lt_t now;
 7
 8         TRACE_TASK(tsk, "is a new RT task %llu (on_rq:%d, running:%d)\n",
 9                    litmus_clock(), on_runqueue, is_running);
10
11         /* acquire the lock protecting the state and disable interrupts */
12         raw_spin_lock_irqsave(&state->local_queues.ready_lock, flags);
13
14         now = litmus_clock();
15
16         /* the first job exists starting as of right now */
17         release_at(tsk, now);
18
19         if (is_running) {
20                 /* if tsk is running, then no other task can be running
21                  * on the local CPU */
22                 BUG_ON(state->scheduled != NULL);
23                 state->scheduled = tsk;
24         } else if (on_runqueue) {
25                 demo_requeue(tsk, state);
26         }
27
28         if (edf_preemption_needed(&state->local_queues, state->scheduled))
29                 preempt_if_preemptable(state->scheduled, state->cpu);
30
31         raw_spin_unlock_irqrestore(&state->local_queues.ready_lock, flags);
32 }
Again, note the check in lines 28-29.
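Both checks hinge on edf_preemption_needed(). Its decision logic amounts to roughly the following (a paraphrase of litmus/edf_common.c, not the verbatim source):

/* Paraphrased decision logic of edf_preemption_needed(). */
static int demo_edf_preemption_needed_sketch(rt_domain_t *rt,
                                             struct task_struct *t)
{
        if (!__jobs_pending(rt))
                return 0; /* ready queue is empty: nothing to preempt for */
        if (!t)
                return 1; /* nothing is scheduled: any pending job wins */
        /* Preempt if t is not a real-time task, or if the head of the
         * ready queue has an earlier deadline than t. */
        return !is_realtime(t) || edf_higher_prio(__next_ready(rt), t);
}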
Testing
The DEMO plugin is now a fully working plugin, except that it still rejects all tasks; we correct this in the next step.