= Add Preemption Checks =

The scheduling logic so far selects the next job to be scheduled using EDF priorities ''whenever the scheduler is invoked''. However, the scheduler is not automatically invoked when a new job is released. Thus, it does not yet implement ''preemptive'' EDF scheduling. In this step, we are going to rectify this by adding a ''preemption check callback'' to the real-time domain `local_queues` embedded in `struct demo_cpu_state`.

== Preemption Check Callback ==

The preemption check callback is invoked by the `rt_domain_t` code whenever a job is transferred from the release queue to the ready queue (i.e., when a future release is processed). Since the callback is invoked from within the `rt_domain_t` code, the calling thread already holds the ready queue lock.

{{{#!highlight c
static int demo_check_for_preemption_on_release(rt_domain_t *local_queues)
{
	struct demo_cpu_state *state =
		container_of(local_queues, struct demo_cpu_state, local_queues);

	/* Because this is a callback from rt_domain_t we already hold
	 * the necessary lock for the ready queue. */

	if (edf_preemption_needed(local_queues, state->scheduled)) {
		preempt_if_preemptable(state->scheduled, state->cpu);
		return 1;
	} else
		return 0;
}
}}}

The preemption check simply extracts the containing `struct demo_cpu_state` from the `rt_domain_t` pointer using Linux's standard macro `container_of()`. It then checks whether there exists a job in the ready queue that has a higher priority (= an earlier deadline) than the currently scheduled job (if any). If this is the case, an invocation of the scheduler is triggered with the `preempt_if_preemptable()` helper function. This LITMUS^RT^ helper function is a wrapper around Linux's preemption mechanism and transparently works for both remote cores and the local core. Note that `state->scheduled` may be `NULL`; this case is transparently handled by `preempt_if_preemptable()`.
(The `...if_preemptable()` suffix of the function refers to non-preemptive section support and is of no relevance to this tutorial.)

== Updated Initialization ==

The preemption check callback must be given to `edf_domain_init()` during plugin initialization. The updated initialization code looks as follows.

{{{#!highlight c
static long demo_activate_plugin(void)
{
	int cpu;
	struct demo_cpu_state *state;

	for_each_online_cpu(cpu) {
		TRACE("Initializing CPU%d...\n", cpu);

		state = cpu_state_for(cpu);

		state->cpu = cpu;
		state->scheduled = NULL;
		edf_domain_init(&state->local_queues,
		                demo_check_for_preemption_on_release,
		                NULL);
	}

	return 0;
}
}}}

== Ready Queue Updates ==

Additional preemption checks are required whenever the ready queue may be changed due to resuming or new tasks. For instance, when a higher-priority task resumes, `demo_schedule()` should be invoked immediately if the currently scheduled task has lower priority (or if currently no real-time task is scheduled). To ensure the scheduler is invoked when required, we add an explicit preemption check to `demo_task_resume()`. The updated code looks as follows.

{{{#!highlight c
static void demo_task_resume(struct task_struct *tsk)
{
	unsigned long flags; /* needed to store the IRQ flags */
	struct demo_cpu_state *state = cpu_state_for(get_partition(tsk));
	lt_t now;

	TRACE_TASK(tsk, "wake_up at %llu\n", litmus_clock());

	/* acquire the lock protecting the state and disable interrupts */
	raw_spin_lock_irqsave(&state->local_queues.ready_lock, flags);

	now = litmus_clock();

	if (is_sporadic(tsk) && is_tardy(tsk, now)) {
		/* This sporadic task was gone for a "long" time and woke up past
		 * its deadline. Give it a new budget by triggering a job
		 * release. */
		release_at(tsk, now);
	}

	/* This check is required to avoid races with tasks that resume before
	 * the scheduler "noticed" that it resumed. That is, the wake up may
	 * race with the call to schedule().
	 */
	if (state->scheduled != tsk) {
		demo_requeue(tsk, state);
		if (edf_preemption_needed(&state->local_queues, state->scheduled))
			preempt_if_preemptable(state->scheduled, state->cpu);
	}

	raw_spin_unlock_irqrestore(&state->local_queues.ready_lock, flags);
}
}}}

Note the additional preemption check just before the lock is released. An equivalent check is added to `demo_task_new()`.

{{{#!highlight c
static void demo_task_new(struct task_struct *tsk, int on_runqueue,
                          int is_running)
{
	unsigned long flags; /* needed to store the IRQ flags */
	struct demo_cpu_state *state = cpu_state_for(get_partition(tsk));
	lt_t now;

	TRACE_TASK(tsk, "is a new RT task %llu (on_rq:%d, running:%d)\n",
	           litmus_clock(), on_runqueue, is_running);

	/* acquire the lock protecting the state and disable interrupts */
	raw_spin_lock_irqsave(&state->local_queues.ready_lock, flags);

	now = litmus_clock();

	/* the first job exists starting as of right now */
	release_at(tsk, now);

	if (is_running) {
		/* if tsk is running, then no other task can be running
		 * on the local CPU */
		BUG_ON(state->scheduled != NULL);
		state->scheduled = tsk;
	} else if (on_runqueue) {
		demo_requeue(tsk, state);
	}

	if (edf_preemption_needed(&state->local_queues, state->scheduled))
		preempt_if_preemptable(state->scheduled, state->cpu);

	raw_spin_unlock_irqrestore(&state->local_queues.ready_lock, flags);
}
}}}

Again, note the preemption check just before the lock is released.

== Testing ==

The `DEMO` plugin is now a fully working plugin, except for the fact that it still rejects all tasks, which we correct in the [[../Step8|next step]].