Export Plugin Topology in /proc

The userspace library liblitmus offers several functions that allow tasks to migrate to the appropriate CPUs under partitioned and clustered schedulers. This code needs to know which CPUs a particular plugin considers to form a "partition" or a "cluster". To enable liblitmus to work as intended, a plugin must therefore export the required topology hints via the /proc filesystem, for which LITMUS^RT provides a wrapper API.

First, the plugin needs to include litmus/litmus_proc.h to access the /proc wrapper API.
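
For example, add the following near the top of the plugin's source file (assumed here to be the file created in the earlier steps of this tutorial):

#include <litmus/litmus_proc.h>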

Next, allocate a struct of type struct domain_proc_info, which will hold the required topology information.

static struct domain_proc_info demo_domain_proc_info;

To communicate with the /proc wrapper, the plugin needs to define an accessor function, which is defined as follows.

static long demo_get_domain_proc_info(struct domain_proc_info **ret)
{
        *ret = &demo_domain_proc_info;
        return 0;
}

When the plugin is activated, the current topology must be stored in demo_domain_proc_info. In a simple partitioned plugin such as the DEMO plugin, each (online) processor forms its own "scheduling domain". The initialization code hence iterates over all online CPUs and creates an entry for the corresponding "scheduling domain".

static void demo_setup_domain_proc(void)
{
        int i, cpu;
        int num_rt_cpus = num_online_cpus();

        struct cd_mapping *cpu_map, *domain_map;

        /* zero the structure before initializing it */
        memset(&demo_domain_proc_info, 0, sizeof(demo_domain_proc_info));
        init_domain_proc_info(&demo_domain_proc_info, num_rt_cpus, num_rt_cpus);
        demo_domain_proc_info.num_cpus = num_rt_cpus;
        demo_domain_proc_info.num_domains = num_rt_cpus;

        i = 0;
        for_each_online_cpu(cpu) {
                /* online CPU 'cpu' forms scheduling domain 'i' */
                cpu_map = &demo_domain_proc_info.cpu_to_domains[i];
                domain_map = &demo_domain_proc_info.domain_to_cpus[i];

                cpu_map->id = cpu;
                domain_map->id = i;
                cpumask_set_cpu(i, cpu_map->mask);
                cpumask_set_cpu(cpu, domain_map->mask);
                ++i;
        }
}

This initialization code should be called from demo_activate_plugin(), which now looks as follows.

static long demo_activate_plugin(void)
{
        int cpu;
        struct demo_cpu_state *state;

        for_each_online_cpu(cpu) {
                TRACE("Initializing CPU%d...\n", cpu);

                state = cpu_state_for(cpu);

                state->cpu = cpu;
                state->scheduled = NULL;
                edf_domain_init(&state->local_queues,
                                demo_check_for_preemption_on_release,
                                NULL);
        }

        demo_setup_domain_proc();

        return 0;
}

Finally, when the plugin is deactivated, it should clean up the created structure. To this end, the plugin interface offers another callback, deactivate_plugin(), which is invoked when the user switches to another scheduling plugin.

static long demo_deactivate_plugin(void)
{
        destroy_domain_proc_info(&demo_domain_proc_info);
        return 0;
}

The updated plugin definition now looks as follows.

static struct sched_plugin demo_plugin = {
        .plugin_name            = "DEMO",
        .schedule               = demo_schedule,
        .task_wake_up           = demo_task_resume,
        .admit_task             = demo_admit_task,
        .task_new               = demo_task_new,
        .task_exit              = demo_task_exit,
        .get_domain_proc_info   = demo_get_domain_proc_info,
        .activate_plugin        = demo_activate_plugin,
        .deactivate_plugin      = demo_deactivate_plugin,
};
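
Nothing about the plugin's registration changes: as in the earlier steps of this tutorial, this structure is still registered with the LITMUS^RT core during initialization. As a reminder, here is a sketch of that registration code (the function name init_demo is assumed from the earlier steps):

static int __init init_demo(void)
{
        return register_sched_plugin(&demo_plugin);
}

module_init(init_demo);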

Testing

With these changes in place, the plugin should now be fully functional and ready for use with liblitmus.

  1. Compile and boot the kernel.
  2. Select the DEMO plugin with setsched.
  3. Launch some real-time tasks (e.g., with rtspin or rt_launch, which are both part of liblitmus).

# Activate the DEMO plugin
$ setsched DEMO

# Launch a dummy periodic real-time task with
# period 100ms and WCET 10ms on CPU 1, and let it run for 5s.
$ rtspin -p 1 10 100 5

The dummy task rtspin should terminate after 5 seconds. No output will be produced. The behavior of the plugin may be observed using the sched_trace tracing infrastructure described in the tracing tutorial.
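
Arbitrary binaries can also be run as real-time tasks with rt_launch. The following invocation is only a sketch: the WCET and period arguments mirror the rtspin example above, but consult rt_launch's help output for the exact syntax of the installed liblitmus version.

# Run 'find /' as a real-time task with period 100ms
# and WCET 10ms on CPU 1.
$ rt_launch -p 1 10 100 find /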
