Multiprocessor Scheduling
Block Seminar
Professor Lars Lundberg
Blekinge Institute of Technology, Ronneby, Sweden
The block seminar will be held on 22.5.2006 (Kommunikationszone, Haus C, 1st floor).
Seminar Schedule
9.15 – 10.30
Nikolai Eipel – "Optimal Recovery Schemes in Fault-Tolerant Distributed Computing" (ppt)
Matthieu-P. Schapranow – "Extended Golomb Rulers as the New Recovery Schemes in Distributed Dependable Computing" (ppt)
10.45 – 12.00
Stefan Hüttenrauch – "Integrating List Heuristics into Genetic Algorithms for Multiprocessor Scheduling" (pps)
André Wendt – "Bounding the minimal completion time in high-performance parallel processing" (pdf)
13.00 – 15.00
Tobias Queck – "Analyzing Fixed-Priority Global Multiprocessor Scheduling" (pps)
Stefan Voigt – "Fixed-Priority Preemptive Multiprocessor Scheduling: To Partition or not to Partition" (ppt)
Alexander Küchler – "Global Multiprocessor Scheduling of Aperiodic Tasks using Time-Independent Priorities" (ppt)
Course Overview
The course module consists of two major parts:
- Multiprocessor scheduling for high-performance supercomputing, 4 hours
- Multiprocessor scheduling for real-time systems with guaranteed worst-case performance, 4 hours
The students are expected to prepare by reading some material (to be defined later). Reading this material will take another 8 hours, i.e. this course module requires 16 hours of work from the student.
Part 1: Multiprocessor scheduling for high-performance supercomputing
In this case there is one parallel program, consisting of a number of communicating processes, that is executed on a multiprocessor computer. The important performance criterion here is the completion time of the entire program. Typical examples of such applications include global weather simulation, solving large systems of equations, and image rendering in animated films. The way the processes in such a program are scheduled onto the processors of a multiprocessor affects the completion time. Ideally we would like to find a schedule that minimizes the completion time of the parallel program, but this problem is unfortunately NP-hard, so we need to resort to various heuristic approaches. It is often difficult to know how close to the optimum these heuristic approaches actually get.
In this part of the course module we explain the supercomputing multiprocessor scheduling problem and the difference between static scheduling, where a process is always executed by the same processor, and dynamic scheduling, where processes may migrate from one processor to another at runtime. We will also discuss various heuristic approaches, and finally we will introduce some theoretical performance bounds that make it possible to determine how close to the optimal result a certain heuristic schedule is.
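One of the simplest heuristics of this kind is greedy list scheduling: tasks are taken in some priority order and each is assigned to the processor that becomes free earliest. The Python sketch below illustrates the idea for the simplified case of independent tasks (the task durations are invented for illustration); for that case Graham's classic result bounds the resulting completion time by (2 - 1/m) times the optimum on m processors, an example of the kind of performance bound discussed in the course.

```python
import heapq

def list_schedule(durations, m):
    """Greedy list scheduling of independent tasks: assign each task,
    in the given order, to the processor that becomes free earliest.
    Returns the makespan (completion time of the whole program)."""
    # min-heap of the current finish time of each of the m processors
    finish = [0.0] * m
    heapq.heapify(finish)
    for d in durations:
        earliest = heapq.heappop(finish)   # processor that frees up first
        heapq.heappush(finish, earliest + d)
    return max(finish)

# Invented example: 5 tasks on 2 processors.
# Greedy yields makespan 7; the optimum is 6 ({3,3} and {2,2,2}),
# well within Graham's (2 - 1/2) * 6 = 9 bound.
print(list_schedule([3, 3, 2, 2, 2], 2))
```

The sketch ignores inter-process communication and precedence constraints, which is exactly what makes the real supercomputing problem hard; it is meant only to show the greedy assignment step itself.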
Part 2: Multiprocessor scheduling for real-time systems
In this case there is a set of independent tasks that are scheduled on a multiprocessor. The important performance criterion here is the response time of each task, and in most cases the key issue is to guarantee that the response time of a task stays within certain specified limits even under worst-case conditions, i.e. we want to guarantee the worst-case behavior. We consider two kinds of multiprocessor systems: systems with static scheduling, i.e. systems where a task is always executed by the same processor, and systems with dynamic scheduling, i.e. systems where a task may be executed by different processors at different points in time. We also consider two kinds of task sets: periodic task sets, where each task is repeated with a certain frequency (e.g. sampling tasks), and sporadic task sets, where each task is only executed once. It turns out that finding optimal schedules is NP-hard for most interesting cases.
In this part of the course module we explain the real-time multiprocessor scheduling problem, the difference between static and dynamic scheduling, and the difference between periodic and sporadic tasks. We will also discuss various heuristic scheduling techniques and some theoretical bounds on the worst-case performance of these techniques.
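As a small illustration of static (partitioned) scheduling with a guaranteed worst case, the sketch below combines a first-fit partitioning heuristic with the classic Liu & Layland utilization bound for rate-monotonic fixed-priority scheduling: a periodic task, represented here only by an invented utilization value, is accepted on a processor only if the bound n(2^(1/n) - 1) still holds for that processor's n tasks, which guarantees all deadlines are met there. This is a sketch of one standard technique, not the specific algorithms covered in the seminar talks.

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n periodic tasks under
    rate-monotonic scheduling on one processor: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit_partition(utilizations, m):
    """First-fit partitioning: place each task on the first of m
    processors whose task set still satisfies the RM bound.
    Returns the per-processor assignment, or None if a task fits nowhere
    (the task set is then not schedulable by this heuristic)."""
    procs = [[] for _ in range(m)]
    for u in utilizations:
        for p in procs:
            if sum(p) + u <= rm_bound(len(p) + 1):
                p.append(u)
                break
        else:
            return None  # no processor can guarantee this task's deadlines
    return procs

# Invented example: four periodic tasks with utilizations 0.5..0.2
# are partitioned onto 2 processors with all deadlines guaranteed.
print(first_fit_partition([0.5, 0.4, 0.3, 0.2], 2))
```

Note that the utilization bound is sufficient but not necessary, so the heuristic may reject task sets that are in fact schedulable; tighter tests are part of what makes the analysis of partitioned versus global scheduling interesting.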
Time & Location
Bibliothek (BE.2)
21.2.2006 – 11:00-12:30
22.2.2006 – 9:30-12:30
23.2.2006 – 11:00-12:30
Slides
Optimal Recovery Schemes in Fault Tolerant Cluster and Distributed Computing
Evaluating Heuristic Scheduling Algorithms for High Performance Parallel Processing
Global Multiprocessor Scheduling of Aperiodic Tasks using Time-Independent Priorities
Articles
Additional articles can be found here.
Assessment

The course consists of three components:
- a block lecture on 21.-23.2.2006 (8h)
- literature study on selected topics from the lecture (16h)
- a presentation by the participants on the studied material (6h)

The assessment takes into account both the presentation and the accompanying written report.

The presentations, whose topics will be assigned during the lecture, will take place as a block seminar during the 2006 summer semester (expected in mid-May).
The course carries a total of 3 graded credit points.
Presentations
Stefan Voigt – "Fixed-Priority Preemptive Multiprocessor Scheduling: To Partition or not to Partition"
Tobias Queck – "Analyzing Fixed-Priority Global Multiprocessor Scheduling"
Nikolai Eipel – "Optimal Recovery Schemes in Fault-Tolerant Distributed Computing"
Alexander Küchler – "Global Multiprocessor Scheduling of Aperiodic Tasks using Time-Independent Priorities"
Stefan Hüttenrauch – "Integrating List Heuristics into Genetic Algorithms for Multiprocessor Scheduling"
André Wendt – "Bounding the minimal completion time in high-performance parallel processing"
Matthieu-P. Schapranow – "Extended Golomb Rulers as the New Recovery Schemes in Distributed Dependable Computing"
Sebastian Steinhauer – "An Incremental Genetic Algorithm Approach to Multiprocessor Scheduling"
Bastian Steinert – "The Aperiodic Multiprocessor Utilization Bound for Liquid Tasks"