Model of Multiprocessor and Distributed Systems
This chapter describes how uniprocessor scheduling and schedulability analysis algorithms are extended to schedule hard real-time tasks in static multiprocessor systems. In a static system, scheduling involves the following steps:
1. Task assignment
2. Interprocessor synchronization
3. Task scheduling
4. Schedulability analysis
This section discusses the model of multiprocessor and distributed systems and the subtopics under it.
Things to Remember
- A multiprocessor system is tightly coupled so that global status and workload information on all processors can be kept current at a low cost.
- Here, it is assumed that each processor has its own scheduler.
- The processors are of the same type, or identical, if they can be used interchangeably.
- Processors of different types cannot be used interchangeably.
- According to the model of heterogeneous processors, i.e., the unrelated processors model, each job can execute on some types of processors but, in general, not all types.
- According to the job shop model, each task Ti is a chain of n(i) jobs denoted by Ji,k for k = 1, 2, . . . , n(i).
- A flow shop is a special job shop in which all the tasks have the same visit sequence.

Model of Multiprocessor and Distributed Systems
These systems contain more than one processor. Some are known as multiprocessor systems, while others are known as distributed systems. A multiprocessor system is tightly coupled, so global status and workload information on all processors can be kept current at a low cost. Such a system either uses a centralized dispatcher or scheduler, or, when each processor has its own scheduler, the actions and decisions of the schedulers on all processors are coherent. In contrast, a distributed system is loosely coupled; in such a system it is costly to keep global status and workload information current. The schedulers on different processors may make scheduling and resource access control decisions independently. As a result, their decisions may be incoherent as a whole.
This chapter assumes that each processor has its own scheduler. Each scheduling, resource access control, or synchronization algorithm will be evaluated to see how much it relies on current global information, how much coordination among schedulers it requires, and therefore how suitable it is for loosely coupled systems.
Identical versus Heterogeneous Processors
The processors are said to be of the same type, or identical, if they can be used interchangeably. For example, in a parallel machine, each of the CPUs can execute every computation job in the system, so the CPUs are identical. Likewise, if any message from a source to a destination can be sent on any of the data links connecting them, then the links are identical.
Different types of processors cannot be used interchangeably, and they may have different functions. For example, CPUs, disks, and transmission links are functionally different, so they cannot be used interchangeably. Processors can also be of different types for other reasons. For example, if the designer decides to use some CPUs for only some components of the system but not others, then the CPUs are divided into different types according to the components they can execute. In a static system, the application system is partitioned into µ components, and the jobs in each component execute on a fixed CPU; the CPUs are then viewed as µ different processors.
The model of heterogeneous processors used here is known as the unrelated processors model in the scheduling theory literature. According to this model, each job can execute on some types of processors but, in general, not all types. Different types of processors may have different speeds, and the execution times of each job on different types of processors are unrelated, hence the name of the model. For example, the execution time of a computation-intensive job is 1 second on CPU1 but 5 seconds on a less powerful CPU2. Because CPU2 has better interrupt handling and I/O capabilities, the execution time of an I/O-intensive job is 10 seconds on CPU1 but only 3 seconds on CPU2. Neither job can execute on a transmission link or a signal processor, so their execution times on these kinds of processors are infinite. The unrelated processors model thus allows us to characterize all such systems.
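The unrelated processors model above can be sketched as a table of execution times, with an infinite entry marking a processor type a job cannot execute on. This is a minimal illustration; the job names, processor names, and times are hypothetical, taken from the example in the text.

```python
import math

# Hypothetical execution times (seconds) under the unrelated processors
# model: keys are (job, processor type) pairs. math.inf marks a
# processor type on which the job cannot execute at all.
exec_time = {
    ("compute_job", "CPU1"): 1.0,
    ("compute_job", "CPU2"): 5.0,
    ("compute_job", "link"): math.inf,  # cannot run on a transmission link
    ("io_job", "CPU1"): 10.0,
    ("io_job", "CPU2"): 3.0,
    ("io_job", "link"): math.inf,
}

def can_execute(job, proc_type):
    """A job can execute on a processor type iff its time there is finite."""
    return math.isfinite(exec_time.get((job, proc_type), math.inf))

def best_processor(job, proc_types):
    """Return the processor type giving the smallest execution time, or None."""
    feasible = [p for p in proc_types if can_execute(job, p)]
    return min(feasible, key=lambda p: exec_time[(job, p)]) if feasible else None

print(best_processor("compute_job", ["CPU1", "CPU2", "link"]))  # CPU1
print(best_processor("io_job", ["CPU1", "CPU2", "link"]))       # CPU2
```

Because the times are unrelated, the "best" processor differs per job: the computation-intensive job prefers CPU1 while the I/O-intensive job prefers CPU2, exactly as in the example above.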
A processor is denoted by the letter P. The system contains µ types of processors, and there are mi processors of type i, i.e., Pi, for i = 1, 2, 3, . . . , µ. (Liu, 2003, pp. 345-346)
End-to-End Jobs and Tasks
A task in a real-time monitor system consists of three jobs: sampling, encoding, and processing the reading of a sensor on a field processor; sending the sensor data through a communication processor to the central control processor; and correlating and displaying the data with other sensor data on the control processor.
Job shops and flow shops
Concurrency in a multiprocessor system arises as jobs of different tasks sequence through different processors in a pipeline manner. The classical job shop and flow shop models capture this type of concurrency. According to the job shop model, each task Ti is a chain of n(i) jobs denoted by Ji,k for k = 1, 2, 3, . . . , n(i). Adjacent jobs Ji,k and Ji,k+1 on the chain execute on different processors, and Ji,k+1 becomes ready for execution only when Ji,k completes. For example, the real-time monitor task is a chain of three jobs.
We can specify the processors on which the n(i) jobs in each task Ti execute by the visit sequence Vi = (Vi,1, Vi,2, . . . , Vi,n(i)) of the task. The kth entry Vi,k in this sequence gives the name of the processor on which the job Ji,k executes. Therefore, the visit sequence of the real-time monitor task is (field processor, communication processor, control processor).
For example, the visit sequences V1 = (P1, P2) and V2 = (P2, P3, P4, P3, P1) of tasks T1 and T2 in a system mean that T1 has two jobs, J1,1 on P1 followed by J1,2 on P2. T2 has five jobs, and they execute in turn on P2, then on P3, P4, P3, and finally on P1.
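The visit sequences and the chain precedence constraint of the job shop model can be sketched as follows, using the example task set above (the dictionary representation is just one possible encoding):

```python
# Visit sequences from the example: T1 = (P1, P2),
# T2 = (P2, P3, P4, P3, P1).
visit = {
    "T1": ["P1", "P2"],
    "T2": ["P2", "P3", "P4", "P3", "P1"],
}

def jobs(task):
    """Expand task Ti into its chain of jobs (J_{i,k}, processor), k = 1..n(i)."""
    return [((task, k + 1), proc) for k, proc in enumerate(visit[task])]

def predecessor(task, k):
    """J_{i,k} becomes ready only when J_{i,k-1} completes (no predecessor if k = 1)."""
    return (task, k - 1) if k > 1 else None

# T2's five jobs execute in turn on P2, P3, P4, P3, P1.
for job, proc in jobs("T2"):
    print(job, "runs on", proc, "after", predecessor(*job))
```

Note that the same physical processor (here P3) may appear more than once in a visit sequence, since a task may revisit a processor.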
A flow shop is a special job shop in which all the tasks have the same visit sequence. For example, in a real-time monitor system, each task samples, processes, and displays the readings of a different sensor. Therefore, assuming the system has only one processor of each type, the visit sequence of all the monitor tasks is V = (field processor, communication processor, control processor).
End to End Timing Constraints
The timing constraints that can be derived directly from the high-level requirements of the applications are typically end-to-end in nature: they give the release time and deadline of each task as a whole. The release time ri of a task Ti in a job shop is the release time of its first job Ji,1, and the deadline Di of the task is the deadline of its last job Ji,n(i). Because the timing constraints of such a task are imposed on the jobs at the two ends of the chain, they are called the end-to-end release time and deadline, and the task is called an end-to-end task.
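The definition above can be made concrete with a small sketch: given the per-job timing of a chain, the end-to-end release time and deadline are simply read off the first and last jobs. The job timings here are made-up numbers for illustration.

```python
def end_to_end(job_timings):
    """Given a chain of jobs as (release, deadline) pairs, return the
    end-to-end release time (of J_{i,1}) and deadline (of J_{i,n(i)})."""
    release = job_timings[0][0]    # release time of the first job
    deadline = job_timings[-1][1]  # deadline of the last job
    return release, deadline

# Hypothetical three-job monitor task: (release, deadline) pairs in ms.
monitor_jobs = [(0, 4), (4, 7), (7, 10)]
print(end_to_end(monitor_jobs))  # (0, 10)
```

The intermediate jobs' timings do not appear in the end-to-end constraint; how the interval between ri and Di is divided among the jobs on the chain is a separate (deadline assignment) problem.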
Periodic End to End Task
An end-to-end task Ti is periodic with period pi if a chain of n(i) jobs is released every pi or more units of time, and the jobs in the chain execute in turn on the processors given by the visit sequence (Vi,1, Vi,2, . . . , Vi,n(i)).
The end-to-end periodic task Ti is called the parent task of its n(i) subtasks Ti,k, and subtasks of the same parent task are called sibling subtasks. The period of an end-to-end periodic task is the period of its first subtask. The phase Φi of the task Ti is the release time of the first job of the first subtask. Similarly, the execution time ei,k is the maximum amount of time required to complete any job in Ti,k.
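Under these definitions, the release times of successive job chains of a periodic end-to-end task follow directly from the phase and the period. A minimal sketch, assuming the minimum inter-release separation is exactly the period:

```python
def chain_release_times(phi, p, count):
    """Release times of the first `count` job chains of a periodic
    end-to-end task with phase phi and period p (chains released
    exactly p units apart, the minimum separation)."""
    return [phi + j * p for j in range(count)]

# Hypothetical task: phase 2, period 5 -> chains released at 2, 7, 12, 17.
print(chain_release_times(2, 5, 4))  # [2, 7, 12, 17]
```

In a sporadic variant, consecutive chains could be released more than p units apart; the list above is the densest pattern the definition allows.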
References
Liu, J. W. S. (2003). Real-Time Systems. Pearson Education, Inc. and Dorling Kindersley Publishing Inc.
Lesson
Multiprocessor Scheduling, Resource Access Control and Synchronization
Subject
Real Time System
Grade
IT