Task Assignment
Summary
There are three approaches to task assignment, all of which are computationally difficult. For each approach, this section formulates the task assignment problem in terms of one or more frequently encountered problems that have good heuristics and software packages. When the objective is to minimize the total communication cost while also finding a feasible assignment that uses as few processors as possible, the task assignment problem can be formulated as an integer linear programming problem.
Things to Remember
- To execute a hard real-time system in a multiprocessor environment, the application system is partitioned into modules or subtasks, and each module is assigned and bound to an individual processor. This process is called task assignment.
- There are three different task assignment methods: the simple bin-packing formulation, the variable-size bin-packing formulation, and the RMFF algorithm.
- The task assignment method that ignores communication cost and resource access cost and considers only the execution time requirements is called the simple bin-packing formulation.
- Simple bin packing is used to determine whether the number and kinds of processors planned for the system are adequate.
- According to this protocol, the scheduler of each processor schedules all the local tasks and global critical sections on the processor on a fixed priority basis and controls their resource accesses according to the basic priority ceiling protocol.

Task Assignment
Most hard real-time systems are static in nature. To execute such a hard real-time system in a multiprocessor environment, the application system is partitioned into modules or subtasks, and each module is assigned and bound to an individual processor. This process is called task assignment. Generally, task assignment is done offline, but if the execution times, response requirements, data and control dependencies, and timing constraints are known, it can also be done online; in that case, it is necessary to perform an acceptance test to decide whether to execute each new task. There are three different task assignment methods: the simple bin-packing formulation, the variable-size bin-packing formulation, and the RMFF algorithm. The first method ignores both the cost of communication and the placement of resources, the second considers communication cost only, and the third considers both communication cost and resource access cost.
Task Assignment based on Execution Time Requirements (Simple Bin-Packing Formulation)
The task assignment method that ignores communication cost and resource access cost and considers only the execution time requirements is called the simple bin-packing formulation. It is used to determine whether the number and kinds of processors planned for the system are adequate, and it is also used for online acceptance tests and load balancing.
Suppose that the utilizations of n periodic tasks are given and the system is to be partitioned into modules in such a way that the tasks in each module are schedulable on a processor according to a uniprocessor scheduling algorithm. A task assignment is defined by the subset of tasks in every module. We say that the assignment requires m processors if it partitions the n tasks into m schedulable modules. The quality of a task assignment is measured by the number of processors it requires.
In order to schedule n independent, preemptable periodic tasks with relative deadlines equal to their periods according to the EDF algorithm, we need to consider their utilizations. If the total utilization of all the tasks is more than one, they cannot be feasibly scheduled on a single processor. To determine how to partition the tasks so that the individual modules can be scheduled on the minimum number of processors, we need to know the utilizations ui, for i = 1, 2, ..., n, of all the tasks. The tasks in a module are schedulable if their total utilization is less than or equal to one; however, the total utilization of all the tasks in each module may be constrained to be at most some value U less than one. From this, we can formulate the task assignment problem as the simple bin-packing problem in which the sizes of all the bins are equal to U and the sizes of the items to be packed into the bins are ui for i = 1, 2, ..., n. The number of bins required to pack all the items is the number of processors required to feasibly schedule all n tasks.
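The sketch below illustrates this bin-packing view of task assignment using the common first-fit-decreasing heuristic. The function name, the example utilizations, and the choice of capacity U = 1 (the EDF bound) are illustrative assumptions, not values from the text; any other bin-packing heuristic or solver could be substituted.

# A minimal sketch of the simple bin-packing formulation, assuming
# first-fit decreasing as the packing heuristic and U = 1 as the bin size.

def assign_tasks_first_fit_decreasing(utilizations, capacity=1.0):
    """Assign tasks (given by their utilizations) to processors so that
    the total utilization on each processor stays at most `capacity`."""
    processors = []  # each entry is the list of utilizations packed on one processor
    loads = []       # current total utilization of each processor
    # Packing larger items first usually reduces the number of bins needed.
    for u in sorted(utilizations, reverse=True):
        for i, load in enumerate(loads):
            if load + u <= capacity:
                processors[i].append(u)
                loads[i] += u
                break
        else:
            processors.append([u])
            loads.append(u)
    return processors

# Hypothetical example: six periodic tasks scheduled by EDF on each processor.
tasks = [0.6, 0.5, 0.4, 0.4, 0.3, 0.2]
modules = assign_tasks_first_fit_decreasing(tasks, capacity=1.0)
print(len(modules), "processors:", modules)

The number of lists returned is the number of processors the assignment requires; first-fit decreasing is only a heuristic, so it may use more processors than an optimal packing.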
Multiprocessor Priority Ceiling Protocol
The multiprocessor priority ceiling protocol assumes that the tasks and resources have been assigned and statically bound to processors, and that the scheduler of every synchronization processor knows the priorities and resource requirements of all the tasks requiring the global resources managed by that processor. It is also assumed that the resources used by every job during nested critical sections lie on the same processor.
According to this protocol, the scheduler of each processor schedules all the local tasks and global critical sections on the processor on a fixed-priority basis and controls their resource accesses according to the basic priority ceiling protocol. In the MPCP model, when a task uses a global resource, its global critical section executes on the synchronization processor of that resource. If the global critical section of a remote task were to have a lower priority than some local tasks on the synchronization processor, those local tasks could delay the completion of the global critical section and prolong the blocking time of the remote task. To prevent this, the multiprocessor priority ceiling protocol schedules all global critical sections at higher priorities than all the local tasks on every synchronization processor. This can be implemented in a system where the lowest priority πlowest among all the tasks is known: the scheduler of each synchronization processor schedules the global critical sections of a task with priority πi at priority πi − πlowest.
As an example, consider a system where tasks have priorities 1 through 5, so πlowest is 5. A global critical section of a task with priority 5 is scheduled at priority 0, which is higher than priority 1. Likewise, the global critical section of a task with priority 1 has priority −4, which is the highest priority in the system. (Liu, 2003, pp. 366-368)
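The short sketch below reproduces this priority elevation, assuming (as in the example) that a smaller number means a higher priority. The function name is illustrative; the mapping itself is the πi − πlowest rule described above.

# A small sketch of how MPCP elevates global critical sections above all
# local tasks, assuming smaller numbers denote higher priorities.

def global_critical_section_priority(task_priority, lowest_priority):
    """Priority at which a task's global critical section is scheduled."""
    return task_priority - lowest_priority

lowest = 5  # pi_lowest: the lowest task priority in the example system
for pi in range(1, 6):
    print("task priority", pi, "-> global critical section priority",
          global_critical_section_priority(pi, lowest))
# Task priority 5 maps to 0 (higher than 1); task priority 1 maps to -4,
# the highest priority in the system.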
Blocking Time due to Resource Contention
The figure below illustrates the types of blocking a job may suffer under the multiprocessor priority ceiling protocol.
The system has two processors, P1 and P2. Jobs J2, J4 and J5 are local to processor P1, which is the synchronization processor of the resource Dotted. Dotted is a local resource since it is required only by the local jobs J2 and J4. Jobs J1 and J3 are local to processor P2, which is the synchronization processor of the resource Black. Black is a global resource and is required by jobs J1, J2 and J3. The jobs are listed in decreasing order of priority. The short vertical bar on the timeline of each job marks the release time of the job.
Here, J2 is directly blocked by J4 when J2 requests Dotted at time 4, and J1 is directly blocked by J3 when J1 requests Black at time 3. The global critical section of J2 on P2 is delayed by the global critical section of the higher-priority job J1 in the time interval (7, 11]. The preemption delay thus suffered by J2 must be taken into account when the schedulability of J2 is determined; this delay is treated as part of J2's blocking time.
At time 11, J1 exits from its critical section. Its priority is lower than the priority of the global critical section of J2, so J2 preempts J1 on P2 in the interval (11, 12]. The total delay a job experiences as a result of preemption by global critical sections of lower-priority jobs is another factor in the total blocking time of the job.
Finally, a job can be delayed by a local higher-priority job whose execution is suspended on the local processor while that job executes on a remote processor. This delay is yet another factor of the blocking time of the job. In this example, J2 is suspended at time 7; as a result, it has still not completed in (13, 14], and J5, released at time 13.2, cannot start until time 14.
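As a rough aid to reading the example, the sketch below simply adds up the sources of blocking named above for one job. It does not derive the component values from task parameters; the function name and the sample numbers are illustrative assumptions only.

# A hedged sketch: total blocking time as the sum of the factors
# described in this subsection. Component values are hypothetical and
# would in practice be derived from the task and resource parameters.

def total_blocking_time(direct_blocking,
                        remote_gcs_preemption_delay,
                        lower_priority_gcs_preemption,
                        suspended_local_job_delay):
    """Sum the blocking factors a job may suffer under the
    multiprocessor priority ceiling protocol, as described above."""
    return (direct_blocking
            + remote_gcs_preemption_delay
            + lower_priority_gcs_preemption
            + suspended_local_job_delay)

# Illustrative values only (not read off the example figure).
print(total_blocking_time(2, 4, 1, 1))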
References
Liu, J. W. (2003). Real-Time Systems. Pearson Education, Inc. and Dorling Kindersley Publishing Inc.
Lesson
Multiprocessor Scheduling, Resource Access Control and Synchronization
Subject
Real Time System
Grade
IT