Interprocess Communication
Interprocess Communication (IPC) provides a mechanism that allows processes to communicate and synchronize their activities without sharing the same address space. When multiple processes access and manipulate the same data concurrently, a race condition occurs: the outcome of the execution depends on the particular order in which the accesses take place. The part of the program that requires access to the shared memory is called the critical region or critical section. When a process is accessing shared modifiable data, it is said to be in its critical section.
Things to Remember
· Interprocess Communication (IPC) provides a mechanism that allows processes to communicate and synchronize their activities without sharing the same address space.
· When multiple processes access and manipulate the same data concurrently, a race condition occurs: the outcome of the execution depends on the particular order in which the accesses take place.
· A program can be divided into two parts: one that requires access to shared memory and one that does not. The part that requires access to the shared memory is called the critical region or the critical section.
· When a process is accessing shared modifiable data, it is said to be in its critical section; all the other processes may continue to run, but must not enter their own critical regions.
· When a process leaves the critical region, another process waiting to enter the critical region must be allowed to enter it.
· Mutual exclusion is a way of making sure that if one process is using a shared variable, the other processes are excluded from doing the same thing.
· The simplest way to achieve mutual exclusion is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it.
· Peterson's algorithm is a software solution for mutual exclusion between two processes. It consists of two procedures written in ANSI C, which means that function prototypes should be supplied for all the functions defined and used.
· Sleep is a system call that causes the caller to block until another process wakes it up; the wakeup call takes the process to be awakened as its parameter.
· Alternatively, sleep and wakeup can each take one parameter, a memory address used to match up sleeps with wakeups.
· When the producer-consumer problem is solved using sleep and wakeup, there is a chance of losing a wakeup signal. To solve this, E.W. Dijkstra suggested using an integer variable, called a semaphore, to count the number of pending wakeups.
· Monitors are a higher-level synchronization primitive: a programming-language construct that guarantees appropriate access to the critical region.
· A process in a multiprogramming system is said to be in a deadlock if it is waiting for a particular event that will never occur.
· A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.

Interprocess Communication (IPC)
Processes frequently need to communicate with other processes. For instance, in a shell pipeline, the output of the first process must be passed to the second process, and so on down the line. Thus, there is a need for communication between processes, ideally in a well-structured way that does not use interrupts (Tanenbaum, 2013). Interprocess Communication (IPC) provides a mechanism that allows processes to communicate and synchronize their activities without sharing the same address space. It is especially helpful in a distributed environment, where the communicating processes may reside on different computers connected through a network.
There are three primary issues related to IPC:
- How can one process pass information to another?
- How can two processes be kept out of each other's way when engaging in critical activities?
- How can proper sequencing be maintained when dependencies are present?
Race Condition
When multiple processes access and manipulate the same data concurrently, then a race condition occurs and the outcome of the execution depends on the particular order in which the access takes place.
For example, let us consider a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks whether there are any files to be printed; if there are, it prints them and then removes their names from the spooler directory.

Here, ‘in’ and ‘out’ are two shared variables: ‘in’ points to the next free slot in the directory and ‘out’ points to the next file to be printed. Suppose that at a given instant slots 0 to 3 are empty (those files have already been printed) and slots 4 to 6 are full (they contain the names of files queued for printing). More or less simultaneously, process A and process B decide to queue a file for printing. At this instant, both processes think that the next free slot is 7. Process A reads ‘in’ and stores the value 7 in a local variable, next_free_slot. Just then, a clock interrupt occurs and the CPU switches to process B. Process B also reads ‘in’ and stores 7 in its own next_free_slot. Process B continues to run, writes its file name in slot 7, and updates ‘in’ to 8. Eventually process A runs again, starting from the place it left off. It looks at its next_free_slot, finds 7 there, and writes its file name in slot 7, erasing the name process B just put there. Then it sets ‘in’ to 8. Process B will never receive its output. Situations like this, where two or more processes are reading or writing the same shared data and the final result depends on who runs precisely when, are called race conditions.
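The lost update can be replayed deterministically. The sketch below simulates the exact interleaving described above in plain C; the names (spooler, next_free_slot, and the file names "A.txt"/"B.txt") are purely illustrative, not taken from any real spooler implementation.

```c
#include <string.h>

#define SLOTS 10

/* a deterministic replay of the spooler race described in the text */
static char spooler[SLOTS][16];
static int in = 7;                    /* next free slot, as in the example */

/* returns 1 if process B's file was lost (overwritten by A) */
int simulate_lost_update(void) {
    int a_next = in;                  /* A reads 'in' (7), then is preempted */

    int b_next = in;                  /* B runs: it also reads 'in' (7) ...  */
    strcpy(spooler[b_next], "B.txt"); /* ... queues its file in slot 7 ...   */
    in = b_next + 1;                  /* ... and sets 'in' to 8              */

    strcpy(spooler[a_next], "A.txt"); /* A resumes with its stale copy (7)   */
    in = a_next + 1;                  /* and also sets 'in' to 8             */

    return strcmp(spooler[7], "B.txt") != 0;  /* B's entry was overwritten */
}
```

Note that ‘in’ ends up at 8 either way, so nothing in the directory itself reveals that a file was silently lost.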
Critical Region (Critical Section)
A program can be divided into two parts: one that requires access to shared memory and one that does not. The part of the program that requires access to the shared memory is called the critical region or the critical section. If an arrangement could be made such that no two processes are ever in their critical regions at the same time, race conditions could be avoided. When a process is accessing shared modifiable data, it is said to be in its critical section; all the other processes may continue to run, but must not enter their own critical regions. When a process leaves the critical region, another process waiting to enter the critical region must be allowed to enter it.
A good solution to the race-condition problem must satisfy the following four conditions:
- No two processes may be simultaneously inside their critical regions.
- No assumptions may be made about the speeds or the number of CPUs.
- No process running outside the critical region may block other processes.
- No process should have to wait forever to enter its critical region.
Mutual Exclusion
The key to preventing race conditions, and many other situations involving shared memory and files, is to prohibit more than one process from reading or writing the shared data at the same time; that is, we need mutual exclusion. Mutual exclusion is a way of making sure that if one process is using a shared variable, the other processes are excluded from doing the same thing.

Here in Figure 2-3, process A enters its critical region at time T1. A little later, process B attempts to enter its critical region at time T2 but fails because process A is already in its critical region. So, process B is temporarily blocked until time T3, when process A leaves its critical region. Process B then enters its critical region and leaves at time T4. The same pattern repeats for every process attempting to enter its critical region.
Mutual Exclusion with Busy Waiting
This section discusses various proposals for achieving mutual exclusion by busy waiting, so that while one process is busy updating shared memory in its critical region, no other process can enter its own critical region and interfere.
Interrupt Disabling
The simplest way to achieve mutual exclusion is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it. With interrupts disabled, no clock interrupt can occur, so the CPU cannot be switched from one process to another. The process can thus examine and update the shared memory without fear that any other process will enter the critical region. However, if a process turns off the interrupts and never turns them on again, no other process will ever enter its critical region, which is a major drawback of this method.
Lock Variables
Lock variables are a software solution for achieving mutual exclusion, in which a single shared lock variable is initially set to 0. When a process wants to enter its critical region, it first tests the lock. If the lock is 0, the process sets it to 1 and enters its critical region. When it finishes its job in the critical region, it sets the lock back to 0 and leaves, so that other processes waiting to enter their critical regions may do so. If the lock is already 1, some other process is in its critical region, and the process waits until the lock becomes 0. This method has a flaw similar to the one in the spooler directory. Suppose one process reads the lock as 0, but before it can set it to 1, another process also reads the lock as 0, sets it to 1, and enters its critical region. The first process then also sets the lock to 1, and two processes are inside their critical regions at the same time, violating mutual exclusion.
Strict Alternation
In this method for achieving mutual exclusion, the processes share a common integer variable ‘turn’ which is initially 0. This variable keeps track of whose turn it is to enter the critical region.
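The strict-alternation code (the original figure is not reproduced here) takes roughly the following standard form; this is a sketch of the well-known two-process version rather than a copy of the lost figure:

```c
/* strict alternation for two processes, numbered 0 and 1 */
volatile int turn = 0;         /* whose turn to enter the critical region */

void enter_region(int process) {
    while (turn != process)
        ;                      /* busy wait until it is our turn */
}

void leave_region(int process) {
    turn = 1 - process;        /* hand the turn to the other process */
}
```

Continuously testing a variable until some value appears, as enter_region does here, is called busy waiting, and the loop doing it is a spin lock.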

Initially, process 0 inspects ‘turn’, finds it to be 0, and enters its critical region. Process 1 also finds it to be 0, so it sits in a tight loop, continuously checking ‘turn’ and waiting for it to become 1. When process 0 leaves the critical region, it sets ‘turn’ to 1 and process 1 enters its critical region. After process 1 leaves the critical region, it sets ‘turn’ back to 0.
This method also has a flaw. Suppose process 0 leaves its critical region and sets ‘turn’ to 1, but process 1 is still busy with its non-critical region and does not enter its critical region. If process 0 needs to enter its critical region again, it must wait until process 1 has finished its work in the non-critical region, entered and left its critical region, and set ‘turn’ back to 0. Process 0 is thus slowed down and made to wait unnecessarily, which violates the condition that no process running outside its critical region may block another process.
Peterson’s Algorithm
Peterson’s algorithm is a software solution for achieving mutual exclusion between two processes. It consists of two procedures, written here in ANSI C, which means that function prototypes should be supplied for all the functions defined and used.
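The figure containing the two procedures is not reproduced above; the sketch below is the standard textbook form of Peterson's algorithm rather than a copy of the lost figure:

```c
#define FALSE 0
#define TRUE  1
#define N     2                   /* number of processes */

int turn;                         /* whose turn is it? */
int interested[N];                /* all values initially 0 (FALSE) */

void enter_region(int process) {  /* process is 0 or 1 */
    int other = 1 - process;      /* number of the other process */
    interested[process] = TRUE;   /* show that we are interested */
    turn = process;               /* set the flag */
    while (turn == process && interested[other] == TRUE)
        ;                         /* busy wait until it is safe to enter */
}

void leave_region(int process) {
    interested[process] = FALSE;  /* indicate departure from critical region */
}
```
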

Before entering its critical region, each process calls ‘enter_region’ with its own process number, 0 or 1, as a parameter. This call will cause it to wait if need be until it is safe to enter. While leaving its critical region, the process calls ‘leave_region’ to indicate that it has done its job in the critical region and allow other processes to enter their critical region.
Consider the case where both processes call ‘enter_region’ almost simultaneously. Both will store their process number in ‘turn’; whichever store is done last is the one that counts, as the first one is overwritten and lost. Suppose process 1 stores last, so ‘turn’ is 1. When both processes reach the while loop, process 0 executes it zero times and enters its critical region, while process 1 loops until process 0 exits its critical region.
Test and Set Lock
The test and set lock (TSL) is a hardware solution: an instruction that reads the contents of the memory word ‘lock’ into register RX and then stores a non-zero value at the memory address ‘lock’. The operations of reading the word and storing into it are guaranteed to be indivisible. The instruction is written as:
TSL RX, LOCK
No other processor can access the memory word until the instruction is finished. When ‘lock’ is 0, any process may set it to 1 using the TSL instruction and then read or write the shared memory using ordinary move instructions. When the job is done, the process sets ‘lock’ back to 0.
When a process wants to enter its critical region, it calls ‘enter_region’. Using a TSL instruction, the old value of the lock variable is copied to the register and the lock is set. If the register’s content is not 0, the lock was already set, so the process busy waits; otherwise, it may enter its critical region. After completing its task, the process calls ‘leave_region’, which sets ‘lock’ back to 0.
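TSL itself is a machine instruction, but its semantics are available portably in C11 through stdatomic.h: atomic_flag_test_and_set atomically reads the old value and stores a non-zero value, which is exactly what ‘enter_region’ needs. A minimal sketch, not the original assembly version:

```c
#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;  /* the shared 'lock', initially clear */

void enter_region(void) {
    /* atomically: copy old lock value, store non-zero (the TSL semantics) */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                            /* lock was already set: busy wait */
}

void leave_region(void) {
    atomic_flag_clear(&lock_flag);   /* store 0 in the lock */
}
```
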
Sleep and Wakeup
Some IPC primitives block the process, instead of wasting CPU time, when it is not allowed to enter its critical region. One such pair is sleep and wakeup. Sleep is a system call that causes the caller to block until another process wakes it up; the wakeup call takes the process to be awakened as its parameter. An alternative is for both sleep and wakeup to have one parameter each, a memory address used to match up sleeps with wakeups.
The Producer-Consumer Problem
Two processes share a common fixed-size buffer: a ‘producer’ generates information and puts it into the buffer, while a ‘consumer’ takes information out of the buffer. A problem arises when their speeds differ: the producer may insert items faster than the consumer can remove them. If the producer inserts items rapidly, the buffer will fill up and the producer must go to sleep until the consumer has consumed something; the reverse case, an empty buffer, puts the consumer to sleep. To keep track of the buffer, a variable ‘count’ is used. If the maximum number of items the buffer can hold is ‘N’, the producer’s code first tests whether count is N. If it is, the producer goes to sleep; if not, the producer adds an item and increments count. The consumer’s code tests whether count is 0. If it is, the consumer goes to sleep; otherwise, the consumer removes an item and decrements count. Each process also tests whether the other should be awakened (Tanenbaum, 2013).
Suppose the buffer is empty and the consumer has just read count and found it to be 0, but at that instant its quantum expires. The producer runs, inserts an item into the buffer, increments count, and, seeing that count was previously 0, sends a wakeup signal to the consumer. But the consumer is not yet logically asleep, so the wakeup signal is lost. The consumer then runs with the 0 it read earlier and goes to sleep. Sooner or later the producer fills the buffer and also goes to sleep, and both sleep forever.
Semaphores
When the producer-consumer problem is solved using sleep and wakeup, there is a chance of losing a wakeup signal. To solve this problem, E.W. Dijkstra suggested using an integer variable, called a semaphore, to count the number of pending wakeups. A semaphore could have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups were pending. This method uses two operations: down and up.
Down
- Checks whether the value is greater than 0. If so, it decrements the value and continues. If not, the process is put to sleep without completing the down for the moment.
- Checking the value, changing it, and possibly going to sleep are all done as a single, indivisible atomic action.
Up
- Increments the value of the semaphore. If one or more processes were sleeping on that semaphore, unable to complete an earlier down, one of them is chosen and allowed to complete its down.
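As a sketch of these semantics, down and up can be modeled in C with a POSIX mutex and condition variable. The semaphore struct and the function names below are illustrative choices, not a standard API:

```c
#include <pthread.h>

typedef struct {
    int value;                   /* number of saved wakeups */
    pthread_mutex_t m;
    pthread_cond_t  c;
} semaphore;

void sem_init_value(semaphore *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void down(semaphore *s) {
    pthread_mutex_lock(&s->m);   /* check-and-change is one atomic action */
    while (s->value == 0)
        pthread_cond_wait(&s->c, &s->m);  /* sleep without losing a wakeup */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void up(semaphore *s) {
    pthread_mutex_lock(&s->m);
    s->value++;                  /* record the wakeup ... */
    pthread_cond_signal(&s->c);  /* ... and wake one sleeper, if any */
    pthread_mutex_unlock(&s->m);
}
```

Because pthread_cond_wait atomically releases the mutex and blocks, the lost-wakeup window of the plain sleep/wakeup scheme cannot occur here.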
Solving Producer-Consumer problem using semaphore
There are three semaphores in this solution: ‘full’, for counting the number of slots that are full; ‘empty’, for counting the number of slots that are empty; and ‘mutex’, to make sure that the producer and the consumer do not access the buffer at the same time.
Initially, ‘full’ is 0, ‘empty’ is equal to the number of slots in the buffer, and ‘mutex’ is 1. Semaphores that are initialized to 1 and used by two or more processes to ensure that only one of them can be in its critical region at a time are called binary semaphores. If each process does a ‘down’ just before entering its critical region and an ‘up’ just after leaving it, mutual exclusion is guaranteed.
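Assuming POSIX threads and semaphores are available, the solution can be sketched as follows. The run_demo driver, the buffer layout, and the item values are illustrative choices made here, not part of any standard API:

```c
#include <pthread.h>
#include <semaphore.h>

#define SLOTS 4                    /* buffer size N */

static int   buffer[SLOTS];
static int   in_pos  = 0;          /* next free slot */
static int   out_pos = 0;          /* next item to consume */
static sem_t empty_slots;          /* counts empty slots ("empty") */
static sem_t full_slots;           /* counts full slots  ("full")  */
static sem_t mutex;                /* binary semaphore for the buffer */
static int   n_items;              /* how many items to produce */
static long  consumed_sum;         /* what the consumer saw */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= n_items; item++) {
        sem_wait(&empty_slots);    /* down(empty): wait for a free slot */
        sem_wait(&mutex);          /* down(mutex): enter critical region */
        buffer[in_pos] = item;
        in_pos = (in_pos + 1) % SLOTS;
        sem_post(&mutex);          /* up(mutex): leave critical region */
        sem_post(&full_slots);     /* up(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < n_items; i++) {
        sem_wait(&full_slots);     /* down(full): wait for an item */
        sem_wait(&mutex);
        consumed_sum += buffer[out_pos];
        out_pos = (out_pos + 1) % SLOTS;
        sem_post(&mutex);
        sem_post(&empty_slots);    /* up(empty): one more free slot */
    }
    return NULL;
}

long run_demo(int n) {
    n_items = n;
    consumed_sum = 0;
    in_pos = out_pos = 0;
    sem_init(&empty_slots, 0, SLOTS);  /* all slots start empty */
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);            /* binary semaphore, initially 1 */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```

Whatever interleaving the scheduler chooses, the consumer receives every item exactly once, because ‘full’ and ‘empty’ handle synchronization and ‘mutex’ handles mutual exclusion.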
Monitors
Monitors are a higher-level synchronization primitive: a programming-language construct that guarantees appropriate access to the critical region. A monitor is a collection of procedures, variables, and data structures grouped together in a special kind of module or package (Tanenbaum, 2013). Processes that wish to access the shared data do so by calling the monitor's procedures, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor. Monitors are managed at the compiler level, and they have an important property that makes them useful for achieving mutual exclusion: only one process can be active in a monitor at any instant. The first few instructions of a monitor procedure check whether any other process is currently active within the monitor. If so, the calling process is suspended until the other process has left the monitor; otherwise, it may enter. As a result, no two processes will be in their critical regions at the same time.
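C has no monitor construct, but the idea can be approximated: a "monitor" becomes a struct whose procedures all acquire the same mutex on entry, so only one process is active inside at a time. The account_monitor and deposit names below are purely illustrative:

```c
#include <pthread.h>

/* a "monitor": shared data plus a lock that every monitor procedure takes */
typedef struct {
    pthread_mutex_t lock;    /* at most one process active in the monitor */
    long balance;            /* shared data hidden inside the monitor */
} account_monitor;

void deposit(account_monitor *m, long amount) {
    pthread_mutex_lock(&m->lock);   /* suspend if another process is inside */
    m->balance += amount;           /* the "monitor procedure" body */
    pthread_mutex_unlock(&m->lock); /* leave the monitor */
}
```

The crucial difference from a real monitor is that here the discipline is voluntary: nothing stops a careless caller from touching m->balance directly, whereas a compiler-enforced monitor makes that impossible.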
Message Passing
Message passing allows processes to communicate with one another without resorting to shared data. It provides at least two operations, send and receive, which, like semaphores and unlike monitors, are system calls rather than language constructs (Tanenbaum, 2013). Messages can be of fixed size or variable length, with the names of the receiver and sender specified. The operations can easily be put into library procedures, such as:
send(destination, &message);
and
receive(source, &message);
If processes ‘p’ and ‘q’ want to communicate with each other, they must send messages to and receive messages from each other over a communication link existing between them.
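As a sketch, send and receive can be wrapped as library procedures over a POSIX pipe serving as the communication link. The fixed 32-byte message size and the function names are assumptions made here for simplicity; a real message-passing system would name a destination process rather than a file descriptor:

```c
#include <string.h>
#include <unistd.h>

#define MSG_SIZE 32     /* fixed-size messages, one design option from the text */

/* send(destination, &message): write one fixed-size message to the link */
void send_msg(int dest_fd, const char *text) {
    char message[MSG_SIZE] = {0};
    strncpy(message, text, MSG_SIZE - 1);   /* pad/truncate to MSG_SIZE */
    write(dest_fd, message, MSG_SIZE);
}

/* receive(source, &message): read one fixed-size message from the link */
void receive_msg(int src_fd, char *message) {
    read(src_fd, message, MSG_SIZE);
}
```

Because the pipe buffers messages in order, the receiver sees them in exactly the sequence they were sent, which is one way the proper-sequencing issue from the beginning of this lesson can be handled.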
The Dining Philosophers Problem
This synchronization problem was posed and solved by Dijkstra in 1965. In this problem, five philosophers are seated around a circular table. Each philosopher has a plate of spaghetti. The spaghetti is so slippery that a philosopher needs two forks to eat it. Between each pair of plates is one fork. Its layout is shown in the figure 2-9.
The life of a philosopher consists of alternating periods of eating and thinking. (This is something of an abstraction.) When a philosopher gets hungry, she tries to acquire her left and right forks, one at a time, in either order. If successful in acquiring two forks, she eats for a while, then puts down the forks and continues thinking. The key question is: can you write a program for each philosopher that does what it is supposed to do and never gets stuck?
Figure 2-10 demonstrates the obvious solution. The procedure take_fork waits until the specified fork is available and then seizes it. Unfortunately, the obvious solution is wrong. Suppose that all five philosophers take their left forks simultaneously. None will be able to take their right fork, and there will be a deadlock.
Figure 2-11 gives the solution to the problem and is deadlock free.
It uses an array, state, to keep track of whether each philosopher is eating, thinking, or hungry (trying to acquire forks). A philosopher may move into the eating state only if neither neighbor is eating. Philosopher i's neighbors are defined by the macros LEFT and RIGHT; in other words, if i is 2, LEFT is 1 and RIGHT is 3. (A simpler deadlock-free alternative, protecting the whole meal with a single mutex, is sufficient from a theoretical point of view, but from a practical one it has a performance bug: only one philosopher can be eating at any instant. With five forks available, we should be able to allow two philosophers to eat at the same time, which the state-array solution permits.) (Classical IPC Problems, 2013)
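The Figure 2-11 code is not reproduced above; the sketch below follows the standard textbook structure using POSIX semaphores. The helper names test_phil and init_table are ours (test_phil corresponds to the procedure usually called test):

```c
#include <semaphore.h>

#define N        5                  /* number of philosophers */
#define LEFT     ((i + N - 1) % N)  /* i's left neighbour  */
#define RIGHT    ((i + 1) % N)      /* i's right neighbour */
#define THINKING 0
#define HUNGRY   1                  /* trying to get forks */
#define EATING   2

int state[N];                       /* state of each philosopher */
static sem_t mutex;                 /* protects the state array */
static sem_t s[N];                  /* one semaphore per philosopher */

void init_table(void) {
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++) {
        state[i] = THINKING;
        sem_init(&s[i], 0, 0);
    }
}

static void test_phil(int i) {
    /* philosopher i may eat if hungry and neither neighbour is eating */
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);            /* saved wakeup: i's wait will not block */
    }
}

void take_forks(int i) {
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test_phil(i);                   /* try to acquire both forks */
    sem_post(&mutex);
    sem_wait(&s[i]);                /* block if the forks were not acquired */
}

void put_forks(int i) {
    sem_wait(&mutex);
    state[i] = THINKING;
    test_phil(LEFT);                /* can the left neighbour eat now?  */
    test_phil(RIGHT);               /* can the right neighbour eat now? */
    sem_post(&mutex);
}
```

Note that non-adjacent philosophers, such as 0 and 2, can be in the EATING state at the same time, which is exactly the parallelism the single-mutex version gives up.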
Deadlocks
A process in a multiprogramming system is said to be in a deadlock if it is waiting for a particular event that will never occur. A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause. None of the processes can run, none of them can release any resources, and none of them can be awakened. The resources may be either logical or physical: printers, scanners, and CD drives are examples of physical resources, whereas files, semaphores, and monitors are logical resources.
In general, these four strategies are used for dealing with deadlocks:
- Just ignore the problem. Maybe if you ignore it, it will ignore you.
- Detection and recovery. Let deadlocks occur, detect them and take action.
- Dynamic avoidance by careful resource allocation.
- Prevention, by structurally negating one of the four conditions for deadlock mentioned below. (Tanenbaum, 2013)
Conditions for deadlock
A deadlock occurs if all four conditions mentioned below are present.
Mutual Exclusion
- Processes claim exclusive control of the resources they require.
- Each resource is either currently assigned to exactly one process or is available.
- Only one process may use a resource at a time.
Hold and Wait
- Processes hold resources already allocated to them while waiting for additional resources.
No preemption
- Resources granted previously cannot be forcefully taken away from the process.
- Resources can only be released voluntarily by the process holding them, after that process has completed its task.
Circular Wait
- Each process holds one or more resources that are requested by the next process in the chain.
- There must exist a set P = {P0, P1, P2, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0.
Recovery from deadlock
Recovery through preemption
A deadlock can be recovered from by temporarily taking a resource away from its current owner and giving it to another process. Whether it is possible to take a resource away from a process, have another process use it, and then give it back without the first process noticing depends highly on the nature of the resource.
Recovery through rollback
Processes are checkpointed periodically: the state of a process is written to a file so that the process can be restarted later. When a deadlock is detected, a process that owns a needed resource is rolled back to a point in time before it acquired that resource, by restarting it from one of its earlier checkpoints.
Recovery through killing process
One way to break a deadlock is to kill a process in the cycle. If that does not help, more processes can be killed until the cycle is broken. Alternatively, a process not in the cycle can be chosen as a victim in order to release its resources.
Bibliography
Classical IPC Problems. (2013, October 26). Retrieved from OSINFOBLOG: http://www.osinfoblog.com/post/104/classical-ipc-problems/
Tanenbaum, A. S. (2013). Modern Operating Systems. Delhi: PHI Learning Private Limited.
Lesson
Process Management
Subject
Operating System
Grade
IT