
Deadlocks in Operating Systems

In this post, we are going to learn about deadlocks in modern operating systems and how to prevent them.

We will learn:

  • how resources get deadlocked
  • the four conditions that need to be true for deadlocks to occur
  • the four methods to prevent deadlocks from occurring

What is a deadlock in Operating Systems?

The technical definition of a deadlock: A set of processes is said to be in a deadlock if each process in the set is waiting for an event that only another process in the set can cause.

  • Consider a scenario where you and a friend decide to eat something.
  • To serve yourselves, you pick up a bowl but don’t get a spoon; your friend, on the other hand, picks up a spoon but doesn’t get a bowl.
  • Now, both of you have a resource (bowl or spoon) and require exactly one more resource (spoon or bowl) to eat the dessert.
  • Neither you nor your friend is ready to give their resource to the other person.
  • So both of you stand there waiting for each other indefinitely, and this is what we call a deadlock.

A deadlock is a situation where two or more processes are blocked because each one is holding a resource and waiting for another resource held by some other process. In networked and multi-user systems, most resources are shared, and the situation only gets worse when more than two processes end up in a deadlock.

If you remember, the Operating System is responsible for managing resources, so it naturally becomes the responsibility of the OS when two processes fight over the same resource. That is why managing deadlocks is an integral part of an OS.

Conditions for Resource Deadlocks in an OS

There are four conditions which need to be true to cause a resource deadlock:

  1. Mutual Exclusion: Mutual exclusion says that a resource can be used by only one process at a time.
  2. Hold and Wait: This is a condition where a process is holding at least one resource and is waiting for other resources.
  3. No Preemption: If a process is holding a resource, it cannot be taken back from it unless the process itself releases the resource explicitly.
  4. Circular Wait: A set of processes wait for each other’s resources in a circular fashion. For example, P1 is waiting for a resource held by P2, P2 is waiting for one held by P3, and P3 is in turn waiting for one held by P1.

All four of these conditions must be true for a resource deadlock to occur. If even one of the conditions is false, then there is no resource deadlock.
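
To make these four conditions concrete, here is a minimal Python sketch (an illustration, not code from any real OS) that recreates the bowl-and-spoon scenario with two threads and two locks. Each lock can be held by only one thread (mutual exclusion), each thread holds one lock while waiting for the other (hold and wait), a lock cannot be snatched back (no preemption), and the two waits form a cycle (circular wait).

```python
import threading, time

bowl = threading.Lock()    # stands in for the bowl from the analogy above
spoon = threading.Lock()   # stands in for the spoon

def you():
    with bowl:                      # hold the bowl (mutual exclusion, hold and wait)
        time.sleep(0.1)             # widen the race window so the deadlock shows up
        with spoon:                 # wait for the spoon, which the friend holds
            print("you eat")

def friend():
    with spoon:                     # hold the spoon
        time.sleep(0.1)
        with bowl:                  # wait for the bowl -> circular wait, no preemption
            print("friend eats")

t1 = threading.Thread(target=you)
t2 = threading.Thread(target=friend)
t1.start(); t2.start()
t1.join(); t2.join()                # with the sleeps in place, this usually never returns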

Using graphs to visually check for deadlocks in an OS

[Image: visual representation of resource deadlocks]

In the above image, resources are represented by squares and processes by circles. When an arrow points from a resource (square) to a process (circle), it means that the process is currently using that resource, as shown in figure 1. In figure 2, the arrow points from the process to the resource, which means the process is waiting for that resource to become free. In figure 3, these edges form a cycle, and we have arrived at a deadlock.

How to prevent Deadlocks in OS?

There are four methods to prevent deadlocks.

  1. Ignorance, The Ostrich Algorithm
  2. Deadlock Detection and Recovery
  3. Deadlock Avoidance
  4. Deadlock Prevention

The Ostrich Algorithm in Deadlocks

This algorithm is the laziest way a problem can be handled. It gets its name from the ostrich and the way it sticks its head in the sand and ignores everything that’s happening around it. Not necessarily a life pro tip, but it works in computers.

While highly qualified mathematicians are against this approach, because they want to dig deep into what causes the deadlock and eliminate it entirely, others find it far more tolerable.

Pros of ignorance in deadlocks?

No extra work is required. When the system gets stuck, restart it. This is viable only when deadlocks occur rarely.

Cons of ignorance in deadlocks?

If deadlocks happen often, then ignoring them is not a good solution. You’ll have to invest some time in dealing with them properly, using any of the next three methods.

Deadlock Detection and Recovery

There are two different types of resource deadlocks, and they need to be programmed differently.

Single resource: There is only one resource of each type. This means that multiple desktops share one scanner, one printer, and so on. We use resource graphs to detect a deadlock in this scenario.

Multiple resources: There is more than one resource of each type. This means there are multiple printers for multiple desktops. We need two matrices, the current allocation matrix and the request matrix, to detect a deadlock in this scenario.
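
For the multiple-resource case, the detection check works directly on those two matrices: repeatedly look for a process whose outstanding requests can all be met from what is currently available, pretend it runs to completion and releases everything it holds, and flag whatever remains as deadlocked. Here is a hedged Python sketch of that idea; the matrices and the available vector are made-up numbers, purely for illustration.

```python
# Hedged sketch of matrix-based deadlock detection. The numbers below are
# illustrative only: 3 processes, 3 resource types.
allocation = [[0, 1, 0],   # resources currently held by each process
              [2, 0, 0],
              [3, 0, 3]]
request    = [[0, 0, 0],   # resources each process is still asking for
              [2, 0, 2],
              [0, 0, 1]]
available  = [0, 0, 0]     # free units of each resource type

def detect_deadlock(allocation, request, available):
    n, m = len(allocation), len(available)
    work = available[:]                     # units we could hand out right now
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for p in range(n):
            if not finished[p] and all(request[p][j] <= work[j] for j in range(m)):
                # p's request can be satisfied: assume it runs and releases everything
                for j in range(m):
                    work[j] += allocation[p][j]
                finished[p] = True
                progress = True
    return [p for p in range(n) if not finished[p]]   # the deadlocked processes

print(detect_deadlock(allocation, request, available))  # [1, 2] for these made-up numbers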

We will take a look at a resource deadlock with just one resource.

Detecting deadlock with one resource of each type

This is a simple problem. Let’s take an example of multiple processes and resources in this scenario.

There are five processes and five resources in this case. Processes are in circles, and resources are in squares.

To detect a deadlock, if all resource types have only one instance, then we can use a graph called wait-for-graph, which is a variant of the resource-allocation graph.

Here, vertices represent processes, and a directed edge from P1 to P2 indicates that P1 is waiting for a resource held by P2. Like in the case of a resource allocation graph, a cycle in a wait-for-graph indicates a deadlock. So the system can maintain a wait-for-graph and check for cycles periodically to detect any deadlocks.

In the above graph, pick any node and follow the arrows to see if you can come back to the same node. For example, let’s pick the process P2.

  • P2 is directed towards R5, which means it needs that resource (R5)
  • R5 is directed towards P4, which means P4 is currently holding the resource R5.
  • P4 is directed towards R2, which means it’s waiting for resource R2.
  • R2 is held by P1
  • P1 is waiting for R1
  • And finally, R1 is held by P2, our initial node.

Since we were able to come back to our initial node, the graph contains a cycle, and the processes on that cycle are deadlocked.
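
The same walkthrough can be automated. The sketch below (an illustration in Python, not any particular OS API) builds the wait-for-graph implied by the example, with an edge from each waiting process to the process holding the resource it wants, and runs a depth-first search to look for a cycle; P3 and P5 are assumed to have no outstanding requests.

```python
wait_for = {
    "P1": ["P2"],   # P1 waits for R1, which P2 holds
    "P2": ["P4"],   # P2 waits for R5, which P4 holds
    "P3": [],       # assumed idle in this example
    "P4": ["P1"],   # P4 waits for R2, which P1 holds
    "P5": [],       # assumed idle in this example
}

def has_cycle(graph):
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:          # back on the current path: found a cycle
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph[node]):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(node) for node in graph)

print(has_cycle(wait_for))   # True: P1 -> P2 -> P4 -> P1, so these processes are deadlocked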

The wait-for-graph cannot be used if there are multiple instances of a resource, as a cycle may not imply a deadlock in that case. Instead, we can use an algorithm similar to Banker’s algorithm to detect deadlock, which we’ll discuss later on.

After the detection of a deadlock, we need to recover from it. Following are some ways to recover from a deadlock:

  1. Process Termination: To eliminate a deadlock, we can simply kill one or more processes. For this, we use one of two methods:
  • Abort all the processes involved in the deadlock.
  • Abort one process at a time until the deadlock is eliminated (a sketch of this loop follows the list).
  2. Resource Preemption: To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. To preempt means to take resources away from a process. This method raises three issues:
  • Selecting a victim: We must determine which resources and which processes are to be preempted, and in what order, so as to minimize the cost.
  • Rollback: We must decide what should be done with the process whose resources were preempted. One simple idea is a total rollback, which means aborting the process and restarting it.
  • Starvation: It may happen that the same process is always picked as the victim. As a result, that process will never complete its designated task. This situation is called starvation and must be avoided.
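
As a rough illustration, the “abort one process at a time” recovery loop can be sketched as below; detect_deadlock(), abort(), and cost() are hypothetical helpers standing in for whatever the system actually provides.

```python
def recover(detect_deadlock, abort, cost):
    # Keep killing the cheapest victim until no deadlock cycle remains.
    while True:
        deadlocked = detect_deadlock()       # e.g. the processes on a wait-for cycle
        if not deadlocked:
            break                            # deadlock eliminated
        victim = min(deadlocked, key=cost)   # pick the victim that is cheapest to redo
        abort(victim)                        # preempt its resources and roll it back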

Deadlock Avoidance

To avoid deadlocks, we can make use of prior knowledge about the usage of resources by processes, including resources available, resources allocated, future requests, and future releases by processes. Most deadlock avoidance algorithms need every process to tell in advance the maximum number of resources of each type that it may need. Based on all information, we may decide if a process should wait for a resource or not, and thus avoid chances for the circular wait.

If a system is already in a safe state, we can try to keep it away from unsafe states and thereby avoid deadlock; once the system reaches an unsafe state, deadlock can no longer be guaranteed to be avoided. A system is in a safe state if it is not deadlocked and there exists some order, called a safe sequence, in which every process can be granted its maximum resource demand and run to completion. Deadlock avoidance algorithms refuse to allocate a resource to a process if doing so would push the system into an unsafe state.

A resource allocation graph is generally used to avoid deadlocks. If there are no cycles in the resource allocation graph, then there are no deadlocks. If there are cycles, there may be a deadlock. If there is only one instance of every resource, then a cycle implies a deadlock.

Banker’s Algorithm to avoid Resource Deadlocks

Banker’s algorithm was devised by Edsger Dijkstra. It is a scheduling algorithm and can be seen as an extension of the deadlock detection algorithm.

How does the banker’s algorithm work?

  • Consider a banker going around town giving loans to customers.
  • If granting a loan to a customer leads to an unsafe state, it is denied.
  • If granting a loan to a customer leads to a safe state, it is carried out.

In this analogy, the loan is the number of resources, the customers are processes, and the banker is the operating system.

Now imagine a scenario where the banker has ten units of a loan (10 resources). There are four customers (processes), and they have all specified their maximum required units of the loan (resources).

Take a look at this initial table.

Process   Current resources   Maximum resources required
A         0                   6
B         0                   5
C         0                   4
D         0                   7

This is before the Operating System has granted any resources. The combined maximum demand is 22 resources, but the banker has only 10 to loan out. Luckily for us, these processes don’t need all of their resources at once, so we can allocate some resources to them as long as the system never slips into an unsafe state.

Now take a look at this table, where we have already allocated some resources: 8 resources, to be precise.

Process   Current resources   Maximum resources required
A         1                   6
B         1                   5
C         2                   4
D         4                   7

The banker still has two resources to give out. How is this still a safe state? Suppose process A asks for the remaining five resources it may need. The banker has only two left, so process A is delayed, and the banker keeps the two unallocated resources to itself while waiting for process C to finish running. The banker does not know whether process C will ask for more resources or free up the ones it holds, so it has to be ready in case C asks for the two resources it still needs to complete.

Once process C finishes and returns its loan, the banker has four resources in hand, and it can use them to see the other processes through to completion.

Now let’s look at an unsafe state, where the banker takes a risk.

Process   Current resources   Maximum resources required
A         1                   6
B         2                   5
C         2                   4
D         4                   7

The banker has allocated one more resource to process B and is now left with only one resource. Every process may still need at least two more resources to finish, so if they all ask for their full remaining demand, the banker cannot satisfy any of them and the system may deadlock. That’s why the banker refuses a request if granting it would lead to an unsafe state.
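
Putting the tables above into code, here is a hedged sketch of the banker’s safety check for a single resource type: the banker pretends to finish processes one at a time, always picking one whose worst-case remaining demand it can still cover, and the state is safe only if every process can be finished this way.

```python
# Hedged sketch of the banker's safety check, using the numbers from the tables above.
def is_safe(current, maximum, available):
    need = {p: maximum[p] - current[p] for p in current}   # worst-case remaining demand
    work = available
    finished = set()
    progress = True
    while progress:
        progress = False
        for p in current:
            if p not in finished and need[p] <= work:
                work += current[p]        # p can run to completion and return its loan
                finished.add(p)
                progress = True
    return len(finished) == len(current)  # safe iff every process can finish in some order

safe    = {"A": 1, "B": 1, "C": 2, "D": 4}   # the second table
unsafe  = {"A": 1, "B": 2, "C": 2, "D": 4}   # the third table
maximum = {"A": 6, "B": 5, "C": 4, "D": 7}

print(is_safe(safe,   maximum, available=10 - sum(safe.values())))    # True  (2 units left)
print(is_safe(unsafe, maximum, available=10 - sum(unsafe.values())))  # False (1 unit left)
```

For the last table the check fails with just one unit left, which is exactly why a banker following the algorithm would never have handed that extra unit to process B.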

Deadlock Prevention in Operating Systems

Deadlock prevention algorithms ensure that at least one of the necessary conditions (mutual exclusion, hold and wait, no preemption, and circular wait) does not hold true. We do this by facing each of the four conditions in turn, as the sections below show. However, most prevention algorithms have poor resource utilization and hence result in reduced throughput.

Facing the Mutual Exclusion condition

Let’s recall what the mutual exclusion condition states. Each resource is either assigned to exactly one process, or it is available. 

If no resource were ever assigned exclusively to a single process, we would never have deadlocks; in other words, if every resource could be shared by all processes, a deadlock could not arise. In practice, though, some resources cannot be shared: if two processes start printing on a shared printer together, the output becomes unreadable. The usual way around this is to let only one process, the printer daemon, actually request the printer. A daemon is a background process that handles all requests and provides services.

The daemon is strictly programmed to start printing when the complete output file is ready and not when it is being edited.

So how do we face the mutual exclusion condition? We avoid assigning a resource unless it is strictly necessary, and we try to make sure that as few processes as possible can actually claim the resource.

Facing the Hold and Wait condition

Let’s recall what the hold and wait condition was. Processes holding resources that were granted earlier can request new resources.

This condition is slightly more promising to attack. If a process is allocated all the resources it requires before it starts, it won’t need to ask for more later, and there can be no deadlock. If some resource is unavailable at that point, the process simply waits until everything it needs can be granted, and only then does it start running.

The immediate problem with this is that many processes don’t know what resources they will need until they have already started running. If they knew which resources were required, we could use the banker’s algorithm. Another issue is that resources stay locked up until the process has completed its run, which is not an optimal use of the resources.

Some mainframe batch systems require the programmer to list all the resources a job will need at the beginning of the job. This wastes the users’ time and ties up a lot of resources, but it does prevent deadlocks.

A slightly different way to break the hold and wait condition is to make a process drop all of the resources it is currently holding whenever it requests a new one; it can then try to grab all of the required resources again in one go, as sketched below. This is also not an optimal use of processing power.
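
Here is a minimal Python sketch of this all-or-nothing style (the printer and scanner locks are just illustrative stand-ins): the process either grabs every lock it needs before doing any work, or releases whatever it managed to get, backs off, and tries again, so it never holds some resources while waiting for others.

```python
import threading, time

printer, scanner = threading.Lock(), threading.Lock()   # illustrative resources

def acquire_all(*locks):
    # Keep trying until every lock is obtained in one sweep.
    while True:
        taken = []
        for lock in locks:
            if lock.acquire(blocking=False):   # try to take it without waiting
                taken.append(lock)
            else:
                for held in taken:             # could not get everything:
                    held.release()             # give back what we grabbed...
                time.sleep(0.01)               # ...back off, then start over
                break
        else:
            return                             # no break: we now hold every lock

acquire_all(printer, scanner)
# ... use the printer and the scanner ...
scanner.release()
printer.release()
```

Note that this trades deadlock for retries: under heavy contention a process may loop for a while before it gets everything, which is exactly the wasted processing power mentioned above.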

Facing the No Preemption condition

Let’s recall what the no preemption condition was. Any resource that was granted to a process cannot be forcibly taken away from it; the process must explicitly state that it no longer requires the resource and free it up.

Facing this condition is also a possible way to avoid deadlocks. Take the printer as an example resource. If a process has been assigned the printer and is in the middle of printing its output, forcibly taking the printer away because a needed plotter isn’t readily available is tricky at best and impossible at worst. However, some resources can be virtualized to avoid this situation: spooling printer output to the disk and allowing only the printer daemon access to the real printer eliminates deadlocks over the printer, although it may create one over disk space instead.

Then again, not all resources can be virtualized; some have to be locked exclusively by processes and cannot be taken away (preempted), which leaves the potential for a deadlock.

Facing the Circular Wait condition

Let’s recall what the circular wait condition was. There must be a circular chain of two or more processes, where each process is holding a resource that is needed by the next process in the chain.

To prevent the circular wait, resources can be numbered globally, and each process is required to request resources only in increasing order of those numbers, as in the sketch below. Imposing this ordering adds complexity of its own and may also lead to poor resource utilization.
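
Here is a hedged Python sketch of that idea, with an invented global numbering over a few example resources; because every process acquires its locks in increasing order of these numbers, a circular chain of waits can never form.

```python
import threading

# Invented global ordering: every resource gets a fixed rank.
ORDER = {"scanner": 1, "printer": 2, "plotter": 3}
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    ordered = sorted(names, key=ORDER.get)     # lowest-numbered resource first
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(held):
    for name in reversed(held):                # release in the opposite order
        locks[name].release()

# Every caller ends up locking the scanner before the printer, so no cycle can form.
held = acquire_in_order("printer", "scanner")
# ... use the resources ...
release_all(held)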
