Problem 1. (20 points)
Five batch jobs, A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process-switching overhead. (a) Round Robin (b) Priority Scheduling (c) First Come First Served (run in order 10, 6, 2, 4, 8) (d) Shortest Job First. For (a), assume that the system is multiprogramming and that each job gets its fair share of the CPU. For (b) through (d), assume that only one job at a time runs, until it finishes. All jobs are completely CPU bound.
Answer 1 :
(a) For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of the 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. (b) For priority scheduling, B is run first; after 6 minutes it is finished. The other jobs run in priority order E, A, C, D and finish at 14, 24, 26, and 30, for an average of (6 + 14 + 24 + 26 + 30)/5 = 20 minutes. (c) If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. (d) Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.
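These averages can be checked with a short simulation. Below is a minimal sketch: the job names, runtimes, and priorities come from the problem; the equal-share model of round robin matches the "fair share" assumption in part (a); the helper names are ours.

```python
from fractions import Fraction

runtimes   = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}
priorities = {"A": 3, "B": 5, "C": 2, "D": 1, "E": 4}  # 5 = highest

def mean_turnaround_serial(order):
    """Run jobs one at a time in the given order; all arrive at t = 0."""
    t, finish = 0, []
    for job in order:
        t += runtimes[job]
        finish.append(t)
    return sum(finish) / len(finish)

def mean_turnaround_rr():
    """Idealized round robin: every unfinished job gets an equal CPU share."""
    remaining = dict(runtimes)
    t, finish = Fraction(0), []
    while remaining:
        share = min(remaining.values())   # work left on the next job to finish
        t += share * len(remaining)       # wall time to give each job `share`
        for job in list(remaining):
            remaining[job] -= share
            if remaining[job] == 0:
                del remaining[job]
                finish.append(t)
    return sum(finish) / len(finish)

mean_turnaround_rr()                                                        # (a) 22
mean_turnaround_serial(sorted(runtimes, key=priorities.get, reverse=True))  # (b) 20.0
mean_turnaround_serial(["A", "B", "C", "D", "E"])                           # (c) 19.2
mean_turnaround_serial(sorted(runtimes, key=runtimes.get))                  # (d) 14.0
```

Sorting by priority (descending) reproduces the order B, E, A, C, D used in part (b); sorting by runtime reproduces the SJF order in part (d).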
Problem 2. (10 points) (Silberschatz - Textbook - Problem 5.6)
What advantage is there in having different time quantum sizes on different levels of a multilevel queue system?
Answer 2 :
Processes that need more frequent servicing, such as interactive processes like editors, can be placed in a queue with a small time quantum. Processes that do not need frequent servicing can be placed in a queue with a larger quantum, which requires fewer context switches to complete the same processing and thus makes more efficient use of the computer.
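The context-switch saving is easy to quantify. A rough illustration (the burst length, quantum sizes, and helper name here are hypothetical, not from the text):

```python
def preemptions(burst_ms, quantum_ms):
    """Times a CPU-bound job of `burst_ms` is preempted under a given quantum.

    The job is preempted after every full quantum except the slice in which
    it finishes.
    """
    full_slices = burst_ms // quantum_ms
    return full_slices - 1 if burst_ms % quantum_ms == 0 else full_slices

# A 100 ms batch job in a short-quantum vs a long-quantum queue:
preemptions(100, 1)    # 99 context switches
preemptions(100, 25)   # 3 context switches
```

A batch job demoted to the long-quantum queue costs the system far fewer context switches, while interactive jobs in the short-quantum queue still get frequent turns.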
Problem 3. (20 points) (Silberschatz - Textbook - Problem 5.8)
Many CPU scheduling algorithms are parameterized. For example, the RR algorithm requires a parameter to indicate the time slice. Multilevel feedback queues require parameters to define the number of queues, the scheduling algorithm for each queue, the criteria used to move processes between the queues, and so on. These algorithms are thus really sets of algorithms (for example, the set of RR algorithms for all time slices, and so on). One set of algorithms may include another (for example, the FCFS algorithm is the RR algorithm with an infinite time quantum). What (if any) relation holds between the following pairs of sets of algorithms? a. Priority and SJF b. Multilevel feedback queues and FCFS c. Priority and FCFS d. RR and SJF
Answer 3 :
a. SJF is a special case of priority scheduling in which the shortest job has the highest priority. b. The lowest level of a multilevel feedback queue is FCFS. c. FCFS is a special case of priority scheduling in which the highest priority goes to the job that has been waiting longest. d. None.
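Relations (a) and (c) can be made concrete: both SJF and FCFS fall out of one generic priority scheduler once the priority key is chosen appropriately. A small sketch (the job bursts reuse Problem 1's values; the arrival times are illustrative):

```python
jobs    = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}   # predicted CPU bursts
arrival = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}    # hypothetical arrival times

# SJF as priority scheduling: the priority is the predicted burst, smallest first.
sjf_order = sorted(jobs, key=jobs.get)       # ['C', 'D', 'B', 'E', 'A']

# FCFS as priority scheduling: the priority is the arrival time, earliest first.
fcfs_order = sorted(jobs, key=arrival.get)   # ['A', 'B', 'C', 'D', 'E']
```

Only the key function changes; the dispatch mechanism is identical, which is exactly what "one set includes the other" means.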
Problem 4. (15 points) (Silberschatz - Textbook - Problem 7.13)
Consider the following snapshot of a system:
Process | Allocation (A B C D) | Max (A B C D)
P0      | 0 0 1 2              | 0 0 1 2
P1      | 1 0 0 0              | 1 7 5 0
P2      | 1 3 5 4              | 2 3 5 6
P3      | 0 6 3 2              | 0 6 5 2
P4      | 0 0 1 4              | 0 6 5 6

Available = 1 5 2 0
Answer the following questions using Banker's Algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (0,4,2,0), can the request be granted immediately?
Answer 4 :
a. Since Need = Max - Allocation, the content of Need is:

Process | Need (A B C D)
P0      | 0 0 0 0
P1      | 0 7 5 0
P2      | 1 0 0 2
P3      | 0 0 2 0
P4      | 0 6 4 2
b. Yes. The sequence <P0, P2, P1, P3, P4> satisfies the safety requirement (other safe sequences also exist).
c. Yes. Since
i. Request = (0,4,2,0) <= Available = (1,5,2,0), and
ii. Request = (0,4,2,0) <= Need_1 = (0,7,5,0),
iii. the new system state after the allocation is made is
Process | Allocation | Max     | Need
P0      | 0 0 1 2    | 0 0 1 2 | 0 0 0 0
P1      | 1 4 2 0    | 1 7 5 0 | 0 3 3 0
P2      | 1 3 5 4    | 2 3 5 6 | 1 0 0 2
P3      | 0 6 3 2    | 0 6 5 2 | 0 0 2 0
P4      | 0 0 1 4    | 0 6 5 6 | 0 6 4 2

Available = 1 1 0 0
and the sequence <P0, P2, P1, P3, P4> satisfies the safety requirement, so the request can be granted immediately.
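All three parts can be checked mechanically. Below is a minimal sketch of the Banker's safety check (the snapshot data is from the problem; the variable and function names are ours):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: look for an order in which every process can
    obtain its remaining need, run to completion, and release what it holds."""
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish; it releases its whole allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append("P%d" % i)
                progress = True
    return all(finished), sequence

# Snapshot from the problem.
allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
maximum    = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
available  = [1,5,2,0]

# (a) Need = Max - Allocation
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

# (b) Is the current state safe?
safe, seq = is_safe(available, allocation, need)

# (c) Grant P1's request (0,4,2,0) provisionally, then re-check safety.
request = [0,4,2,0]
available2  = [av - r for av, r in zip(available, request)]
allocation2 = [row[:] for row in allocation]
allocation2[1] = [a + r for a, r in zip(allocation[1], request)]
need2 = [row[:] for row in need]
need2[1] = [n - r for n, r in zip(need[1], request)]
safe2, seq2 = is_safe(available2, allocation2, need2)
```

The greedy scan may report a different safe sequence than the one quoted above; any sequence it finds is a valid witness of safety.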
Problem 5. (20 points)
Below are listed several ways of handling deadlocks: (a) One criterion to use in evaluating different approaches to deadlock is which permits the greatest concurrency - in other words, which allows the most threads to make progress without waiting when there isn't deadlock. Give a rank order from 1 to 6 for each of the ways of handling deadlock listed below, where 1 allows the greatest degree of concurrency and 6 allows the least. If two approaches offer roughly equal concurrency, indicate that. (b) Another criterion to use in evaluating deadlock algorithms is efficiency - in other words, which requires the least CPU overhead. Rank order the approaches according to efficiency, with 1 being the most efficient, assuming that encountering deadlock is a very rare event. Again, indicate if two approaches are equally efficient. Does your ordering change if deadlocks happen frequently? (i) Banker's algorithm (ii) detect deadlock and kill thread, releasing all resources (iii) reserve all resources in advance (iv) restart thread and release all resources if thread needs to wait (v) resource ordering (vi) detect deadlock and roll back thread's actions
Answer 5 : Handling deadlock
Algorithms to handle deadlock differ in the amount of concurrency they provide and in the runtime costs associated with the algorithms.
5.a Concurrency ratings
In order from most-concurrent to least, there is a rough partial order on the deadlock-handling algorithms:
1. Detect deadlock and kill thread, releasing all resources (ii); detect deadlock and roll back thread's actions (vi). Neither of these algorithms limits concurrency before deadlock occurs, since they rely on runtime checks rather than static restrictions. Their effects after deadlock is detected are harder to characterize: they still allow lots of concurrency (in some cases they enhance it), but the computation may no longer be sensible or efficient.
2. Banker's algorithm (i); resource ordering (v). These algorithms cause more unnecessary waiting than the previous two by restricting the range of allowable computations. The banker's algorithm prevents unsafe allocations (a proper superset of deadlock-producing allocations), and resource ordering restricts allocation sequences so that threads have fewer options as to whether they must wait or not.
3. Reserve all resources in advance (iii). This algorithm allows less concurrency than the previous two, but it is less pathological than the worst one. By reserving all resources in advance, threads have to wait longer and are more likely to block other threads while they work, so the system-wide execution is in effect more linear.
4. Restart thread and release all resources if thread needs to wait (iv). This algorithm is the strangest, since so much of its concurrency will be useless repetition; because threads compete for execution time, this algorithm also prevents useful computation from advancing. Thus, it has the dubious distinction of allowing both the most and the least concurrency, depending on the definition of concurrency.
5.b Efficiency ratings
In order from most-efficient to least, there is a rough partial order on the deadlock-handling algorithms:
1. Reserve all resources in advance (iii); resource ordering (v). These algorithms are most efficient because they involve no runtime overhead. Notice that this is a result of the same static restrictions that made them rank poorly in concurrency.
2. Banker's algorithm (i); detect deadlock and kill thread, releasing all resources (ii). These algorithms involve runtime checks on allocations that are roughly equivalent: the banker's algorithm performs a search to verify safety that is polynomial in the number of threads and resource types, and deadlock detection performs a cycle-detection search that is O(n) in the length of resource-dependency chains. Resource-dependency chains are bounded by the number of threads, the number of resources, and the number of allocations.
3. Detect deadlock and roll back thread's actions (vi). This algorithm performs the same runtime check discussed previously but also entails a logging cost that is O(n) in the total number of memory writes performed.
4. Restart thread and release all resources if thread needs to wait (iv). This algorithm is grossly inefficient for two reasons. First, because threads run the risk of being restarted, they have a low probability of completing. Second, they compete with other restarting threads for finite execution time, so the entire system advances towards completion slowly, if at all.
This ordering does not change when deadlock is more likely. The algorithms in the first group incur no additional runtime penalty because they statically disallow deadlock-producing execution. The second group incurs a minimal, bounded penalty when deadlock occurs. The algorithm in the third tier incurs the unrolling cost, which is O(n) in the number of memory writes performed between checkpoints. The status of the final algorithm is questionable because the algorithm does not allow deadlock to occur; it might be the case that unrolling becomes more expensive, but the behavior of this restart algorithm is so variable that accurate comparative analysis is nearly impossible.
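The cycle-detection search mentioned for deadlock detection can be sketched briefly. The sketch below assumes the single-instance-resource case where each thread waits on at most one other thread, so the wait-for graph is a set of chains and cycles; the thread names are hypothetical:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {thread: thread_it_waits_on}.

    Each thread waits on at most one other, so from any starting thread we just
    follow the chain; revisiting a node within one walk means a cycle exists.
    """
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:
            if node in seen:
                return True   # the walk looped back: deadlock cycle
            seen.add(node)
            node = wait_for[node]
    return False

has_deadlock({"T1": "T2", "T2": "T3"})              # False: chain ends at T3
has_deadlock({"T1": "T2", "T2": "T3", "T3": "T1"})  # True: T1 -> T2 -> T3 -> T1
```

Killing or rolling back any one thread on a detected cycle breaks the cycle, which is exactly what approaches (ii) and (vi) do after this check fires.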
Problem 6. (10 points)
A computer has 6 tape drives, with n processes competing for them. Each process may need 2 drives. For which values of n is the system deadlock free?
Answer 6 :
With three processes, each one can have two drives. With four processes, the worst-case distribution of drives is (2, 2, 1, 1), allowing the first two processes to finish. With five processes, the worst-case distribution is (2, 1, 1, 1, 1), which still allows the first one to finish. With six processes, each holding one tape drive and wanting another one, we have a deadlock. Thus, for n < 6 the system is deadlock free.
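The worst-case argument generalizes: deadlock is impossible exactly when, with every process holding one drive fewer than its maximum, at least one drive is still free. A one-line check (the helper name and the max_need parameter are ours):

```python
def deadlock_free(n_processes, total_drives=6, max_need=2):
    """Deadlock-freedom check for identical single-type resources.

    Worst case: every process holds max_need - 1 drives and wants one more.
    If a drive is still free, some process can take it, finish, and release
    its drives, so no deadlock state is reachable.
    """
    worst_case_held = n_processes * (max_need - 1)
    return worst_case_held < total_drives

[n for n in range(1, 10) if deadlock_free(n)]  # [1, 2, 3, 4, 5]
```

With max_need = 2 and 6 drives this reduces to n < 6, matching the case analysis above.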