Operating Systems — Complete Answer Guide

📊 Paper Analysis — Topic Coverage & Mark Distribution
CO1 — OS STRUCTURES & PROCESSES
Topics: Monolithic vs Layered, Microkernel, System calls, fork/exec/wait, Process states, Threads (user/kernel), Zombie process
Questions: FAT Q1, PY-A (DC-OS), PY-B (Greenhouse), Q1 (Airline), Q2 (Pthreads), PY-3rd Q1 (Modular kernel), PY-3rd Q2 (fork+threads)
Typical marks: ~20–30 per paper
CO2 — CPU SCHEDULING
Topics: FCFS, SJF, SRTF, Priority (preemptive), Round Robin, MLFQ, Gantt charts, WT/TAT/RT
Questions: FAT Q5 (SaaS SJF/PP/FCFS), PY-C (MLFQ), 2nd Paper Q3 (MLFQ Priority+RR), PY-3rd Q3 (SRTF), Desktop Q3 (Priority NP + RR 5ms)
Typical marks: ~15–30 per paper
CO3 — SYNCHRONIZATION & DEADLOCKS
Topics: Peterson's, Mutex/Semaphore, Monitors, Producer-Consumer, Readers-Writers, Banker's Algorithm, RAG, WFG, deadlock conditions
Questions: FAT Q2, Q7, PY-D, PY-E, PY-H, 2nd Paper Q4 (Hospital), Q5 (Banker's), Q4 (Students lab deadlock), Q5 (OR sync), PY-3rd Q4 (drone race), Q5 (readers-writers)
Typical marks: ~20–30 per paper
CO4 — MEMORY MANAGEMENT
Topics: Contiguous allocation (Best/Worst/First fit), Paging, Page replacement (LRU, OPT, FIFO), EAT, Logical vs Physical address
Questions: FAT Q6 (OPT/LRU/EAT), PY-I (address bits+cache), 2nd Paper Q6 (Worst fit), Q8 (LRU/OPT/FIFO/EAT), Netflix Q8 (LRU/OPT/FIFO), PY-3rd Q8 (12KB LRU/OPT+EAT)
Typical marks: ~15–30 per paper
CO5 — FILE SYSTEMS, DISK & VIRTUALIZATION
Topics: Contiguous/Linked/Indexed file allocation, Disk scheduling (FCFS, SSTF/LOOK, C-SCAN, C-LOOK), Journaling, DFS, Virtualization, Type-1/Type-2 hypervisors
Questions: FAT Q3, Q4, Q8 disk, PY-F, PY-G, 2nd Paper Q7 (Hypervisors), Q9 (SSTF/C-SCAN/C-LOOK), College server Q7 (file alloc), Netflix Q9 (SSTF/C-LOOK/LOOK), PY-3rd Q6 (VMware)
Typical marks: ~15–25 per paper
HIGH-FREQUENCY EXAM PATTERNS
Scheduling Gantt charts appear in EVERY paper — always compute WT, TAT, RT
Banker's Algorithm (Need matrix + safety check + request grant) — 3–6 marks every paper
Page replacement (LRU + OPT + EAT) — always 10–15 marks
Disk scheduling (FCFS + LOOK/C-SCAN + line chart) — appears in every paper
File allocation methods (all 3 types) — common in CO5 questions
Synchronization pseudocode (mutex/semaphore) — Producer-Consumer or Readers-Writers
📋 FAT Exam — Final Assessment Test
1
Process Management & System Calls
Q01 · CO1/K2 · [10 marks]
Part a 4 marks — A child process "A" completed execution via exit() but still has an entry in the process table. Parent is sleeping/unaware. What happens to the child process? If kill() is executed by the parent, will it work? Justify with diagram.
Part b 6 marks — A user types a command in the terminal. The OS creates a new process, loads the program, schedules it, and manages completion. Explain how the OS handles this execution and discuss the related system calls with a proper diagram.
Part a — Zombie Process
KEY POINT 1
What is a Zombie Process?
When a child calls exit(), it releases all memory and resources — but its PCB (Process Control Block) entry remains in the process table until the parent collects the exit status via wait(). This "dead but not gone" state is called a Zombie Process (state: Z).
ELABORATION
The OS keeps the skeleton PCB entry so the parent can read the child's exit code. Until then, the PID is occupied. If too many zombies accumulate, the PID space fills up and no new processes can be created — a real system threat.
KEY POINT 2
Does kill() Work on a Zombie? — NO
kill() will NOT remove it. A zombie has no running code and no scheduler context — it is already dead, so no signal, not even SIGKILL (-9), can terminate it further. The only ways to remove a zombie are:
1. Parent calls wait() / waitpid()
2. Parent dies → init/systemd adopts and reaps the zombie
PROCESS STATE DIAGRAM — ZOMBIE
Parent ── fork() ──▶ Child "A" (RUNNING) ── exit() ──▶ ZOMBIE 💀 (PCB remains; kill() ❌)
Parent 😴 sleeping, never calls wait() → zombie persists; once parent calls wait() → PCB removed ✓
REAL-WORLD ANALOGY
An employee resigns but HR hasn't processed the paperwork yet. They don't work anymore, but their employee ID still exists in the system. That's a zombie — occupying a slot, doing nothing.

Part b — OS Process Execution: fork → exec → schedule → exit
KEY POINT 1
fork() — Duplicate the Shell
Shell calls fork() → OS duplicates the shell's PCB, assigns a new PID to the child, and places it in the Ready Queue. Child is an exact copy of parent at this point.
KEY POINT 2
exec() — Replace with New Program
Child calls execve(program_path) → OS replaces the child's memory image with the new program's text, data, heap, and stack. The program's main() begins execution from the top.
KEY POINT 3
Scheduler → Dispatcher → CPU
The scheduler picks the process (using its policy: CFS, priority, etc.) and the dispatcher performs a context switch — saves current process state to its PCB, loads this process's PCB registers, hands over CPU. State: READY → RUNNING.
KEY POINT 4
wait() → Reap → Done
The shell (parent) blocks on wait(). When child calls exit(), OS signals parent. Parent's wait() returns, collects exit code, OS removes zombie PCB. Shell prints the next prompt.
PROCESS EXECUTION FLOW
Shell (types cmd) → fork() creates new PCB → execve() loads program → Scheduler: READY → RUNNING → exit() → zombie → wait() reaps
PROCESS STATE TRANSITIONS: NEW → READY → RUNNING → TERMINATED (with RUNNING → WAITING on I/O, then back to READY)
REAL-WORLD CONNECTION
Every time you run python script.py in a terminal, this exact sequence occurs. The bash shell is the parent, your script is the child. The shell freezes (wait()) until your script finishes, then shows the $ prompt again.

2
Critical Section — Peterson's Solution
Q02 · CO3/K2 · [10 marks]
Part a 2 marks — Explain why a single shared variable is insufficient for coordination. Illustrate with a suitable execution sequence.
Part b 8 marks — Design a correct two-variable software coordination mechanism and write the complete pseudocode for both threads. Justify correctness by discussing mutual exclusion, progress, and bounded waiting.
Part a — Why a Single Variable Fails
KEY POINT
Single turn Variable → Progress Violated
A single turn variable enforces strict alternation — Thread 0 can only enter CS when turn=0, Thread 1 only when turn=1. If Thread 1 never wants the CS, Thread 0 is blocked forever even though the CS is free. This violates the Progress requirement.
Failure Scenario (turn = 0, Thread 0's turn):
1. T0 enters CS ✓, exits, sets turn = 1
2. T1 doesn't want CS (stays in remainder)
3. T0 wants CS again → checks: turn == 1? → BLOCKED FOREVER
CS is FREE but T0 can't enter → PROGRESS VIOLATED ❌
ANALOGY
A bathroom with a single "your turn" token passed between two people. If one person never needs to go, the other can never go again — even though the bathroom is empty.

Part b — Peterson's Solution (Two Variables)
KEY POINT
Two Shared Variables: flag[ ] + turn
flag[i] = true → Thread i wants to enter the CS
turn = j → "I yield priority to j if they also want in"
// ── Shared Variables ──────────────────────────────────
boolean flag[2] = {false, false}   // flag[i]: does thread i want CS?
int turn = 0                       // whose turn if both want in?

// ── Thread 0 ──────────────────────────────────────────
Thread_0():
    while (true):
        flag[0] = true             // "I want to enter"
        turn = 1                   // "But you go first if you want"
        while (flag[1] == true AND turn == 1):
            wait                   // busy-wait
        // ─── CRITICAL SECTION ───
        flag[0] = false            // "Done, you can go"
        // ─── REMAINDER SECTION ──

// ── Thread 1 ──────────────────────────────────────────
Thread_1():
    while (true):
        flag[1] = true
        turn = 0
        while (flag[0] == true AND turn == 0):
            wait
        // ─── CRITICAL SECTION ───
        flag[1] = false
        // ─── REMAINDER SECTION ──
KEY POINT — CORRECTNESS PROOF
All Three Requirements Met
✅ Mutual Exclusion: Suppose both threads are past their while-loops with both flags set to true. Thread 0 requires turn ≠ 1 and Thread 1 requires turn ≠ 0 — but turn holds a single value, so it cannot satisfy both. The single turn variable is the deciding tiebreaker.

✅ Progress: If Thread 1 doesn't want CS (flag[1]=false), Thread 0's while condition is immediately false → enters CS freely. No forced alternation.

✅ Bounded Waiting: A thread waits at most one complete CS execution by the other thread. After Thread 1 exits and sets flag[1]=false, Thread 0's while condition fails and it enters immediately.
REAL-WORLD CONNECTION
Peterson's Solution is the conceptual foundation for mutex locks in every multi-threaded application. When two Java threads modify a shared ArrayList, the JVM uses similar mechanisms. The 2003 Northeast US blackout (50 million lost power) was partly caused by a race condition — a missing mutex in alarm software caused critical alerts to be silently dropped.

3
File System Recovery & Distributed Storage
Q03 · CO5/K1 · [10 marks]
Part a 5 marks — Explain how journaling and soft updates help in file system recovery. Which technique do you recommend for faster recovery after crashes?
Part b 5 marks — Discuss the role of log-structured file systems and distributed file systems in improving data reliability and performance. Which is more suitable for handling data across multiple systems?
Part a — Journaling vs Soft Updates
KEY POINT 1
Journaling — Write-Ahead Logging
Before modifying the actual file system, the OS writes a journal entry describing the upcoming change. On crash: just replay or discard uncommitted journal entries. No full disk scan (fsck) needed. Recovery is O(journal size), not O(disk size).
THREE MODES
Journal mode — logs data + metadata (safest, slowest)
Ordered mode — data written before metadata is logged (default in ext4)
Writeback mode — only metadata logged (fastest, small data-loss risk)
KEY POINT 2
Soft Updates — Safe Write Ordering
No separate log — instead, the OS enforces a strict dependency ordering on metadata writes so that even a mid-crash leaves the filesystem in a structurally safe state. A quick background scan fixes any minor inconsistencies after reboot.
Aspect | Journaling | Soft Updates
Recovery Speed | ⚡ Very fast (replay log) | Moderate (background scan)
Write Overhead | Extra journal writes | Ordering delays
Implementation | Moderate complexity | Very complex
Used In | ext4, NTFS, XFS, HFS+ | FreeBSD UFS
Recommendation: Journaling — dramatically faster recovery (just replay/discard the journal), simpler, and battle-tested in production systems worldwide.
ANALOGY
Journaling is like a chef writing down every recipe step before cooking. If the kitchen catches fire mid-recipe, they know exactly where to restart. Soft updates is like arranging ingredients in a safe order so the half-cooked meal won't be poisonous — but you still need a cleanup round.

Part b — LFS vs Distributed File System
KEY POINT 3
Log-Structured File System (LFS)
All writes (data + metadata) are appended sequentially to a log at the end of disk — never overwrites. Treats the entire disk as a circular buffer. Excellent write throughput (sequential I/O). An inode map tracks current locations. Works on a single machine.
KEY POINT 4
Distributed File System (DFS)
Files span multiple machines. Clients access files transparently over the network. Provides replication (fault tolerance), scalability, and location transparency. Examples: NFS, HDFS, Google GFS.
DFS is more suitable for multi-system data handling. LFS improves single-node performance only. DFS provides replication (survives machine failure), unified namespace, and scales to petabytes across thousands of servers.
REAL-WORLD CONNECTION
Google Drive and Dropbox are consumer DFS. HDFS (Hadoop) stores data across thousands of nodes with 3× replication — if a node dies, data is auto-recovered. Netflix streams video from distributed storage across multiple data centers for zero-downtime resilience.

4
Server Virtualization
Q04 · CO5/K3 · [10 marks]
Part a 5 marks — Describe how virtualization can help consolidate ten servers (each at 20% capacity), reduce costs, and improve resource utilization.
Part b 5 marks — What challenges might arise during migration to a virtualized environment, and how can they be mitigated?
Part a — Consolidation via Hypervisor
KEY POINT 1
What Virtualization Does
A Type-1 hypervisor (VMware ESXi, KVM, Hyper-V) runs directly on hardware and hosts multiple Virtual Machines (VMs). Each VM gets a slice of real CPU, RAM, and disk. 10 servers × 20% load = only 2 servers' worth of actual computation — all workloads fit on 2–3 physical machines.
CONCRETE BENEFITS
  • Cost: 10 → 2–3 physical servers = 70–80% savings on hardware, electricity, and cooling
  • Resource Pooling: Idle VM CPU/RAM is dynamically allocated to busier VMs — utilization jumps from 20% to 70–80%
  • Isolation: Each VM has its own OS — a crash in one doesn't affect others
  • Live Migration: VMs can be moved between physical hosts with zero downtime (vMotion)
  • Snapshots: Roll back to a known-good state in seconds
Part b — Migration Challenges & Mitigations
Challenge | Risk | Mitigation
App Compatibility | App may not run under the hypervisor | Test in a staging VM before cutover
Performance Overhead | Hypervisor adds 5–15% CPU overhead | Use hardware-assisted virtualization (Intel VT-x / AMD-V)
Network Reconfiguration | IP/MAC changes break services | Pre-configure virtual network with the same IPs
Data Transfer Time | Moving TBs of data causes downtime | Live migration — copy memory pages while running
Security / VM Escape | Attacker escapes VM to host | Regular hypervisor patching, network segmentation
REAL-WORLD CONNECTION
AWS EC2, Azure VMs, and Google Compute Engine all run on Type-1 hypervisors (Xen/KVM). A single AWS physical server runs dozens of customer VMs. This is why cloud computing is cheaper than dedicated hardware at scale — you're sharing a hypervisor with thousands of other customers while staying fully isolated.
5
CPU Scheduling — Three Algorithms
Q05 · CO2/K4 · [15 marks]
A multi-tenant SaaS cloud server handles five customer workloads:
Process | Arrival (ms) | Burst (ms) | Priority
P1 | 0 | 7 | 2
P2 | 2 | 4 | 1 ← highest
P3 | 3 | 9 | 4 ← lowest
P4 | 5 | 3 | 3
P5 | 6 | 5 | 2
Part a 5 marks — Preemptive Priority: Gantt chart, WT and TAT per process. Evaluate response time for P2 and starvation risk for P3.
Part b 5 marks — SJF Non-Preemptive: Gantt chart, WT and TAT. Compare with part (a) in terms of avg WT, avg TAT, and fairness.
Part c 5 marks — FCFS: Gantt chart, WT and TAT. Evaluate how long CPU-bound P3 delays shorter processes P2 and P4.
// KEY FORMULAS — used in all scheduling problems
Turnaround Time: TAT = Completion Time (CT) − Arrival Time (AT)
Waiting Time:    WT  = TAT − Burst Time (BT)
Average WT / TAT = Σ values ÷ number of processes
Part a — Preemptive Priority (PP)
ALGORITHM
Always run the process with the lowest priority number (highest importance). Preempt whenever a new arrival has a smaller number.
STEP-BY-STEP TRACE
t=0 Only P1 arrived → Run P1 (priority 2)
t=2 P2 arrives (priority 1 > running P1's 2) → Preempt P1! P1 has 5ms left. Run P2.
t=6 P2 done. Ready: P1(pri2, 5ms left), P3(pri4), P4(pri3). P5 not arrived. → Run P1(pri2)
t=6 P5 arrives (priority 2 = P1). P1 continues (no preemption on tie — arrival order)
t=11 P1 done. Ready: P5(pri2), P4(pri3), P3(pri4) → Run P5(pri2)
t=16 P5 done → Run P4(pri3)
t=19 P4 done → Run P3(pri4) → finishes at t=28
GANTT CHART — Preemptive Priority
| P1 (0–2) | P2 (2–6) | P1 (6–11) | P5 (11–16) | P4 (16–19) | P3 (19–28) |
Process | AT | BT | CT | TAT = CT−AT | WT = TAT−BT
P1 | 0 | 7 | 11 | 11 | 4
P2 | 2 | 4 | 6 | 4 | 0
P3 | 3 | 9 | 28 | 25 | 16
P4 | 5 | 3 | 19 | 14 | 11
P5 | 6 | 5 | 16 | 10 | 5
Average | | | | 12.8 ms | 7.2 ms
Response time for P2 = 0 ms — it preempts P1 the instant it arrives at t=2, ideal for the highest-priority workload. ⚠️ Starvation Risk for P3: P3 (priority 4, lowest importance) waits 16 ms before ever getting the CPU. In a real system, aging must be used — gradually increase the priority of long-waiting processes to prevent indefinite postponement.

Part b — SJF Non-Preemptive
ALGORITHM
When CPU is free, pick the shortest burst from all arrived processes. Runs to completion.
STEP-BY-STEP TRACE
t=0 Only P1 → Run P1 (BT=7). No choice.
t=7 P1 done. Arrived: P2(4), P3(9), P4(3), P5 not yet. Shortest = P4(3) → Run P4.
t=10 Arrived: P2(4), P3(9), P5(5). Shortest = P2(4) → Run P2.
t=14 Arrived: P3(9), P5(5). Shortest = P5(5) → Run P5.
t=19 Only P3 left → Run P3(9) → done at t=28.
GANTT CHART — SJF Non-Preemptive
| P1 (0–7) | P4 (7–10) | P2 (10–14) | P5 (14–19) | P3 (19–28) |
Process | AT | BT | CT | TAT | WT
P1 | 0 | 7 | 7 | 7 | 0
P2 | 2 | 4 | 14 | 12 | 8
P3 | 3 | 9 | 28 | 25 | 16
P4 | 5 | 3 | 10 | 5 | 2
P5 | 6 | 5 | 19 | 13 | 8
Average | | | | 12.4 ms | 6.8 ms ← Best
✅ SJF gives the optimal average waiting time among non-preemptive algorithms. Shorter jobs get excellent service (WT: P4 = 2 ms, P1 = 0 ms). Downside: P3 still waits 16 ms — SJF can cause starvation for long processes if short ones keep arriving.

Part c — FCFS
GANTT CHART — FCFS (arrival order: P1→P2→P3→P4→P5)
| P1 (0–7) | P2 (7–11) | P3 (11–20) | P4 (20–23) | P5 (23–28) |
Process | AT | BT | CT | TAT | WT
P1 | 0 | 7 | 7 | 7 | 0
P2 | 2 | 4 | 11 | 9 | 5
P3 | 3 | 9 | 20 | 17 | 8
P4 | 5 | 3 | 23 | 18 | 15
P5 | 6 | 5 | 28 | 22 | 17
Average | | | | 14.6 ms ← Worst | 9.0 ms ← Worst
⚠️ Convoy Effect: P3 (burst=9) runs between P2 and P4. Short jobs P4 (BT=3) and P5 (BT=5) wait 15ms and 17ms respectively behind a long job. FCFS is unfair to short, interactive processes — worst average WT of all three algorithms.
Algorithm | Avg WT | Avg TAT | Starvation? | Best For
Preemptive Priority | 7.2 ms | 12.8 ms | Yes (low-priority) | Real-time, critical tasks
SJF Non-Preemptive | 6.8 ms ✓ | 12.4 ms ✓ | Yes (long jobs) | Batch processing
FCFS | 9.0 ms ✗ | 14.6 ms ✗ | No (arrival order) | Simple, equal workloads

6
Page Replacement & Effective Access Time
Q06 · CO4/K3 · [15 marks]
Part i — Logical=1000 bytes, Physical=300 bytes, Page size=100 bytes. Reference string: 4,3,2,4,1,7,2,5,2,6,1,3
a 4 marks — Optimal (OPT): replace page not used longest in future
b 4 marks — LRU: replace page not used longest in past
c 4 marks — Compare performance, and examine the relationship between frame count and hit ratio
Part ii 3 marks — Process P1: 32 pages, 80% in memory. Memory access=2000ns, fault overhead=1000ns. Calculate Effective Access Time (EAT).
// SETUP
Frames = Physical space ÷ Page size = 300 / 100 = 3 frames
Pages  = Logical space ÷ Page size = 1000 / 100 = 10 pages
Reference string: 4, 3, 2, 4, 1, 7, 2, 5, 2, 6, 1, 3   (12 accesses)
Part i-a — Optimal (OPT) Page Replacement
ALGORITHM
Evict the page that won't be needed for the longest time in the future
Theoretical minimum faults — used as a benchmark. Not implementable in real OS (requires future knowledge). Best-case baseline.
OPT — FRAME TRACE (3 frames)
Ref | Frames after | Result | Action
4 | 4 – – | MISS | Cold start
3 | 4 3 – | MISS | Cold start
2 | 4 3 2 | MISS | Cold start
4 | 4 3 2 | HIT |
1 | 1 3 2 | MISS | Evict 4 (never used again)
7 | 1 7 2 | MISS | Evict 3 (next use: pos 12 — farthest)
2 | 1 7 2 | HIT |
5 | 1 5 2 | MISS | Evict 7 (never used again)
2 | 1 5 2 | HIT |
6 | 1 6 2 | MISS | Evict 5 (never used again; 2 is tied — either works)
1 | 1 6 2 | HIT |
3 | 3 6 2 | MISS | Evict 1 (never used again)
Note: at the 5th reference, OPT evicts page 4 (never reused), not page 3 — page 3 recurs at position 12.
OPT → Page Faults = 8 | Hits = 4
Part i-b — LRU Page Replacement
ALGORITHM
Evict the page that was least recently used (oldest last-use timestamp)
Best practical algorithm. Uses recent past as approximation of near future. Implemented via counter or stack in hardware.
LRU — FRAME TRACE (3 frames)
Ref | Frames after | Result | Evicted (LRU)
4 | 4 – – | MISS | –
3 | 4 3 – | MISS | –
2 | 4 3 2 | MISS | –
4 | 4 3 2 | HIT | –
1 | 4 1 2 | MISS | 3 (last used at pos 2)
7 | 4 1 7 | MISS | 2 (pos 3)
2 | 2 1 7 | MISS | 4 (pos 4)
5 | 2 5 7 | MISS | 1 (pos 5)
2 | 2 5 7 | HIT | –
6 | 2 5 6 | MISS | 7 (pos 6)
1 | 2 1 6 | MISS | 5 (pos 8)
3 | 3 1 6 | MISS | 2 (pos 9)
LRU → Page Faults = 10 | Hits = 2
Part i-c — Comparison
Algorithm | Faults | Hits | Feasible? | Verdict
OPT | 8 | 4 | No (needs future knowledge) | Theoretical benchmark
LRU | 10 | 2 | Yes (counter/stack in hardware) | Best practical choice
FRAME COUNT RELATIONSHIP
More Frames → Fewer Faults (for OPT and LRU)
For both OPT and LRU, increasing the number of frames monotonically reduces or maintains the page fault rate — more pages fit in memory, fewer misses. This is not always true for FIFO (Belady's Anomaly), but OPT and LRU are immune to it. Adding a 4th frame would reduce faults in both algorithms.

Part ii — Effective Access Time (EAT)
KEY POINT
EAT blends the fast path (hit) and slow path (fault) by their probabilities
// GIVEN
Total pages = 32
Hit ratio (p) = 80% = 0.80   (pages in main memory)
Fault ratio (1 − p) = 0.20
Memory access (m) = 2000 ns
Fault overhead (f) = 1000 ns

// FORMULA
EAT = p × m + (1 − p) × (m + f)
    = 0.80 × 2000 + 0.20 × (2000 + 1000)
    = 1600 + 0.20 × 3000
    = 1600 + 600
EAT = 2200 ns

// INTERPRETATION: EAT is 10% higher than pure RAM access (2000 ns)
// because 20% of accesses suffer the expensive fault path
INTUITION
80% of the time your data is in RAM (fast, 2000ns). 20% of the time the CPU must go to disk (slow, 3000ns total). The weighted average is 2200ns. This is why maximizing the number of frames in RAM is critical — going from 80% to 95% hit rate would drop EAT to ~2050ns.

7
Banker's Algorithm — Deadlock Avoidance
Q07 · CO3/K4 · [15 marks]
Cloud system: R1 (CPU)=10, R2 (Memory)=5, R3 (I/O)=7. Four processes:
Process | Alloc (R1 R2 R3) | Max (R1 R2 R3)
P1 | 2 1 1 | 5 2 2
P2 | 1 0 2 | 3 2 3
P3 | 3 1 1 | 6 1 3
P4 | 1 1 2 | 4 2 4
Part a 3 marks — Compute the Need matrix for all processes. Explain how it is derived and why it is critical in deadlock avoidance.
Part b 3 marks — Determine whether the system is currently in a safe state. If safe, provide a safe sequence. If not, justify why.
Part c 3 marks — P2 requests (1, 1, 0) additional resources. Should the system grant it? Perform a step-by-step safety check before and after allocation.
Part d 3 marks — In real cloud systems, resource demands are dynamic and unpredictable. Discuss limitations of Banker's Algorithm and propose an alternative for real-time systems.
Part e 3 marks — Suggest two improvements to the system's resource allocation strategy to minimize deadlock risk while maintaining high utilization.
Part a — Need Matrix
FORMULA
Need[i] = Max[i] − Allocation[i]
The Need matrix represents the maximum additional resources each process might still request. It is the worst-case future demand. The Banker's Algorithm uses it to simulate whether granting a request still leaves the system in a state where ALL processes can eventually complete.
// Total Allocated = sum of all allocations
R1: 2+1+3+1 = 7 | R2: 1+0+1+1 = 3 | R3: 1+2+1+2 = 6
// Available = Total Instances − Total Allocated
Available = [10−7, 5−3, 7−6] = [3, 2, 1]
Process | Need R1 (Max−Alloc) | Need R2 | Need R3
P1 | 5−2 = 3 | 2−1 = 1 | 2−1 = 1
P2 | 3−1 = 2 | 2−0 = 2 | 3−2 = 1
P3 | 6−3 = 3 | 1−1 = 0 | 3−1 = 2
P4 | 4−1 = 3 | 2−1 = 1 | 4−2 = 2
Part b — Safety Check
ALGORITHM
Find a process whose Need ≤ Available. Grant it, let it finish, recover its allocation. Repeat.
Step | Process | Need ≤ Available? | Available after it frees its Allocation
1 | P1 | [3,1,1] ≤ [3,2,1] ✓ | [3,2,1] + [2,1,1] = [5,3,2]
2 | P2 | [2,2,1] ≤ [5,3,2] ✓ | [5,3,2] + [1,0,2] = [6,3,4]
3 | P3 | [3,0,2] ≤ [6,3,4] ✓ | [6,3,4] + [3,1,1] = [9,4,5]
4 | P4 | [3,1,2] ≤ [9,4,5] ✓ | [9,4,5] + [1,1,2] = [10,5,7]
✅ SAFE STATE — Safe Sequence: P1 → P2 → P3 → P4
ANALOGY
The Banker's Algorithm is like a bank that only gives loans if it can still guarantee every customer eventually gets their full requested amount. It never over-commits — always keeps enough in reserve to serve everyone in some order.
Part c — P2 Requests (1, 1, 0)
Request = [1,1,0]
Check 1: Request ≤ Need[P2] = [2,2,1]? YES ✓
Check 2: Request ≤ Available = [3,2,1]? YES ✓
Tentatively grant: Available = [2,1,1] | Alloc[P2] = [2,1,2] | Need[P2] = [1,1,1]
Safety check on the tentative state:
P1: Need [3,1,1] ≤ [2,1,1]? NO — skip for now
P2: Need [1,1,1] ≤ [2,1,1] ✓ → Available = [2,1,1] + [2,1,2] = [4,2,3]
P1: Need [3,1,1] ≤ [4,2,3] ✓ → Available = [6,3,4]
P3: Need [3,0,2] ≤ [6,3,4] ✓ → Available = [9,4,5]
P4: Need [3,1,2] ≤ [9,4,5] ✓ → Available = [10,5,7]
✅ Still SAFE — Safe Sequence: P2 → P1 → P3 → P4 → GRANT the request
Part d — Limitations of Banker's Algorithm
KEY POINT
Why Banker's Fails in Real Cloud Systems
  • Requires pre-declared max needs — cloud workloads are dynamic; max demand is unknown at process creation
  • High computational cost — O(n² × r) per request (n=processes, r=resource types) — too slow at cloud scale (thousands of containers)
  • Fixed process count assumed — new VMs/containers spawn dynamically; Banker's can't handle joins mid-run
  • Static resource pool assumed — resources that fail, are hot-plugged, or belong to heterogeneous classes (GPU vs CPU tiers) break the fixed-matrix model

Better Alternative for Real-Time: Use deadlock detection + recovery — allow all resource requests, run a detection algorithm periodically (or on timeout), then kill/rollback the deadlocked process. This trades the overhead of constant safety checking for occasional but fast recovery.
Part e — Two Improvements
IMPROVEMENT 1
Global Resource Ordering (Prevents Circular Wait)
Enforce a fixed global ordering on resource types (R1 < R2 < R3). All processes must request resources in this ascending order only. This eliminates circular wait — one of the four Coffman deadlock conditions — so deadlock becomes structurally impossible.
IMPROVEMENT 2
Resource Time-To-Live (TTL) with Preemption
Grant resources but attach a TTL timer. If a process holds resources beyond the TTL without making progress (no CPU usage), the OS forcibly reclaims them and places the process in a retry queue. Prevents indefinite hold-and-wait without rejecting requests upfront.

8
Disk Scheduling — Warehouse Robot
Q08 · CO5/K3 · [15 marks]
Robotic arm at shelf 750. Shelves 0–1999. Request queue: 93, 1800, 340, 1200, 600, 40, 1700, 820
Part a 4 marks — FCFS: Service in exact arrival order.
Part b 4 marks — LOOK: Move toward higher shelves; at the last request reverse to the lowest pending. (Note: question says "jumps to lowest pending" — this is C-LOOK behaviour. Interpreted as LOOK here.)
Part c 4 marks — C-SCAN: Move in one direction; when the end is reached, jump to the lowest request and continue upward.
Part d 3 marks — Compare all strategies: seek time, fairness, starvation. Which suits heavy continuous load?
// SETUP
Initial head = 750
Requests = [93, 1800, 340, 1200, 600, 40, 1700, 820]
Sorted   = [40, 93, 340, 600, 820, 1200, 1700, 1800]
// FORMULA: Total movement = Σ |current − next|
Part a — FCFS (Arrival Order)
FCFS — Head Movement (start = 750), service in arrival order:
750 → 93 → 1800 → 340 → 1200 → 600 → 40 → 1700 → 820
750→93: 657 | 93→1800: 1707 | 1800→340: 1460 | 340→1200: 860
1200→600: 600 | 600→40: 560 | 40→1700: 1660 | 1700→820: 880
Total FCFS = 8,384 shelves
Part b — LOOK (Move UP to last request, then reverse DOWN)
LOOK — Head Movement (start = 750):
750 → 820 → 1200 → 1700 → 1800 → (reverse) → 600 → 340 → 93 → 40
UP: 750→820 (70) → 1200 (380) → 1700 (500) → 1800 (100)
DOWN: 1800→600 (1200) → 340 (260) → 93 (247) → 40 (53)
Total LOOK = 2,810 shelves
Part c — C-SCAN (Move UP, jump to lowest, continue UP)
C-SCAN — Head Movement (start = 750):
750 → 820 → 1200 → 1700 → 1800 → (jump) → 40 → 93 → 340 → 600
UP: 750→820 (70) → 1200 (380) → 1700 (500) → 1800 (100)
JUMP: 1800→40 (1760, counted) then UP: 40→93 (53) → 340 (247) → 600 (260)
Total C-SCAN = 3,370 shelves
(Here the head reverses at the last request, 1800, rather than travelling to shelf 1999 — the jump-to-lowest-request reading stated in the question.)
Part d — Comparison
Algorithm | Total Movement | Fairness | Starvation? | Best For
FCFS | 8,384 | Arrival order | Possible | Light / unpredictable load
LOOK | 2,810 ✓ | Good (bi-directional) | Low | Mixed workloads
C-SCAN | 3,370 | Best uniform wait | Very low | Heavy continuous load ✓
C-SCAN is best for heavy continuous request loads. By always sweeping in one direction, it guarantees a maximum wait time of one full sweep cycle — no request at the "far end" waits extremely long. LOOK has slightly lower total movement but C-SCAN's uniform wait distribution is superior for fairness under sustained load.
REAL-WORLD CONNECTION
Linux's default I/O scheduler was long CFQ (Completely Fair Queuing) — conceptually similar to C-SCAN with fairness properties. CFQ was removed in Linux 5.0; modern kernels offer BFQ (Budget Fair Queuing) for HDDs and mq-deadline or none for SSDs/NVMe — flash has no physical head to move, so seek-order scheduling gains little.

📚 First Past Year Paper
9
DC-OS — Drone Control Operating System
Past Year · CO1/K3 · [10 marks]
Part i 6 marks — Explain how the core OS services should be designed and coordinated to meet the functional and reliability requirements of DC-OS (process management, file/data handling, inter-drone communication, error detection).
Part ii 4 marks — Illustrate with examples showing how these services interact to support real-time drone operations.
KEY POINT 1
Real-Time Process Management — Navigation Tasks
DC-OS must use preemptive real-time scheduling (EDF — Earliest Deadline First, or fixed-priority like RMS). Navigation tasks must always preempt lower-priority tasks. Each drone runs multiple threads with strict deadlines: navigation (hard real-time), telemetry (soft real-time), logging (best-effort).
INTERACTION EXAMPLE
Mid-flight: collision sensor fires → OS raises a high-priority interrupt → navigation thread preempts active mission-log writer → executes avoidance maneuver within 10ms deadline → resumes logging. Without real-time scheduling, the drone could crash before avoidance code runs.
KEY POINT 2
File and Data Handling — Flight Logs
DC-OS uses a circular ring buffer in RAM for real-time telemetry (GPS, altitude, speed). fsync() periodically flushes to NAND flash. Journaling protects against mid-flight power loss — on reboot, the last valid log state is recovered instantly without full-disk scan.
KEY POINT 3
Secure Inter-Drone Communication
DC-OS implements message-passing IPC (kernel-managed message queues) between drones. Messages are encrypted at the OS security layer. Authentication tokens prevent rogue drones from injecting false coordinates. The kernel validates all incoming messages before delivering to the navigation thread.
KEY POINT 4
Error Detection and Recovery
A watchdog timer process monitors all critical threads. If the navigation thread fails to check in within its period, OS triggers automatic recovery: throttle reduction → return-to-home mode. Hardware exceptions (SIGFPE, SIGSEGV) are caught and handled gracefully — land safely rather than crash. Checkpointing saves drone state every N seconds for mid-flight recovery.
REAL-WORLD CONNECTION
DJI drones run a custom RTOS that implements exactly this. NASA's Mars Ingenuity helicopter used VxWorks RTOS — a certified real-time OS with hard deadlines and watchdog timers. Every millisecond of scheduling jitter on Mars could mean a crash with no recovery possible.

10
Greenhouse Temperature Monitor — Multi-Threading
Past Year · CO1/K3 · [10 marks]
Multi-threaded temperature monitoring. Multiple sensor threads continuously update a shared temperature array. A controller thread computes the average.
Part i 6 marks — Write a C program creating multiple sensor threads and a controller thread. Sensor threads update a shared array; controller reads it to compute the average.
Part ii 4 marks — Discuss the potential problems that occur without synchronization mechanisms.
Part i — C Program with Mutex Synchronization
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_ZONES 4

float temps[NUM_ZONES] = {0};                         // SHARED array
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

float read_sensor(int zone);                          // hardware read, provided elsewhere

// ── Sensor Thread: updates one zone's reading ───────────
void* sensor_thread(void* arg) {
    int zone = *(int*)arg;
    while (1) {
        float reading = read_sensor(zone);
        pthread_mutex_lock(&lock);        // ACQUIRE LOCK
        temps[zone] = reading;            // update shared array
        pthread_mutex_unlock(&lock);      // RELEASE LOCK
        sleep(1);                         // wait before next read
    }
    return NULL;
}

// ── Controller Thread: computes average across all zones ─
void* controller_thread(void* arg) {
    while (1) {
        float sum = 0;
        pthread_mutex_lock(&lock);        // ACQUIRE LOCK
        for (int i = 0; i < NUM_ZONES; i++)
            sum += temps[i];
        pthread_mutex_unlock(&lock);      // RELEASE LOCK
        printf("Avg Temp: %.2f°C\n", sum / NUM_ZONES);
        sleep(5);
    }
    return NULL;
}

int main() {
    pthread_t sensors[NUM_ZONES], ctrl;
    int ids[NUM_ZONES];
    for (int i = 0; i < NUM_ZONES; i++) {
        ids[i] = i;
        pthread_create(&sensors[i], NULL, sensor_thread, &ids[i]);
    }
    pthread_create(&ctrl, NULL, controller_thread, NULL);
    pthread_join(ctrl, NULL);
}
Part ii — Problems Without Synchronization
PROBLEM 1
Race Condition → Wrong Average
Sensor thread writes temps[2] byte-by-byte. Controller reads it mid-write. It reads a partially-written, garbled float value. The computed average is meaningless — potentially causing the system to incorrectly trigger cooling/heating.
PROBLEM 2
Data Inconsistency → Mixed Old/New Values
Controller reads temps[0] before a sensor update but temps[1] after. The average mixes readings from different time points — a logically inconsistent snapshot.
PROBLEM 3
Memory Visibility → Stale CPU Cache
Modern CPUs cache values in L1/L2. Without a memory barrier (provided by mutex), the controller's CPU core may see a stale cached copy of the array that hasn't been flushed to main memory by the sensor thread's core.
REAL-WORLD CONSEQUENCE
The 2003 Northeast US blackout (about 50 million people lost power, some for up to two days) was partly caused by a race condition in the alarm system FirstEnergy used. The bug made alarms silently drop, so operators saw a normal-looking screen while the grid was deteriorating. Proper synchronization of the alarm event queue would have prevented the silent failure.

11
Online Learning Platform — Multilevel Feedback Queue
Past Year · CO2/K3 · [10 marks]
Three queues: Q1 (highest, RR 5ms) → Q2 (RR 10ms) → Q3 (FCFS, lowest). All new tasks enter Q1. If a task exhausts its time slice, it is demoted to the next queue.
Task | Arrival (ms) | Burst (ms) | Type
T1 | 0 | 12 | Live Streaming
T2 | 3 | 8 | Quiz Evaluation
T3 | 1 | 20 | Analytics
T4 | 6 | 6 | Live Streaming
T5 | 12 | 10 | Quiz Evaluation
Part i 6 marks — Draw the Ready Queues and Gantt Chart.
Part ii 4 marks — Calculate average Turnaround Time, Waiting Time, and Response Time.
KEY CONCEPT
MLFQ — Automatic CPU-bound vs Interactive Classification
Short/interactive tasks (finish within Q1's 5ms) stay high-priority. CPU-hungry tasks (T3=20ms) naturally sink to Q3 over time. This is the OS automatically adapting to process behavior — no manual priority assignment needed.
STEP-BY-STEP TRACE
Q1 round (5ms slices — Q1 is always served before Q2/Q3):
0–5: T1 runs in Q1 (uses full 5ms) → demoted to Q2, 7ms remaining
5–10: T3 runs in Q1 (uses full 5ms) → demoted to Q2, 15ms remaining
10–15: T2 runs in Q1 (has 8ms, uses 5ms) → demoted to Q2, 3ms remaining
15–20: T4 runs in Q1 (has 6ms, uses 5ms) → demoted to Q2, 1ms remaining
20–25: T5 (arrived at t=12, waiting in Q1) uses its 5ms slice → demoted to Q2, 5ms remaining. Q1 outranks Q2, so T5 must run before any demoted task.

Q2 round (10ms slices, FIFO in demotion order) — Q1 now empty:
25–32: T1 runs in Q2 (7ms remaining < 10ms slice) → finishes at t=32
32–42: T3 runs in Q2 (uses full 10ms) → demoted to Q3, 5ms remaining
42–45: T2 runs in Q2 (3ms left) → finishes at t=45
45–46: T4 runs in Q2 (1ms left) → finishes at t=46
46–51: T5 runs in Q2 (5ms left) → finishes at t=51

Q3 (FCFS):
51–56: T3 runs in Q3 (5ms left) → finishes at t=56
GANTT CHART — Multilevel Feedback Queue
| T1·Q1 | T3·Q1 | T2·Q1 | T4·Q1 | T5·Q1 | T1·Q2 | T3·Q2 | T2·Q2 | T4·Q2 | T5·Q2 | T3·Q3 |
0       5       10      15      20      25      32      42      45      46      51      56
// Formulas: TAT = CT − AT | WT = TAT − BT | RT = First CPU time − AT
Task | AT | BT | CT | TAT | WT | RT
T1 | 0 | 12 | 32 | 32 | 20 | 0
T2 | 3 | 8 | 45 | 42 | 34 | 7
T3 | 1 | 20 | 56 | 55 | 35 | 4
T4 | 6 | 6 | 46 | 40 | 34 | 9
T5 | 12 | 10 | 51 | 39 | 29 | 8
Avg TAT = (32+42+55+40+39)/5 = 41.6 ms
Avg WT = (20+34+35+34+29)/5 = 30.4 ms
Avg RT = (0+7+4+9+8)/5 = 5.6 ms
REAL-WORLD CONNECTION
Linux's CFS and Windows NT scheduler both use feedback queues internally. Interactive processes (keyboard, mouse clicks) use tiny CPU bursts so they stay in high-priority queues — this is why your mouse cursor stays responsive even while a heavy video encode job runs in the background.

12
Core Banking — Race Condition & Synchronization
Past Year · CO2/K3 · [10 marks]
Process A handles deposits. Process B handles withdrawals. Both operate on shared account_balance. Initial balance = ₹10,000. A deposits ₹2,000, B withdraws ₹1,500.
Part i 5 marks — Develop pseudocode for both processes using an appropriate synchronization mechanism.
Part ii 2 marks — Explain how your solution ensures mutual exclusion, progress, and bounded waiting.
Part iii 3 marks — Trace with and without synchronization. Show the difference in results.
Part i — Synchronized Pseudocode
// SHARED VARIABLE + MUTEX
account_balance = 10000
mutex account_lock = UNLOCKED

// ── Process A: Deposit ──────────────────────────────────
Process_A(deposit_amt):
    lock(account_lock)                 // ENTER critical section
    temp = account_balance
    temp = temp + deposit_amt          // temp = 10000 + 2000 = 12000
    account_balance = temp
    unlock(account_lock)               // EXIT critical section

// ── Process B: Withdrawal ──────────────────────────────
Process_B(withdraw_amt):
    lock(account_lock)                 // ENTER critical section
    if account_balance >= withdraw_amt:
        temp = account_balance
        temp = temp - withdraw_amt     // temp = 12000 - 1500 = 10500
        account_balance = temp
    unlock(account_lock)               // EXIT critical section
Part ii — Correctness Justification
MUTUAL EXCLUSION
Mutex ensures only one process modifies balance at a time
The mutex lock prevents A and B from simultaneously reading and writing account_balance. Only one holds the lock — the other blocks at lock().
PROGRESS
Lock is released immediately after the balance update — never held indefinitely
BOUNDED WAITING
Each process waits at most one transaction by the other process
After A completes and releases the lock, B acquires it immediately. No process can "cut in line" more than once before the waiting process proceeds.
Part iii — Trace: With vs Without Sync
Step | WITHOUT Sync (Race Condition) | WITH Mutex (Correct)
1 | A reads balance = 10000 | A acquires lock
2 | B reads balance = 10000 (same!) | B tries lock → BLOCKED
3 | A computes 10000+2000 = 12000 | A reads 10000, computes 12000, writes back
4 | B computes 10000−1500 = 8500 | A releases lock
5 | A writes 12000 to balance | B acquires lock, reads 12000
6 | B writes 8500 ← overwrites A's deposit! | B computes 12000−1500 = 10500, writes back
Result | ₹8,500 ❌ Lost ₹2,000 deposit! | ₹10,500 ✓ Correct
REAL-WORLD CONNECTION
Every bank database uses ACID transactions (Atomicity, Consistency, Isolation, Durability) — the database-level equivalent of mutex + journaling. The 2012 Knight Capital trading firm lost $440 million in 45 minutes partly due to unsynchronized software state. A race condition in production financial systems is catastrophically expensive.

13
Smart Printing — Producer-Consumer (Bounded Buffer)
Past Year · CO3/K3 · [10 marks]
Document Generator (DG) and Smart Printer (SP) share a bounded print queue of size N.
Constraints: DG always produces 2 docs consecutively. SP always prints 4 docs consecutively. SP only starts when ≥4 docs are available. DG waits if buffer full. SP waits if insufficient docs.
Part i 3 marks — Identify the required synchronization primitives and shared variables with their initial values.
Part ii 7 marks — Write the pseudocode for both DG and SP processes.
Part i — Synchronization Primitives
PRIMITIVES NEEDED
3 Semaphores + 1 Mutex + 1 Counter
Variable | Type | Initial Value | Purpose
mutex | Semaphore (binary) | 1 | Mutual exclusion for buffer access
empty | Counting Semaphore | N | Number of empty slots in buffer
full | Counting Semaphore | 0 | Number of documents in buffer
batch_ready | Semaphore | 0 | Signals SP when ≥4 docs are available
count | Integer | 0 | Current number of docs in buffer
Part ii — Pseudocode
// ── Document Generator (DG) — Producer ──────────────────
Process_DG():
    while (true):
        doc1 = generate_document()
        doc2 = generate_document()     // always produce 2
        wait(empty); wait(empty)       // need 2 empty slots
        wait(mutex)                    // enter critical section
        add doc1 to buffer; in = (in+1) % N
        add doc2 to buffer; in = (in+1) % N
        count += 2
        signal(mutex)                  // exit critical section
        signal(full); signal(full)     // 2 new docs added
        if (count >= 4):               // notify SP if enough docs
            signal(batch_ready)

// ── Smart Printer (SP) — Consumer ───────────────────────
Process_SP():
    while (true):
        wait(batch_ready)              // block until ≥4 docs ready
        wait(full); wait(full)
        wait(full); wait(full)         // consume 4 docs
        wait(mutex)                    // enter critical section
        for i = 1 to 4:
            doc = buffer[out]
            out = (out+1) % N
            count -= 1
            print(doc)
        signal(mutex)                  // exit critical section
        signal(empty); signal(empty)
        signal(empty); signal(empty)   // 4 slots freed
REAL-WORLD CONNECTION
Windows Print Spooler is exactly this pattern — applications spool jobs (producers), the printer driver batches and sends to hardware (consumer). The OS manages the bounded buffer between them. The batch_ready semaphore represents the spooler's policy of only sending when a minimum page count is queued for efficiency.

14
File Allocation Methods
Past Year · CO5/K3 · [10 marks]
Disk: 32 blocks (0–31). Unavailable: 2, 3, 7, 8, 16, 17, 25. Files to store:
File | Size (blocks)
code | 3
data | 5
temp | 4
sys | 2
logs | 6
bin | 2
Part a 3 marks — Contiguous allocation. Assume suitable starting blocks. Draw the allocation diagram and File Allocation Table (FAT).
Part b 4 marks — Linked allocation. Assume appropriate starting and ending blocks.
Part c 3 marks — Indexed allocation. Assume suitable index block. Draw the index table.
// Available blocks (after removing 2, 3, 7, 8, 16, 17, 25):
[0, 1, 4, 5, 6, 9, 10, 11, 12, 13, 14, 15, 18, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 31]
Total available = 25 blocks | Total needed = 3+5+4+2+6+2 = 22 blocks ✓
Part a — Contiguous Allocation
KEY POINT
Files occupy consecutive blocks. FAT stores (start block, length).
Fast sequential access. Suffers from external fragmentation — free blocks scattered between unavailable ones mean we can't always find a long enough run. Files must be pre-allocated.
File | Start Block | Length | Blocks Used
sys | 0 | 2 | 0, 1
code | 4 | 3 | 4, 5, 6
data | 9 | 5 | 9, 10, 11, 12, 13
bin | 14 | 2 | 14, 15
temp | 18 | 4 | 18, 19, 20, 21
logs | 26 | 6 | 26, 27, 28, 29, 30, 31
⚠️ Blocks 22–24 wasted — 3 free blocks that are too fragmented for any file needing contiguous space. This is external fragmentation.
Part b — Linked Allocation
KEY POINT
Each block has a pointer to the next. Any free blocks usable — no fragmentation.
FAT stores (start, end). Eliminates fragmentation but random access is O(n) — must follow chain block-by-block. A broken pointer corrupts the entire file.
File | Start → End | Block Chain
code | 0 → 4 | 0 → 1 → 4 → NULL
data | 5 → 11 | 5 → 6 → 9 → 10 → 11 → NULL
temp | 12 → 15 | 12 → 13 → 14 → 15 → NULL
sys | 18 → 19 | 18 → 19 → NULL
logs | 20 → 26 | 20 → 21 → 22 → 23 → 24 → 26 → NULL
bin | 27 → 28 | 27 → 28 → NULL
Part c — Indexed Allocation
KEY POINT
One dedicated index block per file stores all data block addresses.
Supports direct (random) access to any block. FAT stores only the index block number. Small overhead (one block per file for the index), but for very large files, multi-level indexing is used.
File | Index Block | Data Blocks (from index)
sys | 0 | 1, 4
code | 5 | 6, 9, 10
data | 11 | 12, 13, 14, 15, 18
temp | 19 | 20, 21, 22, 23
logs | 24 | 26, 27, 28, 29, 30, 31
bin | — | cannot be allocated (no free blocks left)
⚠️ Indexed allocation needs one extra block per file, so total demand = 22 data + 6 index = 28 blocks, but only 25 are free. After the first five files, all 25 free blocks are consumed and bin cannot be stored — the per-file index-block overhead exceeds this disk's free space, a cost contiguous and linked allocation avoid here.
REAL-WORLD CONNECTION
FAT32 (USB drives) = linked allocation. Each FAT entry points to the next cluster. ext4 (Linux) = indexed with extents (contiguous run optimization). NTFS (Windows) = B-tree indexed structure (MFT). The more sophisticated the file system, the better it handles large files and fragmentation.

15
Hospital Medicine Robot — Disk Scheduling
Past Year · CO5/K3 · [10 marks]
Autonomous robot at floor 3600, delivering medicines. Floors 0–7999. Morning delivery requests: 1200, 4200, 3000, 6000, 7500, 200, 3800, 5100, 4500, 7000
Part a 4 marks — FCFS
Part b 3 marks — LOOK
Part c 3 marks — C-SCAN
// SETUP
Initial position = 3600
Requests = [1200, 4200, 3000, 6000, 7500, 200, 3800, 5100, 4500, 7000]
Sorted = [200, 1200, 3000, 3800, 4200, 4500, 5100, 6000, 7000, 7500]
Part a — FCFS
[Chart — FCFS floor movement, start = 3600; total = 26,400 floors]
3600→1200 (2400) → 4200 (3000) → 3000 (1200) → 6000 (3000) → 7500 (1500) → 200 (7300) → 3800 (3600) → 5100 (1300) → 4500 (600) → 7000 (2500) Total FCFS = 26,400 floors
Part b — LOOK (Move UP to last request, then reverse)
[Chart — LOOK floor movement, start = 3600; total = 11,200 floors]
UP: 3600→3800 (200) → 4200 (400) → 4500 (300) → 5100 (600) → 6000 (900) → 7000 (1000) → 7500 (500) ← REVERSE DOWN: 7500→3000 (4500) → 1200 (1800) → 200 (1000) Total LOOK = 11,200 floors
Part c — C-SCAN (Move UP, then jump to lowest and continue UP)
[Chart — C-SCAN floor movement, start = 3600; total = 14,000 floors]
UP: 3600→3800 (200) → 4200 (400) → 4500 (300) → 5100 (600) → 6000 (900) → 7000 (1000) → 7500 (500) JUMP (dashed): 7500→200 (7300) then UP: 200→1200 (1000) → 3000 (1800) Total C-SCAN = 14,000 floors
Algorithm | Total Movement | Uniform Wait | Best For
FCFS | 26,400 | No | Very light load
LOOK | 11,200 ✓ | Moderate | Mixed workloads
C-SCAN | 14,000 | Best uniformity | Heavy continuous load ✓

16
RAG, Deadlock Detection + Banker's Algorithm (Cloud VMs)
Past Year · CO3/K3 · [15 marks]
Part a — Deadlock Analysis
System has 4 processes P1–P4, 3 resources: R1(2 instances), R2(3 instances), R3(2 instances).
Given allocation state (1 instance each):
• P1 holds R1, requests R2, R3
• P2 holds R2, requests R1, R2
• P3 requests R2, R3 (holds nothing)
Part a(i) 2 marks — Draw the Resource Allocation Graph (RAG) for this situation.
Part a(ii) 3 marks — Convert the RAG into a Wait-For Graph (WFG). Determine whether a deadlock exists and explain why.
Part b — Banker's Algorithm (Cloud VMs)
4 VMs, 4 resource types: R1=CPU, R2=RAM, R3=Secondary Storage, R4=Primary Storage
VM | Alloc (R1 R2 R3 R4) | Max (R1 R2 R3 R4)
VM1 | 1 2 1 1 | 3 3 2 0
VM2 | 2 0 1 0 | 2 1 1 2
VM3 | 3 1 2 1 | 3 2 2 1
VM4 | 0 1 1 1 | 1 1 2 1
Available: R1=1, R2=1, R3=0, R4=2
Part b(i) 2 marks — Calculate the Need Matrix for all VMs.
Part b(ii) 3 marks — Determine whether the system is in a safe state or not.
Part b(iii) 3 marks — VM4 requests (R1=1, R2=0, R3=0, R4=0) at t=1ms. Analyze whether it can be safely granted.
Part b(iv) 2 marks — VM1 requests (R1=1, R2=1, R3=0, R4=1) at t=3ms. Analyze whether it can be safely granted.
Part a(i) — Resource Allocation Graph (RAG)
KEY CONCEPT
RAG has two edge types: Assignment (R→P) and Request (P→R)
RESOURCE ALLOCATION GRAPH
[RAG — nodes: processes P1–P4; resources R1 (2 instances), R2 (3 instances), R3 (2 instances)]
Assignment edges (R → P, "holds"): R1 → P1, R2 → P2
Request edges (P → R, "waiting for"): P1 → R2, P1 → R3, P2 → R1, P2 → R2, P3 → R2, P3 → R3
Part a(ii) — Wait-For Graph (WFG) & Deadlock Detection
KEY CONCEPT
WFG: Remove resource nodes. P→Q edge means P waits for a resource HELD BY Q.
WAIT-FOR GRAPH — Cycle = Deadlock
[WFG — processes only]
Edges: P1 → P2 (P1 waits for R2, held by P2), P2 → P1 (P2 waits for R1, held by P1), P3 → P2 (P3 waits for R2)
🔴 CYCLE: P1 → P2 → P1 = DEADLOCK
CONCLUSION
Deadlock exists between P1 and P2. P3 is an indirect victim.
P1 ↔ P2 form a deadlock cycle: P1 waits for R2 (held by P2), while P2 waits for R1 (held by P1). Neither can proceed.

P3 is not in the cycle but is effectively stuck — it's waiting for R2 which is held by deadlocked P2. P3 will never get R2 until the deadlock is resolved (typically by killing P1 or P2).
Part b(i) — Need Matrix
FORMULA
Need[i][j] = Max[i][j] − Allocation[i][j]
* Note: VM1's Max R4 = 0 but Allocation R4 = 1. This means VM1 has already been allocated more than its declared max — a data anomaly. We treat VM1's Need[R4] = 0 (it cannot need any more than its max).
VM | Need R1 | Need R2 | Need R3 | Need R4
VM1 | 3−1 = 2 | 3−2 = 1 | 2−1 = 1 | 0−1 → 0*
VM2 | 2−2 = 0 | 1−0 = 1 | 1−1 = 0 | 2−0 = 2
VM3 | 3−3 = 0 | 2−1 = 1 | 2−2 = 0 | 1−1 = 0
VM4 | 1−0 = 1 | 1−1 = 0 | 2−1 = 1 | 1−1 = 0
Part b(ii) — Safety Check
Check each step: Need ≤ Available?
VM2: [0,1,0,2] ≤ [1,1,0,2] ✓ → Available = [1,1,0,2] + [2,0,1,0] = [3,1,1,2]
VM3: [0,1,0,0] ≤ [3,1,1,2] ✓ → Available = [3,1,1,2] + [3,1,2,1] = [6,2,3,3]
VM4: [1,0,1,0] ≤ [6,2,3,3] ✓ → Available = [6,2,3,3] + [0,1,1,1] = [6,3,4,4]
VM1: [2,1,1,0] ≤ [6,3,4,4] ✓ → Available = [6,3,4,4] + [1,2,1,1] = [7,5,5,5]
✅ SAFE STATE — Safe Sequence: VM2 → VM3 → VM4 → VM1
Part b(iii) — VM4 Requests (1, 0, 0, 0)
Request = [1,0,0,0]
Check 1: Request ≤ Need[VM4] = [1,0,1,0]? [1,0,0,0] ≤ [1,0,1,0] → YES ✓
Check 2: Request ≤ Available = [1,1,0,2]? [1,0,0,0] ≤ [1,1,0,2] → YES ✓
Tentative allocation:
Available = [1,1,0,2] − [1,0,0,0] = [0,1,0,2]
Alloc[VM4] = [0,1,1,1] + [1,0,0,0] = [1,1,1,1]
Need[VM4] = [1,0,1,0] − [1,0,0,0] = [0,0,1,0]
Re-run safety with Available = [0,1,0,2]:
VM2: Need [0,1,0,2] ≤ [0,1,0,2]? YES → Available = [2,1,1,2]
VM3: Need [0,1,0,0] ≤ [2,1,1,2]? YES → Available = [5,2,3,3]
VM4: Need [0,0,1,0] ≤ [5,2,3,3]? YES → Available = [6,3,4,4]
VM1: Need [2,1,1,0] ≤ [6,3,4,4]? YES ✓
✅ GRANT VM4's request — system remains safe
New safe sequence: VM2 → VM3 → VM4 → VM1
Part b(iv) — VM1 Requests (1, 1, 0, 1)
Request = [1,1,0,1]
Check 1: Request ≤ Need[VM1] = [2,1,1,0]?
R4: request = 1 > need = 0 → VIOLATION!
VM1 is requesting MORE than its declared maximum need. This is an abnormal/illegal request — the process is exceeding its own stated requirements.
❌ REJECT VM1's request — exceeds declared maximum need
The OS should flag this as an error. Potential corrective action: terminate VM1 or force it to re-declare its maximum needs.
REAL-WORLD CONNECTION
Kubernetes (container orchestration) uses a similar concept with resource requests and limits. A pod that tries to use more CPU/memory than its declared limit is throttled or killed. AWS EC2 instances can't exceed their instance type's physical limits — the hypervisor enforces hard caps, analogous to the Banker's max constraint.

17
Memory Addressing + Cache Replacement
Past Year · CO4/K3 · [15 marks]
Part a — A computer system has a logical address space of 32 pages, where each page contains 1024 bytes. Physical memory consists of 16 frames.
(i) 3 marks How many bits are required to represent the logical address?
(ii) 2 marks How many bits for the physical address?
Part b — A cloud storage client caches 3 recently accessed files. Access sequence: 1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 6, 7, 1
(i) 4 marks LRU replacement — hits and misses
(ii) 4 marks Optimal replacement — hits and misses
(iii) 2 marks Which performs better and why?
Part a — Address Bit Calculation
KEY CONCEPT
Logical address = [page number bits | offset bits]. Physical = [frame number bits | offset bits].
The offset bits are the same in both (same page/frame size). The difference is page number (logical) vs frame number (physical). The page table translates page# → frame#.
// GIVEN
Logical pages = 32 → page number bits = log₂(32) = 5 bits
Page size = 1024 B → offset bits = log₂(1024) = 10 bits
Physical frames = 16 → frame number bits = log₂(16) = 4 bits
// FORMULAS
Logical Address bits = page number bits + offset bits = 5 + 10 = 15 bits
Physical Address bits = frame number bits + offset bits = 4 + 10 = 14 bits
// Logical: [5-bit page# | 10-bit offset] → 2^15 = 32 KB total logical space
// Physical: [4-bit frame# | 10-bit offset] → 2^14 = 16 KB total physical space
// Page table maps: page# → frame# (offset unchanged)
INTUITION
The page number is like a chapter in a book (logical structure). The frame number is the actual physical shelf location in the library. The 10-bit offset is the line number within the page — same in both addresses since page = frame in size. The page table is the library catalog.
Part b(i) — LRU Cache Replacement (cache size = 3)
Access | Cache State after | Result | Evicted
1 | [1, −, −] | MISS | —
2 | [1, 2, −] | MISS | —
3 | [1, 2, 3] | MISS | —
1 | [2, 3, 1] | HIT | —
2 | [3, 1, 2] | HIT | —
3 | [1, 2, 3] | HIT | —
4 | [2, 3, 4] | MISS | 1 (LRU)
5 | [3, 4, 5] | MISS | 2 (LRU)
1 | [4, 5, 1] | MISS | 3 (LRU)
2 | [5, 1, 2] | MISS | 4 (LRU)
3 | [1, 2, 3] | MISS | 5 (LRU)
6 | [2, 3, 6] | MISS | 1 (LRU)
7 | [3, 6, 7] | MISS | 2 (LRU)
1 | [6, 7, 1] | MISS | 3 (LRU)
Total | Misses = 11, Hits = 3
Part b(ii) — Optimal Replacement
Access | Cache State after | Result | Evicted (farthest future use)
1 | [1, −, −] | MISS | —
2 | [1, 2, −] | MISS | —
3 | [1, 2, 3] | MISS | —
1 | [1, 2, 3] | HIT | —
2 | [1, 2, 3] | HIT | —
3 | [1, 2, 3] | HIT | —
4 | [1, 2, 4] | MISS | 3 (next use: pos 11 — farthest)
5 | [1, 2, 5] | MISS | 4 (never used again)
1 | [1, 2, 5] | HIT | ← OPT kept 1 and 2!
2 | [1, 2, 5] | HIT | —
3 | [1, 2, 3] | MISS | 5 (never used again)
6 | [1, 3, 6] | MISS | 2 (never used again)
7 | [1, 6, 7] | MISS | 3 (never used again)
1 | [1, 6, 7] | HIT | —
Total | Misses = 8, Hits = 6 ✓
Part b(iii) — Which Performs Better?
Algorithm | Hits | Misses | Feasible?
Optimal (OPT) | 6 ✓ | 8 | No — needs future knowledge
LRU | 3 | 11 | Yes — uses past history
CONCLUSION
OPT is better (6 hits vs 3), but LRU is the best practical choice
OPT performs better because it evicts the page whose next use lies farthest in the future: at position 8 it evicted page 4 (never needed again) and kept pages 1 and 2, turning positions 9 and 10 into hits. LRU cannot see the future, so it evicted pages 1, 2, and 3 just before they were reused. However, LRU is the best implementable algorithm — it approximates OPT by using recent past access history as a predictor of the near future. Neither OPT nor LRU suffers from Belady's Anomaly (unlike FIFO).
REAL-WORLD CONNECTION
Redis (used by Twitter, GitHub, Stack Overflow) uses LRU and LFU eviction policies. CPU L1/L2/L3 caches use pseudo-LRU (hardware approximation). Web browsers cache recently visited pages using LRU. Choosing the wrong eviction policy for a production cache can cause 5–10× slowdowns under load.
📄 Second Past Year Paper
18
OS Architecture — Layered vs Monolithic for HPC
2nd Paper Q01 · CO1/K1 · [10 marks]
A high-performance computing system requires: fast execution, minimal overhead, ease of maintenance, strong fault isolation, and real-time multi-user services.
Part a 5 marks — Analyze how a layered structure and a monolithic structure would handle system design. Determine which is more suitable and justify with respect to performance, modularity, and reliability.
Part b 5 marks — For the selected structure, identify any three key functionalities and discuss an appropriate abstraction model. Explain how these abstractions interact to simplify system design while ensuring efficient resource utilization and system stability.
Part a — Layered vs Monolithic: Which fits HPC?
KEY POINT 1
Monolithic OS — Everything in One Kernel
All OS services (process management, memory, file system, device drivers) run in a single large kernel binary in privileged mode. System calls go directly to the service — no layer-crossing overhead. Examples: Linux, early Unix.
KEY POINT 2
Layered OS — Strict Hierarchy of Abstraction
OS is divided into numbered layers (layer 0 = hardware, layer N = user interface). Each layer only calls services from the layer directly below it. Highly modular but slower — a system call may traverse multiple layers, each adding overhead.
Criterion | Monolithic | Layered
Performance | ⚡ Excellent — direct calls, no layer overhead | Moderate — cross-layer penalties
Modularity | Low — tightly coupled | High — each layer independent
Fault Isolation | Poor — one bug can crash kernel | Good — faults contained per layer
Ease of Maintenance | Harder (large codebase) | Easier (modular changes)
Real-time Overhead | Minimal | Higher
For HPC with real-time requirements: Monolithic wins on performance. However, a Modular Monolithic design (like Linux) — where loadable kernel modules add modularity without full layering — is the best practical choice: raw speed + selective modularity.
REAL-WORLD
Linux (monolithic + loadable modules) powers 99% of the world's supercomputers. It outperforms layered designs because HPC workloads cannot afford the extra function-call hops. Windows NT uses a layered approach but pays the performance cost — hence it's rarely used for HPC clusters.
Part b — Three Key Functionalities + Abstraction Model
FUNCTIONALITY 1
Process Abstraction (via PCB)
The OS abstracts raw CPU execution as a Process Control Block (PCB). Each process sees a private, isolated CPU view (registers, PC, stack). The scheduler multiplexes the real CPU among PCBs transparently — processes don't know they share hardware.
FUNCTIONALITY 2
Memory Abstraction (via Virtual Address Space)
Each process gets its own virtual address space. The MMU + page tables translate virtual → physical transparently. Processes can't access each other's memory — critical for fault isolation in multi-user HPC systems.
FUNCTIONALITY 3
I/O Abstraction (via File Descriptor)
All I/O (files, sockets, pipes, devices) is abstracted as a file descriptor. The same read()/write() system call works whether you're reading a disk file, a network socket, or a GPU memory buffer. This simplifies application code dramatically.
HOW THEY INTERACT
A process (PCB abstraction) reads data (file descriptor abstraction) into its private memory (virtual address abstraction). The OS coordinates all three: the scheduler gives the process CPU time → the MMU maps its virtual buffer address to real RAM → the device driver moves data through the file descriptor into that buffer. Each abstraction hides the complexity below it.

19
Interrupts & Thread Types — Five Sub-Scenarios
2nd Paper Q02 · CO1/K2 · [10 marks]
A multitasking system runs an application with multiple threads. Five sub-scenarios occur:
(i) A key is pressed by the user while the application is running.
(ii) A thread performs independent computations without OS intervention.
(iii) A disk operation completes and signals the processor.
(iv) The OS schedules threads for execution across CPU cores.
(v) Multiple threads managed entirely by a user-level library without kernel awareness.
Part a 5 marks — Identify whether an interrupt is involved for each sub-scenario. Justify based on event type and system response.
Part b 5 marks — Determine whether each sub-scenario involves user-level threads or kernel-level threads. Provide brief reasoning.
Part a — Interrupt Involved?
Scenario | Interrupt? | Type | Justification
(i) Key pressed | YES | Hardware Interrupt (IRQ) | Keyboard controller raises an IRQ. CPU suspends current task, jumps to keyboard ISR, buffers the keycode, returns.
(ii) Thread computes | NO | None | Pure computation within allocated CPU time slice. No external event — no interrupt unless a timer interrupt eventually preempts it.
(iii) Disk completes | YES | Hardware Interrupt (I/O) | Disk controller raises an interrupt when the DMA transfer finishes. The OS ISR marks the waiting process as ready.
(iv) OS schedules threads | YES | Timer Interrupt | A periodic timer interrupt triggers the scheduler. The OS performs a context switch — saves old thread state, loads the new thread.
(v) User-level library threads | NO | None | User-level thread switching is done by the library (e.g., setjmp/longjmp). No kernel involvement, no mode switch, no interrupt.
Part b — User-Level vs Kernel-Level Threads
Scenario | Thread Type | Reasoning
(i) Key press response | Kernel-Level | The keyboard interrupt causes a kernel ISR to run. The kernel may then wake a kernel-level thread waiting for input.
(ii) Independent computation | Either (context-dependent) | If the thread was created via the kernel (e.g., pthreads on Linux), it is kernel-level; if by a user library, user-level. Either can compute independently.
(iii) Disk I/O completion | Kernel-Level | The disk interrupt is handled by the kernel. The kernel unblocks the waiting kernel thread and places it in the ready queue.
(iv) OS scheduling across cores | Kernel-Level | Only the kernel can schedule across multiple CPU cores. User-level threads are invisible to the kernel scheduler.
(v) User-level library manages threads | User-Level | By definition — the library (e.g., GNU Pth, old Java green threads) switches threads in user space. The kernel sees only one schedulable thread.
REAL-WORLD
Modern Linux uses kernel-level threads (via clone() syscall) for all pthreads. User-level threads (goroutines in Go, Erlang processes) are managed by a user-space runtime — thousands can run atop a few kernel threads, giving huge scalability but losing true multi-core parallelism if the runtime doesn't do M:N mapping.

20
CPU Scheduling — Preemptive Priority + Round Robin
2nd Paper Q03 · CO2/K2 · [10 marks]
Five processes P1–P5 with lower importance number = higher priority. Same-priority processes share CPU in 2ms RR. Newly arriving higher-priority process preempts immediately.
Process | Arrival (ms) | Burst (ms) | Priority
P1 | 0 | 5 | 2
P2 | 1 | 4 | 1 ← highest
P3 | 2 | 6 | 3 ← lowest
P4 | 3 | 3 | 1 ← highest
P5 | 4 | 2 | 2
Part a 7 marks — Construct the Gantt chart and specify the process completion sequence.
Part b 3 marks — Calculate average waiting time, average turnaround time, and average response time.
Part a — Gantt Chart
ALGORITHM
Preemptive Priority + RR(2ms) for ties. Lower number = higher priority.
STEP-BY-STEP TRACE
t=0 Only P1 (pri 2) → Run P1
t=1 P2 arrives (pri 1 > P1's 2) → Preempt P1. P1 has 4ms left. Run P2.
t=3 P4 arrives (pri 1 = P2's 1). RR kicks in — P2 used 2ms slice → switch to P4. Run P4.
t=5 P4 used 2ms slice. Back to P2 (still 2ms left). Run P2.
t=7 P2 done. P4 has 1ms left. Run P4.
t=8 P4 done. P5 (arrived at t=4) was blocked until now behind the priority-1 processes. Ready: P1(4ms, pri 2), P5(2ms, pri 2) — equal priority, so RR(2ms) between them; P1 runs first (earlier arrival). Run P1.
t=10 P1 used 2ms → switch to P5. Run P5(2ms).
t=12 P5 done. Run P1 remaining 2ms.
t=14 P1 done. Only P3 (pri 3, lowest). Run P3(6ms).
t=20 P3 done.
GANTT CHART — Preemptive Priority + RR(2ms) for equal priority
| P1 | P2 | P4 | P2 | P4 | P1 | P5 | P1 | P3 |
0    1    3    5    7    8    10   12   14   20
Completion sequence: P2 → P4 → P5 → P1 → P3
Part b — WT, TAT, RT
// TAT = CT − AT | WT = TAT − BT | RT = First CPU time − AT
P1: AT=0, BT=5, CT=14 → TAT=14, WT=9, RT=0 (first ran at t=0)
P2: AT=1, BT=4, CT=7 → TAT=6, WT=2, RT=0 (first ran at t=1)
P3: AT=2, BT=6, CT=20 → TAT=18, WT=12, RT=12 (first ran at t=14)
P4: AT=3, BT=3, CT=8 → TAT=5, WT=2, RT=0 (first ran at t=3)
P5: AT=4, BT=2, CT=12 → TAT=8, WT=6, RT=6 (first ran at t=10)
Avg TAT = (14+6+18+5+8)/5 = 10.2 ms
Avg WT = (9+2+12+2+6)/5 = 6.2 ms
Avg RT = (0+0+12+0+6)/5 = 3.6 ms

21
Hospital Resource Management — Banker's Algorithm
2nd Paper Q04 · CO3/K3 · [10 marks]
Hospital resources: ICU units=6, Equipment=8, Nursing teams=7. Five patients P1–P5.
Patient | Alloc (ICU, Equip, Nurse) | Max (ICU, Equip, Nurse)
P1 | 1 2 1 | 2 3 1
P2 | 2 0 1 | 3 2 2
P3 | 1 1 2 | 1 3 3
P4 | 1 0 1 | 2 2 2
P5 | 2 1 1 | 2 3 2
Part a 4 marks — Compute the Available vector and the Need matrix.
Part b 6 marks — Determine if the system is in a safe state. Find the safe sequence or identify which patients' needs cannot be fulfilled.
Part a — Available Vector + Need Matrix
// Total allocated: ICU = 1+2+1+1+2 = 7, Equip = 2+0+1+0+1 = 4, Nurse = 1+1+2+1+1 = 6
// Data inconsistency: allocated ICU (7) exceeds the stated total of 6.
// Taking the figures literally would give Available ICU = 6−7 = −1, which is impossible.
// For exam purposes, assume total ICU = 8 (the usual intended reading of this problem):
Available = [ICU = 8−7 = 1, Equip = 8−4 = 4, Nurse = 7−6 = 1]
Patient | Need ICU | Need Equip | Need Nurse
P1 | 2−1 = 1 | 3−2 = 1 | 1−1 = 0
P2 | 3−2 = 1 | 2−0 = 2 | 2−1 = 1
P3 | 1−1 = 0 | 3−1 = 2 | 3−2 = 1
P4 | 2−1 = 1 | 2−0 = 2 | 2−1 = 1
P5 | 2−2 = 0 | 3−1 = 2 | 2−1 = 1
Part b — Safety Check (Available = [1, 4, 1])
Available = [ICU=1, Equip=4, Nurse=1]
P1: Need [1,1,0] ≤ [1,4,1]? YES → Run P1. Available = [1,4,1] + [1,2,1] = [2,6,2]
P3: Need [0,2,1] ≤ [2,6,2]? YES → Run P3. Available = [2,6,2] + [1,1,2] = [3,7,4]
P5: Need [0,2,1] ≤ [3,7,4]? YES → Run P5. Available = [3,7,4] + [2,1,1] = [5,8,5]
P2: Need [1,2,1] ≤ [5,8,5]? YES → Run P2. Available = [5,8,5] + [2,0,1] = [7,8,6]
P4: Need [1,2,1] ≤ [7,8,6]? YES → Run P4. ✓
✅ SAFE — Safe Sequence: P1 → P3 → P5 → P2 → P4

22
Library Database — Readers-Writers Problem
2nd Paper Q05 · CO3/K3 · [10 marks]
Library database: Students and faculty can read; librarians can update/delete. Requests: R1(Student read), R2(Faculty read), W1(Librarian update), R3(Student read), W2(Librarian update), R4(Student read).
Part a 6 marks — Identify a suitable synchronization technique. Write pseudocode. Determine the execution order with justification.
Part b 4 marks — During exam time (heavy reads + frequent writes), analyze fairness, starvation prevention, and suggest improvements.
Part a — Readers-Writers with Semaphores
KEY POINT
Readers-Writers Problem: Multiple readers OK simultaneously; writers need exclusive access
Use a mutex for reader count + a write semaphore. First reader blocks writers; last reader unblocks them. This is the classic "Readers-First" solution.
// ── Shared Variables ──────────────────────────────────────
semaphore write_lock  = 1   // exclusive access for writers
semaphore count_mutex = 1   // protects reader_count
int reader_count = 0        // active readers

// ── Reader Process ────────────────────────────────────────
Reader():
    wait(count_mutex)
    reader_count++
    if reader_count == 1:        // first reader blocks writers
        wait(write_lock)
    signal(count_mutex)
    // ─── READ DATABASE ───
    wait(count_mutex)
    reader_count--
    if reader_count == 0:        // last reader unblocks writers
        signal(write_lock)
    signal(count_mutex)

// ── Writer Process ────────────────────────────────────────
Writer():
    wait(write_lock)             // exclusive lock
    // ─── WRITE/UPDATE DATABASE ───
    signal(write_lock)
EXECUTION ORDER
R1, R2 run simultaneously → W1 waits → W1 runs once both readers finish → R3 → W2 → R4
R1 and R2 are readers, so they run concurrently (reader_count reaches 2). W1 blocks on write_lock. When R1 and R2 finish, reader_count drops to 0, write_lock is signaled, and W1 runs exclusively. Then R3 reads, W2 waits for R3, W2 writes, and finally R4 reads. Caveat: this order assumes each request arrives only after the previous one is underway. Under strict reader priority, if R3 arrived while R1/R2 were still reading, it would join them immediately and jump ahead of W1 — exactly the writer-starvation risk analyzed in part b.
Part b — Fairness, Starvation, Improvements
PROBLEM
Reader-First causes Writer Starvation under heavy read load
During exams, if readers continuously enter, reader_count never drops to 0, so write_lock is never signaled → librarian writes starve indefinitely.
IMPROVEMENT 1
Writers-Priority: Once a writer arrives, no new readers allowed in
Add a write_pending flag. New readers check this — if a writer is waiting, they block. Existing readers finish, writer runs, then new readers proceed. Prevents writer starvation.
IMPROVEMENT 2
Fair FIFO Queue (e.g., using a ticket-lock or arrival-order semaphore)
Replace the binary approach with a turnstile pattern — every reader and writer passes through a FIFO semaphore on entry. Requests are served in arrival order, ensuring neither readers nor writers starve.
REAL-WORLD
PostgreSQL uses LWLocks (lightweight locks) with a similar fairness queue for its buffer manager. Linux kernel's rwsem (read-write semaphore) implements the writers-priority approach to prevent starvation. SQLite uses a single writer lock — simpler but lower throughput.

23
Contiguous Memory — Worst Fit Allocation
2nd Paper Q06 · CO4/K3 · [10 marks]
Memory blocks: B1=90KB, B2=20KB, B3=30KB, B4=70KB, B5=50KB. Processes: P1(40KB,2ns), P2(30KB,7ns), P3(50KB,4ns), P4(60KB,4ns), P5(80KB,5ns), P6(20KB,3ns), P7(40KB,6ns), P8(30KB,2ns).
Part a 6 marks — Worst Fit allocation (place each process in the largest available block). Show step-by-step assignment.
Part b 4 marks — Using given usage times, compute when each process completes. Mention the execution sequence.
Part a — Worst Fit: Place in Largest Available Block
KEY POINT
Worst Fit: Always allocate to the LARGEST free block to leave the biggest possible leftover fragment
Counter-intuitively useful — leaves larger fragments for future allocations rather than fragmenting many small holes. Opposite of Best Fit.
Step | Process   | Available blocks (sorted desc)      | Block Used      | Remaining
1    | P1 (40KB) | B1=90, B4=70, B5=50, B3=30, B2=20   | B1 (90KB)       | B1 → 50KB left
2    | P2 (30KB) | B4=70, B5=50, B1=50, B3=30, B2=20   | B4 (70KB)       | B4 → 40KB left
3    | P3 (50KB) | B5=50, B1=50, B4=40, B3=30, B2=20   | B5 (50KB)       | B5 → 0KB
4    | P4 (60KB) | B1=50, B4=40, B3=30, B2=20          | No block ≥ 60KB | P4 → WAIT
5    | P5 (80KB) | B1=50, B4=40, B3=30, B2=20          | No block ≥ 80KB | P5 → WAIT
6    | P6 (20KB) | B1=50, B4=40, B3=30, B2=20          | B1 (50KB)       | B1 → 30KB left
7    | P7 (40KB) | B4=40, B1=30, B3=30, B2=20          | B4 (40KB)       | B4 → 0KB
8    | P8 (30KB) | B1=30, B3=30, B2=20                 | B1 or B3 (30KB) | → 0KB
⚠️ P4 (60KB) and P5 (80KB) cannot be allocated — no single block large enough remains after Worst Fit assignments. This highlights Worst Fit's weakness with large processes.
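The step-by-step assignment can be checked mechanically. A minimal C sketch follows; `worst_fit_alloc` is an illustrative helper name, and when two blocks tie for largest the scan below picks the lower index, so tie-breaks may differ from the hand trace — which processes fail to fit is unaffected.

```c
/* Allocate one process to the largest remaining block (Worst Fit).
   Returns the chosen block index, or -1 if no block is large enough.
   block[] is updated in place: the leftover fragment stays allocatable. */
int worst_fit_alloc(int *block, int nblocks, int size) {
    int pick = -1;
    for (int i = 0; i < nblocks; i++)
        if (block[i] >= size && (pick < 0 || block[i] > block[pick]))
            pick = i;                 /* largest block that still fits */
    if (pick >= 0)
        block[pick] -= size;          /* shrink to the leftover fragment */
    return pick;
}
```

Running it over P1..P8 against blocks {90, 20, 30, 70, 50} reproduces the key result: P4 (60KB) and P5 (80KB) find no block large enough.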
Part b — Execution Times (start at t=0, usage time = duration)
Process | Block          | Usage (ns) | Start | Completes at
P1      | B1             | 2          | 0     | 2 ns
P2      | B4             | 7          | 0     | 7 ns
P3      | B5             | 4          | 0     | 4 ns
P6      | B1 (after P1)  | 3          | 2     | 5 ns
P7      | B4 (after P2)  | 6          | 7     | 13 ns
P8      | B3 or B1       | 2          | 0     | 2 ns
P4, P5Cannot allocate
EXECUTION SEQUENCE
By completion time: P1=P8 (2ns) → P3 (4ns) → P6 (5ns) → P2 (7ns) → P7 (13ns)

24
Hypervisors — Type 1 vs Type 2
2nd Paper Q07 · CO5/K2 · [10 marks]
Part a 5 marks — Server deployment: multiple OS on physical server for high performance. Identify type of hypervisor, explain working principle with diagram.
Part b 5 marks — Personal computer testing: run multiple VMs without affecting host. Identify type of hypervisor, explain working principle with diagram.
Part a — Server Deployment: Type 1 Hypervisor (Bare-Metal)
KEY POINT
Type 1 (Bare-Metal) Hypervisor runs DIRECTLY on hardware — no host OS
Examples: VMware ESXi, Microsoft Hyper-V, Xen, KVM. The hypervisor IS the OS at the hardware level. VMs run on top of it. Maximum performance, minimum overhead — ideal for servers where every CPU cycle counts.
TYPE 1 HYPERVISOR — BARE-METAL ARCHITECTURE
Diagram (bottom to top): Physical Hardware (CPU, RAM, NIC, Disk) → Type 1 Hypervisor (VMware ESXi / KVM) running on bare metal → VM 1 (Windows Server), VM 2 (Ubuntu Linux), VM 3 (Red Hat Enterprise), each hosting its own applications.
WORKING PRINCIPLE
The hypervisor intercepts all privileged instructions from VMs via hardware virtualization extensions (Intel VT-x, AMD-V). Each VM thinks it has real hardware — the hypervisor translates VM requests to physical hardware operations. No host OS layer = near-native speed.
Part b — Personal Computer Testing: Type 2 Hypervisor (Hosted)
KEY POINT
Type 2 (Hosted) Hypervisor runs ON TOP of an existing host OS as an application
Examples: VMware Workstation, VirtualBox, Parallels. The host OS (Windows/macOS) runs normally. The hypervisor is just an app. Easy to install, flexible — developer can run Windows 11 + Ubuntu + Red Hat simultaneously on their MacBook, then close the app and use macOS normally. Higher overhead than Type 1 due to the host OS layer.
TYPE 2 HYPERVISOR — HOSTED ARCHITECTURE
Diagram (bottom to top): Physical Hardware (Intel i9, 32GB RAM, 1TB SSD) → Host OS (macOS / Windows 11) running natively → Type 2 Hypervisor (VMware Workstation / VirtualBox) running as an application → VM 1 (Windows 11), VM 2 (Ubuntu Linux), VM 3 (Red Hat), each running test apps.
REAL-WORLD
The question scenario (VMware Workstation on Intel i9, 32GB RAM, 1TB SSD) is a textbook Type 2 setup. This is standard in software companies — developers test cross-platform apps without needing separate hardware. GitHub Actions CI/CD also uses VMs for isolation, but runs Type 1 for speed.

25
Page Replacement — LRU, OPT, FIFO + EAT
2nd Paper Q08 · CO4/K3 · [15 marks]
12KB main memory space, frame=3KB → 4 frames. Memory access=100ns, page fault service=30ms. Reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6
Part a 5 marks — LRU (Least Recently Used) — count page faults
Part b 5 marks — OPT (Optimal) — count page faults
Part c 5 marks — Compute EAT for both cases using faults obtained
// Setup: 12KB / 3KB = 4 frames. Reference string length = 20
Part a — LRU (4 frames)
Ref | F1 F2 F3 F4 | Result
1   | 1  –  –  –  | MISS
2   | 1  2  –  –  | MISS
3   | 1  2  3  –  | MISS
4   | 1  2  3  4  | MISS
2   | 1  2  3  4  | HIT
1   | 1  2  3  4  | HIT
5   | 1  2  5  4  | MISS (evict 3, LRU)
6   | 1  2  5  6  | MISS (evict 4, LRU)
2   | 1  2  5  6  | HIT
1   | 1  2  5  6  | HIT
2   | 1  2  5  6  | HIT
3   | 1  2  3  6  | MISS (evict 5, LRU)
7   | 1  2  3  7  | MISS (evict 6, LRU)
6   | 6  2  3  7  | MISS (evict 1, LRU)
3   | 6  2  3  7  | HIT
2   | 6  2  3  7  | HIT
1   | 6  2  3  1  | MISS (evict 7, LRU)
2   | 6  2  3  1  | HIT
3   | 6  2  3  1  | HIT
6   | 6  2  3  1  | HIT
Total: LRU Faults = 10, Hits = 10 (count the MISS rows: 4 cold-start + 6 replacement faults)
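Fault counts like this are easy to get wrong by hand, so here is a small C simulator to confirm them (`lru_faults` is an illustrative helper name, not from the question). It evicts the resident page with the oldest last-reference time.

```c
/* Count page faults under LRU replacement with `frames` frames.
   last[] records the time of the most recent reference per resident page. */
int lru_faults(const int *refs, int n, int frames) {
    int page[16], last[16], held = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < held; i++)
            if (page[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) { last[hit] = t; continue; }   /* refresh recency */
        faults++;
        if (held < frames) {                         /* free frame left */
            page[held] = refs[t]; last[held] = t; held++;
        } else {                                     /* evict LRU page */
            int victim = 0;
            for (int i = 1; i < held; i++)
                if (last[i] < last[victim]) victim = i;
            page[victim] = refs[t]; last[victim] = t;
        }
    }
    return faults;
}
```

For the given reference string with 4 frames, the simulator agrees with the table: 10 faults.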
Part b — OPT (4 frames)
Ref | F1 F2 F3 F4 | Result | Evict Reason
1   | 1  –  –  –  | MISS   |
2   | 1  2  –  –  | MISS   |
3   | 1  2  3  –  | MISS   |
4   | 1  2  3  4  | MISS   |
2   | 1  2  3  4  | HIT    |
1   | 1  2  3  4  | HIT    |
5   | 1  2  3  5  | MISS   | Evict 4 (never used again)
6   | 1  2  3  6  | MISS   | Evict 5 (never used again)
2   | 1  2  3  6  | HIT    |
1   | 1  2  3  6  | HIT    |
2   | 1  2  3  6  | HIT    |
3   | 1  2  3  6  | HIT    |
7   | 7  2  3  6  | MISS   | Evict 1 (next use farthest, at pos 17)
6   | 7  2  3  6  | HIT    |
3   | 7  2  3  6  | HIT    |
2   | 7  2  3  6  | HIT    |
1   | 1  2  3  6  | MISS   | Evict 7 (never used again)
2   | 1  2  3  6  | HIT    |
3   | 1  2  3  6  | HIT    |
6   | 1  2  3  6  | HIT    |
Total: OPT Faults = 8, Hits = 12 ✓
Note: at ref 5, OPT evicts 4 (never referenced again), not 3 — keeping 3 resident turns position 12 into a hit. Evicting 3 there would cost an extra fault.
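The "farthest next use" rule can also be simulated; a C sketch follows (`opt_faults` is an illustrative helper name). A page never referenced again is treated as next-used at position n, i.e. farthest of all.

```c
#include <stdbool.h>

/* Count page faults under OPT (Belady's optimal) replacement:
   evict the resident page whose next use lies farthest in the future. */
int opt_faults(const int *refs, int n, int frames) {
    int page[16], held = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        bool hit = false;
        for (int i = 0; i < held; i++)
            if (page[i] == refs[t]) { hit = true; break; }
        if (hit) continue;
        faults++;
        if (held < frames) { page[held++] = refs[t]; continue; }
        int victim = 0, far = -1;
        for (int i = 0; i < held; i++) {
            int next = n;                  /* n == never used again */
            for (int u = t + 1; u < n; u++)
                if (refs[u] == page[i]) { next = u; break; }
            if (next > far) { far = next; victim = i; }
        }
        page[victim] = refs[t];
    }
    return faults;
}
```

On the given string with 4 frames this confirms the minimum possible: 8 faults.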
Part c — Effective Access Time (EAT)
// GIVEN: memory access = 100 ns, page fault service = 30 ms = 30,000,000 ns
// n = 20 references
Hit rate (LRU) = 10/20 = 0.50 | Fault rate = 10/20 = 0.50
Hit rate (OPT) = 12/20 = 0.60 | Fault rate = 8/20 = 0.40
// EAT = p×m + (1−p)×(m + f)
EAT_LRU = 0.50×100 + 0.50×(100 + 30,000,000) = 50 + 15,000,050 = 15,000,100 ns ≈ 15.0 ms
EAT_OPT = 0.60×100 + 0.40×(100 + 30,000,000) = 60 + 12,000,040 = 12,000,100 ns ≈ 12.0 ms
// OPT saves ≈3 ms per average access — two fewer faults out of 20, at 30 ms each.
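The EAT formula reduces to exact integer arithmetic in nanoseconds, which makes it easy to verify. A minimal C sketch (`eat_ns` is an illustrative helper name); the trace above gives 10 LRU faults and 8 OPT faults out of 20 references:

```c
/* Effective Access Time averaged over n references, in nanoseconds:
   a hit costs mem_ns; a fault costs mem_ns plus the fault service time.
   EAT = (hits*mem + faults*(mem + service)) / n  — exact here. */
long long eat_ns(int faults, int n, long long mem_ns, long long fault_ns) {
    long long hits = n - faults;
    return (hits * mem_ns + (long long)faults * (mem_ns + fault_ns)) / n;
}
```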

26
Disk Scheduling — SSTF, C-SCAN, C-LOOK (Two Batches)
2nd Paper Q09 · CO5/K3 · [15 marks]
Hard disk: 1500 cylinders, 5400 RPM, avg seek=4ms, transfer=200KB/s, sector=1KB, 800 sectors/cylinder. Head at cylinder 250. B2 arrives only after B1 completes.
B1: [1200, 300, 700, 50, 900]   B2: [1400, 600, 200, 1100, 400]
Part a 4 marks — Calculate disk size (GB) and average access time for a single request.
Part b 9 marks — SSTF, C-SCAN, C-LOOK for B1 then B2. Head moves toward higher-numbered cylinders initially.
Part c 2 marks — Limitations of all three algorithms.
Part a — Disk Size + Average Access Time
Disk Size = cylinders × sectors/cylinder × sector_size
          = 1500 × 800 × 1KB = 1,200,000 KB = 1,200 MB ≈ 1.2 GB
Rotational speed = 5400 RPM → rotation period = 60/5400 s ≈ 11.1 ms
Average rotational latency = half a rotation = 11.1/2 ≈ 5.55 ms
Transfer time = 1KB / 200KB/s = 0.005 s = 5 ms
Average access time = seek + rotational latency + transfer = 4 + 5.55 + 5 = 14.55 ms per request
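The access-time arithmetic generalizes to any drive. A small C sketch (`avg_access_ms` is an illustrative helper name, assuming one-sector transfers as in the question):

```c
/* Average access time for one request, in milliseconds:
   average seek + half a rotation (average rotational latency)
   + time to transfer one sector. */
double avg_access_ms(double rpm, double seek_ms,
                     double sector_kb, double transfer_kb_per_s) {
    double rotation_ms = 60000.0 / rpm;        /* one full rotation */
    double latency_ms  = rotation_ms / 2.0;    /* on average, half a turn */
    double transfer_ms = sector_kb / transfer_kb_per_s * 1000.0;
    return seek_ms + latency_ms + transfer_ms;
}
```

For 5400 RPM, 4 ms seek, 1KB sectors at 200KB/s this gives ≈14.56 ms, matching the rounded 14.55 ms above.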
Part b — SSTF (Shortest Seek Time First) — B1 then B2
ALGORITHM
At each step, service the request closest to current head position
B1: head=250, requests=[1200, 300, 700, 50, 900]
250→300 (50) → 50 (250) → 700 (650) → 900 (200) → 1200 (300)
Total B1 SSTF = 1,450 cylinders. Head ends at 1200.
B2: head=1200, requests=[1400, 600, 200, 1100, 400]
1200→1100 (100) → 1400 (300) → 600 (800) → 400 (200) → 200 (200)
Total B2 SSTF = 1,600 cylinders.
Total SSTF = 1,450 + 1,600 = 3,050 cylinders
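The greedy nearest-request rule is short enough to simulate directly. A C sketch (`sstf_total` is an illustrative helper name; there are no distance ties in either batch, so the trace is unambiguous):

```c
/* Total seek distance under SSTF: repeatedly service the pending
   request nearest the current head position. */
int sstf_total(int head, const int *req, int n) {
    int served[16] = {0}, total = 0;
    for (int k = 0; k < n; k++) {
        int pick = -1, best = 0;
        for (int i = 0; i < n; i++) {
            if (served[i]) continue;
            int d = req[i] > head ? req[i] - head : head - req[i];
            if (pick < 0 || d < best) { pick = i; best = d; }
        }
        served[pick] = 1;      /* service the closest request */
        total += best;
        head = req[pick];
    }
    return total;
}
```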
Part b — C-SCAN — B1 then B2 (moving toward higher cylinders first)
B1: head=250, moving UP. Above 250: 300, 700, 900, 1200. Below: 50.
250→300 (50) → 700 (400) → 900 (200) → 1200 (300) → 1499 (299, end of disk)
Wrap 1499→0 (1499, movement counted, no service) → 0→50 (50)
Total B1 C-SCAN = 950 + 299 + 1499 + 50 = 2,798 cylinders. Head ends at 50.
B2: head=50, moving UP. Requests=[1400, 600, 200, 1100, 400] — all above 50.
50→200 (150) → 400 (200) → 600 (200) → 1100 (500) → 1400 (300)
No pending requests remain after 1400, so the sweep stops there.
Total B2 C-SCAN = 1,350 cylinders.
Total C-SCAN = 2,798 + 1,350 = 4,148 cylinders
(C-SCAN must run to the last cylinder, 1499, and wrap through 0 before it may serve 50 — this wasted travel is exactly what C-LOOK eliminates.)
Part b — C-LOOK — B1 then B2 (only goes as far as last request)
B1: head=250, moving UP. Above 250: 300, 700, 900, 1200. Below: 50.
250→300 (50) → 700 (400) → 900 (200) → 1200 (300) ← last UP request, jump straight to 50 (1150)
Total B1 C-LOOK = 2,100 cylinders. Head ends at 50.
B2: head=50. All requests above: 200, 400, 600, 1100, 1400.
50→200 (150) → 400 (200) → 600 (200) → 1100 (500) → 1400 (300)
Total B2 C-LOOK = 1,350 cylinders.
Total C-LOOK = 2,100 + 1,350 = 3,450 cylinders
(C-LOOK beats C-SCAN here because it jumps from the last request directly to the lowest pending one, skipping the travel to cylinder 1499 and the wrap through 0.)
Algorithm | B1    | B2    | Total   | Best For
SSTF      | 1,450 | 1,600 | 3,050 ✓ | Minimum seek, batch workloads
C-SCAN    | 2,798 | 1,350 | 4,148   | Uniform wait, heavy load
C-LOOK    | 2,100 | 1,350 | 3,450   | C-SCAN fairness without wasted travel
Part c — Limitations
SSTF LIMITATION
Starvation — requests at far ends may never be serviced if nearby requests keep arriving
C-SCAN LIMITATION
Wasted head movement — travels to disk boundary even if no requests exist there
C-LOOK LIMITATION
Jump overhead — the rapid jump from last request back to first may cause mechanical stress and isn't instantaneous

📃 Third Past Year Paper
27
Kernel Structure — Dynamically Loadable Modules + Microkernel Protection
3rd Paper Q01 · CO1/K1+K2 · [10 marks]
Part a 3 marks — Identify the kernel structure type that facilitates a dynamically loadable module that can be added to a running system without rebooting or rebuilding the kernel.
Part b 7 marks — For the C code below, explain in detail how protection is achieved through abstraction in a microkernel OS. Justify with a neat diagram.
main() {
  int i,n;
  scanf("%d\n",&n);
  for (i=0; i<n; i++)
    printf("%d\n", i);
}
Part a — Modular (Loadable Kernel Modules — LKM)
ANSWER
Modular Kernel (with Loadable Kernel Modules — LKM)
The Modular Monolithic Kernel supports dynamically loadable modules. Core kernel is loaded at boot; additional modules (device drivers, file systems, network protocols) can be inserted or removed at runtime via insmod / rmmod (Linux) without rebooting. Example: Linux's .ko files — plug in a GPU driver while the system is running.
WHY NOT PURE MONOLITHIC?
Pure monolithic kernels require recompiling and rebooting to add new features. Pure microkernels pay heavy IPC overhead on every service call, which limits performance. A modular monolithic kernel gives the best of both: monolithic performance plus runtime flexibility through loadable modules.
Part b — Protection via Abstraction in Microkernel
KEY POINT 1
Dual-Mode Protection: User Mode vs Kernel Mode
When main() runs, the CPU is in User Mode — it cannot directly access hardware, memory of other processes, or I/O ports. When scanf() or printf() is called, a system call (trap/software interrupt) switches CPU to Kernel Mode, executes the I/O, then returns to User Mode.
KEY POINT 2
Microkernel: Each service runs in user space as a separate server process
In a microkernel, scanf sends an IPC message to the I/O Server (in user space). The I/O Server makes a minimal kernel call to the microkernel for actual hardware access. This means even if the I/O server crashes, the kernel itself is unaffected — strong fault isolation.
MICROKERNEL PROTECTION — C PROGRAM EXECUTION
Diagram: USER SPACE (unprivileged) holds the Application (main() with scanf()/printf()), the I/O Server (handles stdio), and the Memory Server (manages virtual memory), all communicating via IPC messages. KERNEL SPACE (privileged) holds only the microkernel: IPC, basic scheduling, memory protection, interrupt handling. Below sits the Hardware (CPU, RAM, keyboard, screen). User↔kernel transitions happen via trap/syscall (mode switch).
PROTECTION MECHANISMS FOR THE C PROGRAM
scanf() → app sends IPC to I/O server → I/O server asks microkernel for keyboard access → kernel reads input, returns to I/O server, I/O server returns to app. The app never touches the keyboard driver directly.
for loop → runs entirely in user space. No kernel involvement. If the loop runs forever, the kernel's timer interrupt can preempt it — the app can't disable interrupts.
printf() → same path as scanf — via I/O server. Screen access is mediated, never direct.

28
fork() + Pthreads — Even Sum, Odd Sum, Average
3rd Paper Q02 · CO1/K2 · [10 marks]
Part a 6 marks — Write a C program: parent computes sum of even numbers. Child creates two threads: Thread 1 = sum of odd numbers, Thread 2 = average of all numbers. Child waits for threads, prints. Parent waits for child.
Part b 4 marks — Draw the process state diagram for the specified code.
Part a — C Program
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int arr[] = {1,2,3,4,5,6,7,8,9,10};
int n = 10;

// Thread 1: sum of odd numbers
void* sum_odd(void* arg) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        if (arr[i] % 2 != 0) sum += arr[i];
    printf("Child T1 - Sum of odd: %d\n", sum);
    return NULL;
}

// Thread 2: average of all numbers
void* calc_avg(void* arg) {
    double sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];
    printf("Child T2 - Average: %.2f\n", sum / n);
    return NULL;
}

int main() {
    pid_t pid = fork();
    if (pid == 0) {                        // CHILD PROCESS
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_odd, NULL);
        pthread_create(&t2, NULL, calc_avg, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("Child done.\n");
    } else {                               // PARENT PROCESS
        int sum_even = 0;
        for (int i = 0; i < n; i++)
            if (arr[i] % 2 == 0) sum_even += arr[i];
        printf("Parent - Sum of even: %d\n", sum_even);
        wait(NULL);                        // wait for child
        printf("Parent done.\n");
    }
    return 0;
}
Part b — Process State Diagram
PROCESS + THREAD LIFECYCLE
Diagram: The parent calls fork(). Parent path: computes the even sum (RUNNING) → wait() → WAITING until the child exits → wakes on SIGCHLD → EXIT. Child path: creates Thread 1 (sum of odds) and Thread 2 (average), pthread_join()s both, then exits — briefly a ZOMBIE until the parent's wait() reaps it and the parent resumes.

29
SRTF Scheduling — Shortest Remaining Time First
3rd Paper Q03 · CO2/K5 · [10 marks]
Process | Arrival (ms) | Burst (ms)
P1      | 0            | 5
P2      | 1            | 3
P3      | 2            | 1
P4      | 3            | 2
P5      | 4            | 4
Part a 6 marks — SRTF: determine execution sequence, Gantt chart, average WT and TAT. Comment on responsiveness and fairness.
Part b 4 marks — Discuss how different scheduling algorithms impact efficiency in various scenarios.
Part a — SRTF Trace
ALGORITHM
SRTF = Preemptive SJF. Always run the process with shortest REMAINING burst time.
STEP-BY-STEP
t=0 P1(5ms). Only P1 → Run P1.
t=1 P2 arrives (3ms remaining). P1 has 4ms. P2 shorter → Preempt P1. Run P2.
t=2 P3 arrives (1ms). P2 has 2ms. P3 shorter → Preempt P2. Run P3.
t=3 P3 done (1ms). P4 arrives (2ms). Ready: P1(4ms), P2(2ms), P4(2ms). Shortest = P2 or P4 (both 2ms). Pick P2 (arrived first).
t=4 P5 arrives (4ms). P2 has 1ms left. P2 shorter → continue P2.
t=5 P2 done. Ready: P1(4ms), P4(2ms), P5(4ms). Shortest = P4(2ms) → Run P4.
t=7 P4 done. Ready: P1(4ms), P5(4ms). Tie → P1 (arrived first).
t=11 P1 done → Run P5(4ms).
t=15 P5 done.
GANTT CHART — SRTF
| P1 | P2 | P3 | P2 | P4 | P1 | P5 |
0    1    2    3    5    7    11   15
// TAT = CT − AT | WT = TAT − BT | RT = first CPU − AT
P1: AT=0, BT=5, CT=11 → TAT=11, WT=6, RT=0
P2: AT=1, BT=3, CT=5  → TAT=4,  WT=1, RT=0
P3: AT=2, BT=1, CT=3  → TAT=1,  WT=0, RT=0
P4: AT=3, BT=2, CT=7  → TAT=4,  WT=2, RT=2
P5: AT=4, BT=4, CT=15 → TAT=11, WT=7, RT=7
Avg TAT = (11+4+1+4+11)/5 = 6.2 ms
Avg WT  = (6+1+0+2+7)/5 = 3.2 ms
⚠️ Fairness concern: P1 and P5 (longer bursts) suffer high waiting times (6ms and 7ms). SRTF minimizes average WT but is unfair to longer processes. P3 gets perfect 0ms wait due to arriving with the shortest burst. This mirrors real-time system behavior where responsiveness (P3 done in 1ms) comes at the cost of fairness to P1.
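SRTF traces are the easiest place to drop a preemption by hand, so a millisecond-step simulator helps check the completion times (`srtf_completion` is an illustrative helper name; index order breaks ties, which here equals arrival order):

```c
/* SRTF (preemptive SJF): each millisecond, run the arrived process
   with the least remaining burst. Fills ct[] with completion times. */
void srtf_completion(const int *at, const int *bt, int n, int *ct) {
    int rem[8];
    for (int i = 0; i < n; i++) rem[i] = bt[i];
    int done = 0, t = 0;
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (at[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;               /* ties keep the earlier index */
        if (pick < 0) { t++; continue; } /* CPU idle until next arrival */
        rem[pick]--;
        t++;
        if (rem[pick] == 0) { ct[pick] = t; done++; }
    }
}
```

It reproduces the hand trace: CT = 11, 5, 3, 7, 15 ms for P1..P5.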
Part b — Algorithm Comparison by Scenario
ScenarioBest AlgorithmWhy
Interactive/GUI (desktop)Round RobinEnsures every process gets CPU regularly — responsive UI
Batch processing (known lengths)SJF / SRTFMinimizes average WT when burst times are known
Real-time (hard deadlines)EDF / Priority PreemptiveCritical tasks always meet their deadlines
Server with mixed workloadsMLFQAutomatically adapts: short jobs in high-priority queue, long jobs demoted
Simple/predictable loadFCFSMinimal overhead, but convoy effect risks

30
Drone Surveillance — Race Condition & Synchronization
3rd Paper Q04 · CO2/K5 · [10 marks]
Two threads share variables x=0, y=0. SensorThread updates x,y from GPS. ControlThread reads x,y to decide navigation.
Part a 4 marks — Identify the race condition. Show via interleaved execution timeline how partial updates of x and y cause incorrect control decisions.
Part b 3 marks — Why does this occur even though both threads access the same shared variables directly? Does compiler/CPU caching worsen it?
Part c 3 marks — Modify pseudocode using a synchronization mechanism to ensure data consistency without halting concurrent performance entirely.
Part a — Race Condition + Interleaved Execution
THE RACE CONDITION
SensorThread writes x and y in two SEPARATE steps — ControlThread can read in between
x and y represent a coordinate pair. They should always be updated atomically (as a unit). But x = getGPS_X() and y = getGPS_Y() are two separate memory writes. The ControlThread can read the new x but the old y — producing a phantom coordinate that never existed in reality, causing the drone to steer to a nonexistent location.
INTERLEAVED TIMELINE (RACE)
SensorThread         ControlThread
x = getGPS_X() → x=150
                               if(x>100 && y>100) → reads x=150, y=0 (OLD!)
                               150>100 but 0>100 is FALSE → no adjustCourse()
y = getGPS_Y() → y=150   ← too late, decision already made
Result: drone ignores real position (150,150) and flies into obstacle!
Part b — Why It Happens + Cache Effects
ROOT CAUSE
Non-atomic two-variable update + no memory visibility guarantee
Even though both threads "access the same variables," the CPU executes instructions one at a time. Between the two writes (x=... and y=...), the scheduler can switch threads. This window is the race.

CPU Caching makes it worse: Each CPU core has its own L1/L2 cache. SensorThread running on Core 0 writes x=150 to Core 0's cache. ControlThread on Core 1 reads x from Core 1's cache — which still holds x=0 (stale). Without a memory barrier (which a mutex provides), the update may not be visible to Core 1 for many cycles.
Part c — Fix with Read-Write Lock
// READ-WRITE LOCK: allows concurrent reads, exclusive writes
// Better than a plain mutex — ControlThread can still read whenever
// SensorThread is NOT writing (it only blocks during writes)
pthread_rwlock_t coord_lock = PTHREAD_RWLOCK_INITIALIZER;
int x = 0, y = 0;

SensorThread():
    while (true):
        int nx = getGPS_X()
        int ny = getGPS_Y()
        pthread_rwlock_wrlock(&coord_lock)   // WRITE LOCK — exclusive
        x = nx                               // atomic pair update
        y = ny
        pthread_rwlock_unlock(&coord_lock)

ControlThread():
    while (true):
        pthread_rwlock_rdlock(&coord_lock)   // READ LOCK — shared OK
        int cx = x, cy = y                   // consistent snapshot
        pthread_rwlock_unlock(&coord_lock)
        if (cx > 100 && cy > 100):
            adjustCourse()
REAL-WORLD
Flight-controller firmware in commercial drones commonly uses this pattern — a high-priority IMU/sensor thread writes position data under a write lock while multiple navigation threads read under shared read locks, allowing sensor updates on the order of 1000Hz without blocking reads. Without such synchronization, a drone could act on exactly the phantom-coordinate race shown above.

31
IoT Log System — Readers-Writers with Fairness
3rd Paper Q05 · CO3/K4 · [10 marks]
Shared file eventLog. Reader threads = monitoring dashboards (frequent reads). Writer threads = logging daemons (frequent writes, every few ms). Goal: multiple readers simultaneously, writers exclusive, no reader reads during write, no writer writes during read.
Part a 6 marks — Pseudocode using semaphores or monitors. Must prevent BOTH reader starvation AND writer starvation.
Part b 4 marks — Analyze: fairness, performance trade-offs (throughput vs latency), and scalability as thread count increases.
Part a — Fair Readers-Writers (Turnstile Pattern)
KEY POINT
Standard reader-first starves writers. Standard writer-first starves readers. Use a FIFO turnstile for fairness.
// ── Shared Primitives ─────────────────────────────────────
semaphore turnstile   = 1   // FIFO order enforcement — key to fairness
semaphore write_lock  = 1   // exclusive write access
semaphore count_mutex = 1   // protects reader_count
int reader_count = 0

// ── Reader ────────────────────────────────────────────────
Reader():
    wait(turnstile)      // FIFO queue — readers wait behind pending writers
    signal(turnstile)    // pass through, let next reader/writer proceed
    wait(count_mutex)
    reader_count++
    if reader_count == 1:
        wait(write_lock)         // first reader blocks writers
    signal(count_mutex)
    // ─── READ eventLog ───
    wait(count_mutex)
    reader_count--
    if reader_count == 0:
        signal(write_lock)       // last reader releases writers
    signal(count_mutex)

// ── Writer ────────────────────────────────────────────────
Writer():
    wait(turnstile)      // FIFO — writer blocks NEW readers from entering
    wait(write_lock)     // wait for existing readers to finish
    signal(turnstile)    // release turnstile AFTER acquiring write_lock
    // ─── WRITE to eventLog ───
    signal(write_lock)
Part b — Analysis
FAIRNESS
Turnstile ensures strict FIFO — no starvation for readers or writers
If Writer W1 arrives after Readers R1, R2, R3, W1 enters the turnstile and blocks new readers (R4, R5) from starting. W1 waits for R1, R2, R3 to finish, then writes exclusively. R4, R5 proceed after W1. Maximum wait = one write + one batch of concurrent reads.
PERFORMANCE TRADE-OFFS
High throughput for reads; latency spikes during frequent writes
Throughput: Multiple readers run simultaneously — great for dashboards. Latency: When logging daemons write every few ms, they block all dashboards briefly. For IoT with 100+ devices writing frequently, the write frequency creates read latency.
Improvement: Use a write buffer — log daemon appends to a fast circular buffer (no lock), a single flush thread periodically acquires write_lock to persist the buffer. This batches writes and dramatically reduces write frequency.
SCALABILITY
Reads scale well; writes become a bottleneck at high thread counts
100 reader threads → all read concurrently, excellent throughput. But 50 writer threads each wanting exclusive access → effectively serialized, throughput drops to 1 write at a time. Solution: Sharding — partition eventLog into N sub-logs, each with its own lock. Writers hash to a sub-log, parallelism scales with N.
REAL-WORLD
Apache Kafka solves this at scale — multiple producers (writers) append to partition logs, multiple consumers (readers) read independently without blocking producers. The log is append-only, which means reads and writes don't interfere at all — a clever architectural solution that bypasses the entire readers-writers lock problem.

32
Desktop Background Services — Priority Non-Preemptive + Round Robin
Desktop Q03 · CO2/K3 · [10 marks]
Background services: Antivirus(S1,AT=0,BT=17,pri=3), Disk Cleanup(S2,AT=2,BT=24,pri=4), System Update(S3,AT=3,BT=13,pri=1), Backup(S4,AT=5,BT=15,pri=2), Software Protection(S5,AT=2,BT=10,pri=3). Lower priority number = higher importance.
Part a 2 marks — Identify the appropriate scheduling technique for each method (priority-based non-preemptive; RR with 5ms time slice). Give reasons.
Part b 4 marks — Determine the order of execution with Gantt chart for both methods.
Part c 4 marks — Calculate Average Turnaround Time and Average Waiting Time for both methods.
Part a — Algorithm Identification
MethodAlgorithmReason
Priority-based, no preemption, full executionNon-Preemptive Priority (NPP)Each service runs to completion in priority order. Critical services (System Update, pri=1) always run first when CPU is free.
Each runs 5ms, then next, till all finishRound Robin (RR), time quantum=5msCyclic fairness — every service gets 5ms turns, ensuring all make progress.
Part b — Method 1: Non-Preemptive Priority
TRACE
t=0: Only S1(pri3) ready → Run S1(17ms). Completes t=17.
t=17: Ready: S2(pri4), S3(pri1), S4(pri2), S5(pri3). Highest = S3(pri1) → Run S3(13ms). Completes t=30.
t=30: Ready: S2(pri4), S4(pri2), S5(pri3). Highest = S4(pri2) → Run S4(15ms). Completes t=45.
t=45: S5(pri3) vs S2(pri4) → S5 wins → Run S5(10ms). Completes t=55.
t=55: Run S2(24ms). Completes t=79.
GANTT — Non-Preemptive Priority
S10–17
S317–30
S430–45
S545–55
S255–79
01730455579
// Method 1: NPP — TAT = CT − AT, WT = TAT − BT
S1: CT=17, AT=0, BT=17 → TAT=17, WT=0
S2: CT=79, AT=2, BT=24 → TAT=77, WT=53
S3: CT=30, AT=3, BT=13 → TAT=27, WT=14
S4: CT=45, AT=5, BT=15 → TAT=40, WT=25
S5: CT=55, AT=2, BT=10 → TAT=53, WT=43
Avg TAT = (17+77+27+40+53)/5 = 42.8 ms
Avg WT  = (0+53+14+25+43)/5 = 27.0 ms
Part b — Method 2: Round Robin (Q=5ms)
TRACE (simplified)
S1 starts at t=0; S2 and S5 arrive at t=2, S3 at t=3, S4 at t=5 — all during S1's first slice, so every round cycles S1→S2→S3→S4→S5 (taking queue order by service number for the t=2 tie). Total work = 17+24+13+15+10 = 79 ms, so the last completion is at t=79. S5 (10ms) finishes first in two slices; S2 (24ms) needs five slices and finishes last.
GANTT — Round Robin (Q=5ms)
S1 0–5  | S2 5–10  | S3 10–15 | S4 15–20 | S5 20–25
S1 25–30 | S2 30–35 | S3 35–40 | S4 40–45 | S5 45–50 ✓
S1 50–55 | S2 55–60 | S3 60–63 ✓ | S4 63–68 ✓
S1 68–70 ✓ | S2 70–79 ✓ (only S2 left: its last two slices run back-to-back)
// Method 2: RR Q=5ms — TAT = CT − AT, WT = TAT − BT
// (cycling S1→S2→S3→S4→S5 each round; all five arrive during S1's first slice)
S1: CT=70, AT=0, BT=17 → TAT=70, WT=53
S2: CT=79, AT=2, BT=24 → TAT=77, WT=53
S3: CT=63, AT=3, BT=13 → TAT=60, WT=47
S4: CT=68, AT=5, BT=15 → TAT=63, WT=48
S5: CT=50, AT=2, BT=10 → TAT=48, WT=38
Avg TAT = (70+77+60+63+48)/5 = 63.6 ms
Avg WT  = (53+53+47+48+38)/5 = 47.8 ms
// NPP wins on avg TAT (42.8 vs 63.6 ms), but RR wins on responsiveness:
// every service gets its first slice within 20 ms of t=0, whereas NPP
// makes low-priority S2 wait 53 ms before its first instruction.
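With all five services queued by the end of the first slice, each pass over the array is one RR round, which makes the completion times easy to verify in C (`rr_completion` is an illustrative helper name; the fixed S1→S5 cycle is an assumption stated above):

```c
/* Round Robin with quantum q. Assumes every process is already in the
   ready queue in index order (true here: all arrive during S1's first
   slice), so one pass over the array equals one RR round. */
void rr_completion(const int *bt, int n, int q, int *ct) {
    int rem[8];
    for (int i = 0; i < n; i++) rem[i] = bt[i];
    int t = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q;  /* final slice may be short */
            t += run;
            rem[i] -= run;
            if (rem[i] == 0) { ct[i] = t; left--; }
        }
    }
}
```

For bursts {17, 24, 13, 15, 10} and q=5 it yields CT = 70, 79, 63, 68, 50 ms.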
📘 Fourth Past Year Paper
33
Library Book Robot — FCFS, SCAN, C-SCAN
4th Paper Q07 · CO5/K3 · [10 marks]
Library book retrieval robot. Rows 0–199, starts at row 53. Morning requests: 98, 183, 41, 122, 14, 124, 65, 150.
Part a 3 marksFCFS: retrieve books in arrival order.
Part b 3 marksSCAN: move continuously in one direction serving requests, reach the end, reverse and serve on return.
Part c 4 marksC-SCAN: move toward one end serving requests, upon reaching the end immediately return to start without picking books, then serve again in original direction.
// Setup: rows 0–199, head=53, requests=[98,183,41,122,14,124,65,150] // Sorted: [14, 41, 65, 98, 122, 124, 150, 183] // Above 53: 65, 98, 122, 124, 150, 183 // Below 53: 41, 14
Part a — FCFS (Arrival Order)
Chart: FCFS head movement, start=53, visiting 98, 183, 41, 122, 14, 124, 65, 150 in arrival order (total 715 rows).
53→98 (45) → 183 (85) → 41 (142) → 122 (81) → 14 (108) → 124 (110) → 65 (59) → 150 (85)
Total FCFS = 45+85+142+81+108+110+59+85 = 715 rows | Avg = 715/8 = 89.4 rows
Part b — SCAN (Move UP to end, reverse, serve on return)
Chart: SCAN head movement, start=53, sweeping up through 65, 98, 122, 124, 150, 183 to row 199, then reversing down to 41 and 14 (total 331 rows).
UP: 53→65 (12) → 98 (33) → 122 (24) → 124 (2) → 150 (26) → 183 (33) → 199 (16) ← end, reverse
DOWN: 199→41 (158) → 14 (27)
Total SCAN = 12+33+24+2+26+33+16+158+27 = 331 rows | Avg = 331/8 = 41.4 rows
Part c — C-SCAN (Serve UP, jump to row 0, serve UP again)
Chart: C-SCAN head movement, start=53, sweeping up to row 199, jumping to row 0, then serving 14 and 41 (total 386 rows).
UP: 53→65 (12) → 98 (33) → 122 (24) → 124 (2) → 150 (26) → 183 (33) → 199 (16)
JUMP: 199→0 (199, no service) → 14 (14) → 41 (27)
Total C-SCAN = 12+33+24+2+26+33+16+199+14+27 = 386 rows | Avg = 386/8 = 48.3 rows
Algorithm | Total Movement | Avg Seek | Uniform Wait?
FCFS      | 715            | 89.4     | No
SCAN      | 331 ✓          | 41.4 ✓   | Good (bi-directional)
C-SCAN    | 386            | 48.3     | Best uniformity

34
Cooking Competition — Banker's Algorithm (Chopping Boards, Knives, Bowls)
4th Paper Q08a · CO3/K3 · [15 marks]
3 contestants share: 4 chopping boards, 5 knives, 4 bowls.
Chef | Has Boards | Has Knives | Has Bowls | Max Boards | Max Knives | Max Bowls
A    | 1          | 0          | 1         | 3          | 1          | 2
B    | 0          | 2          | 1         | 1          | 3          | 1
C    | 1          | 0          | 2         | 1          | 1          | 3
Part i 3 marks — Construct the required matrices (Need matrix, Available vector).
Part ii 5 marks — Find the first person to start so all can finish (safe sequence).
Part iii 4 marks — Can a request for 1 chopping board by A be granted? Justify.
Part i — Need Matrix + Available Vector
// Total Allocated: Boards = 1+0+1 = 2, Knives = 0+2+0 = 2, Bowls = 1+1+2 = 4
// Available = Total − Allocated
Available: Boards = 4−2 = 2 | Knives = 5−2 = 3 | Bowls = 4−4 = 0
// Need = Max − Has
Chef | Need Boards | Need Knives | Need Bowls
A    | 3−1 = 2     | 1−0 = 1     | 2−1 = 1
B    | 1−0 = 1     | 3−2 = 1     | 1−1 = 0
C    | 1−1 = 0     | 1−0 = 1     | 3−2 = 1
Part ii — Safety Check (Available = [2, 3, 0])
Available = [Boards=2, Knives=3, Bowls=0]
B: Need[1,1,0] ≤ [2,3,0]? YES → Run B. Available = [2,3,0] + [0,2,1] = [2,5,1]
C: Need[0,1,1] ≤ [2,5,1]? YES → Run C. Available = [2,5,1] + [1,0,2] = [3,5,3]
A: Need[2,1,1] ≤ [3,5,3]? YES → Run A. ✓
✅ SAFE — Safe Sequence: B → C → A
Part iii — A Requests 1 Chopping Board
Request_A = [Boards=1, Knives=0, Bowls=0]
Check 1: [1,0,0] ≤ Need[A] = [2,1,1]? YES ✓
Check 2: [1,0,0] ≤ Available = [2,3,0]? YES ✓
Tentatively allocate:
  Available = [2,3,0] − [1,0,0] = [1,3,0]
  Has[A]  = [1,0,1] + [1,0,0] = [2,0,1]
  Need[A] = [2,1,1] − [1,0,0] = [1,1,1]
Re-run safety with Available = [1,3,0]:
  B: Need[1,1,0] ≤ [1,3,0]? YES → Available = [1,5,1]
  C: Need[0,1,1] ≤ [1,5,1]? YES → Available = [2,5,3]
  A: Need[1,1,1] ≤ [2,5,3]? YES ✓
✅ GRANT — Request can be safely given. New safe sequence: B → C → A

35
Minimum Resources to Prevent Deadlock
4th Paper Q08b · CO3/K3 · [3 marks]
10 user processes, each requiring 3 units of resource R. Find the minimum number of units of R required such that no deadlock will occur.
KEY POINT
Formula: Minimum resources = n×(m−1) + 1, where n=processes, m=max need per process
This ensures that at any point, at least one process can always complete — preventing the circular wait condition that causes deadlock.
n = 10 processes, m = 3 units each (max need)
Minimum units = n × (m − 1) + 1 = 10 × (3 − 1) + 1 = 10 × 2 + 1 = 21 units
// Intuition: give each process (m−1) = 2 units → all are 1 short of max.
// The extra +1 unit lets at least one process complete and free its resources,
// enabling the others to finish in a chain. Below 21, deadlock is possible.
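The formula is one line of C; a sketch with the comment capturing the pigeonhole argument (`min_units` is an illustrative helper name):

```c
/* Deadlock-free minimum for n processes each needing at most max_need
   units of a single resource: hand every process (max_need - 1) units,
   then one spare guarantees some process can reach its maximum, finish,
   and release — breaking any potential circular wait. */
int min_units(int n_procs, int max_need) {
    return n_procs * (max_need - 1) + 1;
}
```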
INTUITION
If you have 20 units and 10 processes each holding 2, all are stuck waiting for their 3rd unit — deadlock! The 21st unit lets one process finish, releases 2 units, the next finishes, and so on. The +1 is the "key that unlocks the chain."

36
Memory Allocation — Best Fit, Worst Fit, First Fit
4th Paper Q09a · CO4/K3 · [7 marks]
Processes: P1=212KB, P2=417KB, P3=112KB, P4=426KB. Partitions: PAR1=100KB, PAR2=450KB, PAR3=200KB, PAR4=300KB, PAR5=500KB.
Determine allocation using Best Fit, Worst Fit, and First Fit. Calculate total internal fragmentation for each.
KEY DEFINITIONS
Best Fit = smallest partition that fits | First Fit = first partition that fits | Worst Fit = largest partition
First Fit — first partition large enough, scanning partitions in the given memory order PAR1(100), PAR2(450), PAR3(200), PAR4(300), PAR5(500)
Process | Size | Scan (100, 450, 200, 300, 500) | Allocated To | Fragmentation
P1 (212KB) | 212 | 100✗ 450✓ | PAR2 (450KB) | 450−212 = 238KB
P2 (417KB) | 417 | 100✗ 450(used) 200✗ 300✗ 500✓ | PAR5 (500KB) | 500−417 = 83KB
P3 (112KB) | 112 | 100✗ 450(used) 200✓ | PAR3 (200KB) | 200−112 = 88KB
P4 (426KB) | 426 | 100✗ 450(used) 200(used) 300✗ 500(used) | Cannot allocate — must wait | —
Total Internal Fragmentation (First Fit, P4 unallocated) | 238+83+88 = 409KB
Best Fit — Smallest partition that fits
Process | Size | Best fit partition | Allocated To | Fragmentation
P1 (212KB) | 212 | 300KB is smallest that fits (200✗) | PAR4 (300KB) | 88KB
P2 (417KB) | 417 | 450KB is smallest that fits | PAR2 (450KB) | 33KB
P3 (112KB) | 112 | 200KB is smallest that fits (100✗) | PAR3 (200KB) | 88KB
P4 (426KB) | 426 | 500KB is smallest that fits (450 used) | PAR5 (500KB) | 74KB
Total Internal Fragmentation (Best Fit) | 283KB
Worst Fit — Largest available partition
Process | Size | Largest partition | Allocated To | Fragmentation
P1 (212KB) | 212 | 500KB is largest | PAR5 (500KB) | 288KB
P2 (417KB) | 417 | 450KB is largest remaining | PAR2 (450KB) | 33KB
P3 (112KB) | 112 | 300KB is largest remaining | PAR4 (300KB) | 188KB
P4 (426KB) | 426 | 200KB✗, 100KB✗ — no fit! | Cannot allocate | —
Total Internal Fragmentation (Worst Fit, P4 unallocated) | 509KB
⚠️ Worst Fit fails to allocate P4 — it wastes large partitions on small processes, leaving only small fragments. Best Fit places all four processes with 283KB total fragmentation; Worst Fit is dangerous for large processes.

37
Page Replacement — LRU & OPT (3 frames)
4th Paper Q09b · CO4/K3 · [8 marks]
3 frames. Reference string: 2, 3, 5, 1, 4, 6, 2, 7, 5, 3, 8, 1, 6, 4, 2
i 4 marks — LRU: replace page not used for longest time in memory
ii 4 marks — OPT: replace page not used for longest time in future. Which is better?
LRU — 3 Frames
Ref | F1 | F2 | F3 | Result | Evict
2 | 2 |  |  | MISS |
3 | 2 | 3 |  | MISS |
5 | 2 | 3 | 5 | MISS |
1 | 1 | 3 | 5 | MISS | 2 (LRU)
4 | 1 | 4 | 5 | MISS | 3 (LRU)
6 | 1 | 4 | 6 | MISS | 5 (LRU)
2 | 2 | 4 | 6 | MISS | 1 (LRU)
7 | 2 | 7 | 6 | MISS | 4 (LRU)
5 | 2 | 7 | 5 | MISS | 6 (LRU)
3 | 3 | 7 | 5 | MISS | 2 (LRU)
8 | 3 | 8 | 5 | MISS | 7 (LRU)
1 | 3 | 8 | 1 | MISS | 5 (LRU)
6 | 6 | 8 | 1 | MISS | 3 (LRU)
4 | 6 | 4 | 1 | MISS | 8 (LRU)
2 | 6 | 4 | 2 | MISS | 1 (LRU)
Total: LRU Faults = 15, Hits = 0
OPT — 3 Frames
Ref | F1 | F2 | F3 | Result | Evict (farthest future use)
2 | 2 |  |  | MISS |
3 | 2 | 3 |  | MISS |
5 | 2 | 3 | 5 | MISS |
1 | 2 | 1 | 5 | MISS | 3 (next use: pos 10)
4 | 2 | 4 | 5 | MISS | 1 (next use: pos 12)
6 | 2 | 6 | 5 | MISS | 4 (next use: pos 14)
2 | 2 | 6 | 5 | HIT |
7 | 7 | 6 | 5 | MISS | 2 (next use: pos 15)
5 | 7 | 6 | 5 | HIT |
3 | 3 | 6 | 5 | MISS | 7 (never used again)
8 | 8 | 6 | 5 | MISS | 3 (never used again)
1 | 1 | 6 | 5 | MISS | 8 (never used again)
6 | 1 | 6 | 5 | HIT |
4 | 4 | 6 | 5 | MISS | 1 (never used again)
2 | 2 | 6 | 5 | MISS | 4 (never used again)
Total: OPT Faults = 12, Hits = 3
⚠️ OPT (12 faults, 3 hits) beats LRU (15 faults, 0 hits). With only 3 frames and 8 distinct pages, LRU keeps evicting pages just before their next use, while OPT's future knowledge salvages hits on 2, 5, and 6. OPT is the theoretical minimum for any replacement policy; LRU can only approximate it.

📙 Fifth Past Year Paper
38
Smart TV Operating System — Block Diagram & OS Functions
5th Paper Q01 · CO1/K1 · [10 marks]
Design an OS for a Smart TV. Demonstrate the critical hardware components and their interactions using a block diagram, explaining the functions of each component and the default functionalities the OS should handle.
KEY POINT 1
Hardware Components of a Smart TV
A Smart TV has: CPU (ARM Cortex), GPU (graphics rendering), RAM (app runtime), Flash Storage (OS + apps), Network Adapter (WiFi/Ethernet), Display Controller (HDMI output), Audio Controller (speakers/HDMI ARC), Remote/IR Receiver (input), USB Controller (external devices).
SMART TV OS — HARDWARE + OS LAYER BLOCK DIAGRAM
HARDWARE LAYER: ARM CPU | GPU | RAM | Flash | WiFi | Display | Audio | USB/IR
KERNEL (OS core — privileged mode): Process Mgr | Memory Mgr | Device Drivers | File System | Security
MIDDLEWARE (user-space services): Media Framework | Network Stack | DRM/Security | App Runtime (JVM/ART)
APPLICATION LAYER (user mode): Netflix/YouTube | Browser | Settings | Games | Voice Assistant
USER: Remote / Touch
KEY POINT 2
OS Default Functionalities for Smart TV
  • Process Management: Schedule Netflix streaming, background firmware update, and UI responsiveness concurrently
  • Memory Management: With limited RAM (1–4GB), swap/kill background apps when streaming 4K video
  • Device Driver Management: Handle IR remote, HDMI-CEC, WiFi, USB peripherals
  • File System: Manage flash storage for apps, cached video, preferences
  • Security: DRM for content protection (Widevine), sandboxed app execution, secure boot
  • Network Stack: TCP/IP for streaming, DNS, HTTP/2, adaptive bitrate (HLS/DASH)
REAL-WORLD
Samsung Tizen OS and LG webOS are both Linux-based Smart TV platforms; Android TV (used by Sony, Xiaomi) builds on the full Android stack over the Linux kernel. All face the same challenge: running 4K video decode (GPU-heavy), streaming (network-heavy), and UI (CPU-heavy) simultaneously on constrained hardware with <5W power budget.

39
Process States + fork() Termination Ordering
5th Paper Q02 · CO1/K2 · [10 marks]
Part a 5 marks — Process X resides in memory. OS Y must execute it. Discuss the states and necessary steps X should undergo for executing in main memory. Illustrate with neat diagram.
Part b 5 marks — Write a C program to create 1 parent and 3 children where: (i) Parent terminates last. (ii) First child terminates before parent and after second child. (iii) Second child terminates after last child and before first child. (iv) Third child terminates first.
Part a — Process States (already covered in detail in Q1)
SUMMARY
Five States: New → Ready → Running → Waiting → Terminated
New: Process created (PCB allocated, memory assigned)
Ready: In ready queue, waiting for CPU
Running: CPU executing instructions
Waiting: Blocked on I/O or event — CPU released
Terminated: exit() called → zombie until parent wait()
Part b — Controlled Termination Order via sleep()
TERMINATION ORDER REQUIRED
Third child (C3) → Second child (C2) → First child (C1) → Parent. Use sleep() to enforce ordering.
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    pid_t c1, c2, c3;
    c1 = fork();
    if (c1 == 0) {                    // First Child
        c2 = fork();
        if (c2 == 0) {                // Second Child
            c3 = fork();
            if (c3 == 0) {            // Third Child (terminates FIRST)
                sleep(1);
                printf("C3 (3rd child) terminates\n");
                _exit(0);
            }
            wait(NULL);               // C2 waits for C3
            sleep(2);                 // then C2 terminates
            printf("C2 (2nd child) terminates\n");
            _exit(0);
        }
        wait(NULL);                   // C1 waits for C2
        sleep(1);                     // then C1 terminates
        printf("C1 (1st child) terminates\n");
        _exit(0);
    }
    wait(NULL);                       // Parent waits for C1
    printf("Parent terminates last\n");  // Parent terminates LAST
    return 0;
}
// Output order: C3 → C2 → C1 → Parent

40
Bank Help Desk — SJF, Priority (Token), Round Robin
5th Paper Q03 · CO2/K3 · [10 marks]
4 customers: C1(AT=3ms,BT=5ms,token=3), C2(AT=8ms,BT=3ms,token=1), C3(AT=0ms,BT=9ms,token=2), C4(AT=5ms,BT=4ms,token=4).
Part a 3 marks — SJF: minimum burst time first.
Part b 3 marks — Priority by token number (lower token = served first).
Part c 4 marks — Round Robin with 2ms time quantum.
Part a — SJF Non-Preemptive
TRACE
t=0: Only C3(9ms) → Run C3. t=9: C1(5ms),C2(3ms),C4(4ms) arrived. Shortest=C2(3ms). t=12: C1(5ms),C4(4ms). Shortest=C4. t=16: C1. t=21: done.
GANTT — SJF
| C3 0–9 | C2 9–12 | C4 12–16 | C1 16–21 |
// TAT = CT − AT, WT = TAT − BT
C1: CT=21, AT=3, BT=5 → TAT=18, WT=13
C2: CT=12, AT=8, BT=3 → TAT=4,  WT=1
C3: CT=9,  AT=0, BT=9 → TAT=9,  WT=0
C4: CT=16, AT=5, BT=4 → TAT=11, WT=7
Avg TAT = (18+4+9+11)/4 = 10.5 ms | Avg WT = (13+1+0+7)/4 = 5.25 ms
Part b — Priority (Token order: C2=1, C3=2, C1=3, C4=4)
GANTT — Priority (Non-Preemptive, lower token first)
| C3 0–9 | C2 9–12 | C1 12–17 | C4 17–21 |
C1: CT=17, AT=3 → TAT=14, WT=9
C2: CT=12, AT=8 → TAT=4,  WT=1
C3: CT=9,  AT=0 → TAT=9,  WT=0
C4: CT=21, AT=5 → TAT=16, WT=12
Avg TAT = (14+4+9+16)/4 = 10.75 ms | Avg WT = (9+1+0+12)/4 = 5.5 ms
Part c — Round Robin (Q=2ms)
GANTT — Round Robin Q=2ms (standard RR: a new arrival joins the tail of the ready queue; the running process is never preempted mid-quantum and re-runs immediately if the queue is empty)
| C3 0–2 | C3 2–4 | C1 4–6 | C3 6–8 | C4 8–10 | C1 10–12 | C2 12–14 | C3 14–16 | C4 16–18 | C1 18–19 | C2 19–20 | C3 20–21 |
C1: CT=19, AT=3, BT=5 → TAT=16, WT=11
C2: CT=20, AT=8, BT=3 → TAT=12, WT=9
C3: CT=21, AT=0, BT=9 → TAT=21, WT=12
C4: CT=18, AT=5, BT=4 → TAT=13, WT=9
Avg TAT = (16+12+21+13)/4 = 15.5 ms | Avg WT = (11+9+12+9)/4 = 10.25 ms

41
Restaurant Kitchen — Banker's Algorithm (Boards, Knives, Bowls)
5th Paper Q04 · CO3/K3 · [10 marks]
Shared: R0=5 boards, R1=6 knives, R2=5 bowls. Three chefs A, B, C.
Chef | Has R0 | Has R1 | Has R2 | Max R0 | Max R1 | Max R2
A | 2 | 0 | 2 | 5 | 1 | 2
B | 0 | 3 | 1 | 1 | 5 | 1
C | 1 | 1 | 2 | 1 | 3 | 3
Part a 5 marks — Is this allocation safe? Justify.
Part b 5 marks — Can A request one more knife (R1=1) at T2?
Part a — Safety Check
// Total allocated: R0=3, R1=4, R2=5
// Available = Total − Allocated: R0=5−3=2, R1=6−4=2, R2=5−5=0
// Need = Max − Has
Need[A] = [5−2, 1−0, 2−2] = [3, 1, 0]
Need[B] = [1−0, 5−3, 1−1] = [1, 2, 0]
Need[C] = [1−1, 3−1, 3−2] = [0, 2, 1]
Available = [R0=2, R1=2, R2=0]
B: Need[1,2,0] ≤ [2,2,0]? YES → Available=[2,5,1]
C: Need[0,2,1] ≤ [2,5,1]? YES → Available=[3,6,3]
A: Need[3,1,0] ≤ [3,6,3]? YES ✓
✅ SAFE — Safe Sequence: B → C → A
Part b — A Requests 1 Knife (R1=1)
Request_A = [R0=0, R1=1, R2=0]
Check 1: [0,1,0] ≤ Need[A]=[3,1,0]? YES ✓
Check 2: [0,1,0] ≤ Available=[2,2,0]? YES ✓
Tentative: Available=[2,1,0], Has[A]=[2,1,2], Need[A]=[3,0,0]
Re-check safety with Available=[2,1,0]:
  B: Need[1,2,0] ≤ [2,1,0]? NO (R1: 2>1) → skip
  C: Need[0,2,1] ≤ [2,1,0]? NO (R1: 2>1) → skip
  A: Need[3,0,0] ≤ [2,1,0]? NO (R0: 3>2) → skip
No process can proceed!
❌ UNSAFE — Request DENIED. Granting R1 to A leads to deadlock.

42
Semaphore-Based Execution Ordering
5th Paper Q05 · CO3/K3 · [10 marks]
Three concurrent processes: P1 (statement S1), P2 (statement S2), P3 (statement S3). Constraints: S1 executes only after S3 completes. S2 executes only after S1 completes. Implement using semaphores.
KEY POINT
Use binary semaphores as signals between processes. sem_wait blocks until sem_signal is called.
Execution order forced: S3 → S1 → S2. Two semaphores needed: one for S3→S1 dependency, one for S1→S2 dependency.
// ── Shared Semaphores ────────────────────────────────────
semaphore s3_done = 0   // signals when S3 completes (P3→P1)
semaphore s1_done = 0   // signals when S1 completes (P1→P2)

// ── Process P1 (executes S1 ONLY after S3 done) ──────────
P1():
    wait(s3_done)       // block until S3 complete
    execute S1
    signal(s1_done)     // notify P2 that S1 is done

// ── Process P2 (executes S2 ONLY after S1 done) ──────────
P2():
    wait(s1_done)       // block until S1 complete
    execute S2

// ── Process P3 (executes S3, no constraint) ──────────────
P3():
    execute S3
    signal(s3_done)     // notify P1 that S3 is done

// ── Execution Order Guaranteed: S3 → S1 → S2 ─────────────
// If P1 runs first: wait(s3_done) blocks → P3 runs S3 → signals → P1 runs S1 → P2 unblocks
// If P2 runs first: wait(s1_done) blocks → similar chain
// Any interleaving of the processes yields S3 → S1 → S2
CORRECTNESS PROOF
All three synchronization requirements satisfied
Ordering: S1 can never execute before S3 (P1 blocks on s3_done), and S2 can never execute before S1. Progress: P3 always proceeds (no wait); P1 proceeds once S3 is done; P2 proceeds once S1 is done. Bounded Waiting: each process waits on at most one signal — no starvation possible.

43
Computer Lab Allocation — Best Fit Memory Strategy
5th Paper Q06 · CO4/K3 · [10 marks]
6 labs with capacity: 320, 610, 370, 200, 730, 125 computers. 5 workshops with expected students: 115, 500, 358, 200, 375.
KEY POINT
Apply Best Fit: allocate each workshop to the smallest lab that fits it (minimizes waste)
Workshop | Students | Suitable Labs (capacity ≥ students) | Best Fit Lab | Waste
W1 (115) | 115 | 125✓ 200✓ 320✓ 370✓ 610✓ 730✓ | Lab 6 (125) | 10
W2 (500) | 500 | 610✓ 730✓ | Lab 2 (610) | 110
W3 (358) | 358 | 370✓ 730✓ | Lab 3 (370) | 12
W4 (200) | 200 | 200✓ 320✓ 730✓ | Lab 4 (200) | 0
W5 (375) | 375 | 730✓ (610 already used) | Lab 5 (730) | 355
Lab 1 (320) goes unused. Best Fit utilization: (115+500+358+200+375)/(125+610+370+200+730) = 1548/2035 ≈ 76.1% of the allocated seats filled. A careless assignment can fail outright — e.g. if W1 were placed in the 610-seat lab, W2 (500) would need the 730-seat lab, and W5 (375) would then have no lab large enough.

44
Segmentation — Physical Address Translation
5th Paper Q07 · CO4/K3 · [10 marks]
Segment table: Seg0(base=190,len=800), Seg1(base=1750,len=126), Seg2(base=60,len=120), Seg3(base=2345,len=220), Seg4(base=1876,len=30). Translate logical addresses.
FORMULA
Physical Address = Base + Offset, valid only if Offset < Length
Logical address format: [Segment Number | Offset]
Example translations:
(0, 430): Base=190,  offset=430 < length=800 → PA = 190+430  = 620 ✓
(1, 10):  Base=1750, offset=10  < length=126 → PA = 1750+10  = 1760 ✓
(2, 100): Base=60,   offset=100 < length=120 → PA = 60+100   = 160 ✓
(3, 400): Base=2345, offset=400 > length=220 → TRAP — Segment violation!
(4, 112): Base=1876, offset=112 > length=30  → TRAP — Segment violation!
HOW SEGMENTATION WORKS
Unlike paging (fixed-size pages), segmentation divides memory into variable-length segments matching logical program structure (code, stack, heap, data). Each segment has a base and limit in the segment table. The OS performs a bounds check on every memory access — if offset ≥ length, a segment fault (segfault) is raised. This provides per-segment protection and sharing — two processes can share a code segment by pointing their segment tables to the same base address.

45
Page Replacement — LRU & FIFO (3 frames, 20 references)
5th Paper Q08 · CO4/K3 · [10 marks]
3 frames. Reference string: 7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1
a 5 marks — LRU replacement
b 5 marks — FIFO replacement
a — LRU (3 frames)
Ref | F1 | F2 | F3 | Result
7 | 7 |  |  | MISS
2 | 7 | 2 |  | MISS
3 | 7 | 2 | 3 | MISS
1 | 1 | 2 | 3 | MISS (evict 7)
2 | 1 | 2 | 3 | HIT
5 | 1 | 2 | 5 | MISS (evict 3 — least recently used, last touched at ref 3)
3 | 3 | 2 | 5 | MISS (evict 1)
4 | 3 | 4 | 5 | MISS (evict 2)
6 | 3 | 4 | 6 | MISS (evict 5)
7 | 7 | 4 | 6 | MISS (evict 3)
7 | 7 | 4 | 6 | HIT
1 | 7 | 1 | 6 | MISS (evict 4)
0 | 7 | 1 | 0 | MISS (evict 6)
5 | 5 | 1 | 0 | MISS (evict 7)
4 | 5 | 4 | 0 | MISS (evict 1)
6 | 5 | 4 | 6 | MISS (evict 0)
2 | 2 | 4 | 6 | MISS (evict 5)
3 | 2 | 3 | 6 | MISS (evict 4)
0 | 2 | 3 | 0 | MISS (evict 6)
1 | 1 | 3 | 0 | MISS (evict 2)
Total: LRU Faults = 18, Hits = 2
b — FIFO (3 frames)
Ref | F1 | F2 | F3 | Result | Evict (oldest)
7 | 7 |  |  | MISS |
2 | 7 | 2 |  | MISS |
3 | 7 | 2 | 3 | MISS |
1 | 1 | 2 | 3 | MISS | 7
2 | 1 | 2 | 3 | HIT |
5 | 1 | 5 | 3 | MISS | 2
3 | 1 | 5 | 3 | HIT |
4 | 1 | 5 | 4 | MISS | 3
6 | 6 | 5 | 4 | MISS | 1
7 | 6 | 7 | 4 | MISS | 5
7 | 6 | 7 | 4 | HIT |
1 | 6 | 7 | 1 | MISS | 4
0 | 0 | 7 | 1 | MISS | 6
5 | 0 | 5 | 1 | MISS | 7
4 | 0 | 5 | 4 | MISS | 1
6 | 6 | 5 | 4 | MISS | 0
2 | 6 | 2 | 4 | MISS | 5
3 | 6 | 2 | 3 | MISS | 4
0 | 0 | 2 | 3 | MISS | 6
1 | 0 | 1 | 3 | MISS | 2
Total: FIFO Faults = 17, Hits = 3
On this string FIFO (17 faults) actually edges out LRU (18 faults) — proof that LRU is not superior on every reference string. LRU usually wins on workloads with temporal locality and, as a stack algorithm, is immune to Belady's anomaly, which is why it remains the recommended practical choice; FIFO is simpler to implement but its behavior is erratic.

46
Disk Scheduling — FCFS, SCAN, LOOK (0–999 cylinders)
5th Paper Q09 · CO5/K3 · [10 marks]
1000 cylinders (0–999). Current head=431, previous=361 (moving toward higher). Requests: 413, 242, 459, 560, 108, 324, 70, 305, 993, 732.
a 5 marks — FCFS
b 5 marks — SCAN
a — FCFS
431→413(18)→242(171)→459(217)→560(101)→108(452)→324(216)→70(254)→305(235)→993(688)→732(261)
Total FCFS = 18+171+217+101+452+216+254+235+688+261 = 2,613 cylinders
b — SCAN (moving UP from 431, then reverse)
Head at 431, direction UP (since previous=361, moving toward higher cylinders).
Above 431: 459, 560, 732, 993 | Below 431: 413, 324, 305, 242, 108, 70
UP:   431→459(28)→560(101)→732(172)→993(261)→999(6) ← end, reverse
DOWN: 999→413(586)→324(89)→305(19)→242(63)→108(134)→70(38)
Total SCAN = 28+101+172+261+6+586+89+19+63+134+38 = 1,497 cylinders
COMPARISON NOTE
FCFS (2,613) is almost 2× SCAN (1,497) — demonstrating why no real OS uses FCFS for disk scheduling. The random jumps in FCFS cause massive head movement. SCAN's sweep pattern cuts distance nearly in half while serving all requests fairly.

47
File Allocation Methods — Contiguous, Linked, Indexed
5th Paper Q10 · CO5/K3 · [10 marks]
Structure 1 3 marks — Contiguous: W(start=3,len=1), X(start=6,len=5), Y(start=12,len=3), Z(start=19,len=4)
Structure 2 3 marks — Linked: W(start=3,end=12), X(start=22,end=45)
Structure 3 4 marks — Indexed: X with index block 9, data blocks [17,12,43,39,23,34]
Structure 1 — Contiguous Allocation
KEY POINT
Each file occupies a consecutive run of blocks. FAT stores (start, length).
File | Start | Length | Blocks Occupied
W | 3 | 1 | 3
X | 6 | 5 | 6, 7, 8, 9, 10
Y | 12 | 3 | 12, 13, 14
Z | 19 | 4 | 19, 20, 21, 22
Structure 2 — Linked Allocation
KEY POINT
Each block has a pointer to the next. FAT stores (start, end). Any free blocks can be used.
File | Start | End | Chain (each block → next)
W | 3 | 12 | 3 → 4 → 5 → 6 → 7 → 8 → 9 → 10 → 11 → 12 → NULL (10 blocks, shown consecutive for illustration — linked blocks may be scattered anywhere on disk)
X | 22 | 45 | 22 → 23 → ... → 45 → NULL (24 blocks)
Structure 3 — Indexed Allocation
KEY POINT
One index block holds all data block addresses. Supports direct (O(1)) access to any block.
File | Index Block | Data Blocks (stored in index)
X | 9 | 17, 12, 43, 39, 23, 34
ADVANTAGE
To access the 4th block of file X: read index block 9 → find 4th entry = 39 → read block 39 directly. No chain traversal needed. This is O(1) random access vs O(n) for linked. ext4 Linux uses a generalization of this (extents + indirect blocks).

📔 Sixth Past Year Paper
48
Medical Device OS — Architecture for Constrained Devices
6th Paper Q01 · CO1/K1 · [10 marks]
SSS company manufactures Smart Medical Diagnostic machines. Design a customized OS incorporating SMART functionality for a medical device with smaller memory than normal computers. List default OS functions and suggest a suitable architecture.
KEY POINT 1
Microkernel Architecture — Best for Constrained Medical Devices
Medical devices have: limited RAM (64MB–512MB), strict power budgets, real-time requirements (patient monitoring deadlines), and zero-tolerance for crashes. Microkernel keeps only essential services in kernel — IPC, scheduling, basic memory. All device drivers, UI, sensors run as user-space servers. A driver crash doesn't kill the kernel.
KEY POINT 2
Default OS Functions for SMART Medical Device
  • Real-Time Scheduling (EDF): Patient vitals must be read every 100ms — hard deadline
  • Device Driver Management: ECG sensor, blood pressure monitor, touch screen, WiFi/Bluetooth for telemedicine
  • Memory Management: Fixed partitions (no swapping — disk I/O too slow for real-time). Protected memory regions prevent sensor data corruption
  • File System: Lightweight (FAT32 or JFFS2 for flash) — store patient logs, diagnostic images
  • Power Management: Dynamic frequency scaling, sleep modes between readings
  • Security: HIPAA compliance — encrypted storage, authenticated access, audit logs
  • Watchdog Timer: Auto-reset if software hangs — patient safety critical
REAL-WORLD
FDA-approved medical devices run QNX (microkernel RTOS) or VxWorks. The Medtronic insulin pump, GE ECG monitors, and Siemens MRI scanners all use certified RTOSes with deterministic scheduling. The FDA requires software to be certified under IEC 62304 — the OS architecture directly affects certification complexity.

49
Pthreads — Find Max Digit of Two Numbers Concurrently
6th Paper Q02 · CO1/K2 · [10 marks]
Two threads concurrently find the highest digit in each of two numbers, then main() sums the two highest digits. Example: a=55687 → max digit=8, b=87934 → max digit=9, sum=17.
#include <stdio.h>
#include <pthread.h>

typedef struct {
    long long int num;
    int max_digit;
} ThreadData;

// Thread function: finds the max digit of a number
void* find_max_digit(void* arg) {
    ThreadData* d = (ThreadData*)arg;
    long long int n = d->num;
    int max = 0;
    while (n > 0) {
        int digit = n % 10;
        if (digit > max) max = digit;
        n /= 10;
    }
    d->max_digit = max;
    return NULL;
}

int main() {
    pthread_t t1, t2;
    ThreadData d1 = {55687, 0};
    ThreadData d2 = {87934, 0};
    pthread_create(&t1, NULL, find_max_digit, &d1);
    pthread_create(&t2, NULL, find_max_digit, &d2);
    pthread_join(t1, NULL);   // wait for t1
    pthread_join(t2, NULL);   // wait for t2
    printf("Max digit of %lld = %d\n", d1.num, d1.max_digit);
    printf("Max digit of %lld = %d\n", d2.num, d2.max_digit);
    printf("Sum of max digits = %d\n", d1.max_digit + d2.max_digit);
    return 0;
}
// Output: Max digit of 55687 = 8, Max digit of 87934 = 9, Sum = 17

50
Writer-Priority Readers-Writers — Custom Semaphore Design
6th Paper Q03 · CO3/K3 · [10 marks]
Redesign readers-writers: writer has higher priority. A reader starts only after 2 writers have completed. The 3rd writer must wait until 5 readers have completed.
KEY POINT
Custom threshold-based semaphore logic — count-triggered signaling
// Shared variables
semaphore write_done = 0   // signalled after 2 writers complete
semaphore read_done  = 0   // signalled by each of the first 5 readers
semaphore mutex      = 1   // protects the counters
int writer_count = 0
int reader_count = 0

// Writer Process
Writer(id):
    if id == 3:                 // 3rd writer must wait for 5 readers
        for i in range(5):
            wait(read_done)     // collect 5 read_done signals
    // ─── WRITE ───
    wait(mutex)
    writer_count++
    signal(mutex)
    if writer_count == 2:       // 2nd writer signals readers
        signal(write_done)      // unblock ONE reader (chain below)

// Reader Process
Reader(id):
    wait(write_done)            // wait until 2 writers done
    signal(write_done)          // pass to next reader (chain signal)
    // ─── READ ───
    wait(mutex)
    reader_count++
    if reader_count <= 5:
        signal(read_done)       // one signal per reader, for the 3rd writer
    signal(mutex)
CORRECTNESS
ME: Writers use exclusive sections. Progress: Readers unblock after 2 writes. 3rd writer unblocks after 5 reads. Bounded Waiting: each reader/writer waits at most one threshold-count cycle.

51
Page Replacement — OPT & Second Chance, TLB EAT
6th Paper Q04–Q05 · CO4/K3 · [10 marks]
3 frames. Reference string: 1,0,2,2,1,7,6,7,0,1,2,0,3,0,4
i 4 marks — OPT page replacement
ii 4 marks — Second Chance (Clock) algorithm
iii 2 marks — Which gives fewer faults? Justify.
OPT (3 frames)
Ref | F1 | F2 | F3 | Result | Evict (farthest future use)
1 | 1 |  |  | MISS |
0 | 1 | 0 |  | MISS |
2 | 1 | 0 | 2 | MISS |
2 | 1 | 0 | 2 | HIT |
1 | 1 | 0 | 2 | HIT |
7 | 1 | 0 | 7 | MISS | 2 (next use: pos 11)
6 | 6 | 0 | 7 | MISS | 1 (next use: pos 10)
7 | 6 | 0 | 7 | HIT |
0 | 6 | 0 | 7 | HIT |
1 | 1 | 0 | 7 | MISS | 6 (never used again)
2 | 1 | 0 | 2 | MISS | 7 (never used again)
0 | 1 | 0 | 2 | HIT |
3 | 3 | 0 | 2 | MISS | 1 (never used again)
0 | 3 | 0 | 2 | HIT |
4 | 3 | 0 | 4 | MISS | 2 (never used again)
Total: OPT Faults = 9, Hits = 6 ✓
Second Chance / Clock Algorithm (3 frames)
ALGORITHM
FIFO + reference bit. On eviction: if ref bit=1, set to 0 and skip (give second chance). If ref bit=0, evict.
Ref | F1(bit) | F2(bit) | F3(bit) | Result
1 | 1(1) |  |  | MISS
0 | 1(1) | 0(1) |  | MISS
2 | 1(1) | 0(1) | 2(1) | MISS
2 | 1(1) | 0(1) | 2(1) | HIT (bit stays 1)
1 | 1(1) | 0(1) | 2(1) | HIT
7 | 7(1) | 0(0) | 2(0) | MISS: 1 bit=1→0, skip; 0 bit=1→0, skip; 2 bit=1→0, skip; 1 (now 0) → evict
6 | 7(1) | 6(1) | 2(0) | MISS: evict 0 (bit=0)
7 | 7(1) | 6(1) | 2(0) | HIT
0 | 7(1) | 6(1) | 0(1) | MISS: evict 2 (bit=0)
1 | 1(1) | 6(0) | 0(0) | MISS: 7 bit=1→0; 6 bit=1→0; 0 bit=1→0; 7 (now 0) → evict
2 | 1(1) | 2(1) | 0(0) | MISS: evict 6 (bit=0)
0 | 1(1) | 2(1) | 0(1) | HIT (set bit=1)
3 | 1(0) | 2(0) | 3(1) | MISS: pointer at F3 — 0 bit=1→0; 1 bit=1→0; 2 bit=1→0; 0 (now 0) → evict
0 | 0(1) | 2(0) | 3(1) | MISS: evict 1 (bit=0)
4 | 0(1) | 4(1) | 3(1) | MISS: evict 2 (bit=0)
Total: Second Chance Faults = 11, Hits = 4
OPT (9 faults) < Second Chance (11 faults). OPT is always optimal by definition. Second Chance (Clock) is a practical approximation of LRU — it avoids full LRU bookkeeping while still beating pure FIFO on strings with reuse, which is why Linux uses a variant of the Clock algorithm for its page cache.
TLB EAT Calculation (Q05b)
// GIVEN: 60% TLB hit, 40% miss. Memory = 30ns, TLB = 20ns
EAT = (TLB_hit_rate × (TLB_time + memory_time))
    + (TLB_miss_rate × (TLB_time + 2×memory_time))
    = 0.60 × (20+30) + 0.40 × (20+30+30)
    = 0.60 × 50 + 0.40 × 80
    = 30 + 32 = 62 ns
// Address bits: 128 pages → 7 bits; 256 frames → 8 bits; 1024B offset → 10 bits
// Logical address  = 7+10 = 17 bits
// Physical address = 8+10 = 18 bits

52
File Allocation — Contiguous & Indexed (Back Store, 256B blocks)
6th Paper Q06 · CO5/K3 · [10 marks]
24 blocks of 256 bytes each. Files: X(start=2, size=856B), Y(start=7, size=512B), C(start=11, size=1316B), B(start=18, size=148B), L(start=21, size=420B).
i 3 marks — Contiguous allocation with FAT diagram.
ii 4 marks — Indexed allocation for file C, index block=9.
iii 3 marks — Advantages and disadvantages of each.
i — Contiguous Allocation
// Blocks needed = ceil(size / block_size)
X: ceil(856/256)  = ceil(3.34) = 4 blocks → blocks 2,3,4,5
Y: ceil(512/256)  = 2 blocks            → blocks 7,8
C: ceil(1316/256) = ceil(5.14) = 6 blocks → blocks 11,12,13,14,15,16
B: ceil(148/256)  = 1 block             → block 18
L: ceil(420/256)  = ceil(1.64) = 2 blocks → blocks 21,22
File | Start Block | Blocks Needed | Blocks Used | Internal Frag
X | 2 | 4 | 2, 3, 4, 5 | 4×256−856 = 168B
Y | 7 | 2 | 7, 8 | 0B (exact)
C | 11 | 6 | 11, 12, 13, 14, 15, 16 | 6×256−1316 = 220B
B | 18 | 1 | 18 | 256−148 = 108B
L | 21 | 2 | 21, 22 | 2×256−420 = 92B
ii — Indexed Allocation for File C (index block = 9)
Index Block 9 Entry | Points To Data Block | Content
Entry 0 | 11 | C data bytes 0–255
Entry 1 | 12 | C data bytes 256–511
Entry 2 | 13 | C data bytes 512–767
Entry 3 | 14 | C data bytes 768–1023
Entry 4 | 15 | C data bytes 1024–1279
Entry 5 | 16 | C data bytes 1280–1315 (partial)
iii — Advantages and Disadvantages
Method | Advantages | Disadvantages
Contiguous | Fast sequential access; simple FAT (start + length only) | External fragmentation; file growth impossible without reallocation
Indexed | Random access O(1); no external fragmentation; file can grow | One extra block (index) per file of overhead; losing the index block loses the file

53
Disk Scheduling — SCAN & LOOK (0–1999 cylinders, head=143)
6th Paper Q07 · CO5/K3 · [10 marks]
2000 cylinders (0–1999). Head at 143, previous at 125 (moving UP). Requests: 96, 1375, 973, 1797, 848, 1520, 1050, 1650, 230, 110. Service left to right (toward higher cylinders).
a — SCAN   b — LOOK
// Sorted: [96, 110, 230, 848, 973, 1050, 1375, 1520, 1650, 1797]
// Above 143: 230, 848, 973, 1050, 1375, 1520, 1650, 1797
// Below 143: 110, 96
a — SCAN (Move UP to cylinder 1999, reverse, serve downward)
[Diagram: SCAN head path — from 143 up through 230, 848, 973, 1050, 1375, 1520, 1650, 1797 to the 1999 edge, then reverse down to 110 and 96. Total: 3,759]
UP:      143→230(87)→848(618)→973(125)→1050(77)→1375(325)→1520(145)→1650(130)→1797(147)→1999(202)
REVERSE: 1999→110(1889)→96(14)
Total SCAN = 87+618+125+77+325+145+130+147+202+1889+14 = 3,759 cylinders
b — LOOK (Move UP to last request, reverse, serve downward)
[Diagram: LOOK head path — from 143 up through 230, 848, 973, 1050, 1375, 1520, 1650 to the last request 1797, then reverse down to 110 and 96. Total: 3,355]
UP:   143→230(87)→848(618)→973(125)→1050(77)→1375(325)→1520(145)→1650(130)→1797(147) ← last request, REVERSE
DOWN: 1797→110(1687)→96(14)
Total LOOK = 87+618+125+77+325+145+130+147+1687+14 = 3,355 cylinders
Algorithm | Total Movement | Key Difference
SCAN | 3,759 | Travels to cylinder 1999 even though no requests are there
LOOK | 3,355 ✓ | Reverses at the last request (1797) — saves 202+202 = 404 cylinders