Topics: Contiguous/Linked/Indexed file allocation, Disk scheduling (FCFS, SSTF/LOOK, C-SCAN, C-LOOK), Journaling, DFS, Virtualization, Type-1/Type-2 hypervisors
Questions: FAT Q3, Q4, Q8 disk, PY-F, PY-G, 2nd Paper Q7 (Hypervisors), Q9 (SSTF/C-SCAN/C-LOOK), College server Q7 (file alloc), Netflix Q9 (SSTF/C-LOOK/LOOK), PY-3rd Q6 (VMware)
Typical marks: ~15–25 per paper
HIGH-FREQUENCY EXAM PATTERNS
✦ Scheduling Gantt charts appear in EVERY paper — always compute WT, TAT, RT
✦ Banker's Algorithm (Need matrix + safety check + request grant) — 3–6 marks every paper
✦ Page replacement (LRU + OPT + EAT) — always 10–15 marks
✦ Disk scheduling (FCFS + LOOK/C-SCAN + line chart) — appears in every paper
✦ File allocation methods (all 3 types) — common in CO5 questions
✦ Synchronization pseudocode (mutex/semaphore) — Producer-Consumer or Readers-Writers
📋 FAT Exam — Final Assessment Test
1
Process Management & System Calls
Q01 · CO1/K2 · [10 marks]
Part a 4 marks — A child process "A" completed execution via exit() but still has an entry in the process table. Parent is sleeping/unaware. What happens to the child process? If kill() is executed by the parent, will it work? Justify with diagram.
Part b 6 marks — A user types a command in the terminal. The OS creates a new process, loads the program, schedules it, and manages completion. Explain how the OS handles this execution and discuss the related system calls with a proper diagram.
Part a — Zombie Process
KEY POINT 1
What is a Zombie Process?
When a child calls exit(), it releases all memory and resources — but its PCB (Process Control Block) entry remains in the process table until the parent collects the exit status via wait(). This "dead but not gone" state is called a Zombie Process (state: Z).
ELABORATION
The OS keeps the skeleton PCB entry so the parent can read the child's exit code. Until then, the PID is occupied. If too many zombies accumulate, the PID space fills up and no new processes can be created — a real system threat.
KEY POINT 2
Does kill() Work on a Zombie? — NO
kill() will NOT remove it. A zombie has no running code and no scheduler context, so it cannot execute a signal handler; even SIGKILL (-9) has no effect because there is nothing left to kill. The only ways to remove a zombie are:
1. Parent calls wait() / waitpid()
2. Parent dies → init/systemd adopts and reaps the zombie
PROCESS STATE DIAGRAM — ZOMBIE
REAL-WORLD ANALOGY
An employee resigns but HR hasn't processed the paperwork yet. They don't work anymore, but their employee ID still exists in the system. That's a zombie — occupying a slot, doing nothing.
Part b — OS Process Execution: fork → exec → schedule → exit
KEY POINT 1
fork() — Duplicate the Shell
Shell calls fork() → OS duplicates the shell's PCB, assigns a new PID to the child, and places it in the Ready Queue. Child is an exact copy of parent at this point.
KEY POINT 2
exec() — Replace with New Program
Child calls execve(program_path) → OS replaces the child's memory image with the new program's text, data, heap, and stack. The program's main() begins execution from the top.
KEY POINT 3
Scheduler → Dispatcher → CPU
The scheduler picks the process (using its policy: CFS, priority, etc.) and the dispatcher performs a context switch — saves current process state to its PCB, loads this process's PCB registers, hands over CPU. State: READY → RUNNING.
KEY POINT 4
wait() → Reap → Done
The shell (parent) blocks on wait(). When child calls exit(), OS signals parent. Parent's wait() returns, collects exit code, OS removes zombie PCB. Shell prints the next prompt.
PROCESS EXECUTION FLOW
REAL-WORLD CONNECTION
Every time you run python script.py in a terminal, this exact sequence occurs. The bash shell is the parent, your script is the child. The shell freezes (wait()) until your script finishes, then shows the $ prompt again.
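A minimal C sketch of this fork → exec → wait cycle (illustrative; ls stands in for the command the user typed):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  // duplicate the shell (parent)
    if (pid == 0) {
        // Child: replace its memory image with the new program
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp failed");         // reached only if exec fails
        exit(1);
    }
    int status;
    waitpid(pid, &status, 0);            // parent blocks, then reaps the zombie
    printf("child %d exited with %d\n", (int)pid, WEXITSTATUS(status));
    return 0;                            // shell would print its next prompt here
}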
2
Critical Section — Peterson's Solution
Q02 · CO3/K2 · [10 marks]
Part a 2 marks — Explain why a single shared variable is insufficient for coordination. Illustrate with a suitable execution sequence.
Part b 8 marks — Design a correct two-variable software coordination mechanism and write the complete pseudocode for both threads. Justify correctness by discussing mutual exclusion, progress, and bounded waiting.
Part a — Why a Single Variable Fails
KEY POINT
Single turn Variable → Progress Violated
A single turn variable enforces strict alternation — Thread 0 can only enter CS when turn=0, Thread 1 only when turn=1. If Thread 1 never wants the CS, Thread 0 is blocked forever even though the CS is free. This violates the Progress requirement.
Failure Scenario:
turn = 0 (Thread 0's turn)
T0 enters CS ✓, exits, sets turn = 1
T1 doesn't want CS (stays in remainder)
T0 wants CS again → checks: turn == 1? → BLOCKED FOREVER
↑ CS is FREE but T0 can't enter → PROGRESS VIOLATED ❌
ANALOGY
A bathroom with a single "your turn" token passed between two people. If one person never needs to go, the other can never go again — even though the bathroom is empty.
Part b — Peterson's Solution (Two Variables)
KEY POINT
Two Shared Variables: flag[ ] + turn
flag[i] = true → Thread i wants to enter the CS
turn = j → "I yield priority to j if they also want in"
// ── Shared Variables ──────────────────────────────────
boolean flag[2] = {false, false}   // flag[i]: does thread i want CS?
int turn = 0                       // whose turn if both want in?

// ── Thread 0 ──────────────────────────────────────────
Thread_0():
    while (true):
        flag[0] = true                                 // "I want to enter"
        turn = 1                                       // "But you go first if you want"
        while (flag[1] == true AND turn == 1): wait    // busy-wait
        // ─── CRITICAL SECTION ───
        flag[0] = false                                // "Done, you can go"
        // ─── REMAINDER SECTION ──

// ── Thread 1 ──────────────────────────────────────────
Thread_1():
    while (true):
        flag[1] = true
        turn = 0
        while (flag[0] == true AND turn == 0): wait
        // ─── CRITICAL SECTION ───
        flag[1] = false
        // ─── REMAINDER SECTION ──
KEY POINT — CORRECTNESS PROOF
All Three Requirements Met
✅ Mutual Exclusion: For both threads to be in CS simultaneously, both while-conditions would need to be false. That requires flag[1]=false AND flag[0]=false — impossible if both set their flags before checking. Only one turn value exists — it's the deciding tiebreaker.
✅ Progress: If Thread 1 doesn't want CS (flag[1]=false), Thread 0's while condition is immediately false → enters CS freely. No forced alternation.
✅ Bounded Waiting: A thread waits at most one complete CS execution by the other thread. After Thread 1 exits and sets flag[1]=false, Thread 0's while condition fails and it enters immediately.
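A runnable C11 sketch of the same algorithm (an addition to these notes; it assumes the default sequentially consistent atomics, without which modern CPUs can reorder the flag/turn accesses and break mutual exclusion):
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];            // flag[i]: thread i wants the CS
atomic_int  turn;               // tiebreaker when both want in
long counter = 0;               // shared data protected by the lock

void lock(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], true);      // "I want to enter"
    atomic_store(&turn, other);           // "but you go first if you want"
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                  // busy-wait
}

void unlock(int self) {
    atomic_store(&flag[self], false);     // "done, you can go"
}

void *worker(void *arg) {
    int id = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(id);
        counter++;                         // CRITICAL SECTION
        unlock(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &ids[0]);
    pthread_create(&t[1], NULL, worker, &ids[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld (expected 200000)\n", counter);
}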
REAL-WORLD CONNECTION
Peterson's Solution is the conceptual foundation for mutex locks in every multi-threaded application. When two Java threads modify a shared ArrayList, the JVM uses similar mechanisms. The 2003 Northeast US blackout (50 million lost power) was partly caused by a race condition — a missing mutex in alarm software caused critical alerts to be silently dropped.
3
File System Recovery & Distributed Storage
Q03 · CO5/K1 · [10 marks]
Part a 5 marks — Explain how journaling and soft updates help in file system recovery. Which technique do you recommend for faster recovery after crashes?
Part b 5 marks — Discuss the role of log-structured file systems and distributed file systems in improving data reliability and performance. Which is more suitable for handling data across multiple systems?
Part a — Journaling vs Soft Updates
KEY POINT 1
Journaling — Write-Ahead Logging
Before modifying the actual file system, the OS writes a journal entry describing the upcoming change. On crash: just replay or discard uncommitted journal entries. No full disk scan (fsck) needed. Recovery is O(journal size), not O(disk size).
THREE MODES
Journal mode — logs data + metadata (safest, slowest)
Ordered mode — data written before metadata is logged (default in ext4)
Writeback mode — only metadata logged (fastest, small data-loss risk)
KEY POINT 2
Soft Updates — Safe Write Ordering
No separate log — instead, the OS enforces a strict dependency ordering on metadata writes so that even a mid-crash leaves the filesystem in a structurally safe state. A quick background scan fixes any minor inconsistencies after reboot.
Aspect | Journaling | Soft Updates
Recovery Speed | ⚡ Very Fast (replay log) | Moderate (background scan)
Write Overhead | Extra journal writes | Ordering delays
Implementation | Moderate complexity | Very complex
Used In | ext4, NTFS, XFS, HFS+ | FreeBSD UFS
✅ Recommendation: Journaling — dramatically faster recovery (just replay/discard the journal), simpler, and battle-tested in production systems worldwide.
ANALOGY
Journaling is like a chef writing down every recipe step before cooking. If the kitchen catches fire mid-recipe, they know exactly where to restart. Soft updates is like arranging ingredients in a safe order so the half-cooked meal won't be poisonous — but you still need a cleanup round.
Part b — LFS vs Distributed File System
KEY POINT 3
Log-Structured File System (LFS)
All writes (data + metadata) are appended sequentially to an on-disk log; existing blocks are never overwritten in place. Treats the entire disk as a circular buffer. Excellent write throughput (sequential I/O). An inode map tracks each file's current block locations. Works on a single machine.
KEY POINT 4
Distributed File System (DFS)
Files span multiple machines. Clients access files transparently over the network. Provides replication (fault tolerance), scalability, and location transparency. Examples: NFS, HDFS, Google GFS.
✅ DFS is more suitable for multi-system data handling. LFS improves single-node performance only. DFS provides replication (survives machine failure), unified namespace, and scales to petabytes across thousands of servers.
REAL-WORLD CONNECTION
Google Drive and Dropbox are consumer DFS. HDFS (Hadoop) stores data across thousands of nodes with 3× replication — if a node dies, data is auto-recovered. Netflix streams video from distributed storage across multiple data centers for zero-downtime resilience.
4
Server Virtualization
Q04 · CO5/K3 · [10 marks]
Part a 5 marks — Describe how virtualization can help consolidate ten servers (each at 20% capacity), reduce costs, and improve resource utilization.
Part b 5 marks — What challenges might arise during migration to a virtualized environment, and how can they be mitigated?
Part a — Consolidation via Hypervisor
KEY POINT 1
What Virtualization Does
A Type-1 hypervisor (VMware ESXi, KVM, Hyper-V) runs directly on hardware and hosts multiple Virtual Machines (VMs). Each VM gets a slice of real CPU, RAM, and disk. 10 servers × 20% load = only 2 servers' worth of actual computation — all workloads fit on 2–3 physical machines.
CONCRETE BENEFITS
Cost: 10 → 2–3 physical servers = 70–80% savings on hardware, electricity, and cooling
Resource Pooling: Idle VM CPU/RAM is dynamically allocated to busier VMs — utilization jumps from 20% to 70–80%
Isolation: Each VM has its own OS — a crash in one doesn't affect others
Live Migration: VMs can be moved between physical hosts with zero downtime (vMotion)
Snapshots: Roll back to a known-good state in seconds
Part b — Migration Challenges & Mitigations
Challenge | Risk | Mitigation
App Compatibility | App may not run on hypervisor OS | Test in staging VM before cutover
Performance Overhead | Hypervisor adds 5–15% CPU overhead | Use hardware-assisted virtualization (Intel VT-x)
Network Reconfiguration | IP/MAC changes break services | Pre-configure virtual network with same IPs
Data Transfer Time | Moving TBs of data causes downtime | Live migration — copy memory pages while running
Security / VM Escape | Attacker escapes VM to host | Regular hypervisor patching, network segmentation
REAL-WORLD CONNECTION
AWS EC2, Azure VMs, and Google Compute Engine all run on Type-1 hypervisors (Xen/KVM). A single AWS physical server runs dozens of customer VMs. This is why cloud computing is cheaper than dedicated hardware at scale — you're sharing a hypervisor with thousands of other customers while staying fully isolated.
5
CPU Scheduling — Three Algorithms
Q05 · CO2/K4 · [15 marks]
A multi-tenant SaaS cloud server handles five customer workloads:
Process | Arrival (ms) | Burst (ms) | Priority
P1 | 0 | 7 | 2
P2 | 2 | 4 | 1 ← highest
P3 | 3 | 9 | 4 ← lowest
P4 | 5 | 3 | 3
P5 | 6 | 5 | 2
Part a 5 marks — Preemptive Priority: Gantt chart, WT and TAT per process. Evaluate response time for P2 and starvation risk for P3.
Part b 5 marks — SJF Non-Preemptive: Gantt chart, WT and TAT. Compare with part (a) in terms of avg WT, avg TAT, and fairness.
Part c 5 marks — FCFS: Gantt chart, WT and TAT. Evaluate how long CPU-bound P3 delays shorter processes P2 and P4.
// KEY FORMULAS — used in all scheduling problems
Turnaround Time TAT = Completion Time (CT) − Arrival Time (AT)
Waiting Time WT = TAT − Burst Time (BT)
Average WT / TAT = Σ values / number of processes
Part a — Preemptive Priority (PP)
ALGORITHM
Always run the process with the lowest priority number (= highest priority). A newly arriving process with a smaller number preempts the running one.
STEP-BY-STEP TRACE
t=0  Only P1 arrived → Run P1 (priority 2)
t=2  P2 arrives (priority 1 beats running P1's 2) → Preempt P1! P1 has 5ms left. Run P2.
t=6  P2 done. Ready: P1(pri2, 5ms left), P3(pri4), P4(pri3). → Run P1(pri2)
t=6  P5 arrives (priority 2 = P1's). P1 continues (no preemption on tie — arrival order)
t=11 P1 done. Ready: P5(pri2), P4(pri3), P3(pri4) → Run P5(pri2)
t=16 P5 done → Run P4(pri3)
t=19 P4 done → Run P3(pri4) → finishes at t=28
GANTT CHART — Preemptive Priority
| P1 0–2 | P2 2–6 | P1 6–11 | P5 11–16 | P4 16–19 | P3 19–28 |
Process | AT | BT | CT | TAT = CT−AT | WT = TAT−BT
P1 | 0 | 7 | 11 | 11 | 4
P2 | 2 | 4 | 6 | 4 | 0
P3 | 3 | 9 | 28 | 25 | 16
P4 | 5 | 3 | 19 | 14 | 11
P5 | 6 | 5 | 16 | 10 | 5
Average | | | | 12.8 ms | 7.2 ms
⚠️ Starvation Risk for P3: P3 (priority 4, lowest importance) waits 16ms. In a real system, aging must be used — gradually increase the priority of long-waiting processes to prevent indefinite postponement.
Part b — SJF Non-Preemptive
ALGORITHM
When CPU is free, pick the shortest burst from all arrived processes. Runs to completion.
STEP-BY-STEP TRACE
t=0  Only P1 → Run P1 (BT=7). No choice.
t=7  P1 done. Arrived: P2(4), P3(9), P4(3); P5 not yet. Shortest = P4(3) → Run P4.
t=10 Arrived: P2(4), P3(9), P5(5). Shortest = P2(4) → Run P2.
t=14 Arrived: P3(9), P5(5). Shortest = P5(5) → Run P5.
t=19 Only P3 left → Run P3(9) → done at t=28.
GANTT CHART — SJF Non-Preemptive
| P1 0–7 | P4 7–10 | P2 10–14 | P5 14–19 | P3 19–28 |
Process | AT | BT | CT | TAT | WT
P1 | 0 | 7 | 7 | 7 | 0
P2 | 2 | 4 | 14 | 12 | 8
P3 | 3 | 9 | 28 | 25 | 16
P4 | 5 | 3 | 10 | 5 | 2
P5 | 6 | 5 | 19 | 13 | 8
Average | | | | 12.4 ms | 6.8 ms ← Best
✅ SJF gives the optimal average waiting time among non-preemptive algorithms. Shorter jobs (P4=2ms, P1=0ms) get excellent service. Downside: P3 still waits 16ms — SJF can cause starvation for long processes if short ones keep arriving.
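Part c — FCFS. The FCFS working did not survive in these notes, but FCFS is fully determined by arrival order, and the result below matches the 9.0 ms / 14.6 ms averages quoted in the comparison table further down:
// Service in arrival order: P1(0) → P2(2) → P3(3) → P4(5) → P5(6)
| P1 0–7 | P2 7–11 | P3 11–20 | P4 20–23 | P5 23–28 |
Process | AT | BT | CT | TAT | WT
P1 | 0 | 7 | 7 | 7 | 0
P2 | 2 | 4 | 11 | 9 | 5
P3 | 3 | 9 | 20 | 17 | 8
P4 | 5 | 3 | 23 | 18 | 15
P5 | 6 | 5 | 28 | 22 | 17
Average | | | | 14.6 ms | 9.0 ms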
⚠️ Convoy Effect: P3 (burst=9) runs between P2 and P4. Short jobs P4 (BT=3) and P5 (BT=5) wait 15ms and 17ms respectively behind a long job. FCFS is unfair to short, interactive processes — worst average WT of all three algorithms.
Algorithm | Avg WT | Avg TAT | Starvation? | Best For
Preemptive Priority | 7.2 ms | 12.8 ms | Yes (low-priority) | Real-time, critical tasks
SJF Non-Preemptive | 6.8 ms ✓ | 12.4 ms ✓ | Yes (long jobs) | Batch processing
FCFS | 9.0 ms ✗ | 14.6 ms ✗ | No (arrival order) | Simple, equal workloads
6
Page Replacement & Effective Access Time
Q06 · CO4/K3 · [15 marks]
Part i — Logical address space = 1000 bytes, physical memory = 300 bytes, page size = 100 bytes → 10 pages, 3 frames. Reference string: 4,3,2,4,1,7,2,5,2,6,1,3
a 4 marks — Optimal (OPT): replace the page not used for the longest time in the future
b 4 marks — LRU: replace the page not used for the longest time in the past
c 4 marks — Compare performance, and examine the relationship between frame count and hit ratio
Part ii 3 marks — Process P1: 32 pages, 80% in memory. Memory access=2000ns, fault overhead=1000ns. Calculate Effective Access Time (EAT).
Part i-a — OPT Page Replacement
ALGORITHM
Evict the page that won't be needed for the longest time in the future
Theoretical minimum faults — used as a benchmark. Not implementable in a real OS (requires future knowledge). Best-case baseline.
OPT — FRAME TRACE (3 frames)
Ref | Frame 1 | Frame 2 | Frame 3 | Result | Action
4 | 4 | — | — | MISS | Cold start
3 | 4 | 3 | — | MISS | Cold start
2 | 4 | 3 | 2 | MISS | Cold start
4 | 4 | 3 | 2 | HIT | —
1 | 1 | 3 | 2 | MISS | Evict 4 (never used again)
7 | 1 | 7 | 2 | MISS | Evict 3 (next use: pos 12, the farthest)
2 | 1 | 7 | 2 | HIT | —
5 | 1 | 5 | 2 | MISS | Evict 7 (never used again)
2 | 1 | 5 | 2 | HIT | —
6 | 1 | 5 | 6 | MISS | Evict 2 (never used again)
1 | 1 | 5 | 6 | HIT | —
3 | 3 | 5 | 6 | MISS | Evict 1 (all remaining pages are dead; any choice is optimal)
OPT → Page Faults = 8 | Hits = 4
Part i-b — LRU Page Replacement
ALGORITHM
Evict the page that was least recently used (oldest last-use timestamp)
Best practical algorithm. Uses recent past as approximation of near future. Implemented via counter or stack in hardware.
LRU — FRAME TRACE (3 frames)
Ref | Frame 1 | Frame 2 | Frame 3 | Result | Evicted (LRU)
4 | 4 | — | — | MISS | —
3 | 4 | 3 | — | MISS | —
2 | 4 | 3 | 2 | MISS | —
4 | 4 | 3 | 2 | HIT | —
1 | 4 | 1 | 2 | MISS | 3 (last used at pos 2)
7 | 4 | 1 | 7 | MISS | 2 (last used at pos 3)
2 | 2 | 1 | 7 | MISS | 4 (last used at pos 4)
5 | 2 | 5 | 7 | MISS | 1 (last used at pos 5)
2 | 2 | 5 | 7 | HIT | —
6 | 2 | 5 | 6 | MISS | 7 (last used at pos 6)
1 | 2 | 1 | 6 | MISS | 5 (last used at pos 8)
3 | 3 | 1 | 6 | MISS | 2 (last used at pos 9)
LRU → Page Faults = 10 | Hits = 2
Part i-c — Comparison
Algorithm | Faults | Hits | Feasible? | Verdict
OPT | 8 | 4 | No (needs future) | Theoretical benchmark
LRU | 10 | 2 | Yes (hardware counter) | Best practical choice
FRAME COUNT RELATIONSHIP
More Frames → Fewer Faults (for OPT and LRU)
For both OPT and LRU, increasing the number of frames monotonically reduces or maintains the page fault rate — more pages fit in memory, fewer misses. This is not always true for FIFO (Belady's Anomaly), but OPT and LRU are immune to it. Adding a 4th frame would reduce faults in both algorithms.
Part ii — Effective Access Time (EAT)
KEY POINT
EAT blends the fast path (hit) and slow path (fault) by their probabilities
// GIVEN
Total pages = 32
Hit ratio (p) = 80% = 0.80 (pages in main memory)
Fault ratio (1-p) = 0.20
Memory access (m) = 2000 ns
Fault overhead (f) = 1000 ns
// FORMULA
EAT = p × m + (1−p) × (m + f)
= 0.80 × 2000 + 0.20 × (2000 + 1000)
= 1600 + 0.20 × 3000
= 1600 + 600
EAT = 2200 ns
// INTERPRETATION: EAT is 10% higher than pure RAM access (2000 ns)
// because 20% of accesses incur the expensive fault path
INTUITION
80% of the time your data is in RAM (fast, 2000ns). 20% of the time the CPU must go to disk (slow, 3000ns total). The weighted average is 2200ns. This is why maximizing the number of frames in RAM is critical — going from 80% to 95% hit rate would drop EAT to ~2050ns.
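A tiny C sketch (an illustrative addition) that tabulates this sensitivity with the same m = 2000 ns and f = 1000 ns:
#include <stdio.h>

// EAT = p*m + (1-p)*(m+f): weighted average of the hit and fault paths
double eat(double p, double m, double f) {
    return p * m + (1.0 - p) * (m + f);
}

int main(void) {
    double hit_ratios[] = {0.80, 0.90, 0.95, 0.99};
    for (int i = 0; i < 4; i++)
        printf("hit ratio %.2f -> EAT = %.0f ns\n",
               hit_ratios[i], eat(hit_ratios[i], 2000, 1000));
    return 0;   // 0.80 -> 2200 ns and 0.95 -> 2050 ns, matching the text
}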
7
Banker's Algorithm — Deadlock Avoidance
Q07 · CO2/K4 · [15 marks]
Cloud system: R1 (CPU)=10, R2 (Memory)=5, R3 (I/O)=7. Four processes:
Process | Alloc R1 | Alloc R2 | Alloc R3 | Max R1 | Max R2 | Max R3
P1 | 2 | 1 | 1 | 5 | 2 | 2
P2 | 1 | 0 | 2 | 3 | 2 | 3
P3 | 3 | 1 | 1 | 6 | 1 | 3
P4 | 1 | 1 | 2 | 4 | 2 | 4
Part a 3 marks — Compute the Need matrix for all processes. Explain how it is derived and why it is critical in deadlock avoidance.
Part b 3 marks — Determine whether the system is currently in a safe state. If safe, provide a safe sequence. If not, justify why.
Part c 3 marks — P2 requests (1, 1, 0) additional resources. Should the system grant it? Perform a step-by-step safety check before and after allocation.
Part d 3 marks — In real cloud systems, resource demands are dynamic and unpredictable. Discuss limitations of Banker's Algorithm and propose an alternative for real-time systems.
Part e 3 marks — Suggest two improvements to the system's resource allocation strategy to minimize deadlock risk while maintaining high utilization.
Part a — Need Matrix
FORMULA
Need[i] = Max[i] − Allocation[i]
The Need matrix represents the maximum additional resources each process might still request. It is the worst-case future demand. The Banker's Algorithm uses it to simulate whether granting a request still leaves the system in a state where ALL processes can eventually complete.
// Total Allocated = sum of all allocations
R1: 2+1+3+1 = 7 | R2: 1+0+1+1 = 3 | R3: 1+2+1+2 = 6
// Available = Total Instances − Total Allocated
Available R1 = 10 − 7 = 3
Available R2 = 5 − 3 = 2
Available R3 = 7 − 6 = 1
Process | Need R1 (Max−Alloc) | Need R2 | Need R3
P1 | 5−2 = 3 | 2−1 = 1 | 2−1 = 1
P2 | 3−1 = 2 | 2−0 = 2 | 3−2 = 1
P3 | 6−3 = 3 | 1−1 = 0 | 3−1 = 2
P4 | 4−1 = 3 | 2−1 = 1 | 4−2 = 2
Part b — Safety Check
ALGORITHM
Find a process whose Need ≤ Available. Grant it, let it finish, recover its allocation. Repeat.
Check each step: Need ≤ Available?
P1: [3,1,1] ≤ [3,2,1] ✓ → Available becomes [5,3,2]
P2: [2,2,1] ≤ [5,3,2] ✓ → Available becomes [6,3,4]
P3: [3,0,2] ≤ [6,3,4] ✓ → Available becomes [9,4,5]
P4: [3,1,2] ≤ [9,4,5] ✓ → Available becomes [10,5,7]
✅ SAFE STATE — Safe Sequence: P1 → P2 → P3 → P4
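A compact C sketch (not from the original notes) that mechanizes this safety check, hard-coding the matrices above; the work[]/finish[] names follow the textbook algorithm:
#include <stdio.h>
#include <stdbool.h>

#define N 4  // processes
#define R 3  // resource types

int need[N][R]  = {{3,1,1},{2,2,1},{3,0,2},{3,1,2}};
int alloc[N][R] = {{2,1,1},{1,0,2},{3,1,1},{1,1,2}};

int main(void) {
    int work[R] = {3, 2, 1};            // Available
    bool finish[N] = {false};
    int seq[N], k = 0;

    // Repeatedly pick any unfinished process whose Need <= work
    for (int pass = 0; pass < N; pass++)
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (!ok) continue;
            for (int j = 0; j < R; j++)
                work[j] += alloc[i][j]; // process finishes, frees its resources
            finish[i] = true;
            seq[k++] = i + 1;
        }

    if (k == N) {
        printf("SAFE. Sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
        printf("\n");                   // prints: SAFE. Sequence: P1 P2 P3 P4
    } else printf("UNSAFE\n");
    return 0;
}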
ANALOGY
The Banker's Algorithm is like a bank that only gives loans if it can still guarantee every customer eventually gets their full requested amount. It never over-commits — always keeps enough in reserve to serve everyone in some order.
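Part c — P2 requests (1, 1, 0). The Part c working is missing from these notes; it can be reconstructed directly from the matrices above:
Step 1: Request ≤ Need[P2]? [1,1,0] ≤ [2,2,1] ✓
Step 2: Request ≤ Available? [1,1,0] ≤ [3,2,1] ✓
Step 3: Pretend-grant → Available = [2,1,1], Alloc[P2] = [2,1,2], Need[P2] = [1,1,1]
Safety check: P2 [1,1,1] ≤ [2,1,1] ✓ → [4,2,3]; P1 [3,1,1] ≤ [4,2,3] ✓ → [6,3,4]; P3 [3,0,2] ✓ → [9,4,5]; P4 [3,1,2] ✓ → [10,5,7]
✅ GRANT the request — the system remains safe (sequence P2 → P1 → P3 → P4)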
Part d — Limitations of Banker's Algorithm in Real Cloud Systems
Requires pre-declared max needs — cloud workloads are dynamic; max demand is unknown at process creation
High computational cost — O(n² × r) per request (n=processes, r=resource types) — too slow at cloud scale (thousands of containers)
Fixed process count assumed — new VMs/containers spawn dynamically; Banker's can't handle joins mid-run
Single instance only — the standard algorithm doesn't naturally handle heterogeneous multi-instance resources
Better Alternative for Real-Time: Use deadlock detection + recovery — allow all resource requests, run a detection algorithm periodically (or on timeout), then kill/rollback the deadlocked process. This trades the overhead of constant safety checking for occasional but fast recovery.
Part e — Two Improvements
IMPROVEMENT 1
Global Resource Ordering (Prevents Circular Wait)
Enforce a fixed global ordering on resource types (R1 < R2 < R3). All processes must request resources in this ascending order only. This eliminates circular wait — one of the four Coffman deadlock conditions — so deadlock becomes structurally impossible.
IMPROVEMENT 2
Resource Time-To-Live (TTL) with Preemption
Grant resources but attach a TTL timer. If a process holds resources beyond the TTL without making progress (no CPU usage), the OS forcibly reclaims them and places the process in a retry queue. Prevents indefinite hold-and-wait without rejecting requests upfront.
8
Disk Scheduling — Warehouse Robot
Q08 · CO3/K3 · [15 marks]
Robotic arm at shelf 750. Shelves 0–1999. Request queue: 93, 1800, 340, 1200, 600, 40, 1700, 820
Part a 4 marks — FCFS: Service in exact arrival order.
Part b 4 marks — LOOK: Move toward higher shelves, at last request reverse to lowest pending. (Note: question says "jumps to lowest pending" — this is C-LOOK behaviour. Interpreted as LOOK here.)
Part c 4 marks — C-SCAN: Move in one direction, when end is reached jump to lowest request and continue upward.
Part d 3 marks — Compare all strategies: seek time, fairness, starvation. Which suits heavy continuous load?
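The per-part working is missing here; it can be reconstructed from the given queue (head = 750, shelves 0–1999; the C-SCAN total counts the end-to-start wrap, as most textbook solutions do):
Part a — FCFS: 750→93(657)→1800(1707)→340(1460)→1200(860)→600(600)→40(560)→1700(1660)→820(880)
Total FCFS = 8,384 shelves
Part b — LOOK (moving up first): 750→820(70)→1200(380)→1700(500)→1800(100), reverse → 600(1200)→340(260)→93(247)→40(53)
Total LOOK = 2,810 shelves
Part c — C-SCAN (up, wrap at the ends): 750→820→1200→1700→1800→1999 (1249), jump 1999→0 (1999), then 0→40→93→340→600 (600)
Total C-SCAN = 3,848 shelves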
✅ C-SCAN is best for heavy continuous request loads. By always sweeping in one direction, it guarantees a maximum wait time of one full sweep cycle — no request at the "far end" waits extremely long. LOOK has slightly lower total movement but C-SCAN's uniform wait distribution is superior for fairness under sustained load.
REAL-WORLD CONNECTION
Linux's default I/O scheduler was long CFQ (Completely Fair Queuing) — conceptually similar to C-SCAN with fairness properties. CFQ was removed in Linux 5.0; many distributions now use BFQ (Budget Fair Queueing) or mq-deadline for HDDs. SSDs typically run with the none/noop scheduler because all flash cells are equally fast to access — there's no physical head to move.
📚 First Past Year Paper
9
DC-OS — Drone Control Operating System
Past Year · CO1/K3 · [10 marks]
Part i 6 marks — Explain how the core OS services should be designed and coordinated to meet the functional and reliability requirements of DC-OS (process management, file/data handling, inter-drone communication, error detection).
Part ii 4 marks — Illustrate with examples showing how these services interact to support real-time drone operations.
KEY POINT 1
Real-Time Process Management — Navigation Tasks
DC-OS must use preemptive real-time scheduling (EDF — Earliest Deadline First, or fixed-priority like RMS). Navigation tasks must always preempt lower-priority tasks. Each drone runs multiple threads with strict deadlines: navigation (hard real-time), telemetry (soft real-time), logging (best-effort).
INTERACTION EXAMPLE
Mid-flight: collision sensor fires → OS raises a high-priority interrupt → navigation thread preempts active mission-log writer → executes avoidance maneuver within 10ms deadline → resumes logging. Without real-time scheduling, the drone could crash before avoidance code runs.
KEY POINT 2
File and Data Handling — Flight Logs
DC-OS uses a circular ring buffer in RAM for real-time telemetry (GPS, altitude, speed). fsync() periodically flushes to NAND flash. Journaling protects against mid-flight power loss — on reboot, the last valid log state is recovered instantly without full-disk scan.
KEY POINT 3
Secure Inter-Drone Communication
DC-OS implements message-passing IPC (kernel-managed message queues) between drones. Messages are encrypted at the OS security layer. Authentication tokens prevent rogue drones from injecting false coordinates. The kernel validates all incoming messages before delivering to the navigation thread.
KEY POINT 4
Error Detection and Recovery
A watchdog timer process monitors all critical threads. If the navigation thread fails to check in within its period, OS triggers automatic recovery: throttle reduction → return-to-home mode. Hardware exceptions (SIGFPE, SIGSEGV) are caught and handled gracefully — land safely rather than crash. Checkpointing saves drone state every N seconds for mid-flight recovery.
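A minimal user-space C sketch of this watchdog pattern (illustrative only; the heartbeat variable, 100 ms control period, and 2 s timeout are assumptions, and a real RTOS would back this with a hardware watchdog):
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

time_t last_heartbeat;                    // updated by the monitored thread

void *navigation_thread(void *arg) {
    (void)arg;
    while (1) {
        /* ... navigation work ... */
        last_heartbeat = time(NULL);      // check in with the watchdog
        usleep(100 * 1000);               // 100 ms control period
    }
    return NULL;
}

void *watchdog_thread(void *arg) {
    (void)arg;
    while (1) {
        sleep(1);
        if (time(NULL) - last_heartbeat > 2) {   // missed check-in window
            fprintf(stderr, "watchdog: navigation stalled, recovering\n");
            /* trigger recovery: throttle down, return-to-home, etc. */
        }
    }
    return NULL;
}

int main(void) {
    last_heartbeat = time(NULL);
    pthread_t nav, wd;
    pthread_create(&nav, NULL, navigation_thread, NULL);
    pthread_create(&wd, NULL, watchdog_thread, NULL);
    pthread_join(nav, NULL);              // runs until externally stopped
    return 0;
}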
REAL-WORLD CONNECTION
DJI drones run a custom RTOS that implements exactly this. NASA's Mars Ingenuity helicopter used VxWorks RTOS — a certified real-time OS with hard deadlines and watchdog timers. Every millisecond of scheduling jitter on Mars could mean a crash with no recovery possible.
10
Greenhouse Temperature Monitor — Multi-Threading
Past Year · CO1/K3 · [10 marks]
Multi-threaded temperature monitoring. Multiple sensor threads continuously update a shared temperature array. A controller thread computes the average.
Part i 6 marks — Write a C program creating multiple sensor threads and a controller thread. Sensor threads update a shared array; controller reads it to compute the average.
Part ii 4 marks — Discuss the potential problems that occur without synchronization mechanisms.
Part i — C Program with Mutex Synchronization
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_ZONES 4

float temps[NUM_ZONES] = {0};                 // SHARED array
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// Stub for the hardware read (assumed provided by the sensor driver)
float read_sensor(int zone) { return 25.0f + zone; }

// ── Sensor Thread: updates one zone's reading ───────────
void* sensor_thread(void* arg) {
    int zone = *(int*)arg;
    while (1) {
        float reading = read_sensor(zone);    // hardware read
        pthread_mutex_lock(&lock);            // ACQUIRE LOCK
        temps[zone] = reading;                // update shared array
        pthread_mutex_unlock(&lock);          // RELEASE LOCK
        sleep(1);                             // wait before next read
    }
    return NULL;
}

// ── Controller Thread: computes average across all zones ─
void* controller_thread(void* arg) {
    while (1) {
        float sum = 0;
        pthread_mutex_lock(&lock);            // ACQUIRE LOCK
        for (int i = 0; i < NUM_ZONES; i++)
            sum += temps[i];
        pthread_mutex_unlock(&lock);          // RELEASE LOCK
        printf("Avg Temp: %.2f°C\n", sum / NUM_ZONES);
        sleep(5);
    }
    return NULL;
}

int main() {
    pthread_t sensors[NUM_ZONES], ctrl;
    int ids[NUM_ZONES];
    for (int i = 0; i < NUM_ZONES; i++) {
        ids[i] = i;
        pthread_create(&sensors[i], NULL, sensor_thread, &ids[i]);
    }
    pthread_create(&ctrl, NULL, controller_thread, NULL);
    pthread_join(ctrl, NULL);                 // controller runs forever
}
Part ii — Problems Without Synchronization
PROBLEM 1
Race Condition → Wrong Average
Sensor thread writes temps[2] byte-by-byte. Controller reads it mid-write. It reads a partially-written, garbled float value. The computed average is meaningless — potentially causing the system to incorrectly trigger cooling/heating.
PROBLEM 2
Data Inconsistency → Mixed Old/New Values
Controller reads temps[0] before a sensor update but temps[1] after it. The average mixes readings from different time points — a logically inconsistent snapshot.
PROBLEM 3
Memory Visibility → Stale CPU Cache
Modern CPUs cache values in L1/L2. Without a memory barrier (provided by mutex), the controller's CPU core may see a stale cached copy of the array that hasn't been flushed to main memory by the sensor thread's core.
REAL-WORLD CONSEQUENCE
The 2003 Northeast US blackout (50 million people lost power for 2 days) was partly caused by a race condition in FirstEnergy's alarm system. A software bug caused alarms to silently drop, operators saw a normal-looking screen while the grid was deteriorating. Proper mutex synchronization would have prevented this.
11
Multilevel Feedback Queue Scheduling
Three queues: Q1 (highest, RR 5ms) → Q2 (RR 10ms) → Q3 (FCFS, lowest). All new tasks enter Q1. If a task exhausts its time slice, it is demoted to the next queue.
Task | Arrival (ms) | Burst (ms) | Type
T1 | 0 | 12 | Live Streaming
T2 | 3 | 8 | Quiz Evaluation
T3 | 1 | 20 | Analytics
T4 | 6 | 6 | Live Streaming
T5 | 12 | 10 | Quiz Evaluation
Part i 6 marks — Draw the Ready Queues and Gantt Chart.
Part ii 4 marks — Calculate average Turnaround Time, Waiting Time, and Response Time.
KEY CONCEPT
MLFQ — Automatic CPU-bound vs Interactive Classification
Short/interactive tasks (finish within Q1's 5ms) stay high-priority. CPU-hungry tasks (T3=20ms) naturally sink to Q3 over time. This is the OS automatically adapting to process behavior — no manual priority assignment needed.
STEP-BY-STEP TRACE
Q1 round (5ms slices; Q1 always runs before Q2/Q3):
0–5: T1 runs in Q1 (uses full 5ms) → demoted to Q2, 7ms remaining
5–10: T3 runs in Q1 (uses full 5ms) → demoted to Q2, 15ms remaining
10–15: T2 runs in Q1 (has 8ms, uses 5ms) → demoted to Q2, 3ms remaining
15–20: T4 runs in Q1 (has 6ms, uses 5ms) → demoted to Q2, 1ms remaining
20–25: T5 (arrived t=12, waiting in Q1) runs its 5ms slice → demoted to Q2, 5ms remaining
Q2 round (10ms slices, FIFO by demotion order T1, T3, T2, T4, T5):
25–32: T1 runs in Q2 (7ms remaining, <10ms slice) → finishes at t=32
32–42: T3 runs in Q2 (full 10ms of its 15ms) → demoted to Q3, 5ms remaining
42–45: T2 runs in Q2 (3ms left) → finishes at t=45
45–46: T4 runs in Q2 (1ms left) → finishes at t=46
46–51: T5 runs in Q2 (5ms left) → finishes at t=51
Q3 (FCFS):
51–56: T3 runs in Q3 (5ms left) → finishes at t=56
GANTT CHART — Multilevel Feedback Queue
| T1/Q1 0–5 | T3/Q1 5–10 | T2/Q1 10–15 | T4/Q1 15–20 | T5/Q1 20–25 | T1/Q2 25–32 | T3/Q2 32–42 | T2/Q2 42–45 | T4/Q2 45–46 | T5/Q2 46–51 | T3/Q3 51–56 |
// Formulas: TAT = CT − AT | WT = TAT − BT | RT = First CPU time − AT
Task | AT | BT | CT | TAT | WT | RT
T1 | 0 | 12 | 32 | 32 | 20 | 0
T2 | 3 | 8 | 45 | 42 | 34 | 7
T3 | 1 | 20 | 56 | 55 | 35 | 4
T4 | 6 | 6 | 46 | 40 | 34 | 9
T5 | 12 | 10 | 51 | 39 | 29 | 8
Avg TAT = (32+42+55+40+39)/5 = 41.6 ms
Avg WT = (20+34+35+34+29)/5 = 30.4 ms
Avg RT = (0+7+4+9+8)/5 = 5.6 ms
REAL-WORLD CONNECTION
Windows NT's scheduler uses multilevel feedback internally, and Linux's CFS achieves a similar effect by tracking each task's virtual runtime. Interactive processes (keyboard, mouse clicks) use tiny CPU bursts, so they keep high effective priority — this is why your mouse cursor stays responsive even while a heavy video encode job runs in the background.
12
Core Banking — Race Condition & Synchronization
Past Year · CO2/K3 · [10 marks]
Process A handles deposits. Process B handles withdrawals. Both operate on shared account_balance. Initial balance = ₹10,000. A deposits ₹2,000, B withdraws ₹1,500.
Part i 5 marks — Develop pseudocode for both processes using an appropriate synchronization mechanism.
Part ii 2 marks — Explain how your solution ensures mutual exclusion, progress, and bounded waiting.
Part iii 3 marks — Trace with and without synchronization. Show the difference in results.
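Part i — the pseudocode did not survive in these notes; a minimal mutex-based sketch consistent with the Part ii/iii discussion below:
// ── Shared ───────────────────────────────────────────────
account_balance = 10000
semaphore mutex = 1

// ── Process A — Deposit ₹2,000 ───────────────────────────
Process_A():
    wait(mutex)                      // acquire lock
    temp = account_balance
    temp = temp + 2000
    account_balance = temp           // write back
    signal(mutex)                    // release lock

// ── Process B — Withdraw ₹1,500 ──────────────────────────
Process_B():
    wait(mutex)
    if account_balance >= 1500:
        account_balance = account_balance - 1500
    signal(mutex)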
Part ii — Three Requirements
MUTUAL EXCLUSION
Mutex ensures only one process modifies balance at a time
The mutex lock prevents A and B from simultaneously reading and writing account_balance. Only one holds the lock — the other blocks at lock().
PROGRESS
Lock is released immediately after the balance update — never held indefinitely
BOUNDED WAITING
Each process waits at most one transaction by the other process
After A completes and releases the lock, B acquires it immediately. No process can "cut in line" more than once before the waiting process proceeds.
Part iii — Trace: With vs Without Sync
Step | WITHOUT Sync (Race Condition) | WITH Mutex (Correct)
1 | A reads balance = 10000 | A acquires lock
2 | B reads balance = 10000 (same!) | B tries lock → BLOCKED
3 | A computes 10000+2000 = 12000 | A reads 10000, computes 12000, writes back
4 | B computes 10000−1500 = 8500 | A releases lock
5 | A writes 12000 to balance | B acquires lock, reads 12000
6 | B writes 8500 ← overwrites A's deposit! | B computes 12000−1500 = 10500, writes back
Result | ₹8,500 ❌ Lost ₹2,000 deposit! | ₹10,500 ✓ Correct
REAL-WORLD CONNECTION
Every bank database uses ACID transactions (Atomicity, Consistency, Isolation, Durability) — the database-level equivalent of mutex + journaling. The 2012 Knight Capital trading firm lost $440 million in 45 minutes partly due to unsynchronized software state. A race condition in production financial systems is catastrophically expensive.
Document Generator (DG) and Smart Printer (SP) share a bounded print queue of size N.
Constraints: DG always produces 2 docs consecutively. SP always prints 4 docs consecutively. SP only starts when ≥4 docs are available. DG waits if buffer full. SP waits if insufficient docs.
Part i 3 marks — Identify the required synchronization primitives and shared variables with their initial values.
Part ii 7 marks — Write the pseudocode for both DG and SP processes.
Part i — Synchronization Primitives
PRIMITIVES NEEDED
3 Semaphores + 1 Mutex + 1 Counter
Variable | Type | Initial Value | Purpose
mutex | Semaphore (binary) | 1 | Mutual exclusion for buffer access
empty | Counting Semaphore | N | Number of empty slots in buffer
full | Counting Semaphore | 0 | Number of documents in buffer
batch_ready | Semaphore | 0 | Signals SP when ≥4 docs are available
count | Integer | 0 | Current number of docs in buffer
Part ii — Pseudocode
// ── Document Generator (DG) — Producer ──────────────────
Process_DG():
    while (true):
        doc1 = generate_document()
        doc2 = generate_document()       // always produce 2
        wait(empty); wait(empty)         // need 2 empty slots
        wait(mutex)                      // enter critical section
        add doc1 to buffer; in = (in+1) % N
        add doc2 to buffer; in = (in+1) % N
        count += 2
        signal(mutex)                    // exit critical section
        signal(full); signal(full)       // 2 new docs added
        if (count >= 4):                 // notify SP if enough docs
            signal(batch_ready)

// ── Smart Printer (SP) — Consumer ───────────────────────
Process_SP():
    while (true):
        wait(batch_ready)                // block until ≥4 docs ready
        wait(full); wait(full)
        wait(full); wait(full)           // consume 4 docs
        wait(mutex)                      // enter critical section
        for i = 1 to 4:
            doc = buffer[out]
            out = (out+1) % N
            count -= 1
            print(doc)
        signal(mutex)                    // exit critical section
        signal(empty); signal(empty)
        signal(empty); signal(empty)     // 4 slots freed
REAL-WORLD CONNECTION
Windows Print Spooler is exactly this pattern — applications spool jobs (producers), the printer driver batches and sends to hardware (consumer). The OS manages the bounded buffer between them. The batch_ready semaphore represents the spooler's policy of only sending when a minimum page count is queued for efficiency.
Part a 3 marks — Contiguous allocation. Assume suitable starting blocks. Draw the allocation diagram and File Allocation Table (FAT).
Part b 4 marks — Linked allocation. Assume appropriate starting and ending blocks.
Part c 3 marks — Indexed allocation. Assume suitable index block. Draw the index table.
// Available blocks (removing 2,3,7,8,16,17,25):
[0,1,4,5,6,9,10,11,12,13,14,15,18,19,20,21,22,23,24,26,27,28,29,30,31]
Total available = 25 blocks | Total needed = 3+5+4+2+6+2 = 22 blocks ✓
Part a — Contiguous Allocation
KEY POINT
Each file occupies one consecutive run of blocks — fast sequential access. Suffers from external fragmentation — free blocks scattered between unavailable ones mean we can't always find a long enough run. Files must be pre-allocated.
File | Start Block | Length | Blocks Used
sys | 0 | 2 | 0, 1
code | 4 | 3 | 4, 5, 6
data | 9 | 5 | 9, 10, 11, 12, 13
bin | 14 | 2 | 14, 15
temp | 18 | 4 | 18, 19, 20, 21
logs | 26 | 6 | 26, 27, 28, 29, 30, 31
⚠️ Blocks 22–24 wasted — 3 free blocks that are too fragmented for any file needing contiguous space. This is external fragmentation.
Part b — Linked Allocation
KEY POINT
Each block has a pointer to the next. Any free blocks usable — no fragmentation.
FAT stores (start, end). Eliminates fragmentation but random access is O(n) — must follow chain block-by-block. A broken pointer corrupts the entire file.
File | Start → End | Block Chain
code | 0 → 4 | 0 → 1 → 4 → NULL
data | 5 → 11 | 5 → 6 → 9 → 10 → 11 → NULL
temp | 12 → 15 | 12 → 13 → 14 → 15 → NULL
sys | 18 → 19 | 18 → 19 → NULL
logs | 20 → 26 | 20 → 21 → 22 → 23 → 24 → 26 → NULL
bin | 27 → 28 | 27 → 28 → NULL
Part c — Indexed Allocation
KEY POINT
One dedicated index block per file stores all data block addresses.
Supports direct (random) access to any block. FAT stores only the index block number. Small overhead (one block per file for the index), but for very large files, multi-level indexing is used.
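One possible index table, shown for three of the files for brevity (the specific block numbers are illustrative assumptions; any free blocks from the pool work, with one index block consumed per file):
File | Index Block | Index Entries (data blocks)
code | 0 | [1, 4, 5]
data | 6 | [9, 10, 11, 12, 13]
sys | 14 | [15, 18]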
FAT32 (USB drives) = linked allocation. Each FAT entry points to the next cluster. ext4 (Linux) = indexed with extents (contiguous run optimization). NTFS (Windows) = B-tree indexed structure (MFT). The more sophisticated the file system, the better it handles large files and fragmentation.
Part b(i) 2 marks — Calculate the Need Matrix for all VMs.
Part b(ii) 3 marks — Determine whether the system is in a safe state or not.
Part b(iii) 3 marks — VM4 requests (R1=1, R2=0, R3=0, R4=0) at t=1ms. Analyze whether it can be safely granted.
Part b(iv) 2 marks — VM1 requests (R1=1, R2=1, R3=0, R4=1) at t=3ms. Analyze whether it can be safely granted.
Part a(i) — Resource Allocation Graph (RAG)
KEY CONCEPT
RAG has two edge types: Assignment (R→P) and Request (P→R)
RESOURCE ALLOCATION GRAPH
Part a(ii) — Wait-For Graph (WFG) & Deadlock Detection
KEY CONCEPT
WFG: Remove resource nodes. P→Q edge means P waits for a resource HELD BY Q.
WAIT-FOR GRAPH — Cycle = Deadlock
CONCLUSION
Deadlock exists between P1 and P2. P3 is an indirect victim.
P1 ↔ P2 form a deadlock cycle: P1 waits for R2 (held by P2), while P2 waits for R1 (held by P1). Neither can proceed.
P3 is not in the cycle but is effectively stuck — it's waiting for R2 which is held by deadlocked P2. P3 will never get R2 until the deadlock is resolved (typically by killing P1 or P2).
Part b(i) — Need Matrix
FORMULA
Need[i][j] = Max[i][j] − Allocation[i][j]
* Note: VM1's Max R4 = 0 but Allocation R4 = 1. This means VM1 has already been allocated more than its declared max — a data anomaly. We treat VM1's Need[R4] = 0 (it cannot need any more than its max).
VM | Need R1 | Need R2 | Need R3 | Need R4
VM1 | 3−1 = 2 | 3−2 = 1 | 2−1 = 1 | 0−1 → 0*
VM2 | 2−2 = 0 | 1−0 = 1 | 1−1 = 0 | 2−0 = 2
VM3 | 3−3 = 0 | 2−1 = 1 | 2−2 = 0 | 1−1 = 0
VM4 | 1−0 = 1 | 1−1 = 0 | 2−1 = 1 | 1−1 = 0
Part b(ii) — Safety Check
Initial Available = [1, 1, 0, 2]. Check each step: Need ≤ Available?
VM2: [0,1,0,2] ≤ [1,1,0,2] ✓ → Available = [3,1,1,2]
VM3: [0,1,0,0] ≤ [3,1,1,2] ✓ → Available = [6,2,3,3]
VM4: [1,0,1,0] ≤ [6,2,3,3] ✓ → Available = [6,3,4,4]
VM1: [2,1,1,0] ≤ [6,3,4,4] ✓ → Available = [7,5,5,5]
✅ SAFE STATE — Safe Sequence: VM2 → VM3 → VM4 → VM1
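Part b(iii) — VM4 requests [1, 0, 0, 0] at t=1ms. The working is missing from these notes; reconstructed from the matrices above:
Step 1: Request ≤ Need[VM4] = [1,0,1,0] ✓
Step 2: Request ≤ Available = [1,1,0,2] ✓
Step 3: Pretend-grant → Available = [0,1,0,2], Alloc[VM4] = [1,1,1,1], Need[VM4] = [0,0,1,0]
Safety: VM2 [0,1,0,2] ≤ [0,1,0,2] ✓ → [2,1,1,2]; VM3 [0,1,0,0] ✓ → [5,2,3,3]; VM4 [0,0,1,0] ✓ → [6,3,4,4]; VM1 [2,1,1,0] ✓ → [7,5,5,5]
✅ GRANT — the system remains safe (sequence VM2 → VM3 → VM4 → VM1)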
Part b(iv) — VM1 requests [1, 1, 0, 1] at t=3ms
Request = [1,1,0,1]
Check 1: Request ≤ Need[VM1] = [2,1,1,0]?
R4: request = 1 > need = 0 → VIOLATION!
VM1 is requesting MORE than its declared maximum need.
This is an abnormal/illegal request — the process is exceeding its own stated requirements.
❌ REJECT VM1's request — exceeds declared maximum need
OS should flag this as an error. Potential corrective action:
terminate VM1 or force it to re-declare its maximum needs.
REAL-WORLD CONNECTION
Kubernetes (container orchestration) uses a similar concept with resource requests and limits. A pod that tries to use more CPU/memory than its declared limit is throttled or killed. AWS EC2 instances can't exceed their instance type's physical limits — the hypervisor enforces hard caps, analogous to the Banker's max constraint.
17
Memory Addressing + Cache Replacement
Past Year · CO4/K3 · [15 marks]
Part a — A computer system has a logical address space of 32 pages, where each page contains 1024 bytes. Physical memory consists of 16 frames.
(i) 3 marks How many bits are required to represent the logical address?
(ii) 2 marks How many bits for the physical address?
Part b — A cloud storage client caches 3 recently accessed files. Access sequence: 1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 6, 7, 1
(i) 4 marks LRU replacement — hits and misses
(ii) 4 marks Optimal replacement — hits and misses
(iii) 2 marks Which performs better and why?
Part a — Address Bit Calculation
KEY CONCEPT
Logical address = [page number bits | offset bits]. Physical = [frame number bits | offset bits].
The offset bits are the same in both (same page/frame size). The difference is page number (logical) vs frame number (physical). The page table translates page# → frame#.
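The bit calculation itself (consistent with the 10-bit offset mentioned in the analogy below):
// Logical: 32 pages × 1024 B = 2^5 × 2^10 = 2^15 bytes → 15-bit logical address (5 page# + 10 offset)
// Physical: 16 frames × 1024 B = 2^4 × 2^10 = 2^14 bytes → 14-bit physical address (4 frame# + 10 offset)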
The page number is like a chapter in a book (logical structure). The frame number is the actual physical shelf location in the library. The 10-bit offset is the line number within the page — same in both addresses since page = frame in size. The page table is the library catalog.
Part b(i) — LRU Cache Replacement (cache size = 3)
Access | Cache State after | Result | Evicted
1 | [1, −, −] | MISS | —
2 | [1, 2, −] | MISS | —
3 | [1, 2, 3] | MISS | —
1 | [2, 3, 1] | HIT | —
2 | [3, 1, 2] | HIT | —
3 | [1, 2, 3] | HIT | —
4 | [2, 3, 4] | MISS | 1 (LRU)
5 | [3, 4, 5] | MISS | 2 (LRU)
1 | [4, 5, 1] | MISS | 3 (LRU)
2 | [5, 1, 2] | MISS | 4 (LRU)
3 | [1, 2, 3] | MISS | 5 (LRU)
6 | [2, 3, 6] | MISS | 1 (LRU)
7 | [3, 6, 7] | MISS | 2 (LRU)
1 | [6, 7, 1] | MISS | 3 (LRU)
Total: Misses = 11, Hits = 3
Part b(ii) — Optimal Replacement
Access | Cache State after | Result | Evicted (farthest future use)
1 | [1, −, −] | MISS | —
2 | [1, 2, −] | MISS | —
3 | [1, 2, 3] | MISS | —
1 | [1, 2, 3] | HIT | —
2 | [1, 2, 3] | HIT | —
3 | [1, 2, 3] | HIT | —
4 | [1, 2, 4] | MISS | 3 (next use: pos 11, the farthest)
5 | [1, 2, 5] | MISS | 4 (never used again)
1 | [1, 2, 5] | HIT ← OPT kept 1!
2 | [1, 2, 5] | HIT | —
3 | [1, 3, 5] | MISS | 2 (never used again)
6 | [1, 3, 6] | MISS | 5 (never used again)
7 | [1, 6, 7] | MISS | 3 (never used again)
1 | [1, 6, 7] | HIT ← OPT kept 1 again
Total: Misses = 8, Hits = 6 ✓
Part b(iii) — Which Performs Better?
Algorithm | Hits | Misses | Feasible?
Optimal (OPT) | 6 ✓ | 8 | No — needs future knowledge
LRU | 3 | 11 | Yes — uses past history
CONCLUSION
OPT is better (6 hits vs 3), but LRU is the best practical choice
OPT performs better because it uses future knowledge to retain the files that will be reused soonest — it kept file 1 resident for the whole run (hits at positions 9 and 14) because it knew 1 would be needed again, whereas LRU evicted it. However, LRU is the best implementable algorithm — it closely approximates OPT by using past access history as a predictor. Neither OPT nor LRU suffers from Belady's Anomaly (unlike FIFO).
REAL-WORLD CONNECTION
Redis (used by Twitter, GitHub, Stack Overflow) uses LRU and LFU eviction policies. CPU L1/L2/L3 caches use pseudo-LRU (hardware approximation). Web browsers cache recently visited pages using LRU. Choosing the wrong eviction policy for a production cache can cause 5–10× slowdowns under load.
📄 Second Past Year Paper
18
OS Architecture — Layered vs Monolithic for HPC
2nd Paper Q01 · CO1/K1 · [10 marks]
A high-performance computing system requires: fast execution, minimal overhead, ease of maintenance, strong fault isolation, and real-time multi-user services.
Part a 5 marks — Analyze how a layered structure and a monolithic structure would handle system design. Determine which is more suitable and justify with respect to performance, modularity, and reliability.
Part b 5 marks — For the selected structure, identify any three key functionalities and discuss an appropriate abstraction model. Explain how these abstractions interact to simplify system design while ensuring efficient resource utilization and system stability.
Part a — Layered vs Monolithic: Which fits HPC?
KEY POINT 1
Monolithic OS — Everything in One Kernel
All OS services (process management, memory, file system, device drivers) run in a single large kernel binary in privileged mode. System calls go directly to the service — no layer-crossing overhead. Examples: Linux, early Unix.
KEY POINT 2
Layered OS — Strict Hierarchy of Abstraction
OS is divided into numbered layers (layer 0 = hardware, layer N = user interface). Each layer only calls services from the layer directly below it. Highly modular but slower — a system call may traverse multiple layers, each adding overhead.
Criterion | Monolithic | Layered
Performance | ⚡ Excellent — direct calls, no layer overhead | Moderate — cross-layer penalties
Modularity | Low — tightly coupled | High — each layer independent
Fault Isolation | Poor — one bug can crash kernel | Good — faults contained per layer
Ease of Maintenance | Harder (large codebase) | Easier (modular changes)
Real-time Overhead | Minimal | Higher
✅ For HPC with real-time requirements: Monolithic wins on performance. However, a Modular Monolithic design (like Linux) — where loadable kernel modules add modularity without full layering — is the best practical choice: raw speed + selective modularity.
REAL-WORLD
Linux (monolithic + loadable modules) powers effectively all of the world's top supercomputers. It outperforms layered designs because HPC workloads cannot afford the extra function-call hops. Windows NT uses a more layered (hybrid) design and pays the performance cost — hence it's rarely used for HPC clusters.
Part b — Three Key Functionalities + Abstraction Model
FUNCTIONALITY 1
Process Abstraction (via PCB)
The OS abstracts raw CPU execution as a Process Control Block (PCB). Each process sees a private, isolated CPU view (registers, PC, stack). The scheduler multiplexes the real CPU among PCBs transparently — processes don't know they share hardware.
FUNCTIONALITY 2
Memory Abstraction (via Virtual Address Space)
Each process gets its own virtual address space. The MMU + page tables translate virtual → physical transparently. Processes can't access each other's memory — critical for fault isolation in multi-user HPC systems.
FUNCTIONALITY 3
I/O Abstraction (via File Descriptor)
All I/O (files, sockets, pipes, devices) is abstracted as a file descriptor. The same read()/write() system call works whether you're reading a disk file, a network socket, or a GPU memory buffer. This simplifies application code dramatically.
HOW THEY INTERACT
A process (PCB abstraction) reads data (file descriptor abstraction) into its private memory (virtual address abstraction). The OS coordinates all three: the scheduler gives the process CPU time → the MMU maps its virtual buffer address to real RAM → the device driver moves data through the file descriptor into that buffer. Each abstraction hides the complexity below it.
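A small C illustration of the file-descriptor abstraction from Functionality 3: the same read() call works regardless of what the descriptor names (the path /etc/hostname is just an example; a pipe or socket fd would be read identically):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    // open() returns an integer descriptor; the kernel hides whether
    // it names a disk file, device, pipe, or socket.
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf - 1);  // same call for ANY fd type
    if (n > 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }
    close(fd);
    return 0;
}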
19
Interrupts & Thread Types — Five Sub-Scenarios
2nd Paper Q02 · CO1/K2 · [10 marks]
A multitasking system runs an application with multiple threads. Five sub-scenarios occur:
(i) A key is pressed by the user while the application is running.
(ii) A thread performs independent computations without OS intervention.
(iii) A disk operation completes and signals the processor.
(iv) The OS schedules threads for execution across CPU cores.
(v) Multiple threads managed entirely by a user-level library without kernel awareness.
Part a 5 marks — Identify whether an interrupt is involved for each sub-scenario. Justify based on event type and system response.
Part b 5 marks — Determine whether each sub-scenario involves user-level threads or kernel-level threads. Provide brief reasoning.
Part a — Interrupt Involved?
Scenario | Interrupt? | Type | Justification
(i) Key pressed | YES | Hardware Interrupt (IRQ) | Keyboard controller raises an IRQ. CPU suspends current task, jumps to keyboard ISR, buffers the keycode, returns.
(ii) Thread computes | NO | None | Pure computation within allocated CPU time slice. No external event — no interrupt unless a timer interrupt eventually preempts it.
(iii) Disk completes | YES | Hardware Interrupt (I/O) | Disk controller raises an interrupt when DMA transfer finishes. OS ISR marks the waiting process as ready.
(iv) OS schedules threads | YES | Timer Interrupt | A periodic timer interrupt triggers the scheduler. The OS performs a context switch — saves old thread state, loads new thread.
(v) User-level library threads | NO | None | User-level thread switching is done by the library (e.g., setjmp/longjmp). No kernel involvement, no mode switch, no interrupt.
Part b — User-Level vs Kernel-Level Threads
Scenario | Thread Type | Reasoning
(i) Key press response | Kernel-Level | Keyboard interrupt causes a kernel ISR to run. The kernel may then wake a kernel-level thread waiting for input.
(ii) Independent computation | Either (context-dependent) | If the thread was created by the kernel (e.g., pthread), it's kernel-level. If by a user library, user-level. Either can compute independently.
(iii) Disk I/O completion | Kernel-Level | Disk interrupt is handled by the kernel. The kernel unblocks the waiting kernel thread and places it in the ready queue.
(iv) OS scheduling across cores | Kernel-Level | Only the kernel can schedule across multiple CPU cores — it manages one kernel thread per core. User-level threads are invisible to the scheduler.
(v) User-level library manages threads | User-Level | By definition — the library (e.g., GNU Pth, old Java green threads) handles switching in user space. Kernel sees only one process thread.
REAL-WORLD
Modern Linux uses kernel-level threads (via clone() syscall) for all pthreads. User-level threads (goroutines in Go, Erlang processes) are managed by a user-space runtime — thousands can run atop a few kernel threads, giving huge scalability but losing true multi-core parallelism if the runtime doesn't do M:N mapping.
20
CPU Scheduling — Preemptive Priority + Round Robin
2nd Paper Q03 · CO2/K2 · [10 marks]
Five processes P1–P5 with lower importance number = higher priority. Same-priority processes share CPU in 2ms RR. Newly arriving higher-priority process preempts immediately.
Process | Arrival (ms) | Burst (ms) | Priority
P1 | 0 | 5 | 2
P2 | 1 | 4 | 1 ← highest
P3 | 2 | 6 | 3 ← lowest
P4 | 3 | 3 | 1 ← highest
P5 | 4 | 2 | 2
Part a 7 marks — Construct the Gantt chart and specify the process completion sequence.
Part b 3 marks — Calculate average waiting time, average turnaround time, and average response time.
Part a — Gantt Chart
ALGORITHM
Preemptive Priority + RR(2ms) for ties. Lower number = higher priority.
STEP-BY-STEP TRACE
t=0  Only P1 (pri 2) → Run P1
t=1  P2 arrives (pri 1 beats P1's 2) → Preempt P1 (4ms left). Run P2.
t=3  P4 arrives (pri 1, ties P2). RR(2ms): P2's slice expires → Run P4.
t=5  P4's slice expires → Run P2 (2ms left).
t=7  P2 done → Run P4 (1ms left).
t=8  P4 done. Ready: P1 (4ms, pri 2) and P5 (2ms, pri 2 — arrived at t=4 but blocked behind the priority-1 processes). RR between them: P1 first (earlier arrival).
t=10 P1's slice expires → Run P5 (2ms).
t=12 P5 done → Run P1 (2ms left).
t=14 P1 done. Only P3 (pri 3, lowest) → Run P3 (6ms).
t=20 P3 done.
GANTT CHART — Preemptive Priority + RR(2ms) for equal priority
| P1 0–1 | P2 1–3 | P4 3–5 | P2 5–7 | P4 7–8 | P1 8–10 | P5 10–12 | P1 12–14 | P3 14–20 |
Completion sequence: P2 → P4 → P5 → P1 → P3
Part b — WT, TAT, RT
// TAT = CT − AT | WT = TAT − BT | RT = First CPU time − AT
P1: AT=0, BT=5, CT=14 → TAT=14, WT=9, RT=0 (first ran at t=0)
P2: AT=1, BT=4, CT=7 → TAT=6, WT=2, RT=0 (first ran at t=1)
P3: AT=2, BT=6, CT=20 → TAT=18, WT=12, RT=12 (first ran at t=14)
P4: AT=3, BT=3, CT=8 → TAT=5, WT=2, RT=0 (first ran at t=3)
P5: AT=4, BT=2, CT=12 → TAT=8, WT=6, RT=6 (first ran at t=10)
Avg TAT = (14+6+18+5+8)/5 = 10.2 ms
Avg WT = (9+2+12+2+6)/5 = 6.2 ms
Avg RT = (0+0+12+0+6)/5 = 3.6 ms
21
Hospital Resource Management — Banker's Algorithm
2nd Paper Q04 · CO3/K3 · [10 marks]
Hospital resources: ICU units=6, Equipment=8, Nursing teams=7. Five patients P1–P5.
Patient | Alloc ICU | Alloc Equip | Alloc Nurse | Max ICU | Max Equip | Max Nurse
P1 | 1 | 2 | 1 | 2 | 3 | 1
P2 | 2 | 0 | 1 | 3 | 2 | 2
P3 | 1 | 1 | 2 | 1 | 3 | 3
P4 | 1 | 0 | 1 | 2 | 2 | 2
P5 | 2 | 1 | 1 | 2 | 3 | 2
Part a 4 marks — Compute the Available vector and the Need matrix.
Part b 6 marks — Determine if the system is in a safe state. Find the safe sequence or identify which patients' needs cannot be fulfilled.
Part a — Available Vector + Need Matrix
// Total Allocated
ICU: 1+2+1+1+2 = 7 | Equip: 2+0+1+0+1 = 4 | Nurse: 1+1+2+1+1 = 6
// Inconsistency in the question: allocated ICU (7) exceeds the stated total (6).
// For exam purposes, assume the intended total ICU = 8 (a common misprint in such problems). Then:
Available: ICU = 8−7 = 1, Equip = 8−4 = 4, Nurse = 7−6 = 1
Patient | Need ICU | Need Equip | Need Nurse
P1 | 2−1 = 1 | 3−2 = 1 | 1−1 = 0
P2 | 3−2 = 1 | 2−0 = 2 | 2−1 = 1
P3 | 1−1 = 0 | 3−1 = 2 | 3−2 = 1
P4 | 2−1 = 1 | 2−0 = 2 | 2−1 = 1
P5 | 2−2 = 0 | 3−1 = 2 | 2−1 = 1
Part b — Safety Check (Available = [1, 4, 1])
Available = [ICU=1, Equip=4, Nurse=1]
P1: Need[1,1,0] ≤ [1,4,1]? YES → Run P1. Available = [1,4,1]+[1,2,1] = [2,6,2]
P3: Need[0,2,1] ≤ [2,6,2]? YES → Run P3. Available = [2,6,2]+[1,1,2] = [3,7,4]
P5: Need[0,2,1] ≤ [3,7,4]? YES → Run P5. Available = [3,7,4]+[2,1,1] = [5,8,5]
P2: Need[1,2,1] ≤ [5,8,5]? YES → Run P2. Available = [5,8,5]+[2,0,1] = [7,8,6]
P4: Need[1,2,1] ≤ [7,8,6]? YES → Run P4. ✓
✅ SAFE — Safe Sequence: P1 → P3 → P5 → P2 → P4
22
Library Database — Readers-Writers Problem
2nd Paper Q05 · CO3/K3 · [10 marks]
Library database: Students and faculty can read; librarians can update/delete. Requests: R1(Student read), R2(Faculty read), W1(Librarian update), R3(Student read), W2(Librarian update), R4(Student read).
Part a 6 marks — Identify a suitable synchronization technique. Write pseudocode. Determine the execution order with justification.
Part b 4 marks — During exam time (heavy reads + frequent writes), analyze fairness, starvation prevention, and suggest improvements.
Part a — Readers-Writers with Semaphores
KEY POINT
Readers-Writers Problem: Multiple readers OK simultaneously; writers need exclusive access
Use a mutex for reader count + a write semaphore. First reader blocks writers; last reader unblocks them. This is the classic "Readers-First" solution.
// ── Shared Variables ──────────────────────────────────────
semaphore write_lock = 1   // exclusive access for writers
semaphore count_mutex = 1  // protects reader_count
int reader_count = 0       // active readers

// ── Reader Process ────────────────────────────────────────
Reader():
    wait(count_mutex)
    reader_count++
    if reader_count == 1:      // first reader blocks writers
        wait(write_lock)
    signal(count_mutex)
    // ─── READ DATABASE ───
    wait(count_mutex)
    reader_count--
    if reader_count == 0:      // last reader unblocks writers
        signal(write_lock)
    signal(count_mutex)

// ── Writer Process ────────────────────────────────────────
Writer():
    wait(write_lock)           // exclusive lock
    // ─── WRITE/UPDATE DATABASE ───
    signal(write_lock)
R1 and R2 are readers — they can run concurrently (reader_count reaches 2). W1 blocks on write_lock. When R1 and R2 finish, reader_count drops to 0 → write_lock signaled → W1 runs exclusively. Then R3 runs, W2 waits for R3, then W2 runs, then R4.
Part b — Fairness, Starvation, Improvements
PROBLEM
Reader-First causes Writer Starvation under heavy read load
During exams, if readers continuously enter, reader_count never drops to 0, so write_lock is never signaled → librarian writes starve indefinitely.
IMPROVEMENT 1
Writers-Priority: Once a writer arrives, no new readers allowed in
Add a write_pending flag. New readers check this — if a writer is waiting, they block. Existing readers finish, writer runs, then new readers proceed. Prevents writer starvation.
IMPROVEMENT 2
Fair FIFO Queue (e.g., using a ticket-lock or arrival-order semaphore)
Replace the binary approach with a turnstile pattern — every reader and writer passes through a FIFO semaphore on entry. Requests are served in arrival order, ensuring neither readers nor writers starve.
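A sketch of Improvement 1 in the same pseudocode style as Part a (the read_try gate is the classic writers-priority pattern; the variable names are illustrative):
// ── Writers-Priority sketch ──────────────────────────────
semaphore write_lock = 1, read_try = 1, count_mutex = 1, write_mutex = 1
int reader_count = 0, writer_count = 0

Writer():
    wait(write_mutex)
    writer_count++
    if writer_count == 1: wait(read_try)    // first writer closes the gate to NEW readers
    signal(write_mutex)
    wait(write_lock)
    // ─── WRITE ───
    signal(write_lock)
    wait(write_mutex)
    writer_count--
    if writer_count == 0: signal(read_try)  // last writer reopens the gate
    signal(write_mutex)

Reader():
    wait(read_try)                           // blocks here while any writer waits
    wait(count_mutex)
    reader_count++
    if reader_count == 1: wait(write_lock)
    signal(count_mutex)
    signal(read_try)
    // ─── READ ───
    wait(count_mutex)
    reader_count--
    if reader_count == 0: signal(write_lock)
    signal(count_mutex)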
REAL-WORLD
PostgreSQL uses LWLocks (lightweight locks) with a similar fairness queue for its buffer manager. Linux kernel's rwsem (read-write semaphore) implements the writers-priority approach to prevent starvation. SQLite uses a single writer lock — simpler but lower throughput.
Part a 6 marks — Worst Fit allocation (place each process in the largest available block). Show step-by-step assignment.
Part b 4 marks — Using given usage times, compute when each process completes. Mention the execution sequence.
Part a — Worst Fit: Place in Largest Available Block
KEY POINT
Worst Fit: Always allocate to the LARGEST free block to leave the biggest possible leftover fragment
Counter-intuitively useful — leaves larger fragments for future allocations rather than fragmenting many small holes. Opposite of Best Fit.
Step | Process | Size | Available blocks (sorted desc) | Block Used | Remaining
1 | P1 | 40KB | B1=90, B4=70, B5=50, B3=30, B2=20 | B1 (90KB) | B1 → 50KB left
2 | P2 | 30KB | B4=70, B5=50, B1=50, B3=30, B2=20 | B4 (70KB) | B4 → 40KB left
3 | P3 | 50KB | B5=50, B1=50, B4=40, B3=30, B2=20 | B5 (50KB) | B5 → 0KB
4 | P4 | 60KB | B1=50, B4=40, B3=30, B2=20 | No block ≥ 60KB | P4 → WAIT
5 | P5 | 80KB | B1=50, B4=40, B3=30, B2=20 | No block ≥ 80KB | P5 → WAIT
6 | P6 | 20KB | B1=50, B4=40, B3=30, B2=20 | B1 (50KB) | B1 → 30KB left
7 | P7 | 40KB | B4=40, B1=30, B3=30, B2=20 | B4 (40KB) | B4 → 0KB
8 | P8 | 30KB | B1=30, B3=30, B2=20 | B1 or B3 (30KB) | → 0KB
⚠️ P4 (60KB) and P5 (80KB) cannot be allocated — no single block large enough remains after Worst Fit assignments. This highlights Worst Fit's weakness with large processes.
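The selection loop itself is tiny; here is a minimal sketch in C with this question's block and process sizes hard-coded. One caveat: ties between equal-size blocks are broken by lower block index here, so steps 3 and 8 may pick B1/B3 where the table picks B5/B1 — either choice is a valid Worst Fit answer.

#include <stdio.h>

int main(void) {
    int block[] = {90, 20, 30, 70, 50};                // B1..B5 free sizes (KB)
    int proc[]  = {40, 30, 50, 60, 80, 20, 40, 30};    // P1..P8 sizes (KB)
    for (int p = 0; p < 8; p++) {
        int worst = -1;
        for (int b = 0; b < 5; b++)                    // pick the LARGEST block that fits
            if (block[b] >= proc[p] && (worst < 0 || block[b] > block[worst]))
                worst = b;
        if (worst < 0) { printf("P%d (%dKB) -> WAIT\n", p + 1, proc[p]); continue; }
        printf("P%d (%dKB) -> B%d, %dKB left\n",
               p + 1, proc[p], worst + 1, block[worst] - proc[p]);
        block[worst] -= proc[p];                       // shrink the chosen hole
    }
    return 0;
}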
Part b — Execution Times (start at t=0, usage time = duration)
Part a 5 marks — Server deployment: multiple OS on physical server for high performance. Identify type of hypervisor, explain working principle with diagram.
Part b 5 marks — Personal computer testing: run multiple VMs without affecting host. Identify type of hypervisor, explain working principle with diagram.
Part a — Server Deployment: Type 1 Hypervisor (Bare-Metal)
KEY POINT
Type 1 (Bare-Metal) Hypervisor runs DIRECTLY on hardware — no host OS
Examples: VMware ESXi, Microsoft Hyper-V, Xen, KVM. The hypervisor IS the OS at the hardware level. VMs run on top of it. Maximum performance, minimum overhead — ideal for servers where every CPU cycle counts.
TYPE 1 HYPERVISOR — BARE-METAL ARCHITECTURE
WORKING PRINCIPLE
The hypervisor intercepts all privileged instructions from VMs via hardware virtualization extensions (Intel VT-x, AMD-V). Each VM thinks it has real hardware — the hypervisor translates VM requests to physical hardware operations. No host OS layer = near-native speed.
Part b — Personal Computer Testing: Type 2 Hypervisor (Hosted)
KEY POINT
Type 2 (Hosted) Hypervisor runs ON TOP of an existing host OS as an application
Examples: VMware Workstation, VirtualBox, Parallels. The host OS (Windows/macOS) runs normally. The hypervisor is just an app. Easy to install, flexible — developer can run Windows 11 + Ubuntu + Red Hat simultaneously on their MacBook, then close the app and use macOS normally. Higher overhead than Type 1 due to the host OS layer.
TYPE 2 HYPERVISOR — HOSTED ARCHITECTURE
REAL-WORLD
The question scenario (VMware Workstation on Intel i9, 32GB RAM, 1TB SSD) is a textbook Type 2 setup. This is standard in software companies — developers test cross-platform apps without needing separate hardware. GitHub Actions CI/CD also uses VMs for isolation, but runs Type 1 for speed.
Disk Scheduling — SSTF, C-SCAN, C-LOOK (Two Batches)
2nd Paper Q09 · CO5/K3 · [15 marks]
Hard disk: 1500 cylinders, 5400 RPM, avg seek=4ms, transfer=200KB/s, sector=1KB, 800 sectors/cylinder. Head at cylinder 250. B2 arrives only after B1 completes.
Part b — C-SCAN — B1 then B2 (moving toward higher cylinders first)
B1: head=250, moving UP. Sorted above 250: 300,700,900,1200. Below: 50
250→300(50)→700(400)→900(200)→1200(300) → jump to 50(1150) → done (50 is the only request below 250)
Total B1 C-SCAN = 2100 cylinders. Head ends at 50.
B2: head=50, moving UP. Requests=[1400,600,200,1100,400]. All above 50.
50→200(150)→400(200)→600(200)→1100(500)→1400(300)
Total B2 C-SCAN = 1350 cylinders.
Total C-SCAN = 2100 + 1350 = 3,450 cylinders
Part b — C-LOOK — B1 then B2 (only goes as far as last request)
B1: head=250, moving UP. Above 250: 300,700,900,1200. Below: 50
250→300(50)→700(400)→900(200)→1200(300) ← last UP request, jump to 50(1150)
Total B1 C-LOOK = 2100 cylinders. Head ends at 50.
B2: head=50. All above 50: 200,400,600,1100,1400
50→200(150)→400(200)→600(200)→1100(500)→1400(300)
Total B2 C-LOOK = 1350 cylinders.
Total C-LOOK = 2100 + 1350 = 3,450 cylinders
(Identical to C-SCAN here because this key charges the wrap-around jump the same way in both and no requests lie between 1200 and the boundary. Note: strict C-SCAN sweeps to the last cylinder (1499) and wraps to 0 before seeking to 50, which would add boundary travel; state the convention you use in your answer.)
Algorithm | B1 | B2 | Total | Best For
SSTF | 1,450 | 1,600 | 3,050 ✓ | Minimum seek, batch workloads
C-SCAN | 2,100 | 1,350 | 3,450 | Uniform wait, heavy load
C-LOOK | 2,100 | 1,350 | 3,450 | C-SCAN without wasted travel
Part c — Limitations
SSTF LIMITATION
Starvation — requests at far ends may never be serviced if nearby requests keep arriving
C-SCAN LIMITATION
Wasted head movement — travels to disk boundary even if no requests exist there
C-LOOK LIMITATION
Jump overhead — the rapid jump from last request back to first may cause mechanical stress and isn't instantaneous
Part a 3 marks — Identify the kernel structure type that facilitates a dynamically loadable module that can be added to a running system without rebooting or rebuilding the kernel.
Part b 7 marks — For the C code below, explain in detail how protection is achieved through abstraction in a microkernel OS. Justify with a neat diagram.
#include <stdio.h>

int main() {
    int i, n;
    scanf("%d", &n);
    for (i = 0; i < n; i++)
        printf("%d\n", i);
    return 0;
}
Part a — Modular Monolithic Kernel
The Modular Monolithic Kernel supports dynamically loadable modules. The core kernel is loaded at boot; additional modules (device drivers, file systems, network protocols) can be inserted or removed at runtime via insmod / rmmod (Linux) without rebooting. Example: Linux's .ko files — plug in a GPU driver while the system is running.
WHY NOT PURE MONOLITHIC?
Pure monolithic kernels require full recompilation to add new features. Pure microkernels are too slow for production. Modular monolithic gives the best of both: performance of monolithic + flexibility of modules.
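To make this concrete, here is a minimal sketch of a Linux loadable module (the hello name and log strings are illustrative). Built against the kernel headers with a standard Kbuild makefile, it loads with insmod hello.ko and unloads with rmmod hello, with no reboot or kernel rebuild.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void) {      // runs at insmod
    printk(KERN_INFO "hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void) {     // runs at rmmod
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");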
Part b — Protection via Abstraction in Microkernel
KEY POINT 1
Dual-Mode Protection: User Mode vs Kernel Mode
When main() runs, the CPU is in User Mode — it cannot directly access hardware, memory of other processes, or I/O ports. When scanf() or printf() is called, a system call (trap/software interrupt) switches CPU to Kernel Mode, executes the I/O, then returns to User Mode.
KEY POINT 2
Microkernel: Each service runs in user space as a separate server process
In a microkernel, scanf sends an IPC message to the I/O Server (in user space). The I/O Server makes a minimal kernel call to the microkernel for actual hardware access. This means even if the I/O server crashes, the kernel itself is unaffected — strong fault isolation.
MICROKERNEL PROTECTION — C PROGRAM EXECUTION
PROTECTION MECHANISMS FOR THE C PROGRAM
scanf() → the app sends an IPC message to the I/O server → the I/O server asks the microkernel for keyboard access → the kernel reads input and returns it via the I/O server. The app never touches the keyboard driver directly.
for loop → runs entirely in user space; no kernel involvement. If the loop runs forever, the kernel's timer interrupt can still preempt it — the app cannot disable interrupts.
printf() → same path as scanf, via the I/O server. Screen access is mediated, never direct.
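The same trap-and-return round trip is visible in miniature on a conventional Linux system; a sketch using the raw write() wrapper instead of printf(), so the system call boundary is explicit (in a microkernel this call would instead become an IPC message to the I/O server):

#include <unistd.h>

int main(void) {
    // write() is the thin user-space wrapper for the write system call:
    // the CPU traps to Kernel Mode, the kernel touches the terminal
    // device on our behalf, then execution returns to User Mode.
    const char msg[] = "hello from user mode\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}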
28
fork() + Pthreads — Even Sum, Odd Sum, Average
3rd Paper Q02 · CO1/K2 · [10 marks]
Part a 6 marks — Write a C program: parent computes sum of even numbers. Child creates two threads: Thread 1 = sum of odd numbers, Thread 2 = average of all numbers. Child waits for threads, prints. Parent waits for child.
Part b 4 marks — Draw the process state diagram for the specified code.
Part a — C Program
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>

int arr[] = {1,2,3,4,5,6,7,8,9,10};
int n = 10;

// Thread 1: sum of odd numbers
void* sum_odd(void* arg) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        if (arr[i] % 2 != 0) sum += arr[i];
    printf("Child T1 - Sum of odd: %d\n", sum);
    return NULL;
}

// Thread 2: average of all numbers
void* calc_avg(void* arg) {
    double sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];
    printf("Child T2 - Average: %.2f\n", sum/n);
    return NULL;
}

int main() {
    pid_t pid = fork();
    if (pid == 0) {                  // CHILD PROCESS
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_odd, NULL);
        pthread_create(&t2, NULL, calc_avg, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("Child done.\n");
    } else {                         // PARENT PROCESS
        int sum_even = 0;
        for (int i = 0; i < n; i++)
            if (arr[i] % 2 == 0) sum_even += arr[i];
        printf("Parent - Sum of even: %d\n", sum_even);
        wait(NULL);                  // wait for child
        printf("Parent done.\n");
    }
    return 0;
}
Part b — Process State Diagram
PROCESS + THREAD LIFECYCLE
29
SRTF Scheduling — Shortest Remaining Time First
3rd Paper Q03 · CO2/K5 · [10 marks]
Process
Arrival (ms)
Burst (ms)
P1
0
5
P2
1
3
P3
2
1
P4
3
2
P5
4
4
Part a 6 marks — SRTF: determine execution sequence, Gantt chart, average WT and TAT. Comment on responsiveness and fairness.
Part b 4 marks — Discuss how different scheduling algorithms impact efficiency in various scenarios.
Part a — SRTF Trace
ALGORITHM
SRTF = Preemptive SJF. Always run the process with shortest REMAINING burst time.
STEP-BY-STEP
t=0: P1 arrives (5 ms). Only P1 → run P1.
t=1: P2 arrives (3 ms). P1 has 4 ms left. P2 shorter → preempt P1, run P2.
t=2: P3 arrives (1 ms). P2 has 2 ms left. P3 shorter → preempt P2, run P3.
t=3: P3 done. P4 arrives (2 ms). Ready: P1(4 ms), P2(2 ms), P4(2 ms). Shortest = P2 or P4 (both 2 ms) → pick P2 (arrived first).
t=4: P5 arrives (4 ms). P2 has 1 ms left → continue P2.
t=5: P2 done. Ready: P1(4 ms), P4(2 ms), P5(4 ms). Shortest = P4 → run P4.
t=7: P4 done. Ready: P1(4 ms), P5(4 ms). Tie → P1 (arrived first).
t=11: P1 done → run P5 (4 ms).
t=15: P5 done.
GANTT CHART — SRTF
| P1 | P2 | P3 | P2 | P4 | P1 | P5 |
0    1    2    3    5    7    11   15
// TAT = CT−AT | WT = TAT−BT | RT = First CPU − AT
P1: AT=0, BT=5, CT=11 → TAT=11, WT=6, RT=0
P2: AT=1, BT=3, CT=5 → TAT=4, WT=1, RT=0
P3: AT=2, BT=1, CT=3 → TAT=1, WT=0, RT=0
P4: AT=3, BT=2, CT=7 → TAT=4, WT=2, RT=2
P5: AT=4, BT=4, CT=15 → TAT=11, WT=7, RT=7
Avg TAT = (11+4+1+4+11)/5 = 6.2 ms
Avg WT = (6+1+0+2+7)/5 = 3.2 ms
⚠️ Fairness concern: P1 and P5 (longer bursts) suffer high waiting times (6ms and 7ms). SRTF minimizes average WT but is unfair to longer processes. P3 gets perfect 0ms wait due to arriving with the shortest burst. This mirrors real-time system behavior where responsiveness (P3 done in 1ms) comes at the cost of fairness to P1.
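As a self-check, a minimal SRTF simulator in C (a sketch: arrival/burst values hard-coded from this table, 1 ms ticks, ties broken by lowest index, which matches earliest arrival here) reproduces the averages above.

#include <stdio.h>
#include <limits.h>

int main(void) {
    int at[] = {0, 1, 2, 3, 4};        // arrival times (ms)
    int bt[] = {5, 3, 1, 2, 4};        // burst times (ms)
    int rem[5], ct[5], done = 0;
    for (int i = 0; i < 5; i++) rem[i] = bt[i];
    for (int t = 0; done < 5; t++) {
        int pick = -1, best = INT_MAX;
        for (int i = 0; i < 5; i++)    // shortest remaining among arrived
            if (at[i] <= t && rem[i] > 0 && rem[i] < best) { best = rem[i]; pick = i; }
        if (pick < 0) continue;        // idle tick (never happens here)
        if (--rem[pick] == 0) { ct[pick] = t + 1; done++; }
    }
    double tat = 0, wt = 0;
    for (int i = 0; i < 5; i++) { tat += ct[i] - at[i]; wt += ct[i] - at[i] - bt[i]; }
    printf("Avg TAT = %.1f ms, Avg WT = %.1f ms\n", tat / 5, wt / 5);  // 6.2 / 3.2
    return 0;
}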
Part b — Algorithm Comparison by Scenario
Scenario | Best Algorithm | Why
Interactive/GUI (desktop) | Round Robin | Every process gets the CPU regularly — responsive UI
Batch processing (known lengths) | SJF / SRTF | Minimizes average WT when burst times are known
Real-time (hard deadlines) | EDF / Priority Preemptive | Critical tasks always meet their deadlines
Server with mixed workloads | MLFQ | Adapts automatically: short jobs stay in high-priority queues, long jobs are demoted
30
Drone Navigation — Race Condition Between Sensor and Control Threads
Two threads share variables x=0, y=0. SensorThread updates x,y from GPS. ControlThread reads x,y to decide navigation.
Part a 4 marks — Identify the race condition. Show via interleaved execution timeline how partial updates of x and y cause incorrect control decisions.
Part b 3 marks — Why does this occur even though both threads access the same shared variables directly? Does compiler/CPU caching worsen it?
Part c 3 marks — Modify pseudocode using a synchronization mechanism to ensure data consistency without halting concurrent performance entirely.
Part a — Race Condition + Interleaved Execution
THE RACE CONDITION
SensorThread writes x and y in two SEPARATE steps — ControlThread can read in between
x and y represent a coordinate pair. They should always be updated atomically (as a unit). But x = getGPS_X() and y = getGPS_Y() are two separate memory writes. The ControlThread can read the new x but the old y — producing a phantom coordinate that never existed in reality, causing the drone to steer to a nonexistent location.
INTERLEAVED TIMELINE (RACE)
SensorThread                              | ControlThread
x = getGPS_X() → x = 150                  |
                                          | if (x > 100 && y > 100) → reads x=150, y=0 (OLD!)
                                          | 150 > 100 but 0 > 100 is FALSE → no adjustCourse()
y = getGPS_Y() → y = 150                  | (too late — decision already made)
Result: the drone ignores the real position (150,150) and flies into the obstacle!
Part b — Why It Happens + Cache Effects
ROOT CAUSE
Non-atomic two-variable update + no memory visibility guarantee
Even though both threads "access the same variables," the CPU executes instructions one at a time. Between the two writes (x=... and y=...), the scheduler can switch threads. This window is the race.
CPU Caching makes it worse: Each CPU core has its own L1/L2 cache. SensorThread running on Core 0 writes x=150 to Core 0's cache. ControlThread on Core 1 reads x from Core 1's cache — which still holds x=0 (stale). Without a memory barrier (which a mutex provides), the update may not be visible to Core 1 for many cycles.
Part c — Fix with Read-Write Lock
// READ-WRITE LOCK: allows concurrent reads, exclusive writes.
// Better than a mutex — ControlThread can still read while
// SensorThread is NOT writing (readers only block during writes).
pthread_rwlock_t coord_lock = PTHREAD_RWLOCK_INITIALIZER;
int x = 0, y = 0;

SensorThread():
    while (true):
        int nx = getGPS_X()
        int ny = getGPS_Y()
        pthread_rwlock_wrlock(&coord_lock)  // WRITE LOCK — exclusive
        x = nx                              // pair updated as a unit
        y = ny
        pthread_rwlock_unlock(&coord_lock)

ControlThread():
    while (true):
        pthread_rwlock_rdlock(&coord_lock)  // READ LOCK — shared OK
        int cx = x, cy = y                  // consistent snapshot
        pthread_rwlock_unlock(&coord_lock)
        if (cx > 100 && cy > 100):
            adjustCourse()
REAL-WORLD
Flight-controller firmware commonly uses exactly this pattern — a high-priority IMU/sensor thread writes position data under a write lock while multiple navigation threads read under read locks, allowing kHz-rate sensor updates alongside safe reads. Without it, a drone could act on exactly the phantom coordinate shown above.
31
IoT Log System — Readers-Writers with Fairness
3rd Paper Q05 · CO3/K4 · [10 marks]
Shared file eventLog. Reader threads = monitoring dashboards (frequent reads). Writer threads = logging daemons (frequent writes, every few ms). Goal: multiple readers simultaneously, writers exclusive, no reader reads during write, no writer writes during read.
Part a 6 marks — Pseudocode using semaphores or monitors. Must prevent BOTH reader starvation AND writer starvation.
Part b 4 marks — Analyze: fairness, performance trade-offs (throughput vs latency), and scalability as thread count increases.
Part a — Fair Readers-Writers (Turnstile Pattern)
KEY POINT
Standard reader-first starves writers. Standard writer-first starves readers. Use a FIFO turnstile for fairness.
// ── Shared Primitives ─────────────────────────────────────
semaphore turnstile = 1    // FIFO order enforcement — key to fairness
semaphore write_lock = 1   // exclusive write access
semaphore count_mutex = 1  // protects reader_count
int reader_count = 0

// ── Reader ────────────────────────────────────────────────
Reader():
    wait(turnstile)        // FIFO queue — readers wait behind pending writers
    signal(turnstile)      // pass through, let next reader/writer proceed
    wait(count_mutex)
    reader_count++
    if reader_count == 1:
        wait(write_lock)   // first reader blocks writers
    signal(count_mutex)
    // ─── READ eventLog ───
    wait(count_mutex)
    reader_count--
    if reader_count == 0:
        signal(write_lock) // last reader releases writers
    signal(count_mutex)

// ── Writer ────────────────────────────────────────────────
Writer():
    wait(turnstile)        // FIFO — writer blocks NEW readers from entering
    wait(write_lock)       // wait for existing readers to finish
    signal(turnstile)      // release turnstile AFTER acquiring write_lock
    // ─── WRITE to eventLog ───
    signal(write_lock)
Part b — Analysis
FAIRNESS
Turnstile ensures strict FIFO — no starvation for readers or writers
If Writer W1 arrives after Readers R1, R2, R3, W1 enters the turnstile and blocks new readers (R4, R5) from starting. W1 waits for R1, R2, R3 to finish, then writes exclusively. R4, R5 proceed after W1. Maximum wait = one write + one batch of concurrent reads.
PERFORMANCE TRADE-OFFS
High throughput for reads; latency spikes during frequent writes
Throughput: multiple readers run simultaneously — great for dashboards.
Latency: when logging daemons write every few ms, each write briefly blocks all dashboards; with 100+ devices writing frequently, write frequency becomes read latency.
Improvement: use a write buffer — daemons append to a fast circular buffer (no lock) and a single flush thread periodically acquires write_lock to persist the batch. This batches writes and dramatically reduces lock acquisitions.
SCALABILITY
Reads scale well; writes become a bottleneck at high thread counts
100 reader threads → all read concurrently, excellent throughput. But 50 writer threads each wanting exclusive access → effectively serialized, throughput drops to 1 write at a time. Solution: Sharding — partition eventLog into N sub-logs, each with its own lock. Writers hash to a sub-log, parallelism scales with N.
REAL-WORLD
Apache Kafka solves this at scale — multiple producers (writers) append to partition logs, multiple consumers (readers) read independently without blocking producers. The log is append-only, which means reads and writes don't interfere at all — a clever architectural solution that bypasses the entire readers-writers lock problem.
32
Desktop Background Services — Priority Non-Preemptive + Round Robin
Desktop Q03 · CO2/K3 · [10 marks]
Background services: Antivirus(S1,AT=0,BT=17,pri=3), Disk Cleanup(S2,AT=2,BT=24,pri=4), System Update(S3,AT=3,BT=13,pri=1), Backup(S4,AT=5,BT=15,pri=2), Software Protection(S5,AT=2,BT=10,pri=3). Lower priority number = higher importance.
Part a 2 marks — Identify the appropriate scheduling technique for each method (priority-based non-preemptive; RR with 5ms time slice). Give reasons.
Part b 4 marks — Determine the order of execution with Gantt chart for both methods.
Part c 4 marks — Calculate Average Turnaround Time and Average Waiting Time for both methods.
Part a — Algorithm Identification
Method | Algorithm | Reason
Priority-based, no preemption, full execution | Non-Preemptive Priority (NPP) | Each service runs to completion in priority order; critical services (System Update, pri=1) run first whenever the CPU is free.
Each runs 5ms, then next, till all finish | Round Robin (RR), time quantum = 5ms | Cyclic fairness — every service gets 5 ms turns, so all make progress.
Part b — Method 1: Non-Preemptive Priority
TRACE
t=0: Only S1(pri3) ready → Run S1(17ms). Completes t=17.
t=17: Ready: S2(pri4), S3(pri1), S4(pri2), S5(pri3). Highest = S3(pri1) → Run S3(13ms). Completes t=30.
t=30: Ready: S2(pri4), S4(pri2), S5(pri3). Highest = S4(pri2) → Run S4(15ms). Completes t=45.
t=45: S5(pri3) vs S2(pri4) → S5 wins → Run S5(10ms). Completes t=55.
t=55: Run S2(24ms). Completes t=79.
Part b — Method 2: Round Robin (Q=5ms)
S1 starts at t=0. Services take 5 ms turns in a fixed cyclic order (S1→S2→S3→S4→S5, per the question's "each runs 5ms, then next"), with remaining time tracked per turn. S3 and S5 finish in fewer turns (shorter bursts); S2 takes the most (24 ms = four full slices plus one 4 ms slice).
GANTT — Round Robin (Q=5ms)
| S1 0–5 | S2 5–10 | S3 10–15 | S4 15–20 | S5 20–25 | S1 25–30 | S2 30–35 | S3 35–40 | S4 40–45 |
| S5 45–50 (done) | S1 50–55 | S2 55–60 | S3 60–63 (done) | S4 63–68 (done) | S1 68–70 (done) | S2 70–79 (done) |
Total burst = 17+24+13+15+10 = 79 ms with no idle time, so the chart must end at t=79.
// Method 1: NPP — completion times from the trace above
S1: CT=17, AT=0, BT=17 → TAT=17, WT=0
S3: CT=30, AT=3, BT=13 → TAT=27, WT=14
S4: CT=45, AT=5, BT=15 → TAT=40, WT=25
S5: CT=55, AT=2, BT=10 → TAT=53, WT=43
S2: CT=79, AT=2, BT=24 → TAT=77, WT=53
Avg TAT = (17+27+40+53+77)/5 = 42.8 ms | Avg WT = (0+14+25+43+53)/5 = 27.0 ms
// Method 2: RR Q=5ms — completion times from the Gantt above
S1: CT=70, AT=0, BT=17 → TAT=70, WT=53
S2: CT=79, AT=2, BT=24 → TAT=77, WT=53
S3: CT=63, AT=3, BT=13 → TAT=60, WT=47
S4: CT=68, AT=5, BT=15 → TAT=63, WT=48
S5: CT=50, AT=2, BT=10 → TAT=48, WT=38
Avg TAT = (70+77+60+63+48)/5 = 63.6 ms
Avg WT = (53+53+47+48+38)/5 = 47.8 ms
// NPP wins on avg TAT (42.8 vs 63.6) but is less fair — low-priority S2 waits 53 ms before even starting; RR keeps every service progressing.
📘 Fourth Past Year Paper
33
Library Book Robot — FCFS, SCAN, C-SCAN
4th Paper Q07 · CO5/K3 · [10 marks]
Library book retrieval robot. Rows 0–199, starts at row 53. Morning requests: 98, 183, 41, 122, 14, 124, 65, 150.
Part a 3 marks — FCFS: retrieve books in arrival order.
Part b 3 marks — SCAN: move continuously in one direction serving requests, reach the end, reverse and serve on return.
Part c 4 marks — C-SCAN: move toward one end serving requests, upon reaching the end immediately return to start without picking books, then serve again in original direction.
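The worked traces for this question are not included in this copy, but each total is a short computation. Here is a sketch in C with the row requests hard-coded, assuming the robot sweeps toward row 199 first and counting the C-SCAN return jump (199→0) as movement — conventions worth stating explicitly in an exam answer.

#include <stdio.h>
#include <stdlib.h>

#define N 8
int req[N] = {98, 183, 41, 122, 14, 124, 65, 150};

int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int head = 53, last = 199;

    // FCFS: serve in arrival order
    int fcfs = 0, pos = head;
    for (int i = 0; i < N; i++) { fcfs += abs(req[i] - pos); pos = req[i]; }

    // sort a copy for SCAN / C-SCAN
    int s[N];
    for (int i = 0; i < N; i++) s[i] = req[i];
    qsort(s, N, sizeof s[0], cmp);
    int lo = 0;
    while (lo < N && s[lo] < head) lo++;           // s[lo..] are the requests ≥ head

    // SCAN (up first): head → row 199, reverse down to the lowest request
    int scan = (last - head) + (last - s[0]);

    // C-SCAN (up first): head → row 199, jump to row 0 (counted),
    // then up to the highest request below the head
    int cscan = (last - head) + last + s[lo - 1];

    printf("FCFS   = %d rows\n", fcfs);            // 715
    printf("SCAN   = %d rows\n", scan);            // 331
    printf("C-SCAN = %d rows\n", cscan);           // 386
    return 0;
}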
34
Chefs Sharing Kitchen Utensils — Banker's Algorithm (Boards, Knives, Bowls)
Part i 3 marks — Construct the required matrices (Need matrix, Available vector).
Part ii 5 marks — Find the first person to start so all can finish (safe sequence).
Part iii 4 marks — Can a request for 1 chopping board by A be granted? Justify.
Part i — Need Matrix + Available Vector
// Total Allocated: Boards=1+0+1=2, Knives=0+2+0=2, Bowls=1+1+2=4
// Available = Total − Allocated
Available: Boards = 4−2 = 2 | Knives = 5−2 = 3 | Bowls = 4−4 = 0
// Need = Max − Has
Chef | Need Boards | Need Knives | Need Bowls
A | 3−1 = 2 | 1−0 = 1 | 2−1 = 1
B | 1−0 = 1 | 3−2 = 1 | 1−1 = 0
C | 1−1 = 0 | 1−0 = 1 | 3−2 = 1
Part ii — Safety Check (Available = [2, 3, 0])
Available = [Boards=2, Knives=3, Bowls=0]
B: Need[1,1,0] ≤ [2,3,0]? YES → Run B. Available = [2,3,0]+[0,2,1] = [2,5,1]
C: Need[0,1,1] ≤ [2,5,1]? YES → Run C. Available = [2,5,1]+[1,0,2] = [3,5,3]
A: Need[2,1,1] ≤ [3,5,3]? YES → Run A. ✓
✅ SAFE — Safe Sequence: B → C → A
Part iii — A Requests 1 Chopping Board
Request_A = [Boards=1, Knives=0, Bowls=0]
Check 1: [1,0,0] ≤ Need[A]=[2,1,1]? YES ✓
Check 2: [1,0,0] ≤ Available=[2,3,0]? YES ✓
Tentatively allocate:
Available = [2,3,0]−[1,0,0] = [1,3,0]
Has[A] = [1,0,1]+[1,0,0] = [2,0,1]
Need[A] = [2,1,1]−[1,0,0] = [1,1,1]
Re-run safety with Available=[1,3,0]:
B: Need[1,1,0] ≤ [1,3,0]? YES → Available=[1,5,1]
C: Need[0,1,1] ≤ [1,5,1]? YES → Available=[2,5,3]
A: Need[1,1,1] ≤ [2,5,3]? YES ✓
✅ GRANT — Request can be safely given. New safe sequence: B → C → A
35
Minimum Resources to Prevent Deadlock
4th Paper Q08b · CO3/K3 · [3 marks]
10 user processes, each requiring 3 units of resource R. Find the minimum number of units of R required such that no deadlock will occur.
KEY POINT
Formula: Minimum resources = n×(m−1) + 1, where n=processes, m=max need per process
This ensures that at any point, at least one process can always complete — preventing the circular wait condition that causes deadlock.
n = 10 processes, m = 3 units each (max need)
Minimum units = n × (m − 1) + 1
             = 10 × (3 − 1) + 1
             = 10 × 2 + 1
             = 21 units
// Intuition: give each process (m−1) = 2 units → all are 1 short of their max.
// The extra +1 unit lets at least one process complete, freeing its resources
// and enabling the others to complete in a chain. Below 21, deadlock is possible.
INTUITION
If you have 20 units and 10 processes each holding 2, all are stuck waiting for their 3rd unit — deadlock! The 21st unit lets one process finish, releases 2 units, the next finishes, and so on. The +1 is the "key that unlocks the chain."
36
Memory Allocation — Best Fit, Worst Fit, First Fit
Determine allocation using Best Fit, Worst Fit, and First Fit. Calculate total internal fragmentation for each.
KEY DEFINITIONS
Best Fit = smallest partition that fits | First Fit = first partition that fits | Worst Fit = largest partition
First Fit — First partition large enough
(Partitions are numbered in the order listed: PAR1=100KB, PAR2=200KB, PAR3=300KB, PAR4=450KB, PAR5=500KB.)
Process | Size | Scan (100, 200, 300, 450, 500) | Allocated To | Internal Fragmentation
P1 (212KB) | 212 | 100✗ 200✗ 300✓ | PAR3 (300KB) | 300−212 = 88KB
P2 (417KB) | 417 | 100✗ 200✗ 300(used) 450✓ | PAR4 (450KB) | 450−417 = 33KB
P3 (112KB) | 112 | 100✗ 200✓ | PAR2 (200KB) | 200−112 = 88KB
P4 (426KB) | 426 | 100✗ 200(used) 300(used) 450(used) 500✓ | PAR5 (500KB) | 500−426 = 74KB
Total Internal Fragmentation (First Fit) = 283KB
Best Fit — Smallest partition that fits
Process | Size | Best-fit choice | Allocated To | Internal Fragmentation
P1 (212KB) | 212 | 300KB is the smallest that fits (200✗) | PAR3 (300KB) | 88KB
P2 (417KB) | 417 | 450KB is the smallest that fits | PAR4 (450KB) | 33KB
P3 (112KB) | 112 | 200KB is the smallest that fits (100✗) | PAR2 (200KB) | 88KB
P4 (426KB) | 426 | 500KB is the smallest that fits (450 used) | PAR5 (500KB) | 74KB
Total Internal Fragmentation (Best Fit) = 283KB
Worst Fit — Largest available partition
Process | Size | Worst-fit choice | Allocated To | Internal Fragmentation
P1 (212KB) | 212 | 500KB is the largest | PAR5 (500KB) | 288KB
P2 (417KB) | 417 | 450KB is the largest remaining | PAR4 (450KB) | 33KB
P3 (112KB) | 112 | 300KB is the largest remaining | PAR3 (300KB) | 188KB
P4 (426KB) | 426 | 200KB✗, 100KB✗ — no fit! | Cannot allocate | —
Total Internal Fragmentation (Worst Fit, P4 unallocated) = 509KB
⚠️ Worst Fit fails to allocate P4 — it wastes large partitions on small processes, leaving only small fragments. Best Fit and First Fit tie at 283KB here, but Worst Fit is dangerous for large processes.
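The three strategies differ only in how they pick a hole. A minimal sketch with this question's partitions and processes hard-coded (each fixed partition holds at most one process) reproduces all three tables:

#include <stdio.h>

// returns the chosen partition index, or -1 if nothing fits
// mode: 0 = First Fit, 1 = Best Fit, 2 = Worst Fit
int pick(int part[], int used[], int n, int size, int mode) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (used[i] || part[i] < size) continue;
        if (mode == 0) return i;                         // first hole that fits
        if (best < 0 ||
            (mode == 1 && part[i] < part[best]) ||       // smallest hole that fits
            (mode == 2 && part[i] > part[best]))         // largest hole that fits
            best = i;
    }
    return best;
}

int main(void) {
    int part[] = {100, 200, 300, 450, 500};              // PAR1..PAR5 (KB)
    int proc[] = {212, 417, 112, 426};                   // P1..P4 (KB)
    const char *name[] = {"First", "Best", "Worst"};
    for (int m = 0; m < 3; m++) {
        int used[5] = {0}, frag = 0;
        printf("%s Fit:\n", name[m]);
        for (int p = 0; p < 4; p++) {
            int i = pick(part, used, 5, proc[p], m);
            if (i < 0) { printf("  P%d (%dKB) -> cannot allocate\n", p + 1, proc[p]); continue; }
            used[i] = 1;
            frag += part[i] - proc[p];
            printf("  P%d (%dKB) -> PAR%d (%dKB), frag %dKB\n",
                   p + 1, proc[p], i + 1, part[i], part[i] - proc[p]);
        }
        printf("  Total fragmentation = %dKB\n", frag);  // 283 / 283 / 509
    }
    return 0;
}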
37
Page Replacement — LRU vs OPT (3 frames)
Reference string: 2, 3, 5, 1, 4, 6, 2, 7, 5, 3, 8, 1, 6, 4, 2
i 4 marks — LRU: replace the page not used for the longest time in memory.
ii 4 marks — OPT: replace the page not used for the longest time in the future. Which is better?
LRU — 3 Frames
Ref | F1 | F2 | F3 | Result | Evict
2 | 2 | — | — | MISS | —
3 | 2 | 3 | — | MISS | —
5 | 2 | 3 | 5 | MISS | —
1 | 1 | 3 | 5 | MISS | 2 (LRU)
4 | 1 | 4 | 5 | MISS | 3 (LRU)
6 | 1 | 4 | 6 | MISS | 5 (LRU)
2 | 2 | 4 | 6 | MISS | 1 (LRU)
7 | 2 | 7 | 6 | MISS | 4 (LRU)
5 | 2 | 7 | 5 | MISS | 6 (LRU)
3 | 3 | 7 | 5 | MISS | 2 (LRU)
8 | 3 | 8 | 5 | MISS | 7 (LRU)
1 | 3 | 8 | 1 | MISS | 5 (LRU)
6 | 6 | 8 | 1 | MISS | 3 (LRU)
4 | 6 | 4 | 1 | MISS | 8 (LRU)
2 | 6 | 4 | 2 | MISS | 1 (LRU)
Total: LRU Faults = 15, Hits = 0
OPT — 3 Frames
Ref | F1 | F2 | F3 | Result | Evict (farthest next use)
2 | 2 | — | — | MISS | —
3 | 2 | 3 | — | MISS | —
5 | 2 | 3 | 5 | MISS | —
1 | 2 | 1 | 5 | MISS | 3 (next use: pos 10)
4 | 2 | 4 | 5 | MISS | 1 (next use: pos 12)
6 | 2 | 6 | 5 | MISS | 4 (next use: pos 14)
2 | 2 | 6 | 5 | HIT | —
7 | 7 | 6 | 5 | MISS | 2 (next use: pos 15)
5 | 7 | 6 | 5 | HIT | —
3 | 3 | 6 | 5 | MISS | 7 (never used again)
8 | 8 | 6 | 5 | MISS | 3 (never used again)
1 | 1 | 6 | 5 | MISS | 8 (never used again)
6 | 1 | 6 | 5 | HIT | —
4 | 4 | 6 | 5 | MISS | 1 (never used again)
2 | 2 | 6 | 5 | MISS | 4 (never used again)
Total: OPT Faults = 12, Hits = 3
⚠️ LRU = 15 faults: no page repeats within any window of 3 distinct references, so LRU never gets a hit on this string. OPT = 12 faults (3 hits): by looking ahead it keeps 2, 5, and 6 resident for their later reuses at positions 7, 9, and 13. OPT is better, as it must be — no replacement algorithm can beat OPT, which is why it serves as the benchmark even though it is unimplementable (it requires future knowledge).
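Tables like these are easy to sanity-check with a small simulator. A sketch with this question's reference string hard-coded (LRU scans backward for the least recently used page, OPT scans forward for the farthest next use) prints LRU = 15 and OPT = 12:

#include <stdio.h>

#define F 3
#define N 15
int ref[N] = {2,3,5,1,4,6,2,7,5,3,8,1,6,4,2};

// opt = 1 simulates OPT, opt = 0 simulates LRU; returns the fault count
int simulate(int opt) {
    int frame[F], faults = 0, n = 0;
    for (int i = 0; i < N; i++) {
        int hit = 0;
        for (int j = 0; j < n; j++) if (frame[j] == ref[i]) hit = 1;
        if (hit) continue;
        faults++;
        if (n < F) { frame[n++] = ref[i]; continue; }
        int victim = 0, best = -1;
        for (int j = 0; j < F; j++) {
            int d = N + 1;                          // "never (re)used" = farthest
            if (opt) {                              // OPT: distance to next use
                for (int k = i + 1; k < N; k++)
                    if (ref[k] == frame[j]) { d = k; break; }
            } else {                                // LRU: distance since last use
                for (int k = i - 1; k >= 0; k--)
                    if (ref[k] == frame[j]) { d = i - k; break; }
            }
            if (d > best) { best = d; victim = j; }
        }
        frame[victim] = ref[i];
    }
    return faults;
}

int main(void) {
    printf("LRU faults = %d\n", simulate(0));       // 15
    printf("OPT faults = %d\n", simulate(1));       // 12
    return 0;
}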
📙 Fifth Past Year Paper
38
Smart TV Operating System — Block Diagram & OS Functions
5th Paper Q01 · CO1/K1 · [10 marks]
Design an OS for a Smart TV. Demonstrate the critical hardware components and their interactions using a block diagram, explaining the functions of each component and the default functionalities the OS should handle.
KEY POINT 1
Hardware Components of a Smart TV
A Smart TV has: CPU (ARM Cortex), GPU (graphics rendering), RAM (app runtime), Flash Storage (OS + apps), Network Adapter (WiFi/Ethernet), Display Controller (HDMI output), Audio Controller (speakers/HDMI ARC), Remote/IR Receiver (input), USB Controller (external devices).
SMART TV OS — HARDWARE + OS LAYER BLOCK DIAGRAM
KEY POINT 2
OS Default Functionalities for Smart TV
Process Management: Schedule Netflix streaming, background firmware update, and UI responsiveness concurrently
Memory Management: With limited RAM (1–4GB), swap/kill background apps when streaming 4K video
Device Driver Management: Handle IR remote, HDMI-CEC, WiFi, USB peripherals
File System: Manage flash storage for apps, cached video, preferences
Network Stack: TCP/IP for streaming, DNS, HTTP/2, adaptive bitrate (HLS/DASH)
REAL-WORLD
Samsung Tizen OS and LG webOS are both Linux-based Smart TV platforms; Android TV (used by Sony, Xiaomi) builds on the full Android/Linux kernel. All face the same challenge: running 4K video decode (GPU-heavy), streaming (network-heavy), and UI (CPU-heavy) simultaneously on constrained hardware with a <5W power budget.
39
Process States + fork() Termination Ordering
5th Paper Q02 · CO1/K2 · [10 marks]
Part a 5 marks — Process X resides in memory. OS Y must execute it. Discuss the states and necessary steps X should undergo for executing in main memory. Illustrate with neat diagram.
Part b 5 marks — Write a C program to create 1 parent and 3 children where: (i) Parent terminates last. (ii) First child terminates before parent and after second child. (iii) Second child terminates after last child and before first child. (iv) Third child terminates first.
Part a — Process States (already covered in detail in Q1)
SUMMARY
Five States: New → Ready → Running → Waiting → Terminated
New: process created (PCB allocated, memory assigned)
Ready: in the ready queue, waiting for the CPU
Running: CPU executing its instructions
Waiting: blocked on I/O or an event — CPU released
Terminated: exit() called → zombie until the parent calls wait()
Part b — Controlled Termination Order via sleep()
TERMINATION ORDER REQUIRED
Third child (C3) → Second child (C2) → First child (C1) → Parent. Use sleep() to enforce ordering.
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    pid_t c1, c2, c3;
    c1 = fork();
    if (c1 == 0) {              // First Child
        c2 = fork();
        if (c2 == 0) {          // Second Child
            c3 = fork();
            if (c3 == 0) {      // Third Child (terminates FIRST)
                sleep(1);
                printf("C3 (3rd child) terminates\n");
                _exit(0);
            }
            wait(NULL);         // C2 waits for C3
            sleep(2);           // then C2 terminates
            printf("C2 (2nd child) terminates\n");
            _exit(0);
        }
        wait(NULL);             // C1 waits for C2
        sleep(1);               // then C1 terminates
        printf("C1 (1st child) terminates\n");
        _exit(0);
    }
    wait(NULL);                 // Parent waits for C1
    printf("Parent terminates last\n");
    return 0;
}
// Output order: C3 → C2 → C1 → Parent
40
Bank Help Desk — SJF, Priority (Token), Round Robin
41
Kitchen Resource Sharing — Banker's Algorithm
Part a — Safety Check (Available = [2, 2, 0])
B: Need[1,2,0] ≤ [2,2,0]? YES → Available=[2,5,1]
C: Need[0,2,1] ≤ [2,5,1]? YES → Available=[3,6,3]
A: Need[3,1,0] ≤ [3,6,3]? YES ✓
✅ SAFE — Safe Sequence: B → C → A
Part b — A Requests 1 Knife (R1=1)
Request_A = [R0=0, R1=1, R2=0]
Check 1: [0,1,0] ≤ Need[A]=[3,1,0]? YES ✓
Check 2: [0,1,0] ≤ Available=[2,2,0]? YES ✓
Tentative: Available=[2,1,0], Has[A]=[2,1,2], Need[A]=[3,0,0]
Re-check safety with Available=[2,1,0]:
B: Need[1,2,0] ≤ [2,1,0]? NO (R1: 2>1) → skip
C: Need[0,2,1] ≤ [2,1,0]? NO (R1: 2>1) → skip
A: Need[3,0,0] ≤ [2,1,0]? NO (R0: 3>2) → skip
No process can proceed!
❌ UNSAFE — Request DENIED. Granting R1 to A leads to deadlock.
42
Semaphore-Based Execution Ordering
5th Paper Q05 · CO3/K3 · [10 marks]
Three concurrent processes: P1 (statement S1), P2 (statement S2), P3 (statement S3). Constraints: S1 executes only after S3 completes. S2 executes only after S1 completes. Implement using semaphores.
KEY POINT
Use binary semaphores as signals between processes. sem_wait blocks until sem_signal is called.
Execution order forced: S3 → S1 → S2. Two semaphores needed: one for S3→S1 dependency, one for S1→S2 dependency.
// ── Shared Semaphores ────────────────────────────────────
semaphore s3_done = 0   // signals when S3 completes (P3→P1)
semaphore s1_done = 0   // signals when S1 completes (P1→P2)

// ── Process P1 (executes S1 ONLY after S3 is done) ───────
P1():
    wait(s3_done)       // block until S3 complete
    execute S1
    signal(s1_done)     // notify P2 that S1 is done

// ── Process P2 (executes S2 ONLY after S1 is done) ───────
P2():
    wait(s1_done)       // block until S1 complete
    execute S2

// ── Process P3 (executes S3, no constraint) ──────────────
P3():
    execute S3
    signal(s3_done)     // notify P1 that S3 is done

// ── Execution Order Guaranteed: S3 → S1 → S2 ─────────────
// If P1 runs first: wait(s3_done) blocks → P3 runs S3 → signals → P1 runs S1 → P2 unblocks
// If P2 runs first: wait(s1_done) blocks → same chain
// Any interleaving of the three processes yields S3 → S1 → S2
CORRECTNESS PROOF
All three synchronization requirements satisfied
Mutual Exclusion: S1 and S3 never run simultaneously — S1 waits for S3 signal. Progress: P3 always proceeds (no wait). P1 proceeds once S3 is done. P2 proceeds once S1 is done. Bounded Waiting: Each process waits at most one signal — no starvation possible.
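A runnable pthreads translation of the same scheme (a sketch: threads stand in for the three processes, and the printed labels are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t s3_done, s1_done;

void* p1(void* arg) { sem_wait(&s3_done); printf("S1\n"); sem_post(&s1_done); return NULL; }
void* p2(void* arg) { sem_wait(&s1_done); printf("S2\n"); return NULL; }
void* p3(void* arg) { printf("S3\n"); sem_post(&s3_done); return NULL; }

int main(void) {
    pthread_t t1, t2, t3;
    sem_init(&s3_done, 0, 0);   // both start at 0: nothing signaled yet
    sem_init(&s1_done, 0, 0);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t3, NULL, p3, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;                   // always prints S3, S1, S2 in that order
}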
43
Computer Lab Allocation — Best Fit Memory Strategy
Apply Best Fit: allocate each workshop to the smallest lab that fits it (minimizes waste)
Workshop | Students | Labs with capacity ≥ students | Best Fit Lab | Wasted Seats
W1 | 115 | 125✓ 200✓ 320✓ 370✓ 610✓ 730✓ | Lab 6 (125) | 10
W2 | 500 | 610✓ 730✓ | Lab 2 (610) | 110
W3 | 358 | 370✓ 730✓ | Lab 3 (370) | 12
W4 | 200 | 200✓ 320✓ 730✓ | Lab 4 (200) | 0
W5 | 375 | 730✓ (610 and 370 already used) | Lab 5 (730) | 355
Lab 1 (320) goes unused. With random allocation, utilization could be as low as 50% (e.g., W5 in a 730-seat lab when only 375 attend). Best Fit gives utilization of: (115+500+358+200+375)/(125+610+370+200+730) = 1548/2035 = 76.1%. Random allocation average ≈ 50–60%.
Unlike paging (fixed-size pages), segmentation divides memory into variable-length segments matching logical program structure (code, stack, heap, data). Each segment has a base and limit in the segment table. The OS performs a bounds check on every memory access — if offset ≥ length, a segment fault (segfault) is raised. This provides per-segment protection and sharing — two processes can share a code segment by pointing their segment tables to the same base address.
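A sketch of that per-access bounds check in C (segment numbers, bases, and limits are made-up illustrative values, not from the question):

#include <stdio.h>
#include <stdlib.h>

typedef struct { unsigned base, limit; } Segment;

// toy segment table: 0 = code, 1 = data, 2 = stack (illustrative values)
Segment table[] = { {0x1000, 0x400}, {0x8000, 0x200}, {0xF000, 0x100} };

unsigned translate(unsigned seg, unsigned offset) {
    if (offset >= table[seg].limit) {          // bounds check on EVERY access
        fprintf(stderr, "segmentation fault: seg %u, offset 0x%X\n", seg, offset);
        exit(1);
    }
    return table[seg].base + offset;           // physical = base + offset
}

int main(void) {
    printf("0x%X\n", translate(1, 0x10));      // valid: prints 0x8010
    printf("0x%X\n", translate(1, 0x300));     // traps: offset ≥ limit (0x200)
    return 0;
}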
Both LRU and FIFO give 17 faults for this string. LRU is generally better (immune to Belady's anomaly) and is the recommended production choice. FIFO is simpler to implement but can perform worse on strings with temporal locality patterns.
46
Disk Scheduling — FCFS, SCAN, LOOK (0–999 cylinders)
a — FCFS (head at 431, serve in arrival order)
431→413(18)→242(171)→459(217)→560(101)→108(452)→324(216)→70(254)→305(235)→993(688)→732(261)
Total FCFS = 18+171+217+101+452+216+254+235+688+261 = 2,613 cylinders
b — SCAN (moving UP from 431, then reverse)
Head at 431, direction UP (since previous=361, moving toward higher cylinders).
Above 431: 459, 560, 732, 993 | Below 431: 413, 324, 305, 242, 108, 70
UP: 431→459(28)→560(101)→732(172)→993(261)→999(6) ← end, reverse
DOWN: 999→413(586)→324(89)→305(19)→242(63)→108(134)→70(38)
Total SCAN = 28+101+172+261+6+586+89+19+63+134+38 = 1,497 cylinders
COMPARISON NOTE
FCFS (2,613) is almost 2× SCAN (1,497) — demonstrating why no real OS uses FCFS for disk scheduling. The random jumps in FCFS cause massive head movement. SCAN's sweep pattern cuts distance nearly in half while serving all requests fairly.
Structure 1 3 marks — Contiguous: W(start=3,len=1), X(start=6,len=5), Y(start=12,len=3), Z(start=19,len=4)
Structure 2 3 marks — Linked: W(start=3,end=12), X(start=22,end=45)
Structure 3 4 marks — Indexed: X with index block 9, data blocks [17,12,43,39,23,34]
Structure 1 — Contiguous Allocation
KEY POINT
Each file occupies a consecutive run of blocks. FAT stores (start, length).
File | Start | Length | Blocks Occupied
W | 3 | 1 | 3
X | 6 | 5 | 6, 7, 8, 9, 10
Y | 12 | 3 | 12, 13, 14
Z | 19 | 4 | 19, 20, 21, 22
Structure 2 — Linked Allocation
KEY POINT
Each block has a pointer to the next. FAT stores (start, end). Any free blocks can be used.
File | Start | End | Chain (each block → next)
W | 3 | 12 | 3 → 4 → 5 → 6 → 7 → 8 → 9 → 10 → 11 → 12 → NULL
X | 22 | 45 | 22 → 23 → ... → 45 → NULL (24 blocks)
Structure 3 — Indexed Allocation
KEY POINT
One index block holds all data block addresses. Supports direct (O(1)) access to any block.
File | Index Block | Data Blocks (stored in index)
X | 9 | 17, 12, 43, 39, 23, 34
ADVANTAGE
To access the 4th block of file X: read index block 9 → find 4th entry = 39 → read block 39 directly. No chain traversal needed. This is O(1) random access vs O(n) for linked. ext4 Linux uses a generalization of this (extents + indirect blocks).
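A sketch of that two-read lookup in C; read_block() is a hypothetical helper standing in for a real disk read:

#include <stdio.h>

// index block for file X: the i-th entry is the disk block
// holding the i-th data block of the file
int index_block_9[] = {17, 12, 43, 39, 23, 34};

// hypothetical helper — a real file system would issue a disk I/O here
void read_block(int blk) { printf("reading disk block %d\n", blk); }

int main(void) {
    int wanted = 3;                        // the 4th data block (0-based index 3)
    read_block(9);                         // 1st I/O: fetch the index block
    read_block(index_block_9[wanted]);     // 2nd I/O: block 39, directly
    return 0;                              // two reads, regardless of file size
}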
📔 Sixth Past Year Paper
48
Medical Device OS — Architecture for Constrained Devices
6th Paper Q01 · CO1/K1 · [10 marks]
SSS company manufactures Smart Medical Diagnostic machines. Design a customized OS incorporating SMART functionality for a medical device with smaller memory than normal computers. List default OS functions and suggest a suitable architecture.
KEY POINT 1
Microkernel Architecture — Best for Constrained Medical Devices
Medical devices have: limited RAM (64MB–512MB), strict power budgets, real-time requirements (patient monitoring deadlines), and zero-tolerance for crashes. Microkernel keeps only essential services in kernel — IPC, scheduling, basic memory. All device drivers, UI, sensors run as user-space servers. A driver crash doesn't kill the kernel.
KEY POINT 2
Default OS Functions for SMART Medical Device
Real-Time Scheduling (EDF): Patient vitals must be read every 100ms — hard deadline
Watchdog Timer: Auto-reset if software hangs — patient safety critical
REAL-WORLD
FDA-approved medical devices run QNX (microkernel RTOS) or VxWorks. The Medtronic insulin pump, GE ECG monitors, and Siemens MRI scanners all use certified RTOSes with deterministic scheduling. The FDA requires software to be certified under IEC 62304 — the OS architecture directly affects certification complexity.
49
Pthreads — Find Max Digit of Two Numbers Concurrently
6th Paper Q02 · CO1/K2 · [10 marks]
Two threads concurrently find the highest digit in each of two numbers, then main() sums the two highest digits. Example: a=55687 → max digit=8, b=87934 → max digit=9, sum=17.
#include <stdio.h>
#include <pthread.h>

typedef struct {
    long long int num;
    int max_digit;
} ThreadData;

// Thread function: finds the max digit of a number
void* find_max_digit(void* arg) {
    ThreadData* d = (ThreadData*)arg;
    long long int n = d->num;
    int max = 0;
    while (n > 0) {
        int digit = n % 10;
        if (digit > max) max = digit;
        n /= 10;
    }
    d->max_digit = max;
    return NULL;
}

int main() {
    pthread_t t1, t2;
    ThreadData d1 = {55687, 0};
    ThreadData d2 = {87934, 0};
    pthread_create(&t1, NULL, find_max_digit, &d1);
    pthread_create(&t2, NULL, find_max_digit, &d2);
    pthread_join(t1, NULL);   // wait for t1
    pthread_join(t2, NULL);   // wait for t2
    printf("Max digit of %lld = %d\n", d1.num, d1.max_digit);
    printf("Max digit of %lld = %d\n", d2.num, d2.max_digit);
    printf("Sum of max digits = %d\n", d1.max_digit + d2.max_digit);
    return 0;
}
// Output: Max digit of 55687 = 8, Max digit of 87934 = 9, Sum = 17
50
Readers-Writers Redesign — Writer Priority with Count Thresholds
Redesign readers-writers: the writer has higher priority. A reader starts only after 2 writers have completed. The 3rd writer must wait until 5 readers have completed.
// Shared variables
semaphore write_done = 0   // signaled after 2 writers complete
semaphore read_done = 0    // signaled by each of the first 5 readers
semaphore mutex = 1        // protects the counters
int writer_count = 0
int reader_count = 0

// Writer Process
Writer(id):
    if id == 3:                    // 3rd writer must wait for 5 readers
        for i in range(5):
            wait(read_done)        // collect 5 read_done signals
    // ─── WRITE ───
    wait(mutex)
    writer_count++
    if writer_count == 2:          // 2nd writer opens the gate for readers
        signal(write_done)         // unblock ONE reader
    signal(mutex)

// Reader Process
Reader(id):
    wait(write_done)               // wait until 2 writers are done
    signal(write_done)             // pass the gate to the next reader (chain signal)
    // ─── READ ───
    wait(mutex)
    reader_count++
    if reader_count <= 5:
        signal(read_done)          // one of the 5 signals for the 3rd writer
    signal(mutex)
CORRECTNESS
ME: each WRITE section should additionally hold an exclusive write_lock (omitted above for brevity — the threshold gates alone do not serialize two concurrent writers). Progress: readers unblock after 2 writes; the 3rd writer unblocks after 5 reads. Bounded Waiting: each reader/writer waits for at most one threshold-count cycle.
MISS: evict 1 (clock sweep: 1's bit 1→0, 2's bit 1→0, 0's bit 1→0, cycles back to 1 → evict)
Ref 0 | 3(1) | 2(0) | 0(1) | HIT
Ref 4 | 3(1) | 4(1) | 0(1) | MISS: evict 2 (bit=0)
Total: Second Chance Faults = 10, Hits = 5
✅ OPT (9 faults) < Second Chance (10 faults). OPT is always optimal by definition. Second Chance (Clock) is a practical approximation of LRU — it avoids the full LRU overhead while still giving better performance than pure FIFO, which is why Linux uses a variant of the Clock algorithm for its page cache.
ii — Indexed Allocation for File C (index block = 9)
Index Block 9 Entry | Points To Data Block | Content
Entry 0 | 11 | C data bytes 0–255
Entry 1 | 12 | C data bytes 256–511
Entry 2 | 13 | C data bytes 512–767
Entry 3 | 14 | C data bytes 768–1023
Entry 4 | 15 | C data bytes 1024–1279
Entry 5 | 16 | C data bytes 1280–1315 (partial)
iii — Advantages and Disadvantages
Method | Advantages | Disadvantages
Contiguous | Fast sequential access; simple table (start + length only) | External fragmentation; file growth requires reallocation
Indexed | O(1) random access; no external fragmentation; file can grow | One extra (index) block of overhead per file; losing the index block loses the whole file
53
Disk Scheduling — SCAN & LOOK (0–1999 cylinders, head=143)
6th Paper Q07 · CO5/K3 · [10 marks]
2000 cylinders (0–1999). Head at 143, previous at 125 (moving UP). Requests: 96, 1375, 973, 1797, 848, 1520, 1050, 1650, 230, 110. Service left to right (toward higher cylinders).