GTU Operating System Summer 2022 Paper Solution


Q.1

(a) List any four functions of operating system?

Four functions of an operating system:

  1. Process Management: Manages processes, including creation, scheduling, and termination.
  2. Memory Management: Allocates and deallocates memory to processes, handles fragmentation.
  3. File System Management: Organizes, accesses, and secures files on secondary storage.
  4. Device Management: Controls and coordinates interaction with hardware devices.

(b) Explain the essential properties of

i) Batch system

  • Efficient utilization of system resources by executing jobs in a batch.
  • Automatic job sequencing and execution based on priority or predefined rules.

ii) Time Sharing (Multitasking) System:

  • Fair allocation of CPU time to multiple users or tasks.
  • Rapid context switching between processes to provide the illusion of simultaneous execution.

iii) Real-Time System:

  • Guaranteed response times and deadlines for critical tasks.
  • Prioritization of tasks based on their timing requirements.

iv) Distributed System:

  • Resource sharing and collaboration among multiple interconnected computers.
  • Enhanced reliability and fault tolerance through redundancy and distributed processing.

(c) Explain process states and process control block in details

Process States: Process states refer to the different stages or conditions in which a process can exist during its execution. The following are the commonly recognized process states:

  1. New: The process is being created or initialized, and the necessary resources are being allocated to it.
  2. Ready: The process is prepared to execute but is waiting for the CPU to be assigned to it. It is in the ready queue, awaiting its turn.
  3. Running: The process is currently being executed on the CPU.
  4. Blocked (Waiting): The process cannot proceed further and is temporarily halted due to an event such as I/O operation, waiting for a resource, or an interrupt. It is waiting for the event to occur or the required resource to become available.
  5. Terminated: The process has finished its execution and has been terminated. Its resources are released back to the system.

Figure: Process state transition diagram (http://www.hexainclude.com/wp-content/uploads/2016/11/process-state-diagram.png)

Process Control Block (PCB):

  • The PCB is a data structure used by the operating system to manage processes.
  • It contains information about each process, including its identifier (PID).
  • The PCB stores the current state of the process.
  • It maintains the program counter (PC), which holds the address of the next instruction to be executed.
  • The PCB includes CPU registers, which store the current values of the process's registers.
  • Memory management details, such as base address and limit registers, are stored in the PCB.
  • Scheduling information, such as priority and scheduling algorithm, is included in the PCB.
  • The PCB tracks the I/O status of the process, including allocated devices and their current status.
  • Accounting information, such as CPU time consumed and execution history, is stored in the PCB.
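
A simplified sketch of how these PCB fields might look as a C structure (field names and sizes are illustrative assumptions, not taken from any particular kernel):

    // Illustrative PCB layout (not from a real kernel).
    typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

    struct pcb {
        int           pid;              // process identifier
        proc_state_t  state;            // current process state
        unsigned long program_counter;  // address of the next instruction
        unsigned long registers[16];    // saved CPU registers
        unsigned long base, limit;      // memory-management registers
        int           priority;         // scheduling information
        int           open_files[16];   // I/O status: allocated devices/files
        unsigned long cpu_time_used;    // accounting information
        struct pcb   *next;             // link for ready/wait queues
    };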

Q.2

(a) What are the various criteria for a good process scheduling algorithm?

Various criteria for a good process scheduling algorithm include:

  1. CPU Utilization: The scheduler should aim to keep the CPU as busy as possible to maximize its utilization.
  2. Throughput: The scheduling algorithm should maximize the number of processes completed per unit of time.
  3. Turnaround Time: The scheduler should minimize the time taken for a process to complete its execution, including waiting and execution time.
  4. Waiting Time: The scheduler should aim to minimize the amount of time a process spends waiting in the ready queue.
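
The last two criteria are usually computed with the standard formulas below (assuming the arrival, CPU burst, and completion times of each process are known):

    Turnaround time = Completion time - Arrival time
    Waiting time    = Turnaround time - CPU burst time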

(b) What is thread? Explain classical thread model.

  • A thread is a unit of CPU utilization that represents an independent sequence of instructions.
  • The classical thread model includes user-level threads (ULTs) and kernel-level threads (KLTs).
  • ULTs are managed by user-level thread libraries and are not known to the operating system.
  • KLTs are managed by the operating system and provide better concurrency and responsiveness.
  • In the classical thread model, processes have their own address space but can have multiple threads sharing it.
  • This allows for concurrent execution, resource sharing, and efficient communication within a process.
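
A minimal sketch of the "one process, many threads sharing an address space" idea using POSIX threads (the thread count and shared counter are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                       // all threads see the same variable
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        shared_counter++;                                // threads of one process share globals/heap
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);   // create kernel-level threads
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);                    // wait for all threads to finish
        printf("counter = %d\n", shared_counter);        // prints 4
        return 0;
    }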

(c) How semaphores can be used to deal with n-process critical section problem? Explain.

Semaphores can be used to address the n-process critical section problem by providing a synchronization mechanism that ensures mutually exclusive access to a critical section among multiple processes.

Here's how semaphores can be utilized:

  1. Initialization: Initialize a semaphore called "mutex" with an initial value of 1 to control access to the critical section.
  2. Entering the Critical Section: Processes attempt to acquire the mutex semaphore using the "wait" operation. If the semaphore value is greater than 0, the process acquires it and enters the critical section. If it's 0, the process is blocked until the semaphore value becomes greater than 0.
  3. Exiting the Critical Section: After executing the critical section, the process releases the mutex semaphore using the "signal" operation, allowing another process to enter.
  4. Mutual Exclusion: The mutex semaphore ensures that only one process can acquire it at a time, ensuring mutual exclusion and preventing simultaneous execution of the critical section.
  5. Fairness: If the semaphore implementation queues blocked processes (for example, in FIFO order), processes enter the critical section roughly in the order they requested it, which helps prevent starvation or indefinite blocking.
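
A minimal sketch of this scheme using POSIX semaphores, with threads standing in for the n processes (N, the variable names, and the shared counter are illustrative):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N 5                              // number of competing processes (threads here)

    static sem_t mutex;                      // semaphore used as the "mutex"
    static int shared = 0;

    static void *process(void *arg) {
        (void)arg;
        sem_wait(&mutex);                    // wait(): blocks if another process is inside
        shared++;                            // critical section: at most one at a time
        sem_post(&mutex);                    // signal(): lets the next waiting process in
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        sem_init(&mutex, 0, 1);              // initial value 1 => mutual exclusion
        for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, process, NULL);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        printf("shared = %d\n", shared);     // N, because updates never interleave
        sem_destroy(&mutex);
        return 0;
    }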

OR

(c) What is monitor? Explain solution for producer-consumer problem using monitor.

A monitor is a synchronization construct that provides a higher-level abstraction for managing shared resources in concurrent programs. Only one process can be active inside a monitor at a time, and condition variables (with wait and signal operations) are used to block and wake processes. The pseudocode below sketches the producer-consumer solution using a monitor:

monitor ProducerConsumer
  condition full, empty;
  int count = 0;                         // number of full slots in the buffer

  procedure enter(item)
  {
    if (count == N) wait(full);          // buffer full: block the producer
    put_item(item);                      // put the item in the buffer
    count = count + 1;                   // one more full slot
    if (count == 1) signal(empty);       // buffer was empty: wake the consumer
  }

  procedure remove()
  {
    if (count == 0) wait(empty);         // buffer empty: block the consumer
    item = remove_item();                // take an item out of the buffer
    count = count - 1;                   // one less full slot
    if (count == N - 1) signal(full);    // buffer was full: wake the producer
  }
end monitor;

Producer()
{
  while (TRUE)
  {
    widget = make_item();                // make a new item
    ProducerConsumer.enter(widget);      // hand it to the monitor
  }
}

Consumer()
{
  while (TRUE)
  {
    widget = ProducerConsumer.remove();  // take an item from the monitor
    consume_item(widget);                // consume it
  }
}
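
C has no built-in monitor construct, but the same behaviour can be approximated with a mutex and condition variables. A minimal sketch (the buffer size and function names are illustrative):

    #include <pthread.h>

    #define N 10                                          // buffer size (illustrative)

    static int buffer[N], count = 0, in = 0, out = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;       // the monitor's implicit lock
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER; // plays the role of "full"
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER; // plays the role of "empty"

    void enter(int item) {                               // producer side
        pthread_mutex_lock(&m);
        while (count == N)                               // buffer full: wait
            pthread_cond_wait(&not_full, &m);
        buffer[in] = item;
        in = (in + 1) % N;
        count++;
        pthread_cond_signal(&not_empty);                 // wake a waiting consumer
        pthread_mutex_unlock(&m);
    }

    int take(void) {                                     // consumer side
        pthread_mutex_lock(&m);
        while (count == 0)                               // buffer empty: wait
            pthread_cond_wait(&not_empty, &m);
        int item = buffer[out];
        out = (out + 1) % N;
        count--;
        pthread_cond_signal(&not_full);                  // wake a waiting producer
        pthread_mutex_unlock(&m);
        return item;
    }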

Q.3

(a) Define preemption and nonpreemption

  • Preemption refers to the act of interrupting the execution of a currently running process or thread by forcefully suspending its execution and transferring control to another process or thread.
  • Nonpreemption, on the other hand, refers to a scheduling policy or system where once a process or thread starts executing, it continues to run until it completes its task or voluntarily releases the CPU.
  • In nonpreemptive scheduling, a running process retains control of the CPU until it finishes, enters a blocking state, or yields the CPU voluntarily.

(b) Define the following terms:

i) Race condition

A race condition occurs when multiple processes or threads access shared resources or variables concurrently, and the final outcome of the execution depends on the order of their execution. Race conditions can lead to unexpected and erroneous results.

ii) Critical section

A critical section refers to a part of the program or code that accesses shared resources or variables. It is a section of code where only one process or thread should be allowed to execute at a time to ensure proper synchronization and avoid race conditions.

iii) Mutual exclusion

Mutual exclusion is a concept that ensures that only one process or thread can access a shared resource or enter a critical section at a time. It prevents simultaneous access and guarantees exclusive access to shared resources to avoid race conditions and maintain data integrity.

iv) Semaphores

Semaphores are synchronization primitives used in inter-process communication (IPC). They are variables that help control access to shared resources by allowing or blocking processes or threads.

(c) How does deadlock avoidance differ from deadlock prevention? Write about deadlock avoidance algorithm in detail.

Deadlock avoidance differs from deadlock prevention in the following ways:

Deadlock prevention:

  1. Aims to eliminate one or more necessary conditions for deadlock occurrence.
  2. Involves careful resource allocation, resource ordering, and avoidance of circular wait.
  3. Requires static allocation and may lead to resource underutilization.
  4. Designed to ensure that the system never enters a deadlock state.

Deadlock avoidance:

  1. Focuses on dynamically analyzing resource allocation requests to determine if they will lead to a deadlock.
  2. Uses an algorithm to assess safe states and avoid the occurrence of deadlocks.
  3. Considers resource allocation information, resource availability, and process resource requirements.
  4. Allows for more flexible resource allocation but requires runtime monitoring and decision-making.

Deadlock avoidance algorithm (Banker's algorithm):

  1. Initialize available resources, maximum resource requirements, and currently allocated resources for each process.
  2. Simulate resource allocation requests and check if granting the request will lead to a safe state.
  3. If granting the request results in a safe state, allocate the resources and update the available resources.
  4. If granting the request leads to an unsafe state, deny the request or put the process in a waiting state.
  5. Continuously monitor resource allocation and dynamically assess if the system remains in a safe state.
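
A compact sketch of the safety check at the heart of the Banker's algorithm (the process and resource counts are illustrative; a full implementation would also verify that a request does not exceed the process's declared maximum or the currently available resources):

    #include <stdbool.h>

    #define P 5                                // number of processes (illustrative)
    #define R 3                                // number of resource types (illustrative)

    // Returns true if every process can still run to completion in some order.
    bool is_safe(int available[R], int max[P][R], int alloc[P][R]) {
        int  work[R];
        bool finished[P] = { false };
        for (int j = 0; j < R; j++) work[j] = available[j];

        for (int done = 0; done < P; ) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                bool can_run = true;           // can process i's remaining need be met from work?
                for (int j = 0; j < R; j++)
                    if (max[i][j] - alloc[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {                 // pretend process i finishes and returns its resources
                    for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                    finished[i] = true;
                    progressed  = true;
                    done++;
                }
            }
            if (!progressed) return false;     // nobody can finish: unsafe state
        }
        return true;                           // everyone can finish: safe state
    }

A request is granted only if is_safe() still returns true after tentatively making the allocation; otherwise the request is denied or the process is made to wait.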

OR

Q.3

(a) Give the Difference between Thread and Process.

The difference between a thread and a process is as follows:

| Thread | Process |
| --- | --- |
| Lightweight unit of execution | Independent unit of execution |
| Shares memory space and resources with other threads | Has its own memory space and resources |
| Executes concurrently with other threads | Executes as a standalone program |
| Faster context switching | Slower context switching |
| Less expensive to create and manage | Higher overhead in terms of memory and resource allocation |

(b) Explain the Priority scheduling algorithm

Priority scheduling is a CPU scheduling algorithm where each process is assigned a priority value, and the process with the highest priority is given the CPU for execution.

  • Each process is assigned a priority value based on factors such as importance, user-defined priority levels, or specific characteristics.
  • Preemption may occur when a higher priority process arrives or when a running process's priority is increased. In such cases, the currently executing process may be preempted to allow the higher priority process to run.
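
A small worked example of non-preemptive priority scheduling (the process data below is assumed purely for illustration, with a smaller number meaning higher priority):

| Process | Burst time | Priority |
| --- | --- | --- |
| P1 | 10 | 3 |
| P2 | 1 | 1 |
| P3 | 2 | 2 |

All three arrive at time 0, so they run in the order P2 → P3 → P1. Their waiting times are 0, 1, and 3 respectively, giving an average waiting time of (0 + 1 + 3) / 3 ≈ 1.33.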

Advantages of Priority Scheduling:

  • Allows processes to be scheduled based on their relative importance or priority.
  • Useful in scenarios where certain processes require immediate attention or have higher priority over others.

Limitations of Priority Scheduling:

  • May lead to starvation if a process with lower priority never gets a chance to execute.
  • Can result in priority inversion, where a lower priority process holds a resource needed by a higher priority process, causing delays.

(c) How to characterize the structure of deadlock? Explain the two solutions of recovery from deadlock

Characterizing the structure of deadlock involves identifying the necessary conditions for deadlock to occur. Four conditions must hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait. The first two are elaborated below:

  1. Mutual Exclusion: Resources involved in the deadlock must be non-sharable, meaning only one process can use a resource at a time. This condition ensures that resources cannot be simultaneously accessed or modified by multiple processes.
  2. Hold and Wait: Processes hold resources while waiting for additional resources to become available. A process in a deadlock situation is holding at least one resource and waiting for another resource that is held by another process. This condition leads to resources being locked and unavailable for other processes.

Solutions for recovery from deadlock include:

  1. Deadlock Avoidance: This approach focuses on preventing the occurrence of deadlock by carefully managing resource allocation. It involves using resource allocation algorithms that ensure the system remains in a safe state, where there is no possibility of deadlock. Techniques like Banker's algorithm and resource scheduling based on resource needs and availability can be used to avoid deadlock.
  2. Deadlock Detection and Recovery: In this approach, deadlock detection algorithms periodically check the system's resource allocation graph to identify the presence of a deadlock. Once a deadlock is detected, recovery mechanisms are applied to resolve the deadlock. Common recovery methods include:
     a) Process Termination: One or more processes involved in the deadlock are terminated to break the circular wait and release the resources they hold. This approach, however, should be implemented carefully to avoid data loss or disruption of critical processes.
     b) Resource Preemption: Resources are forcibly taken back from one or more processes to satisfy the resource requirements of other processes. The preempted resources are then allocated to the waiting processes to break the deadlock. Preemption should be performed judiciously, considering factors like process priorities and impact on system stability.

Q.4

(a) List out the seven RAID levels.

The seven RAID (Redundant Array of Independent Disks) levels are:

  • RAID 0: Striping
  • RAID 1: Mirroring
  • RAID 2: Error Correction Coding
  • RAID 3: Bit-level Striping with Parity
  • RAID 4: Block-level Striping with Parity
  • RAID 5: Block-level Striping with Distributed Parity
  • RAID 6: Block-level Striping with Double Distributed Parity

(b) Write short note on: Relocation problem for multiprogramming with fixed partitions.

  • In multiprogramming with fixed partitions, several programs reside in memory at once, and a given program may be loaded into whichever partition happens to be free, so it cannot know in advance at which physical address it will run.
  • When a program is compiled and linked, its addresses are typically generated as if it were loaded at address 0. If it is actually loaded into a partition starting at, say, 100K, every internal address (jump targets, data references) is off by 100K. This is the relocation problem.
  • Two common solutions are:
    1. Static relocation: the loader modifies all addresses in the program at load time, adding the partition's starting address to each of them.
    2. Dynamic relocation with base and limit registers: the base register holds the partition's starting address and is added to every address the program generates at run time, while the limit register prevents the program from addressing memory outside its partition.
  • Base and limit registers also address the related protection problem, and fixed partitions additionally suffer from internal fragmentation, since a job rarely fills its partition exactly.
  • Placement strategies such as first-fit or best-fit concern which partition (or hole) a job is placed into; they are related to memory allocation but are a separate issue from relocation.

(c) What is paging? Discuss basic paging technique in details.

Paging is a memory management technique used in operating systems to support virtual memory. The logical (virtual) memory of a process is divided into fixed-size blocks called "pages," and the physical memory is divided into blocks of the same size called "page frames." Each page frame can hold exactly one page of the process's virtual memory.

The basic paging technique involves the following steps:

  1. Divide the logical memory: The logical memory is divided into fixed-size pages. Each page has a unique identifier or address.
  2. Divide the physical memory: The physical memory is divided into fixed-size page frames. The size of the page frames is the same as the size of the pages.
  3. Page table: Each process has a page table that keeps track of the mapping between logical pages and physical page frames. The page table contains page table entries (PTEs) that store the mapping information.
  4. Address translation: When a process references a memory location, the virtual address is divided into a page number and an offset. The page number is used to index the page table and retrieve the corresponding PTE. The PTE contains the physical page frame number where the desired data is stored.
  5. Page fault: If the required page is not present in the physical memory (i.e., a page fault occurs), the operating system triggers a page fault handler. The page fault handler is responsible for fetching the required page from secondary storage (e.g., disk) into an available page frame in the physical memory.
  6. Page replacement: If there are no available page frames in the physical memory, the operating system needs to select a page to evict and make space for the incoming page. Various page replacement algorithms, such as LRU (Least Recently Used) or FIFO (First-In-First-Out), are used to decide which page to replace.
  7. Memory protection: Paging provides memory protection by assigning each page a protection attribute (e.g., read-only, read-write, execute). The protection attributes are checked during memory accesses to ensure that processes can only access their own memory.
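
A minimal sketch of the address-translation step (step 4), assuming 4 KB pages and a single-level page table with illustrative mappings:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u                 // 4 KB pages (illustrative)
    #define OFFSET_BITS 12                    // log2(PAGE_SIZE)
    #define NUM_PAGES   16                    // tiny address space for the example

    // page_table[p] holds the frame number for virtual page p (valid bit omitted).
    static uint32_t page_table[NUM_PAGES] = { [0] = 5, [1] = 9, [2] = 2, [3] = 7 };

    uint32_t translate(uint32_t virtual_addr) {
        uint32_t page   = virtual_addr >> OFFSET_BITS;        // page number
        uint32_t offset = virtual_addr & (PAGE_SIZE - 1);     // offset within the page
        uint32_t frame  = page_table[page];                   // look up the frame
        return (frame << OFFSET_BITS) | offset;               // physical address
    }

    int main(void) {
        // Virtual address 0x3204 lies on page 3, offset 0x204; page 3 maps to frame 7,
        // so the physical address is 0x7204.
        printf("0x%X\n", translate(0x3204));
        return 0;
    }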

OR

Q.4

(a) What is the difference between logical I/O and device I/O?

| Logical I/O | Device I/O |
| --- | --- |
| Refers to input/output operations performed at the logical or virtual level of the operating system. | Refers to input/output operations performed at the physical level, directly involving the hardware devices. |
| Abstracts the underlying hardware and provides a unified interface for applications to perform I/O operations. | Deals with the actual hardware devices and the low-level communication protocols required for data transfer. |
| Involves file operations, network communication, and other high-level I/O abstractions. | Involves device drivers, device controllers, and physical data transfer between the device and memory. |

(b) Write the first, best fit memory allocation techniques.

First Fit:

  1. Start from the beginning of the memory space.
  2. Search for the first available memory block that is large enough to accommodate the requested size.
  3. Allocate the memory block to the process.
  4. If the allocated block is larger than required, split it into two parts, where one part is allocated to the process and the other part remains as free memory.
  5. Repeat the process for subsequent memory allocation requests.

Best Fit:

  1. Start from the beginning of the memory space.
  2. Search for the smallest available memory block that is large enough to accommodate the requested size.
  3. Allocate the memory block to the process.
  4. If the allocated block is larger than required, split it into two parts, where one part is allocated to the process and the other part remains as free memory.
  5. Repeat the process for subsequent memory allocation requests.
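
A minimal sketch of both searches over a linked free list (the hole structure and its fields are illustrative):

    #include <stddef.h>

    struct hole { size_t start, size; struct hole *next; };   // one free memory block

    // First fit: return the first hole large enough for the request.
    struct hole *first_fit(struct hole *free_list, size_t request) {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;                                           // no hole is big enough
    }

    // Best fit: return the smallest hole that still fits the request.
    struct hole *best_fit(struct hole *free_list, size_t request) {
        struct hole *best = NULL;
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request && (best == NULL || h->size < best->size))
                best = h;
        return best;                                           // may leave a very small leftover
    }

    // After choosing a hole, the allocator carves `request` bytes from it and keeps
    // the remainder (if any) on the free list.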

(c) Define Virtual Memory. Explain the process of converting virtual addresses to physical addresses with a neat diagram.

Virtual Memory: Virtual memory is a memory management technique that allows a computer system to use secondary storage, such as hard disk space, as an extension of the primary memory (RAM). It enables the system to run programs that require more memory than what is physically available by using a combination of RAM and disk storage.

Process of Converting Virtual Addresses to Physical Addresses:

Figure: Virtual-to-physical address translation (https://media.geeksforgeeks.org/wp-content/uploads/operating_system.png)

  1. The CPU generates a virtual address that is used by the program.
  2. The virtual address is divided into a page number and an offset.
  3. The page number is used as an index to access the page table, which is a data structure that maps virtual pages to physical pages.
  4. The page table entry corresponding to the page number is retrieved, which contains the physical page number.
  5. The offset is added to the physical page number to form the physical address.
  6. The CPU then uses the physical address to access the actual data in the main memory.
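
A short worked example (page size and mapping assumed for illustration): with 4 KB pages, the offset occupies the low 12 bits of the address. For virtual address 0x5ABC, the page number is 0x5ABC >> 12 = 5 and the offset is 0xABC. If the page table maps page 5 to frame 2, the physical address is 2 × 4096 + 0xABC = 0x2ABC.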

Q.5

(a) Explain access control list.

Access Control List (ACL):

  1. Definition: An ACL is a security mechanism that specifies the access permissions for different entities (users, groups, or roles) on a resource.
  2. Permissions: ACLs define the level of access that each entity has on a resource, such as read, write, execute, or delete.
  3. Access Evaluation: When a user or process requests access to a resource, the system checks the ACL associated with that resource to determine if the requested access is allowed based on the permissions specified in the ACL.
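
A minimal sketch of how an ACL and its access check might be represented (the entry layout, permission bits, and entities are illustrative assumptions):

    #include <stdbool.h>
    #include <string.h>

    #define READ  0x4
    #define WRITE 0x2
    #define EXEC  0x1

    struct acl_entry { const char *who; int perms; };     // one entry: entity + allowed rights

    // Illustrative ACL for a single file: alice may read/write, the "staff" group may read.
    static struct acl_entry file_acl[] = {
        { "alice", READ | WRITE },
        { "staff", READ },
    };

    // Access evaluation: grant the request only if some entry for this entity allows it.
    bool access_allowed(const char *who, int requested) {
        for (size_t i = 0; i < sizeof file_acl / sizeof file_acl[0]; i++)
            if (strcmp(file_acl[i].who, who) == 0 && (file_acl[i].perms & requested) == requested)
                return true;
        return false;
    }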

(b) Differentiate between Windows and Linux file system.

| Windows File System (NTFS) | Linux File System (ext4) |
| --- | --- |
| Proprietary file system developed by Microsoft. | Open-source file system used by various Linux distributions. |
| Supports advanced features like encryption, compression, and access control lists (ACLs). | Supports file and folder permissions with three levels of access control: read, write, and execute. |
| Provides support for larger disk sizes and file sizes. | Provides support for large file sizes and partition sizes. |
| Supports journaling for data consistency and recovery. | Supports journaling for data integrity and reliability. |
| Widely used as the default file system for Windows. | Commonly used as the default file system for Linux. |

(c) Write about Least Recently Used page replacement algorithm all its variants with an example.

The Least Recently Used (LRU) page replacement algorithm is a popular method used in operating systems to decide which page in memory to replace when a page fault occurs and no free frame is available. It is based on the principle that the page that has been least recently accessed is the least likely to be accessed in the near future. The LRU algorithm aims to minimize the number of page faults and improve overall system performance. There are several variants of the LRU algorithm, including:

  1. Basic LRU:
    • In this variant, each page is assigned a timestamp or a counter value that indicates the last time the page was accessed.
    • When a page needs to be replaced, the page with the oldest timestamp or the lowest counter value is selected.
    • This algorithm requires updating the timestamp or counter value for every page access.
  2. Stack-based LRU:
    • In this variant, a stack data structure is used to keep track of the page access history.
    • Whenever a page is accessed, it is moved to the top of the stack, indicating it is the most recently used.
    • When a page needs to be replaced, the page at the bottom of the stack (least recently used) is selected.
    • This algorithm requires updating the stack for every page access and page replacement.
  3. Clock-based LRU:
    • This variant (also known as the second-chance or clock algorithm) approximates LRU using a circular, clock-like data structure traversed by a pointer called the clock hand.
    • Each page has a reference bit associated with it, which is initially set to 0.
    • As pages are accessed, their reference bit is set to 1, indicating they have been recently used.
    • When a page needs to be replaced, the clock hand scans the pages in a circular manner.
    • The page encountered with a reference bit of 0 is considered the least recently used and is selected for replacement.
    • This algorithm requires periodic resetting of reference bits and advancing the clock hand.
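
A short worked example (reference string and frame count assumed for illustration): with 3 frames and the reference string 1, 2, 3, 4, 1, 2, 5, basic LRU behaves as follows. References 1, 2, 3 each fault and fill the frames; 4 faults and evicts 1 (least recently used); 1 faults and evicts 2; 2 faults and evicts 3; 5 faults and evicts 4, for a total of 7 page faults.

A minimal sketch of the basic (timestamp/counter) variant in C (the frame count and page numbering are illustrative):

    #include <stdio.h>

    #define FRAMES 3

    static int  page[FRAMES] = { -1, -1, -1 };    // which page each frame holds (-1 = empty)
    static long last_used[FRAMES];                // timestamp of the most recent access
    static long clock_ticks = 0;

    void reference(int p) {
        clock_ticks++;
        int victim = 0;
        for (int i = 0; i < FRAMES; i++) {
            if (page[i] == p) {                   // hit: just refresh the timestamp
                last_used[i] = clock_ticks;
                return;
            }
            if (last_used[i] < last_used[victim]) // remember the least recently used frame
                victim = i;
        }
        printf("fault: page %d replaces page %d\n", p, page[victim]);
        page[victim] = p;                         // page fault: evict the LRU frame
        last_used[victim] = clock_ticks;
    }

Calling reference() once per element of the reference string above reproduces the 7 faults.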

OR

Q.5

(a) Explain domain protection mechanism.

  • Domain protection mechanism refers to a security feature that separates processes or programs into isolated domains.
  • This mechanism ensures that processes running in one domain cannot interfere with or access resources in another domain.
  • It helps prevent unauthorized access, enhances system security, and provides fault containment.

(b) Write a short note: Unix kernel.

The Unix kernel is the core component of the Unix operating system. It serves as the bridge between user programs and the hardware of the computer.

Unix kernel:

  1. Core Functionality: The Unix kernel provides essential operating system services such as process management, memory management, file system management, device management, and inter-process communication.
  2. Multitasking and Multiprocessing: The kernel enables multitasking by allowing multiple processes to run concurrently, sharing the CPU's time. It also supports multiprocessing, allowing execution on multiple processors or cores.
  3. File System: The Unix kernel manages the file system, handling file operations, directory management, and file access permissions. It provides a hierarchical file structure and supports various file systems, such as ext4, NFS, and FAT.
  4. Device Management: The kernel interacts with hardware devices, managing their initialization, control, and data transfer. It provides device drivers to facilitate communication between the operating system and hardware components.

(c) Describe in detail about variety of techniques used to improve the efficiency and performance of secondary storage.

Efficiency and performance improvements in secondary storage are crucial for enhancing overall system performance. There are several techniques commonly used to achieve these improvements:

  1. Caching: Caching involves storing frequently accessed data in a faster storage medium, such as a solid-state drive (SSD) or RAM, to reduce access latency.
  2. Buffering: Buffering is a technique that involves using a memory buffer to temporarily hold data before it is written to or read from the secondary storage.
  3. Prefetching: Prefetching anticipates future data access patterns and fetches data from the secondary storage in advance. By fetching data before it is actually needed, prefetching reduces the latency associated with accessing data from slower storage devices.
  4. Compression: Compression techniques are used to reduce the size of data stored on secondary storage. Compressed data occupies less space, resulting in improved storage efficiency and reduced I/O overhead.
  5. Deduplication: Deduplication eliminates duplicate data blocks from secondary storage, thereby saving space and reducing storage I/O operations. It is commonly used in scenarios where large amounts of redundant data are present, such as in backup systems.
  6. RAID (Redundant Array of Independent Disks): RAID techniques involve combining multiple physical disks into a single logical unit to improve performance, reliability, and storage capacity. RAID configurations, such as RAID 0 (striping), RAID 1 (mirroring), or RAID 5 (striping with parity), distribute data and its parity across multiple disks, enabling parallel access and fault tolerance.
  7. Tiered Storage: Tiered storage involves organizing data into different tiers based on its access patterns and importance. Frequently accessed or critical data is stored on faster storage tiers, while less frequently accessed or less critical data is stored on slower and more cost-effective storage tiers.