GTU Operating System Winter 2021 Paper Solution

Q.1

(a) Define the following:

(1) System bus

The system bus is a collection of wires that enables the transfer of data, addresses, and control signals between the various components of a computer system.

(2) Auxiliary memory

Auxiliary memory, also known as secondary memory or external storage, refers to non-volatile storage devices used for long-term or permanent data storage, such as hard disk drives (HDDs), solid-state drives (SSDs), magnetic tapes, and optical discs (e.g., CDs, DVDs, Blu-ray discs).

(b) What do you mean by cache memory? Explain the cache read operation.

Cache memory is a high-speed memory component positioned between the CPU and the main memory in a computer system. Its purpose is to store frequently accessed data and instructions, aiming to improve overall system performance by reducing average access time.

The cache read operation involves checking if the requested data is present in the cache memory and retrieving it if found. The steps involved are as follows:

  1. The CPU sends a memory request for the desired data.
  2. The cache controller checks if the data is present in the cache by comparing the memory address with the cache tags.
  3. If the data is found in the cache (cache hit), it is quickly accessed and returned to the CPU, significantly reducing access time.
  4. If the data is not in the cache (cache miss), the cache coherence protocol ensures data consistency and triggers a fetch from the main memory. The fetched data is stored in the cache and returned to the CPU.
  5. Replacement algorithms may be employed to determine which data should be evicted from the cache when new data needs to be loaded.

(c) What is process? Explain the process creation and termination

Process: a process refers to an instance of a computer program that is being executed. Each process has its own memory space, program counter, registers, and other resources that are necessary for its execution.

Process Creation:

Process creation is the act of generating a new process in an operating system. This typically occurs in response to a user request or when a program is launched. The steps involved in process creation are as follows:

  1. Request: A process creation request is made.
  2. Allocation: The operating system allocates necessary resources like memory and a unique process identifier (PID).
  3. Address Space: An address space is established for the process.
  4. Loading: The program code and associated data are loaded into memory.
  5. Initialization: The process is initialized with required values.
  6. Execution: The new process is scheduled by the operating system and begins execution.

Process Termination:

Process termination refers to the orderly conclusion of a process, involving the release of resources, updating process control information, and removal from the system's process table.

The steps are as follows:

  1. Exit Request: A termination request is initiated.
  2. Resource Deallocation: The process releases acquired resources.
  3. PCB Update: The process control block is updated with termination information.
  4. Parent Notification: If applicable, the parent process may be notified.
  5. Exit Status: The process may return an exit status or code.
  6. PCB Cleanup: The process control block is removed, and resources are deallocated.
  7. Process Removal: The terminated process is removed from the system's process table.

Q.2

(a) Define the term critical section

  • The critical section refers to a specific part of a program where shared resources or shared data structures are accessed and modified.
  • It is a section of code that must be executed atomically, meaning that only one process or thread can access it at a time, to maintain data integrity and prevent race conditions.
  • The critical section is a crucial segment of a program that requires exclusive access to shared resources to ensure proper synchronization and avoid conflicts among concurrent processes or threads.

(b) Difference between user level and kernel level thread.

Aspect | User-Level Threads (ULTs) | Kernel-Level Threads (KLTs)
Management | Managed by user-level libraries | Managed by the operating system (kernel)
Thread Control | Controlled by the application or runtime | Controlled by the operating system scheduler
Creation/Scheduling | Faster creation and switching | Slightly slower due to kernel involvement
Blocking | Blocking one thread can block the entire process | Blocking one thread does not affect others
Concurrency | Limited concurrency within a process | Higher concurrency, utilizing operating system features
Overhead | Lower overhead without involving the kernel | Slightly higher overhead due to kernel involvement
Implementation | Easier to implement | More complicated to implement

(c) Consider following processes with length of CPU burst time in milliseconds

Process | Burst Time
P1 | 5
P2 | 10
P3 | 2
P4 | 1

All processes arrive in the order P1, P2, P3, P4 at time zero.

(1) Draw Gantt charts illustrating the execution of these processes for SJF and Round Robin (quantum = 1)

Gantt chart for SJF -

| P4 | P3 | P1 | P2 |
0    1    3    8    18

Gantt chart for Round Robin (quantum = 1) -

| P1 | P2 | P3 | P4 | P1 | P2 | P3 | P1 | P2 | P1 | P2 | P1 | P2 |
0    1    2    3    4    5    6    7    8    9   10   11   12   18

(2) Calculate waiting time for each process for each scheduling algorithm

For SJF -

Process | Burst Time | Waiting Time
P1 | 5 | 3
P2 | 10 | 8
P3 | 2 | 1
P4 | 1 | 0

For Round Robin (quantum = 1) -

Process | Burst Time | Waiting Time
P1 | 5 | 7
P2 | 10 | 8
P3 | 2 | 5
P4 | 1 | 3

(3) Calculate average waiting time for each scheduling algorithm

For SJF -

(3 + 8 + 1 + 0)/4 = 3 ms

For Round Robin (quantum = 1) -

(7 + 8 + 5 + 3)/4 = 5.75 ms

OR

(c) What are various criteria for a good process scheduling algorithm? Explain any two preemptive scheduling algorithms in brief.

Various criteria for a good process scheduling algorithm include:

  1. CPU Utilization: The scheduler should aim to keep the CPU as busy as possible to maximize its utilization.
  2. Throughput: The scheduling algorithm should maximize the number of processes completed per unit of time.
  3. Turnaround Time: The scheduler should minimize the time taken for a process to complete its execution, including waiting and execution time.
  4. Waiting Time: The scheduler should aim to minimize the amount of time a process spends waiting in the ready queue.
  5. Response Time: The scheduling algorithm should provide quick response times to interactive processes to ensure a smooth user experience.
  6. Fairness: The scheduler should ensure fairness by providing equal opportunities for all processes to execute and avoid process starvation.

Preemptive Scheduling Algorithms:

  1. Shortest Remaining Time First (SRTF):
    • This algorithm is an extension of Shortest Job First (SJF) where the scheduler selects the process with the smallest remaining burst time.
    • If a new process arrives with a shorter burst time than the currently executing process, preemption occurs, and the new process is scheduled.
    • SRTF provides optimal turnaround time but may suffer from increased overhead due to frequent context switches.
  2. Round Robin (RR):
    • RR is a preemptive scheduling algorithm where each process is assigned a fixed time quantum.
    • Processes are executed in a cyclic manner, allowing each process to run for the quantum time before being preempted.
    • If a process does not complete within the quantum, it is moved to the back of the ready queue.

Q.3

(a) What is meant by priority inversion?

  • Priority inversion refers to a situation where a lower-priority process holds a resource that a higher-priority process requires, causing a delay in the execution of the higher-priority process.
  • This can occur in a multi-priority scheduling system when resource dependencies are not properly managed.
  • The effect worsens when medium-priority processes preempt the lower-priority holder, keeping the high-priority process blocked; protocols such as priority inheritance are a common remedy.

(b) What is the criterion used to select the time quantum in case of round-robin scheduling algorithm? Explain it with a suitable example.

  • The time quantum in the Round-Robin (RR) scheduling algorithm is selected based on a trade-off between reducing context switch overhead and providing reasonable response times.
  • The time quantum should allow each process to execute for a fair amount of time before being preempted.
  • The chosen time quantum strikes a balance between responsiveness and minimizing overhead.
  • For example, if the time quantum is set to 4 milliseconds in a system with processes P1, P2, and P3, each process will get a chance to execute for 4 milliseconds before the scheduler switches to the next process.

(c) What is Semaphore? Give the implementation of Bounded Buffer Producer Consumer Problem using Semaphore.

A semaphore is a variable that provides an abstraction for controlling the access of a shared resource by multiple processes in a parallel programming environment.

There are two types of semaphores:

  • Binary semaphores:
    • Can take only two values (0/1).
    • Have two operations associated with them (up/down, also called unlock/lock).
    • They are used to acquire locks.
  • Counting semaphores:
    • Can take any non-negative integer value, allowing more than two possible values.
Bounded buffer producer-consumer using semaphores (down = wait/P, up = signal/V):

#define N 4                      /* buffer capacity */

typedef int semaphore;

semaphore mutex = 1;             /* guards access to the buffer */
semaphore empty = N;             /* counts empty slots */
semaphore full  = 0;             /* counts filled slots */

void producer(void)
{
    int item;
    while (true) {
        item = produce_item();   /* generate the next item */
        down(&empty);            /* wait for an empty slot */
        down(&mutex);            /* enter critical section */
        insert_item(item);       /* put the item in the buffer */
        up(&mutex);              /* leave critical section */
        up(&full);               /* one more filled slot */
    }
}

void consumer(void)
{
    int item;
    while (true) {
        down(&full);             /* wait for a filled slot */
        down(&mutex);            /* enter critical section */
        item = remove_item();    /* take an item from the buffer */
        up(&mutex);              /* leave critical section */
        up(&empty);              /* one more empty slot */
        consume_item(item);      /* use the item outside the lock */
    }
}

OR

Q.3

(a) What is Deadlock? List the conditions that lead to deadlock

Deadlock refers to a situation in a concurrent system where two or more processes are unable to proceed because each is waiting for a resource held by another process, resulting in a state where no progress can be made.

Conditions that lead to deadlock:

  1. Mutual Exclusion: Resources cannot be shared between processes.
  2. Hold and Wait: Processes hold allocated resources while waiting for additional resources.
  3. No Preemption: Resources cannot be forcibly taken away from processes.
  4. Circular Wait: There exists a circular chain of processes, where each process is waiting for a resource held by another process in the chain.

(b) List the criteria used to evaluate the performance of CPU scheduling algorithms.

The criteria used to evaluate the performance of CPU scheduling algorithms include:

  1. CPU Utilization: The extent to which the CPU is utilized efficiently. Higher CPU utilization indicates better performance.
  2. Throughput: The total number of processes completed per unit of time. Higher throughput implies better performance.
  3. Turnaround Time: The total time taken for a process to complete from submission to termination. Lower turnaround time is desirable for better performance.
  4. Waiting Time: The amount of time a process spends waiting in the ready queue before its execution. Minimizing waiting time improves performance.

These criteria help assess the efficiency, effectiveness, and responsiveness of CPU scheduling algorithms and their impact on system performance.

(c) What is advantage of using Monitor? Give the implementation of Bounded Buffer Producer Consumer Problem using Monitor.

  • A Monitor provides a convenient and structured way to handle shared resources in concurrent programs.
  • It ensures that only one thread can access a shared resource at a time, preventing conflicts and data corruption.
  • Monitors encapsulate shared resources and their synchronization mechanisms, making concurrent programming easier to manage and understand.
  • They provide built-in features like condition variables for efficient thread synchronization and data protection.

Q.4

(a) What is resource allocation graph?

  • A resource allocation graph is a graphical representation that depicts the relationships between processes and resources in a system.
  • It is used to analyze resource allocation and detect potential deadlock conditions.
  • In the graph, both processes and resources are represented by nodes; directed edges represent resource requests (process → resource) and resource assignments (resource → process).
  • The main purpose of the resource allocation graph is to identify if there exist circular wait conditions, which can lead to deadlock.

(b) Explain paging technique.

  • Paging is a memory management technique used in operating systems.
  • It divides the logical memory of a process into fixed-size blocks called pages.
  • The physical memory is also divided into blocks of the same size called page frames.
  • A page table is used to map logical pages to physical page frames.
  • This mapping allows processes to access memory in a flexible and efficient manner.
  • Paging improves memory allocation and addressing.
  • It enables processes to utilize non-contiguous physical memory.
  • Overall, paging enhances system performance and simplifies memory management.

(c) Explain the following allocation algorithms: (1) First-fit (2) Best-fit (3) Worst-fit

  1. First-Fit:
    • The first-fit allocation algorithm assigns the first available memory block that is large enough to satisfy a process's memory requirements.
    • It searches the memory blocks from the beginning and selects the first block that can accommodate the process.
    • This algorithm is simple and efficient in terms of searching and allocation time.
    • However, it may lead to external fragmentation, as smaller memory blocks might remain unused between allocated blocks.
  2. Best-Fit:
    • The best-fit allocation algorithm selects the smallest available memory block that is large enough to hold the process.
    • It searches through the entire memory space and selects the block with the least wastage (i.e., the smallest unused portion after allocation).
    • Best-fit aims to minimize external fragmentation and efficiently utilizes memory space.
    • However, it requires more time for searching and can be less efficient than other algorithms for large memory systems.
  3. Worst-Fit:
    • The worst-fit allocation algorithm selects the largest available memory block to allocate the process.
    • It searches the entire memory space and assigns the block that leaves the largest unused portion after allocation.
    • Worst-fit can lead to more significant external fragmentation compared to other algorithms.
    • It is generally less efficient in terms of memory utilization but can be useful in scenarios where large memory blocks are occasionally required.

OR

Q.4

(a) When is a system in a safe state?

A system is considered to be in a safe state in the context of resource allocation and deadlock avoidance if the system can allocate resources to all processes in some order without entering a deadlock situation.

  • In a safe state, there exists at least one order in which every process can obtain its maximum resource requirements, run to completion, and release its resources without causing a deadlock.
  • The safe state ensures that all processes will eventually complete their execution, avoiding resource deadlocks.

(b) Explain segmentation.

  • Segmentation is a memory management technique that divides the logical address space of a process into variable-sized segments.
  • Each segment represents a distinct unit of the program, such as code, data, stack, or heap.
  • Segmentation allows for flexible memory allocation and efficient sharing of code segments among multiple processes.
  • However, it can lead to external fragmentation, where free memory is scattered in small segments.

(c) What is fragmentation? Explain the difference between internal and external fragmentation

Fragmentation refers to the phenomenon where free memory in a computer system becomes divided into small, non-contiguous blocks, making it challenging to allocate large contiguous blocks of memory.

Here's an explanation of the difference between internal and external fragmentation:

  1. Internal Fragmentation:
    • Internal fragmentation occurs when a process is allocated more memory than it actually needs, resulting in unused or wasted memory.
    • It happens when the allocated memory block is larger than the requested memory size, and the excess space within the block remains unused.
    • Internal fragmentation is more common in fixed-size memory allocation schemes, such as partitioning or paging.
    • It reduces overall memory efficiency as some memory is allocated but remains unused.
  2. External Fragmentation:
    • External fragmentation occurs when free memory is scattered in small non-contiguous blocks, making it challenging to find contiguous blocks of memory to allocate to processes.
    • It happens when allocated memory blocks are deallocated or become smaller, leaving small gaps of unused memory scattered throughout the system.
    • External fragmentation is more common in variable-size memory allocation schemes, such as segmentation or dynamic memory allocation.
    • It reduces memory utilization and can lead to inefficient memory allocation, especially when large contiguous blocks are required.

Q.5

(a) Explain RAID. How is it helpful to increase CPU performance?

RAID (Redundant Array of Independent Disks) is a storage technology that combines multiple physical disk drives into a single logical unit to improve data performance, reliability, or both.

  • RAID improves CPU performance by utilizing techniques such as data striping and parallelism.
  • Data striping distributes data across multiple disks, allowing for simultaneous read and write operations that increase data access speed.
  • Parallelism enables multiple disk drives to work in parallel, reducing disk I/O bottlenecks and improving overall system throughput.

(b) Explain the following Linux commands:

(1) mkdir

  • The mkdir command is used to create a new directory or folder.
  • It accepts the name of the directory as an argument and creates it in the current working directory.
  • Example: mkdir documents will create a new directory named "documents".

(2) touch

  • The touch command is used to create an empty file or update the access and modification timestamp of an existing file.
  • It accepts the name of the file as an argument and creates an empty file if it doesn't exist.
  • Example: touch file.txt will create a new file named "file.txt".

(3) cat

  • The cat command is used to display the contents of a file on the terminal.
  • It can be used to concatenate multiple files and display their contents together.
  • Example: cat file.txt will display the contents of "file.txt" on the terminal.

(4) rm

  • The rm command is used to remove or delete files and directories.
  • It accepts the name of the file or directory as an argument and deletes it permanently.
  • Example: rm file.txt will delete the file named "file.txt".

(c) What do you mean by security? Discuss in brief access control list.

Security refers to protecting computer systems, networks, and data from unauthorized access, use, disclosure, disruption, modification, or destruction. It involves implementing measures to safeguard information and resources, ensuring confidentiality, integrity, and availability.

Access Control List (ACL) is a security mechanism used to define permissions and access rights for users or groups on various resources such as files, directories, or network objects.

  • ACL is a list of permissions associated with an object, specifying who can access it and what actions they can perform.
  • It provides granular control over resource access by allowing administrators to define permissions for individual users or groups.
  • ACL consists of entries that define access permissions, typically including read, write, execute, and delete rights.
  • Each entry in the ACL contains a subject (user or group) and the corresponding permissions granted or denied.
  • They are managed by operating systems or network devices and are an essential component of access control mechanisms.
  • They play a vital role in ensuring data confidentiality, preventing unauthorized modifications, and maintaining system integrity.

OR

Q.5

(a) Explain i/o buffering

  • I/O buffering is a technique used to improve the efficiency of input and output operations by temporarily storing data in a buffer or cache.
  • Read buffering involves reading data in larger chunks and storing it in a buffer to minimize disk or network accesses.
  • Write buffering accumulates data in a buffer before initiating the actual write operation, reducing writes to the storage or network.

(b) What is virtualization? Explain the benefits of virtualization.

Virtualization allows for the creation of multiple virtual instances or environments on a single physical resource, enabling efficient utilization of hardware resources.

Benefits of virtualization include:

  1. Increased Efficiency: Virtualization enables consolidation of multiple virtual machines (VMs) onto a single physical server, reducing hardware costs and improving resource utilization.
  2. Flexibility and Scalability: Virtual machines can be easily provisioned, replicated, and moved across physical servers, providing flexibility and scalability to adapt to changing workload demands.
  3. Cost Savings: By consolidating multiple physical servers into virtual machines, organizations can save on power, cooling, maintenance, and physical space requirements.

(c) Why is segmented paging important (as compared to a paging system)? What are the different pieces of the virtual address in a segmented paging?

Segmented paging combines the advantages of both segmentation and paging techniques, offering improved memory management capabilities.

Importance of Segmented Paging:

  • Efficient Memory Management: Segmented paging provides a flexible and efficient way to manage memory by dividing the logical address space into segments and then further dividing those segments into pages.
  • Address Space Organization: It allows for the logical division of the address space into multiple segments, which can correspond to different parts of an application or different memory requirements.
  • Protection and Sharing: Segmented paging enables better protection and sharing of memory among different processes or segments by assigning different access rights and permissions to each segment.

Different Pieces of the Virtual Address in Segmented Paging:

  • Segment Selector: It is a field within the virtual address that identifies the segment to which the address belongs. The segment selector is used to index the segment descriptor table, which contains information about the segment, such as its base address, limit, and access rights.
  • Offset: It represents the displacement or distance within the segment. The offset is used to access a specific location within the segment.

In segmented paging, the virtual address is divided into two parts: the segment selector and the offset. The segment selector is used to identify the segment, and the offset is used to access a specific location within the segment. This combination allows for efficient memory management, protection, and sharing of memory resources.