Operating System



1. New: The process is being created.

2. Ready: After creation the process moves to the Ready state, i.e.,
         the process is ready for execution.

3. Run: The process currently executing on the CPU (only one process
       at a time can be under execution on a single processor).

4. Wait (or Block): The process has made an I/O request and is waiting
       for it to complete.

5. Complete (or Terminated): The process has completed its execution.

6. Suspended Ready: When the ready queue becomes full, some processes
                   are moved to the suspended ready state.

7. Suspended Block: When the waiting queue becomes full, some waiting
                   processes are moved to the suspended block state.

Note:

User-Level Threads

User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call into the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
  • User-level threads do not require modification to the operating system.
  • Simple representation: each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
  • Simple management: creating a thread, switching between threads and synchronizing between threads can all be done without intervention of the kernel.
  • Fast and efficient: thread switching is not much more expensive than a procedure call.
Disadvantages:
  • There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
  • User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads

In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.

The general structure of a kernel-level thread implementation is
                           <DIAGRAM>
Advantages:
  • Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
  • Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
  • Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
  • Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased kernel complexity.


2. Which of the following need not necessarily be saved on a context switch between processes? (GATE CS 2000)
Translation look-aside buffer
3. Where does the swap space reside ?
Disk
4. Which of the following does not interrupt a running process?
Scheduler process
Note:

Context Switch

To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the current running process has used. If it has used up its entire time slice, then the CPU scheduling algorithm (in kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.

Major Steps of Context Switching

  • The values of the CPU registers are saved in the process table of the process that was running just before the clock interrupt occurred.
  • The registers are loaded from the process picked by the CPU scheduler to run next.

Virtual Memory (its storage allocation scheme)

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
  1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution.
  2. A process may be broken into a number of pieces and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is known as demand paging.
Thrashing

 
Objective
Which of the following page replacement algorithms suffers from Belady’s anomaly?
 FIFO

Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First in First Out (FIFO) page replacement algorithm. 

.............................

What is the swap space in the disk used for?
Saving process data

.....................................

Increasing the RAM of a computer typically improves performance because:
Fewer page faults occur

When there is more RAM, there would be more mapped virtual pages in physical memory, hence fewer page faults. A page fault causes performance degradation as the page has to be loaded from secondary device. 

.....................................

A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?

Answer:
Hardware support for memory management is no longer needed 

....................................
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:

Size of a page = 4 KB = 2^12 bytes, so the page offset is 12 bits. Total number of bits needed to address a page frame = 32 - 12 = 20. If there are 'n' cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets = 2^7 / 4 = 2^5. So 5 bits are needed to address a set, and 20 - 5 = 15 bits are needed for the tag.
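The arithmetic above can be sketched as a small helper (Python; the function name is illustrative):

```python
import math

# Minimum TLB tag size for the example above: 32-bit virtual address,
# 4 KB pages, 128-entry 4-way set-associative TLB.
def tlb_tag_bits(virtual_bits, page_size, tlb_entries, ways):
    offset_bits = int(math.log2(page_size))         # bits addressing a byte within a page
    vpn_bits = virtual_bits - offset_bits           # virtual page number bits
    set_bits = int(math.log2(tlb_entries // ways))  # bits selecting a TLB set
    return vpn_bits - set_bits                      # what remains is the tag

print(tlb_tag_bits(32, 4 * 1024, 128, 4))  # 15
```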

..............................

Thrashing occurs when processes on the system frequently access pages that are not in memory.

Thrashing occurs when the processes on the system require more memory than the system has. If processes do not have “enough” pages, the page-fault rate is very high. This leads to low CPU utilization: the operating system spends most of its time swapping pages to and from disk. This situation is called thrashing.

.....................................

Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?

Let P be the page fault rate
Effective Memory Access Time = p * (page fault service time) +
                              (1 - p) * (Memory access time)
                            = ( 1/(10^6) )* 10 * (10^6) ns +
                              (1 - 1/(10^6)) * 20 ns
                            = 30 ns (approx) 
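As a quick check of the arithmetic, the same computation in Python (values taken from the question):

```python
# One page fault per 10^6 accesses; service time 10 ms; access time 20 ns.
p = 1 / 10**6                  # page fault rate
fault_service_ns = 10 * 10**6  # 10 ms expressed in nanoseconds
mem_access_ns = 20             # average memory access time in nanoseconds

emat = p * fault_service_ns + (1 - p) * mem_access_ns
print(round(emat))  # 30 (ns, approximately)
```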
 
...................................

 
The essential content(s) in each entry of a page table is / are

Page frame number
A page table entry must contain the page frame number. The virtual page number is typically used as an index into the page table to get the corresponding page frame number.

................................

A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical address because

It helps to reduce the size of page table needed to implement the virtual address space of a process.
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference.


Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
hit ratio * (TLB access time + Main memory access time) + (1 - hit ratio) * (TLB access time + 2 * main memory time) = 0.6*(10+80) + (1-0.6)*(10+2*80) = 0.6 * (90) + 0.4 * (170) = 122
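A small sketch of the same formula (Python; the function name is illustrative). On a hit, the TLB and memory are each accessed once; on a miss, the page-table lookup adds a second memory access:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    hit = tlb_time + mem_time        # TLB search + memory access
    miss = tlb_time + 2 * mem_time   # TLB search + page table access + memory access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.6, 10, 80))  # ~122 (milliseconds, per the question's units)
```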




The absolute minimum number of frames that a process must be allocated depends on the system architecture, and corresponds to the number of pages that could be touched by a single (machine) instruction. So, it is determined by the instruction set architecture.


In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address translation is not practical because of

the large memory overhead in maintaining page tables


Which of the following is not a form of memory?
instruction opcode


Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to using statically linked libraries ?
Faster program startup
Note:
In non-shared (static) libraries, since library code is linked in at compile time, the final executable has no dependencies on the library at run time, i.e., there are no additional run-time loading costs. It means you don't need to carry along a copy of the library being used, you have everything under your control, and there is no dependency.
Dynamic linking can cause security concerns because:
The path for searching dynamic libraries is not known till runtime


About virtual memory

Virtual memory implements the translation of a program's address space into physical memory address space.
Virtual memory allows each program to exceed the size of primary memory.
Virtual memory increases the degree of multiprogramming.
Virtual memory increases the context switching overhead.
For virtual memory:
  1. We can write large programs, since memory is effectively extended.
  2. Less I/O is required in the case of virtual memory.
  3. More addressable memory is available with virtual memory.
  4. Faster and easier swapping of processes with the help of the page replacement policy in virtual memory.



The process of assigning load addresses to the various parts of the program and adjusting the code and data in the program to reflect the assigned addresses is called
Relocation

Relocation of code is the process done by the linker-loader when a program is copied from external storage into main memory.
A linker relocates the code by searching files and libraries to replace symbolic references of libraries with actual usable addresses in memory before running a program.


Where does the swap space reside?
Disk


1. Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4KB, what is the approximate size of the page table? (GATE 2001)
(a) 16 MB
(b) 8 MB
(c) 2 MB
(d) 24 MB
Number of entries in page table =
                   (virtual address space size)/(page size)
                 = 2^32 / 2^12 = 2^20
Size of page table =
 (total number of page table entries) * (size of a page table entry)
  = 2^20 * 2 bytes = 2 MB
(the entry size is 2 bytes because 64 MB / 4 KB = 2^14 frames need 14 bits, rounded up to 2 bytes)
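The same calculation as a sketch (Python). The 2-byte entry size is an assumption justified by the 64 MB physical memory: 64 MB / 4 KB = 2^14 frames need 14 bits, which rounds up to 2 bytes:

```python
virtual_space = 2**32   # 32-bit virtual address space
page_size = 4 * 1024    # 4 KB pages
entry_size = 2          # bytes per page table entry (14 frame bits rounded up)

entries = virtual_space // page_size   # 2^20 entries
table_bytes = entries * entry_size
print(table_bytes // 2**20, "MB")  # 2 MB
```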

Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
average memory access time = p*t1 + (1-p)*t2
                   = (99.99 * 1 + 0.01 * (10 * 1000)) / 100 μs
                   = 1.9999 μs ≈ 2 * 10^-6 sec


Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is
Number of entries in page table = 2^32 / 4 KB
                               = 2^32 / 2^12
                               = 2^20

Size of page table = (no. of page table entries) * (size of an entry)
                  = 2^20 * 4 bytes
                  = 2^22 bytes
                  = 4 megabytes


A computer system implements a 40 bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is _________
Total virtual address size = 40

Since there are 32 sets, set index = 5 bits

Since the page size is 8 KB = 2^13 bytes, page offset = 13 bits

Minimum tag size = 40 - 5 - 13 = 22 bits


Which one of the following is NOT shared by the threads of the same process?
Stack


In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?
Virtual memory = 2^32 bytes; physical memory = 2^30 bytes
Page size = frame size = 4 KB = 2^2 * 2^10 bytes = 2^12 bytes
Number of frames = physical memory / frame size = 2^30 / 2^12 = 2^18
Therefore, number of bits for the frame number = 18 bits
Page table entry size = frame number bits + other information
Other information = 32 - 18 = 14 bits



Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The number of bits in the TAG, SET and WORD fields, respectively are:
Given: word size = 1 byte, block size = 128 words, cache size = 8 KB, main memory = 1 MB, 4-way set associative.
Number of cache blocks = cache size / (words per block * word size) = 8 KB / (128 * 1) = 64.
Since the cache is 4-way set associative, number of sets = 64 / 4 = 16, so SET = 4 bits (16 = 2^4).
Main memory is 1 MB = 2^20 bytes, so the physical address is 20 bits. A block of 128 words needs WORD = 7 bits (128 = 2^7).
TAG = 20 - 4 - 7 = 9 bits.
Hence (TAG, SET, WORD) = (9, 4, 7).
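The split above can be reproduced with a short sketch (Python; the function name is illustrative):

```python
import math

def cache_fields(main_mem_bytes, cache_bytes, block_bytes, ways):
    addr_bits = int(math.log2(main_mem_bytes))   # physical address width
    word_bits = int(math.log2(block_bytes))      # offset within a block
    sets = cache_bytes // (block_bytes * ways)   # number of sets
    set_bits = int(math.log2(sets))
    return addr_bits - set_bits - word_bits, set_bits, word_bits  # TAG, SET, WORD

print(cache_fields(2**20, 8 * 1024, 128, 4))  # (9, 4, 7)
```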


A. Thread                     1. Interrupt
B. Virtual address space       2. Memory
C. File system                 3. CPU
D. Signal                      4. Disk
ANSWER
A-3, B-2, C-4, D-1

A multi-user, multi-processing operating system cannot be implemented on hardware that does not support: a) Address translation b) DMA for disk transfer c) At least two modes of CPU execution (privileged and non-privileged). d) Demand Paging
A, B and C


Which of the following is/are advantages of virtual memory? a) Faster access to memory on an average. b) Processes can be given protected address spaces. c) Linker can assign addresses independent of where the program will be loaded in physical memory. d) Programs larger than the physical memory size can be run.
b and d


Locality of reference implies that the page reference being made by a process
is likely to be to one of the pages used in the last few page references


Thrashing
implies excessive page I/O


Dirty bit for a page in a page table
  helps avoid unnecessary writes on a paging device


Pure paging does not suffer from external fragmentation, but instead suffers from internal fragmentation; pure segmentation does suffer from external fragmentation.
External fragmentation occurs when enough total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation occurs when the allocated memory is slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

In designing a computer’s cache system, the cache block (or cache line) size is an important parameter
Smaller block size incurs lower cache miss penalty

A cache memory needs an access time of 30 ns and main memory 150 ns, what is the average access time of CPU (assume hit ratio = 80%)?
Hit ratio of cache Hcache = 0.8, Tcache = 30 ns, Tmemory = 150 ns
CPU access time = Hcache * Tcache + (1 - Hcache) * (Tcache + Tmemory)
               = 0.8 * 30 + 0.2 * (30 + 150)
               = 60 ns
So, option (A) is correct.
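The hierarchical-access formula used here, as a sketch in Python (on a miss, the cache is probed first and then main memory):

```python
def avg_access_time(hit_ratio, t_cache, t_mem):
    # Hit: cache only. Miss: cache probe, then main memory.
    return hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_mem)

print(avg_access_time(0.8, 30, 150))  # ~60 ns
```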

In a paging system, it takes 30 ns to search translation Look-a-side Buffer (TLB) and 90 ns to access the main memory. If the TLB hit ratio is 70%, the effective memory access time is :
Effective access time = Hit ratio*Time during hit + Miss Ratio * Time During Miss
=0.7*(30+90) + 0.3 (30+90+90)
=84 + 63
=147 ns


A memory management system has 64 pages with 512 bytes page size. Physical memory consists of 32 page frames. Number of bits required in logical and physical address are respectively:
We know that: number of pages = virtual memory space / page size,
number of frames = physical memory space / frame size,
and page size is equal to frame size.
According to the given data:
     virtual memory space = number of pages * page size
                          = 64 * 512 B
                          = 2^6 * 2^9 B
                          = 2^15 B
So, 15 bits are required for the logical address.
     physical memory space = number of frames * frame size
                           = 32 * 512 B
                           = 2^5 * 2^9 B
                           = 2^14 B
So, 14 bits are required for the physical address.
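A sketch of the same bit counting (Python):

```python
import math

pages, frames, page_size = 64, 32, 512   # values from the question

offset_bits = int(math.log2(page_size))              # 9 bits within a page
logical_bits = int(math.log2(pages)) + offset_bits   # 6 + 9 = 15
physical_bits = int(math.log2(frames)) + offset_bits # 5 + 9 = 14
print(logical_bits, physical_bits)  # 15 14
```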
Note:
A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses.

2. Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The program executed by process is shown below.
repeat   
     flag [i] = true;
     turn = j;
     while ( P ) do no-op;
     Enter critical section, perform actions, then exit critical
     section
     flag [ i ] = false;
     Perform other non-critical section actions.
  until false;
b) flag [j] = true and turn = j
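A minimal runnable sketch of Peterson's structure using Python threads, only to illustrate the flag/turn protocol from the pseudocode above; real implementations need hardware memory barriers, which CPython's GIL happens to paper over here. The spin condition is the P asked about in the question:

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the busy-wait hands over quickly

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is to yield
counter = 0            # shared state protected by the critical section

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(500):
        flag[i] = True
        turn = j
        while flag[j] and turn == j:   # the condition P: flag[j] = true and turn = j
            pass                       # busy-wait (no-op)
        counter += 1                   # critical section
        flag[i] = False                # exit critical section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 1000 (500 increments per thread, no lost updates)
```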


Dirty bit is used to indicate which of the following?
A page has been modified after being loaded into the cache. When a page is modified inside the cache and the changes need to be stored back in main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.


A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for 4 processes are shown below:

Question 26 Explanation:
There are 9 tapes and (3 + 1 + 3 + 0 =) 7 are allocated, so the number of available tapes is 9 - 7 = 2. Now we can first satisfy only process P3, which requires only 2 more tapes; after it completes, 5 tapes are available. Similarly, we can then satisfy process P2, which requires 5 tapes; after it completes, 6 are available. Then we can satisfy process P1, which requires 6 tapes; after it completes, all 9 are available. But we cannot satisfy the remaining need of process P4, because it requires 10 tapes while only 9 exist in the system. Option (C) is correct.


Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _______ . Note - This was Numerical Type question.
R ≥ P(N − 1) + 1
Where R is the total number of resources, P is the number of processes, and N is the maximum need of each process.
4 ≥ 3(N − 1) + 1
3 ≥ 3(N − 1)
1 ≥ (N − 1)
N ≤ 2
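Rearranging the inequality gives the largest safe K directly; a one-line sketch (Python, illustrative name):

```python
def largest_safe_k(resources, processes):
    # R >= P*(K - 1) + 1  rearranges to  K <= (R - 1)/P + 1
    return (resources - 1) // processes + 1

print(largest_safe_k(4, 3))  # 2
```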


Available (3, 3, 0), which can satisfy either P0 or P2. Take P0 <3, 3, 0>.
After completion we have (3, 3, 0) + (1, 0, 1) = (4, 3, 1)
Take P2 <0, 3, 0>. After completion we have (4, 3, 1) + (1, 0, 3) = (5, 3, 4)
Take P1 <1, 0, 2>. After completion we have (5, 3, 4) + (1, 1, 2) = (6, 4, 6)
Take P3 <3, 4, 1>. After completion we have (6, 4, 6) + (2, 0, 0) = (8, 4, 6) Safe sequence : P0-->P2-->P1-->P3
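The walk-through above is exactly the Banker's-algorithm safety check: repeatedly run any process whose remaining need fits in the available vector, then reclaim its allocation. A sketch in Python with the same numbers (names illustrative):

```python
def safe_sequence(available, allocation, need):
    avail = list(available)
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # Run any unfinished process whose remaining need fits in avail.
            if not finished[i] and all(n <= a for n, a in zip(need[i], avail)):
                avail = [a + x for a, x in zip(avail, allocation[i])]  # reclaim its allocation
                finished[i] = True
                order.append(i)
                progress = True
    return order if all(finished) else None  # None means no safe sequence exists

allocation = [(1, 0, 1), (1, 1, 2), (1, 0, 3), (2, 0, 0)]  # P0..P3, from the example
need = [(3, 3, 0), (1, 0, 2), (0, 3, 0), (3, 4, 1)]
print(safe_sequence((3, 3, 0), allocation, need))  # [0, 2, 1, 3] i.e. P0->P2->P1->P3
```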

What problem is solved by Dijkstra banker’s algorithm?
Deadlock avoidance
With a single resource, deadlock occurs:
none of these
There are 4 conditions required for a deadlock to occur: 1) Mutual exclusion 2) No preemption 3) Hold and wait 4) Circular wait. When there is only one resource, the hold-and-wait and circular-wait conditions are eliminated. With the assumption that no process can use a resource for an infinite amount of time, whenever a process finishes its execution, another process can get the resource. So a deadlock can't occur. So, option (D) is correct.


What is the minimum number of resources required to ensure that deadlock will never occur, if there are currently three processes P1, P2 and P3 running in a system whose maximum demand for the resources of same type are 3, 4, and 5 respectively.
Let the resources needed by P1 = R1 = 3, by P2 = R2 = 4, and by P3 = R3 = 5. Minimum resources required to ensure that deadlock will never occur = (R1 - 1) + (R2 - 1) + (R3 - 1) + 1 = (3 - 1) + (4 - 1) + (5 - 1) + 1 = 10.
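The formula generalizes to any number of processes; a short sketch:

```python
def min_resources_no_deadlock(max_demands):
    # Give every process all but one of its maximum demand, plus one extra
    # resource: then some process can always finish and release what it holds.
    return sum(d - 1 for d in max_demands) + 1

print(min_resources_no_deadlock([3, 4, 5]))  # 10
```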

What is the size of the physical address space in a paging system which has a page table containing 64 entries of 11 bit each (including valid and invalid bit) and a page size of 512 bytes?
Size of physical address = frame number bits + offset bits
Frame number bits = 11 - 1 = 10 (as 1 valid bit is also included in each entry)
Offset bits = log2(page size) = log2(512) = 9
Size of physical address = 10 + 9 = 19 bits

There are 4 necessary conditions for Deadlock to occur: 1) Mutual Exclusion 2) No pre-emption 3) Hold and Wait 4) Circular wait Option (B) is correct.                   “ Munna bhai HighCourt gaya”

When the degree of multiprogramming keeps on increasing, the number of page faults increases due to insufficient memory space per process, and there is hardly any progress in process execution. This situation is termed thrashing.
Consider a logical address space of 8 pages of 1024 words each, mapped onto a physical memory of 32 frames. How many bits are there in the physical address and logical address respectively?
The logical address space has 8 pages of 1024 words each.
Page number bits = log2(8) = 3; offset bits = log2(1024) = 10.
So the logical address = 3 + 10 = 13 bits.
Physical memory has 32 frames; frame number bits = log2(32) = 5.
So the physical address = 5 + 10 = 15 bits.


In a 64-bit machine, with 2 GB RAM, and 8 KB page size, how many entries will be there in the page table if it is inverted?
Memory size = 2 GB = 2^31 bytes
Page size = 8 KB = 2^13 bytes
Number of page table entries in the inverted page table
= 2^31 / 2^13 = 2^18



When a process is rolled back as a result of deadlock the difficulty which arises is
Starvation


Mutual exclusion is a condition where one or more resources are non-sharable (only one process can use a resource at a time), i.e., processes claim exclusive control of the resources they require. So, option (B) is correct.

Consider a system having ‘m’ resources of the same type. The resources are shared by 3 processes A, B, C, which have peak time demands of 3, 4, 6 respectively. The minimum value of ‘m’ that ensures that deadlock will never occur is                                (when asked for the minimum: (r1 - 1) + (r2 - 1) + (r3 - 1) + 1)
Minimum resources required to avoid deadlock = (m1 - 1) + (m2 - 1) + ... + (my - 1) + 1, where mi is the resource requirement of process i and y is the number of processes. So the number of resources that ensures that deadlock will never occur = (3 - 1) + (4 - 1) + (6 - 1) + 1 = 11.


Suppose n processes, P1, …. Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is Si, where Si > 0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur?
Answer: (C). The sufficient condition is that the sum of the maximum requirements is less than m + n, i.e., Σ Si < (m + n).

False statement:
In deadlock prevention, the request for resources is always granted if the resulting state is safe. (Granting requests only when the resulting state is safe describes deadlock avoidance, not prevention.)


2. A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
Answer :4


  
