Operating Systems for CSE
1. New: the process is being created.
2. Ready: after creation the process moves to the Ready state, i.e., it is ready for execution.
3. Run: the process is currently executing on the CPU (only one process at a time can be under execution on a single processor).
4. Wait (or Block): the process is waiting for an I/O request to complete.
5. Complete (or Terminated): the process has finished its execution.
6. Suspended Ready: when the ready queue becomes full, some processes are moved to the Suspended Ready state.
7. Suspended Block: when the waiting queue becomes full.
User-Level Threads
User-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call into the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
- User-level threads do not require modification to the operating system.
- Simple Representation:
- Each thread is represented simply by a PC, registers, stack and a small control block, all stored in the user process address space.
- Simple Management:
- This simply means that creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
- Fast and Efficient:
- Thread switching is not much more expensive than a procedure call.
Disadvantages:
- There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
- User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process blocks in the kernel even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.
Advantages:
- Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
- Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
- Kernel-level threads are slow and inefficient compared to user-level threads. For instance, thread operations are hundreds of times slower than those of user-level threads.
- Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about it. As a result there is significant overhead and increased kernel complexity.
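On Linux, for example, kernel-level threads are what the POSIX pthreads library provides. A minimal sketch (the worker function and its message are our own example; compile with gcc file.c -pthread):

#include <pthread.h>
#include <stdio.h>

/* Worker function run by the new thread (illustrative). */
static void *worker(void *arg)
{
    printf("hello from thread: %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    /* pthread_create traps into the kernel, which allocates and
       schedules the new thread alongside all other threads. */
    if (pthread_create(&tid, NULL, worker, "worker-1") != 0)
        return 1;
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}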
.................................
Which of the following need not necessarily be saved on a context switch between processes?
Translation look-aside buffer
Where does the swap space reside ?
Disk
Which of the following does not interrupt a running process?
Scheduler process
......................................
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the currently running process has used. If it has used up its entire time slice, the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.
Major Steps of Context Switching
The values of the CPU registers are saved in the process table of the process that was running just before the clock interrupt occurred.
The registers are loaded from the process table entry of the process picked by the CPU scheduler to run next.
.....................................................
Virtual Memory (its storage allocation scheme)
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
- All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution
A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
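As a sketch of this run-time translation, assuming 4 KB pages and a single-level page table (the table and names here are illustrative, not from any real OS):

#include <stdint.h>

#define PAGE_SIZE  4096u   /* assumed page size: 4 KB = 2^12 */
#define PAGE_SHIFT 12
#define NUM_PAGES  1024u   /* illustrative table size */

/* page_table[p] holds the frame number for virtual page p. */
static uint32_t page_table[NUM_PAGES];

/* Translate a program-generated (virtual) address to a machine (physical) address. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;     /* which virtual page */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* position inside the page */
    uint32_t frame  = page_table[page];        /* looked up at run time */
    return (frame << PAGE_SHIFT) | offset;     /* same offset, new frame */
}

Because the lookup happens at run time, the OS can move a page to a different frame between accesses simply by updating page_table.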
The process of loading the page into memory on demand (whenever page fault occurs) is known as demand paging.
......................
What is page fault and when does it occur?
When the page (data) requested by a program is not available in memory, it is called a page fault. The operating system then brings the page in from disk; only an invalid reference results in the application being shut down.
A page is a fixed-length memory block used as the unit of transfer between physical memory and external storage. A page fault occurs when a program accesses a page that has been mapped into its address space but has not been loaded into physical memory.
..........................................
WHAT IS VIRTUAL MEMORY
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.
In computing, virtual memory (also virtual storage) is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine", creating "the illusion to users of a very large (main) memory."
The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
Virtual memory is implemented using Demand Paging or Demand Segmentation.
In a virtual memory system, demand paging is a type of swapping in which pages of data are not copied from disk to RAM until they are needed. In contrast, some virtual memory systems use anticipatory paging, in which the operating system attempts to anticipate which data will be needed next and copies it to RAM before it is actually required.
..............................................
Memory Management
Memory management is an important activity carried out in the kernel. It is the process of managing computer memory: assigning memory to the various running programs so as to keep the performance of the system stable. Memory should be allocated dynamically according to requirements and freed when no longer used, so that it can be reallocated when needed.

Memory management depends on effective cooperation between hardware, the operating system, and programs or applications. In hardware, memory management involves the physical devices that store data: random-access memory, memory caches, and flash-based solid-state drives. At the OS level, it involves the allocation of specific memory blocks to individual programs as user demand changes. At the application level, it ensures that the memory demanded by the objects and data structures of each running program is always available.

Modern computers manage memory at two levels: the system level and the application level. Application-level memory management is categorized as either automatic or manual. In this article, we look at memory management based on virtual memory and demand paging.
Virtual Memory
Virtual memory is a memory management technique that is implemented using both hardware and software. As the name indicates, it adds virtual memory to the available memory, so that the system appears to have more memory than physically exists. Virtual memory is a layer of memory addresses (virtual addresses) that map to physical addresses; the addresses used by a program are virtual addresses. These virtual address spaces, and the assignment of real memory to virtual memory, are managed by the operating system. The working of virtual memory is as follows. The computer system has a limited amount of physical RAM. When a program is executed, an instance of the program is loaded into the RAM. When the program demands more RAM than is available, the excess is allocated to virtual memory, which prevents the program from lacking the memory it needs to execute. This virtual memory actually resides on the hard disk and is mapped into physical memory on demand.
Demand Paging
Demand paging is a type of swapping done in virtual memory systems. In demand paging, data is not copied from the disk to the RAM until it is needed or demanded by some program; data that is already in memory is not copied again. This is also called lazy evaluation, because only the demanded pages of memory are swapped from secondary storage (disk space) to main memory. In contrast, during pure swapping all the memory for a process is swapped from secondary storage to main memory at process startup.
Anticipatory Paging
Anticipatory paging is another type of swapping in virtual memory systems. Here, the operating system attempts to anticipate the data that will be needed next and copies it to RAM before it is actually required.
Demand Paging Working
The working of demand paging is based on a page table implementation. The page table maps logical memory to physical memory and uses a valid-invalid bit to mark whether a page is valid. A valid page is one that currently resides in main memory; an invalid page is one that currently resides in secondary memory. When a process tries to access a page, the following will happen.
1) Attempt to access the page
2) If the page is valid, instruction processing continues as normal.
3) If the page is an invalid one, then a page-fault trap occurs.
4) The memory reference is then checked to determine if it is a valid reference to a location on secondary memory or not. If not, the process is terminated (illegal memory access). Otherwise, the required page is paged in.
5) Now the disk operation to read the desired page into main memory is scheduled.
6) Finally, the instruction that was interrupted by the operating system trap is restarted.
WHAT IS A SEMAPHORE?
A semaphore is simply a variable. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.
The two most common kinds of semaphores are counting semaphores and binary semaphores. A counting semaphore can take non-negative integer values, while a binary semaphore can take only the values 0 and 1.
Some points regarding the P and V operations:
- P operation is also called wait, sleep or down operation and V operation is also called signal, wake-up or up operation.
- Both operations are atomic, and the semaphore s is typically initialized to one.
- A critical section is surrounded by both operations to implement process synchronization: the critical section of a process lies between the P and V operations.
P(Semaphore s)
{
    s = s - 1;
    if (s < 0) {
        // resource not available: add the calling process to the queue
        block();
    }
}
V(Semaphore s)
{
    s = s + 1;
    if (s <= 0) {
        // a process is still waiting: remove some process p from the queue
        wakeup(p);
    }
}
..........................................
Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switch time, some hardware systems employ two or more sets of processor registers.
Context Switch in Operating System Processes
A context switch occurs when a computer’s CPU switches from one process or thread to a different process or thread.
- Context switching allows for one CPU to handle numerous processes or threads without the need for additional processors.
- A context switch is the mechanism to store and restore the state or context of a CPU in Process Control block so that a process execution can be resumed from the same point at a later time.
- Any operating system that allows multitasking relies heavily on context switching to let different processes appear to run at the same time.
Typically, there are three situations that a context switch is necessary, as shown below.
- Multitasking – When the CPU needs to switch processes in and out of memory, so that more than one process can be running.
- Kernel/User Switch – when switching from user mode to kernel mode, a context switch may occur (but isn't always necessary).
- Interrupts – When the CPU is interrupted to return data from a disk read.
The steps in a full process switch are:
- Save the context of the processor, including program counter and other registers.
- Update the process control block of the process that is currently in the Running state. This includes changing the state of the process to one of the other states (Ready; Blocked; Ready/Suspend; or Exit). Other relevant fields must also be updated, including the reason for leaving the Running state and accounting information.
- Move the process control block of this process to the appropriate queue (Ready; Blocked on Event i; Ready/Suspend).
- Select another process for execution. Update the process control block of the process selected. This includes changing the state of this process to Running. Update memory management data structures. This may be required, depending on how address translation is managed.
- Restore the context of the processor to that which existed at the time the selected process was last switched out of the Running state, by loading in the previous values of the program counter and other registers.
When a process is switched, the following information is stored for later use:
- Program counter
- Scheduling information
- Base and limit register values
- Currently used registers
- Changed state
- I/O state information
- Accounting information
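The saved information can be pictured as fields of the process control block. The struct below is only an illustrative sketch; the field names and types are invented, not taken from any real kernel:

/* Illustrative PCB fragment holding the context saved at a switch. */
typedef struct pcb {
    unsigned long program_counter;     /* where to resume execution */
    unsigned long registers[16];       /* currently used register values */
    unsigned long base_reg, limit_reg; /* base and limit register values */
    int state;                         /* e.g., READY, RUNNING, BLOCKED */
    int priority;                      /* scheduling information */
    long cpu_time_used;                /* accounting information */
    /* ... I/O state information would follow ... */
} pcb_t;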
The producer–consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue.
- The producer’s job is to generate data, put it into the buffer, and start again.
- At the same time, the consumer is consuming the data (i.e. removing it from the buffer), one piece at a time.
Problem
To make sure that the producer won’t try to add data into the buffer if it’s full and that the consumer won’t try to remove data from an empty buffer.
Solution
The producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer. An inadequate solution could result in a deadlock where both processes are waiting to be awakened.
// Java program to implement solution of producer
// consumer problem.
import java.util.LinkedList;
public class Threadexample
{
public static void main(String[] args) throws InterruptedException
{
// Object of a class that has both produce()
// and consume() methods
final PC pc = new PC();
// Create producer thread
Thread t1 = new Thread(new Runnable()
{
@Override
public void run()
{
try
{
pc.produce();
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
});
// Create consumer thread
Thread t2 = new Thread(new Runnable()
{
@Override
public void run()
{
try
{
pc.consume();
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
});
// Start both threads
t1.start();
t2.start();
// Wait for t1 to finish, then t2
t1.join();
t2.join();
}
// This class has a list, a producer (adds items to the list)
// and a consumer (removes items).
public static class PC
{
// Create a list shared by producer and consumer
// Size of list is 2.
LinkedList<Integer> list = new LinkedList<>();
int capacity = 2;
// Function called by producer thread
public void produce() throws InterruptedException
{
int value = 0;
while (true)
{
synchronized (this)
{
// producer thread waits while list
// is full
while (list.size()==capacity)
wait();
System.out.println("Producer produced-”+ value);
// to insert the jobs in the list
list.add(value++);
// notifies the consumer thread that
// now it can start consuming
notify();
// makes the working of program easier
// to understand
Thread.sleep(1000);
}
}
}
// Function called by consumer thread
public void consume() throws InterruptedException
{
while (true)
{
synchronized (this)
{
// consumer thread waits while list
// is empty
while (list.size()==0)
wait();
// to retrieve the first job in the list
int val = list.removeFirst();
System.out.println("Consumer consumed-" + val);
// Wake up producer thread
notify();
// and sleep
Thread.sleep(1000);
}
}
}
}
}
.........................................
What is Swap out and swap in?
Swap out means taking a program from RAM and moving it to the hard disk. Swap in means taking the program from the hard disk and bringing it back into RAM.
What are advantages of swapping?
- With the help of swapping we can manage many processes within the same RAM.
- Swapping helps to create the virtual memory.
- Swapping is economical.
Swap space:
Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.
Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space give the amount of virtual memory available.
Differences between Deadlock and Starvation in OS :
- Deadlock occurs when none of the processes in a set is able to move ahead because the resources it requires are occupied by some other process in the set, whereas starvation occurs when a process waits for an indefinite period of time to get the resource it requires.
- Another name for deadlock is circular waiting. Another name for starvation is livelock.
- When deadlock occurs no process can make progress, while in starvation apart from the victim process other processes can progress or proceed.
Solution to Starvation : Aging
Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would take no more than 32 hours to age to a priority-0 process.
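A sketch of the idea, assuming lower numbers mean higher priority and an invented list of waiting processes:

#define HIGHEST_PRIORITY 0

struct proc { int priority; /* 0 (high) .. 127 (low) */ };

/* Called periodically (e.g., every 15 minutes): each waiting
   process creeps toward priority 0, so none starves forever. */
void age_waiting_processes(struct proc *waiting, int n)
{
    for (int i = 0; i < n; i++)
        if (waiting[i].priority > HIGHEST_PRIORITY)
            waiting[i].priority--;   /* raise priority one step */
}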
.............................................
Fragmentation occurs in a dynamic memory allocation system. Fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity, performance, or both. The exact consequences of fragmentation depend on the specific system of storage allocation in use and the particular form of fragmentation.
External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted inside of allocated memory blocks because of restriction on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
.................................................
fork()
- To create a child process we use fork(). fork() returns:
- < 0 : failed to create the child (new) process
- = 0 : returned in the child process
- > 0 : returned in the parent process, i.e., the process ID of the child
f(n) = 2*f(n-1) for n > 1, and f(1) = 1 (so f(n) = 2^(n-1))
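A minimal sketch of these return values on a POSIX system (the printed messages are our own):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */
    if (pid < 0) {
        perror("fork");                 /* < 0 : creation failed */
        return 1;
    } else if (pid == 0) {
        /* = 0 : this branch runs in the child */
        printf("child: my pid = %d\n", getpid());
    } else {
        /* > 0 : this branch runs in the parent; pid is the child's ID */
        printf("parent: child pid = %d\n", pid);
        wait(NULL);                     /* reap the child */
    }
    return 0;
}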
..............................................................
interprocess communication (IPC)
...........................................................
Examples of IPC systems
- POSIX: uses the shared memory method.
- Windows XP: uses message passing via local procedure calls.
It's a pretty simple difference. In a shared memory model, multiple workers all operate on the same data. This opens up a lot of the concurrency issues that are common in parallel programming. Message passing systems make workers communicate through a messaging system. Messages keep everyone separated, so that workers cannot modify each other's data.
Note: The Portable Operating System Interface (POSIX) defines the application programming interface (API), along with command line shells and utility interfaces, for software compatibility with variants of Unix and other operating systems.
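A minimal sketch of the POSIX shared-memory method (the object name "/demo_shm" and the one-page size are arbitrary choices for the example; older Linux systems need -lrt when linking):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";            /* example object name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd < 0) return 1;
    ftruncate(fd, 4096);                       /* size the region: one page */

    /* Any process that maps "/demo_shm" sees the same bytes --
       this is the shared memory model of IPC. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    strcpy(p, "hello via shared memory");
    printf("%s\n", p);

    munmap(p, 4096);
    close(fd);
    shm_unlink(name);                          /* remove the object */
    return 0;
}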
................................
Using Mutex:
A mutex provides mutual exclusion, either producer or consumer can have the key (mutex) and proceed with their work. As long as the buffer is filled by producer, the consumer needs to wait, and vice versa.
At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.
Using Semaphore:
A semaphore is a generalized mutex. In lieu of single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers. The consumer and producer can work on different buffers at the same time.
Misconception:
There is an ambiguity between binary semaphores and mutexes. We may have heard that a mutex is a binary semaphore, but it is not! The purposes of a mutex and a semaphore are different; perhaps due to the similarity in their implementation a mutex is sometimes referred to as a binary semaphore.
Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or process, depending on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex: only the owner can release the lock (mutex).
A semaphore is a signaling mechanism (an "I am done, you can carry on" kind of signal). For example, if you are listening to songs (assume it is one task) on your mobile and at the same time your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
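The ownership point is visible with a pthread mutex: the thread that locks it is the one that must unlock it (a brief sketch; the balance variable is invented):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;

void deposit(long amount)
{
    pthread_mutex_lock(&lock);   /* acquire: this thread now owns the lock */
    balance += amount;           /* exclusive access to the resource */
    pthread_mutex_unlock(&lock); /* only the owner may release it */
}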
...................................
Due to priority of process, what type of problem occurs in priority scheduling?
A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
..................................
What is real time OS?
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter increments of time. A real-time system is a time-bound system with well-defined, fixed time constraints; processing must be done within the defined constraints or the system will fail. RTOSes are either event-driven or time-sharing: event-driven systems switch between tasks based on their priorities, while time-sharing systems switch tasks on clock interrupts. Most RTOSes use a pre-emptive scheduling algorithm.
In typical designs, a task has three states:
- Running (executing on the CPU);
- Ready (ready to be executed);
- Blocked (waiting for an event, I/O for example).
................................................
Difference between preemptive and non-preemptive algorithms
Note: If the time quantum is very large, then round-robin scheduling happens according to FCFS.
Objective
Which of the following page replacement algorithms suffers from Belady’s anomaly?
FIFO
.............................
What is the swap space in the disk used for?
Saving process data
.....................................
Increasing the RAM of a computer typically improves performance because:
Fewer page faults occur
When there is more RAM, there would be more mapped virtual pages in physical memory, hence fewer page faults. A page fault causes performance degradation as the page has to be loaded from secondary device.
.....................................
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?
Answer:
Hardware support for memory management is no longer needed
......................................
Thrashing occurs when processes on the system frequently access pages that are not in memory.
Thrashing occurs when processes on the system require more memory than it has. If processes do not have "enough" pages, the page-fault rate is very high. This leads to low CPU utilization, with the operating system spending most of its time swapping to disk. The above situation is called thrashing.
.....................................
Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?
Let P be the page fault rate
Effective Memory Access Time = p * (page fault service time) +
(1 - p) * (Memory access time)
= ( 1/(10^6) )* 10 * (10^6) ns +
(1 - 1/(10^6)) * 20 ns
= 30 ns (approx)
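The same arithmetic as a small helper (numbers from this question; the function name is ours):

#include <stdio.h>

/* EAT = p * (page fault service time) + (1 - p) * (memory access time) */
double effective_access_ns(double p, double service_ns, double access_ns)
{
    return p * service_ns + (1.0 - p) * access_ns;
}

int main(void)
{
    /* one fault per 10^6 accesses; 10 ms = 10^7 ns service; 20 ns access */
    printf("EAT = %.5f ns\n", effective_access_ns(1e-6, 1e7, 20.0));
    /* prints approximately 30 ns */
    return 0;
}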
...................................
The essential content(s) in each entry of a page table is / are
Page frame number
A page table entry must contain Page frame number. Virtual page number is typically used as index in page table to get the corresponding page frame number.
................................
A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical address because
It helps to reduce the size of page table needed to implement the virtual address space of a process.
Note (both statements are true):
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate (Belady's anomaly).
Q: Some programs do not exhibit locality of reference.
..........................................
Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
..........................................
Effective access time = hit ratio * (TLB access time + main memory access time) + (1 - hit ratio) * (TLB access time + 2 * main memory access time)
= 0.6 * (10 + 80) + (1 - 0.6) * (10 + 2 * 80)
= 0.6 * 90 + 0.4 * 170 = 122
..................................
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
Size of a page = 4 KB = 2^12, which means 12 offset bits. The CPU generates 32-bit virtual addresses, so the number of bits needed to address a page = 32 - 12 = 20. If there are 'n' cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets = 2^7/4 = 2^5. So 5 bits are needed to address a set, and 20 - 5 = 15 bits are needed for the tag.
15 bits
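The bit arithmetic can also be scripted (parameters taken from this question; lg assumes power-of-two sizes):

#include <stdio.h>

/* log2 for exact powers of two */
static int lg(unsigned v) { int n = 0; while (v > 1) { v >>= 1; n++; } return n; }

int main(void)
{
    int vaddr_bits = 32, page_bytes = 4096, entries = 128, ways = 4;
    int offset_bits = lg(page_bytes);      /* 12 */
    int set_bits    = lg(entries / ways);  /* 128/4 = 32 sets -> 5 bits */
    int tag_bits    = vaddr_bits - offset_bits - set_bits;
    printf("tag = %d bits\n", tag_bits);   /* prints 15 */
    return 0;
}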
...............................................
The absolute minimum number of frames that a process must be allocated is dependent on the system architecture and corresponds to the number of pages that could be touched by a single (machine) instruction. So it is determined by the instruction set architecture.
..........................................
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address translation is not practical because of
the large memory overhead in maintaining page tables
...................................................
Which of the following is not a form of memory?
instruction opcode
................................................
Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to using statically linked libraries ?
Faster program startup
Note: With non-shared (static) libraries, since the library code is linked at compile time, the final executable has no dependencies on the library at run time, i.e., no additional run-time loading costs. This means you don't need to carry along a copy of the library being used, everything is under your control, and there is no dependency.
Dynamic linking can cause security concerns because:
The path for searching dynamic libraries is not known till runtime
............................................
About virtual memory
Virtual memory implements the translation of a program‘s address space into physical memory address space
Virtual memory allows each program to exceed the size of the primary memory.
Virtual memory increases the degree of multiprogramming.
Virtual memory increases the context switching overhead.
FOR VIRTUAL MEMORY
We can write large programs when we have the memory extension.
Less I/O requirement in case of virtual memory
More addressable memory available with virtual memory
Faster and easier swapping of processes with the help of the page replacement policy in virtual memory.
.............................................
The process of assigning load addresses to the various parts of a program and adjusting the code and data in the program to reflect the assigned addresses is called
Relocation
.......................................................
Relocation of code is the process done by the linker/loader when a program is copied from external storage into main memory. A linker relocates the code by searching files and libraries to replace symbolic references to libraries with actual usable addresses in memory before the program runs.
1. Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table? (GATE 2001)
(a) 16 MB
(b) 8 MB
(c) 2 MB
(d) 24 MB
Number of entries in page table = (virtual address space size)/(page size) = 2^32 / 2^12 = 2^20
Size of page table = (total number of page table entries) * (size of a page table entry) = 2^20 * 2 bytes = 2 MB
(With 64 MB = 2^26 bytes of physical memory and 4 KB pages there are 2^14 frames, so a page table entry needs 14 bits, approximately 2 bytes.)
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
average memory access time = p*t1 + (1-p)*t2
=(99.99*1 + 0.01*(10*1000 + 1))/100
=1.9999 *10^-6 sec
Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is
Number of entries in page table = 2^32 / 4 KB = 2^32 / 2^12 = 2^20
Size of page table = (number of page table entries) * (size of an entry) = 2^20 * 4 bytes = 2^22 bytes = 4 megabytes
A computer system implements a 40 bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is _________
Total virtual address size = 40
Since there are 32 sets, set offset = 5
Since page size is 8 kilobytes, word offset = 13
Minimum tag size = 40 - 5 - 13 = 22
Which one of the following is NOT shared by the threads of the same process?
Stack
................................................
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?
Virtual memory = 2^32 bytes; physical memory = 2^30 bytes.
Page size = frame size = 4 * 2^10 bytes = 2^12 bytes. Number of frames = physical memory / frame size = 2^30 / 2^12 = 2^18, therefore the frame number needs 18 bits. Page table entry size = frame number bits + other information, so other information = 32 - 18 = 14 bits.
Consider a computer with a 4-way set-associative mapped cache of the following characteristics: a
total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8
KB. The number of bits in the TAG, SET and WORD fields, respectively are:
Given: word size = 1 byte, block size = 128 words, cache size = 8 KB. Number of cache blocks = cache size / block size = 8 KB / 128 = 64. With 4-way set associativity, number of sets = 64 / 4 = 16, so SET needs 4 bits (16 = 2^4). Main memory is 1 MB = 2^20 bytes, so a physical address has 20 bits (the cache is physically tagged). A block of 128 = 2^7 words needs WORD = 7 bits. The remaining TAG bits = 20 - 4 - 7 = 9. Hence (TAG, SET, WORD) = (9, 4, 7).
Match the following:
A. Thread                  1. Interrupt
B. Virtual address space   2. Memory
C. File system             3. CPU
D. Signal                  4. Disk
ANSWER
A-3, B-2, C-4, D-1
A multi-user, multi-processing operating system cannot be implemented on hardware that does not support: a) Address translation b) DMA for disk transfer c) At least two modes of CPU execution (privileged and non-privileged). d) Demand Paging
A and C (DMA and demand paging improve performance but are not strictly required)
Which of the following is/are advantages of virtual memory? a) Faster access to memory on an average. b) Processes can be given protected address spaces. c) Linker can assign addresses independent of where the program will be loaded in physical memory. d) Programs larger than the physical memory size can be run.
b and d
Locality of reference implies that the page reference being made by a process
is likely to be to one of the pages used in the last few page references
Thrashing
implies excessive page I/O
.................................
Dirty bit for a page in a page table
helps avoid unnecessary writes on a paging device
.............................................
Pure paging does not suffer from external fragmentation but instead suffers from internal fragmentation; pure segmentation suffers from external fragmentation.
External fragmentation occurs when enough total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation occurs when allocated memory is slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
................................
In designing a computer’s cache system, the cache block (or cache line) size is an important parameter
Smaller block size incurs lower cache miss penalty
............................................
A cache memory needs an access time of 30 ns and main memory 150 ns, what is the average access time of CPU (assume hit ratio = 80%)?
Hit ratio of cache Hcache = 0.8, Tcache = 30 ns, Tmemory = 150 ns.
CPU access time = Hcache * Tcache + (1 - Hcache) * (Tcache + Tmemory) = 0.8 * 30 + 0.2 * (30 + 150) = 60 ns. So, option (A) is correct.
............................
In a paging system, it takes 30 ns to search translation Look-a-side Buffer (TLB) and 90 ns to access the main memory. If the TLB hit ratio is 70%, the effective memory access time is :
Effective access time = Hit ratio*Time during hit + Miss Ratio * Time During Miss
=0.7*(30+90) + 0.3 (30+90+90)
=84 + 63
=147 ns
..............................................
A memory management system has 64 pages with 512 bytes page size. Physical memory consists of 32 page frames. The number of bits required in the logical and physical address are, respectively:
We know that number of pages = virtual memory space / page size, and number of frames = physical memory space / frame size, and page size is equal to frame size.
According to the given data:
virtual memory space = number of pages * page size = 64 * 512 B = 2^6 * 2^9 B = 2^15 B.
So, 15 bits are required for the logical address.
physical memory space = number of frames * frame size = 32 * 512 B = 2^5 * 2^9 B = 2^14 B.
So, 14 bits are required for the physical address.
Note:
A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses.
2. Consider Peterson's algorithm for mutual exclusion between two concurrent processes i and j. The program executed by process i is shown below; P is the condition to be filled in.
repeat
flag [i] = true;
turn = j;
while ( P ) do no-op;
Enter critical section, perform actions, then exit critical
section
flag [ i ] = false;
Perform other non-critical section actions.
until false;
Answer: (b) flag[j] = true and turn = j
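A sketch of the full algorithm in C for two processes (note: on modern hardware this busy-wait version additionally needs atomic types or memory fences to be safe; the plain version below just mirrors the pseudocode):

#include <stdbool.h>

volatile bool flag[2] = { false, false };
volatile int turn = 0;

void enter_critical(int i)       /* i is 0 or 1; j is the other process */
{
    int j = 1 - i;
    flag[i] = true;              /* declare intent to enter */
    turn = j;                    /* politely give the other process priority */
    while (flag[j] && turn == j) /* this is the condition P from the question */
        ;                        /* busy-wait (no-op) */
}

void exit_critical(int i)
{
    flag[i] = false;             /* leave: the other process may now enter */
}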
.............................
Dirty bit is used to indicate which of the following?
A page has been modified after being loaded into the cache. When a page is modified inside the cache and the changes need to be stored back in the main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.
.........................................
A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for 4 processes are shown below:
Explanation:
There are 9 tapes, of which (3+1+3+0) = 7 are allocated, so 9 - 7 = 2 tapes remain available. We can first satisfy only process P2, which requires only 5 tapes; after it completes, 6 tapes are available. Then we can satisfy process P1, which requires 6 tapes; after it completes, 9 tapes are available. But we cannot satisfy the remaining need of process P4, because it requires 10 tapes while only 9 remain. Option (C) is correct.
.........................................
Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _______. (Numerical type question.)
R ≥ P(N − 1) + 1
Where R is total number of resources, P is the number of processes, and N is the max need for each resource.
4 ≥ 3(N − 1) + 1
3 ≥ 3(N − 1)
1 ≥ (N − 1)
N ≤ 2, so the largest value of K that always avoids deadlock is 2.
..........................................
..........................................
Available (3, 3, 0), which can satisfy either P0 or P2. Take P0 <3, 3, 0>.
After completion we have (3, 3, 0) + (1, 0, 1) = (4, 3, 1).
Take P2. After completion we have (4, 3, 1) + (1, 0, 3) = (5, 3, 4).
Take P1 <1, 0, 2>. After completion we have (5, 3, 4) + (1, 1, 2) = (6, 4, 6).
Take P3 <3, 4, 1>. After completion we have (6, 4, 6) + (2, 0, 0) = (8, 4, 6). Safe sequence: P0 --> P2 --> P1 --> P3.
..................................................
..................................................
What problem is solved by Dijkstra banker’s algorithm?
Deadlock avoidance
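A sketch of the safety check at the heart of the Banker's algorithm (the textbook procedure, with process and resource counts as compile-time constants for brevity):

#include <stdbool.h>

#define P 4   /* number of processes */
#define R 3   /* number of resource types */

/* Returns true if some order exists in which every process can finish. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {      /* p can finish: reclaim its allocation */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;  /* nobody can proceed: unsafe */
    }
    return true;
}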
With a single resource, deadlock occurs?
None of these.
There are 4 conditions required for a deadlock to occur: 1) mutual exclusion, 2) no pre-emption, 3) hold and wait, 4) circular wait. When there is only one resource, the hold-and-wait and circular-wait conditions are eliminated. Assuming no process can use the resource for an infinite amount of time, whenever a process finishes its execution another process can get the resource, so a deadlock can't occur. So, option (D) is correct.
.................................
What is the minimum number of resources required to ensure that deadlock will never occur, if there are currently three processes P1, P2 and P3 running in a system whose maximum demand for the resources of same type are 3, 4, and 5 respectively.
Let the number of resources needed by P1 = R1 = 3, the number needed by P2 = R2 = 4, and by P3 = R3 = 5. Minimum resources required to ensure that deadlock will never occur = (R1 - 1) + (R2 - 1) + (R3 - 1) + 1 = (3-1) + (4-1) + (5-1) + 1 = 10.
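The rule used here, sum of (demand_i - 1) plus 1, as a tiny helper (example values from this question):

#include <stdio.h>

/* Minimum resources so deadlock can never occur: sum(d_i - 1) + 1. */
int min_resources(const int demand[], int n)
{
    int total = 1;
    for (int i = 0; i < n; i++)
        total += demand[i] - 1;
    return total;
}

int main(void)
{
    int demand[] = { 3, 4, 5 };
    printf("%d\n", min_resources(demand, 3));   /* prints 10 */
    return 0;
}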
.........................................
What is the size of the physical address space in a paging system which has a page table containing 64 entries of 11 bit each (including valid and invalid bit) and a page size of 512 bytes?
Size of physical address = frame bits + offset bits. Frame bits = 11 - 1 = 10 (as 1 valid bit is included in each entry). Offset bits = log2(page size) = log2(512) = 9. Size of physical address = 10 + 9 = 19 bits.
There are 4 necessary conditions for deadlock to occur: 1) mutual exclusion, 2) no pre-emption, 3) hold and wait, 4) circular wait. Option (B) is correct. (Mnemonic: "Munna bhai HighCourt gaya".)
When the degree of multiprogramming keeps increasing, the number of page faults increases due to insufficient memory space per process, and there is hardly any progress in process execution. This situation is termed thrashing.
Consider a logical address space of 8 pages of 1024 words each, mapped onto a physical memory of 32 frames. How many bits are there in the physical address and logical address respectively?
The logical address space has 8 pages, each of 1024 words. Number of bits in the page# field of the logical address = log2(8) = 3 bits; offset bits = log2(1024) = 10 bits. So the logical address = 3 + 10 = 13 bits. Physical memory has 32 frames with the same 1024-word offset, so the frame# field of the physical address = log2(32) = 5 bits. Physical address = 5 + 10 = 15 bits.
................................
In a 64-bit machine, with 2 GB RAM, and 8 KB page size, how many entries will be there in the page table if it is inverted?
Memory size = 2 GB = 2^31 bytes
Page size = 8 KB = 2^13 bytes
Number of page table entries in an inverted page table
= 2^31 / 2^13 = 2^18
....................................
When a process is rolled back as a result of deadlock, the difficulty which arises is
Starvation
....................................
Mutual exclusion is a condition in which one or more resources are non-sharable (only one process can use a resource at a time), i.e., processes claim exclusive control of the resources they require. So, option (B) is correct.
..........................................
Consider a system having 'm' resources of the same type. The resources are shared by 3 processes A, B, C, which have peak time demands of 3, 4, 6 respectively. The minimum value of 'm' that ensures that deadlock will never occur is (for the minimum, use (r1 - 1) + (r2 - 1) + (r3 - 1) + 1):
Minimum resources required to avoid deadlock = (m1 - 1) + (m2 - 1) + ... + (my - 1) + 1, where mi is the peak demand of process i and y is the number of processes. So the number of resources that ensures deadlock will never occur = (3-1) + (4-1) + (6-1) + 1 = 11.
........................................
Suppose n processes, P1, …. Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is Si, where Si > 0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur
ANSWER IS C: the sum of all maximum requirements satisfies Σ Si < (m + n).
......................................
FALSE STATEMENT
In deadlock prevention, the request for resources is always granted if the resulting state is safe
....................................................
A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
Answer: 4
......................................