Scheduling Algorithm


SCHEDULING ALGORITHM

ROUND ROBIN
Round Robin is the simplest algorithm for a preemptive scheduler. Only a single queue of processes is used. When the system timer fires, the next process in the queue is switched to, and the preempted process is put back at the end of the queue.

SHORTEST PROCESS NEXT
A version of SRTN (Shortest Remaining Time Next) for interactive systems. The problem here is that we cannot know what the user's next command will be, so this algorithm needs to predict run times, for example from past behavior.

LOTTERY SCHEDULING
Lottery Scheduling is a simple algorithm that statistically guarantees a variable fraction of processor time to each runnable process. The concept is much like a lottery: at each scheduling decision, each runnable process holds a number of "lottery tickets". Then a random number is generated, corresponding to a specific ticket, and the process with that ticket gets the quantum.
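
A minimal sketch of the lottery draw in Python; the process names and ticket counts are invented for illustration and do not come from any real kernel:

    import random

    # Hypothetical runnable processes and the tickets each one holds.
    tickets = {"editor": 30, "compiler": 60, "daemon": 10}

    def lottery_pick(tickets):
        """Draw one winning ticket; a process wins with probability
        proportional to the number of tickets it holds."""
        winner = random.randrange(sum(tickets.values()))
        for name, count in tickets.items():
            if winner < count:
                return name          # this process receives the quantum
            winner -= count

    # Over many quanta, "compiler" should win roughly 60% of the draws.
    draws = [lottery_pick(tickets) for _ in range(10_000)]
    print({name: draws.count(name) / len(draws) for name in tickets})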

FIRST COME FIRST SERVED (FCFS)
Simple and fair. This scheduling method is used on batch systems and is NON-PREEMPTIVE. It implements just one queue, which holds the tasks in the order they arrive.

SHORTEST JOB FIRST (SJF)
Nearly optimal with respect to turnaround time, and also NON-PREEMPTIVE. It selects the shortest job/process available in the run queue.

SHORTEST REMAINING TIME NEXT (SRTN)
The preemptive version of SJF (Shortest Job First). The scheduler picks the job/process with the lowest remaining run time (probably optimal with respect to turnaround time). A sketch of the selection step follows.
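
A minimal sketch of the SJF selection step, assuming run times are known in advance (the job names and times below are invented):

    import heapq

    # (remaining_time, name) pairs; SJF assumes run times are known up front.
    run_queue = [(7, "backup"), (2, "lint"), (4, "build")]
    heapq.heapify(run_queue)           # min-heap keyed on remaining time

    # SJF: always dispatch the shortest available job next.
    while run_queue:
        remaining, name = heapq.heappop(run_queue)
        print(f"running {name} for {remaining} time units")

Under SRTN the same heap would be re-consulted on every arrival: a newly arrived job with a shorter remaining time preempts the running one.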

PROCESS AND THREAD
A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread.

THREADS
- Share memory by default
- Share file descriptors
- Share file system context
- Share signal handling

PROCESSES
- Do not share memory by default
- Most file descriptors are not shared
- Do not share file system context
- Do not share signal handling

ADVANTAGES

Threads:
- Lower context-switching overhead.
- Linux supports the POSIX threads standard.
- Shared resources.
- Performance: threads improve the performance (throughput, computational speed, responsiveness, or some combination of these) of a program.
- Potential simplicity.

Processes:
- Processes are typically independent.
- Processes have separate address spaces.

DISADVANTAGES

Threads:
- Inadvertent modification of shared variables.
- Many library functions are not thread safe.
- Lack of robustness: if one thread crashes, the whole application crashes.

Processes:
- Context switching between processes is slow.
- Processes can hardly share resources.

SWITCHING
The main distinction between a thread switch and a process switch is that during a thread switch the virtual memory space remains the same, while during a process switch it does not. Both types involve handing control over to the operating system kernel to perform the context switch. Switching in and out of the OS kernel, together with the cost of swapping out the registers, is the largest fixed cost of performing a context switch.

Multithreading
Multithreading, as a widespread programming and execution model, allows multiple threads to exist within the context of a single process. These threads share the process's resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Perhaps the most interesting application of the technology, however, is applying it to a single process to enable parallel execution on a multiprocessor system.

Advantages
Some advantages include:
- If a thread gets a lot of cache misses, the other thread(s) can continue, taking advantage of the unused computing resources, which can lead to faster overall execution, as these resources would have been idle if only a single thread were executing.

- If a thread cannot use all the computing resources of the CPU (because its instructions depend on each other's results), running another thread avoids leaving those resources idle.
- If several threads work on the same set of data, they can actually share their cache, leading to better cache usage or synchronization of its values.

Disadvantages
Some criticisms of multithreading include:
- Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs).
- Execution times of a single thread are not improved, and can even be degraded, even when only one thread is executing. This is due to slower frequencies and/or additional pipeline stages that are necessary to accommodate thread-switching hardware.
- Hardware support for multithreading is more visible to software, and thus requires more changes to both application programs and operating systems than multiprocessing.

TYPES
- Block multithreading: the simplest type; one thread runs until it is blocked by an event that would normally create a long-latency stall.
- Interleaved multithreading: the purpose of this type is to remove all data-dependency stalls from the execution pipeline.
- Simultaneous multithreading: the most advanced type, which applies to superscalar processors.
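
To make the shared-memory point concrete, here is a small Python sketch (the worker function and counts are invented) in which four threads update one counter; the lock is needed precisely because threads share the process's memory:

    import threading

    counter = 0                      # one variable, shared by every thread
    lock = threading.Lock()          # guards against inadvertent modification

    def worker(increments):
        global counter
        for _ in range(increments):
            with lock:               # without the lock, updates can be lost
                counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                   # 400000: all threads saw the same memory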

MULTIPROGRAMMING
Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is done only when the currently running process requests I/O or terminates. It was commonly used to keep the CPU busy while one or more processes are doing I/O. It is now mostly superseded by multitasking, in which processes also lose the CPU when their time quantum expires. Multiprogramming makes efficient use of the CPU by overlapping the demands for the CPU and its I/O devices from various users. It attempts to increase CPU utilization by always having something for the CPU to execute.

REAL-TIME OS
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is called jitter. To be considered "real-time", an operating system must have a known maximum time for each of the operations that it performs (or at least be able to guarantee that maximum most of the time). Some of these operations include OS calls and interrupt handling. Operating systems that can absolutely guarantee a maximum time for these operations are referred to as "hard real-time", while operating systems that can only guarantee a maximum most of the time are referred to as "soft real-time". Note that the RTOS simply provides facilities that help you meet deadlines; you could also program on "bare metal" (without an RTOS) in a big main loop and still meet your deadlines.

Also keep in mind that, unlike a more general-purpose OS, an RTOS runs only a very limited set of tasks and processes. Some of the facilities an RTOS provides:
- A priority-based scheduler
- A system clock interrupt routine
- Deterministic behavior

HOW TO PREVENT RESOURCE STARVATION
Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation. One common technique, sketched below, is aging.
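
A minimal sketch of aging, one standard way to provide that guarantee; the source does not specify a technique, and the process names and priority numbers here are invented:

    # Aging: every tick, waiting processes gain priority, so even a
    # low-priority process eventually outranks the others and runs.
    base = {"batch_job": 1, "ui": 10, "logger": 2}   # static base priorities
    effective = dict(base)                           # priorities with aging applied

    def schedule_tick():
        running = max(effective, key=effective.get)  # dispatch highest priority
        for name in effective:
            if name == running:
                effective[name] = base[name]         # reset once it has run
            else:
                effective[name] += 1                 # age the waiters upward
        return running

    # Even "batch_job" (base priority 1) is guaranteed CPU time eventually.
    print([schedule_tick() for _ in range(12)])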

DEADLOCK
A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.

Waiting for an event could be:
- Waiting for access to a critical section
- Waiting for a resource

Note that the resource involved is usually non-preemptable; preemptable resources can be yanked away and given to another process.

Conditions for Deadlock
- Mutual exclusion: resources cannot be shared.
- Hold and wait: processes request resources incrementally and hold on to what they've got.
- No preemption: resources cannot be forcibly taken from processes.
- Circular wait: a circular chain of waiting, in which each process is waiting for a resource held by the next process in the chain.

Strategies for dealing with Deadlock
- Ignore the problem altogether (the Ostrich algorithm): deadlock may occur very infrequently, and the cost of detection/prevention may not be worth it.
- Detection and recovery.
- Avoidance by careful resource allocation.
- Prevention by structurally negating one of the four necessary conditions.

Deadlock Prevention
The difference from avoidance is that here the system itself is built in such a way that there are no deadlocks: make sure at least one of the four deadlock conditions is never satisfied. This may, however, be even more conservative than a deadlock-avoidance strategy.

Attacking the mutual exclusion condition
- Never grant exclusive access; but this may not be possible for several resources.

Attacking the no-preemption condition
- Not something you want to do.

Attacking the hold and wait condition
- Make a process hold at most one resource at a time.
- Make all the requests at the beginning: an all-or-nothing policy; if it fails, retry. E.g. two-phase locking.

Attacking the circular wait condition
- Order all the resources.
- Make sure that requests are issued in the correct order, so that no cycles are present in the resource graph (see the sketch below).
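
A minimal sketch of that ordering rule using Python threading locks; the lock and function names are invented, and sorting by id is just one convenient way to fix a global order:

    import threading

    m1, m2 = threading.Lock(), threading.Lock()

    def acquire_in_order(*locks):
        """Sort locks by a fixed global key (their id) and acquire in that
        order; all threads then agree on the order, so no cycle can form."""
        ordered = sorted(locks, key=id)
        for lock in ordered:
            lock.acquire()
        return ordered

    def release_all(locks):
        for lock in reversed(locks):
            lock.release()

    def worker():
        held = acquire_in_order(m1, m2)   # same order no matter who calls
        try:
            pass                          # critical section using both resources
        finally:
            release_all(held)

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()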

SCENARIO
Suppose thread 1 is running and locks M1, but before it can lock M2, it is interrupted. Thread 2 starts running; it locks M2, and when it tries to obtain and lock M1, it is blocked because M1 is already locked (by thread 1). Eventually thread 1 starts running again, and it tries to obtain and lock M2, but it is blocked because M2 is already locked by thread 2. Both threads are blocked; each is waiting for an event which will never occur.
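
The scenario can be reproduced in a few lines of Python (a sketch: the sleep just widens the window that makes the bad interleaving likely, and the threads are daemons so the demo exits):

    import threading, time

    m1, m2 = threading.Lock(), threading.Lock()

    def thread1():
        with m1:
            time.sleep(0.1)        # window in which thread 2 can lock M2
            with m2:               # blocks forever: thread 2 holds M2
                pass

    def thread2():
        with m2:
            time.sleep(0.1)
            with m1:               # blocks forever: thread 1 holds M1
                pass

    t1 = threading.Thread(target=thread1, daemon=True)
    t2 = threading.Thread(target=thread2, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=1); t2.join(timeout=1)
    print("deadlocked:", t1.is_alive() and t2.is_alive())   # True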

Deadlock Avoidance
Avoid actions that may lead to a deadlock. Think of it as a state machine moving from one state to another as each instruction is executed.

Safe State
A safe state is one where:
- It is not a deadlocked state.
- There is some sequence by which all requests can be satisfied.

To avoid deadlocks, we make only those transitions that take the system from one safe state to another; we avoid transitions to an unsafe state (a state that is not deadlocked, but is not safe either).

Example: the total number of instances of the resource is 12, and each process is listed as (Max, Allocated, Still Needs):

P0 (10, 5, 5)
P1 (4, 2, 2)
P2 (9, 2, 7)
Free = 3 -> Safe

The sequence P1, P0, P2 is a reducible sequence (P1's remaining need of 2 fits in the 3 free instances; when P1 finishes it returns its 4 instances, which is enough for P0; when P0 finishes, P2 can run), so the first state is safe. What if P2 requests one more instance and is allocated it? That results in an unsafe state (only P1 could then finish, leaving 4 free, which satisfies neither P0 nor P2), so do not allow P2's request to be satisfied.

Banker's Algorithm for Deadlock Avoidance
When a request is made, check whether, after the request is satisfied, there is (at least!) one sequence of moves that can satisfy all the requests, i.e. the new state is safe. If so, satisfy the request; otherwise, make the request wait. How do you find whether a state is safe? One way is sketched below.
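A minimal sketch of the safety check, using the figures from the example above; each process is given as a (max, allocated) pair, and the function name is invented:

    def is_safe(free, procs):
        """procs: list of (max_need, allocated) pairs. Return True if some
        completion order lets every process obtain its maximum claim."""
        done = [False] * len(procs)
        while not all(done):
            for i, (max_need, alloc) in enumerate(procs):
                # A process can finish if its remaining need fits in free.
                if not done[i] and max_need - alloc <= free:
                    free += alloc          # it finishes and returns what it holds
                    done[i] = True
                    break
            else:
                return False               # no process could finish: unsafe
        return True

    print(is_safe(3, [(10, 5), (4, 2), (9, 2)]))   # True: the safe state above
    print(is_safe(2, [(10, 5), (4, 2), (9, 3)]))   # False: after granting P2 one more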

Don't allow any process to lock only part of the resources it needs: if a process needs 5 resources, make it wait until all of them are available. Alternatively, if you use a semaphore here, you can release (un-wait) a resource occupied by another thread; in other words, preemption is another way out.

DISK SCHEDULING

FCFS SCHEDULING
FCFS scheduling services I/O requests in the order in which they arrive. It is, of course, the simplest scheduling algorithm and actually does no scheduling. It serves as a useful baseline against which to compare other scheduling algorithms.

SSTF
Selects the request with the minimum seek time from the current head position. SSTF scheduling is a form of SJF scheduling and may cause starvation of some requests.

SCAN
The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues. Sometimes called the elevator algorithm.

C-SCAN
A variant of SCAN that provides a more uniform wait time than SCAN. The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.

C-LOOK
A variant of C-SCAN. The disk arm only travels as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
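
To compare the policies, the sketch below totals head movement for FCFS and SSTF on an invented request queue (the cylinder numbers and starting position are made up for illustration):

    def fcfs(head, requests):
        """Total head movement when requests are served in arrival order."""
        total = 0
        for r in requests:
            total += abs(r - head)
            head = r
        return total

    def sstf(head, requests):
        """Total head movement when the nearest request is served next."""
        pending, total = list(requests), 0
        while pending:
            nearest = min(pending, key=lambda r: abs(r - head))
            total += abs(nearest - head)
            head = nearest
            pending.remove(nearest)
        return total

    queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
    print(fcfs(53, queue))   # 640 cylinders of movement
    print(sstf(53, queue))   # 236 cylinders of movement

SSTF's shorter total comes at the starvation risk noted above: a request far from the head can be postponed indefinitely while nearer ones keep arriving.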

To read or write data, the disk device must move the arm to the appropriate track. The time to carry this out is called the seek time. Then the disk device must wait for the desired sector/data to rotate into position under the head (the rotational latency). Each track is recorded in units called sectors. A sector is the smallest amount of data that can be physically read or written.

The disk access time can be calculated as follows:

Disk access time = seek time + rotational latency

The total cost of a disk I/O also includes:
- The overhead of getting into and out of the OS, and the time the OS spends fiddling with queues, etc.
- The queueing time spent waiting for the disk to become available.
- The latency spent waiting for the disk to get to the right track and sector.
- The transfer time spent actually reading or writing the data.
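
A worked example with invented figures: for a disk with an average seek time of 9 ms spinning at 7200 RPM, one revolution takes 60,000 / 7200 ≈ 8.33 ms, so the average rotational latency is about half of that:

    seek_ms = 9.0                 # average seek time (assumed figure)
    rpm = 7200
    rotation_ms = 60_000 / rpm    # 8.33 ms per revolution
    latency_ms = rotation_ms / 2  # on average, wait half a revolution
    print(seek_ms + latency_ms)   # ~13.17 ms average access time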
