4.1 Scheduling Types: Scheduling Objectives, CPU and I/O Burst Cycles, Pre-emptive and Non-Pre-emptive Scheduling, Scheduling Criteria

22516 Operating System (MSBTE CO/IT)

Scheduling Types

            In computing, scheduling is the method by which work is assigned to resources that complete the work. Here are several types of scheduling in operating systems:

  1. First-Come, First-Served (FCFS) Scheduling: Jobs are executed on a first-come, first-served basis. It is simple and easy to understand and implement.
  2. Shortest-Job-Next (SJN) Scheduling: Also known as Shortest Job First (SJF) or Shortest Process Next (SPN). In this type of scheduling, the process with the smallest execution time is selected for execution next. This minimizes the average waiting time, but it can cause longer processes to wait indefinitely (starvation).
  3. Priority Scheduling: In priority scheduling, each process is assigned a priority and the process with the highest priority is executed first. If two processes have the same priority then FCFS is used to break the tie.
  4. Round Robin (RR) Scheduling: Each process in the ready queue is assigned a time quantum (a small unit of time) and execution is allowed for that time period. If execution is not completed, then the process goes back to the ready queue and waits for its next turn.
  5. Multilevel Queue Scheduling: This scheduler assigns processes to different queues based upon properties of the process, such as memory size, process priority, or process type. Each queue may have its own scheduling algorithms.
  6. Multilevel Feedback Queue Scheduling: This is a complex scheduler that separates processes according to their needs for the processor. If a process uses too much CPU time, it will be moved to a lower-priority queue.

These different scheduling methods provide trade-offs between various factors including fairness, response time, throughput, and resource utilization. The choice of scheduling algorithm can greatly affect the performance of the system.
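
The waiting-time trade-off between FCFS and SJF can be illustrated with a short sketch (the burst lengths are hypothetical, and all processes are assumed to arrive at time 0):

```python
# Sketch: comparing average waiting time under FCFS and SJF
# (non-preemptive, all processes assumed to arrive at time 0).
def avg_waiting_time(burst_times):
    """Average waiting time when processes run in the given order."""
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)  # a process waits while earlier ones run
        elapsed += burst
    return sum(waiting) / len(waiting)

bursts = [24, 3, 3]                      # CPU bursts in arrival order
fcfs = avg_waiting_time(bursts)          # run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # run shortest job first

print(f"FCFS average waiting time: {fcfs:.1f}")  # (0 + 24 + 27) / 3 = 17.0
print(f"SJF  average waiting time: {sjf:.1f}")   # (0 + 3 + 6) / 3 = 3.0
```

With the long job first, both short jobs wait behind it; reordering by burst length drops the average waiting time from 17 to 3 time units in this example.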


Scheduling Objectives

            Scheduling is a key task in operating systems to manage the execution of processes. The scheduler is responsible for deciding which processes should run, when they should run, and for how long they should be allowed to run. The main objectives of scheduling include:

  1. Fairness: Each process should have a fair share of the CPU.
  2. Efficiency: Keep the CPU and other system resources as busy as possible.
  3. Throughput: Maximize the number of processes that complete their execution per time unit.
  4. Turnaround Time: Minimize the time it takes for a process to complete. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
  5. Waiting Time: Minimize the time that processes spend in the ready queue.
  6. Response Time: Minimize the time it takes from when a request was submitted until the first response is produced. This is particularly important in interactive systems.
  7. Deadlock Prevention: Ensure that the system will never enter a deadlock state, where a group of processes are each waiting for resources held by others in the group, causing all of them to wait indefinitely.
  8. Protection: Ensure that the activities of one process do not adversely affect other processes.
  9. Resource Utilization: Ensure efficient use of system resources, including the CPU, memory, disk I/O, etc.
  10. Predictability: The system's behavior should be predictable, producing similar outcomes under similar conditions.

These objectives are often contradictory (improving one can worsen another), and thus, there is usually a need for a trade-off among them. The choice of specific objectives, or the weight given to each, will depend on the specific needs of the system and its users.
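
The turnaround-time and waiting-time objectives above follow from two standard formulas: turnaround time = completion time - arrival time, and waiting time = turnaround time - CPU burst time. A minimal sketch, using hypothetical process data and a non-preemptive FCFS order:

```python
# Sketch: per-process turnaround and waiting times for a non-preemptive
# FCFS run, using the standard formulas (hypothetical process data).
processes = [
    # (name, arrival time, CPU burst length)
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
]

clock, metrics = 0, {}
for name, arrival, burst in processes:
    clock = max(clock, arrival) + burst   # completion time under FCFS
    turnaround = clock - arrival          # turnaround = completion - arrival
    waiting = turnaround - burst          # waiting = turnaround - burst
    metrics[name] = (turnaround, waiting)
    print(f"{name}: turnaround={turnaround}, waiting={waiting}")
```

P1 runs immediately (waiting 0), while P2 and P3 accumulate waiting time behind the processes ahead of them in the queue.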


CPU and I/O burst cycles

In the life of a process, there are typically two types of bursts, or cycles: CPU bursts and I/O bursts.

  1. CPU Burst: CPU bursts refer to the time periods during which a process uses the CPU (for example, for doing calculations). The length of these bursts is determined by the nature of the process - some processes have longer CPU bursts than others. A CPU-intensive task, like mathematical computation, would have longer CPU bursts compared to a task that is more I/O-focused.
  2. I/O Burst: I/O bursts refer to the time periods during which a process uses I/O services, such as reading from a disk or writing to a network socket. During an I/O burst, the process waits for the I/O operation to complete and doesn't need the CPU.

            The pattern of CPU and I/O bursts is characteristic for each process. Some processes might be CPU-bound (long CPU bursts, few I/O bursts), others might be I/O-bound (many short CPU bursts, long I/O bursts). This pattern of bursts is also taken into account by the CPU scheduler when deciding the execution order of processes.

            A typical process execution scenario starts with a CPU burst. That is followed by an I/O burst, during which the CPU is idle (unless there are other ready processes that can use it). This is followed by another CPU burst, then another I/O burst, and so on, until the process is complete. The pattern might be described as CPU burst – I/O wait – CPU burst – I/O wait – CPU burst, and so on.
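
One rough way to picture the CPU-bound versus I/O-bound distinction is to compare the total CPU time against the total I/O time in a process's burst sequence (the burst durations below are made up for illustration):

```python
# Sketch: classifying a process as CPU-bound or I/O-bound from its
# alternating burst pattern (hypothetical millisecond durations).
def classify(bursts):
    """bursts: list of ("cpu" | "io", duration) in execution order."""
    cpu_time = sum(d for kind, d in bursts if kind == "cpu")
    io_time = sum(d for kind, d in bursts if kind == "io")
    return "CPU-bound" if cpu_time > io_time else "I/O-bound"

# long CPU bursts with little I/O -> CPU-bound
compute_task = [("cpu", 80), ("io", 5), ("cpu", 90), ("io", 5)]
# many short CPU bursts separated by long I/O waits -> I/O-bound
editor_task = [("cpu", 2), ("io", 40), ("cpu", 3), ("io", 50)]

print(classify(compute_task))  # CPU-bound
print(classify(editor_task))   # I/O-bound
```

A real scheduler observes these patterns indirectly (for example, by tracking how often a process uses up its time slice), but the underlying idea is the same.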


Pre-emptive and Non-Pre-emptive Scheduling

Preemptive and non-preemptive (also known as cooperative) scheduling are two different approaches to process scheduling in operating systems:

  1. Preemptive Scheduling:
    • In preemptive scheduling, the operating system has the authority to interrupt a running process and force it to relinquish the CPU, even if the process has not completed its execution.
    • The scheduler can interrupt a process and switch to another process based on predetermined criteria, such as time quantum expiration or higher-priority processes becoming ready.
    • Preemptive scheduling ensures that no single process can monopolize the CPU for an extended period, improving fairness and responsiveness.
    • Context switching is more frequent in preemptive scheduling, as processes are often switched even before they voluntarily release the CPU.
    • Examples of preemptive scheduling algorithms include Round Robin, Priority Scheduling, and Multilevel Feedback Queue Scheduling.
  2. Non-Preemptive Scheduling:
    • In non-preemptive scheduling, a running process continues to execute on the CPU until it voluntarily releases the CPU or until it completes its execution.
    • The scheduler does not forcibly interrupt a process; instead, it waits for a process to finish its CPU burst and relinquish the CPU voluntarily.
    • Non-preemptive scheduling provides better predictability, as a process can execute without interruption until it finishes its current task.
    • However, it may lead to lower CPU utilization and less responsiveness if a long-running process does not release the CPU.
    • Examples of non-preemptive scheduling algorithms include First-Come, First-Served (FCFS) Scheduling and Shortest-Job-Next (SJN) Scheduling.

The choice between preemptive and non-preemptive scheduling depends on the specific requirements of the system and the applications running on it. Preemptive scheduling is often preferred in modern operating systems as it provides better control over process execution and responsiveness in multitasking environments. Non-preemptive scheduling is still used in some embedded systems and real-time applications where predictability and deterministic behavior are critical.
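
Preemption at a fixed time quantum, as in Round Robin, can be sketched as follows (the bursts are hypothetical, and all processes are assumed ready at time 0):

```python
# Sketch: a minimal Round Robin simulation showing preemption at a
# fixed time quantum (all processes assumed ready at time 0).
from collections import deque

def round_robin(bursts, quantum):
    """Return completion times keyed by process id, given CPU bursts."""
    ready = deque(enumerate(bursts))  # (pid, remaining burst time)
    clock, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)             # run at most one quantum
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of queue
        else:
            completion[pid] = clock               # burst finished
    return completion

print(round_robin([24, 3, 3], quantum=4))  # {1: 7, 2: 10, 0: 30}
```

Process 0 is repeatedly preempted after each 4-unit quantum, so the two short processes finish early instead of waiting behind its entire 24-unit burst, which is exactly the responsiveness benefit described above.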


Scheduling criteria

Scheduling is a critical function of an operating system that manages the execution of processes. Several criteria are used to compare and evaluate scheduling algorithms:

  1. CPU Utilization: The goal is to keep the CPU as busy as possible. High CPU utilization is desired to get maximum usage out of the CPU.
  2. Throughput: This refers to the number of processes that complete their execution per time unit. A higher throughput is generally better as it means more processes are being completed.
  3. Turnaround Time: This is the amount of time taken to execute a particular process. It is the interval from the time of submission of a process to the time of completion. The goal is to minimize the average turnaround time for a batch of processes.
  4. Waiting Time: This is the sum of the periods spent waiting in the ready queue. The scheduling algorithm should aim to minimize the average waiting time.
  5. Response Time: This is the amount of time it takes from when a request was submitted until the first response is produced. For interactive systems, minimizing response time is more important than minimizing waiting or turnaround time.
  6. Fairness: Ensuring all processes get a fair share of the CPU and no process is starved for too long.
  7. Prioritization: Some processes may have a higher priority than others, and the scheduler needs to take this into account.
  8. Balancing Resource Utilization: The scheduler should not only manage the CPU but also take into account the utilization of other resources such as I/O devices and memory.

Each scheduling algorithm may prioritize these criteria differently, and the best choice of scheduling algorithm depends on the specific requirements of the system and its workload.
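
Two of these criteria, CPU utilization and throughput, can be read directly off an execution timeline. A minimal sketch, using a hypothetical timeline in which idle gaps occur while every process waits on I/O:

```python
# Sketch: CPU utilization and throughput measured over a hypothetical
# execution timeline (idle gaps = all processes waiting on I/O).
timeline = [
    # (start, end, what the CPU was doing)
    (0, 4, "P1"),
    (4, 6, "idle"),
    (6, 9, "P2"),
    (9, 10, "idle"),
    (10, 14, "P3"),
]

total = timeline[-1][1] - timeline[0][0]
busy = sum(end - start for start, end, what in timeline if what != "idle")
finished = len({what for _, _, what in timeline if what != "idle"})

print(f"CPU utilization: {busy / total:.0%}")   # 11/14 of the time busy
print(f"Throughput: {finished / total:.2f} processes per time unit")
```

A scheduler that overlaps one process's I/O wait with another process's CPU burst shrinks the idle gaps, raising both utilization and throughput.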
