Unit-IV CPU Scheduling and Algorithms
CPU scheduling is a fundamental concept in operating systems. It determines which process runs on the CPU and for how long. The goal is to use CPU resources efficiently, ensuring fair use and minimizing waiting times. Here's an overview:
CPU Scheduling:
CPU scheduling is the process of determining the order in which processes access the CPU. Since multiple processes are often in memory simultaneously, the operating system needs a strategy to decide which one gets the CPU, and when.
Criteria for CPU Scheduling:
- CPU Utilization: Keep the CPU as busy as possible.
- Throughput: The number of processes that complete their execution per unit of time.
- Turnaround Time: The total time taken to execute a process, from submission to completion.
- Waiting Time: The total time a process spends in the ready queue waiting for the CPU.
- Response Time: The time from the submission of a request until the first response is produced.
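A small worked example shows how turnaround and waiting time are computed; the process names and burst times below are invented for illustration:

```python
# Worked example: three processes arrive at time 0 with the burst
# times below and run back-to-back in the order given.
bursts = {"P1": 24, "P2": 3, "P3": 3}

completion = {}
clock = 0
for name, burst in bursts.items():
    clock += burst
    completion[name] = clock

# Turnaround time = completion time - arrival time (arrival is 0 here).
turnaround = {p: c for p, c in completion.items()}
# Waiting time = turnaround time - burst time.
waiting = {p: turnaround[p] - bursts[p] for p in bursts}

print(turnaround)  # {'P1': 24, 'P2': 27, 'P3': 30}
print(waiting)     # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting.values()) / len(waiting))  # average waiting time: 17.0
```

Note how the same three bursts can yield very different averages depending on the order chosen, which is exactly what the algorithms below trade off.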
Scheduling Algorithms:
Various algorithms are used for CPU scheduling, each with its own advantages and drawbacks:
- First-Come, First-Served (FCFS):
  - Processes are scheduled in the order they arrive.
  - Pros: Simple and easy to understand.
  - Cons: Can lead to the "convoy effect", where short processes wait behind long ones, resulting in inefficient CPU utilization.
- Shortest Job Next (SJN) or Shortest Job First (SJF):
  - The process with the smallest execution time runs next.
  - Pros: Minimizes average waiting time; short processes are not stuck behind long ones.
  - Cons: The length of the next CPU burst is difficult to predict, and long processes can starve.
- Priority Scheduling:
  - Each process is assigned a priority; the highest-priority ready process runs next.
  - Pros: Useful for systems with jobs of differing importance.
  - Cons: Can lead to starvation if low-priority processes are continually bypassed by higher-priority ones (aging is a common remedy).
- Round Robin (RR):
  - Each process is assigned a fixed time slice, or quantum.
  - Processes are dispatched in circular order, each running for at most one quantum before being pre-empted (or finishing early).
  - Pros: Fair and simple; well suited to time-sharing systems.
  - Cons: The choice of time quantum is critical: too small causes excessive context switching, too large degenerates toward FCFS.
- Multilevel Queue Scheduling:
  - Processes are divided into multiple queues, each with its own scheduling algorithm.
  - A process is assigned to a queue based on its characteristics (e.g., foreground/interactive vs. background/batch).
  - Pros: Flexible; can accommodate different types of processes.
  - Cons: Deciding which process belongs in which queue can be complex.
- Multilevel Feedback Queue:
  - A variation of multilevel queue scheduling.
  - Allows processes to move between queues based on their observed behavior, such as the length of their recent CPU bursts.
  - Pros: Adapts to a process's needs and behavior.
  - Cons: More complex to implement and tune.
- Fair Share Scheduling:
  - Used in systems where each user (or group) is allocated a portion of the CPU time.
  - Ensures that users receive a fair share of CPU resources, regardless of how many processes each one runs.
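To make the policies above concrete, FCFS can be sketched in a few lines of Python. This is a minimal simulation, not an OS implementation; the process names, arrival times, and burst lengths are made up for illustration:

```python
def fcfs(processes):
    """Simulate FCFS. processes is a list of (name, arrival, burst).

    Returns {name: waiting_time}, scheduling strictly in arrival order.
    """
    waiting = {}
    clock = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)   # CPU may sit idle until the job arrives
        waiting[name] = clock - arrival
        clock += burst                # run to completion (non-pre-emptive)
    return waiting

# Convoy effect: one long job at the front makes every short job wait.
print(fcfs([("long", 0, 24), ("short1", 1, 3), ("short2", 2, 3)]))
# {'long': 0, 'short1': 23, 'short2': 25}
```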
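Non-pre-emptive SJF can be sketched with a min-heap keyed on burst length. Again, names and times are illustrative, and real systems must estimate burst lengths rather than know them in advance:

```python
import heapq

def sjf(processes):
    """Non-pre-emptive SJF. processes is a list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])  # ordered by arrival time
    ready = []        # min-heap ordered by burst length
    waiting, clock, i = {}, 0, 0
    while i < len(pending) or ready:
        # Move everything that has arrived by `clock` into the ready heap.
        while i < len(pending) and pending[i][1] <= clock:
            name, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:                 # CPU idle: jump to the next arrival
            clock = pending[i][1]
            continue
        burst, arrival, name = heapq.heappop(ready)
        waiting[name] = clock - arrival
        clock += burst                # run to completion (non-pre-emptive)
    return waiting

# With the same workload FCFS served long-first, SJF runs the short jobs first:
print(sjf([("long", 0, 24), ("short1", 0, 3), ("short2", 0, 3)]))
# {'short1': 0, 'short2': 3, 'long': 6}
```

Average waiting time here is 3.0, versus 17.0 when the long job runs first, which is why SJF is the benchmark for minimizing average waiting time.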
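Priority scheduling amounts to dispatching from a priority queue. A minimal sketch, assuming all jobs are ready at time 0 and that a lower number means a higher priority (the job names and priority values are invented):

```python
import heapq

def priority_order(processes):
    """Dispatch order under non-pre-emptive priority scheduling.

    processes: list of (name, priority); a LOWER number means HIGHER priority.
    """
    heap = [(prio, name) for name, prio in processes]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(priority_order([("editor", 2), ("backup", 9), ("daemon", 1)]))
# ['daemon', 'editor', 'backup']
```

If high-priority jobs keep arriving, "backup" never reaches the front of the heap, which is exactly the starvation problem noted above.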
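The Round Robin mechanics can be sketched with a FIFO queue, assuming all jobs are ready at time 0; the names, bursts, and quantum are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin for jobs all ready at time 0.

    processes: list of (name, burst).
    Returns (gantt, completion): the (name, start, end) run segments
    and each process's completion time.
    """
    queue = deque(processes)
    gantt, completion, clock = [], {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)       # run one quantum, or to completion
        gantt.append((name, clock, clock + run))
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # pre-empted: back of queue
        else:
            completion[name] = clock
    return gantt, completion

gantt, done = round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)
print(done)  # {'P3': 5, 'P2': 8, 'P1': 9}
```

Rerunning this with quantum=1 or quantum=10 shows the trade-off noted above: a tiny quantum multiplies the number of switches, while a huge one reduces RR to FCFS.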
Pre-emptive vs. Non-pre-emptive Scheduling:
- Pre-emptive: The OS can remove a running process from the CPU and replace it with another, for example when a higher-priority process arrives or when a time quantum expires in Round Robin scheduling.
- Non-pre-emptive: Once a process is in the running state, it continues until it completes or voluntarily yields the CPU. This is typically used in systems where interruption would be problematic or inefficient.
Conclusion:
CPU scheduling is vital for ensuring system responsiveness, fairness, and effective utilization of CPU resources. The choice of scheduling algorithm can affect system performance, and operating systems often use a combination of algorithms based on the system's requirements and the types of processes it handles.