Types of Scheduling Algorithms
Several types of scheduling
algorithms are used in operating systems to manage the execution of processes:
- First-Come, First-Served (FCFS): Jobs are
processed in the order in which they arrive. Once a job begins execution,
it runs to completion.
- Shortest Job Next (SJN): Also known as
Shortest Job First (SJF) or Shortest Process Next (SPN), this algorithm
selects the job with the smallest total CPU burst time. This approach can
starve longer processes if short processes are continually added.
- Priority Scheduling: In this type of
scheduling, each process is assigned a priority, and that priority is used to
schedule the processes. If processes have the same priority, FCFS
scheduling breaks the tie. Priority can be either static (set by
the user or system at creation time) or dynamic (changing based on the
behavior of the process).
- Round Robin (RR): Round Robin scheduling is
a preemptive algorithm that is quite similar to FCFS scheduling, but
preemption is added to switch between processes. A small unit of time
called a time quantum is defined, and each process executes for at most one
quantum at a time.
- Multilevel Queue Scheduling: In this
scheduling, processes are divided into multiple groups or queues based on
process attributes like process type, CPU burst time, process age,
priority, etc. Each queue can have its own scheduling algorithm.
- Multilevel Feedback Queue Scheduling: This
is a complex version of Multilevel Queue Scheduling. In this scheduling, a
process can move between queues. The idea is to separate processes with
different CPU-burst characteristics.
The choice of scheduling algorithm
can greatly impact the system's performance, and different algorithms suit
different circumstances. Schedulers in modern operating systems
often use a combination of these algorithms.
First-Come, First-Served (FCFS)
First-Come, First-Served (FCFS) is
one of the simplest scheduling algorithms used by operating systems for process
scheduling. As the name suggests, in FCFS, the process that arrives first gets
executed first.
Here's how it works:
- The operating system maintains a queue of all the
ready processes. The processes are positioned in the queue in the order
that they arrive.
- The process at the head of the queue is selected
for execution on the CPU.
- The process runs until it completes or until it
blocks because it needs to wait for an I/O operation to complete.
- When the running process completes or blocks, the
scheduler picks the next process in the queue.
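The steps above can be sketched in a few lines of Python. This is a simplified model, not a real scheduler: the process names and burst times are made up for illustration, and blocking for I/O is ignored.

```python
from collections import deque

def fcfs(processes):
    """Run processes to completion in arrival order.
    `processes` is a list of (name, burst_time) pairs, already
    sorted by arrival; returns (name, start, finish) tuples."""
    ready = deque(processes)        # the ready queue, in arrival order
    clock, schedule = 0, []
    while ready:
        name, burst = ready.popleft()   # head of the queue runs next
        schedule.append((name, clock, clock + burst))
        clock += burst                  # runs to completion, no preemption
    return schedule

print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))
# -> [('P1', 0, 24), ('P2', 24, 27), ('P3', 27, 30)]
```

Note how P2 and P3 cannot start until P1's long burst finishes, which is exactly the convoy effect discussed below.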
FCFS is simple to understand and
implement but it has some significant limitations:
- Convoy Effect: In a mix of short and long
processes, if a long process gets the CPU first, then all the other shorter
processes have to wait for a long time leading to poor average response
time. This scenario is known as the convoy effect.
- No Preemption: In FCFS, once a process
starts executing, it runs to completion. There's no way to temporarily
stop a running process to let another process run.
- Non-Optimal Waiting Time: FCFS can lead to
high average waiting time. For example, if a very long process enters the
system just before many short processes, the short processes unfairly wait
for a long time.
- No Priority: FCFS doesn't consider the
priority of processes. A higher priority process has to wait if a lower
priority process arrives earlier.
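The non-optimal waiting time can be made concrete with a small calculation. The burst times below are illustrative, not from any real workload:

```python
def avg_waiting_time(bursts):
    """Average FCFS waiting time: each process waits for the
    total burst time of everything ahead of it in the queue."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

print(avg_waiting_time([24, 3, 3]))  # long job arrives first  -> 17.0
print(avg_waiting_time([3, 3, 24]))  # short jobs arrive first -> 3.0
```

The same three processes yield an average wait of 17 or 3 time units depending purely on arrival order, which FCFS has no way to correct.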
Because of these limitations, FCFS,
despite its simplicity, is rarely used on its own in modern general-purpose
operating systems. However, it may still suit simple or specialized
environments where tasks have similar lengths or where preemption and
priorities are not a concern.
Shortest Job First (SJF)
Shortest Job First (SJF) is a
scheduling algorithm used by operating systems to manage the execution of
processes or tasks. As the name suggests, in SJF, the process with the shortest
burst time (execution time) is scheduled next.
Here's how it works:
- The operating system maintains a list or a queue of
all the ready processes.
- The scheduler goes through the list of ready
processes and selects the process with the shortest estimated execution
time.
- The selected process is then dispatched to the CPU
for execution.
- When the running process completes or blocks (for
example, to wait for an I/O operation), the scheduler again looks for the
ready process with the shortest execution time.
SJF can be either non-preemptive
(once a process starts executing, it runs to completion) or preemptive (if a
new process arrives with a shorter burst time than what's left of the currently
executing process, the current process is preempted, and the new process takes
over).
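A minimal sketch of the non-preemptive variant in Python. It assumes burst times are known in advance, which, as noted below, real systems cannot do; the processes and times are illustrative.

```python
def sjf(processes):
    """Non-preemptive SJF. `processes` maps name -> (arrival, burst).
    Returns the order in which processes complete."""
    remaining = dict(processes)
    clock, order = 0, []
    while remaining:
        # processes that have arrived by now
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
        if not ready:                         # CPU idle until next arrival
            clock = min(ab[0] for ab in remaining.values())
            continue
        # pick the ready process with the shortest burst
        name = min(ready, key=lambda n: ready[n][1])
        clock += remaining.pop(name)[1]       # runs to completion
        order.append(name)
    return order

print(sjf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}))
# -> ['P1', 'P3', 'P2', 'P4']
```

While P1 runs, three shorter jobs queue up; once it finishes, the scheduler always jumps to the shortest of them.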
The advantages of SJF include:
- It is optimal in terms of minimizing the average
waiting time for a set of processes.
However, it has some significant
limitations:
- Starvation: Processes with longer burst
times may end up waiting indefinitely if shorter processes keep coming.
This problem is known as starvation.
- Burst Time Prediction: It's difficult to
know the execution time of a process in advance. Real systems don't have
perfect knowledge of how long a job or process will last.
Because of these limitations, pure
SJF isn't often used in general-purpose operating systems. Instead, variations
of SJF, like adaptive or feedback algorithms, might be used, where the
scheduler estimates the lengths of the CPU bursts.
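A common estimation technique for this is exponential averaging of past bursts: the next estimate is a weighted blend of the most recent measured burst and the previous estimate. The sketch below uses arbitrary values for the initial estimate and the weight alpha.

```python
def predict_next_burst(history, tau0=10.0, alpha=0.5):
    """Exponential average: tau_next = alpha*t + (1 - alpha)*tau,
    where t is the measured burst and tau the previous estimate."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4]))  # -> 5.0
```

A larger alpha makes the estimate react faster to recent behavior; a smaller alpha gives more weight to the process's long-term history.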
Shortest Remaining Time Next (SRTN)
Shortest Remaining Time Next
(SRTN), also known as Shortest Remaining Time First (SRTF), is a preemptive
version of the Shortest Job First (SJF) scheduling algorithm.
In SRTN scheduling, the process
with the smallest amount of time remaining until completion is selected to
execute. Because it is preemptive, if a new process arrives that requires less
time to complete than the remaining time of the currently executing process,
the currently executing process is preempted, and the new process is scheduled.
Here's how it works:
- The operating system maintains a list or queue of
all ready processes.
- The scheduler selects the process with the shortest
remaining time from the list of ready processes.
- This process is dispatched to the CPU for
execution.
- If a new process arrives in the ready queue while a
process is executing, and the new process has a shorter remaining time
than the currently executing process, the CPU is taken away from the
current process (it is preempted), and the new process is scheduled to
run.
- When the running process completes, the scheduler
again selects the process with the shortest remaining time.
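The preemptive behavior above can be modeled by re-evaluating the choice at every time unit. This is a teaching sketch with made-up arrival and burst times, not a real dispatcher:

```python
def srtn(processes):
    """Preemptive SRTN, simulated one time unit at a time.
    `processes` maps name -> (arrival, burst); returns finish times."""
    remaining = {n: b for n, (a, b) in processes.items()}
    arrival = {n: a for n, (a, b) in processes.items()}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                         # idle until next arrival
            clock += 1
            continue
        # re-picking the least-remaining process every tick is what
        # makes this preemptive: a new shorter arrival wins immediately
        name = min(ready, key=lambda n: remaining[n])
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    return finish

print(srtn({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
# -> {'P3': 4, 'P2': 7, 'P1': 14}
```

P1 starts first but is preempted as soon as P2 arrives with less remaining work, and P2 is in turn preempted by P3.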
The benefits of SRTN include:
- It can provide better average turnaround time than
SJF, as it allows a shorter process to run immediately rather than waiting
for a longer process to complete.
The limitations of SRTN include:
- Like SJF, SRTN suffers from the problem of
starvation. Longer processes may wait indefinitely if shorter processes
keep arriving.
- Predicting the time remaining of a process can be
challenging in practical implementations.
- It can lead to higher overhead due to increased
context switching if processes with shorter remaining times frequently
arrive.
Because of these limitations, pure
SRTN isn't typically used in general-purpose operating systems. Variations or
approximations of SRTN may be used instead, which make use of priority
scheduling or implement aging to prevent starvation.
Round Robin (RR)
Round Robin (RR) is a preemptive
scheduling algorithm used by operating systems for managing process execution.
It is one of the simplest, fairest, and easiest to understand scheduling
algorithms.
Here's how it works:
- Every process in the system is placed into a queue.
- The scheduler selects the process at the front of
the queue and allocates the CPU to it.
- The selected process is allowed to run for a fixed
amount of time known as a time quantum (or time slice).
- If the process finishes before the time quantum
expires, it is removed from the queue. If it doesn't finish, it is moved
to the back of the queue to await its next turn.
- The scheduler then allocates the CPU to the process
that is now at the front of the queue, and the process repeats.
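The cycle above can be sketched as follows. All processes are assumed ready at time zero, and context-switch cost is ignored for simplicity:

```python
from collections import deque

def round_robin(processes, quantum):
    """`processes` is a list of (name, burst) pairs, all ready at t=0.
    Returns the (name, start, end) CPU slices in execution order."""
    queue = deque(processes)
    clock, slices = 0, []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)          # run one quantum, or less
        slices.append((name, clock, clock + run))
        clock += run
        if left > run:                    # unfinished: back of the queue
            queue.append((name, left - run))
    return slices

print(round_robin([("P1", 5), ("P2", 3), ("P3", 2)], quantum=2))
# -> [('P1', 0, 2), ('P2', 2, 4), ('P3', 4, 6),
#     ('P1', 6, 8), ('P2', 8, 9), ('P1', 9, 10)]
```

Rerunning this with a much larger quantum (say, 10) collapses the schedule into FCFS order, illustrating the sensitivity to quantum length described below.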
The key characteristics of Round
Robin scheduling include:
- It is simple to implement and understand.
- It is fair because every process gets an equal
share of the CPU.
- The performance of Round Robin scheduling can be
significantly impacted by the length of the time quantum. If the time
quantum is too short, the system spends too much time switching between
processes. If the time quantum is too long, the system becomes more like a
First-Come, First-Served (FCFS) system, and responsiveness can suffer.
Round Robin scheduling is often
used in interactive systems where equal share and fairness are important, and
it is suitable for time-sharing systems. However, it does not consider process
priority, and it may not be efficient for systems with varying process burst
times.
Priority Scheduling
Priority scheduling is a method of
scheduling processes that is based on priority. In this scheduling method, the
operating system assigns a priority rank to each process, and the CPU is
allocated to the process with the highest priority.
Here's how it works:
- Each process is assigned a priority when it enters
the system. The assignment could be based on a variety of factors such as
the importance of the process, the type or category of the process, the
amount of resources the process requires, the user's preferences, etc.
- The scheduler selects the process with the highest
priority from the list of ready processes.
- The selected process is dispatched to the CPU for
execution.
- If a new process arrives in the ready queue that
has a higher priority than the currently executing process, the CPU may be
preempted from the current process and given to the new process, depending
on whether the system is using preemptive or non-preemptive scheduling.
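A ready queue for the non-preemptive case is naturally a min-heap keyed on priority. The process names and priority values below are hypothetical, and the convention (lower number = higher priority) is one of several in use:

```python
import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling with a min-heap.
    `processes` is a list of (priority, name, burst); a lower
    number means higher priority. Returns completion order."""
    heap = list(processes)
    heapq.heapify(heap)              # ready queue ordered by priority
    order = []
    while heap:
        priority, name, burst = heapq.heappop(heap)
        order.append(name)           # runs to completion before next pick
    return order

print(priority_schedule([(3, "editor", 5), (1, "kernel", 2), (2, "daemon", 4)]))
# -> ['kernel', 'daemon', 'editor']
```

Note that with tuple entries, Python's heap breaks priority ties by comparing the next field; a real scheduler would instead record arrival order to get FCFS tie-breaking.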
Priority scheduling has several
advantages:
- It allows the operating system to place more
important tasks ahead of less important ones, which can be useful in a
variety of situations.
- It can be either non-preemptive (once a process
starts executing, it runs to completion) or preemptive (if a new process
arrives with a higher priority than the currently running process, the
current process is preempted and the new process starts executing).
However, priority scheduling also
has some drawbacks:
- Starvation: Low-priority tasks may end up
waiting indefinitely if high-priority tasks keep coming. This problem is
known as starvation or indefinite blocking. It can be resolved using
techniques such as aging (gradually increasing the priority of processes
that wait for a long time).
- Priority Inversion: This is a situation
where a lower priority process holds a resource required by a higher
priority process, resulting in the higher priority process having to wait.
- Difficulty in Priority Assignment: It can be
challenging to assign the appropriate priorities to each process,
especially in complex systems.
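Aging, mentioned above as a fix for starvation, can be sketched as a periodic pass that boosts every waiting process. The boost step, the floor, and the process names are arbitrary choices for illustration:

```python
def age(waiting, step=1, floor=0):
    """One aging pass: raise the priority of every waiting process.
    `waiting` maps name -> priority (lower number = higher priority)."""
    for name in waiting:
        waiting[name] = max(floor, waiting[name] - step)
    return waiting

queue = {"batch_job": 9, "report": 7}
for _ in range(5):        # five scheduling intervals pass
    age(queue)
print(queue)              # -> {'batch_job': 4, 'report': 2}
```

Given enough waiting time, even the lowest-priority process climbs high enough to be scheduled, bounding how long starvation can last.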
Despite these challenges, priority
scheduling is widely used in operating systems due to its flexibility and
adaptability to a wide range of scenarios.
Multilevel Queue Scheduling
Multilevel Queue Scheduling is a
complex scheduling algorithm used by operating systems which separates
processes into different groups or queues based on their characteristics, and
each queue has its own scheduling algorithm.
Here's how it works:
- Processes are divided into different queues based
on their characteristics. For example, a common division is to separate
system processes, interactive processes, and batch processes into
different queues. The criteria for division into queues can be the memory
size, process priority, process type, or CPU burst time.
- Each queue has its own scheduling algorithm. For
instance, the foreground (interactive) queue might be scheduled using a
Round Robin scheduler, while the background (batch) queue might use FCFS
scheduling.
- There's a scheduler for the queues as well, which
determines which queue gets the CPU. This could be a simple priority
system (the queue with the highest priority is always given the CPU if it
has processes ready to run), or it could use time slices to cycle between
the queues.
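The arrangement above can be sketched with two queues and a strict-priority dispatcher: a foreground Round Robin queue that always wins, and a background FCFS queue that runs only when the foreground is empty. Queue contents and the quantum are illustrative:

```python
from collections import deque

def multilevel_step(foreground, background, quantum=2):
    """Pick the next slice of work. The foreground (Round Robin)
    queue always wins; the background (FCFS) queue runs only when
    the foreground is empty. Queues hold (name, remaining) pairs."""
    if foreground:
        name, left = foreground.popleft()
        run = min(quantum, left)
        if left > run:                        # unfinished: requeue
            foreground.append((name, left - run))
        return (name, run)
    if background:
        name, left = background.popleft()     # FCFS: run to completion
        return (name, left)
    return None

fg = deque([("shell", 3)])
bg = deque([("backup", 10)])
print(multilevel_step(fg, bg))   # -> ('shell', 2)
print(multilevel_step(fg, bg))   # -> ('shell', 1)
print(multilevel_step(fg, bg))   # -> ('backup', 10)
```

The strict-priority dispatcher here is also the source of the starvation risk noted below: "backup" runs only because "shell" eventually finishes.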
Advantages of Multilevel Queue
Scheduling include:
- It is capable of differentiating between different
types of processes and can handle a wide range of process types, ensuring
that each gets an appropriate share of the CPU.
- It allows for priority scheduling, as more
important tasks can be placed in higher-priority queues.
Disadvantages include:
- Inflexibility: Once a process is assigned to a
queue, it cannot be moved to another queue.
- It can lead to starvation. If higher-priority
queues always have processes to execute, a lower-priority queue may never
get the CPU.
Despite these challenges,
Multilevel Queue Scheduling can be very effective when processes divide
cleanly into different classes with significantly different behavior, and
variants of it appear in real-world operating systems.