Process Scheduling
Process scheduling is a key
function of the operating system. It manages the execution of multiple
processes and threads, deciding which gets access to the CPU, when, and for
how long.
The main aim of the process
scheduler is to keep the CPU busy at all times and to minimize response time
for processes. Several scheduling algorithms can be used, each with its own
advantages and trade-offs:
- First-Come, First-Served (FCFS): This is the
simplest scheduling algorithm. In FCFS, the process that arrives first is
the one that gets executed first. The downside of this method is that
short processes may have to wait for very long processes to complete if
they arrive slightly later.
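This "convoy" effect is easy to show with a minimal sketch (burst times below are made up, in milliseconds): under FCFS, every job waits for the sum of all earlier bursts.

```python
# FCFS runs jobs strictly in arrival order, so each job's waiting time
# is the sum of the bursts of every job ahead of it (illustrative values).
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # time this job spent waiting before starting
        clock += burst
    return waits

# A 100 ms job arriving first makes two short jobs wait 100 and 102 ms.
print(fcfs_waiting_times([100, 2, 3]))  # [0, 100, 102]
```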
- Shortest Job Next (SJN): Also known as
Shortest Job First (SJF), this algorithm selects the process with the
smallest total execution time to run next. While this minimizes average
waiting time, the total execution time of a process is difficult to know
in advance.
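Assuming all jobs are ready at time zero, and that burst times are known up front (the unrealistic part), non-preemptive SJF is simply FCFS over the sorted bursts. A sketch with invented values:

```python
# Non-preemptive SJF: run the shortest job first. When all jobs arrive
# together, sorting by burst length minimizes total waiting time.
def sjf_schedule(bursts):
    order = sorted(bursts)
    waits, clock = [], 0
    for burst in order:
        waits.append(clock)  # short jobs no longer wait behind long ones
        clock += burst
    return order, waits

print(sjf_schedule([100, 2, 3]))  # ([2, 3, 100], [0, 2, 5])
```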
- Priority Scheduling: In this algorithm, each
process is assigned a priority, and the process with the highest priority
is executed first. If two processes have the same priority, the FCFS
rule is used. This method can lead to "starvation" or
"indefinite blocking", where a low-priority process may never get
executed if high-priority processes keep arriving; a common remedy is
aging, which gradually raises the priority of processes that have waited
a long time.
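Both rules can be captured with a min-heap keyed on priority plus an arrival counter as the tiebreaker. The process names and priority numbers below are invented, with lower numbers meaning higher priority:

```python
import heapq
from itertools import count

# Min-heap keyed on (priority, arrival order): lower number = higher
# priority, and the arrival counter breaks ties FCFS-style.
ready, order = [], count()
for name, prio in [("logger", 3), ("shell", 1), ("backup", 3)]:
    heapq.heappush(ready, (prio, next(order), name))

run_order = []
while ready:
    prio, _, name = heapq.heappop(ready)
    run_order.append(name)
print(run_order)  # ['shell', 'logger', 'backup']
```

Note that "shell" preempts nothing here; the sketch only orders a fixed batch. The two equal-priority jobs, "logger" and "backup", run in arrival order.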
- Round Robin (RR): In Round Robin scheduling,
each process is given a fixed time slot (or "quantum") in a
cyclic way. It is simple, easy to implement, and starvation-free, as all
processes get a fair share of the CPU. Performance depends heavily on the
quantum size: too large, and RR degenerates into FCFS; too small, and
context-switch overhead dominates.
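The cycling can be sketched with a 2-unit quantum and two hypothetical jobs: an unfinished job re-enters at the back of the queue with its remaining time.

```python
from collections import deque

# Round robin with a fixed quantum (illustrative bursts): a job that
# does not finish goes to the back of the queue with its leftover time.
def round_robin(bursts, quantum=2):
    queue = deque(bursts.items())
    timeline = []  # (name, units run) per turn on the CPU
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        timeline.append((name, run))
        if left > run:
            queue.append((name, left - run))
    return timeline

print(round_robin({"A": 5, "B": 3}))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```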
- Multilevel Queue Scheduling: This algorithm
partitions the ready queue into several separate queues, each with its own
scheduling algorithm. For example, a real-time queue might use SJF, while
a background queue uses RR. Each process is permanently assigned to one
queue, typically based on its type or priority.
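One way to picture fixed-priority multilevel queues (process and queue names below are invented): the higher-priority queue always drains before the lower one gets the CPU, and processes never move between queues.

```python
from collections import deque

# Two fixed-priority queues: "system" always drains before "batch"
# runs, and each process stays in the queue it was assigned to.
queues = {"system": deque(["init", "kswapd"]), "batch": deque(["report"])}
run_order = []
while queues["system"] or queues["batch"]:
    level = "system" if queues["system"] else "batch"
    run_order.append(queues[level].popleft())
print(run_order)  # ['init', 'kswapd', 'report']
```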
- Multilevel Feedback Queue Scheduling: This
is a more flexible version of the multilevel queue scheduling algorithm,
in which a process can move between queues based on its observed behavior:
a process with long CPU bursts may be demoted to a lower-priority queue,
while an interactive, I/O-bound process may be promoted.
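A toy two-level feedback queue (names, bursts, and quanta all invented) shows the demotion rule: a job that exhausts its quantum drops to a lower-priority queue with a longer quantum, while short, interactive-style jobs finish quickly at the top level.

```python
from collections import deque

# Two-level MLFQ sketch: queue 0 (quantum 2) is served before queue 1
# (quantum 4); a job that uses its whole quantum without finishing is
# demoted (jobs already at level 1 simply stay there).
queues = [deque([("interactive", 1), ("cruncher", 9)]), deque()]
quanta = [2, 4]
trace = []  # (name, queue level, units run) per turn
while any(queues):
    level = 0 if queues[0] else 1
    name, left = queues[level].popleft()
    run = min(quanta[level], left)
    trace.append((name, level, run))
    if left > run:
        queues[1].append((name, left - run))
print(trace)
# [('interactive', 0, 1), ('cruncher', 0, 2),
#  ('cruncher', 1, 4), ('cruncher', 1, 3)]
```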
The scheduler in a real-world
operating system may use one or a combination of these algorithms for best
performance. The goal is to balance CPU use, response times, throughput, and
other factors.
Process Scheduling Queues, Schedulers, and Context Switches
In the process scheduling
mechanism of an operating system, queues, schedulers, and context switches play
a significant role. Here's a brief explanation of these elements:
- Process Scheduling Queues: In an operating
system, processes move between various scheduling queues throughout their
lifecycle. These queues might include:
- Job Queue: This queue contains all the
processes in the system.
- Ready Queue: This queue contains the
processes that are residing in main memory and are ready to execute.
- Waiting Queue: This queue contains
processes that are waiting for some I/O operation to complete.
The processes move from one queue
to another based on their current status and the outcomes of their operations.
- Schedulers: The operating system uses
schedulers to control how processes move through these queues and when
they run. There are typically three types:
- Long-term scheduler (or admission scheduler):
This scheduler controls the admission of new processes into the system,
deciding whether or not there are sufficient resources to accommodate a
new process. The processes selected by the long-term scheduler are loaded
into memory and placed into the ready queue.
- Short-term scheduler (or CPU scheduler):
This scheduler selects from among the processes that are ready to
execute, and allocates the CPU to one of them.
- Medium-term scheduler: This scheduler
temporarily removes (swaps out) processes from main memory to secondary
storage (such as a disk) and later swaps them back in. This is part of
the swapping function.
- Context Switch: A context switch occurs when
the CPU switches from executing one process to another. The state of the
first process is saved (in its Process Control Block), and the state of
the next process is loaded, effectively stopping the first process and
starting the second. Context switches can occur as a result of interrupts,
when the running process exits, or when the operating system decides to
swap out a process due to scheduling decisions. Although necessary,
context switching can be computationally expensive, so operating systems
are designed to minimize the need for context switching.
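The save/restore step can be modeled with dictionaries standing in for the CPU's registers and each process's PCB (the register names and values here are hypothetical):

```python
# Toy context switch: dictionaries stand in for the CPU registers and
# the Process Control Blocks; field names and values are made up.
cpu = {"pc": 0x40, "sp": 0x7FF0}                    # P1 currently running
pcbs = {"P1": {}, "P2": {"pc": 0x80, "sp": 0x6FF0}}

def context_switch(old, new):
    pcbs[old] = dict(cpu)   # save the running process's state in its PCB
    cpu.update(pcbs[new])   # load the next process's saved state
    return new

context_switch("P1", "P2")
print(cpu["pc"] == 0x80, pcbs["P1"]["pc"] == 0x40)  # True True
```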
These mechanisms together enable
an operating system to handle multiple processes efficiently, managing their
execution and resource needs while ensuring that the CPU is used effectively.