Title: Chapter 19: Real-Time Systems
1 Chapter 19: Real-Time Systems
2 Chapter 19: Real-Time Systems
- Overview and Introduction
- System Characteristics
- Features of Real-Time Systems
- Implementing Real-Time Operating Systems
- Real-Time CPU Scheduling
3 Objectives
- To explain the timing requirements of real-time systems
- To distinguish between hard and soft real-time systems
- To discuss the defining characteristics of real-time systems
- To describe scheduling algorithms for hard real-time systems
4 Overview of Real-Time Systems
- The differences between real-time computing systems and general-purpose computing systems are very profound.
- We will examine many of these differences in the upcoming slides.
- A real-time computing system is one that requires that correct results be produced within specified deadline periods. (Deadline is the key word.)
- Results produced after the deadline has elapsed may be of absolutely no value; a missed deadline may mean loss of life or an aircraft crash, while other failures are not quite as disastrous if the real-time system is not quite as responsive.
- Real-time systems run on a wide range of computer hardware and are used in many different kinds of applications.
  - Some real-time systems are embedded in aircraft instrumentation; others are in your microwave, cell phone, cruise control, and a host of other applications.
  - They are often part of a larger system.
  - Oftentimes their presence is not obvious to a user.
5 Definitions
- A real-time system requires results produced within a specified deadline period.
- A hard real-time system has stringent requirements, guaranteeing that critical real-time tasks be completed within their deadlines.
  - E.g., safety-critical systems, health systems, etc.
- A soft real-time system is less restrictive and guarantees simply that a critical real-time task will receive priority over other tasks.
  - Further, these tasks retain that priority until they complete.
- An embedded system is a computing device that is part of a larger system (e.g., an automobile or an airliner).
- A safety-critical system is a real-time system with catastrophic results in case of failure.
- Again, a hard real-time system guarantees that real-time tasks be completed within their required deadlines, while a soft real-time system provides priority of real-time tasks over non-real-time tasks.
6 System Characteristics
- We will look at both soft and hard real-time operating systems.
- Real-time systems typically exhibit the following characteristics:
  - Single purpose
  - Small size
  - Inexpensively mass-produced
  - Specific timing requirements
- Many other rather unique features spring from these characteristics.
- These are the big four defining characteristics; let's look at each of them.
7 Characteristics - 1
- Single purpose
  - A single purpose is typical: controlling anti-lock brakes, a toaster, a cell phone.
  - This makes the operating system simple too, as many characteristics integral to general-purpose operating systems are not available or needed.
- Size
  - Often found in severely cramped spaces, but sufficient for operation.
  - Examples: wristwatches, cell phones, toys.
  - Thus, CPU processing power is minimal.
  - The amount of primary memory is also minimal.
  - Architecture: compare 32/64-bit architectures with 8/16-bit processors.
  - Architecture: compare the several gigabytes of memory on a desktop with the far smaller memories typical of embedded systems.
8 Characteristics - 2
- Cost
  - Typically mass-produced, as in microwave ovens and thermostats.
  - Thus, real-time microprocessors are often inexpensive.
- The organization of real-time systems is designed to minimize cost.
  - To eliminate bus architectures, embedded controllers are often physically organized as a system-on-a-chip, which has all necessary interconnections.
  - The chip includes memory, cache, an MMU (for possible address translation), and any necessary peripheral ports, all in a single integrated circuit.
  - Such an organization is typically much less expensive than typical bus-oriented architectures.
9 Bus-Oriented System
10 Characteristics - 3
- Timing
  - This is the feature that impacts almost everything else and makes real-time systems what they really are!
  - Both hard and soft real-time systems have timing requirements.
- So, we will need to develop real-time scheduling algorithms that give the highest-priority processes preference for the CPU.
- Schedulers absolutely must ensure that the priority of real-time tasks does not degrade over time.
- And, as one can easily imagine, we must minimize the response time to interrupts: recognizing the interrupt, saving context, transferring control to the handler, etc.
11 Features of Real-Time Kernels
- Most real-time systems do not provide features found in desktop systems; they are simply not needed.
- They do not need (in general) support for:
  - A variety of peripheral devices: graphical displays, CD and DVD drives
  - Protection and security mechanisms
  - Support for multiple users
- Note: Windows XP has 40,000,000 lines of code, while a typical real-time operating system is usually written in thousands of lines of source code!
- Reasons include:
  - Real-time systems are typically single-purpose.
  - Real-time systems often do not require interfacing with a user.
  - General-purpose features often require more substantial hardware than that found in a real-time system.
12 Memory Mapping Schemes
- (1) Real-addressing mode, where programs generate actual (physical) addresses.
  - But there is no memory protection between processes.
  - We might also need to specify exactly where in memory the program is to be loaded.
  - But the speed is very difficult to beat! No time is spent in address translation.
  - Very commonly found in real-time systems with hard real-time constraints.
  - Some real-time operating systems running on microprocessors that contain an MMU actually disable the MMU to gain the performance benefit of referencing physical addresses directly.
- (2) Relocation-register mode.
  - In this scheme, a relocation register R is set to the process load point.
  - A physical address P is formed by adding the logical address L to the contents of the relocation register, i.e., P = L + R (see the sketch after this slide).
  - But here again, there is no protection between processes.
- (3) Implementing full virtual memory.
  - Here we have page tables and translation look-aside buffers.
  - While this strategy does indeed provide memory protection, it is costly.
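The slides describe the relocation-register scheme only abstractly. As a minimal sketch (not from the slides; the register value and address are made-up numbers), the translation simply adds the logical address L to the relocation register R to form the physical address P:

    #include <stdio.h>

    /* Relocation-register translation as described on the slide:
     * the physical address P is formed by adding the logical address L
     * to the contents of the relocation register R.  No protection check
     * is made, matching the scheme above.                                */
    static unsigned int translate(unsigned int logical, unsigned int reloc_reg)
    {
        return logical + reloc_reg;            /* P = L + R */
    }

    int main(void)
    {
        unsigned int R = 0x4000;               /* hypothetical process load point */
        unsigned int L = 0x0154;               /* hypothetical logical address    */

        printf("physical address = 0x%X\n", translate(L, R));   /* prints 0x4154 */
        return 0;
    }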
13 Address Translation
Diagram of the previous three memory access techniques.
14 Implementing Real-Time Operating Systems
- In general, real-time operating systems must provide the following features:
  - (1) Preemptive, priority-based scheduling
  - (2) Preemptive kernels
  - (3) Minimal latency
- Let's look at these three important characteristics.
15 1. Priority-Based Scheduling
- A real-time OS must respond immediately to a real-time process as soon as that process requires the CPU.
- So, there must be a scheduler that supports a priority-based algorithm with preemption (a small POSIX-style sketch follows below).
- In priority-based scheduling, processes are assigned a priority based on importance; more important processes are assigned higher priorities.
- So, if the scheduler supports preemption, a process running on a CPU can be preempted if a higher-priority process arrives.
- Recall: Windows XP has 32 priority levels, with the highest levels (priority values 16 to 31) used for real-time processes.
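The slides do not tie this to a particular API, but as one hedged illustration, on a POSIX system a process can request a fixed real-time priority under the preemptive SCHED_FIFO policy. The priority value 50 below is an arbitrary example, and the call typically requires privileges:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp;

        /* Ask for a fixed real-time priority under the preemptive,
         * priority-based SCHED_FIFO policy (value 50 is arbitrary). */
        sp.sched_priority = 50;

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");      /* usually requires privileges */
            return 1;
        }

        /* From here on, the process preempts lower-priority tasks
         * whenever it becomes runnable.                             */
        return 0;
    }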
16 2. Preemptive Kernels
- The problem with non-preemptive kernels is that a process running in the kernel doesn't have to give up the CPU. This can be disastrous in a real-time system.
- Preemptive kernels allow the preemption of a task running in kernel mode!
- But designing preemptive kernels is complex.
  - Many common modern-day applications (spreadsheets, browsers, ...) simply do not require preemptive kernels, so why take on the complexity of such kernels?
  - Thus most commercial desktop operating systems, such as XP, are not preemptive.
- However, non-preemptive kernels are not acceptable for hard real-time systems. We need preemption!
17 2. Preemptive Kernels - 2
- One approach:
  - Insert preemption points within long-duration system calls. If a process is preempted at such a point and a context switch takes place, then when the process resumes after the high-priority process runs, it does so at the point of preemption.
  - It is important to note that preemption points occur only at carefully architected points in the kernel, where kernel structures are not undergoing any kind of modification.
- A second approach for supporting preemptive kernels:
  - Implement synchronization mechanisms that protect kernel data structures from modification by the high-priority process.
18 3. Minimizing Latency
- Event latency is the amount of time from when an event occurs to when it is serviced.
- Here we consider the event-driven nature of a real-time system, and we recognize that the application is usually waiting for an event.
- The system must respond to and service the event as quickly as possible!
19 Minimizing Latency (continued)
- But all events are not created equal.
- Different events have different latency requirements: some have a very short tolerable latency, others longer.
- Example: an embedded system controlling something like a toaster, or some kinds of radar, might tolerate a latency of several seconds.
- In real-time systems we've got two types of latencies to consider:
  - Interrupt latency, and
  - Dispatch latency.
20 Interrupt Latency
- Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the start of the servicing routine (see figure below).
- On getting an interrupt, the CPU must finish the current instruction, determine the type of interrupt, save the state of the current process (a context switch is likely), and then jump to the interrupt service routine.
- We clearly need to minimize interrupt latency and service the interrupt immediately.
21 Dispatch Latency
- Dispatch latency is the amount of time required for the scheduler to stop one process and start another.
- Real-time operating systems must minimize this latency, and the most effective approach for keeping dispatch latency at a minimum is a preemptive kernel.
- See figure. The conflict phase has two parts:
  - Preemption of any process running in the kernel, and
  - Release by low-priority processes of resources needed by a high-priority process.
by a high-priority process. - (e.g. dispatch latency w/preemption disabled in
Solaris is 100 msec with preemption enabled,
preemption is reduced to
22 Real-Time CPU Scheduling
- We must change gears to address the reality that so far we have only ensured that real-time systems provide priority processing for critical processes.
- But hard real-time systems need much stronger guarantees!
  - Tasks MUST be serviced by their deadlines; servicing a task after its deadline has expired is the same as no service at all!
- So let's consider scheduling considerations for hard real-time systems.
23 Real-Time CPU Scheduling - more
- To understand what's going on, let's consider the following assumptions and definitions.
- First, we will assume that real-time processes are periodic; this means that they need the CPU at constant intervals.
- Each periodic process has a fixed processing time t once the CPU begins working on it,
- Each has a deadline d by which it must be serviced by the CPU, and
- Each has a period p.
- The relationship of the processing time, the deadline, and the period can be expressed as 0 ≤ t ≤ d ≤ p.
- The rate of a periodic task is 1/p.
- See the next figure for these relationships.
24 Real-Time CPU Scheduling
- Periodic processes require the CPU at specified intervals (periods).
  - p is the duration of the period
  - d is the deadline by which the process must be serviced
  - t is the processing time
- Clearly, we need the processing time t to be no greater than the deadline (the work must be completed before the deadline expires). A minimal sketch of this task model follows.
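As a minimal sketch of the task model just described (the struct and field names are mine, not the book's), a periodic task carries t, d, and p, satisfies 0 ≤ t ≤ d ≤ p, and has rate 1/p. The values used are those of process P1 from the example coming up (t = 20, d = p = 50):

    #include <assert.h>
    #include <stdio.h>

    /* A periodic real-time task as defined above: processing time t,
     * deadline d, and period p, with 0 <= t <= d <= p.                */
    struct periodic_task {
        double t;   /* processing time per period */
        double d;   /* deadline                   */
        double p;   /* period                     */
    };

    int main(void)
    {
        /* Values of P1 from the example that follows (t = 20, d = p = 50). */
        struct periodic_task p1 = { .t = 20.0, .d = 50.0, .p = 50.0 };

        assert(0.0 <= p1.t && p1.t <= p1.d && p1.d <= p1.p);
        printf("rate of P1 = %.3f per time unit\n", 1.0 / p1.p);   /* 0.020 */
        return 0;
    }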
Let's look at scheduling algorithms that address the deadline requirements of hard real-time systems.
25 Preface to Scheduling Algorithms
- We will look at:
  - 1. Rate-monotonic scheduling, and
  - 2. Earliest-deadline-first (EDF) scheduling.
- The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption.
  - If a lower-priority process is running when a higher-priority process needs to run, the scheduler will preempt the lower-priority process.
- When a periodic task enters the system, it is assigned a priority based on its period: the shorter the period, the higher the priority, and vice versa.
  - This gives higher priority to the process that needs the CPU more often.
- The algorithm also assumes that the processing time of a periodic process is the same for each CPU burst; every time a process acquires the CPU, the duration of its CPU burst is the same.
- As the example in the book suggests, this algorithm can often schedule real-time tasks so that they meet their deadlines while still leaving the CPU with available cycles. We shall see.
- Now let's consider two cases: first we assign P2 a higher priority than P1, and then P1 a higher priority than P2.
26 Scheduling of tasks when P2 has a higher priority than P1
Without the rate-monotonic scheduler: we first assume that P2 has a higher priority than P1. We see the execution scenario below. P2 starts executing first and completes at time 35. Then P1 starts; it completes its burst at time 55, but the first deadline for P1 was at time 50, so this scheduling causes P1 to miss its deadline! Now, suppose we instead use the rate-monotonic scheduling approach and assign P1 a higher priority than P2 (since the period of P1 is shorter than that of P2).
27 Rate Monotonic Scheduling
- Continuing (P1's period is shorter than P2's):
- P1 starts and completes its CPU burst at time 20, meeting its first deadline.
- P2 starts and runs until time 50, at which point it is preempted by P1, even though it still has 5 msec remaining in its first CPU burst.
- P1 finishes its CPU burst at time 70 (meeting its deadline); the scheduler resumes P2, which completes its burst at time 75, also meeting its first deadline.
- The system is then idle until time 100, when P1 is scheduled again.
- Rate-monotonic scheduling is considered optimal in the sense that if a set of processes cannot be scheduled by this algorithm, it cannot be scheduled by any other algorithm that assigns static priorities. (A quick utilization check for this example appears below.)
- The book then goes through another example and arrives at an important conclusion: rate-monotonic scheduling cannot guarantee that processes can be scheduled so that they will always meet their deadlines.
- So, enter the earliest-deadline-first scheduler (EDF).
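As a quick sanity check on the P1/P2 example above (P1: burst 20, period 50; P2: burst 35, period 100), the sketch below assigns rate-monotonic priorities by period and totals CPU utilization. The 75 percent result matches the trace on this slide, which leaves the CPU idle from time 75 to 100:

    #include <stdio.h>

    struct task { const char *name; double burst; double period; };

    int main(void)
    {
        /* P1 and P2 as used in the rate-monotonic example above. */
        struct task tasks[] = {
            { "P1", 20.0,  50.0 },     /* shorter period -> higher RM priority */
            { "P2", 35.0, 100.0 },
        };
        double utilization = 0.0;

        for (int i = 0; i < 2; i++) {
            double share = tasks[i].burst / tasks[i].period;
            printf("%s: rate = %.3f, CPU share = %.2f\n",
                   tasks[i].name, 1.0 / tasks[i].period, share);
            utilization += share;
        }

        /* 20/50 + 35/100 = 0.40 + 0.35 = 0.75, leaving 25% of the CPU idle. */
        printf("total utilization = %.2f\n", utilization);
        return 0;
    }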
28 Earliest Deadline First Scheduling
- This scheduling algorithm dynamically assigns priorities according to deadline: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
- In this algorithm, when a process becomes runnable, it must announce its deadline requirements to the system.
- Priorities may have to be dynamically adjusted to reflect the deadlines of newly runnable processes.
- This differs from rate-monotonic scheduling, where priorities are fixed.
29 Earliest Deadline First Scheduling (continued)
- The EDF scheduling algorithm does not require that processes be periodic, nor must a process require a constant amount of CPU time per burst.
- The only requirement is that a process announce its deadline to the scheduler when it becomes runnable (a minimal selection sketch follows below).
- The attraction of EDF scheduling is that it is theoretically optimal: it can schedule processes so that each process meets its deadline requirements and CPU utilization is 100 percent.
- In practice, as it turns out, it is impossible to achieve this level of CPU utilization because of the cost of context switching between processes and interrupt handling.
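As a minimal sketch of the EDF selection rule (not code from the book; the task names and deadlines are made up), the scheduler simply dispatches the runnable task whose announced absolute deadline is earliest:

    #include <stdio.h>

    struct rt_task { const char *name; double abs_deadline; int runnable; };

    /* EDF selection rule: among runnable tasks, dispatch the one whose
     * announced absolute deadline is earliest.                          */
    static const struct rt_task *edf_pick(const struct rt_task *tasks, int n)
    {
        const struct rt_task *best = NULL;

        for (int i = 0; i < n; i++) {
            if (!tasks[i].runnable)
                continue;
            if (best == NULL || tasks[i].abs_deadline < best->abs_deadline)
                best = &tasks[i];
        }
        return best;                     /* NULL if nothing is runnable */
    }

    int main(void)
    {
        /* Hypothetical tasks that have announced their deadlines. */
        struct rt_task tasks[] = {
            { "A", 80.0, 1 },
            { "B", 50.0, 1 },
            { "C", 65.0, 0 },            /* not currently runnable */
        };

        const struct rt_task *next = edf_pick(tasks, 3);
        if (next != NULL)
            printf("dispatch %s (deadline %.0f)\n", next->name, next->abs_deadline);
        return 0;
    }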
30 End of Chapter 19