% kuenzlij
% !TeX root = ../main.tex
\ownsection{Embedded Operating Systems (5)}
Why is a desktop OS not suited?
\begin{compactitem}
\item Monolithic kernel is too feature rich.
@@ -20,9 +19,9 @@ Advantages and properties of embedded OS
\item Device drivers handled by tasks instead of hidden drivers $\implies$ everything goes through scheduler\\
\includegraphics[width=1\linewidth]{RTOS_drivers}
\item Protection mechanisms not always necessary (Processes tested and considered reliable). $\implies$ Tasks can do their own I/O, including interrupts
\end{compactitem}
\ownsection{Real-Time OS (5-6)}
\begin{compactitem}
\item The timing behaviour of the OS must be predictable
\subitem For all services upper bound on execution time
@@ -31,18 +30,18 @@ Advantages and properties of embedded OS
\subitem OS may have to be aware of deadlines unless scheduling is done offline
\subitem OS must provide precise time services
\item OS must be fast
\end{compactitem}
\ownsubsection{Main Functionality of RTOS Kernel (5-11)}
\begin{compactitem}
\item \textbf{Task management:} Execution of quasi-parallel tasks on a processor using processes or threads
\item \textbf{CPU scheduling:} guaranteeing deadlines, minimizing waiting times, fairness in granting resources
\item \textbf{Task synchronization:} critical sections, semaphores, monitors, mutual exclusion
\item \textbf{Inter-process communication:} buffering
\item \textbf{Real-time clock:} as an internal time reference
\end{compactitem}
\ownsubsection{Task states (5-12)}
\begin{center}
\includegraphics[width=0.9\columnwidth]{OS1}
\end{center}
@@ -53,11 +52,11 @@ Advantages and properties of embedded OS
\item \textbf{Idle:} A periodic job enters this state when it completes its execution and has to wait for the beginning of the next period.
\end{compactitem}
~
\ownsubsection{Threads (5-16)}
A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. The \textcolor{red}{Thread Control Block (TCB)} stores information needed to manage and schedule a thread.
\ownsubsection{Communication Mechanisms}
\textbf{Problem:} The use of shared resources for implementing message passing schemes may cause priority inversion and blocking.
\newline
@@ -72,9 +71,9 @@ Advantages and properties of embedded OS
\item Usage of \textbf{Mailbox} (shared memory buffer, FIFO-queue with fixed cap.) $\implies$ Tasks do not have to wait for each other
\item Better suited for real-time systems than synchronous communication
\item Problem: Blocking behaviour if channel is full or empty
\end{compactitem}
\ownsubsection{Classes of embedded OS (5-18)}
\textbf{Class 1: Fast Proprietary Kernels}: For hard real-time systems, these kernels are questionable, because they are designed to be fast, rather than to be predictable in every respect. ~\newline
\textbf{Class 2: Extensions to Standard OS}: Real-time extensions to standard OS. Attempt to exploit comfortable mainstream OS. RT-kernel running all RT-tasks, \textbf{standard-OS executed as one task.} \newline
@@ -84,3 +83,16 @@ Con: RT-tasks cannot use standard-OS services
\includegraphics[width=0.9\columnwidth]{OS2}
\end{center}
\textbf{Class 3: Research Systems}: Research systems try to avoid limitations of existing real-time and embedded operating systems.\newline
Typical research questions:
\begin{compactitem}
\item low overhead memory protection
\item temporal protection of computing resources
\item RTOS for on‐chip multiprocessors
\item quality of service (QoS) control (besides real‐time constraints)
\item formally verified kernel properties
\end{compactitem}
%\ownsubsection{FreeRTOS (5-25)}
% !TeX root = ../main.tex
\ownsection{Aperiodic and Periodic Scheduling (6)}
\ownsubsection{Real-Time Systems (6-3)}
\begin{compactitem}
\item A real-time task is said to be \textcolor{red}{hard}, if missing its deadline may cause catastrophic consequences on the environment under control.
\item A real-time task is called \textcolor{red}{soft}, if meeting its deadline is desirable for performance reasons, but missing its deadline is not catastrophic.
\end{compactitem}
\ownsubsection{Schedule (6-4)}
A \textcolor{red}{schedule} is an assignment of tasks $J=\{J_1,J_2,...\}$ to
the processor, such that each task is executed until completion. It can be
defined as a function $$\sigma: \mathbb{R} \rightarrow \mathbb{N}, t \mapsto
\sigma(t)$$ where $\sigma(t)$ denotes the task which is executed at time $t$.
If $\sigma(t)=0$, the processor is called \textcolor{red}{idle}.\newline
If $\sigma$ changes its value, the processor performs a \textcolor{red}{context switch}. Each interval in which $\sigma$ is constant is called a \textcolor{red}{time slice}. \\
A \textcolor{red}{preemptive schedule} is a schedule in which the running task can be suspended at any time. \\
A schedule is said to be \textcolor{red}{feasible}, if all tasks can be completed according to a set of specified constraints. \\
A set of tasks is \textcolor{red}{schedulable} if there exists an algorithm that can produce a feasible schedule. \\
\textcolor{red}{Arrival time} $a_i$ or \textcolor{red}{release time} $r_i$ is the time at which a task becomes ready for execution. \\
\textcolor{red}{Computation time} $C_i$ is the time necessary for the processor to execute the task without interruption. \\
\textcolor{red}{Deadline} $d_i$ is the time at which a task should be completed. \\
\textcolor{red}{Start time} $s_i$ is the time at which a task starts its execution. \\
\textcolor{red}{Finishing time} $f_i$ is the time at which a task finishes its execution.
\ownsubsection{Metrics (6-6) and (6-13)}
\textbf{For a task}:
\begin{compactitem}
\item \textbf{Response time}: $R_i = f_i - r_i$
\item \textbf{Interference}: $I_i = s_i - r_i$
\item \textbf{Lateness} (positive if too late): $L_i = f_i-d_i$
\item \textbf{Tardiness}/Exceeding time: $E_i=\max\{0,L_i\}$
\item \textbf{Laxity/Slack time} (maximum time a task can be delayed):\\
$X_i=d_i-a_i-C_i$
\end{compactitem}
\textbf{For a schedule}:
\begin{compactitem}
\item Avg. response time: $\overline{t_R} = \frac{1}{n}\sum_{i=1}^{n}(f_i-r_i)$
\item Total completion time: $t_c = \max_{i}\{f_i\} - \min_{i}\{r_i\}$
\item Weighted sum of response time:\\
$t_w = \Big(\sum_{i=1}^{n}w_i(f_i-r_i)\Big)\cdot\Big(\sum_{i=1}^{n}w_i\Big)^{-1}$
\item Maximum Lateness: $L_{max} = \max_{i}\{f_i-d_i\}$
\item Number of late tasks: $N_{late} = \sum_{i=1}^{n}\mathrm{miss}(f_i)$, where $\mathrm{miss}(f_i)=0$ if $f_i \leq d_i$ and $1$ otherwise
\end{compactitem}
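These metrics follow directly from the definitions above. A small sketch that computes them; the list names \texttt{r}, \texttt{f}, \texttt{d}, \texttt{w} (release time, finishing time, deadline, weight per task) are illustrative, not from the slides:
\begin{python}
# Sketch: compute the schedule metrics from release times r,
# finishing times f, deadlines d and weights w (one entry per task).
def schedule_metrics(r, f, d, w):
    n = len(r)
    avg_response = sum(fi - ri for fi, ri in zip(f, r)) / n
    total_completion = max(f) - min(r)
    weighted_response = sum(wi * (fi - ri)
                            for wi, fi, ri in zip(w, f, r)) / sum(w)
    max_lateness = max(fi - di for fi, di in zip(f, d))
    n_late = sum(1 for fi, di in zip(f, d) if fi > di)
    return (avg_response, total_completion, weighted_response,
            max_lateness, n_late)
\end{python}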
\ownsubsection{Classification of Scheduling Algorithms (6-11)}
\begin{compactitem}
\item \textcolor{red}{Preemptive Algorithms:} The running task can be interrupted at any time to assign the processor to another active task
\item \textcolor{red}{Non-preemptive Algorithms:} A task, once started, is executed by the processor until completion
\item \textcolor{red}{Static Algorithms:} Scheduling decisions are based on fixed parameters, assigned to tasks before their activation (constant priorities)
\item \textcolor{red}{Dynamic Algorithms:} Scheduling decisions based on dynamic parameters that may vary during system execution
\item An algorithm is called \textcolor{red}{optimal}, if it minimizes some given cost function defined over the task set
\item An algorithm is called \textcolor{red}{heuristic}, if it tends to but does not guarantee to find an optimal schedule
\item \textcolor{red}{Acceptance test}: when a new task arrives, check whether the schedule remains feasible if the task is accepted
\end{compactitem}
\ownsection{Scheduling Algorithms for Aperiodic Tasks (6-17)}
\begin{tabularx}{\columnwidth}{|p{1.7cm} | p{2.7cm} | X|}
\hline
\textbf{Aperiodic \newline Tasks} & Equal arrival times \newline non-preemptive & Arbitrary arrival times \newline preemptive \\
\hline
Independent \newline tasks \vspace{0.2cm} & EDD \newline (Jackson's Rule) & EDF \newline (Horn's Rule) \\
Dependent \newline tasks & (LDF) \newline (Lawler's Rule) & EDF* \newline (Chetto's Rule) \\
\hline
\end{tabularx}
~\newline
\ownsubsection{Earliest Deadline Due (Jackson's Rule) (6-18)}
\textbf{Algorithm: }Task with earliest deadline is processed first (arrival times are equal for all tasks, scheduling is non-preemptive).\newline
\textbf{Jackson's Rule: }Given a set of $n$ independent tasks. Processing in order of non-decreasing deadlines is optimal with respect to minimizing the maximum lateness. \newline
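A minimal EDD sketch under the stated assumptions (all tasks arrive at $t=0$, non-preemptive); tasks are assumed to be given as $(C_i, d_i)$ pairs:
\begin{python}
# EDD sketch: all tasks arrive at t = 0 and are given as (C_i, d_i) pairs.
# Processing in order of non-decreasing deadlines minimizes max. lateness.
def edd(tasks):
    order = sorted(tasks, key=lambda task: task[1])  # sort by deadline d_i
    t, max_lateness = 0, float("-inf")
    for C, d in order:
        t += C                                       # finishing time f_i
        max_lateness = max(max_lateness, t - d)      # lateness L_i = f_i - d_i
    return order, max_lateness
\end{python}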
\ownsubsection{Latest Deadline First (Lawler's Rule)}
\textbf{Optimization goal:} Minimize the maximum lateness\\
\textbf{Assumptions on the task set:}
\begin{itemize}
\item tasks with precedence relations
\item synchronous arrival times
\end{itemize}
\textbf{Algorithm:}
\begin{itemize}
\item proceed from tail to head
\item among the tasks without successors or whose successors have all been scheduled, select the task with the latest deadline to be scheduled last
\item repeat the procedure until all tasks in the set are selected
\end{itemize}
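A sketch of Lawler's procedure; the representation (deadlines \texttt{d[i]} and successor lists \texttt{succ[i]} per task) is chosen for illustration:
\begin{python}
# LDF sketch: deadlines d[i] and successor lists succ[i] for tasks 0..n-1.
# Among the tasks whose successors are all scheduled, put the one with the
# latest deadline at the end of the schedule; repeat.
def ldf(d, succ):
    n = len(d)
    scheduled, tail = set(), []          # tail is built from last to first
    while len(scheduled) < n:
        candidates = [i for i in range(n) if i not in scheduled
                      and all(s in scheduled for s in succ[i])]
        last = max(candidates, key=lambda i: d[i])
        tail.append(last)
        scheduled.add(last)
    return list(reversed(tail))          # execution order, head to tail
\end{python}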
\ownsubsection{Earliest Deadline First (Horn's Rule) (6-22)}
\textbf{Algorithm:} Task with earliest deadline is processed first. If new task with earlier deadline arrives, current task is preempted. \newline
\textbf{Horn's Rule: }Given a set of $n$ independent tasks with arbitrary arrival times. An algorithm that at any instant executes the task with the earliest absolute deadline among the ready tasks is optimal with respect to minimizing maximum lateness. \newline
\textbf{Acceptance test:} \\
worst-case finishing time of task $i$: $f_i = t + \sum_{k=1}^i c_k(t)$ \\
EDF guarantee condition: $\forall i = 1,\dots ,n \quad t + \sum_{k=1}^i c_k(t)\leq d_i$
\begin{python}
# EDF acceptance test: accept J_new only if the schedule stays feasible.
# Tasks are given as (c_i(t), d_i) pairs: remaining computation time at
# the current time t and absolute deadline.
def EDF_guarantee(J, J_new, t):
    J_all = sorted(J + [J_new], key=lambda task: task[1])  # order by deadline
    f = t                                 # f_0 = t
    for c, d in J_all:
        f = f + c                         # f_i = f_{i-1} + c_i(t)
        if f > d:
            return "UNFEASIBLE"
    return "FEASIBLE"
\end{python}
A new task is accepted if the schedule remains feasible. \newline
\ownsubsection{EDF* (6-27)}
Determines a feasible schedule for tasks with precedence constraints if one exists. \newline
\textbf{Algorithm: }Modify release times and deadlines. Then use EDF. \newline
\textbf{Modification of release times:}\newline
Task must start not earlier than its release time and not earlier than the minimum finishing time of its predecessor.
\begin{compactenum}
\item Start at initial nodes = sources of precedence graph and set $r^*_i=r_i$
\item Select a task $j$ with all its predecessors having already been modified.
\item Set $r^*_j = \max\{r_j,\max_{i}\{r^*_i+C_i: J_i \rightarrow J_j\}\}$
\item Return to 2
\end{compactenum}
\textbf{Modification of deadlines:} \newline
Task must finish execution within its deadline and not later than the maximum start time of its successor.
\begin{compactenum}
\item Start at terminal nodes = sinks of precedence graph and set $d^*_i=d_i$
\item Select a task $i$ with all successors having already been modified.
\item Set $d^*_i = \min\{d_i, \min_{j}\{d^*_j-C_j: J_i \rightarrow J_j\}\}$
\item Return to 2
\end{compactenum}
\textcolor{red}{\textbf{$\to$ For response time etc. calculations use the original release times and deadlines}}
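Both modification passes can be sketched compactly; the predecessor/successor lists, the topological order \texttt{topo} and the variable names are illustrative, not from the slides:
\begin{python}
# EDF* sketch: modify release times and deadlines along the precedence DAG.
# r, d, C are lists; pred[i]/succ[i] are predecessors/successors of task i;
# topo is a topological order of the tasks (sources first).
def edf_star_params(r, d, C, pred, succ, topo):
    r_star, d_star = list(r), list(d)
    for j in topo:                        # forward pass: release times
        if pred[j]:
            r_star[j] = max(r[j], max(r_star[i] + C[i] for i in pred[j]))
    for i in reversed(topo):              # backward pass: deadlines
        if succ[i]:
            d_star[i] = min(d[i], min(d_star[j] - C[j] for j in succ[i]))
    return r_star, d_star                 # then schedule with plain EDF
\end{python}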
\ownsection{Scheduling Algorithms for Periodic Tasks (6-32)}
\begin{tabularx}{\columnwidth}{|p{1.7cm} | p{2.7cm} | X|}
\hline
\textbf{Periodic \newline Tasks} & Deadline $=$ Period & Deadline $<$ Period \\
\hline
Static \newline priority \vspace{0.2cm} & RM \newline (rate-monotonic) & DM \newline (deadline-monotonic) \\
Dynamic \newline priority & EDF & (EDF*) \\
\hline
\end{tabularx}
\\
\begin{definition}{Model of Periodic Tasks}
\begin{compactenum}
\item $\Gamma$: denotes a set of periodic tasks
\item $\tau_i$: denotes a periodic task
\item $\tau_{i,j}$: denotes instance $j$ of task $\tau_i$
\item $\Phi_i$, $T_i$, $D_i$, $C_i$: phase (release time of the first instance), period, relative deadline and worst-case computation time of task $\tau_i$
\item release time and absolute deadline of instance $j$: $r_{i,j}=\Phi_i+(j-1)T_i$, \quad $d_{i,j}=r_{i,j}+D_i$
\end{compactenum}
\end{definition}
\ownsubsection{Rate Monotonic Scheduling (4-23)}
Fixed / static priorities, independent, preemptive, deadlines equal the periods, $D_i=T_i$. Tasks can't suspend themselves, kernel overhead is assumed 0.
\textbf{Algorithm:} Tasks with the higher request rates (=shorter periods) have higher priorities and interrupt tasks with lower priority. RM is optimal w.r.t. schedulability.
\textbf{Schedulability condition} (sufficient but not necessary):
$$U=\sum_{i=1}^{n} \frac{C_i}{T_i} \leq n(2^{1/n}-1) \implies \text{schedulability}$$
where $U$ is the \textcolor{red}{processor utilization factor} and $\lim\limits_{n \to \infty} n(2^{1/n}-1) = \ln 2 \approx 0.69$.
\includegraphics[width=0.5\linewidth]{RM_sufficiency}
A necessary and sufficient test: simulate the schedule at the critical instant, or use the response-time analysis from the DM section.
\textbf{Critical Instant}: The time at which the release of a task produces the largest response time. For RM it occurs when the task is released simultaneously with all higher-priority tasks.
$\implies$ If there are no phase shifts, simulate the schedule from the beginning (until all deadlines have passed). If no deadline is missed, the schedule is feasible.
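The sufficient utilization test written out as a small sketch, with tasks assumed to be given as $(C_i, T_i)$ pairs:
\begin{python}
# RM sufficient (but not necessary) schedulability test.
def rm_sufficient(tasks):                 # tasks: list of (C_i, T_i) pairs
    n = len(tasks)
    U = sum(C / T for C, T in tasks)      # processor utilization factor
    return U <= n * (2 ** (1 / n) - 1)    # bound tends to ln(2) ~ 0.69
\end{python}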
\ownsubsection{Deadline Monotonic Scheduling (4-35)}
Fixed / static priorities, independent, preemptive, deadlines can be smaller than periods, $C_i\leq D_i\leq T_i$.
\textbf{Algorithm}: Tasks with smaller relative deadlines have higher priorities and interrupt tasks with lower priority.
\textbf{Schedulability analysis}: (sufficient but not necessary)
$$\sum_{i=1}^{n} \frac{C_i}{D_i} \leq n(2^{1/n}-1) \implies \text{schedulability}$$
\textbf{Schedulability condition (sufficient and necessary)}: The worst case is the critical instant. Assume that tasks are ordered according to their relative deadlines $D_i$; then the \textcolor{red}{worst-case interference} for task $i$ is
$$I_i = \sum_{j=1}^{i-1} \Big\lceil\frac{t}{T_j}\Big\rceil C_j$$
The \textcolor{red}{longest response time} $R_i$ of a periodic task $i$ occurs at the critical instant, $R_i=C_i+I_i$. Hence, compute in ascending order the smallest $R_i, ~i=1,...,n$ that satisfy $$R_i=C_i+\sum_{j=1}^{i-1}\Big\lceil\frac{R_i}{T_j}\Big\rceil C_j$$ and check whether $$\forall i=1,...,n: R_i\leq D_i$$
This condition is both necessary and sufficient.
\begin{python}
# DM acceptance test: tasks ordered by increasing relative deadline,
# each given as (C_i, D_i, T_i). Iterate R_i = C_i + I_i to a fixed point.
from math import ceil

def DM_guarantee(tasks):
    for i, (C_i, D_i, T_i) in enumerate(tasks):
        I = 0
        while True:
            R = I + C_i
            if R > D_i:
                return "UNSCHEDULABLE"
            I = sum(ceil(R / T_j) * C_j for (C_j, D_j, T_j) in tasks[:i])
            if I + C_i <= R:      # fixed point reached: R_i = C_i + I_i
                break
    return "SCHEDULABLE"
\end{python}
\ownsubsection{EDF Scheduling (4-42)}
Dynamic priority assignment, intrinsically preemptive, deadlines can be smaller than periods: $D_i \leq T_i$
\textbf{Algorithm:} The currently executing task is preempted whenever another periodic instance with earlier deadline becomes active
EDF is optimal w.r.t. schedulability (no set of periodic tasks can be scheduled if it can't be scheduled by EDF).
\textbf{Schedulability condition}:
\begin{itemize}
\item if $T_i=D_i$: $$U=\sum_{i=1}^{n}\frac{C_i}{T_i}\leq1$$ is both necessary and sufficient.
\item if $T_i\neq D_i$:
$$U=\sum_{i=1}^{n}\frac{C_i}{D_i}\leq1$$ is only sufficient.
\end{itemize}
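Both conditions in one small sketch, with tasks assumed to be given as $(C_i, D_i, T_i)$ triples:
\begin{python}
# EDF utilization tests for periodic tasks given as (C_i, D_i, T_i) triples.
def edf_test(tasks):
    if all(D == T for _, D, T in tasks):
        return sum(C / T for C, _, T in tasks) <= 1   # necessary + sufficient
    return sum(C / D for C, D, _ in tasks) <= 1       # only sufficient
\end{python}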
@@ -70,7 +70,7 @@ When $J_2 $/$J_3$ wants to enter critical section $p_3$ becomes higher.
\item and other methods based on \textcolor{red}{resource access protocols} (PCP,SRP, see 7-13)
\end{itemize}
\ownsection{Timing Anomaly (7-26)}
Many software and system architectures are fragile since \textcolor{red}{monotonicity} does not hold in general (= making a part of the system operate faster does not necessarily lead to a faster system execution).\newline
\textbf{Examples:}
\begin{itemize}
@@ -79,8 +79,46 @@ Many software and system architectures are fragile since \textcolor{red}{monoton
\begin{itemize}
\item slower due to more processor cores
\item slower due to faster computation
\item slower due to removing precedence constraint (fixed ordering)
\end{itemize}
\end{itemize}
\ownsection{Communication and Synchronization (7-32)}
The use of shared memory for communication between tasks may cause:
\begin{itemize}
\item priority inversion
\item blocking
\end{itemize}
Solutions:
\begin{itemize}
\item shared medium must be “thread safe”
\item data exchange must be protected by critical sections
\end{itemize}
\ownsubsection{Synchronous Communication (7-34)}
\begin{itemize}
\item Tasks must be synchronized (rendez-vous) to exchange data
\item both must be ready at the same time $\rightarrow$ waiting time
\item Communication needs synchronization, therefore the timing of the communication partners is linked
\end{itemize}
\ownsubsection{Asynchronous Communication (7-35)}
\begin{itemize}
\item The sender deposits a message into a channel from where the receiver retrieves the message.
\item Tasks don't have to wait for each other
\item more suited for real-time systems
\item \textbf{Mailbox:} shared memory buffer with send and retrieve functions and a fixed capacity (FIFO)
\item \textbf{Problem:} Blocking behavior if the channel is full or empty. Alternative approaches: cyclical asynchronous buffers (see next section) or double buffering.
\end{itemize}
Example on slide (7-36)\\
\includegraphics[width=0.9\columnwidth]{images/mailbox.JPG}
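A minimal mailbox sketch using Python's standard \texttt{queue} module (capacity and function names are chosen for illustration); \texttt{put} and \texttt{get} block when the mailbox is full or empty, which is exactly the blocking behavior noted above:
\begin{python}
# Mailbox sketch: fixed-capacity, thread-safe FIFO between sender and receiver.
import queue

mailbox = queue.Queue(maxsize=8)   # capacity chosen for illustration

def send(msg):
    mailbox.put(msg)               # blocks if the mailbox is full

def receive():
    return mailbox.get()           # blocks if the mailbox is empty
\end{python}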
\ownsubsection{Cyclical Asynchronous Buffers (CAB)(7-40)}
\begin{itemize}
\item non-blocking communication between tasks (the sender and receiver are never blocked)
\item a message is not consumed (extracted) by a receiver but maintained until overwritten by a new message.
\item several readers can read at the same time
\item max. number of internal buffers must be equal to the number of tasks that use the data structure plus one (see the sketch below)
\end{itemize}
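A simplified CAB sketch: the writer overwrites the oldest slot and publishes it as the most recent message, the reader always returns the most recent one, and the lock only protects the pointer update, so neither side waits for the other. A real CAB additionally keeps per-buffer use counters so that a buffer still being read is never reused; this bookkeeping is omitted here:
\begin{python}
# Simplified CAB sketch (the use counters of a real CAB are omitted).
import threading

class CAB:
    def __init__(self, n_buffers):          # n_buffers = #tasks using it + 1
        self.bufs = [None] * n_buffers
        self.most_recent = 0
        self.lock = threading.Lock()

    def putmes(self, msg):                   # writer: overwrite, never blocks
        with self.lock:
            slot = (self.most_recent + 1) % len(self.bufs)
            self.bufs[slot] = msg
            self.most_recent = slot

    def getmes(self):                        # reader: latest message, no block
        with self.lock:
            return self.bufs[self.most_recent]
\end{python}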
@@ -28,18 +28,22 @@
\newpage
\input{chapters/Chapter4.tex}
\newpage
\input{chapters/05_operationSystem.tex} % kuenzlij
\newpage
\input{chapters/06_aperiodicAndPeriodicScheduling.tex} % kuenzlij
\newpage
%\input{chapters/realTimeModels.tex}
%\newpage
%\input{chapters/schedulingAlgorithmsAperiodicTasks.tex}
%\newpage
%\input{chapters/schedulingAlgorithmsPeriodicTasks.tex}
%\newpage
%\input{chapters/mixedTasks.tex}
\newpage
\input{chapters/Chapter7.tex}
%\input{chapters/resourceSharing.tex}
\newpage
%% belongs to chapter 5 OS
%\input{chapters/realtimeOS.tex}
%\newpage
\input{chapters/systemComponents.tex}