Verified commit c1829cfc, authored by Theo von Arx
Delete unneeded files
parent 8f906d48
% !TeX root = main.tex
\subsection{Chapter 2: }
\subsection{Chapter 3: Hardware-Software Interface}
\item Add memory map of the MSP432 (processor used in the lab)
\item Add UART
\item Add Polling and Interrupt (Slide 3-55)
\subsection{Chapter 4: Programming Paradigms}
Done! :D \\
\subsection{Chapter 5: OS}
Done! :D \\
\subsection{Chapter 6: Aperiodic and Periodic Scheduling}
Done! :D \\
\subsection{Chapter 9: Power and Energy}
\item Add Application control (Slide 9-53)
% !TeX root = ../main.tex
\section{Bluetooth (8-24)}
Small size, low cost and low energy consumption. Secure and robust transmission. \newline
\subsection{Technical Data (8-24)} \oldline
\item Frequency Range: $(2402+k)$ MHz, $k=0,1,...,78$ (79 channels)
\item 10-100m transmission range, 1Mbit/s BW for each connection
\item \textcolor{red}{Frequency Hopping} and Time Multiplexing: Transmitter jumps from one frequency to another with a fixed rate (0.625ms, 1600 hops/s). The channel sequence is determined by a pseudo random sequence of length $2^{27}-1$.
\item Simultaneous transmission of multimedia streams (synchronous) and data (asynchronous)
\subsection{Network topologies (8-28)}
Contains 1 master and up to 7 slaves. All nodes in a piconet use the same channel hopping sequence, which is determined by the master's device address (BD\_ADDR); the phase is determined by the master's system clock. \newline
Connections are either one-to-one or between the master and all slaves (broadcast).
\textbf{Scatternet}: Several piconets with overlapping nodes. A node can simultaneously be a slave in several piconets but a master in at most one piconet. The channel sequences of the different piconets are not synchronized.
\subsection{Addressing and Packet Format (8-32)} \oldline
\subsection{Connection Types (8-34)}
\textbf{Synchronous Connection-Oriented (SCO)}
\item Point to point full duplex connection between master \& slaves
\item Master reserves slots to allow transmission of packets \textbf{in regular intervals}
\item Every packet of the master is followed by one of the slave
\textbf{Asynchronous Connection-Less (ACL)}
\item Asynchronous service, no reservation of slots
\item The master transmits spontaneously, the addressed slave answers in the following interval
\end{itemize} ~\newline
\textbf{Multi-Slot Communication} \newline
Master can only start sending in even slot numbers, slaves only in odd ones. Packets from master or slave have a length of 1, 3 or 5 slots. During a multi-slot packet the \textbf{frequency doesn't change}; afterwards the transmitter jumps back to the original sequence (e.g. after a triple-slot packet with $f(k)$ it jumps to $f(k+3)$).
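The slot-skipping rule can be sanity-checked with a small Python sketch; the hop sequence below is made up for illustration:

```python
def used_frequencies(hop_seq, start, packet_slots):
    """Frequencies during and right after a multi-slot packet.

    The frequency stays fixed at hop_seq[start] for the whole packet;
    afterwards the sequence resumes at hop_seq[start + packet_slots],
    as if hopping had continued in the background."""
    assert packet_slots in (1, 3, 5)
    during = [hop_seq[start]] * packet_slots
    after = hop_seq[start + packet_slots]
    return during, after

# Hypothetical hop sequence over the (2402 + k) MHz channels:
hops = [2402 + k for k in (17, 61, 5, 40, 33, 72)]
during, after = used_frequencies(hops, 0, 3)  # triple-slot packet on f(0)
```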
\subsection{Modes and States (8-38)}
\textbf{Modes of operation:}
\item \textcolor{red}{Inquiry}: Master identifies addresses of neighbouring nodes
\item \textcolor{red}{Page}: Master attempts connection to a slave whose BD\_ADDR address is known
\item \textcolor{red}{Connected}: Connection between master and slave is established
\textbf{States in connection mode:}
\item \textcolor{red}{Active}: Active in a connection to a master
\item \textcolor{red}{Hold}: Does not process data packets
\item \textcolor{red}{Sniff}: Awakens in regular time intervals
\item \textcolor{red}{Park}: Passive, in no connection with master but still synchronized
$\to$ Since $P_{receive} \approx P_{send}$, it is important to be able to shut down the receiver.\\
\subsection{Protocol Hierarchy (8-47)}
\item \textcolor{red}{Baseband Specification}: Defines packet formats, physical and logical channels, error correction, synchronization between RX and TX and the different modes of operation
\item \textcolor{red}{Audio Specification}: coding/decoding methods
\textbf{Protocol stack}
\item \textcolor{red}{Link Manager (LM)}: Covers the authentication of a connection and encryption, management of piconet, initiation of connection and transition between modes of operation / states
\item \textcolor{red}{Host Controller Interface (HCI)}: Defines common standardized interface between host and bluetooth node
\item \textcolor{red}{Link Layer Control and Adaptation Layer (L2CAP)}: Abstract interface for data communication. Segments/reassembles packets.
\item \textcolor{red}{RFCOMM}: simulates a serial connection
% !TeX root = ../main.tex
\item Performance (bandwidth, latency and guaranteed behaviour)
\item Efficiency (cost, power consumption)
\item Robustness (fault tolerance, maintainability, security, safety)
\subsection{Random Access (asynchronous \& competing access) (8-8)}
No access control; requires low average utilization.
Improved version: slotted random access (transmissions may start only at slot boundaries).
Probability that a given station transmits successfully: $$P=p(1-p)^{n-1}$$ where $p$ is the transmission probability of each station and $n$ the number of stations. The optimal sending rate is $p=1/n$. ~\newline
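The formula and the optimum $p = 1/n$ can be checked numerically; this is only an illustrative sketch (the grid search is not part of the protocol):

```python
def station_success_prob(p, n):
    """P = p * (1-p)^(n-1): the station sends (probability p) while
    the other n-1 stations stay silent (probability (1-p)^(n-1))."""
    return p * (1 - p) ** (n - 1)

# The maximizer over p should be 1/n; check numerically for n = 10.
n = 10
best_p = max((k / 1000 for k in range(1, 1000)),
             key=lambda p: station_success_prob(p, n))
```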
\subsection{TDMA (synchronous) (8-10)}
Communication in statically allocated time slots, synchronization among all nodes necessary:
Either periodic repetition of communication frame or master node sends out a synchronization frame
\subsection{CSMA/CD (async. \& competing access) (8-11)}
Carrier Sense Multiple Access / Collision Detection \newline
Try to avoid and detect collisions:
\item Check whether channel is idle before starting to transmit
\item If a collision is detected (bc several nodes started almost simultaneously), wait for \textcolor{red}{backoff time} (usually = $2\cdot t_{pd} \cdot \text{rand}(1, 2^s)$ after $s$ consecutive failures) and try again
\item repeated collisions result in increasing backoff times
\end{itemize} ~\newline
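The backoff rule from the list above can be sketched in Python; the function and parameter names are illustrative:

```python
import random

def backoff_time(t_pd, s, rng=random):
    """Backoff after the s-th consecutive collision:
    2 * t_pd * rand(1, 2^s), growing exponentially with s."""
    return 2 * t_pd * rng.randint(1, 2 ** s)

# Example: propagation delay 25.6 us, third consecutive collision.
wait = backoff_time(25.6e-6, 3)
```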
For cable length $L$ and signal speed $\sigma$, if Node $A$ starts transmitting at $t=0$, Node $B$ can start transmitting up to $t=L/\sigma$ after node A and there will be no collision.
$B$ detects the collision at $t_{pd}=L/\sigma$ and immediately sends a jamming sequence. $t_{pd}$ is the \textbf{end-to-end latency}.
$A$ detects collision when it receives jamming sequence at $t=2L/\sigma$. For $A$ to detect the collision, it must keep listening (and hence transmitting) until it can receive any jamming sequence.
The minimum packet size is thus $C\cdot 2L/\sigma$, where $C$~[bit/s] is the channel throughput.
Minimum waiting period is $2L/\sigma$ \newline
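The minimum-frame-size relation $C \cdot 2L/\sigma$ can be evaluated directly; the bus parameters below are hypothetical:

```python
def min_frame_bits(length_m, signal_speed_mps, throughput_bps):
    """Minimum packet size C * 2L/sigma: the sender must still be
    transmitting when a collision from the far end propagates back."""
    return throughput_bps * 2 * length_m / signal_speed_mps

# Hypothetical 10 Mbit/s bus, 2500 m cable, 2e8 m/s signal speed:
bits = min_frame_bits(2500, 2e8, 10e6)
```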
\subsection{Token Protocols (async. controlled access) (8-12)}
Token value determines which node is transmitting and/or should transmit next. Only the token holder may transmit.\\
Pass circular null-messages to prevent network from going idle.
\subsection{CSMA/CA - Flexible TDMA (async. controlled access) (8-14)}
Carrier Sense Multiple Access / Collision Avoidance
\item Reserve $s$ \textcolor{red}{slots} for $n$ nodes. Slots are short time intervals and are normally idle. If a slot is used, it becomes a \textcolor{red}{slice}
\item Nodes keep track of global communication state by \textbf{channel sensing}
\item Nodes start transmitting a message only during the assigned slot
\item If $s=n$, no collisions occur. If $s<n$, collisions may occur ($\to$ Random access)
\subsection{CSMA/CR (async. controlled access) (8-15)}
CR = Collision Resolution
\item Before message transmission, there is a global \textbf{arbitration}
\item Each node is assigned a unique ID
\item All nodes wishing to transmit compete by transmitting a binary signal based on their ID
\item A node drops out of the competition if it detects a \textbf{dominant state while transmitting a passive state}
\item The node with the lowest ID (=highest priority) wins
\end{itemize} ~\newline
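The arbitration phase can be sketched as a wired-AND competition, assuming (as in CAN, a typical CSMA/CR bus) that logical 0 is the dominant level:

```python
def arbitrate(ids, n_bits=8):
    """Bitwise arbitration sketch (assumption: dominant level = 0).

    All contenders put their ID on the bus MSB-first; the wired-AND
    bus carries a dominant 0 if any contender sends 0. A node that
    sends a recessive 1 but reads back 0 drops out, so the lowest
    ID (= highest priority) wins."""
    contenders = set(ids)
    for bit in range(n_bits - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)  # wired-AND
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
    assert len(contenders) == 1  # IDs are unique, so one winner remains
    return contenders.pop()
```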
\subsection{FlexRay (8-17)} \oldline
\item Cycle is subdivided into a static (predictable \& safe) and a dynamic (fast) segment
\item Static segment is based on a fixed allocation of time slots to nodes
\item Dynamic segment for transmission based on Flexible TDMA
\item Usage of several channels for throughput and redundancy
% !TeX root = ../main.tex
\section{Mixed Task Sets (4-49)}
For applications with both aperiodic and periodic tasks.
\item \textcolor{red}{Periodic Tasks:} time-driven, hard timing constraints
\item \textcolor{red}{Aperiodic Tasks:} event-driven, may have hard, soft or no real-time requirements
\item \textcolor{red}{Sporadic Tasks:} Aperiodic task with a maximum arrival rate (or minimum time between two arrivals) assumed.
\subsection{Background Scheduling (4-50)}
Schedule periodic tasks with RM or EDF. Process aperiodic task in the background, that is, when there is no periodic task request.
\item Good: Periodic tasks not affected
\item Bad: Aperiodic task may have huge response times and cannot be prioritized.
\subsection{RM Polling Server (4-52)}
Introduce an artificial periodic task (\textcolor{red}{server}) to service aperiodic requests. The server is characterized by a period $T_s$ and a computation time $C_s$ and scheduled with the same algorithm used for periodic tasks.
Schedulability condition is the same as for normal RM.
\textbf{Aperiodic Guarantee: }
Assumption: An aperiodic task is finished before a new one arrives. Computation time $C_a$, deadline $D_a$:
$$\Big(1+\Big\lceil\frac{C_a}{C_s}\Big\rceil\Big)T_s\leq D_a$$
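The guarantee test is a direct transcription of this formula; the helper name is illustrative:

```python
import math

def polling_server_guarantee(C_a, D_a, C_s, T_s):
    """Sufficient guarantee for an aperiodic request under a polling
    server: (1 + ceil(C_a / C_s)) * T_s <= D_a."""
    return (1 + math.ceil(C_a / C_s)) * T_s <= D_a
```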
\subsection{EDF Total Bandwidth Server}
\shortintertext{Utilization of periodic tasks:}
U_p &= \sum_i \frac{C_i}{T_i}\\
\shortintertext{Utilization of the server:}
U_s &= \frac{C_s}{T_s}
When the $k$-th aperiodic request arrives at time $t=r_k$, it receives a
deadline $$d_k=\max\{r_k,d_{k-1}\}+\frac{C_k}{U_s}$$ where $C_k$ is the
execution time of the request and $U_s $ the server utilization factor
(=bandwidth). $U_s$ can be chosen such that $U_s\leq 1-U_p$ (necessary and sufficient schedulability condition).
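The TBS deadline assignment can be transcribed directly; this sketch assumes `requests` holds $(r_k, C_k)$ pairs in arrival order:

```python
def tbs_deadlines(requests, U_s):
    """Total Bandwidth Server deadline assignment:
    d_k = max(r_k, d_{k-1}) + C_k / U_s, with d_0 = 0."""
    d_prev, deadlines = 0.0, []
    for r_k, C_k in requests:
        d_prev = max(r_k, d_prev) + C_k / U_s
        deadlines.append(d_prev)
    return deadlines
```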
% !TeX root = ../main.tex
\section{Real-Time Models}
\item A real-time task is said to be \textcolor{red}{hard}, if missing its deadline may cause catastrophic consequences on the environment under control.
\item A real-time task is called \textcolor{red}{soft}, if meeting its deadline is desirable for performance reasons, but missing its deadline is not catastrophic.
A \textcolor{red}{schedule} is an assignment of tasks $J=\{J_1,J_2,...\}$ to
the processor, such that each task is executed until completion. It can be
defined as a function $$\sigma: \mathbb{R} \rightarrow \mathbb{N}, t \mapsto
\sigma(t)$$ where $\sigma(t)$ denotes the task which is executed at time $t$.
If $\sigma(t)=0$, the processor is called \textcolor{red}{idle}.\newline
If $\sigma$ changes its value, the processor performs a \textcolor{red}{context switch}. Each interval in which $\sigma$ is constant is called a \textcolor{red}{time slice}.
A \textcolor{red}{preemptive schedule} is a schedule in which the running task can be suspended at any time.
A schedule is said to be \textcolor{red}{feasible}, if all tasks can be completed according to a set of specified constraints.
A set of tasks is \textcolor{red}{schedulable} if there exists an algorithm that can produce a feasible schedule.
%TODO: Include precedence constraints here (Slide 3-9)?
\subsection{Metrics (3-6, 3-13 and 4-38)}
\textbf{For a task}:
\item \textbf{Response time}: $R_i = f_i - r_i$
\item \textbf{Interference}: $I_i = s_i - r_i$
\item \textbf{Lateness} (positive if too late): $L_i = f_i-d_i$
\item \textbf{Tardiness}/Exceeding time: $E_i=\max\{0,L_i\}$
\item \textbf{Laxity/Slack time} (maximum time a task can be delayed):\\
$X_i=d_i-a_i-C_i$
\textbf{For a schedule}:
\item Avg. response time: $\overline{t_R} = \frac{1}{n}\sum_{i=1}^{n}(f_i-r_i)$
\item Total completion time: $t_c = \max_{i}\{f_i\} - \min_{i}\{r_i\}$
\item Weighted sum of response time:\\
$t_w = \Big(\sum_{i=1}^{n}w_i(f_i-r_i)\Big)\cdot\Big(\sum_{i=1}^{n}w_i\Big)^{-1}$
\item Maximum Lateness: $L_{max} = \max_{i}\{f_i-d_i\}$
\item Number of late tasks: $N_{late} = \sum_{i=1}^{n}\mathrm{miss}(f_i)$, where $\mathrm{miss}(f_i)=1$ if $f_i>d_i$ and $0$ otherwise
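The per-schedule metrics above can be computed from $(r_i, f_i, d_i)$ triples; this is a minimal, unweighted sketch:

```python
def schedule_metrics(tasks):
    """Per-schedule metrics; tasks is a list of (r_i, f_i, d_i)."""
    n = len(tasks)
    avg_response = sum(f - r for r, f, d in tasks) / n
    total_completion = max(f for _, f, _ in tasks) - min(r for r, _, _ in tasks)
    max_lateness = max(f - d for _, f, d in tasks)
    n_late = sum(1 for _, f, d in tasks if f > d)  # miss(f_i) summed
    return avg_response, total_completion, max_lateness, n_late
```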
\subsection{Scheduling Algorithms (3-11)}
\item \textcolor{red}{Preemptive Algorithms:} The running task can be interrupted at any time to assign the processor another active task
\item \textcolor{red}{Non-preemptive Algorithms:} A task, once started, is executed by the processor until completion
\item \textcolor{red}{Static Algorithms:} Scheduling decisions are based on fixed parameters, assigned to tasks before their activation (constant priorities)
\item \textcolor{red}{Dynamic Algorithms:} Scheduling decisions based on dynamic parameters that may vary during system execution
\item An algorithm is called \textcolor{red}{optimal}, if it minimizes some given cost function defined over the task set
\item An algorithm is called \textcolor{red}{heuristic}, if it tends to but does not guarantee to find an optimal schedule
\item \textcolor{red}{Acceptance test}: Check, for every arriving task, whether the schedule remains feasible if the task is accepted
% !TeX root = ../main.tex
\section{Real-Time OS}
Why is a desktop OS not suited?
\item Monolithic kernel is too feature rich.
\item Monolithic kernel is not modular, fault tolerant, configurable, modifiable
\item Is too resource hungry (memory, computation time)
\item Not designed for mission-critical applications
\item Timing uncertainty too large
Advantages and properties of embedded OS
\item OS can be fitted to individual need: Remove unused functions, conditional compilation depending on hardware, replace dynamic data by static data
\item Improved predictability because of scheduler
\item Interrupts can be employed by any process
\item Device drivers handled by tasks instead of hidden drivers $\implies$ everything goes through scheduler\\
\item Protection mechanisms not always necessary (Processes tested and considered reliable). $\implies$ Tasks can do their own I/O, including interrupts
\end{itemize} ~\newline
\subsection{Requirements for RTOS (6-10)} \oldline
\item The timing behaviour of the OS must be predictable
\subitem For all services upper bound on execution time
\subitem Must be deterministic (upper bounds on blocking times)
\item OS must manage the timing and scheduling
\subitem OS may have to be aware of deadlines unless scheduling is done offline
\subitem OS must provide precise time services
\item OS must be fast
\end{itemize} ~\newline
\subsection{Main Functionality of RTOS Kernel (6-13)} \oldline
\item \textbf{Task management:} Execution of quasi-parallel tasks on a processor using processes or threads
\item \textbf{CPU scheduling:} guaranteeing deadlines, minimizing waiting times, fairness in granting resources
\item \textbf{Process synchronization:} critical sections, semaphores, monitors, mutual exclusion
\item \textbf{Inter-process communication:} buffering
\item \textbf{Real-time clock:} as an internal time reference
\end{itemize} ~\newline
\subsection{Task states (6-15)} \oldline
\item \textbf{Run:} A task enters this state when it starts executing on the processor
\item \textbf{Ready:} State of tasks that are ready to execute but cannot be executed because processor is assigned to another task
\item \textbf{Wait:} A task enters this state when it executes a synchronization primitive to wait for an event, e.g. a wait primitive or a semaphore.
\item \textbf{Idle:} A periodic job enters this state when it completes its execution and has to wait for the beginning of the next period.
A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler. Multiple threads can exist within the same process and share resources such as memory. The \textcolor{red}{Thread Control Block (TCB)} stores information needed to manage and schedule a thread.
\end{definition} ~\newline
\subsection{Communication Mechanisms (6-20)}
\textbf{Problem:} The use of shared resources for implementing message passing schemes may cause priority inversion and blocking.
\textbf{Synchronous communication:}
\item Synchronization for a message transfer (\textcolor{red}{rendez-vous}) $\implies$ They have to wait for each other
\item When off-line scheduling: Transformation into precedence constraints.
\textbf{Asynchronous communication:}
\item Usage of a \textbf{Mailbox} (shared memory buffer, FIFO queue with fixed capacity) $\implies$ Tasks do not have to wait for each other
\item Better suited for real-time systems than synchronous communication
\item Problem: Blocking behaviour if channel is full or empty
\end{itemize} ~\newline
\subsection{Classes of embedded OS (6-23)}
\textbf{Class 1: Fast Proprietary Kernels}: For hard real-time systems, these kernels are questionable, because they are designed to be fast, rather than to be predictable in every respect. ~\newline
\textbf{Class 2: Extensions to Standard OS}: Real-time extensions to a standard OS that attempt to exploit a comfortable mainstream OS. An RT-kernel runs all RT-tasks; the \textbf{standard OS is executed as one task.} \newline
Pro: A crash of the standard OS does not affect the RT-tasks\newline
Con: RT-tasks cannot use standard-OS services
% !TeX root = ../main.tex
\section{Resource Sharing}
Common resources are e.g. data structures, variables, main memory area, file, set of registers, I/O unit, ...
Many shared resources require \textcolor{red}{mutual exclusion} (\textcolor{red}{exclusive resources}). \newline
\begin{definition}{Critical Section}
A piece of code executed under mutual exclusion constraints.
\item A task waiting for an exclusive resource is said to be \textcolor{red}{blocked}
\item Else, it enters the critical section and \textcolor{red}{holds} the resource until it becomes \textcolor{red}{free}.
\subsection{Semaphores $S_i$}
One for each exclusive resource $R_i$. Each critical section operating on a resource must begin with a \pythoninline{wait}($S_i$) primitive and end with a \pythoninline{signal}($S_i$) primitive. \newline
All tasks blocked on the same resource are kept in a queue associated with the semaphore.
\subsection{Priority Inversion}
A low priority task holds a Semaphore and prevents a high priority task from running. Meanwhile, a medium priority task can preempt the low priority task and run $\implies$ high priority task stays blocked. \newline
\textbf{Simple solution:} Disallow preemption during the execution of critical sections. This may block unrelated tasks with higher priority unnecessarily.
Better solution:\\
\textbf{Priority Inheritance Protocol (PIP)}: When a task $J_i$ blocks one or more higher priority tasks, it temporarily inherits the highest priority of the blocked tasks. Among equal priorities, it is \textbf{first come, first served}\newline
\textbf{Terms:} \textcolor{red}{Nominal priority} $P_i$ and \textcolor{red}{active priority} $p_i\leq P_i$. Jobs $J_1,...,J_n$ are ordered w.r.t. nominal priority where $J_1$ has highest priority. Jobs do not suspend themselves. \newline
\item When a job $J_i$ tries to enter a critical section whose semaphore is held by a lower-priority job $J_k$, $J_i$ gets blocked.
\item When job $J_i$ is blocked, it transmits its \underline{active} priority to the job $J_k$ holding the semaphore: $J_k$ takes the priority $p_k = p_i$.
\item When $J_k$ finishes the critical section, it unlocks the semaphore and the blocked job with the highest priority is awakened.
\item If no other jobs are blocked by $J_k$, then $p_k$ is set back to $P_k$; otherwise it is set to the highest priority of the jobs still blocked by $J_k$ (in the case of nested semaphores).
Blocking types (illustrated with nested critical sections):\\
\item \textcolor{red}{Direct Blocking}: Higher-priority job tries to acquire a resource held by a lower-priority job and is blocked by it.
\item \textcolor{red}{Push-through Blocking}: medium-priority job is blocked by a lower-priority job that has inherited a higher priority from a job it directly blocks.
% !TeX root = ../main.tex
\section{Scheduling algorithms for aperiodic Tasks (4-3)}
\begin{tabularx}{\columnwidth}{|p{1.7cm} | p{2.7cm} | X|}
\textbf{Aperiodic \newline Tasks} & Equal arrival times \newline non-preemptive & Arbitrary arrival times \newline preemptive \\
Independent \newline tasks \vspace{0.2cm} & EDD \newline (Jackson's Rule) & EDF \newline (Horn's Rule) \\
Dependent \newline tasks & (LDF) \newline (Lawler's Rule) & EDF* \newline (Chetto's Rule) \\
\subsection{Earliest Deadline Due (Jackson's Rule) (4-4)}
\textbf{Algorithm: }Task with earliest deadline is processed first. (Arrival times are equal for all tasks, Scheduling is non-preemptive.)\newline
\textbf{Jackson's Rule: }Given a set of $n$ independent tasks. Processing in order of non-decreasing deadlines is optimal with respect to minimizing the maximum lateness. \newline
\subsection{Latest Deadline First (Lawler's Rule)}
\textbf{Optimization goal:} Minimize the maximum lateness\\
\textbf{Assumptions on the task set:}
\item tasks with precedence relations
\item synchronous arrival times
\item proceed from tail to head
\item among the tasks without successors or whose successors have been all scheduled, select the task with the latest deadline to be scheduled last
\item repeat the procedure until all tasks in the set are selected
\subsection{Earliest Deadline First (Horn's Rule) (4-8)}
\textbf{Algorithm:} Task with earliest deadline is processed first. If new task with earlier deadline arrives, current task is preempted. \newline
\textbf{Horn's Rule: }Given a set of $n$ independent tasks with arbitrary arrival times. An algorithm that at any instant executes the task with the earliest absolute deadline among the ready tasks is optimal with respect to minimizing maximum lateness. \newline
\textbf{Acceptance test:}
Algorithm: EDF_guarantee(J, J_new)
  J' = J + {J_new};  /* union, ordered by deadline */
  t = current_time();
  f_0 = t;
  for (each J_i in J') {
    f_i = f_{i-1} + c_i(t);
    if (f_i > d_i) return(UNFEASIBLE);
  }
  return(FEASIBLE);
A new task is accepted if the schedule remains feasible. \newline
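The same acceptance test in Python; this sketch assumes tasks are given as (remaining computation time, absolute deadline) pairs at the current time $t$:

```python
def edf_guarantee(remaining, new_task, t=0.0):
    """EDF acceptance test: accept the new task iff every task,
    processed in deadline order, still finishes by its deadline."""
    f = t
    for c_i, d_i in sorted(remaining + [new_task], key=lambda x: x[1]):
        f += c_i           # f_i = f_{i-1} + c_i
        if f > d_i:
            return False   # UNFEASIBLE
    return True            # FEASIBLE
```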
\subsection{EDF* (4-13)}
Determines a feasible schedule for tasks with precedence constraints if one exists. \newline
\textbf{Algorithm: }Modify release times and deadlines. Then use EDF. \newline
\textbf{Modification of release times:}\newline
Task must start not earlier than its release time and not earlier than the minimum finishing time of its predecessor.
\item Start at initial nodes = sources of precedence graph and set $r^*_i=r_i$
\item Select a task $j$ with all its predecessors having already been modified.
\item Set $r^*_j = \max\{r_j,\max_{i}\{r^*_i+C_i: J_i \rightarrow J_j\}\}$
\item Return to 2
\textbf{Modification of deadlines:} \newline
Task must finish execution within its deadline and not later than the maximum start time of its successor.
\item Start at terminal nodes = sinks of precedence graph and set $d^*_i=d_i$
\item Select a task $i$ with all successors having already been modified.
\item Set $d^*_i = \min\{d_i, \min_{j}\{d^*_j-C_j: J_i \rightarrow J_j\}\}$
\item Return to 2
\textcolor{red}{\textbf{$\to$ For response time etc. calculations use the original release times and deadlines}}
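Both modification passes can be sketched together, assuming the precedence graph is a DAG; the data layout (dicts of tuples) is illustrative:

```python
def edf_star_transform(tasks, preds):
    """EDF* parameter modification (sketch).

    tasks: {name: (r, C, d)}; preds: {name: set of predecessor names}.
    Returns (r_star, d_star) dicts."""
    succs = {i: {j for j in tasks if i in preds[j]} for i in tasks}
    r_star, d_star = {}, {}
    while len(r_star) < len(tasks):   # forward pass (sources first)
        for j in tasks:
            if j not in r_star and preds[j] <= r_star.keys():
                # r*_j = max(r_j, max_i(r*_i + C_i))
                r_star[j] = max([tasks[j][0]] +
                                [r_star[i] + tasks[i][1] for i in preds[j]])
    while len(d_star) < len(tasks):   # backward pass (sinks first)
        for i in tasks:
            if i not in d_star and succs[i] <= d_star.keys():
                # d*_i = min(d_i, min_j(d*_j - C_j))
                d_star[i] = min([tasks[i][2]] +
                                [d_star[j] - tasks[j][1] for j in succs[i]])
    return r_star, d_star
```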
% !TeX root = ../main.tex
\section{Scheduling Algorithms for periodic tasks (4-18)}
\begin{tabularx}{\columnwidth}{|p{1.7cm} | p{2.7cm} | X|}
\textbf{Periodic \newline Tasks} & Deadline $=$ Period & Deadline $<$ Period \\
Static \newline priority \vspace{0.2cm} & RM \newline (rate-monotonic) & DM \newline (deadline-monotonic) \\
Dynamic \newline priority & EDF & (EDF*) \\
\subsection{Rate Monotonic Scheduling (4-23)}
Fixed / static priorities, independent, preemptive, deadlines equal the periods ($D_i=T_i$). Tasks can't suspend themselves; kernel overhead is assumed to be zero.
\textbf{Algorithm:} Tasks with the higher request rates (=shorter periods) have higher priorities and interrupt tasks with lower priority. RM is optimal w.r.t. schedulability.
\textbf{Schedulability Condition: } (sufficient but not necessary)
$$U=\sum_{i=1}^{n} \frac{C_i}{T_i} \leq n(2^{1/n}-1) \implies \text{schedulability}$$
where $U$ is the \textcolor{red}{processor utilization factor}. $\lim\limits_{n \to \infty} n(2^{1/n}-1) = \ln 2 \approx 0.69$
For a sufficient and necessary test, simulate the schedule at the critical instant or use the response-time algorithm from the DM section.
\textbf{Critical Instant}: The time at which the release of a task produces the largest response time. It occurs when the task is released simultaneously with all higher-priority tasks.
$\implies$ If there are no phase shifts, simulate the beginning (till all deadlines have passed). If that works, the schedule is feasible.
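The sufficient utilization test as code (a sketch, with $D_i = T_i$; a `False` result only means the test is inconclusive):

```python
def rm_sufficient_test(tasks):
    """Sufficient RM schedulability test: U <= n * (2^(1/n) - 1).

    tasks: list of (C_i, T_i) pairs with D_i = T_i."""
    n = len(tasks)
    U = sum(C / T for C, T in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return U <= bound
```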
\subsection{Deadline Monotonic Scheduling (4-35)}
Fixed / static priorities, independent, preemptive, deadlines can be smaller than periods, $C_i\leq D_i\leq T_i$.
\textbf{Algorithm}: Tasks with smaller relative deadlines have higher priorities and interrupt tasks with lower priority.
\textbf{Schedulability analysis}: (sufficient but not necessary)
$$\sum_{i=1}^{n} \frac{C_i}{D_i} \leq n(2^{1/n}-1) \implies \text{schedulability}$$
\textbf{Schedulability Condition (sufficient and necessary)}: The worst case is the critical instant. Assume that tasks are ordered according to relative deadlines $D_i$; then the \textcolor{red}{worst-case interference} for task $i$ within a window of length $t$ is
$$I_i = \sum_{j=1}^{i-1} \Big\lceil\frac{t}{T_j}\Big\rceil C_j$$
The \textcolor{red}{longest response time} $R_i$ of a periodic task $i$ occurs at the critical instant, $R_i=C_i+I_i$. Hence, compute in ascending order the smallest $R_i, ~i=1,...,n$ that satisfy $$R_i=C_i+\sum_{j=1}^{i-1}\Big\lceil\frac{R_i}{T_j}\Big\rceil C_j$$ and check whether $$\forall i=1,...,n: R_i\leq D_i$$
This condition is both necessary and sufficient.
Algorithm: DM_guarantee($\Gamma$) {
  for (each $\tau_i \in \Gamma$) {
    I = 0;
    do {
      $R = I + C_i$;
      if ($R > D_i$) return(UNSCHEDULABLE);
      I = sum(j=1,...,i-1) ( ceil($R/T_j$) * $C_j$ );
    } while ($I + C_i > R$); // iterate until fixed point
  }
  return(SCHEDULABLE);
}
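The same fixed-point iteration in Python; the sketch assumes tasks are given as $(C_i, T_i, D_i)$ triples in order of decreasing priority:

```python
import math

def response_times(tasks):
    """Exact response-time analysis: iterate
    R_i = C_i + sum_{j<i} ceil(R_i / T_j) * C_j to a fixed point.

    Returns the list of R_i, or None if some R_i exceeds D_i."""
    R = []
    for i, (C_i, T_i, D_i) in enumerate(tasks):
        r = C_i
        while True:
            r_next = C_i + sum(math.ceil(r / T_j) * C_j
                               for C_j, T_j, _ in tasks[:i])
            if r_next > D_i:
                return None       # unschedulable
            if r_next == r:
                break             # fixed point reached
            r = r_next
        R.append(r)
    return R
```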
\subsection{EDF Scheduling (4-42)}
Dynamic priority assignment, intrinsically preemptive, deadlines can be smaller than periods: $D_i \leq T_i$
\textbf{Algorithm:} The currently executing task is preempted whenever another periodic instance with earlier deadline becomes active
EDF is optimal w.r.t. schedulability (if a set of periodic tasks cannot be scheduled by EDF, it cannot be scheduled by any other algorithm).
\textbf{Schedulability condition}:
\item if $T_i=D_i$: $$U=\sum_{i=1}^{n}\frac{C_i}{T_i}\leq1$$ is both necessary and sufficient.
\item if $T_i\neq D_i$:
$$U=\sum_{i=1}^{n}\frac{C_i}{D_i}\leq1$$ is only sufficient.
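Both EDF conditions in one helper (a sketch; the test is exact for $D_i = T_i$ and only sufficient otherwise):

```python
def edf_test(tasks):
    """EDF schedulability for periodic tasks (C_i, T_i, D_i)."""
    if all(D == T for C, T, D in tasks):
        return sum(C / T for C, T, D in tasks) <= 1  # necessary & sufficient
    return sum(C / D for C, T, D in tasks) <= 1      # sufficient only
```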
Raphael Fischer (based on Gian Marti's version)\newline
\input{chapters/introduction.tex} % zrene
\input{chapters/06_aperiodicAndPeriodicScheduling.tex} % kuenzlij
%% belongs to chapter 5 OS
% \input{chapters/communication.tex}
% \input{chapters/bluetooth.tex}
\input{chapters/lowPowerDesign.tex} % rene