Verified Commit e6d6ca9d authored by Theo von Arx

Use compact 'itemize' instead of compactitem

parent 6c8fdef5
......@@ -3,26 +3,26 @@
% !TeX root = ../main.tex
\section{Embedded Operating Systems (5)}
Why is a desktop OS not suited?
\begin{compactitem}
\begin{itemize}
\item Monolithic kernel is too feature-rich.
\item Monolithic kernel is not modular, fault-tolerant, configurable, or modifiable
\item Too resource-hungry (memory, computation time)
\item Not designed for mission-critical applications
\item Timing uncertainty too large
\end{compactitem}
\end{itemize}
Advantages and properties of embedded OS
\begin{compactitem}
\begin{itemize}
\item OS can be fitted to individual needs: remove unused functions, conditional compilation depending on hardware, replace dynamic data by static data
\item Improved predictability because of scheduler
\item Interrupts can be employed by any process
\item Device drivers handled by tasks instead of hidden drivers $\implies$ everything goes through scheduler, improves timing predictability\\
\includegraphics[width=1\linewidth]{RTOS_drivers}
\item Protection mechanisms not always necessary (Processes tested and considered reliable). $\implies$ Tasks can do their own I/O, including interrupts
\end{compactitem}
\end{itemize}
\subsection{Real-Time OS (5-6)}
\begin{compactitem}
\begin{itemize}
\item The timing behaviour of the OS must be predictable
\subitem For all services upper bound on execution time
\subitem Must be deterministic (upper bounds on blocking times)
......@@ -30,26 +30,26 @@ Advantages and properties of embedded OS
\subitem OS may have to be aware of deadlines unless scheduling is done offline
\subitem OS must provide precise time services
\item OS must be fast
\end{compactitem}
\end{itemize}
\subsection{Main Functionality of RTOS Kernel (5-11)}
\begin{compactitem}
\begin{itemize}
\item \textbf{Task management:} Execution of quasi-parallel tasks on a processor using processes or threads
\item \textbf{CPU scheduling:} guaranteeing deadlines, minimizing waiting times, fairness in granting resources
\item \textbf{Task synchronization:} critical sections, semaphores, monitors, mutual exclusion
\item \textbf{Inter-process communication:} buffering
\item \textbf{Real-time clock:} as an internal time reference
\end{compactitem}
\end{itemize}
\subsection{Task states (5-12)}
\begin{center}
\includegraphics[width=0.9\columnwidth]{OS1}
\end{center}
\begin{compactitem}
\begin{itemize}
\item \textbf{Run:} A task enters this state when it starts executing on the processor
\item \textbf{Ready:} State of tasks that are ready to execute but cannot be executed because the processor is assigned to another task
\item \textbf{Blocked:} A task enters this state when it executes a synchronization primitive to wait for an event, e.g. a wait primitive or a semaphore.
\end{compactitem}
\end{itemize}
\subsection{Threads (5-16)}
A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
......@@ -68,13 +68,13 @@ Con: RT-tasks cannot use standard-OS services
\textbf{Class 3: Research Systems}
Research systems try to avoid limitations of existing real‐time and embedded operating systems.
Typical research questions:
\begin{compactitem}
\begin{itemize}
\item low-overhead memory protection
\item temporal protection of computing resources
\item RTOS for on‐chip multiprocessors
\item quality of service (QoS) control (besides real‐time constraints)
\item formally verified kernel properties
\end{compactitem}
\end{itemize}
\subsection{FreeRTOS (5-25)}
\begin{tabularx}{\columnwidth}{|p{1.7cm} | X|}
......
% !TeX root = ../main.tex
\section{Aperiodic and Periodic Scheduling (6)}
\subsection{Real-Time Systems (6-3)}
\begin{compactitem}
\begin{itemize}
\item A real-time task is said to be \textcolor{red}{hard}, if missing its deadline may cause catastrophic consequences on the environment under control.
\item A real-time task is called \textcolor{red}{soft}, if meeting its deadline is desirable for performance reasons, but missing its deadline is not catastrophic.
\end{compactitem}
\end{itemize}
\subsection{Schedule (6-4)}
A \textcolor{red}{schedule} is an assignment of tasks $J=\{J_1,J_2,...\}$ to
......@@ -24,27 +24,27 @@ A set of tasks is \textcolor{red}{schedulable} if there exists an algorithm that
\subsection{Metrics (6-6) and (6-13)}
\textbf{For a task}:
\begin{compactitem}
\begin{itemize}
\item \textbf{Response time}: $R_i = f_i - r_i$
\item \textbf{Interference}: $I_i = s_i - r_i$
\item \textbf{Lateness} (positive if too late): $L_i = f_i-d_i$
\item \textbf{Tardiness}/Exceeding time: $E_i=\max\{0,L_i\}$
\item \textbf{Laxity/Slack time} (maximum time a task can be delayed):\\
$X_i=d_i-a_i-C_i$
\end{compactitem}
\end{itemize}
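A small worked example with hypothetical values: a task with $r_i = a_i = 2$, $s_i = 3$, $C_i = 4$, $f_i = 10$ and $d_i = 9$ has
$$R_i = 10-2 = 8, \quad I_i = 3-2 = 1, \quad L_i = 10-9 = 1, \quad E_i = \max\{0,1\} = 1, \quad X_i = 9-2-4 = 3$$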
\textbf{For a schedule}:
\begin{compactitem}
\begin{itemize}
\item Avg. response time: $\overline{t_R} = \frac{1}{n}\sum_{i=1}^{n}(f_i-r_i)$
\item Total completion time: $t_c = \max_{i}\{f_i\} - \min_{i}\{r_i\}$
\item Weighted sum of response time:\\
$t_w = \Big(\sum_{i=1}^{n}w_i(f_i-r_i)\Big)\cdot\Big(\sum_{i=1}^{n}w_i\Big)^{-1}$
\item Maximum Lateness: $L_{max} = \max_{i}\{f_i-d_i\}$
\item Number of late tasks: $N_{late} = \sum_{i=1}^{n}\text{miss}(f_i)$
\end{compactitem}
\end{itemize}
\subsection{Classification of Scheduling Algorithms (6-11)}
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Preemptive Algorithms:} The running task can be interrupted at any time to assign the processor to another active task
\item \textcolor{red}{Non-preemptive Algorithms:} A task, once started, is executed by the processor until completion
\item \textcolor{red}{Static Algorithms:} Scheduling decisions are based on fixed parameters, assigned to tasks before their activation (constant priorities)
......@@ -52,7 +52,7 @@ A set of tasks is \textcolor{red}{schedulable} if there exists an algorithm that
\item An algorithm is called \textcolor{red}{optimal}, if it minimizes some given cost function defined over the task set
\item An algorithm is called \textcolor{red}{heuristic}, if it tends to but does not guarantee to find an optimal schedule
\item \textcolor{red}{Acceptance test}: Check for every newly arriving task whether the schedule remains feasible if the task is accepted
\end{compactitem}
\end{itemize}
\section{Scheduling algorithms for aperiodic Tasks (6-17)}
......@@ -145,7 +145,7 @@ Task must finish execution within its deadline and not later than the maximum st
\end{tabularx}
\\
\begin{definition}{Model of Periodic Tasks}
\begin{compactitem}
\begin{itemize}
\item $\Gamma$: denotes a set of periodic tasks
\item $\tau_i$: denotes a periodic task
\item $\tau_{i,j}$ : denotes the $j$th instance of task $i$
......@@ -155,7 +155,7 @@ Task must finish execution within its deadline and not later than the maximum st
\item $\Phi_i$: denotes the phase of task $i$ (release time of its first instance)
\item $D_i$: denotes the relative deadline of task $i$
\item $T_i$: denotes the period of task $i$
\end{compactitem}
\end{itemize}
\end{definition}
\subsection{Rate Monotonic Scheduling (RM, 6-37)}
......@@ -232,18 +232,18 @@ EDF is optimal w.r.t. schedulability (no set of periodic tasks can be scheduled
\section{Real‐Time Scheduling of Mixed Task Sets (6-63)}
For applications with both aperiodic and periodic tasks.
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Periodic Tasks:} time-driven, hard timing constraints
\item \textcolor{red}{Aperiodic Tasks:} event-driven, may have hard, soft or no real-time requirements
\item \textcolor{red}{Sporadic Tasks:} Aperiodic task with a maximum arrival rate (or minimum time between two arrivals) assumed.
\end{compactitem}
\end{itemize}
\subsection{Background Scheduling (6-65)}
Schedule periodic tasks with RM or EDF. Process aperiodic tasks in the background, that is, when there is no pending periodic task request.
\begin{compactitem}
\begin{itemize}
\item Good: Periodic tasks not affected
\item Bad: Aperiodic tasks may have huge response times and cannot be prioritized.
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.8\columnwidth]{sched2}
\end{center}
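A minimal dispatch sketch of this scheme (illustrative names; the periodic ready queue is assumed to be kept in RM or EDF order):
\begin{python}
def dispatch(periodic_ready, aperiodic_queue):
    # periodic_ready: ready periodic jobs, already ordered by RM or EDF
    # aperiodic_queue: FIFO of pending aperiodic jobs
    if periodic_ready:
        return periodic_ready[0]   # periodic tasks are never affected
    if aperiodic_queue:
        return aperiodic_queue[0]  # served only in idle times (background)
    return None                    # processor stays idle
\end{python}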
......
......@@ -15,7 +15,7 @@ ES are expected to finish tasks reliably within time bounds.\\ Essential: Upper
\end{itemize}
\section{Time Triggered Systems (4-21)}
\begin{compactitem}
\begin{itemize}
\item periodic
\item cyclic executive
\item generic time-triggered scheduler
......@@ -31,19 +31,19 @@ ES are expected to finish tasks reliably within time bounds.\\ Essential: Upper
\subitem Allow preemptable background tasks
\item not flexible as no adaptation to the environment
\item problems with long tasks
\end{compactitem}
\end{itemize}
~\newline
\includegraphics[width=0.5\linewidth]{images/timetriggered.JPG}
\subsection{Simple Periodic TT Scheduler (4-22)}
\begin{compactitem}
\begin{itemize}
\item Timer interrupts with period $P$
\item All tasks have period $P$
\item Later processes $T_2, T_3$ have unpredictable starting times
\item Inter-process communication \& resource sharing are unproblematic
\item $\sum_{k}\text{WCET}(T_k) < P$
\end{compactitem}
\end{itemize}
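As a quick check of the last condition with hypothetical numbers: three tasks with WCETs of $2\,$ms, $3\,$ms and $1\,$ms fit a timer period of $P = 10\,$ms, since $2+3+1 = 6 < 10$.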
\begin{center}
\includegraphics[width=0.8\columnwidth]{sof1}
\end{center}
......@@ -63,17 +63,17 @@ ES are expected to finish tasks reliably within time bounds.\\ Essential: Upper
\text{Note: execute the tasks one after the other, then sleep}
\subsection{TT Cyclic Executive Scheduler (4-24)}
\begin{compactitem}
\begin{itemize}
\item Tasks may have different periods
\item Period $P$ is partitioned into frames of length $f$
\item Problematic for long processes (must be split $\rightarrow$ bad)
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.9\columnwidth]{sof2}
\end{center}
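A minimal sketch of the resulting dispatch loop, assuming the offline-computed assignment of tasks to the $P/f$ frames is given as a list \pythoninline{frames} (illustrative names; the sleep stands in for the frame timer interrupt):
\begin{python}
import time

def cyclic_executive(frames, f):
    # frames[j] holds the tasks of frame j; the period is P = len(frames) * f
    while True:
        for frame in frames:
            start = time.monotonic()
            for task in frame:          # run the tasks assigned to this frame
                task()
            # wait until the next frame boundary
            time.sleep(max(0.0, f - (time.monotonic() - start)))
\end{python}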
\textbf{Definitions}:
\begin{compactitem}
\begin{itemize}
\item $J$: A set of tasks (not necessarily periodic)
\item $J_i$: A task (not necessarily periodic)
\item $\Gamma$: The set of all periodic tasks
......@@ -87,7 +87,7 @@ ES are expected to finish tasks reliably within time bounds.\\ Essential: Upper
\item $\Phi_i$: Relative phase (release time of first instance)
\item $D_i$: Relative deadline of task $i$
\item $C_i$: WCET of task $i$
\end{compactitem}
\end{itemize}
\includegraphics[width=0.6\linewidth]{symbols_cyclic_exec_scheduler}
\textbf{Conditions}: \\
\begin{tabularx}{\columnwidth}{|X|X|}
......@@ -149,7 +149,7 @@ Bem: F\"ur jeden Task eine Weckzeit berechnen. Dann zu entsprechender Zeit aufwa
\newline
\subsection{Non-Preemptive ET Scheduling (2-42)}
\begin{compactitem}
\begin{itemize}
\item Each event is associated with a corresponding process to be executed
\item Events are emitted by external interrupts or other tasks
\item Events are collected in a queue
......@@ -157,16 +157,16 @@ Bem: F\"ur jeden Task eine Weckzeit berechnen. Dann zu entsprechender Zeit aufwa
\item Extensions:
\subitem Preemptable background process (if no task in queue)
\subitem Timed events can be put into the queue (e.g. periodically)
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.8\columnwidth]{sof3}
\end{center}
Properties:
\begin{compactitem}
\begin{itemize}
\item Inter-process communication is simple; interrupts can cause problems with shared resources
\item Buffer overflow in case of too many events
\item Long processes prevent others from running and may cause buffer overflow
\end{compactitem}
\end{itemize}
~
\begin{python}
main:
......@@ -186,17 +186,17 @@ Properties:
\end{python}~\newline
\subsection{Preemptive ET Scheduling (4-40)}
\begin{compactitem}
\begin{itemize}
\item Like non-preemptive ET scheduling, but processes can be preempted (interrupted) by others.
\item Stack-based context mechanism of process calls:
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.9\columnwidth]{sof4}
\end{center}
\begin{compactitem}
\begin{itemize}
\item Processes must finish in LIFO (last in, first out) order (restricts flexibility)
\item Shared resources must be protected (e.g. with semaphores)
\end{compactitem} ~
\end{itemize} ~
\begin{python}
main:
......@@ -228,27 +228,27 @@ Properties:
% !TeX root = ../main.tex
\section{Multitasking (4-43)}
\begin{compactitem}
\begin{itemize}
\item A \textcolor{red}{Thread} is a unique execution of a program. It consists of register values, a memory stack (local variables) and a program counter.\\ Many threads can run at the same time; they share the processor \& peripherals.
\item A \textcolor{red}{process} is a unique execution of a program and has its own state. In case of a thread, this state consists mainly of register values and memory stack.
\item \textcolor{red}{Activation record} (= thread context): Contains the thread-local state, which includes registers and local data structures
\item \textcolor{red}{Context Switch}: Current CPU context goes out, new CPU context goes in
\end{compactitem} ~\newline
\end{itemize} ~\newline
\subsection{Co-operative Multitasking (4-45)}
Each process allows a context switch at \pythoninline{cswitch()} call, scheduler chooses which process runs next. \newline
Pros:
\begin{compactitem}
\begin{itemize}
\item Predictable where context switches can occur
\item Less errors with use of shared resources
\end{compactitem}
\end{itemize}
Cons:
\begin{compactitem}
\begin{itemize}
\item Processes may never give up CPU
\item Real-time behavior threatened if process keeps CPU too long
\end{compactitem} ~\newline
\end{itemize} ~\newline
\begin{compactitem}
\begin{itemize}
\item \textbf{Creating a thread}:
\begin{python}
THREAD(my_thread, arg) {
......@@ -271,17 +271,17 @@ Cons:
\item \textbf{Sleeping}: \pythoninline{NutSleep(1000);}
\item \textbf{Posting and waiting for events}:\\
\includegraphics[width=0.9\linewidth]{multitasking_events}
\end{compactitem}
\end{itemize}
$\to$ There is a \textbf{sleep queue}, a \textbf{ready queue} and a \textbf{wait queue}
\subsection{Preemptive Multitasking (4-47)} \oldline
\begin{compactitem}
\begin{itemize}
\item Most powerful form of multitasking
\item OS controls when context switches occur
\item OS determines which process runs next
\item Use of timers to call OS and switch context
\item Use of HW or SW interrupts or direct calls to OS routines to switch context
\end{compactitem}~\newline
\end{itemize}~\newline
\includegraphics[width=0.6\linewidth]{images/threads.JPG}
......@@ -28,10 +28,10 @@ To protect exclusive resources:
\textbf{Critical Section}\\
A piece of code executed under mutual exclusion constraints.
\begin{compactitem}
\begin{itemize}
\item A task waiting for an exclusive resource is said to be \textcolor{red}{blocked}
\item Else, it enters the critical section and \textcolor{red}{holds} the resource until it becomes \textcolor{red}{free}.
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.8\columnwidth]{res1}
\end{center}
......@@ -59,22 +59,22 @@ A low priority task holds a Semaphore and prevents a high priority task from run
Jobs $J_1,...,J_n$ are ordered with respect to nominal priority where $J_1$ has highest priority. Jobs do not suspend themselves. \newline
\textbf{Algorithm:}
\begin{compactitem}
\begin{itemize}
\item Jobs are scheduled according to $p_i$
\item Jobs with same priority $\rightarrow$ First come first serve
\item When a job $J_i$ tries to enter a critical section whose semaphore is held by a lower-priority job, $J_i$ gets blocked.
\item When job $J_i$ is blocked, it transmits its \underline{active} priority to the job $J_k$ that holds the semaphore. Job $J_k$ takes the priority $p_k = p_i$.
\item When $J_k$ exits the critical section, it unlocks the semaphore and the blocked job with the highest priority is awakened.
\item If no other jobs are blocked by $J_k$, then $p_k$ is set to $P_k$, otherwise it is set to the highest priority of the jobs blocked by $J_k$ (In the case of nested semaphores)
\end{compactitem}
\end{itemize}
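A minimal sketch of this bookkeeping (illustrative names, not the lecture's API; a larger numeric value means a higher priority):
\begin{python}
class Job:
    def __init__(self, nominal_prio):
        self.nominal_prio = nominal_prio   # P_i
        self.active_prio = nominal_prio    # p_i
        self.held = []                     # semaphores currently held

class Semaphore:
    def __init__(self):
        self.holder = None                 # job inside the critical section
        self.blocked = []                  # jobs blocked on this semaphore

def wait(sem, job):
    if sem.holder is None:
        sem.holder = job                   # enter the critical section
        job.held.append(sem)
    else:
        sem.blocked.append(job)            # J_i gets blocked ...
        # ... and transmits its active priority to the holder J_k
        sem.holder.active_prio = max(sem.holder.active_prio, job.active_prio)

def signal(sem, job):
    job.held.remove(sem)
    if sem.blocked:                        # wake the highest-priority blocked job
        nxt = max(sem.blocked, key=lambda j: j.active_prio)
        sem.blocked.remove(nxt)
        sem.holder = nxt
        nxt.held.append(sem)
    else:
        sem.holder = None
    # p_k falls back to P_k, or to the highest priority among jobs still
    # blocked on semaphores held by J_k (nested critical sections)
    still = [j.active_prio for s in job.held for j in s.blocked]
    job.active_prio = max([job.nominal_prio] + still)
\end{python}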
Example with nested critical sections: We can see how the active priority of Job 3 changes with time\\
\includegraphics[width=0.9\linewidth]{PIP_nested_priorities}
When $J_2$ wants to enter the critical section held by $J_3$, $p_3$ becomes higher.
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Direct Blocking}: Higher-priority job tries to acquire a resource held by a lower-priority job and is blocked by it.
\item \textcolor{red}{Push-through Blocking}: medium-priority job is blocked by a lower-priority job that has inherited a higher priority from a job it directly blocks.
\end{compactitem}
\end{itemize}
~
\includegraphics[width=\linewidth]{images/blocking.JPG}
\textbf{Problems:}
......
......@@ -10,16 +10,16 @@ Nodes correspond to tasks or operations, edges correspond to relations (``execut
% Description of control structures (e.g. branches) and data dependencies.
% \textbf{Control Flow Graph:}
% \begin{compactitem}
% \begin{itemize}
% \item Corresponds to a finite state machine, represents sequential control flow in a program.
% \item Branch conditions often associated with the outgoing edges of a node
% \item Operations to be executed are associated in form of a dependence graph
% \end{compactitem}
% \end{itemize}
% \textbf{Dependence Graph (Data Flow Graph DFG):}
% \begin{compactitem}
% \begin{itemize}
% \item NOP operations represent the start and end point of the execution (\textcolor{red}{polar graph}).
% \end{compactitem}~\newline
% \end{itemize}~\newline
% \includegraphics[width=0.7\linewidth]{Control-data_flow_graph}
......@@ -30,26 +30,26 @@ A marked graph $G=(V,A,del)$ consists of nodes (=actors) $v\in V$, edges $a=(v_i
\includegraphics[width=0.8\columnwidth]{mod4}
\end{center}
\begin{compactitem}
\begin{itemize}
\item The tokens on the edges correspond to data that are \textbf{stored in FIFO queues}
\item A node is called activated if on every input edge there is at least one token
\item A node can fire if it is activated
\item The firing of a node removes a token from each input edge and adds a token to each output edge (corresponds to processed data)
\item Used for regular computations, e.g. signal flow graphs
\end{compactitem}
\end{itemize}
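A minimal simulation sketch of the activation and firing rule, assuming the marking is stored in one FIFO queue per edge (illustrative data structures):
\begin{python}
from collections import deque

e12 = deque([1])                 # initial marking: one token on edge (v1, v2)
e23 = deque()                    # no token on edge (v2, v3)
in_edges  = {"v2": [e12], "v3": [e23]}
out_edges = {"v2": [e23], "v3": []}

def activated(v):
    # activated if every input edge holds at least one token
    return all(len(e) > 0 for e in in_edges[v])

def fire(v):
    data = [e.popleft() for e in in_edges[v]]   # remove one token per input edge
    result = sum(data)                          # stands in for the actor's computation
    for e in out_edges[v]:                      # add one token per output edge
        e.append(result)

if activated("v2"):
    fire("v2")                   # afterwards v3 is activated
\end{python}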
\textbf{Implementation of Marked Graphs (10-12)}
\begin{compactitem}
\begin{itemize}
\item Hardware implementation as \textcolor{red}{synchronous digital circuits}: Actors are combinatorial circuits, Edges are synchronously clocked shift registers (Everything synchronous $\implies$ The \# of items in a queue remains the same)
\item Hardware implementation as \textcolor{red}{self-timed asynchronous circuit}: Actors and FIFO queues independently implemented, coordination and synchronization of firings by handshake protocol ($\rightarrow$ delay insensitive implementation of the semantics)
\item Software implementation with \textcolor{red}{static scheduling}: At first, a feasible sequence of actor firings is determined which ends in the starting state (initial token distribution). This sequence is implemented in software.
\item Software implementation with \textcolor{red}{dynamic scheduling}: Scheduling is done by RTOS. Actors correspond to threads. After firing, the thread is put into wait state. It is put into ready state when all necessary input data are present.
\end{compactitem} ~\newline
\end{itemize} ~\newline
\subsection{Sequence Graph (SG) (10-16)}
A sequence graph is a \textbf{hierarchy of directed graphs}.\\
A sequence graph is a dependence graph with single start and end node.
\begin{compactitem}
\begin{itemize}
\item It contains two kinds of nodes: operation nodes and hierarchy nodes
\item Each graph is acyclic and polar with start and end node (NOP)
\item There are the following hierarchy nodes:
......@@ -58,7 +58,7 @@ A sequence graph is a dependence graph with single start and end node.
\subitem iteration (LOOP)
\item $V_S$ denotes operations of the algorithm
\item $E_S$ denotes the dependence relations
\end{compactitem}
\end{itemize}
\begin{center}
\includegraphics[width=0.9\columnwidth]{mod3}
\vspace{0.1cm}
......
......@@ -3,15 +3,15 @@
\section{Architecture Synthesis (10)}
Determines a hardware architecture that efficiently executes a given algorithm. Its tasks are:
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Allocation}: Determine necessary hardware resources
\item \textcolor{red}{Scheduling}: Determine timing of individual operations
\item \textcolor{red}{Binding}: Determine relation between individual operations and HW resources
\end{compactitem} \vspace{0.2cm}
\end{itemize} \vspace{0.2cm}
\subsection{Models (10-3)}
see Architecture Models
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Sequence Graph} $G_S = (V_S,E_S)$ where $V_S$ denotes the operations and $E_S$ the dependence relations of the algorithm
\item \textcolor{red}{Resource Graph} (bipartite) $G_R = (V_R,E_R)$ where $V_R=V_S \cup V_T$ and $V_T$ denotes the resource types of the architecture ($V_S$ are the operations). An edge $(v_s,v_t) \in E_R$ represents availability of resource type $v_t$ for operation $v_s$.
\item \textcolor{red}{Cost function} for operations $c: V_T \rightarrow \mathbb{Z}$.
......@@ -21,7 +21,7 @@ see Architecture Models
\item A \textcolor{red}{schedule} $\tau: V_S \rightarrow \mathbb{Z}^{\geq0}$ determines the starting times of operations. It is feasible iff
$$\forall (v_i,v_j) \in E_S: \quad \tau(v_j)-\tau(v_i) \geq w(v_i) \defeq w(v_i,\beta(v_i))$$
\item \textcolor{red}{latency} $L$ of a schedule is the time between start node $v_0$ and end node $v_n$: $$L=\tau(v_n)-\tau(v_0)$$
\end{compactitem}
\end{itemize}
\subsection{Multiobjective Optimization (10-25)}
Optimize latency, hardware cost, power and energy
......@@ -39,17 +39,17 @@ Optimize latency, hardware cost, power and energy
\end{definition}
\subsection{Classification of Scheduling Algorithms (10-32)} \oldline
\begin{compactitem}
\begin{itemize}
\item \textcolor{red}{Unlimited} vs. \textcolor{red}{limited} resources
\item \textcolor{red}{Iterative} (initial solution to architecture synthesis stepwise improved) vs. \textcolor{red}{constructive} (synthesis problem solved in one step) vs. \textcolor{red}{Transformative} (initial problem converted into classical optimization problem)
\end{compactitem}~ \vspace{0.2cm}
\end{itemize}~ \vspace{0.2cm}
\subsection{Scheduling without resource constraints (10-31)}
\begin{compactitem}
\begin{itemize}
\item Done as a preparatory step for general synthesis
\item or to determine bounds on feasible schedules
\item or if there is a dedicated resource for each operation
\end{compactitem}
\end{itemize}
\subsubsection{As Soon As Possible Algorithm (ASAP)}
Guarantees minimal latency. Constructive, greedy from the beginning
......@@ -81,11 +81,11 @@ guarantees that a feasible schedule exists for this latency.
\bigskip
\subsubsection{Scheduling with Timing Constraints}
\begin{compactitem}
\begin{itemize}
\item Deadlines: Latest finishing time
\item Release Times: Earliest starting time
\item Relative Constraints: Maximum or minimum differences
\end{compactitem}
\end{itemize}
Model all timing constraints using relative constraints:
\begin{equation*}
......@@ -118,14 +118,14 @@ $$\forall v_i \in V_C\setminus\{v_0\}: \quad \tau(v_i) = -\infty$$
\subsubsection{List Scheduling (10-45) (widely used heuristic)}
\begin{compactitem}
\begin{itemize}
\item Each operation has a static priority
\item Algorithm schedules one time step after the other
\item Heuristic Algorithm
\item Does not minimize the latency in general. In the special case that the dependence graph is a tree and all tasks have the same execution time, minimal latency is guaranteed (this condition is sufficient but not necessary).
\end{compactitem}
\end{itemize}
~
\includegraphics[width=1\linewidth]{List_scheduling_algorithm}
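The heuristic from the figure, restated as a compact sketch for a single resource type with static priorities (illustrative names and data structures):
\begin{python}
def list_schedule(ops, pred, w, priority, resources):
    # ops: operations; pred[v]: predecessors of v; w[v]: execution time;
    # priority[v]: static priority (larger = more urgent); resources: #units
    start, finish, t = {}, {}, 0
    while len(start) < len(ops):
        ready = [v for v in ops if v not in start
                 and all(u in finish and finish[u] <= t for u in pred[v])]
        running = sum(1 for v in start if start[v] <= t < finish[v])
        for v in sorted(ready, key=lambda v: -priority[v]):
            if running < resources:          # start ops as long as resources are free
                start[v], finish[v] = t, t + w[v]
                running += 1
        t += 1                               # schedule one time step after the other
    return start
\end{python}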
......@@ -152,13 +152,13 @@ Produces the following Ablaufplan (indep. of priorities)\\
\includegraphics[width=0.3\linewidth]{list_scheduling}
\subsubsection{Integer Linear Programming (10-50)}
\begin{compactitem}
\begin{itemize}
\item Yields optimal solution
\item Solves scheduling, binding and allocation simultaneously
\item Assumptions for the following example:
\subitem Binding is already fixed (execution times $w(v_i)$ known)
\subitem Earliest/latest starting times of operations $v_i$ are $l_i,h_i$
\end{compactitem}
\end{itemize}
~\newline
Formally:
......@@ -186,7 +186,7 @@ Example:
\includegraphics[width=0.5\linewidth]{integer_LP_example}
\begin{compactitem}
\begin{itemize}
\item Goal: minimize $\tau_f - \tau_s$
\item Precedence constraints:
\subitem $\tau_s = 0$
......@@ -204,13 +204,13 @@ Example:
\item Resource constraints for each point in time:\\
$x_{a1} + x_{b1} \leq 1$, \quad $x_{a2} + x_{b2} \leq 1, \dots$
\subitem If an operation needs more than one time slot, the resource constraints become more complicated.
\end{compactitem}
\end{itemize}
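A sketch of such a time-indexed ILP using the PuLP library (\pythoninline{pip install pulp}); the task set, horizon and single resource are hypothetical, all execution times are assumed to be $1$, and $\tau_s$ is implicitly fixed to $0$:
\begin{python}
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

tasks = ["a", "b", "c"]
prec  = [("a", "c"), ("b", "c")]        # (i, j): i must finish before j starts
T     = range(6)                        # discretized time steps

prob = LpProblem("synthesis", LpMinimize)
x = {i: [LpVariable(f"x_{i}_{t}", cat=LpBinary) for t in T] for i in tasks}
tau = {i: lpSum(t * x[i][t] for t in T) for i in tasks}    # starting times

latency = LpVariable("latency", lowBound=0)
prob += latency                                   # objective: minimize tau_f - tau_s

for i in tasks:
    prob += lpSum(x[i][t] for t in T) == 1        # each task starts exactly once
    prob += latency >= tau[i] + 1                 # tau_f is after every finishing time
for i, j in prec:
    prob += tau[j] >= tau[i] + 1                  # precedence constraints
for t in T:
    prob += lpSum(x[i][t] for i in tasks) <= 1    # resource constraint per time step

prob.solve()
print({i: int(tau[i].value()) for i in tasks}, latency.value())
\end{python}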
\subsection{Iterative Algorithms (10-56)}
Iterative Algorithms consist of a set of indexed equations that are evaluated for all values of an index variable $l$ (e.g. signal flow graphs, marked graphs). Multiple representations are possible:
\begin{compactitem}
\begin{itemize}
\item Single \textcolor{red}{indexed equation} with constant index dependencies
$$y[l] = au[l]+by[l-1]+cy[l-2]+dy[l-3]$$
\item Equivalent set of indexed equations
......@@ -239,19 +239,19 @@ Iterative Algorithms consist of a set of indexed equations that are evaluated fo
\begin{center}
\includegraphics[width=0.6\columnwidth]{images/loop.JPG}
\end{center}
\end{compactitem}
\end{itemize}
($\to$ essentially a sequence graph is executed repeatedly)
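The single indexed equation above maps directly to code; a minimal sketch with hypothetical coefficients and input:
\begin{python}
# y[l] = a*u[l] + b*y[l-1] + c*y[l-2] + d*y[l-3], evaluated for l = 0, 1, 2, ...
a, b, c, d = 0.5, 0.2, 0.1, 0.05   # hypothetical coefficients
u = [1.0] * 10                     # hypothetical input sequence
y = [0.0, 0.0, 0.0]                # initial values for y[-3], y[-2], y[-1]

for l in range(len(u)):
    # one iteration: all operations needed to compute y for index l
    y.append(a * u[l] + b * y[-1] + c * y[-2] + d * y[-3])

print(y[3:])                       # y[0] ... y[9]
\end{python}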
~
\begin{definition}{ }
\begin{compactitem}
\begin{itemize}
\item An \textcolor{red}{iteration} is the set of all operations necessary to compute all variables $x_i[l]$ for a fixed index $l$
\item The \textcolor{red}{iteration interval} $P$ is the time distance between two successive iterations of an iterative algorithm.
\item $1/P$ is the \textcolor{red}{throughput}
\item The \textcolor{red}{latency} $L$ is the maximal time distance between the starting and the finishing times of operations belonging to one iteration.
\item In \textcolor{red}{functional pipelining}, there exist time instances where the operations of different iterations $l$ are executed simultaneously.
\item In case of \textcolor{red}{loop folding}, starting and finishing times of an operation are in different physical iterations.
\end{compactitem}
\end{itemize}
\end{definition}
\textbf{Implementation}
\begin{itemize}
......@@ -284,12 +284,12 @@ proof on slide 10-65
\subsection{Dynamic Voltage Scaling (DVS, 10-67)}
Optimize energy in case of DVS using ILP:
\begin{compactitem}
\begin{itemize}
\item $|K|$ different voltage levels
\item A task $v_i \in V_S$ can use one of the execution times $\forall k \in K: w_k(v_i)$ and corresponding energy $e_k(v_i)$
\item Deadlines $d(v_i)$ for each operation
\item no resource constraints
\end{compactitem}
\end{itemize}
~
\begin{compactenum}
\item Minimize $$\sum_{k \in K} \sum_{v_i \in V_S} y_{ik}\cdot e_k(v_i)$$ subject to the constraints 2-5
......
......@@ -4,12 +4,12 @@
Small size, low cost and energy. Secure and robust transmission. \newline
\subsection{Technical Data (8-24)} \oldline
\begin{compactitem}
\begin{itemize}
\item Frequency Range: $(2402+k)$ MHz, $k=0,1,...,78$ (79 channels)
\item 10-100m transmission range, 1Mbit/s BW for each connection
\item \textcolor{red}{Frequency Hopping} and Time Multiplexing: Transmitter jumps from one frequency to another with a fixed rate (0.625ms, 1600 hops/s). The channel sequence is determined by a pseudo random sequence of length $2^{27}-1$.
\item Simultaneous transmission of multimedia streams (synchronous) and data (asynchronous)
\end{compactitem}
\end{itemize}
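As a quick arithmetic check of these numbers: channel $k = 39$ lies at $(2402+39)\,$MHz $= 2441\,$MHz, and a dwell time of $0.625\,$ms per hop corresponds to $1/0.625\,\text{ms} = 1600$ hops/s.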
\subsection{Network topologies (8-28)}
\begin{definition}{piconet}
......@@ -26,51 +26,51 @@ Small size, low cost and energy. Secure and robust transmission. \newline
\subsection{Connection Types (8-34)}
\textbf{Synchronous Connection-Oriented (SCO)}
\begin{compactitem}
\begin{itemize}
\item Point-to-point, full-duplex connection between master \& slaves
\item Master reserves slots to allow transmission of packets \textbf{at regular intervals}
\item Every packet of the master is followed by one of the slave
\end{compactitem}
\end{itemize}