Unverified commit 6551b043, authored by auphelia, committed by GitHub

Merge pull request #620 from Xilinx/feature/jenkins

Feature/jenkins
parents d8a3df5d 1456e32e
Showing 464 additions and 262 deletions
......@@ -50,4 +50,4 @@ jobs:
- name: DockerRunQuicktest
run: |
docker run --init --hostname finn_gha -v $(pwd):/workspace/finn -e FINN_BUILD_DIR=/tmp/finn_gha -e FINN_INST_NAME=finn_gha finn_gha quicktest.sh
docker run --init --hostname finn_gha -w $(pwd) -v $(pwd):$(pwd) -e FINN_BUILD_DIR=/tmp/finn_gha -e FINN_INST_NAME=finn_gha finn_gha quicktest.sh
......@@ -77,9 +77,6 @@ MANIFEST
# Per-project virtualenvs
.venv*/
# Jenkins cfg dir
/docker/jenkins_home
# SSH key dir mounted into Docker
/ssh_keys/
......@@ -96,3 +93,6 @@ MANIFEST
# generated files as part of end2end notebooks
/notebooks/end2end_example/**/*.onnx
# downloaded dep repos
/deps/
......@@ -4,7 +4,7 @@
<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-stack.png" alt="drawing" style="margin-right: 20px" width="250"/>
[![Gitter](https://badges.gitter.im/xilinx-finn/community.svg)](https://gitter.im/xilinx-finn/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![GitHub Discussions](https://img.shields.io/badge/discussions-join-green)](https://github.com/Xilinx/finn/discussions)
[![ReadTheDocs](https://readthedocs.org/projects/finn/badge/?version=latest&style=plastic)](http://finn.readthedocs.io/)
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network
......
/******************************************************************************
* Copyright (c) 2022, Xilinx, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION). HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* @brief Validation top-level module for checksum component.
* @author Thomas B. Preußer <tpreusse@amd.com>
*
*******************************************************************************/
#include "checksum.hpp"
CHECKSUM_TOP(WORDS_PER_FRAME, WORD_SIZE, ITEMS_PER_WORD)
/******************************************************************************
* Copyright (c) 2022, Xilinx, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION). HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* @brief Checksum over stream-carried data frames.
* @author Thomas B. Preußer <tpreusse@amd.com>
*
*******************************************************************************/
#include <hls_stream.h>
#include <ap_int.h>
/**
* Computes a checksum over a forwarded stream assumed to carry frames of
* N words further subdivided into K subwords.
* - Subword slicing can be customized, typically by using a lambda.
* The provided DefaultSubwordSlicer assumes an `ap_(u)int`-like word
* type with a member `width` and a range-based slicing operator. It
* further assumes a little-endian arrangement of subwords within words
* for the canonical subword stream order.
* - Subwords wider than 23 bits are folded using bitwise XOR across
* slices of 23 bits starting from the LSB.
* - The folded subword values are weighted according to their position
* in the stream relative to the start of frame by a periodic weight
* sequence 1, 2, 3, ...
* - The weighted folded subword values are reduced to a checksum by an
* accumulation modulo 2^24.
* - A checksum is emitted for each completed frame. It is the concatenation
* of an 8-bit (modulo 256) frame counter and the 24-bit frame checksum.
*/
template<typename T, unsigned K> class DefaultSubwordSlicer {
static_assert(T::width%K == 0, "Word size must be a multiple of the subword count.");
static constexpr unsigned W = T::width/K;
public:
ap_uint<W> operator()(T const &x, unsigned const j) const {
#pragma HLS inline
return x((j+1)*W-1, j*W);
}
};
template<
unsigned N, // number of data words in a frame
unsigned K, // subword count per data word
typename T, // type of stream-carried data words
typename F = DefaultSubwordSlicer<T, K> // f(T(), j) to extract subwords
>
void checksum(
hls::stream<T> &src,
hls::stream<T> &dst,
ap_uint<32> &chk,
ap_uint<1> drain, // drain data after checksumming without forwarding to `dst`
F&& f = F()
) {
ap_uint<2> coeff[3] = { 1, 2, 3 };
ap_uint<24> s = 0;
for(unsigned i = 0; i < N; i++) {
#pragma HLS pipeline II=1 style=flp
T const x = src.read();
// Pass-thru copy
if(!drain) dst.write(x);
// Actual checksum update
for(unsigned j = 0; j < K; j++) {
#pragma HLS unroll
auto const v0 = f(x, j);
constexpr unsigned W = 1 + (decltype(v0)::width-1)/23;
ap_uint<W*23> v = v0; // zero-extend to a whole number of 23-bit slices
ap_uint< 23> w = 0;
for(unsigned k = 0; k < W; k++) {
w ^= v(23*k+22, 23*k);
}
// weight w by the current 2-bit coefficient (1, 2 or 3) via shift-and-add
s += (coeff[j%3][1]? (w, ap_uint<1>(0)) : ap_uint<24>(0)) + (coeff[j%3][0]? w : ap_uint<23>(0));
}
// Re-align coefficients
for(unsigned j = 0; j < 3; j++) {
#pragma HLS unroll
ap_uint<3> const cc = coeff[j] + ap_uint<3>(K%3);
coeff[j] = cc(1, 0) + cc[2];
}
}
// Frame counter & output
static ap_uint<8> cnt = 0;
#pragma HLS reset variable=cnt
chk = (cnt++, s);
}
#define CHECKSUM_TOP_(WORDS_PER_FRAME, WORD_SIZE, ITEMS_PER_WORD) \
using T = ap_uint<WORD_SIZE>; \
void checksum_ ## WORDS_PER_FRAME ## _ ## WORD_SIZE ## _ ## ITEMS_PER_WORD ( \
hls::stream<T> &src, \
hls::stream<T> &dst, \
ap_uint<32> &chk, \
ap_uint< 1> drain \
) { \
_Pragma("HLS interface port=src axis") \
_Pragma("HLS interface port=dst axis") \
_Pragma("HLS interface port=chk s_axilite") \
_Pragma("HLS interface port=drain s_axilite") \
_Pragma("HLS interface port=return ap_ctrl_none") \
_Pragma("HLS dataflow disable_start_propagation") \
checksum<WORDS_PER_FRAME, ITEMS_PER_WORD>(src, dst, chk, drain); \
}
#define CHECKSUM_TOP(WORDS_PER_FRAME, WORD_SIZE, ITEMS_PER_WORD) \
CHECKSUM_TOP_(WORDS_PER_FRAME, WORD_SIZE, ITEMS_PER_WORD)
/******************************************************************************
* Copyright (c) 2022, Xilinx, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION). HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* @brief Testbench for checksum component.
* @author Thomas B. Preußer <tpreusse@amd.com>
*
*******************************************************************************/
module checksum_tb;
//-----------------------------------------------------------------------
// Global Control
logic clk = 0;
always #5ns clk = !clk;
logic rst;
//-----------------------------------------------------------------------
// DUT
localparam int unsigned N = 60; // words per frame
localparam int unsigned K = 4; // subwords per word
localparam int unsigned W = 8; // subword size
logic [K-1:0][W-1:0] src_TDATA;
logic src_TVALID;
uwire src_TREADY;
uwire [K-1:0][W-1:0] dst_TDATA;
uwire dst_TVALID;
logic dst_TREADY;
uwire [31:0] chk;
uwire chk_vld;
checksum_top dut (
.ap_clk(clk), .ap_rst_n(!rst),
.src_TDATA, .src_TVALID, .src_TREADY,
.dst_TDATA, .dst_TVALID, .dst_TREADY,
.chk, .chk_ap_vld(chk_vld),
.ap_local_block(), .ap_local_deadlock()
);
//-----------------------------------------------------------------------
// Stimulus
logic [K-1:0][W-1:0] Bypass [$] = {};
logic [31:0] Checksum[$] = {};
initial begin
src_TDATA = 'x;
src_TVALID = 0;
rst = 1;
repeat(9) @(posedge clk);
rst <= 0;
for(int unsigned r = 0; r < 311; r++) begin
automatic logic [23:0] sum = 0;
src_TVALID <= 1;
for(int unsigned i = 0; i < N; i++) begin
for(int unsigned k = 0; k < K; k++) begin
automatic logic [W-1:0] v = $urandom()>>17;
src_TDATA[k] <= v;
sum += ((K*i+k)%3 + 1) * v;
end
@(posedge clk iff src_TREADY);
Bypass.push_back(src_TDATA);
end
src_TVALID <= 0;
$display("Expect: %02x:%06x", r[7:0], sum);
Checksum.push_back({r, sum});
end
repeat(8) @(posedge clk);
$finish;
end
//-----------------------------------------------------------------------
// Output Validation
// Drain and check pass-thru stream
assign dst_TREADY = 1;
always_ff @(posedge clk iff dst_TVALID) begin
assert(Bypass.size()) begin
automatic logic [K-1:0][W-1:0] exp = Bypass.pop_front();
assert(dst_TDATA === exp) else begin
$error("Unexpected output %0x instead of %0x.", dst_TDATA, exp);
$stop;
end
end
else begin
$error("Spurious data output.");
$stop;
end
end
// Validate checksum reports
always_ff @(posedge clk iff chk_vld) begin
$display("Check: %02x:%06x", chk[31:24], chk[23:0]);
assert(Checksum.size()) begin
automatic logic [31:0] exp = Checksum.pop_front();
assert(chk === exp) else begin
$error("Unexpected checksum %0x instead of %0x.", chk, exp);
$stop;
end
end
else begin
$error("Spurious checksum output.");
$stop;
end
end
endmodule : checksum_tb
......@@ -39,24 +39,29 @@ WORKDIR /workspace
ENV TZ="Europe/Dublin"
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y build-essential
RUN apt-get install -y libglib2.0-0
RUN apt-get install -y libsm6
RUN apt-get install -y libxext6
RUN apt-get install -y libxrender-dev
RUN apt-get install -y verilator
RUN apt-get install -y nano
RUN apt-get install -y zsh
RUN apt-get install -y rsync
RUN apt-get install -y git
RUN apt-get install -y sshpass
RUN apt-get install -y wget
RUN apt-get install -y sudo
RUN apt-get install -y unzip
RUN apt-get install -y zip
RUN apt-get update && \
apt-get install -y \
build-essential \
libc6-dev-i386 \
libglib2.0-0 \
libsm6 \
libxext6 \
libxrender-dev \
verilator \
nano \
zsh \
rsync \
git \
openssh-client \
sshpass \
wget \
sudo \
unzip \
zip \
locales \
lsb-core
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN locale-gen "en_US.UTF-8"
# install XRT
RUN wget https://www.xilinx.com/bin/public/openDownload?filename=$XRT_DEB_VERSION.deb -O /tmp/$XRT_DEB_VERSION.deb
......@@ -72,11 +77,12 @@ RUN rm requirements.txt
RUN pip install pygments==2.4.1
RUN pip install ipykernel==5.5.5
RUN pip install jupyter==1.0.0
RUN pip install markupsafe==2.0.1
RUN pip install matplotlib==3.3.1 --ignore-installed
RUN pip install pytest-dependency==0.5.1
RUN pip install sphinx==3.1.2
RUN pip install sphinx_rtd_theme==0.5.0
RUN pip install pytest-xdist==2.0.0
RUN pip install pytest-xdist[setproctitle]==2.4.0
RUN pip install pytest-parallel==0.1.0
RUN pip install "netron>=5.0.0"
RUN pip install pandas==1.1.5
......@@ -84,70 +90,21 @@ RUN pip install scikit-learn==0.24.1
RUN pip install tqdm==4.31.1
RUN pip install -e git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading
# git-based Python repo dependencies
# these are installed in editable mode for easier co-development
ARG FINN_BASE_COMMIT="7cd7e00ba6709a85073ba22beeb5827e684fe085"
ARG QONNX_COMMIT="76c165fe7656d9bb3b826e98ac452085f1544f54"
ARG FINN_EXP_COMMIT="af6102769226b82b639f243dc36f065340991513"
ARG BREVITAS_COMMIT="a5b71d6de1389d3e7db898fef72e014842670f03"
ARG PYVERILATOR_COMMIT="0c3eb9343500fc1352a02c020a736c8c2db47e8e"
ARG CNPY_COMMIT="4e8810b1a8637695171ed346ce68f6984e585ef4"
ARG HLSLIB_COMMIT="bcca5d2b69c88e9ad7a86581ec062a9756966367"
ARG OMX_COMMIT="1dfc4aa2f2895632742cd5751520c6b472feb74e"
ARG AVNET_BDF_COMMIT="2d49cfc25766f07792c0b314489f21fe916b639b"
# finn-base
RUN git clone https://github.com/Xilinx/finn-base.git /workspace/finn-base
RUN git -C /workspace/finn-base checkout $FINN_BASE_COMMIT
RUN pip install -e /workspace/finn-base
# Install qonnx without dependencies, currently its only dependency is finn-base
RUN git clone https://github.com/fastmachinelearning/qonnx.git /workspace/qonnx
RUN git -C /workspace/qonnx checkout $QONNX_COMMIT
RUN pip install --no-dependencies -e /workspace/qonnx
# extra dependencies from other FINN deps
# installed in Docker image to make entrypoint script go faster
# finn-experimental
RUN git clone https://github.com/Xilinx/finn-experimental.git /workspace/finn-experimental
RUN git -C /workspace/finn-experimental checkout $FINN_EXP_COMMIT
RUN pip install -e /workspace/finn-experimental
RUN pip install deap==1.3.1
RUN pip install mip==1.13.0
RUN pip install networkx==2.8
# brevitas
RUN git clone https://github.com/Xilinx/brevitas.git /workspace/brevitas
RUN git -C /workspace/brevitas checkout $BREVITAS_COMMIT
RUN pip install -e /workspace/brevitas
RUN pip install future-annotations==1.0.0
RUN pip install dependencies==2.0.1
RUN pip install tokenize-rt==4.2.1
# pyverilator
RUN git clone https://github.com/maltanar/pyverilator.git /workspace/pyverilator
RUN git -C /workspace/pyverilator checkout $PYVERILATOR_COMMIT
RUN pip install -e /workspace/pyverilator
# other git-based dependencies (non-Python)
# cnpy
RUN git clone https://github.com/rogersce/cnpy.git /workspace/cnpy
RUN git -C /workspace/cnpy checkout $CNPY_COMMIT
# finn-hlslib
RUN git clone https://github.com/Xilinx/finn-hlslib.git /workspace/finn-hlslib
RUN git -C /workspace/finn-hlslib checkout $HLSLIB_COMMIT
# oh-my-xilinx
RUN git clone https://bitbucket.org/maltanar/oh-my-xilinx.git /workspace/oh-my-xilinx
RUN git -C /workspace/oh-my-xilinx checkout $OMX_COMMIT
# board files
RUN cd /tmp; \
wget -q https://github.com/cathalmccabe/pynq-z1_board_files/raw/master/pynq-z1.zip; \
wget -q https://dpoauwgwqsy2x.cloudfront.net/Download/pynq-z2.zip; \
unzip -q pynq-z1.zip; \
unzip -q pynq-z2.zip; \
mkdir /workspace/board_files; \
mv pynq-z1/ /workspace/board_files/; \
mv pynq-z2/ /workspace/board_files/; \
rm pynq-z1.zip; \
rm pynq-z2.zip; \
git clone https://github.com/Avnet/bdf.git /workspace/avnet-bdf; \
git -C /workspace/avnet-bdf checkout $AVNET_BDF_COMMIT; \
mv /workspace/avnet-bdf/* /workspace/board_files/;
RUN pip install tclwrapper==0.0.1
# extra environment variables for FINN compiler
ENV VIVADO_IP_CACHE "/tmp/vivado_ip_cache"
ENV PATH "${PATH}:/workspace/oh-my-xilinx"
ENV OHMYXILINX "/workspace/oh-my-xilinx"
WORKDIR /workspace/finn
COPY docker/finn_entrypoint.sh /usr/local/bin/
COPY docker/quicktest.sh /usr/local/bin/
......
......@@ -28,11 +28,14 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
export FINN_ROOT=/workspace/finn
export HOME=/tmp/home_dir
export SHELL=/bin/bash
export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LANGUAGE="en_US:en"
# colorful terminal output
export PS1='\[\033[1;36m\]\u\[\033[1;31m\]@\[\033[1;32m\]\h:\[\033[1;35m\]\w\[\033[1;31m\]\$\[\033[0m\] '
export PATH=$PATH:$OHMYXILINX
YELLOW='\033[0;33m'
GREEN='\033[0;32m'
......@@ -51,12 +54,21 @@ recho () {
echo -e "${RED}ERROR: $1${NC}"
}
if [ -f "$FINN_ROOT/setup.py" ];then
# qonnx
pip install --user -e ${FINN_ROOT}/deps/qonnx
# finn-experimental
pip install --user -e ${FINN_ROOT}/deps/finn-experimental
# brevitas
pip install --user -e ${FINN_ROOT}/deps/brevitas
# pyverilator
pip install --user -e ${FINN_ROOT}/deps/pyverilator
if [ -f "${FINN_ROOT}/setup.py" ];then
# run pip install for finn
pip install --user -e $FINN_ROOT
pip install --user -e ${FINN_ROOT}
else
recho "Unable to find FINN source code in /workspace/finn"
recho "Ensure you have passed -v <path-to-finn-repo>:/workspace/finn to the docker run command"
recho "Unable to find FINN source code in ${FINN_ROOT}"
recho "Ensure you have passed -v <path-to-finn-repo>:<path-to-finn-repo> to the docker run command"
exit -1
fi
......@@ -90,5 +102,16 @@ else
fi
fi
if [ -f "$HLS_PATH/settings64.sh" ];then
# source Vitis HLS env.vars
source $HLS_PATH/settings64.sh
gecho "Found Vitis HLS at $HLS_PATH"
else
yecho "Unable to find $HLS_PATH/settings64.sh"
yecho "Functionality dependent on Vitis HLS will not be available."
yecho "Please note that FINN needs at least version 2020.2 for Vitis HLS support."
yecho "If you need Vitis HLS, ensure HLS_PATH is set correctly and mounted into the Docker container."
fi
# execute the provided command(s) as root
exec "$@"
FROM jenkins/jenkins:lts
# if we want to install via apt
USER root
RUN apt-get update
RUN apt-get install -y gnupg-agent curl ca-certificates apt-transport-https software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update
RUN apt-get install -y docker-ce-cli
# drop back to the regular jenkins user - good practice
USER jenkins
pipeline {
agent any
parameters {
string(name: 'FINN_CI_BRANCH', defaultValue: '', description: 'FINN branch to build')
string(name: 'FINN_XILINX_PATH', defaultValue: '', description: 'Path to Xilinx tool installation')
string(name: 'FINN_XILINX_VERSION', defaultValue: '2020.1', description: 'Xilinx tool version')
string(name: 'PYNQ_BOARD', defaultValue: 'Pynq-Z1', description: 'PYNQ board type')
string(name: 'PYNQ_IP', defaultValue: '', description: 'PYNQ board IP address')
string(name: 'PYNQ_USERNAME', defaultValue: 'xilinx', description: 'PYNQ board username')
string(name: 'PYNQ_PASSWORD', defaultValue: 'xilinx', description: 'PYNQ board password')
string(name: 'PYNQ_TARGET_DIR', defaultValue: '/home/xilinx/finn', description: 'PYNQ board target deployment directory')
string(name: 'NUM_DEFAULT_WORKERS', defaultValue: '1', description: 'Number of cores for parallel transformations')
// main test: everything except rtlsim and end2end tests, parallel run with xdist, no parallel transformations to save on memory
string(name: 'DOCKER_CMD_MAIN', defaultValue: """python setup.py test --addopts "-k 'not (rtlsim or end2end)' --dist=loadfile -n auto" """, description: 'Main test command')
// rtlsim tests: parallel run with pytest-parallel, no parallel transformations to save on memory
string(name: 'DOCKER_CMD_RTLSIM', defaultValue: """python setup.py test --addopts "-k rtlsim --workers auto" """, description: 'rtlsim test command')
// end2end tests: no parallel testing, use NUM_DEFAULT_WORKERS for parallel transformations
string(name: 'DOCKER_CMD_END2END', defaultValue: """python setup.py test --addopts "-k end2end" """, description: 'end2end test command')
// allow specifying where to mount the cloned folder from, since Jenkins and FINN may be running in separate containers
string(name: 'WORKSPACE_MOUNT', defaultValue: '/var/jenkins_home/workspace/finn', description: 'Path to Jenkins workspace mount')
node {
def app
stage('Clone repository') {
/* Let's make sure we have the repository cloned to our workspace */
checkout scm
}
environment {
DOCKER_TAG='finn_ci:$BUILD_ID'
DOCKER_INST_NAME='finn_ci'
BUILD_PATH='/tmp/finn_ci'
VIVADO_PATH=${params.FINN_XILINX_PATH}/Vivado/${params.FINN_XILINX_VERSION}
VITIS_PATH=${params.FINN_XILINX_PATH}/Vitis/${params.FINN_XILINX_VERSION}
}
stages {
stage("Clone") {
steps {
git branch: "${params.FINN_CI_BRANCH}", url: 'https://github.com/Xilinx/finn.git'
withEnv([
"FINN_XILINX_PATH=/proj/xbuilds/SWIP/2022.1_0420_0327/installs/lin64",
"FINN_XILINX_VERSION=2022.1",
"FINN_DOCKER_TAG=xilinx/finn:jenkins",
"FINN_HOST_BUILD_DIR=/scratch/users/finn_ci",
"PLATFORM_REPO_PATHS=/opt/xilinx/dsa"
]){
parallel firstBranch: {
stage('Brevitas export') {
dir("${env.WORKSPACE}") {
sh("bash run-docker.sh python setup.py test --addopts -mbrevitas_export")
}
}
}
stage('Build') {
steps {
sh """
docker build -t $DOCKER_TAG -f docker/Dockerfile.finn_ci \
--build-arg BUILD_PATH=$BUILD_PATH \
.
"""
}, secondBranch: {
stage('Streamlining transformations') {
dir("${env.WORKSPACE}") {
sh("bash run-docker.sh python setup.py test --addopts -mstreamline")
}
}
}
stage('test-main') {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh """
docker run --init \
--hostname $DOCKER_INST_NAME \
-v ${params.WORKSPACE_MOUNT}:/workspace/finn \
-v ${params.FINN_XILINX_PATH}:${params.FINN_XILINX_PATH}:ro \
-e NUM_DEFAULT_WORKERS=1 \
-e FINN_INST_NAME=$DOCKER_INST_NAME \
-e VIVADO_PATH=$VIVADO_PATH \
-e VITIS_PATH=$VITIS_PATH \
-e PYNQ_BOARD=${params.PYNQ_BOARD} \
-e PYNQ_IP=${params.PYNQ_IP} \
-e PYNQ_USERNAME=${params.PYNQ_USERNAME} \
-e PYNQ_PASSWORD=${params.PYNQ_PASSWORD} \
-e PYNQ_TARGET_DIR=${params.PYNQ_TARGET_DIR} \
$DOCKER_TAG ${params.DOCKER_CMD_MAIN}
"""}
}, thirdBranch: {
stage('Util functions') {
dir("${env.WORKSPACE}") {
sh("bash run-docker.sh python setup.py test --addopts -mutil")
}
}
}
stage('test-rtlsim') {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh """
docker run --init \
--hostname $DOCKER_INST_NAME \
-v ${params.WORKSPACE_MOUNT}:/workspace/finn \
-v $VIVADO_PATH:$VIVADO_PATH:ro \
-e NUM_DEFAULT_WORKERS=1 \
-e FINN_INST_NAME=$DOCKER_INST_NAME \
-e VIVADO_PATH=$VIVADO_PATH \
-e VITIS_PATH=$VITIS_PATH \
-e PYNQ_BOARD=${params.PYNQ_BOARD} \
-e PYNQ_IP=${params.PYNQ_IP} \
-e PYNQ_USERNAME=${params.PYNQ_USERNAME} \
-e PYNQ_PASSWORD=${params.PYNQ_PASSWORD} \
-e PYNQ_TARGET_DIR=${params.PYNQ_TARGET_DIR} \
$DOCKER_TAG ${params.DOCKER_CMD_RTLSIM}
"""}
}, fourthBranch: {
stage('General transformations') {
dir("${env.WORKSPACE}") {
sh("bash run-docker.sh python setup.py test --addopts -mtransform")
}
}
}
stage('test-end2end') {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh """
docker run --init \
--hostname $DOCKER_INST_NAME \
-v ${params.WORKSPACE_MOUNT}:/workspace/finn \
-v $VIVADO_PATH:$VIVADO_PATH:ro \
-e NUM_DEFAULT_WORKERS=${params.NUM_DEFAULT_WORKERS} \
-e FINN_INST_NAME=$DOCKER_INST_NAME \
-e VIVADO_PATH=$VIVADO_PATH \
-e VITIS_PATH=$VITIS_PATH \
-e PYNQ_BOARD=${params.PYNQ_BOARD} \
-e PYNQ_IP=${params.PYNQ_IP} \
-e PYNQ_USERNAME=${params.PYNQ_USERNAME} \
-e PYNQ_PASSWORD=${params.PYNQ_PASSWORD} \
-e PYNQ_TARGET_DIR=${params.PYNQ_TARGET_DIR} \
$DOCKER_TAG ${params.DOCKER_CMD_END2END}
""" }
}, fifthBranch: {
stage('Fpgadataflow transformations and simulations') {
dir("${env.WORKSPACE}") {
sh("bash run-docker.sh python setup.py test --addopts -mfpgadataflow")
}
}
}
}
......
#!/bin/bash
# defaults, can be overridden by environment variables
# user to run Jenkins as -- see NOTE below regarding Docker access permissions
: ${JENKINS_USER=jenkins}
# port for Jenkins on host machine
: ${JENKINS_PORT=8080}
# make Jenkins config persistent by mounting into this folder
: ${JENKINS_HOME=$(pwd)/jenkins_home}
mkdir -p $JENKINS_HOME
# build a Jenkins Docker image that also has the Docker CLI installed
docker build -t finn_jenkins -f Dockerfile.jenkins .
# launch Docker container mounted to local Docker socket
# NOTE: we allow customizing the user (e.g. as root) to work around permission
# issues, may not always be desirable
docker run -u $JENKINS_USER -p $JENKINS_PORT:8080 -v /var/run/docker.sock:/var/run/docker.sock -v $JENKINS_HOME:/var/jenkins_home finn_jenkins
......@@ -2,7 +2,7 @@
: ${PYTEST_PARALLEL=auto}
cd $FINN_ROOT
cd $FINN_ROOT/finn
# check if command line argument is empty or not present
if [ -z $1 ]; then
echo "Running quicktest: not (vivado or slow or board) with pytest-xdist"
......
......@@ -186,20 +186,23 @@ This is possible by using the `build_custom` entry as follows:
outside the FINN repo folder for cleaner separation. Let's call this folder
``custom_build_dir``.
2. Create a ``custom_build_dir/build.py`` file that will perform the build when
executed. You should also put any ONNX model(s) or other Python modules you
may want to include in your build flow in this folder (so that they get mounted
into the Docker container while building). Besides the filename and data placement,
2. Create one or more Python files under this directory that perform the build(s)
you would like when executed, for instance ``custom_build_dir/build.py`` and
``custom_build_dir/build_quick.py``.
You should also put any ONNX model(s) or other
Python modules you may want to include in your build flow in this folder (so that they get
mounted into the Docker container while building). Besides the data placement,
you have complete freedom on how to implement the build flow here, including
calling the steps from the simple dataflow build mode above,
making calls to FINN library functions, preprocessing and altering models, building several variants etc.
You can find a basic example of build.py under ``src/finn/qnn-data/build_dataflow/build.py``.
You can find a basic example of a build flow under ``src/finn/qnn-data/build_dataflow/build.py``.
You can launch the custom build flow using:
You can launch the desired custom build flow using:
::
./run-docker.sh build_custom <path/to/custom_build_dir/>
./run-docker.sh build_custom <path/to/custom_build_dir> <name-of-build-flow>
This will mount the specified folder into the FINN Docker container and launch
your ``build.py``.
the build flow. If ``<name-of-build-flow>`` is not specified, it defaults to ``build``
and thus executes ``build.py``. If it is specified, ``<name-of-build-flow>.py`` is executed instead.
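A minimal sketch of what such a build file can look like, using the dataflow builder API (the model filename and the configuration values below are hypothetical placeholders, not part of the FINN repo):

::

  # custom_build_dir/build.py -- hedged sketch, not the canonical example
  import finn.builder.build_dataflow as build
  import finn.builder.build_dataflow_config as build_cfg

  model_file = "my_network.onnx"  # hypothetical model placed in custom_build_dir

  cfg = build_cfg.DataflowBuildConfig(
      output_dir="output_my_network",
      synth_clk_period_ns=10.0,
      fpga_part="xc7z020clg400-1",
      generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS],
  )
  build.build_dataflow_cfg(model_file, cfg)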
......@@ -63,40 +63,44 @@ Docker images
If you want to add new dependencies (packages, repos) to FINN it's
important to understand how we handle this in Docker.
There are currently two Docker images used in FINN:
* The finn.dev image, used for deploying and developing the FINN compiler. Details described below.
* The finn.ci image, which is used for continuous integration testing. Almost identical to finn.dev image, key differences are no user setup and fewer packages installed (e.g. no Jupyter).
The finn.dev image is built and launched as follows:
1. run-docker.sh launches the build of the Docker image with `docker build`
1. run-docker.sh launches fetch-repos.sh to checkout dependency git repos at correct commit hashes (unless ``FINN_SKIP_DEP_REPOS=1``)
2. Docker image is built from docker/Dockerfile.finn_dev using the following steps:
2. run-docker.sh launches the build of the Docker image with `docker build` (unless ``FINN_DOCKER_PREBUILT=1``). Docker image is built from docker/Dockerfile.finn using the following steps:
* Base: PyTorch dev image
* Set up apt dependencies: apt-get install a few packages for verilator and
* Set up pip dependencies: Python packages FINN depends on are listed in requirements.txt, which is copied into the container and pip-installed. Some additional packages (such as Jupyter and Netron) are also installed.
* Do user setup: Switch to the same user running the container to avoid running as root.
* Clone dependency repos: These include Brevitas, finn-hlslib, finn-base, pyverilator and oh-my-xilinx. The correct commit version will be checked out by the entrypoint script.
* Install XRT deps, if needed: For Vitis builds we need to install the extra dependencies for XRT. This is only triggered if the image is built with the INSTALL_XRT_DEPS=1 argument.
3. Docker image is ready, run-docker.sh can now launch a container from this image with `docker run`. It sets up certain environment variables and volume mounts:
* Vivado/Vitis is mounted from the host into the container (on the same path).
* The finn root folder is mounted under /workspace/finn. This allows modifying the source code on the host and testing inside the container.
* The finn root folder is mounted into the container (on the same path). This allows modifying the source code on the host and testing inside the container.
* The build folder is mounted under /tmp/finn_dev_username (can be overridden by defining FINN_HOST_BUILD_DIR). This will be used for generated files. Mounting on the host allows easy examination of the generated files, and keeping the generated files after the container exits.
* Various environment variables are set up for use inside the container. See the run-docker.sh script for a complete list.
4. Upon launching the container, the entrypoint script (docker/finn_entrypoint.sh) performs the following:
* Update and checkout the dependency repos at specified commits.
* Do `pip install` on the dependency git repos at specified commits.
* Source Vivado settings64.sh from specified path to make vivado and vivado_hls available.
* Download PYNQ board files into the finn root directory, unless they already exist.
* Source Vitis settings64.sh if Vitis is mounted.
5. Depending on the arguments to run-docker.sh a different application is launched. run-docker.sh notebook launches a Jupyter server for the tutorials, whereas run-docker.sh build_custom and run-docker.sh build_dataflow trigger a dataflow build (see documentation). Running without arguments yields an interactive shell. See run-docker.sh for other options.
(Re-)launching builds outside of Docker
========================================
It is possible to launch builds for FINN-generated HLS IP and stitched-IP folders outside of the Docker container.
This may be necessary for visual inspection of the generated designs inside the Vivado GUI, if you run into licensing
issues during synthesis, or other environmental problems.
Simply set the ``FINN_ROOT`` environment variable to the location where the FINN compiler is installed on the host
computer, and you should be able to launch the various .tcl scripts or .xpr project files without using the FINN
Docker container as well.
Linting
=======
......
......@@ -75,7 +75,7 @@ Why does FINN-generated architectures need FIFOs between layers?
See https://github.com/Xilinx/finn/discussions/383
How do I tell FINN to utilize DSPs instead of LUTs for MAC operations in particular layers?
This is done with the ``resType="dsp"`` attribute on ``StreamingFCLayer`` and ``Vector_Vector_Activate`` instances.
This is done with the ``resType="dsp"`` attribute on ``MatrixVectorActivation`` and ``Vector_Vector_Activate`` instances.
When using the ``build_dataflow`` system, this can be specified at a per layer basis by specifying it as part of one or more layers’
folding config (:py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig.folding_config_file`).
This is a good idea for layers with more weight/input act bits and high PE*SIMD.
......@@ -84,7 +84,7 @@ How do I tell FINN to utilize DSPs instead of LUTs for MAC operations in particu
How do I tell FINN to utilize a particular type of memory resource in particular layers?
This is done with the ``ram_style`` attribute. Check the particular ``HLSCustomOp`` attribute definition to see
which modes are supported (`example for StreamingFCLayer <https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/fpgadataflow/streamingfclayer_batch.py#L95>`_).
which modes are supported (`example for MatrixVectorActivation <https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/fpgadataflow/matrixvectoractivation.py#L101>`_).
When using the ``build_dataflow`` system, this can be specified at a per layer basis by specifying it as part of one or more layers’
folding config (:py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig.folding_config_file`).
See the `MobileNet-v1 build config for ZCU104 in finn-examples <https://github.com/Xilinx/finn-examples/blob/main/build/mobilenet-v1/folding_config/ZCU104_folding_config.json#L15>`_ for reference.
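As a hedged illustration, a folding config passed via ``folding_config_file`` can carry both attributes on a per-layer basis; the layer name below is a hypothetical placeholder and must match a node name in your model:

::

  import json

  folding_config = {
      "Defaults": {},
      "MatrixVectorActivation_0": {
          "PE": 2,
          "SIMD": 8,
          "resType": "dsp",      # use DSPs for the MACs in this layer
          "ram_style": "block",  # map weight memory to BRAM
      },
  }
  with open("folding_config.json", "w") as f:
      json.dump(folding_config, f, indent=2)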
......
......@@ -113,6 +113,7 @@ These are summarized below:
* (optional) ``FINN_DOCKER_RUN_AS_ROOT`` (default 0) if set to 1 then run Docker container as root, default is the current user.
* (optional) ``FINN_DOCKER_GPU`` (autodetected) if not 0 then expose all Nvidia GPUs or those selected by ``NVIDIA_VISIBLE_DEVICES`` to Docker container for accelerated DNN training. Requires `Nvidia Container Toolkit <https://github.com/NVIDIA/nvidia-docker>`_
* (optional) ``FINN_DOCKER_EXTRA`` (default "") pass extra arguments to the ``docker run`` command when executing ``./run-docker.sh``
* (optional) ``FINN_SKIP_DEP_REPOS`` (default "0") skips the download of FINN dependency repos (uses the ones already downloaded under ``deps/``).
* (optional) ``NVIDIA_VISIBLE_DEVICES`` (default "") specifies specific Nvidia GPUs to use in Docker container. Possible values are a comma-separated list of GPU UUID(s) or index(es) e.g. ``0,1,2``, ``all``, ``none``, or void/empty/unset.
* (optional) ``DOCKER_BUILDKIT`` (default "1") enables `Docker BuildKit <https://docs.docker.com/develop/develop-images/build_enhancements/>`_ for faster Docker image rebuilding (recommended).
......@@ -121,7 +122,7 @@ General FINN Docker tips
* Several folders including the root directory of the FINN compiler and the ``FINN_HOST_BUILD_DIR`` will be mounted into the Docker container and can be used to exchange files.
* Do not use ``sudo`` to launch the FINN Docker. Instead, setup Docker to run `without root <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_.
* If you want a new terminal on an already-running container, you can do this with `docker exec -it <name_of_container> bash`.
* The container is spawned with the `--rm` option, so make sure that any important files you created inside the container are either in the /workspace/finn folder (which is mounted from the host computer) or otherwise backed up.
* The container is spawned with the `--rm` option, so make sure that any important files you created inside the container are either in the finn compiler folder (which is mounted from the host computer) or otherwise backed up.
Using a prebuilt image
**********************
......@@ -154,7 +155,7 @@ Start on the target side:
Continue on the host side (replace the ``<PYNQ_IP>`` and ``<PYNQ_USERNAME>`` with the IP address and username of your board from the first step):
1. Launch the Docker container from where you cloned finn with ``./run-docker.sh``
2. Go into the `ssh_keys` directory (e.g. ``cd /workspace/finn/ssh_keys``)
2. Go into the `ssh_keys` directory (e.g. ``cd /path/to/finn/ssh_keys``)
3. Run ``ssh-keygen`` to create a key pair e.g. ``id_rsa`` private and ``id_rsa.pub`` public key
4. Run ``ssh-copy-id -i id_rsa.pub <PYNQ_USERNAME>@<PYNQ_IP>`` to install the keys on the remote system
5. Test that you can ``ssh <PYNQ_USERNAME>@<PYNQ_IP>`` without having to enter the password. Pass the ``-v`` flag to the ssh command if it doesn't work to help you debug.
......
......@@ -14,10 +14,10 @@ FINN uses `ONNX <https://github.com/onnx/onnx>`_ as an intermediate representati
Custom Quantization Annotations
===============================
ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN uses the quantization_annotation field in ONNX to annotate tensors with their FINN DataType (:py:mod:`finn.core.datatype.DataType`) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.
ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN uses the quantization_annotation field in ONNX to annotate tensors with their FINN DataType (:py:mod:`qonnx.core.datatype.DataType`) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.
Note that FINN uses floating point tensors as a carrier data type to represent integers. Floating point arithmetic can introduce rounding errors, e.g. (int_num * float_scale) / float_scale is not always equal to int_num.
When using the custom ONNX execution flow, FINN will attempt to sanitize any rounding errors for integer tensors. See (:py:mod:`finn.util.basic.sanitize_quant_values`) for more information.
When using the custom ONNX execution flow, FINN will attempt to sanitize any rounding errors for integer tensors. See (:py:mod:`qonnx.util.basic.sanitize_quant_values`) for more information.
This behavior can be disabled (not recommended!) by setting the environment variable SANITIZE_QUANT_TENSORS=0.
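A quick illustration of the rounding effect in plain Python:

::

  int_num = 3
  float_scale = 0.1
  roundtrip = (int_num * float_scale) / float_scale
  print(roundtrip)  # e.g. 3.0000000000000004 rather than exactly 3.0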
Custom Operations/Nodes
......@@ -39,7 +39,7 @@ To verify correct operation of FINN-ONNX graphs, FINN provides its own ONNX exec
ModelWrapper
============
FINN provides a ModelWrapper class (:py:mod:`finn.core.modelwrapper.ModelWrapper`) as a thin wrapper around ONNX to make it easier to analyze and manipulate ONNX graphs. This wrapper provides many helper functions, while still giving full access to the ONNX protobuf representation.
FINN provides a ModelWrapper class (:py:mod:`qonnx.core.modelwrapper.ModelWrapper`) as a thin wrapper around ONNX to make it easier to analyze and manipulate ONNX graphs. This wrapper provides many helper functions, while still giving full access to the ONNX protobuf representation.
Some of the helper functions are described in more detail below.
......@@ -48,7 +48,7 @@ Create a ModelWrapper instance
The ModelWrapper instance can be created using a model in .onnx format or by directly passing a ModelProto instance to the wrapper. The code block below gives an example of how to use the wrapper on a model in .onnx format.
::
from finn.core.modelwrapper import ModelWrapper
from qonnx.core.modelwrapper import ModelWrapper
model = ModelWrapper("model.onnx")
Access the ONNX GraphProto through ModelWrapper
......@@ -116,7 +116,7 @@ As mentioned above there are FINN DataTypes additional to the container datatype
model.get_tensor_datatype(tensor_list[2])
# set tensor datatype of third tensor in model tensor list
from finn.core.datatype import DataType
from qonnx.core.datatype import DataType
finn_dtype = DataType.BIPOLAR
model.set_tensor_datatype(tensor_list[2], finn_dtype)
......@@ -127,7 +127,7 @@ ModelWrapper contains two helper functions for tensor initializers, one to deter
# get tensor initializer of third tensor in model tensor list
model.get_initializer(tensor_list[2])
ModelWrapper contains more useful functions, if you are interested please have a look at the ModelWrapper module (:py:mod:`finn.core.modelwrapper.ModelWrapper`) directly.
ModelWrapper contains more useful functions; if you are interested, please have a look at the ModelWrapper module (:py:mod:`qonnx.core.modelwrapper.ModelWrapper`) directly.
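For instance, a minimal sketch combining a few of the graph navigation helpers (assuming a ``model.onnx`` file exists in the working directory):

::

  from qonnx.core.modelwrapper import ModelWrapper

  model = ModelWrapper("model.onnx")
  tname = model.graph.node[0].output[0]
  print(model.get_tensor_shape(tname))  # shape annotation of the tensor
  print(model.find_producer(tname))     # node that writes this tensor
  print(model.find_consumer(tname))     # node that reads this tensor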
.. _analysis_pass:
......@@ -146,10 +146,10 @@ A transformation passes changes (transforms) the given model, it gets the model
.. _mem_mode:
StreamingFCLayer *mem_mode*
MatrixVectorActivation *mem_mode*
=================================
FINN supports two types of the so-called *mem_mode* attribute for the node StreamingFCLayer. This mode controls how the weight values are accessed during the execution. That means the mode setting has direct influence on the resulting circuit. Currently two settings for the *mem_mode* are supported in FINN:
FINN supports two types of the so-called *mem_mode* attribute for the node MatrixVectorActivation. This mode controls how the weight values are accessed during the execution. That means the mode setting has direct influence on the resulting circuit. Currently two settings for the *mem_mode* are supported in FINN:
* "const"
......@@ -163,7 +163,7 @@ The following picture shows the idea behind the two modes.
Const mode
----------
In *const* mode the weights are "baked in" into the Matrix-Vector-Activate-Unit (MVAU), which means they are part of the HLS code. During the IP block generation the weight values are integrated as *params.h* file in the HLS code and synthesized together with it. For the *const* mode IP block generation the `StreamingFCLayer_Batch function <https://github.com/Xilinx/finn-hlslib/blob/07a8353f6cdfd8bcdd81e309a5581044c2a93d3b/fclayer.h#L94>`_ from the finn-hls library is used, which implements a standard MVAU. The resulting IP block has an input and an output stream, as shown in the above picture on the left. FIFOs in the form of verilog components are connected to these.
In *const* mode the weights are "baked in" into the Matrix-Vector-Activate-Unit (MVAU), which means they are part of the HLS code. During the IP block generation the weight values are integrated as *params.h* file in the HLS code and synthesized together with it. For the *const* mode IP block generation the `Matrix_Vector_Activate_Batch function <https://github.com/Xilinx/finn-hlslib/blob/19fa1197c09bca24a0f77a7fa04b8d7cb5cc1c1d/mvau.hpp#L93>`_ from the finn-hls library is used, which implements a standard MVAU. The resulting IP block has an input and an output stream, as shown in the above picture on the left. FIFOs in the form of verilog components are connected to these.
Advantages:
......@@ -185,7 +185,7 @@ In *decoupled* mode a different variant of the MVAU with three ports is used. Be
Advantages:
* better control over the memory primitives used (see the ram_style attribute in StreamingFCLayer)
* better control over the memory primitives used (see the ram_style attribute in MatrixVectorActivation)
* potentially faster HLS synthesis time since weight array shape is no longer part of HLS synthesis
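A hedged sketch of switching the MVAU nodes of a model to *decoupled* mode via the node attribute (the filenames are hypothetical placeholders):

::

  from qonnx.core.modelwrapper import ModelWrapper
  from qonnx.custom_op.registry import getCustomOp

  model = ModelWrapper("dataflow_model.onnx")  # hypothetical filename
  for node in model.get_nodes_by_op_type("MatrixVectorActivation"):
      getCustomOp(node).set_nodeattr("mem_mode", "decoupled")
  model.save("dataflow_model_decoupled.onnx")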
......
......@@ -19,11 +19,11 @@ Tidy-up transformations
These transformations do not appear in the diagram above, but are applied at many steps in the FINN flow to postprocess the model after a transformation and/or prepare it for the next one. They ensure that all required information is set and behave like a "tidy-up". These transformations are the following (a usage sketch follows the list):
* :py:mod:`finn.transformation.general.GiveReadableTensorNames` and :py:mod:`finn.transformation.general.GiveUniqueNodeNames`
* :py:mod:`qonnx.transformation.general.GiveReadableTensorNames` and :py:mod:`qonnx.transformation.general.GiveUniqueNodeNames`
* :py:mod:`finn.transformation.infer_datatypes.InferDataTypes` and :py:mod:`finn.transformation.infer_shapes.InferShapes`
* :py:mod:`qonnx.transformation.infer_datatypes.InferDataTypes` and :py:mod:`qonnx.transformation.infer_shapes.InferShapes`
* :py:mod:`finn.transformation.fold_constants.FoldConstants`
* :py:mod:`qonnx.transformation.fold_constants.FoldConstants`
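A minimal sketch of applying these tidy-up passes in sequence (assuming a ``model.onnx`` file in the working directory):

::

  from qonnx.core.modelwrapper import ModelWrapper
  from qonnx.transformation.fold_constants import FoldConstants
  from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
  from qonnx.transformation.infer_datatypes import InferDataTypes
  from qonnx.transformation.infer_shapes import InferShapes

  model = ModelWrapper("model.onnx")
  model = model.transform(InferShapes())
  model = model.transform(FoldConstants())
  model = model.transform(GiveUniqueNodeNames())
  model = model.transform(GiveReadableTensorNames())
  model = model.transform(InferDataTypes())
  model.save("model_tidy.onnx")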
Streamlining Transformations
============================
......@@ -35,7 +35,7 @@ After this transformation the ONNX model is streamlined and contains now custom
Convert to HLS Layers
=====================
Pairs of binary XNORPopcountMatMul layers are converted to StreamingFCLayers and following Multithreshold layers are absorbed into the Matrix-Vector-Activate-Unit (MVAU). The result is a model consisting of a mixture of HLS and non-HLS layers. For more details, see :py:mod:`finn.transformation.fpgadataflow.convert_to_hls_layers`. The MVAU can be implemented in two different modes, *const* and *decoupled*, see chapter :ref:`mem_mode`.
Pairs of binary XNORPopcountMatMul layers are converted to MatrixVectorActivation layers and following Multithreshold layers are absorbed into the Matrix-Vector-Activate-Unit (MVAU). The result is a model consisting of a mixture of HLS and non-HLS layers. For more details, see :py:mod:`finn.transformation.fpgadataflow.convert_to_hls_layers`. The MVAU can be implemented in two different modes, *const* and *decoupled*, see chapter :ref:`mem_mode`.
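A hedged sketch of triggering this conversion, assuming the post-rename transformation name ``InferBinaryMatrixVectorActivation`` and a hypothetical streamlined input model:

::

  import finn.transformation.fpgadataflow.convert_to_hls_layers as to_hls
  from qonnx.core.modelwrapper import ModelWrapper

  model = ModelWrapper("streamlined_model.onnx")  # hypothetical filename
  model = model.transform(to_hls.InferBinaryMatrixVectorActivation("const"))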
Dataflow Partitioning
=====================
......@@ -43,7 +43,7 @@ Dataflow Partitioning
In the next step the graph is split and the part consisting of HLS layers is further processed in the FINN flow. The parent graph containing the non-HLS layers remains. The PE and SIMD are set to 1 by default, so the result is a network of only HLS layers with maximum folding. The model can be verified using the *cppsim* simulation. It is a simulation using C++ and is described in more detail in chapter :ref:`verification`.
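A minimal sketch of the partitioning step (the transformation module path is taken from the FINN source tree; the filename is a hypothetical placeholder):

::

  from qonnx.core.modelwrapper import ModelWrapper
  from finn.transformation.fpgadataflow.create_dataflow_partition import (
      CreateDataflowPartition,
  )

  model = ModelWrapper("hls_layers_model.onnx")  # hypothetical filename
  parent_model = model.transform(CreateDataflowPartition())
  # the HLS layers are moved into a child model referenced by a
  # StreamingDataflowPartition node in the parent graph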
Folding
=======
=========
To adjust the folding, the values for PE and SIMD can be increased to achieve a corresponding increase in performance. The result can be verified using the same simulation flow as for the network with maximum folding (*cppsim* using C++); for details, please have a look at chapter :ref:`verification`.
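For instance, PE and SIMD can be raised per node through the node attributes (a sketch with hypothetical filenames; PE must divide the number of output channels and SIMD the number of input channels):

::

  from qonnx.core.modelwrapper import ModelWrapper
  from qonnx.custom_op.registry import getCustomOp

  model = ModelWrapper("dataflow_partition.onnx")  # hypothetical filename
  mvau = getCustomOp(model.get_nodes_by_op_type("MatrixVectorActivation")[0])
  mvau.set_nodeattr("PE", 4)    # parallel output-channel lanes
  mvau.set_nodeattr("SIMD", 8)  # parallel input-channel accumulation
  model.save("dataflow_partition_folded.onnx")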
......
......@@ -8,15 +8,15 @@ Modules
finn.core.data\_layout
-------------------------
.. automodule:: finn.core.data_layout
.. automodule:: qonnx.core.data_layout
:members:
:undoc-members:
:show-inheritance:
finn.core.datatype
qonnx.core.datatype
-------------------------
.. automodule:: finn.core.datatype
.. automodule:: qonnx.core.datatype
:members:
:undoc-members:
:show-inheritance:
......@@ -29,10 +29,10 @@ finn.core.execute\_custom\_node
:undoc-members:
:show-inheritance:
finn.core.modelwrapper
qonnx.core.modelwrapper
-----------------------------
.. automodule:: finn.core.modelwrapper
.. automodule:: qonnx.core.modelwrapper
:members:
:undoc-members:
:show-inheritance:
......
......@@ -127,10 +127,10 @@ finn.custom\_op.fpgadataflow.streamingdatawidthconverter\_batch
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.streamingfclayer\_batch
finn.custom\_op.fpgadataflow.matrixvectoractivation
-----------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.streamingfclayer_batch
.. automodule:: finn.custom_op.fpgadataflow.matrixvectoractivation
:members:
:undoc-members:
:show-inheritance:
......@@ -184,10 +184,10 @@ finn.custom\_op.fpgadataflow.upsampler
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.vector\_vector\_activate\_batch
finn.custom\_op.fpgadataflow.vectorvectoractivation
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.vector_vector_activate_batch
.. automodule:: finn.custom_op.fpgadataflow.vectorvectoractivation
:members:
:undoc-members:
:show-inheritance: