Commit 10390bfc authored by Yaman Umuroglu

Merge branch 'dev' into feature/concat

parents d6fd1dee d8e0f689
Showing changed files with 454 additions and 85 deletions
@@ -13,3 +13,12 @@ Contributors

* Suranga Mahesh (@surangamh)
* Peter Lehnhardt (@pete-lennart)
* Neil Kim Nielsen (@neilkimn)
* Jon Ander Lezeta (@jalezeta)
* John Terry (@jterry-x)
* Alina Vasilciuc (@alinavalinav)
* Alessandro Pappalardo (@volcacius)
* Giuseppe Franco (@Giuseppe5)
* Syed Asad Alam (@asadalam)
* Javier Duarte (@jmduarte)
* Uma Maheshwari (@umav1511)
* José Rosa (@pinxau1000)
@@ -24,9 +24,9 @@ Please see the [Getting Started](https://finn.readthedocs.io/en/latest/getting_s

## What's New in FINN?

* **2021-11-05:** v0.7 is released, introducing QONNX support, three new example networks and many other improvements. Read more on the [v0.7 release blog post](https://xilinx.github.io/finn//2021/11/05/finn-v07-is-released.html).
* **2021-06-15:** v0.6 is released, with ResNet-50 on U250 and ZCU104 MobileNet-v1 in finn-examples showcasing new features plus a lot more. Read more on the [v0.6 release blog post](https://xilinx.github.io/finn//2021/06/15/finn-v06-is-released.html).
* **2020-12-17:** v0.5b (beta) is released, with a new [examples repo](https://github.com/Xilinx/finn-examples) including MobileNet-v1. Read more on the <a href="https://xilinx.github.io/finn/2020/12/17/finn-v05b-beta-is-released.html">release blog post</a>.
* **2020-09-21:** v0.4b (beta) is released. Read more on the <a href="https://xilinx.github.io/finn/2020/09/21/finn-v04b-beta-is-released.html">release blog post</a>.

## Documentation
......
@@ -86,18 +86,24 @@ RUN pip install -e git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg

# git-based Python repo dependencies
# these are installed in editable mode for easier co-development
ARG FINN_BASE_COMMIT="e8facdd719b55839cca46da2cc4f4a4a372afb41"
ARG QONNX_COMMIT="9f9eff95227cc57aadc6eafcbd44b7acda89f067"
ARG FINN_EXP_COMMIT="af6102769226b82b639f243dc36f065340991513"
ARG BREVITAS_COMMIT="a5b71d6de1389d3e7db898fef72e014842670f03"
ARG PYVERILATOR_COMMIT="0c3eb9343500fc1352a02c020a736c8c2db47e8e"
ARG CNPY_COMMIT="4e8810b1a8637695171ed346ce68f6984e585ef4"
ARG HLSLIB_COMMIT="966d17d3fddd801927b2167627d23a9a15ed1461"
ARG OMX_COMMIT="1dfc4aa2f2895632742cd5751520c6b472feb74e"
ARG AVNET_BDF_COMMIT="2d49cfc25766f07792c0b314489f21fe916b639b"

# finn-base
RUN git clone https://github.com/Xilinx/finn-base.git /workspace/finn-base
RUN git -C /workspace/finn-base checkout $FINN_BASE_COMMIT
RUN pip install -e /workspace/finn-base
# Install qonnx without dependencies, currently its only dependency is finn-base
RUN git clone https://github.com/fastmachinelearning/qonnx.git /workspace/qonnx
RUN git -C /workspace/qonnx checkout $QONNX_COMMIT
RUN pip install --no-dependencies -e /workspace/qonnx
# finn-experimental
RUN git clone https://github.com/Xilinx/finn-experimental.git /workspace/finn-experimental
RUN git -C /workspace/finn-experimental checkout $FINN_EXP_COMMIT
......
@@ -8,7 +8,13 @@ Brevitas Export

:scale: 70%
:align: center
FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training, and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/brevitas_examples/bnn_pynq>`_. Brevitas provides export of a quantized network in ONNX representation in several flavors.
Two of the Brevitas-exported ONNX variants can be ingested by FINN:

* FINN-ONNX: Quantized weights are exported as tensors with additional attributes to mark low-precision datatypes, and quantized activations are exported as MultiThreshold nodes.
* QONNX: All quantization is represented using Quant, BinaryQuant or Trunc nodes. QONNX must be converted into FINN-ONNX by :py:mod:`finn.transformation.qonnx.convert_qonnx_to_finn`.

To work with either type of ONNX model, it is loaded into a :ref:`modelwrapper` provided by FINN.
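As a minimal sketch of the QONNX ingestion path (the model file name here is a placeholder)::

    from finn.core.modelwrapper import ModelWrapper
    from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN

    # load a Brevitas QONNX export and convert it to FINN-ONNX
    model = ModelWrapper("quantized_net.onnx")
    model = model.transform(ConvertQONNXtoFINN())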
At this stage we can already use the functional verification flow to simulate the model using Python; this is marked in the graphic with the dotted arrow. For more details please have a look at :ref:`verification`.
......
@@ -7,7 +7,7 @@ Developer documentation

This page is intended to serve as a starting point for new FINN developers.
Power users may also find this information useful.

Prerequisites
================

Before starting to do development on FINN it's a good idea to start
......
@@ -4,68 +4,109 @@

Frequently Asked Questions
***********************
Can't find the answer to your question here? Check `FINN GitHub Discussions <https://github.com/Xilinx/finn/discussions>`_.

Can I install FINN out of the Docker container?
    We do not support out of the Docker implementations at the moment. This is due
    to the high complexity of the FINN project dependencies.

Since FINN uses ONNX, can I compile any model from the ONNX Model Zoo to an FPGA accelerator?
    The short answer is no. FINN uses ONNX in a specific (non-standard) way, including custom layer
    types and quantization annotations. Networks must first be quantized using Brevitas and exported
    to FINN-ONNX before they can be converted to FPGA accelerators.

Can I deploy custom NNs with arbitrary precisions and layers using FINN?
    Yes, though the effort required and quality of results will vary.
    Although we do support arbitrary precision, the way we create the hardware isn't typically
    practical for more than 4 bits, or for very large networks on a single FPGA.
    In terms of layers, only a subset of quantized layers covered by the various FINN examples
    are currently supported.
    It is possible to add support for new layers, though we don't have tutorials for this in place
    just yet.

Does FINN only work with the example networks?
    FINN isn't restricted to the example networks;
    rather, it's restricted to certain patterns (e.g. certain layer types and their combinations).
    The current best practice for custom networks is to take a working network and gradually modify it.

What is the expected background for using FINN?
    Some general knowledge of Python, Docker, machine learning with neural networks and Jupyter notebooks
    is expected.
    Our goal is to shape the tool so that no hardware/FPGA background is necessary,
    although having some knowledge will give better results.

What operating systems are supported by FINN?
    FINN should work fine under any Linux-based OS capable of running Vivado/Vitis, as long
    as you install Docker (``docker-ce``) on your machine.

I am getting DocNav and Model_Composer errors when launching the Docker image.
    We do not mount those particular directories into the Docker container because they are not
    used. The errors are Vivado-related but you can safely ignore them.

What board do you recommend to start working with FINN?
    Our preferred target platforms are those supported by `PYNQ <http://www.pynq.io/board.html>`_.
    For those boards we can offer end-to-end (DNN-to-bitstream) deployment;
    see the `finn-examples <https://github.com/Xilinx/finn-examples>`_ repository for some examples.
    However, FINN also supports Vivado IP Integrator designs. The IPs connect using AXI stream (FIFO)
    in-and-out interfaces. This means they can be integrated onto any Xilinx FPGA board,
    though you will have to do the system integration manually.

FINN-generated builds break after I restart my computer, because ``/tmp`` gets wiped.
    See https://github.com/Xilinx/finn/discussions/404

How can I target an arbitrary Xilinx FPGA without PYNQ support?
    See https://github.com/Xilinx/finn/discussions/387

Why do FINN-generated architectures need FIFOs between layers?
    See https://github.com/Xilinx/finn/discussions/383
How do I tell FINN to utilize DSPs instead of LUTs for MAC operations in particular layers?
    This is done with the ``resType="dsp"`` attribute on ``StreamingFCLayer`` and ``Vector_Vector_Activate`` instances.
    When using the ``build_dataflow`` system, this can be specified on a per-layer basis by including it in one or more layers'
    folding config (:py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig.folding_config_file`).
    This is a good idea for layers with more weight/input act bits and high PE*SIMD.
    See the `MobileNet-v1 build config for ZCU104 in finn-examples <https://github.com/Xilinx/finn-examples/blob/main/build/mobilenet-v1/folding_config/ZCU104_folding_config.json#L15>`_ for reference, and the illustrative sketch after the next question.
How do I tell FINN to utilize a particular type of memory resource in particular layers?
    This is done with the ``ram_style`` attribute. Check the particular ``HLSCustomOp`` attribute definition to see
    which modes are supported (`example for StreamingFCLayer <https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/fpgadataflow/streamingfclayer_batch.py#L95>`_).
    When using the ``build_dataflow`` system, this can be specified on a per-layer basis by including it in one or more layers'
    folding config (:py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig.folding_config_file`).
    See the `MobileNet-v1 build config for ZCU104 in finn-examples <https://github.com/Xilinx/finn-examples/blob/main/build/mobilenet-v1/folding_config/ZCU104_folding_config.json#L15>`_ for reference.
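    For example, a folding config entry along these lines sets the ``resType`` and ``ram_style``
    attributes described above for one layer (the layer name and values are illustrative, not
    taken from a real build)::

        {
          "Defaults": {},
          "StreamingFCLayer_Batch_0": {
            "PE": 16,
            "SIMD": 16,
            "resType": "dsp",
            "ram_style": "block"
          }
        }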
Which data layout do FINN-generated accelerators use? Big-endian? Little-endian?
    The data layout used by FINN does not correspond to system-level big or little endian, due to difficulties in defining what
    the "word size" is and bit packing for smaller datatypes. FINN's "word size" depends on the parallelization of the
    first/last layers. For instance, if the first HLS layer uses SIMD=3, the "innermost dimension" in the
    data packing functions will be of size 3.
    When you use the verification infrastructure or the generated PYNQ Python drivers that FINN provides, the tool normally
    takes care of any required data layout conversion on standard numpy arrays before presenting the data to the accelerator,
    and vice versa on the output side. Doing this data packing and layout conversion manually can be messy at the moment.
    If you need to do this manually, first examine how the `FINN PYNQ Python drivers <https://github.com/Xilinx/finn-examples/blob/main/finn_examples/driver.py#L379>`_ do this: notice how the input data is
    first reshaped to create the "folded input shape" that reflects the word size of the first layer based on how much it
    was parallelized, then data packing is applied to obtain a raw byte array (with some reversals going on) that can be
    fed directly to the hardware. Another example of this is the `npy_to_rtlsim_input <https://github.com/Xilinx/finn-base/blob/dev/src/finn/util/data_packing.py#L289>`_ function, which converts npy arrays to lists of Python arbitrary-precision integers that are fed into pyverilator for RTL simulation.
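    A minimal sketch of that manual route (shapes, datatype and SIMD value are illustrative
    assumptions, not taken from a real network)::

        import numpy as np
        from finn.core.datatype import DataType
        from finn.util.data_packing import npy_to_rtlsim_input

        # assume the first HLS layer uses SIMD=3 and consumes 8-bit values of a
        # 32x32 RGB image; integer values live in a float container as FINN expects
        x = np.zeros((1, 32, 32, 3), dtype=np.float32)
        folded = x.reshape(1, 32, 32, 1, 3)  # innermost dim = SIMD of first layer
        np.save("input_folded.npy", folded)
        # pack into arbitrary-precision integers for rtlsim: 3 x 8 bits, padded to 24
        packed = npy_to_rtlsim_input("input_folded.npy", DataType.UINT8, 24)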
Why does FIFO sizing take so long for my network? Is something wrong?
    The automatic FIFO sizing in FINN can take quite long. It unfortunately doesn't really parallelize over multiple cores, since
    it's based on running an RTL simulation with lots of inputs and very large FIFOs, then observing the max occupancy/count
    in each FIFO.
What's a good starting point for the folding configuration if I want to make manual changes?
    First, enable automatic folding options in ``build_dataflow`` such as ``target_fps``. This should find a decent set of
    folding factors and save them to ``output_folder/auto_folding_config.json``, which you can use as a basis for creating the desired config.
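    A hedged sketch of such a first pass (model file, board and numbers are placeholders)::

        from finn.builder.build_dataflow import build_dataflow_cfg
        from finn.builder.build_dataflow_config import DataflowBuildConfig

        cfg = DataflowBuildConfig(
            output_dir="output_mynet",
            target_fps=100000,      # let FINN derive folding factors automatically
            synth_clk_period_ns=10.0,
            board="Pynq-Z1",
            generate_outputs=[],    # config/estimate artifacts only
        )
        build_dataflow_cfg("mynet.onnx", cfg)
        # afterwards, edit output_mynet/auto_folding_config.json and pass it back
        # in via the folding_config_file option for subsequent manual runs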
@@ -12,7 +12,8 @@ Quickstart

3. Clone the FINN compiler from the repo: ``git clone https://github.com/Xilinx/finn/`` and go into the directory where it is cloned
4. Execute ``./run-docker.sh quicktest`` to verify your installation.
5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup` or :ref:`Alveo first-time setup` for board setup.
6. Optionally, set up a `Vivado/Vitis license`_.
7. All done! See :ref:`Running FINN in Docker` for the various options on how to run the FINN compiler.

How do I use FINN?

@@ -28,7 +29,7 @@ In general, the approach for using the FINN framework is as follows:

1. Train your own quantized neural network (QNN) in `Brevitas <https://github.com/Xilinx/brevitas>`_. We have some `guidelines <https://bit.ly/finn-hls4ml-qat-guidelines>`_ on quantization-aware training (QAT).
2. Export to FINN-ONNX by following `this tutorial <https://github.com/Xilinx/finn/blob/master/notebooks/basics/1_brevitas_network_import.ipynb>`_ .
3. Use FINN's ``build_dataflow`` system on the exported model by following this `tutorial <https://github.com/Xilinx/finn/blob/master/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb>`_
4. Adjust your QNN topology, quantization settings and ``build_dataflow`` configuration to get the desired results.

Please note that the framework is still under development, and how well this works will depend on how similar your custom network is to the examples we provide.
@@ -111,6 +112,7 @@ These are summarized below:

* (optional) ``FINN_DOCKER_TAG`` (autogenerated) specifies the Docker image tag to use.
* (optional) ``FINN_DOCKER_RUN_AS_ROOT`` (default 0) if set to 1 then run Docker container as root, default is the current user.
* (optional) ``FINN_DOCKER_GPU`` (autodetected) if not 0 then expose all Nvidia GPUs or those selected by ``NVIDIA_VISIBLE_DEVICES`` to Docker container for accelerated DNN training. Requires `Nvidia Container Toolkit <https://github.com/NVIDIA/nvidia-docker>`_
* (optional) ``FINN_DOCKER_EXTRA`` (default "") pass extra arguments to the ``docker run`` command when executing ``./run-docker.sh``
* (optional) ``NVIDIA_VISIBLE_DEVICES`` (default "") specifies specific Nvidia GPUs to use in Docker container. Possible values are a comma-separated list of GPU UUID(s) or index(es) e.g. ``0,1,2``, ``all``, ``none``, or void/empty/unset.
* (optional) ``DOCKER_BUILDKIT`` (default "1") enables `Docker BuildKit <https://docs.docker.com/develop/develop-images/build_enhancements/>`_ for faster Docker image rebuilding (recommended).
@@ -181,15 +183,26 @@ On the host side:

5. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution.
6. Done! You can try the ``test_end2end_vitis`` tests in the FINN Docker to verify your setup, although this will take some time.
Vivado/Vitis license
*********************
If you are targeting Xilinx FPGA parts that need specific licenses (non-WebPack), you can make these available to the
FINN Docker container by passing extra arguments. To do this, you can use the ``FINN_DOCKER_EXTRA`` environment variable as follows:
::
export FINN_DOCKER_EXTRA=" -v /path/to/licenses:/path/to/licenses -e XILINXD_LICENSE_FILE=/path/to/licenses "
The above example mounts ``/path/to/licenses`` from the host into the same path on the Docker container, and sets the
value of the ``XILINXD_LICENSE_FILE`` environment variable.
System Requirements
====================

* Ubuntu 18.04 with ``bash`` installed
* Docker `without root <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_
* A working Vivado 2020.1 installation
* ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables correctly set, see `Quickstart`_
* *(optional)* `Vivado/Vitis license`_ if targeting non-WebPack FPGA parts.
* *(optional)* A PYNQ board with a network connection, see `PYNQ board first-time setup`_
* *(optional)* An Alveo board, and a working Vitis 2020.1 installation if you want to use Vitis and Alveo (see `Alveo first-time setup`_ )
......
@@ -4,12 +4,12 @@

Internals
*********

Intermediate Representation: QONNX and FINN-ONNX
================================================

FINN uses `ONNX <https://github.com/onnx/onnx>`_ as an intermediate representation (IR) for neural networks. As such, almost every component inside FINN uses ONNX and its `Python API <https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md>`_, so you may want to familiarize yourself with how ONNX represents DNNs. Specifically, the `ONNX protobuf description <https://github.com/onnx/onnx/blob/master/onnx/onnx.proto>`_ (or its `human-readable documentation <https://github.com/onnx/onnx/blob/master/docs/IR.md>`_) and the `operator schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_ are useful as reference documents. We also provide a Jupyter notebook that can help you get familiar with ONNX by showing how to work with a simple ONNX model in FINN; see chapter :ref:`tutorials` for details.

.. note:: FINN supports two specialized variants of ONNX called QONNX and FINN-ONNX, and not all ONNX graphs are supported by FINN (and vice versa).

Custom Quantization Annotations
===============================
......
@@ -23,6 +23,13 @@ finn.analysis.base

:undoc-members:
:show-inheritance:
finn.analysis.inference\_cost
-----------------------------
.. automodule:: finn.analysis.inference_cost
:members:
:undoc-members:
:show-inheritance:
finn.analysis.topology
-----------------------------
......
@@ -13,6 +13,23 @@ Base Class

:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.addstreams\_batch
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.addstreams_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.channelwise\_op\_batch
-----------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.channelwise_op_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.convolutioninputgenerator
-------------------------------------------------------------

@@ -21,6 +38,87 @@ finn.custom\_op.fpgadataflow.convolutioninputgenerator

:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.convolutioninputgenerator1d
-------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.convolutioninputgenerator1d
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.downsampler
-----------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.downsampler
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.duplicatestreams\_batch
-----------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.duplicatestreams_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.fmpadding\_batch
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.fmpadding_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.globalaccpool\_batch
-----------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.globalaccpool_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.iodma
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.iodma
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.labelselect\_batch
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.labelselect_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.lookup
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.lookup
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.pool\_batch
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.pool_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.streamingdataflowpartition
--------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.streamingdataflowpartition
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.streamingdatawidthconverter\_batch
----------------------------------------------------------------------

@@ -61,6 +159,15 @@ finn.custom\_op.fpgadataflow.templates

:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.thresholding\_batch
-----------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.thresholding_batch
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.tlastmarker
-----------------------------------------------

@@ -68,3 +175,19 @@ finn.custom\_op.fpgadataflow.tlastmarker

:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.upsampler
-----------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.upsampler
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.fpgadataflow.vector\_vector\_activate\_batch
-----------------------------------------------------------------
.. automodule:: finn.custom_op.fpgadataflow.vector_vector_activate_batch
:members:
:undoc-members:
:show-inheritance:
@@ -5,6 +5,14 @@ Custom Op - General

General Custom Ops
===================
finn.custom\_op.general.bipolar_quant
--------------------------------------
.. automodule:: finn.custom_op.general.bipolar_quant
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.general.debugmarker
-----------------------------------

@@ -13,6 +21,14 @@ finn.custom\_op.general.debugmarker

:undoc-members:
:show-inheritance:
finn.custom\_op.general.genericpartition
-----------------------------------------
.. automodule:: finn.custom_op.general.genericpartition
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.general.im2col
------------------------------

@@ -37,6 +53,14 @@ finn.custom\_op.general.multithreshold

:undoc-members:
:show-inheritance:
finn.custom\_op.general.quant
------------------------------
.. automodule:: finn.custom_op.general.quant
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.general.quantavgpool2d
--------------------------------------

@@ -45,13 +69,13 @@ finn.custom\_op.general.quantavgpool2d

:undoc-members:
:show-inheritance:
finn.custom\_op.general.trunc
------------------------------

.. automodule:: finn.custom_op.general.trunc
:members:
:undoc-members:
:show-inheritance:
finn.custom\_op.general.xnorpopcount
-------------------------------------
......
@@ -62,6 +62,14 @@ finn.transformation.fpgadataflow.create\_stitched\_ip

:undoc-members:
:show-inheritance:
finn.transformation.fpgadataflow.externalize\_params
------------------------------------------------------------
.. automodule:: finn.transformation.fpgadataflow.externalize_params
:members:
:undoc-members:
:show-inheritance:
finn.transformation.fpgadataflow.floorplan
----------------------------------------------------
......
************************
Transformation - QONNX
************************
Transformation (QONNX)
===========================
.. automodule:: finn.transformation.qonnx
:members:
:undoc-members:
:show-inheritance:
finn.transformation.qonnx.convert\_qonnx\_to\_finn
---------------------------------------------------
.. automodule:: finn.transformation.qonnx.convert_qonnx_to_finn
:members:
:undoc-members:
:show-inheritance:
finn.transformation.qonnx.fold\_quant\_weights
-----------------------------------------------
.. automodule:: finn.transformation.qonnx.fold_quant_weights
:members:
:undoc-members:
:show-inheritance:
finn.transformation.qonnx.infer\_quant\_avg\_pool\_2d
------------------------------------------------------
.. automodule:: finn.transformation.qonnx.infer_quant_avg_pool_2d
:members:
:undoc-members:
:show-inheritance:
finn.transformation.qonnx.qonnx\_activation\_handlers
-------------------------------------------------------
.. automodule:: finn.transformation.qonnx.qonnx_activation_handlers
:members:
:undoc-members:
:show-inheritance:
finn.transformation.qonnx.quant\_act\_to\_multithreshold
---------------------------------------------------------
.. automodule:: finn.transformation.qonnx.quant_act_to_multithreshold
:members:
:undoc-members:
:show-inheritance:
@@ -11,6 +11,7 @@ Submodules

:maxdepth: 2

finn.transformation.fpgadataflow
finn.transformation.qonnx
finn.transformation.streamline

Transformation Passes
@@ -40,6 +41,14 @@ finn.transformation.bipolar\_to\_xnor

:undoc-members:
:show-inheritance:
finn.transformation.change\_3d\_tensors\_to\_4d
------------------------------------------------
.. automodule:: finn.transformation.change_3d_tensors_to_4d
:members:
:undoc-members:
:show-inheritance:
finn.transformation.change\_datalayout
--------------------------------------------

@@ -48,6 +57,13 @@ finn.transformation.change\_datalayout

:undoc-members:
:show-inheritance:
finn.transformation.create\_generic\_partitions
------------------------------------------------
.. automodule:: finn.transformation.create_generic_partitions
:members:
:undoc-members:
:show-inheritance:
finn.transformation.double\_to\_single\_float
----------------------------------------------------

@@ -57,6 +73,23 @@ finn.transformation.double\_to\_single\_float

:undoc-members:
:show-inheritance:
finn.transformation.extend\_partition
------------------------------------------
.. automodule:: finn.transformation.extend_partition
:members:
:undoc-members:
:show-inheritance:
finn.transformation.extract\_conv\_bias
------------------------------------------
.. automodule:: finn.transformation.extract_conv_bias
:members:
:undoc-members:
:show-inheritance:
finn.transformation.fold\_constants
------------------------------------------

@@ -65,6 +98,14 @@ finn.transformation.fold\_constants

:undoc-members:
:show-inheritance:
finn.transformation.gemm\_to\_matmul
------------------------------------------
.. automodule:: finn.transformation.gemm_to_matmul
:members:
:undoc-members:
:show-inheritance:
finn.transformation.general
----------------------------------

@@ -113,6 +154,13 @@ finn.transformation.lower\_convs\_to\_matmul

:undoc-members:
:show-inheritance:
finn.transformation.make\_input\_chanlast
------------------------------------------
.. automodule:: finn.transformation.make_input_chanlast
:members:
:undoc-members:
:show-inheritance:
finn.transformation.merge\_onnx\_models
----------------------------------------

@@ -130,3 +178,11 @@ finn.transformation.move\_reshape

:members:
:undoc-members:
:show-inheritance:
finn.transformation.remove
-------------------------------------
.. automodule:: finn.transformation.remove
:members:
:undoc-members:
:show-inheritance:
@@ -26,13 +26,6 @@ finn.transformation.streamline.collapse\_repeated

:undoc-members:
:show-inheritance:
finn.transformation.streamline.remove
-------------------------------------
.. automodule:: finn.transformation.streamline.remove
:members:
:undoc-members:
:show-inheritance:
finn.transformation.streamline.reorder
---------------------------------------------
......
@@ -72,6 +72,15 @@ finn.util.onnx

:undoc-members:
:show-inheritance:
finn.util.platforms
--------------------
.. automodule:: finn.util.platforms
:members:
:undoc-members:
:show-inheritance:
finn.util.pytorch
------------------
......
@@ -92,11 +92,11 @@ SCRIPTPATH=$(dirname "$SCRIPT")

: ${FINN_DOCKER_PREBUILT="0"}
: ${FINN_DOCKER_RUN_AS_ROOT="0"}
: ${FINN_DOCKER_GPU="$(docker info | grep nvidia | wc -m)"}
: ${FINN_DOCKER_EXTRA=""}
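# illustrative usage (hypothetical paths): any extra `docker run` arguments can be passed
# this way, e.g. FINN_DOCKER_EXTRA="-v /opt/licenses:/opt/licenses -e XILINXD_LICENSE_FILE=/opt/licenses"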
: ${NVIDIA_VISIBLE_DEVICES=""}
: ${DOCKER_BUILDKIT="1"}

DOCKER_INTERACTIVE=""
if [ "$1" = "test" ]; then if [ "$1" = "test" ]; then
gecho "Running test suite (all tests)" gecho "Running test suite (all tests)"
...@@ -112,20 +112,20 @@ elif [ "$1" = "notebook" ]; then ...@@ -112,20 +112,20 @@ elif [ "$1" = "notebook" ]; then
JUPYTER_PASSWD_ARG="--NotebookApp.password='$JUPYTER_PASSWD_HASH'" JUPYTER_PASSWD_ARG="--NotebookApp.password='$JUPYTER_PASSWD_HASH'"
fi fi
DOCKER_CMD="jupyter notebook --allow-root --no-browser --ip=0.0.0.0 --port $JUPYTER_PORT $JUPYTER_PASSWD_ARG notebooks" DOCKER_CMD="jupyter notebook --allow-root --no-browser --ip=0.0.0.0 --port $JUPYTER_PORT $JUPYTER_PASSWD_ARG notebooks"
DOCKER_EXTRA+="-e JUPYTER_PORT=$JUPYTER_PORT " FINN_DOCKER_EXTRA+="-e JUPYTER_PORT=$JUPYTER_PORT "
DOCKER_EXTRA+="-e NETRON_PORT=$NETRON_PORT " FINN_DOCKER_EXTRA+="-e NETRON_PORT=$NETRON_PORT "
DOCKER_EXTRA+="-p $JUPYTER_PORT:$JUPYTER_PORT " FINN_DOCKER_EXTRA+="-p $JUPYTER_PORT:$JUPYTER_PORT "
DOCKER_EXTRA+="-p $NETRON_PORT:$NETRON_PORT " FINN_DOCKER_EXTRA+="-p $NETRON_PORT:$NETRON_PORT "
elif [ "$1" = "build_dataflow" ]; then elif [ "$1" = "build_dataflow" ]; then
BUILD_DATAFLOW_DIR=$(readlink -f "$2") BUILD_DATAFLOW_DIR=$(readlink -f "$2")
DOCKER_EXTRA="-v $BUILD_DATAFLOW_DIR:$BUILD_DATAFLOW_DIR " FINN_DOCKER_EXTRA="-v $BUILD_DATAFLOW_DIR:$BUILD_DATAFLOW_DIR "
DOCKER_INTERACTIVE="-it" DOCKER_INTERACTIVE="-it"
#FINN_HOST_BUILD_DIR=$BUILD_DATAFLOW_DIR/build #FINN_HOST_BUILD_DIR=$BUILD_DATAFLOW_DIR/build
gecho "Running build_dataflow for folder $BUILD_DATAFLOW_DIR" gecho "Running build_dataflow for folder $BUILD_DATAFLOW_DIR"
DOCKER_CMD="build_dataflow $BUILD_DATAFLOW_DIR" DOCKER_CMD="build_dataflow $BUILD_DATAFLOW_DIR"
elif [ "$1" = "build_custom" ]; then elif [ "$1" = "build_custom" ]; then
BUILD_CUSTOM_DIR=$(readlink -f "$2") BUILD_CUSTOM_DIR=$(readlink -f "$2")
DOCKER_EXTRA="-v $BUILD_CUSTOM_DIR:$BUILD_CUSTOM_DIR -w $BUILD_CUSTOM_DIR " FINN_DOCKER_EXTRA="-v $BUILD_CUSTOM_DIR:$BUILD_CUSTOM_DIR -w $BUILD_CUSTOM_DIR "
DOCKER_INTERACTIVE="-it" DOCKER_INTERACTIVE="-it"
#FINN_HOST_BUILD_DIR=$BUILD_DATAFLOW_DIR/build #FINN_HOST_BUILD_DIR=$BUILD_DATAFLOW_DIR/build
gecho "Running build_custom: $BUILD_CUSTOM_DIR/build.py" gecho "Running build_custom: $BUILD_CUSTOM_DIR/build.py"
@@ -139,9 +139,9 @@ fi

if [ "$FINN_DOCKER_GPU" != 0 ];then
  gecho "nvidia-docker detected, enabling GPUs"
  if [ ! -z "$NVIDIA_VISIBLE_DEVICES" ];then
    FINN_DOCKER_EXTRA+="--runtime nvidia -e NVIDIA_VISIBLE_DEVICES=$NVIDIA_VISIBLE_DEVICES "
  else
    FINN_DOCKER_EXTRA+="--gpus all "
  fi
fi
@@ -222,7 +222,7 @@ if [ ! -z "$FINN_XILINX_PATH" ];then

    DOCKER_EXEC+="-e ALVEO_TARGET_DIR=$ALVEO_TARGET_DIR "
  fi
fi
DOCKER_EXEC+="$FINN_DOCKER_EXTRA "
DOCKER_EXEC+="$FINN_DOCKER_TAG $DOCKER_CMD"
$DOCKER_EXEC
@@ -74,7 +74,17 @@ exclude =

# PDF = ReportLab; RXP
# finn-base is needed to build the full set of docs
docs =
    finn-base==0.0.3
    docutils==0.17.1
    dataclasses-json==0.5.2
    gspread==3.6.0
    pytest
    netron
    vcdvcd
    torchvision
    torch
    qonnx@git+https://github.com/fastmachinelearning/qonnx@main#egg=qonnx
# Add here test requirements (semicolon/line-separated)
testing =
    pytest
......
@@ -32,7 +32,8 @@ from finn.util.basic import is_finn_op

def verify_nodes(model):
    """Checks if custom ops in graph are correctly built, with all attributes
    and inputs. Please note that many FINN CustomOps don't yet implement the
    verify_node function required for this analysis pass to work correctly.

    Returns {node op_type : info_messages}
......
@@ -89,6 +89,8 @@ class LargeFIFOMemStyle(str, Enum):

class VerificationStepType(str, Enum):
    "Steps at which FINN ONNX execution can be launched for verification."

    #: verify after step_qonnx_to_finn, using Python execution
    QONNX_TO_FINN_PYTHON = "finn_onnx_python"
    #: verify after step_tidy_up, using Python execution
    TIDY_UP_PYTHON = "initial_python"
    #: verify after step_streamline, using Python execution
@@ -103,6 +105,7 @@ class VerificationStepType(str, Enum):

#: specified order. Use the `steps` as part of build config to restrict which
#: steps will be run.
default_build_dataflow_steps = [
    "step_qonnx_to_finn",
    "step_tidy_up",
    "step_streamline",
    "step_convert_to_hls",
@@ -123,6 +126,7 @@ default_build_dataflow_steps = [

#: List of steps to run for an estimate-only (no synthesis) dataflow build
estimate_only_dataflow_steps = [
    "step_qonnx_to_finn",
    "step_tidy_up",
    "step_streamline",
    "step_convert_to_hls",
@@ -291,6 +295,14 @@ class DataflowBuildConfig:

    #: If given, stop at this step.
    stop_step: Optional[str] = None
    #: The optional argument `max_multithreshold_bit_width` affects which Quant nodes
    #: of the QONNX format get converted to the MultiThreshold nodes of FINN. This
    #: only affects Quant nodes in the activation path. Quant nodes that define a
    #: bit width larger than `max_multithreshold_bit_width` are not converted to
    #: MultiThreshold nodes; a warning is raised instead.
    #: If not given, `max_multithreshold_bit_width` defaults to 8.
    max_multithreshold_bit_width: Optional[int] = 8
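    # illustrative usage (hypothetical value): to convert only activation Quant
    # nodes of up to 4 bits, a build config would set e.g.
    #   cfg = DataflowBuildConfig(..., max_multithreshold_bit_width=4)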
    #: Override the number of inputs for rtlsim performance measurement.
    rtlsim_batch_size: Optional[int] = 1
......