diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 5a7f70f8f69293d8dcef9b64c763aa606d5d73f5..126a4ac4b2bee7f3eaaf610646855b48d07b9e32 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -51,7 +51,7 @@ repos: args: ['--fix=no'] - repo: https://github.com/PyCQA/isort - rev: 5.10.1 + rev: 5.12.0 hooks: - id: isort diff --git a/.readthedocs.yaml b/.readthedocs.yaml index 3601fcdccff675e6f850d4636ebbfc0726f7cd4d..478957be113b686c4fabd3d071fdf6203dd37dd3 100644 --- a/.readthedocs.yaml +++ b/.readthedocs.yaml @@ -35,7 +35,7 @@ sphinx: configuration: docs/finn/conf.py python: - version: 3.7 + version: 3.8 install: - method: pip path: . diff --git a/AUTHORS.rst b/AUTHORS.rst index d011ce3d7ad74125b7013b7a7e987eb22e70a9f3..861b81924b187620d77f8cd47d4faff8d7f15bf8 100644 --- a/AUTHORS.rst +++ b/AUTHORS.rst @@ -9,7 +9,7 @@ Contributors * Hendrik Borras (@HenniOVP) * Lucian Petrica (@quetric) * Tobias Alonso (@Tobi-Alonso) -* Felix Paul Jentzsch (@felixpj) +* Felix Paul Jentzsch (@fpjentzsch) * Mirza Mrahorovic (@mmrahorovic) * Suranga Mahesh (@surangamh) * Peter Lehnhardt (@pete-lennart) @@ -26,3 +26,5 @@ Contributors * Aziz Bahri (@azizb-xlnx) * Fionn O'Donohoe (@fionnodonohoe-xlnx) * Matthias Gehre (@mgehre-amd) +* Hugo Le Blevec (@hleblevec) +* Patrick Geel (@patrickgeel) diff --git a/README.md b/README.md index 1b8efc8f19d0b664a17320585f5ea60acbe03eb4..2e1faf8f0c4422c8690506bb5f79611c6661fa9c 100644 --- a/README.md +++ b/README.md @@ -28,7 +28,7 @@ Please see the [Getting Started](https://finn.readthedocs.io/en/latest/getting_s ## Documentation -You can view the documentation on [readthedocs](https://finn.readthedocs.io) or build them locally using `python setup.py doc` from inside the Docker container. Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/master/notebooks), which we recommend running from inside Docker for a better experience. +You can view the documentation on [readthedocs](https://finn.readthedocs.io) or build them locally using `python setup.py doc` from inside the Docker container. Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/main/notebooks), which we recommend running from inside Docker for a better experience. ## Community @@ -67,4 +67,4 @@ The current implementation of the framework is based on the following publicatio ## Old version We previously released an early-stage prototype of a toolflow that took in Caffe-HWGQ binarized network descriptions and produced dataflow architectures. You can find it in the [v0.1](https://github.com/Xilinx/finn/tree/v0.1) branch in this repository. -Please be aware that this version is deprecated and unsupported, and the master branch does not share history with that branch so it should be treated as a separate repository for all purposes. +Please be aware that this version is deprecated and unsupported, and the main branch does not share history with that branch so it should be treated as a separate repository for all purposes. diff --git a/docs/finn/brevitas_export.rst b/docs/finn/brevitas_export.rst index 304aa30854118e1ebd3258169ee4698a873e8689..950b601f98d14e99a00841f23894770eb0bb1569 100644 --- a/docs/finn/brevitas_export.rst +++ b/docs/finn/brevitas_export.rst @@ -16,6 +16,6 @@ Two of the Brevitas-exported ONNX variants can be ingested by FINN: To work with either type of ONNX model, it is loaded into a :ref:`modelwrapper` provided by FINN. 
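As a minimal sketch of the loading step described above (assuming the `qonnx` package that FINN depends on, and a hypothetical export path that is not part of this patch):

```python
# Hedged sketch: load a Brevitas-exported ONNX model into the ModelWrapper.
from qonnx.core.modelwrapper import ModelWrapper

model = ModelWrapper("/tmp/exported_model.onnx")  # hypothetical export path
# The wrapper adds convenience accessors on top of the ONNX protobuf:
input_name = model.graph.input[0].name
print(input_name, model.get_tensor_shape(input_name))
```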
-At this stage we can already use the functional verification flow to simulate the model using Python, this is marked in the graphic with the dotted arrow. For more details please have look at :ref:`verification`.
+At this stage we can already use the functional verification flow to simulate the model using Python. For more details please have a look at :ref:`verification`.

The model can now be further processed in FINN, the next flow step is :ref:`nw_prep`.
diff --git a/docs/finn/command_line.rst b/docs/finn/command_line.rst
index 12e01db5544e847a775d330929d1eea916cae74e..8c37479a28ea7c2ae76bbcce9cf5bfc53646a2cb 100644
--- a/docs/finn/command_line.rst
+++ b/docs/finn/command_line.rst
@@ -105,7 +105,7 @@ The following outputs will be generated regardless of which particular outputs a
The other output products are controlled by the `generate_outputs` field in the
build configuration), and are detailed below.

-* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.ESTIMATE_REPORTS` produces a variety of reports to estimate resource usage and performance *without* running any synthesis. This can be useful for setting up the parallelization and other hardware configuration:
+* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.ESTIMATE_REPORTS` produces a variety of reports to estimate resource usage and performance *without* running any synthesis. This can be useful for setting up the parallelization and other hardware configuration:

  * ``report/estimate_layer_cycles.json`` -- cycles per layer estimation from analytical model
  * ``report/estimate_layer_resources.json`` -- resources per layer estimation from analytical model
@@ -113,31 +113,31 @@ build configuration), and are detailed below.
  * ``report/estimate_network_performance.json`` -- whole-network performance estimation from analytical model
  * ``report/op_and_param_counts.json`` -- per-layer and total number of operations and parameters (independent of parallelization)

-* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.STITCHED_IP`: produces a stitched Vivado IP block design that can be integrated with other FPGA designs in Vivado IPI:
+* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.STITCHED_IP`: produces a stitched Vivado IP block design that can be integrated with other FPGA designs in Vivado IPI:

  * ``stitched_ip/finn_vivado_stitch_proj.xpr`` -- Vivado project (including Vivado IP Integrator block design) to generate the stitched IP
  * ``stitched_ip/ip`` -- exported Vivado IP for the stitched design

-* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.RTLSIM_PERFORMANCE`: measure latency and performance for the stitched IP in RTL simulation, using PyVerilator
+* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.RTLSIM_PERFORMANCE`: measure latency and performance for the stitched IP in RTL simulation, using PyVerilator

  * ``report/rtlsim_performance.json`` -- accelerator throughput and latency from RTL simulation

-* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.OOC_SYNTH` runs out-of-context synthesis for the stitched IP. This is useful for getting post-synthesis resource counts and achievable clock frequency without having to produce a full bitfile with DMA engines:
+* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.OOC_SYNTH` runs out-of-context synthesis for the stitched IP.
This is useful for getting post-synthesis resource counts and achievable clock frequency without having to produce a full bitfile with DMA engines: * ``report/ooc_synth_and_timing.json`` -- resources and achievable clock frequency from out-of-context synthesis -* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.BITFILE` will run Vivado and/or Vitis to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory: +* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.BITFILE` will run Vivado and/or Vitis to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory: * ``bitfile/finn-accel.(bit|xclbin)`` -- generated bitfile depending on platform * ``report/post_synth_resources.xml`` -- FPGA resource utilization after synthesis * ``report/post_route_timing.rpt`` -- post-route timing report -* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.PYNQ_DRIVER` will generate a PYNQ Python driver that can be used to interface the generated accelerator: +* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.PYNQ_DRIVER` will generate a PYNQ Python driver that can be used to interface the generated accelerator: * ``driver/driver.py`` -- Python driver that can be used on PYNQ on Zynq or Alveo platforms to launch the accelerator -* :py:mod:`finn.builder.build_dataflow.DataflowOutputType.DEPLOYMENT_PACKAGE`: +* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.DEPLOYMENT_PACKAGE`: * ``deploy/`` -- deployment package folder with a bitfile and driver, ready to be copied to target hardware platform @@ -153,7 +153,7 @@ and compare it against the expected output that you provide. This is achieved by setting up the following members of the build configuration: -* Set ``verify_steps`` to be a list of :py:mod:`finn.builder.build_dataflow.VerificationStepType` +* Set ``verify_steps`` to be a list of :py:mod:`finn.builder.build_dataflow_config.VerificationStepType` where each element in the list indicates the output of a particular step that will be verified. See the documentation of the ``VerificationStepType`` for more information. diff --git a/docs/finn/developers.rst b/docs/finn/developers.rst index b152dfef66d0eb47e086d3c5cd51174c5df52128..f9252f764c3f8297140f81d7ed42ab2da1218dae 100644 --- a/docs/finn/developers.rst +++ b/docs/finn/developers.rst @@ -12,7 +12,7 @@ Prerequisites Before starting to do development on FINN it's a good idea to start with understanding the basics as a user. Going through all of the -:ref:`tutorials` is strongly recommended if you haven' already done so. +:ref:`tutorials` is strongly recommended if you haven't already done so. Additionally, please review the documentation available on :ref:`internals`. Repository structure @@ -153,7 +153,7 @@ from the FINN root directory as follows: :: - python setup.py test --addopts "-k test_brevitas_debug --pdb" + pytest -k test_brevitas_debug --pdb If you want to run tests in parallel (e.g. to take advantage of a multi-core CPU) diff --git a/docs/finn/end_to_end_flow.rst b/docs/finn/end_to_end_flow.rst index bc5c5230718bcc8dd50334cc1f20c3c84c012ca4..0a022067c38ec3bb3c793d288e0230013ca8b21c 100644 --- a/docs/finn/end_to_end_flow.rst +++ b/docs/finn/end_to_end_flow.rst @@ -9,7 +9,7 @@ As you can see in the picture, FINN has a high modularity and has the property t :scale: 50% :align: center -The white fields show the state of the network representation in the respective step. 
The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into five sections, each of it includes several flow steps. The flow starts in top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vivado HLS and Vivado IPI (orange section). There is also a section for testing and verification in software (red section) and the hardware generation and deployment on the PYNQ board (yellow section).
+The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into five sections, each of which includes several flow steps. The flow starts in the top left corner with Brevitas export, followed by the preparation of the network for Vitis HLS and Vivado IPI. There is also a section for testing and verification in software (in the cloud on the right) and the hardware generation and deployment on the PYNQ board.

This example flow is covered in the `end2end_example <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example>`_ Jupyter notebooks.
For a more detailed overview about the different flow sections, please have a look at the corresponding pages:
diff --git a/docs/finn/getting_started.rst b/docs/finn/getting_started.rst
index 40425c119fafdcd03292b05c7a7e71310f767239..9b3111b70eae97a3644e1de23c368bd5b09f7927 100644
--- a/docs/finn/getting_started.rst
+++ b/docs/finn/getting_started.rst
@@ -20,7 +20,7 @@ How do I use FINN?
==================

We strongly recommend that you first watch one of the pre-recorded `FINN tutorial <https://www.youtube.com/watch?v=zw2aG4PhzmA&%3Bindex=2>`_
-videos, then follow the Jupyter notebook tutorials for `training and deploying an MLP for network intrusion detection <https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example/cybersecurity>`_ .
+videos, then follow the Jupyter notebook tutorials for `training and deploying an MLP for network intrusion detection <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example/cybersecurity>`_ .
You may also want to check out the other :ref:`tutorials`, and the `FINN examples repository <https://github.com/Xilinx/finn-examples>`_ .

Our aim in FINN is *not* to accelerate common off-the-shelf neural networks, but instead provide you with a set of tools
@@ -28,19 +28,19 @@ to train *customized* networks and create highly-efficient FPGA implementations.
In general, the approach for using the FINN framework is as follows:

1. Train your own quantized neural network (QNN) in `Brevitas <https://github.com/Xilinx/brevitas>`_. We have some `guidelines <https://bit.ly/finn-hls4ml-qat-guidelines>`_ on quantization-aware training (QAT).
-2. Export to FINN-ONNX by following `this tutorial <https://github.com/Xilinx/finn/blob/master/notebooks/basics/1_brevitas_network_import.ipynb>`_ .
-3. Use FINN's ``build_dataflow`` system on the exported model by following this `tutorial <https://github.com/Xilinx/finn/blob/master/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb>`_
+2. Export to FINN-ONNX by following `this tutorial <https://github.com/Xilinx/finn/blob/main/notebooks/basics/1_brevitas_network_import.ipynb>`_ .
+3.
Use FINN's ``build_dataflow`` system on the exported model by following this `tutorial <https://github.com/Xilinx/finn/blob/main/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb>`_ 4. Adjust your QNN topology, quantization settings and ``build_dataflow`` configuration to get the desired results. Please note that the framework is still under development, and how well this works will depend on how similar your custom network is to the examples we provide. If there are substantial differences, you will most likely have to write your own Python scripts that call the appropriate FINN compiler functions that process your design correctly, or adding new functions (including -Vivado HLS layers) +Vitis HLS layers) as required. -The `advanced FINN tutorials <https://github.com/Xilinx/finn/tree/master/notebooks/advanced>`_ can be useful here. +The `advanced FINN tutorials <https://github.com/Xilinx/finn/tree/main/notebooks/advanced>`_ can be useful here. For custom networks, we recommend making a copy of the `BNN-PYNQ end-to-end -Jupyter notebook tutorials <https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example/bnn-pynq>`_ as a starting point, visualizing the model at intermediate +Jupyter notebook tutorials <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example/bnn-pynq>`_ as a starting point, visualizing the model at intermediate steps and adding calls to new transformations as needed. Once you have a working flow, you can implement a command line entry for this by using the "advanced mode" described in the :ref:`command_line` section. @@ -50,7 +50,8 @@ Running FINN in Docker FINN runs inside a Docker container, it comes with a script to easily build and launch the container. If you are not familiar with Docker, there are many excellent `online resources <https://docker-curriculum.com/>`_ to get started. You may want to review the :ref:`General FINN Docker tips` and :ref:`Environment variables` as well. If you want to use prebuilt images, read :ref:`Using a prebuilt image`. -The ``run-docker.sh`` script that can be launched in the following modes: + +The above mentioned script to build and launch the FINN docker container is called `run-docker.sh <https://github.com/Xilinx/finn/blob/main/run-docker.sh>`_ . It can be launched in the following modes: Launch interactive shell ************************ @@ -140,10 +141,7 @@ If you are having trouble building the Docker image or need offline access, you Supported FPGA Hardware ======================= -**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Ultra96, ZCU102 and ZCU104 boards. - -.. warning:: - In previous FINN versions (v0.4b - v0.7) we had support for `Xilinx Alveo boards <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ using PYNQ and Vitis 2020.1, see instructions below for Alveo setup that works with older versions. Please note that with the new release with Vitis 2022.1, we do only have experimental support to automatically deployment for Alveo cards. +**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . 
For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Ultra96, ZCU102 and ZCU104 boards, as well as Alveo cards. **Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system. It's up to you to take the FINN-generated accelerator (what we call "stitched IP" in the tutorials), wire it up to your FPGA design and send/receive neural network data to/from the accelerator. @@ -181,12 +179,12 @@ On the target side: On the host side: -1. Install Vitis 2020.1 and set up the ``VITIS_PATH`` environment variable to point to your installation. +1. Install Vitis 2022.1 and set up the ``VITIS_PATH`` environment variable to point to your installation. 2. Install Xilinx XRT. Ensure that the ``XRT_DEB_VERSION`` environment variable reflects which version of XRT you have installed. 3. Install the Vitis platform files for Alveo and set up the ``PLATFORM_REPO_PATHS`` environment variable to point to your installation. *This must be the same path as the target's platform files (target step 2)* 4. Set up the ``ALVEO_*`` environment variables accordingly for your target, see description of environment variables above. 5. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution. -6. Done! You can try the ``test_end2end_vitis`` tests in the FINN Docker to verify your setup, although this will take some time. +6. Done! Vivado/Vitis license ********************* @@ -214,7 +212,7 @@ We also recommend running the FINN compiler on a system with sufficiently strong hardware: * **RAM.** Depending on your target FPGA platform, your system must have sufficient RAM to be - able to run Vivado/Vitis synthesis for that part. See `this page <https://www.xilinx.com/products/design-tools/vivado/memory.html>`_ + able to run Vivado/Vitis synthesis for that part. See `this page <https://www.xilinx.com/products/design-tools/vivado/vivado-ml.html#memory>`_ for more information. For targeting Zynq and Zynq UltraScale+ parts, at least 8 GB is recommended. Larger parts may require up to 16 GB. For targeting Alveo parts with Vitis, at least 64 GB RAM is recommended. diff --git a/docs/finn/hw_build.rst b/docs/finn/hw_build.rst index 2a64b87943075ff004f79c9d457136e41e27723d..a5c486935d531f7a037f3c49ead5bc7906afa831 100644 --- a/docs/finn/hw_build.rst +++ b/docs/finn/hw_build.rst @@ -9,14 +9,14 @@ Hardware Build and Deployment :align: center A model where all layers have been converted to HLS layers can be processed by -FINN to build a bitfile and driver targeting a Zynq system or to generate a Vivado IP Integrator (IPI) +FINN to build a bitfile and driver targeting a Zynq or Alveo system or to generate a Vivado IP Integrator (IPI) design with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system. Hardware Build ============== -Internally, the hardware build for Zynq devices consists of the following steps: +Internally, the hardware build consists of the following steps: 1. Driver generation 2. 
DMA and DWC node insertion @@ -89,9 +89,4 @@ Deployment Deployment and Remote Execution ------------------------------- -The bitfile and the driver file(s) are copied to the PYNQ board and can be executed there using the *onnx_exec* function with the right *exec_mode* settings. For details please have a look at transformation :py:mod:`finn.transformation.fpgadataflow.make_deployment.DeployToPYNQ` and the execution function :py:mod:`finn.core.onnx_exec`. - -Throughput Test ---------------- - -FINN also offers the possibility to measure the network performance directly on the PYNQ board. This can be done by using :py:mod:`finn.core.throughput_test`. When running this function the metrics of the network are returned as dictionary. +The bitfile and the driver file(s) are copied to the PYNQ board and can be executed there. For more information see the description in the `end2end_example <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example>`_ Jupyter notebooks. diff --git a/docs/finn/internals.rst b/docs/finn/internals.rst index 0b33affc76484d2175a336b188661550731ca1ab..add70d649c773061c5b9e1d91dcaa852dcc4cbac 100644 --- a/docs/finn/internals.rst +++ b/docs/finn/internals.rst @@ -7,7 +7,7 @@ Internals Intermediate Representation: QONNX and FINN-ONNX ================================================ -FINN uses `ONNX <https://github.com/onnx/onnx>`_ as an intermediate representation (IR) for neural networks. As such, almost every component inside FINN uses ONNX and its `Python API <https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md>`_, so you may want to familiarize yourself with how ONNX represents DNNs. Specifically, the `ONNX protobuf description <https://github.com/onnx/onnx/blob/master/onnx/onnx.proto>`_ (or its `human-readable documentation <https://github.com/onnx/onnx/blob/master/docs/IR.md>`_ and the `operator schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_ are useful as reference documents. We also provide a Jupyter notebook that can help to get familiar with ONNX by showing how to work with a simple ONNX model in FINN, see chapter :ref:`tutorials` for details. +FINN uses `ONNX <https://github.com/onnx/onnx>`_ as an intermediate representation (IR) for neural networks. As such, almost every component inside FINN uses ONNX and its `Python API <https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md>`_, so you may want to familiarize yourself with how ONNX represents DNNs. Specifically, the `ONNX protobuf description <https://github.com/onnx/onnx/blob/main/onnx/onnx.proto>`_ (or its `human-readable documentation <https://github.com/onnx/onnx/blob/main/docs/IR.md>`_ and the `operator schemas <https://github.com/onnx/onnx/blob/main/docs/Operators.md>`_ are useful as reference documents. We also provide a Jupyter notebook that can help to get familiar with ONNX by showing how to work with a simple ONNX model in FINN, see chapter :ref:`tutorials` for details. .. note:: FINN supports two specialized variants of ONNX called QONNX and FINN-ONNX, and not all ONNX graphs are supported by FINN (and vice versa). @@ -137,14 +137,14 @@ ModelWrapper contains more useful functions, if you are interested please have a Analysis Pass ============= -An analysis pass traverses the graph structure and produces information about certain properties. It gets the model in the ModelWrapper as input and returns a dictionary of the properties the analysis extracts. 
If you are interested in how to write an analysis pass for FINN, please take a look at the Jupyter notebook about how to write an analysis pass, see chapter :ref:`tutorials` for details. For more information about existing analysis passes in FINN, see module :py:mod:`finn.analysis`.
+An analysis pass traverses the graph structure and produces information about certain properties. It gets the model in the ModelWrapper as input and returns a dictionary of the properties the analysis extracts. If you are interested in how to write an analysis pass for FINN, please take a look at the Jupyter notebook about how to write an analysis pass, see chapter :ref:`tutorials` for details. For more information about existing analysis passes in FINN, see module :py:mod:`finn.analysis` .

.. _transformation_pass:

Transformation Pass
===================

-A transformation passes changes (transforms) the given model, it gets the model in the ModelWrapper as input and returns the changed model (ModelWrapper) to the FINN flow. Additional the flag *model_was_changed* which indicates if a transformation has to be performed more than once, is returned. If you are interested in how to write a transformation pass for FINN, please take a look at the Jupyter notebook about how to write a transformation pass, see chapter :ref:`tutorials` for details. For more information about existing transformation passes in FINN, see module :py:mod:`finn.transformation`.
+A transformation pass changes (transforms) the given model: it gets the model in the ModelWrapper as input and returns the changed model (ModelWrapper) to the FINN flow. Additionally, the flag *model_was_changed*, which indicates whether a transformation has to be performed more than once, is returned. If you are interested in how to write a transformation pass for FINN, please take a look at the Jupyter notebook about how to write a transformation pass, see chapter :ref:`tutorials` for details. For more information about existing transformation passes in FINN, see module :py:mod:`finn.transformation` .

.. _mem_mode:

@@ -167,7 +167,7 @@ The following picture shows the idea behind the "const" and "decoupled" mode.

Const mode
----------

-In *const* mode the weights are "baked in" into the Matrix-Vector-Activate-Unit (MVAU), which means they are part of the HLS code. During the IP block generation the weight values are integrated as *params.h* file in the HLS code and synthesized together with it. For the *const* mode IP block generation the `Matrix_Vector_Activate_Batch function <https://github.com/Xilinx/finn-hlslib/blob/19fa1197c09bca24a0f77a7fa04b8d7cb5cc1c1d/mvau.hpp#L93>`_ from the finn-hls library is used, which implements a standard MVAU. The resulting IP block has an input and an output stream, as shown in the above picture on the left. FIFOs in the form of verilog components are connected to these.
+In *const* mode the weights are "baked into" the Matrix-Vector-Activate-Unit (MVAU), which means they are part of the HLS code. During the IP block generation the weight values are integrated as a *params.h* file in the HLS code and synthesized together with it. For the *const* mode IP block generation the `Matrix_Vector_Activate_Batch function <https://github.com/Xilinx/finn-hlslib/blob/master/mvau.hpp#L92>`_ from the finn-hls library is used, which implements a standard MVAU. The resulting IP block has an input and an output stream, as shown in the above picture on the left. FIFOs in the form of Verilog components are connected to these.
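For orientation, a minimal sketch of how the two pass types described in this file are invoked (assuming the `qonnx` dependency and a hypothetical model file; `GiveUniqueNodeNames` is one of the existing tidy-up transformations):

```python
# Hedged sketch: invoking an analysis pass and a transformation pass.
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.general import GiveUniqueNodeNames

model = ModelWrapper("model.onnx")  # hypothetical model file

# An analysis pass is a function of the model that returns a dictionary.
def count_nodes(mdl):
    return {"num_nodes": len(mdl.graph.node)}

print(model.analysis(count_nodes))

# A transformation pass returns the changed model; transform() keeps
# reapplying the pass until it reports model_was_changed == False.
model = model.transform(GiveUniqueNodeNames())
```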
Advantages: @@ -185,7 +185,7 @@ Disadvantages: Decoupled mode -------------- -In *decoupled* mode a different variant of the MVAU with three ports is used. Besides the input and output streams, which are fed into the circuit via Verilog FIFOs, there is another input, which is used to stream the weights. For this the `streaming MVAU <https://github.com/Xilinx/finn-hlslib/blob/07a8353f6cdfd8bcdd81e309a5581044c2a93d3b/mvau.hpp#L213>`_ from the finn-hls library is used. To make the streaming possible a Verilog weight streamer component accesses the weight memory and sends the values via another FIFO to the MVAU. This component can be found in the `finn-rtllib <https://github.com/Xilinx/finn/tree/dev/finn-rtllib>`_ under the name *memstream.v*. For the IP block generation this component, the IP block resulting from the synthesis of the HLS code of the streaming MVAU and a FIFO for the weight stream are combined in a verilog wrapper. The weight values are saved in .dat files and stored in the weight memory from which the weight streamer reads. The resulting verilog component, which is named after the name of the node and has the suffix "_memstream.v", exposes only two ports to the outside, the data input and output. It therefore behaves externally in the same way as the MVAU in *const* mode. +In *decoupled* mode a different variant of the MVAU with three ports is used. Besides the input and output streams, which are fed into the circuit via Verilog FIFOs, there is another input, which is used to stream the weights. For this the `streaming MVAU <https://github.com/Xilinx/finn-hlslib/blob/master/mvau.hpp#L214>`_ from the finn-hls library is used. To make the streaming possible a Verilog weight streamer component accesses the weight memory and sends the values via another FIFO to the MVAU. This component can be found in the `finn-rtllib <https://github.com/Xilinx/finn/tree/dev/finn-rtllib>`_ under the name *memstream.v*. For the IP block generation this component, the IP block resulting from the synthesis of the HLS code of the streaming MVAU and a FIFO for the weight stream are combined in a verilog wrapper. The weight values are saved in .dat files and stored in the weight memory from which the weight streamer reads. The resulting verilog component, which is named after the name of the node and has the suffix "_memstream.v", exposes only two ports to the outside, the data input and output. It therefore behaves externally in the same way as the MVAU in *const* mode. Advantages: diff --git a/docs/finn/nw_prep.rst b/docs/finn/nw_prep.rst index 566eda5bac38855e9ed8edfdf53193bb6c025256..6fea992cf70ad2cb29b385133ccdcf34606b2185 100644 --- a/docs/finn/nw_prep.rst +++ b/docs/finn/nw_prep.rst @@ -10,7 +10,7 @@ Network Preparation The main principle of FINN are analysis and transformation passes. If you like to have more information about these please have a look at section :ref:`analysis_pass` and :ref:`transformation_pass` or at chapter :ref:`tutorials` about the provided Jupyter notebooks. -This page is about the network preparation, the flow step that comes after the :ref:`brevitas_export`. Its main idea is to optimize the network and convert the nodes to custom nodes that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ functions. In this way we get a network that we can bring to hardware with the help of Vivado. For that we have to apply several transformations on the ONNX model, which this flow step receives wrapped in the :ref:`modelwrapper`. 
+This page is about the network preparation, the flow step that comes after the :ref:`brevitas_export`. Its main idea is to optimize the network and convert the nodes to custom nodes that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ functions. In this way we get a network that we can bring to hardware with the help of Vitis and Vivado. For that we have to apply several transformations on the ONNX model, which this flow step receives wrapped in the :ref:`modelwrapper`. Various transformations are involved in the network preparation. The following is a short overview of these. diff --git a/docs/finn/source_code/finn.builder.rst b/docs/finn/source_code/finn.builder.rst index 2433cab83d1aa140010f4082ec8323bdaa8c6ff4..caadf3f91f7c9aa06f04be356e9c3594fc208d2d 100644 --- a/docs/finn/source_code/finn.builder.rst +++ b/docs/finn/source_code/finn.builder.rst @@ -9,9 +9,9 @@ finn.builder.build\_dataflow ---------------------------- .. automodule:: finn.builder.build_dataflow - :members: - :undoc-members: - :show-inheritance: + :members: + :undoc-members: + :show-inheritance: finn.builder.build\_dataflow\_config ------------------------------------ @@ -26,6 +26,6 @@ finn.builder.build\_dataflow\_steps ------------------------------------ .. automodule:: finn.builder.build_dataflow_steps - :members: - :undoc-members: - :show-inheritance: + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/finn/tutorials.rst b/docs/finn/tutorials.rst index 110f77c5b10d2415ac2d2ff7b716567ec5cb76fa..7ac54501cf22a0b123b7b3d156a6a437e8045f22 100644 --- a/docs/finn/tutorials.rst +++ b/docs/finn/tutorials.rst @@ -46,3 +46,8 @@ The notebooks in this folder are more developer oriented. They should help you t * 2_custom_op * Explains the basics of FINN custom ops and how to define a new one. + +FINN Example FPGA Flow Using MNIST Numerals +============================================ + +Next to the Jupyter notebooks above there is a tutorial about the command-line build_dataflow `here <https://github.com/Xilinx/finn/tree/main/tutorials/fpga_flow>`_ which shows how to bring a FINN compiled model into the Vivado FPGA design environment. diff --git a/fetch-repos.sh b/fetch-repos.sh index 7078b284a9bbfdebc6bfe5bd8f7d577bdfcacabc..dd23c33e1bfc17a49ec43158291d6e3b4c1a0c89 100755 --- a/fetch-repos.sh +++ b/fetch-repos.sh @@ -27,12 +27,12 @@ # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
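A hedged sketch of the ``build_dataflow`` entry point that the tutorial above refers to, limited to the estimate reports discussed earlier in this diff (the output directory, board, and model file name are assumptions):

```python
# Hedged sketch: a minimal programmatic build_dataflow run producing
# only the analytical estimate reports, without any synthesis.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_estimates_only",  # hypothetical output folder
    synth_clk_period_ns=10.0,            # target clock period (100 MHz)
    board="Pynq-Z1",                     # assumed PYNQ target board
    generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS],
)
build.build_dataflow_cfg("model.onnx", cfg)  # hypothetical input model
```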
-QONNX_COMMIT="ce321742d98f23909a890ed680a9c99640d7aaab" +QONNX_COMMIT="dd35a8ff49d7225a07ffceeebe25a6361df48349" FINN_EXP_COMMIT="9cbd2787b5160e2b44e0e8164a0df1457dbd5366" BREVITAS_COMMIT="a5b71d6de1389d3e7db898fef72e014842670f03" PYVERILATOR_COMMIT="766e457465f5c0dd315490d7b9cc5d74f9a76f4f" CNPY_COMMIT="4e8810b1a8637695171ed346ce68f6984e585ef4" -HLSLIB_COMMIT="d27f6b6c5d8f1bb208db395659389603f63ad4be" +HLSLIB_COMMIT="4ddfa00b07275a3f1de1c13409e6acb489115fe2" OMX_COMMIT="d1065a788219ca0eb54d5e57600b1f9d7f67d4cc" AVNET_BDF_COMMIT="2d49cfc25766f07792c0b314489f21fe916b639b" XIL_BDF_COMMIT="8cf4bb674a919ac34e3d99d8d71a9e60af93d14e" diff --git a/notebooks/advanced/0_custom_analysis_pass.ipynb b/notebooks/advanced/0_custom_analysis_pass.ipynb index a4ad32ed7f547a4c035b5cbe4da11ebe2565883a..f8444520c3ded795702420d7f86335d0048ef043 100644 --- a/notebooks/advanced/0_custom_analysis_pass.ipynb +++ b/notebooks/advanced/0_custom_analysis_pass.ipynb @@ -137,7 +137,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/advanced/1_custom_transformation_pass.ipynb b/notebooks/advanced/1_custom_transformation_pass.ipynb index e40a534af56352712f20bfb250112aeacfee278f..391e852a71e1109b376abd7bb5d5f9d264d06498 100644 --- a/notebooks/advanced/1_custom_transformation_pass.ipynb +++ b/notebooks/advanced/1_custom_transformation_pass.ipynb @@ -233,7 +233,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/advanced/2_custom_op.ipynb b/notebooks/advanced/2_custom_op.ipynb index 051a406708ee7b4bcbd548b39acac000b473c7cf..636da64dd52fab81f8d6a763d199e8e13e9e3cc0 100644 --- a/notebooks/advanced/2_custom_op.ipynb +++ b/notebooks/advanced/2_custom_op.ipynb @@ -8,14 +8,14 @@ "\n", "Suppose that you want to introduce a new (custom) operation type into the FINN compiler. Custom operations in FINN are useful for a variety of things ranging from code generation to functional verification. This is achieved by creating a new Python module for your custom operation that fulfills certain interface specifications.\n", "\n", - "One thing to point out before we start is that **these custom operations are generic** and not really tied to e.g. Vivado HLS or few-bit quantization. As you will see in this notebook, it's possible to provide arbitrary Python/C/C++/... execution and code generation paths for custom nodes.\n", + "One thing to point out before we start is that **these custom operations are generic** and not really tied to e.g. Vitis HLS or few-bit quantization. As you will see in this notebook, it's possible to provide arbitrary Python/C/C++/... execution and code generation paths for custom nodes.\n", "\n", "## The CustomOp base class\n", "\n", "Subclasses of `CustomOp` provide a way of providing custom functionality for ONNX nodes in FINN.\n", "This is the base class for every custom op node used in the framework, so you must create subclasses of `CustomOp` to provide execution, code generation and other functionalities in FINN.\n", "\n", - "Let's start by looking at the `CustomOp` base class itself, which lives in the `finn-base` repository. You can view it [here](https://github.com/Xilinx/finn-base/blob/dev/src/finn/custom_op/base.py). 
Note that the `finn` Docker container already has `finn-base` set up as a dependency.\n", + "Let's start by looking at the `CustomOp` base class itself, which lives in the `qonnx` repository. You can view it [here](https://github.com/fastmachinelearning/qonnx/blob/main/src/qonnx/custom_op/base.py). Note that the `finn` Docker container already has `qonnx` set up as a dependency.\n", "\n", "Some points of importance:\n", "\n", @@ -23,7 +23,7 @@ "\n", "2. `CustomOp` subclasses need to implement the methods below (those not starting with underscore).\n", "\n", - "3. To be discoverable in the custom op register, `CustomOp` subclasses must set the `domain` field to the name of the Python module they appear in. For instance, to use the custom `Im2Col` op type from [here](https://github.com/Xilinx/finn-base/blob/dev/src/finn/custom_op/general/im2col.py), the ONNX node must use `domain=qonnx.custom_op.general` since its module is located at `finn/custom_op/general/im2col.py`." + "3. To be discoverable in the custom op register, `CustomOp` subclasses must set the `domain` field to the name of the Python module they appear in. For instance, to use the custom `Im2Col` op type from [here](https://github.com/fastmachinelearning/qonnx/blob/main/src/qonnx/custom_op/general/im2col.py), the ONNX node must use `domain=qonnx.custom_op.general` since its module is located at `qonnx/custom_op/general/im2col.py`." ] }, { @@ -130,7 +130,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "To make sure our custom op is available, it needs to be registered. The best practice for this is to create a submodule under `finn.custom_op` which includes a `custom_op` dictionary that maps strings (op names) to classes (op implementations). Since we're in a Jupyter notebook we'll just hijack it at runtime like this:" + "To make sure our custom op is available, it needs to be registered. The best practice for this is to create a submodule under `qonnx.custom_op` which includes a `custom_op` dictionary that maps strings (op names) to classes (op implementations). Since we're in a Jupyter notebook we'll just hijack it at runtime like this:" ] }, { @@ -658,7 +658,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/basics/0_how_to_work_with_onnx.ipynb b/notebooks/basics/0_how_to_work_with_onnx.ipynb index b6a5a0481574928ef490d8bb55bbbe2bb882b951..35a83ea97b87bbe78ae1ff58a5ee50a0b0420a8f 100644 --- a/notebooks/basics/0_how_to_work_with_onnx.ipynb +++ b/notebooks/basics/0_how_to_work_with_onnx.ipynb @@ -24,7 +24,7 @@ "source": [ "### How to create a simple ONNX model\n", "\n", - "To explain how to create an ONNX model a simple example with mathematical operations is used. All nodes are from the [standard operations library of ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md).\n", + "To explain how to create an ONNX model a simple example with mathematical operations is used. All nodes are from the [standard operations library of ONNX](https://github.com/onnx/onnx/blob/main/docs/Operators.md).\n", "\n", "First ONNX is imported, then the helper function can be used to make a node." ] @@ -305,7 +305,7 @@ "source": [ "### How to manipulate an ONNX model\n", "\n", - "In the model there are two successive adder nodes. 
An adder node in ONNX can only add two inputs, but there is also the [**sum**](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Sum) node, which can process more than two inputs. So it would be a reasonable change of the graph to combine the two successive adder nodes to one sum node."
+    "In the model there are two successive adder nodes. An adder node in ONNX can only add two inputs, but there is also the [**sum**](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sum) node, which can process more than two inputs. So it would be a reasonable change of the graph to combine the two successive adder nodes into one sum node."
   ]
  },
  {
@@ -599,7 +599,7 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "Python 3",
+    "display_name": "Python 3 (ipykernel)",
     "language": "python",
     "name": "python3"
    },
diff --git a/notebooks/basics/1_brevitas_network_import.ipynb b/notebooks/basics/1_brevitas_network_import.ipynb
index 5fb29754dc0ad56c2d07c783cf43102975b1621b..a884e90d7572789fc64cf9b953b5730590d4e8f1 100644
--- a/notebooks/basics/1_brevitas_network_import.ipynb
+++ b/notebooks/basics/1_brevitas_network_import.ipynb
@@ -297,7 +297,7 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "Python 3",
+    "display_name": "Python 3 (ipykernel)",
     "language": "python",
     "name": "python3"
    },
diff --git a/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb b/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
index 28155d6f3eacd4dfd77aefbc73fc4ed3ef12f1dd..388accad3aa2bb2633fb691241d234442d59bb11 100644
--- a/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
+++ b/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
@@ -46,7 +46,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into 5 sections represented by a different color, each of it includes several flow steps. The flow starts in top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vivado HLS synthesis and Vivado IPI stitching (orange section), and finally building a PYNQ overlay bitfile and testing it on a PYNQ board (yellow section).\n",
+    "The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into 5 sections represented by a different color, each of which includes several flow steps. The flow starts in the top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vitis HLS synthesis and Vivado IPI stitching (orange section), and finally building a PYNQ overlay bitfile and testing it on a PYNQ board (yellow section).\n",
    "There is an additional section for functional verification (red section) on the left side of the diagram, which we will not cover in this notebook. For details please take a look in the verification notebook which you can find [here](tfc_end2end_verification.ipynb)\n",
    "\n",
    "\n",
@@ -199,7 +199,7 @@
    "\n",
    "\n",
    "\n",
-    "Note how the convolution layer looks very similar to the fully connected one in terms of the matrix-vector-threshold unit (MVTU), but now the MVTU is preceded by a sliding window unit that produces the matrix from the input image.
All of these building blocks, including the `MaxPool` layer you see in this figure, exist as templated Vivado HLS C++ functions in [finn-hlslib](https://github.com/Xilinx/finn-hlslib).\n", + "Note how the convolution layer looks very similar to the fully connected one in terms of the matrix-vector-threshold unit (MVTU), but now the MVTU is preceded by a sliding window unit that produces the matrix from the input image. All of these building blocks, including the `MaxPool` layer you see in this figure, exist as templated Vitis HLS C++ functions in [finn-hlslib](https://github.com/Xilinx/finn-hlslib).\n", "\n", "\n", "To target this kind of hardware architecture with our network we'll apply a convolution lowering transformation, in addition to streamlining. You may recall the *streamlining transformation* that we applied to the TFC-w1a1 network, which is a series of mathematical simplifications that allow us to get rid of floating point scaling operations by implementing few-bit activations as thresholding operations. \n", @@ -462,11 +462,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 5. Deployment and Remote Execution\n", + "## 5. Deployment and Execution\n", "\n", - "Now that we're done with the hardware generation, we can copy the necessary files onto our PYNQ board.\n", - "\n", - "**Make sure you've [set up the SSH keys for your PYNQ board](https://finn-dev.readthedocs.io/en/latest/getting_started.html#pynq-board-first-time-setup) before executing this step.**" + "The bitfile and generated driver files(s) will be copied into a deployment folder which then can be used to run the network on the PYNQ board." ] }, { @@ -475,33 +473,33 @@ "metadata": {}, "outputs": [], "source": [ - "import os\n", + "from shutil import copy\n", + "from distutils.dir_util import copy_tree\n", + "\n", + "# create directory for deployment files\n", + "deployment_dir = make_build_dir(prefix=\"pynq_deployment_\")\n", + "model.set_metadata_prop(\"pynq_deployment_dir\", deployment_dir)\n", "\n", - "# set up the following values according to your own environment\n", - "# FINN will use ssh to deploy and run the generated accelerator\n", - "ip = \"192.168.2.99\"\n", - "username = os.getenv(\"PYNQ_USERNAME\", \"xilinx\")\n", - "password = os.getenv(\"PYNQ_PASSWORD\", \"xilinx\")\n", - "port = os.getenv(\"PYNQ_PORT\", 22)\n", - "target_dir = os.getenv(\"PYNQ_TARGET_DIR\", \"/home/xilinx/finn_cnv_end2end_example\")\n", - "# set up ssh options to only allow publickey authentication\n", - "options = \"-o PreferredAuthentications=publickey -o PasswordAuthentication=no\"\n", + "# get and copy necessary files\n", + "# .bit and .hwh file\n", + "bitfile = model.get_metadata_prop(\"bitfile\")\n", + "hwh_file = model.get_metadata_prop(\"hw_handoff\")\n", + "deploy_files = [bitfile, hwh_file]\n", "\n", - "# test access to PYNQ board\n", - "! 
ssh {options} {username}@{ip} -p {port} cat /var/run/motd.dynamic" + "for dfile in deploy_files:\n", + " if dfile is not None:\n", + " copy(dfile, deployment_dir)\n", + "\n", + "# driver.py and python libraries\n", + "pynq_driver_dir = model.get_metadata_prop(\"pynq_driver_dir\")\n", + "copy_tree(pynq_driver_dir, deployment_dir)" ] }, { - "cell_type": "code", - "execution_count": null, + "cell_type": "markdown", "metadata": {}, - "outputs": [], "source": [ - "from finn.transformation.fpgadataflow.make_deployment import DeployToPYNQ\n", - "\n", - "model = ModelWrapper(build_dir + \"/end2end_cnv_w1a1_synth.onnx\")\n", - "model = model.transform(DeployToPYNQ(ip, port, username, password, target_dir))\n", - "model.save(build_dir + \"/end2end_cnv_w1a1_pynq_deploy.onnx\")" + "Next to these files, we will also need an example numpy array to test the network on the PYNQ board. (*and before you ask, that's supposed to be a cat (CIFAR-10 class number 3)*) Recall that we partitioned our original network into a parent graph that contained the non-synthesizable nodes and a child graph that contained the bulk of the network, which we turned into a bitfile. The only operator left outside the FPGA partition was a `Transpose` to convert NCHW images into NHWC ones. Thus, we can skip the execution in the parent as long as we ensure our image has the expected data layout. The example numpy array can then be saved as .npy file." ] }, { @@ -510,8 +508,14 @@ "metadata": {}, "outputs": [], "source": [ - "target_dir_pynq = target_dir + \"/\" + model.get_metadata_prop(\"pynq_deployment_dir\").split(\"/\")[-1]\n", - "target_dir_pynq" + "import pkg_resources as pk\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "fn = pk.resource_filename(\"finn.qnn-data\", \"cifar10/cifar10-test-data-class3.npz\")\n", + "x = np.load(fn)[\"arr_0\"]\n", + "x = x.reshape(3, 32,32).transpose(1, 2, 0)\n", + "plt.imshow(x)" ] }, { @@ -520,14 +524,19 @@ "metadata": {}, "outputs": [], "source": [ - "! ssh {options} {username}@{ip} -p {port} 'ls -l {target_dir_pynq}'" + "model = ModelWrapper(build_dir + \"/end2end_cnv_w1a1_synth.onnx\")\n", + "iname = model.graph.input[0].name\n", + "ishape = model.get_tensor_shape(iname)\n", + "np.save(deployment_dir + \"/input.npy\", x.reshape(ishape))" ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": null, "metadata": {}, + "outputs": [], "source": [ - "We only have two more steps to be able to remotely execute the deployed bitfile with some test data from the CIFAR-10 dataset. Let's load up some test data that comes bundled with FINN -- *and before you ask, that's supposed to be a cat (CIFAR-10 class number 3)*." + "! ls {deployment_dir}" ] }, { @@ -536,54 +545,34 @@ "metadata": {}, "outputs": [], "source": [ - "import pkg_resources as pk\n", - "import matplotlib.pyplot as plt\n", - "import numpy as np\n", - "\n", - "fn = pk.resource_filename(\"finn.qnn-data\", \"cifar10/cifar10-test-data-class3.npz\")\n", - "x = np.load(fn)[\"arr_0\"]\n", - "x = x.reshape(3, 32,32).transpose(1, 2, 0)\n", - "plt.imshow(x)" + "from shutil import make_archive\n", + "make_archive('deploy-on-pynq-cnv', 'zip', deployment_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Recall that we partitioned our original network into a parent graph that contained the non-synthesizable nodes and a child graph that contained the bulk of the network, which we turned into a bitfile. 
The only operator left outside the FPGA partition was a `Transpose` to convert NCHW images into NHWC ones. Thus, we can skip the execution in the parent as long as we ensure our image has the expected data layout, which we have done above."
+    "You can now download the created zipfile (File -> Open, mark the checkbox next to the deploy-on-pynq-cnv.zip and select Download from the toolbar), then copy it to your PYNQ board (for instance via scp or rsync). Then, run the following commands on the PYNQ board to extract the archive and run the execution:"
   ]
  },
  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import numpy as np\n",
-    "from finn.core.onnx_exec import execute_onnx\n",
-    "\n",
-    "model = ModelWrapper(build_dir + \"/end2end_cnv_w1a1_pynq_deploy.onnx\")\n",
-    "iname = model.graph.input[0].name\n",
-    "oname = model.graph.output[0].name\n",
-    "ishape = model.get_tensor_shape(iname)\n",
-    "input_dict = {iname: x.astype(np.float32).reshape(ishape)}\n",
-    "ret = execute_onnx(model, input_dict, True)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "ret[oname]"
+    "```shell\n",
+    "unzip deploy-on-pynq-cnv.zip -d finn-cnv-demo\n",
+    "cd finn-cnv-demo\n",
+    "sudo python3 -m pip install bitstring\n",
+    "sudo python3 driver.py --exec_mode=execute --batchsize=1 --bitfile=resizer.bit --inputfile=input.npy\n",
+    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "We see that the network correctly predicts this as a class 3 (\"cat\"). "
+    "The output will be saved on the PYNQ board as `output.npy` and can be copied to the host and opened with `np.load()`."
   ]
  },
  {
@@ -592,7 +581,7 @@
   "source": [
    "### Validating the Accuracy on a PYNQ Board <a id='validation'></a>\n",
    "\n",
-    "All the command line prompts here are meant to be executed with `sudo` on the PYNQ board, so we'll use a workaround (`echo password | sudo -S command`) to get that working from this notebook running on the host computer.\n",
+    "All the command line prompts here are meant to be executed with `sudo` on the PYNQ board.\n",
    "\n",
    "**Ensure that your PYNQ board has a working internet connecting for the next steps, since some there is some downloading involved.**\n",
    "\n",
@@ -601,16 +590,9 @@
    "\n",
    "Command to execute on PYNQ:\n",
    "\n",
-    "```pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading```"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "! ssh {options} -t {username}@{ip} -p {port} 'echo {password} | sudo -S pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading'"
+    "```shell\n",
+    "sudo pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading\n",
+    "```"
   ]
  },
  {
@@ -621,16 +603,9 @@
    "\n",
    "Command to execute on PYNQ:\n",
    "\n",
-    "`python3.6 validate.py --dataset cifar10 --batchsize 1000`"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "!
ssh {options} -t {username}@{ip} -p {port} 'cd {target_dir_pynq}; echo {password} | sudo -S python3.6 validate.py --dataset cifar10 --batchsize 1000'"
+    "```shell\n",
+    "sudo python3 validate.py --dataset cifar10 --batchsize 1000\n",
+    "```"
   ]
  },
  {
@@ -643,7 +618,7 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "Python 3",
+    "display_name": "Python 3 (ipykernel)",
     "language": "python",
     "name": "python3"
    },
diff --git a/notebooks/end2end_example/bnn-pynq/tfc_end2end_example.ipynb b/notebooks/end2end_example/bnn-pynq/tfc_end2end_example.ipynb
index c4fc92b97c91d6b1dfadc41ac3c23d014bd9fada..eec17b2fa7e8226cafe48f095aa38eb704b2812e 100644
--- a/notebooks/end2end_example/bnn-pynq/tfc_end2end_example.ipynb
+++ b/notebooks/end2end_example/bnn-pynq/tfc_end2end_example.ipynb
@@ -33,7 +33,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into 5 sections represented by a different color, each of it includes several flow steps. The flow starts in top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vivado HLS synthesis and Vivado IPI stitching (orange section), and finally building a PYNQ overlay bitfile and testing it on a PYNQ board (yellow section).\n",
+    "The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into 5 sections represented by a different color, each of which includes several flow steps. The flow starts in the top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vitis HLS synthesis and Vivado IPI stitching (orange section), and finally building a PYNQ overlay bitfile and testing it on a PYNQ board (yellow section).\n",
    "There is an additional section for functional verification (red section) on the right side of the diagram, which we will not cover in this notebook. For details please take a look in the verification notebook which you can find [here](tfc_end2end_verification.ipynb)\n",
    "\n",
    "\n",
@@ -161,7 +161,7 @@
    "\n",
    "\n",
    "\n",
-    "In practice, the compute arrays are instantiated by function calls to optimized Vivado HLS building blocks from the [finn-hlslib](https://github.com/Xilinx/finn-hlslib) library. As these function calls can only handle certain patterns/cases, we need to transform the network into an appropriate form so that we can replace network layers with these function calls, which is the goal of the network preparation process."
+    "In practice, the compute arrays are instantiated by function calls to optimized Vitis HLS building blocks from the [finn-hlslib](https://github.com/Xilinx/finn-hlslib) library. As these function calls can only handle certain patterns/cases, we need to transform the network into an appropriate form so that we can replace network layers with these function calls, which is the goal of the network preparation process."
   ]
  },
  {
@@ -248,7 +248,7 @@
    "\n",
    "In FINN, we can bake some of these pre/postprocessing operatings into the graph, and in some cases these can be highly beneficial for performance by allowing our accelerator to directly consume raw data instead of going through CPU preprocessing. 
\n", "\n", - "We'll demonstrate this for our small image classification network as follows. Brevitas preprocesses BNN-PYNQ network inputs with `torchvision.transforms.ToTensor()` [prior to training](https://github.com/Xilinx/brevitas/blob/master/src/brevitas_examples/bnn_pynq/trainer.py#L104), which converts 8-bit RGB values into floats between 0 and 1 by dividing the input by 255. We can achieve the same effect in FINN by exporting a single-node ONNX graph for division by 255 (which already exists as `finn.util.pytorch.ToTensor` and merging this with our original model. Finally, we're going to mark our input tensor as 8-bit to let FINN know which level of precision to use." + "We'll demonstrate this for our small image classification network as follows. Brevitas preprocesses BNN-PYNQ network inputs with `torchvision.transforms.ToTensor()` [prior to training](https://github.com/Xilinx/brevitas/blob/master/src/brevitas_examples/bnn_pynq/trainer.py#L86), which converts 8-bit RGB values into floats between 0 and 1 by dividing the input by 255. We can achieve the same effect in FINN by exporting a single-node ONNX graph for division by 255 (which already exists as `finn.util.pytorch.ToTensor` and merging this with our original model. Finally, we're going to mark our input tensor as 8-bit to let FINN know which level of precision to use." ] }, { @@ -343,7 +343,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As can be seen, several transformations are involved in the streamlining transformation. There are move and collapse transformations. In the last step the operations are transformed into multithresholds. The involved transformations can be viewed in detail [here](https://github.com/Xilinx/finn/tree/master/src/finn/transformation/streamline). After each transformation, three of the tidy-up transformations (`GiveUniqueNodeNames`, `GiveReadableTensorNames` and `InferDataTypes`) are applied to the model.\n", + "As can be seen, several transformations are involved in the streamlining transformation. There are move and collapse transformations. In the last step the operations are transformed into multithresholds. The involved transformations can be viewed in detail [here](https://github.com/Xilinx/finn/tree/main/src/finn/transformation/streamline). After each transformation, three of the tidy-up transformations (`GiveUniqueNodeNames`, `GiveReadableTensorNames` and `InferDataTypes`) are applied to the model.\n", "\n", "After streamlining the network looks as follows:" ] @@ -525,7 +525,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can use the higher-level [HLSCustomOp](https://github.com/Xilinx/finn/blob/main/src/finn/custom_op/fpgadataflow/__init__.py) wrappers for this node. These wrappers provide easy access to specific properties of these nodes, such as the folding factors (PE and SIMD). Let's have a look at which node attributes are defined by the CustomOp wrapper, and adjust the SIMD and PE attributes." + "We can use the higher-level [HLSCustomOp](https://github.com/Xilinx/finn/blob/main/src/finn/custom_op/fpgadataflow/hlscustomop.py) wrappers for this node. These wrappers provide easy access to specific properties of these nodes, such as the folding factors (PE and SIMD). Let's have a look at which node attributes are defined by the CustomOp wrapper, and adjust the SIMD and PE attributes." 
] }, { @@ -547,7 +547,7 @@ "metadata": {}, "source": [ "We can see that the PE and SIMD are listed as node attributes, as well as the depths of the FIFOs that will be inserted between consecutive layers, and all can be adjusted using `set_nodeattr` subject to certain constraints. There are also a lot of additional attributes that can be set for this node type.\n", - "**In this notebook we are setting the folding factors and FIFO depths manually, but in a future version we will support determining the folding factors given an FPGA resource budget according to the analytical model from the [FINN-R paper](https://arxiv.org/pdf/1809.04570).**" + "**In this notebook we are setting the folding factors and FIFO depths manually, but it is possible to use FINN transformations for this ([SetFolding](https://github.com/Xilinx/finn/blob/main/src/finn/transformation/fpgadataflow/set_folding.py) and [InsertAndSetFIFODepths](https://github.com/Xilinx/finn/blob/main/src/finn/transformation/fpgadataflow/set_fifo_depths.py)).**" ] }, { @@ -609,7 +609,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This completes the network preparation and the network can be passed on to the next block *Vivado HLS and IPI*, which is described below." + "This completes the network preparation and the network can be passed on to the next block *Vitis HLS and IPI*, which is described below." ] }, { @@ -798,23 +798,21 @@ "source": [ "## 4. PYNQ deployment <a id='hw_test'></a>\n", "\n", - "* [Deployment and Remote Execution](#deploy)\n", + "* [Deployment](#deploy)\n", "* [Validation on PYNQ Board](#validation)\n", "* [Throughput Test on PYNQ Board](#throughput)\n", "\n", "\n", - "We are almost done preparing our hardware design. We'll now put it in a form suitable for use as a PYNQ overlay, synthesize and deploy it." + "The bitfile and generated driver will be copied, together with some files necessary for execution, into a deployment folder, which can then be used to run the network on the PYNQ board." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Deployment and Remote Execution <a id='deploy'></a>\n", + "### Deployment <a id='deploy'></a>\n", "\n", - "We'll now use the `DeployToPYNQ` transformation to create a deployment folder with the bitfile and driver file(s), and copy that to the PYNQ board. You can change the default IP address, username, password and target folder for the PYNQ below.\n", - "\n", - "**Make sure you've [set up the SSH keys for your PYNQ board](https://finn-dev.readthedocs.io/en/latest/getting_started.html#pynq-board-first-time-setup) before executing this step.**" + "We'll now create a deployment folder with the bitfile and driver file(s), zip it, and then copy it to the PYNQ board for execution and validation." 
] }, { @@ -823,74 +821,33 @@ "metadata": {}, "outputs": [], "source": [ - "import os\n", + "from shutil import copy\n", + "from distutils.dir_util import copy_tree\n", "\n", - "# set up the following values according to your own environment\n", - "# FINN will use ssh to deploy and run the generated accelerator\n", - "ip = \"192.168.2.99\"\n", - "username = os.getenv(\"PYNQ_USERNAME\", \"xilinx\")\n", - "password = os.getenv(\"PYNQ_PASSWORD\", \"xilinx\")\n", - "port = os.getenv(\"PYNQ_PORT\", 22)\n", - "target_dir = os.getenv(\"PYNQ_TARGET_DIR\", \"/home/xilinx/finn_tfc_end2end_example\")\n", - "# set up ssh options to only allow publickey authentication\n", - "options = \"-o PreferredAuthentications=publickey -o PasswordAuthentication=no\"\n", + "# create directory for deployment files\n", + "deployment_dir = make_build_dir(prefix=\"pynq_deployment_\")\n", + "model.set_metadata_prop(\"pynq_deployment_dir\", deployment_dir)\n", "\n", - "# test access to PYNQ board\n", - "! ssh {options} {username}@{ip} -p {port} cat /var/run/motd.dynamic" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from finn.transformation.fpgadataflow.make_deployment import DeployToPYNQ\n", + "# get and copy necessary files\n", + "# .bit and .hwh file\n", + "bitfile = model.get_metadata_prop(\"bitfile\")\n", + "hwh_file = model.get_metadata_prop(\"hw_handoff\")\n", + "deploy_files = [bitfile, hwh_file]\n", "\n", - "model = model.transform(DeployToPYNQ(ip, port, username, password, target_dir))\n", - "model.save(build_dir + \"/tfc_w1_a1_pynq_deploy.onnx\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's verify that the remote access credentials is saved in the model metadata, and that the deployment folder has been successfully copied to the board:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model.model.metadata_props" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "target_dir_pynq = target_dir + \"/\" + model.get_metadata_prop(\"pynq_deployment_dir\").split(\"/\")[-1]\n", - "target_dir_pynq" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "! ssh {options} {username}@{ip} -p {port} 'ls -l {target_dir_pynq}'" + "for dfile in deploy_files:\n", + "    if dfile is not None:\n", + "        copy(dfile, deployment_dir)\n", + "\n", + "# driver.py and python libraries\n", + "pynq_driver_dir = model.get_metadata_prop(\"pynq_driver_dir\")\n", + "copy_tree(pynq_driver_dir, deployment_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We only have two more steps to be able to remotely execute the deployed bitfile with some test data from the MNIST dataset. Let's load up some test data that comes bundled with FINN." + "In addition to these files, we will also need an example numpy array to test the network on the PYNQ board. You may recall that one \"reshape\" node was left out of the StreamingDataflowPartition. We'll apply that reshape manually with a numpy function call when passing in the input, but everything else in the network ended up inside the StreamingDataflowPartition so that's all we need to do. The example numpy array can then be saved as a .npy file. 
" ] }, { @@ -914,18 +871,23 @@ "metadata": {}, "outputs": [], "source": [ - "model = ModelWrapper(build_dir + \"/tfc_w1_a1_pynq_deploy.onnx\")\n", + "import numpy as np\n", + "\n", + "model = ModelWrapper(build_dir + \"/tfc_w1_a1_post_synthesis.onnx\")\n", "iname = model.graph.input[0].name\n", "oname = parent_model.graph.output[0].name\n", "ishape = model.get_tensor_shape(iname)\n", - "print(\"Expected network input shape is \" + str(ishape))" + "print(\"Expected network input shape is \" + str(ishape))\n", + "np.save(deployment_dir + \"/input.npy\", x.reshape(ishape))" ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": null, "metadata": {}, + "outputs": [], "source": [ - "Finally, we can call `execute_onnx` on the graph, which will internally call remote execution with the bitfile, grab the results and return a numpy array. You may recall that one \"reshape\" node was left out of the StreamingDataflowPartition. We'll do that manually with a numpy function call when passing in the input, but everything else in the network ended up inside the StreamingDataflowPartition so that's all we need to do." + "! ls {deployment_dir}" ] }, { @@ -934,27 +896,34 @@ "metadata": {}, "outputs": [], "source": [ - "import numpy as np\n", - "from finn.core.onnx_exec import execute_onnx\n", - "\n", - "input_dict = {iname: x.reshape(ishape)}\n", - "ret = execute_onnx(model, input_dict)" + "from shutil import make_archive\n", + "make_archive('deploy-on-pynq-tfc', 'zip', deployment_dir)" ] }, { - "cell_type": "code", - "execution_count": null, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can now download the created zipfile (**File -> Open**, mark the checkbox next to the `deploy-on-pynq-tfc.zip` and select Download from the toolbar), then copy it to your PYNQ board (for instance via `scp` or `rsync`). Then, run the following commands **on the PYNQ board** to extract the archive and run the execution:" + ] + }, + { + "cell_type": "markdown", "metadata": {}, - "outputs": [], "source": [ - "ret[oname]" + "```shell\n", + "unzip deploy-on-pynq-tfc.zip -d finn-tfc-demo\n", + "cd finn-tfc-demo\n", + "sudo python3 -m pip install bitstring\n", + "sudo python3 driver.py --exec_mode=execute --batchsize=1 --bitfile=resizer.bit --inputfile=input.npy\n", + "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We see that the network correctly predicts this as a digit 2." + "The output will be saved on the PYNQ board as `output.npy` and can be copied to the host and opened with `np.load()`." ] }, { @@ -963,25 +932,16 @@ "source": [ "### Validating the Accuracy on a PYNQ Board <a id='validation'></a>\n", "\n", - "All the command line prompts here are meant to be executed with `sudo` on the PYNQ board, so we'll use a workaround (`echo password | sudo -S command`) to get that working from this notebook running on the host computer.\n", - "\n", "**Ensure that your PYNQ board has a working internet connecting for the next steps, since there is some downloading involved.**\n", "\n", "To validate the accuracy, we first need to install the [`dataset-loading`](https://github.com/fbcotter/dataset_loading) Python package to the PYNQ board. 
This will give us a convenient way of downloading and accessing the MNIST dataset.\n", "\n", "\n", - "Command to execute on PYNQ:\n", + "Command to execute on PYNQ board:\n", "\n", - "```sudo pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading```" ] }, { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "! ssh {options} -t {username}@{ip} -p {port} 'echo {password} | sudo -S pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading'" + "```shell\n", + "sudo pip3 install git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading\n", + "```" ] }, { @@ -990,18 +950,11 @@ "source": [ "We can now use the `validate.py` script that was generated together with the driver to measure top-1 accuracy on the MNIST dataset.\n", "\n", - "Command to execute on PYNQ:\n", + "Command to execute on PYNQ board:\n", "\n", - "`python3.6 validate.py --dataset mnist --batchsize 1000`" ] }, { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "! ssh {options} -t {username}@{ip} -p {port} 'cd {target_dir_pynq}; echo {password} | sudo -S python3.6 validate.py --dataset mnist --batchsize 1000'" + "```shell\n", + "sudo python3 validate.py --dataset mnist --batchsize 1000\n", + "```" ] }, { @@ -1016,60 +969,30 @@ "metadata": {}, "source": [ "### Throughput Test on PYNQ Board <a id='throughput'></a>\n", - "In addition to the functional verification, FINN also offers the possibility to measure the network performance directly on the PYNQ board. This can be done using the core function `throughput_test`. In the next section we import the function and execute it.\n", - "First we extract the `remote_exec_model` again and pass it to the function. The function returns the metrics of the network as dictionary. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from finn.core.throughput_test import throughput_test_remote\n", - "\n", - "model = ModelWrapper(build_dir + \"/tfc_w1_a1_pynq_deploy.onnx\")\n", - "res = throughput_test_remote(model, 10000)\n", - "print(\"Network metrics:\")\n", - "for key in res:\n", - "    print(str(key) + \": \" + str(res[key]))" + "In addition to the functional verification, FINN also offers the possibility to measure the network performance directly on the PYNQ board. This can be done by setting the `exec_mode` to `throughput_test`. \n", + "Command to execute on PYNQ board:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Together with the values for folding we can evaluate the performance of our accelerator. Each layer has a total folding factor of 64 and because the network is fully pipelined, it follows: `II = 64`. II is the initiation interval and indicates how many cycles are needed for one input to be processed. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "II = 64\n", - "# frequency in MHz\n", - "f_MHz = 100\n", - "# expected throughput in MFPS\n", - "expected_throughput = f_MHz / II\n", - "# measured throughput (FPS) from throughput test, converted to MFPS\n", - "measured_throughput = res[\"throughput[images/s]\"] * 0.000001\n", - "# peformance\n", - "print(\"We reach approximately \" + str(round((measured_throughput / expected_throughput)*100)) + \"% of the ideal performance.\")" + "```shell\n", + "sudo python3 driver.py --exec_mode=throughput_test --batchsize=1000 --bitfile=resizer.bit\n", + "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "The measured values were recorded with a batch size of 10000 and at a frequency of 100 MHz. We will be improving the efficiency of the generated accelerator examples in the coming FINN releases." + "The network metrics from the throughput test are saved in a file called `nw_metrics.txt` on the PYNQ board. Which can be investigated after running the command above." ] } ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb b/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb index 813127197e07e4ddb5ec5ff39aed0278e117babc..6c3b7965098e013fa35ac5f5b2b481e678d68f5d 100644 --- a/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb +++ b/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb @@ -61,7 +61,7 @@ "fc = get_test_model_trained(\"TFC\", 1, 1)\n", "raw_i = get_data(\"qonnx.data\", \"onnx/mnist-conv/test_data_set_0/input_0.pb\")\n", "input_tensor = onnx.load_tensor_from_string(raw_i)\n", - "input_brevitas = torch.from_numpy(nph.to_array(input_tensor)).float()\n", + "input_brevitas = torch.from_numpy(nph.to_array(input_tensor).copy()).float()\n", "output_golden = fc.forward(input_brevitas).detach().numpy()\n", "output_golden" ] @@ -72,7 +72,7 @@ "source": [ "## Simulation using Python <a id='simpy'></a>\n", "\n", - "If an ONNX model consists of [standard ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md) nodes and/or FINN custom operations that do not belong to the fpgadataflow (`backend` $\\neq$ `fpgadataflow`) this model can be checked for functionality using Python.\n", + "If an ONNX model consists of [standard ONNX](https://github.com/onnx/onnx/blob/main/docs/Operators.md) nodes and/or FINN custom operations that do not belong to the fpgadataflow (`backend` $\\neq$ `fpgadataflow`) this model can be checked for functionality using Python.\n", "\n", "To simulate a standard ONNX node [onnxruntime](https://github.com/microsoft/onnxruntime) is used. onnxruntime is an open source tool developed by Microsoft to run standard ONNX nodes. For the FINN custom op nodes execution, functions are defined. 
The following is an example of the execution function of an XNOR popcount node.\n" ] @@ -383,7 +383,15 @@ "\n", "child_model = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_child.onnx")\n", "child_model = child_model.transform(InsertDWC())\n", - "child_model = child_model.transform(InsertFIFO())\n", + "\n", + "# set all impl_styles of the DWCs to hls to enable emulation\n", + "dwc_nodes = child_model.get_nodes_by_op_type(\"StreamingDataWidthConverter_Batch\")\n", + "for dwc in dwc_nodes:\n", + "    dwc_inst = getCustomOp(dwc)\n", + "    dwc_inst.set_nodeattr(\"impl_style\", \"hls\")\n", + "    \n", + "child_model = child_model.transform(InsertFIFO(create_shallow_fifos=True))\n", + "child_model.save(build_dir + \"/test.onnx\");\n", "child_model = child_model.transform(GiveUniqueNodeNames())\n", "child_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))\n", "child_model = child_model.transform(HLSSynthIP())\n", @@ -431,7 +439,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/end2end_example/cybersecurity/1-train-mlp-with-brevitas.ipynb b/notebooks/end2end_example/cybersecurity/1-train-mlp-with-brevitas.ipynb index 5625a6f1c20ee5e4a66df28931a6a891f699a738..3d77586258b9ddb64985e7f7b7a2215565839c50 100644 --- a/notebooks/end2end_example/cybersecurity/1-train-mlp-with-brevitas.ipynb +++ b/notebooks/end2end_example/cybersecurity/1-train-mlp-with-brevitas.ipynb @@ -741,7 +741,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/end2end_example/cybersecurity/2-import-into-finn-and-verify.ipynb b/notebooks/end2end_example/cybersecurity/2-import-into-finn-and-verify.ipynb index 370312c77e90c67a3095e0800ad0c6046bfd75f4..e4848a1f40bed5865eccc1d831a634ac5f54e965 100644 --- a/notebooks/end2end_example/cybersecurity/2-import-into-finn-and-verify.ipynb +++ b/notebooks/end2end_example/cybersecurity/2-import-into-finn-and-verify.ipynb @@ -381,7 +381,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb b/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb index 33adb68dc8ddfff1b427d82e4666a70e883bf2c8..a18cafd6044328d53139acafb2be2cf73a4ec9b6 100644 --- a/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb +++ b/notebooks/end2end_example/cybersecurity/3-build-accelerator-with-finn.ipynb @@ -624,7 +624,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, diff --git a/setup.cfg b/setup.cfg index a1d0fef6cb08994ae8666fd2ea37166bf1cd3752..1893aa42316dad341fcedbd527f5abcf482e5cfb 100644 --- a/setup.cfg +++ b/setup.cfg @@ -72,18 +72,20 @@ exclude = # Add here additional requirements for extra features, to install with: # `pip install FINN[PDF]` like: # PDF = ReportLab; RXP -# finn-base is needed to build the full set of docs +# qonnx is needed to build the full set of docs docs = - finn-base==0.0.3 docutils==0.17.1 dataclasses-json==0.5.7 gspread==3.6.0 + IPython pytest netron vcdvcd torchvision torch qonnx@git+https://github.com/fastmachinelearning/qonnx@main#egg=qonnx + 
pyverilator@git+https://github.com/maltanar/pyverilator@master#egg=pyverilator + brevitas@git+https://github.com/Xilinx/brevitas@master#egg=brevitas_examples # Add here test requirements (semicolon/line-separated) testing = diff --git a/src/finn/qnn-data/build_dataflow/dataflow_build_config.json b/src/finn/qnn-data/build_dataflow/dataflow_build_config.json index 27ec38f6a4eb55c99dc4805f91d6e388e735308c..a053c1a22f7d3d290628c661a5cf113a3be44f53 100644 --- a/src/finn/qnn-data/build_dataflow/dataflow_build_config.json +++ b/src/finn/qnn-data/build_dataflow/dataflow_build_config.json @@ -7,6 +7,7 @@ "standalone_thresholds": true, "shell_flow_type": "vivado_zynq", "verify_save_rtlsim_waveforms": true, + "force_python_rtlsim": true, "verify_steps": [ "initial_python", "streamlined_python", diff --git a/tutorials/fpga_flow/README.md b/tutorials/fpga_flow/README.md index 63ca6ac832c556b3e47a15fc3207683886796f23..2aaad0423b7d49c3d6760243fe1b1c1899b9030e 100644 --- a/tutorials/fpga_flow/README.md +++ b/tutorials/fpga_flow/README.md @@ -4,7 +4,7 @@ This example demonstrates how to bring a FINN compiled model into the Vivado FPG If you are new to the command-line flow, more information can be found [here](https://finn.readthedocs.io/en/latest/command_line.html). -This demo was created using Vivado 2020.1. +This demo was created using Vivado 2022.1. ## Compiling the Model in FINN @@ -26,7 +26,7 @@ Prior to running, ensure the following prerequisites have been met: - Install FINN and prerequisites. The [Getting Started](https://finn.readthedocs.io/en/latest/getting_started.html#quickstart) section of the FINN documentation might be helpful for this. - Ensure you have the `FINN_XILINX_PATH` and `FINN_XILINX_VERSION` env variables set appropriately for your install. For example: > export FINN_XILINX_PATH=/opt/Xilinx -> export FINN_XILINX_VERSION=2020.1 +> export FINN_XILINX_VERSION=2022.1 - Set the env variable for your `finn` install top directory (where you cloned the FINN compiler repo): > export FINN_ROOT=/home/foo/finn @@ -112,7 +112,7 @@ testbench generators. There are any number of ways to bring the stitched IP into a larger design. -FINN already packages the stitched IP block design as a standalone IP-XACT component, which you can find under `${FINN_ROOT}/tutorials/fpga_flow/output_tfc_w0a1_fpga/stitched_ip/ip`. You can add this to the list of IP repos and use it in your own Vivado designs. A good reference for this is [UG1119](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_1/ug1119-vivado-creating-packaging-ip-tutorial.pdf) +FINN already packages the stitched IP block design as a standalone IP-XACT component, which you can find under `${FINN_ROOT}/tutorials/fpga_flow/output_tfc_w0a1_fpga/stitched_ip/ip`. You can add this to the list of IP repos and use it in your own Vivado designs. A good reference for this is [UG1119](https://www.xilinx.com/content/dam/xilinx/support/documents/sw_manuals/xilinx2022_1/ug1119-vivado-creating-packaging-ip-tutorial.pdf) Keep in mind that all of the User IP repos included in the Stitched IP project (from `$FINN_HOST_BUILD_DIR` which is normally located under `/tmp/finn_dev_<username>`) need to also be brought in as IP repos to any project using the stitched IP. It would be prudent to copy those IP repos to an appropriate archive location. You should also set the `FINN_ROOT` environment variable to point to the compiler installation directory, as some of the build scripts will