Unverified commit 01431f34 authored by jalezeta, committed by GitHub

Feature/tutorial fpga21 (#274)


* Move python package installations into the finn_dev Dockerfile

* Delete previous run results to avoid "File exists" error.

* [Docker] pin new packages to versions

Co-authored-by: Yaman Umuroglu <yaman.umuroglu@xilinx.com>
parent f2d7b6b3
@@ -55,6 +55,9 @@ RUN pip install sphinx_rtd_theme==0.5.0
RUN pip install pytest-xdist==2.0.0
RUN pip install pytest-parallel==0.1.0
RUN pip install netron
RUN pip install pandas==1.1.5
RUN pip install scikit-learn==0.24.1
RUN pip install tqdm==4.31.1
RUN pip install -e git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading
# switch user
%% Cell type:markdown id: tags:
# Building the Streaming Dataflow Accelerator
**Important: This notebook depends on the 2-cybersecurity-finn-verification notebook because we reuse models created there. Please make sure the required .onnx files have been generated before running this notebook.**
<img align="left" src="finn-example.png" alt="drawing" style="margin-right: 20px" width="250"/>
In this notebook, we'll use the FINN compiler to generate an FPGA accelerator with a streaming dataflow architecture from our quantized MLP for the cybersecurity task. The key idea in such architectures is to parallelize across layers as well as within layers by dedicating a proportionate amount of compute resources to each layer, as illustrated in the figure on the left. You can read more about the general concept in the [FINN](https://arxiv.org/pdf/1612.07119) and [FINN-R](https://dl.acm.org/doi/pdf/10.1145/3242897) papers. This is done by mapping each layer to a Vivado HLS description, parallelizing each layer's implementation to the appropriate degree, and using on-chip FIFOs to link up the layers into the full accelerator.
These implementations offer a good balance of performance and flexibility, but building them by hand is difficult and time-consuming. This is where the FINN compiler comes in: it can build streaming dataflow accelerators from an ONNX description to match the desired throughput.
%% Cell type:markdown id: tags:
## Outline
-------------
1. [Introduction to `build_dataflow` Tool](#intro_build_dataflow)
2. [Understanding the Build Configuration: `DataflowBuildConfig`](#underst_build_conf)
2.1. [Output Products](#output_prod)
2.2. [Configuring the Board and FPGA Part](#config_fpga)
2.3. [Configuring the Performance](#config_perf)
3. [Launch a Build: Only Estimate Reports](#build_estimate_report)
4. [Launch a Build: Stitched IP, out-of-context synth and rtlsim Performance](#build_ip_synth_rtlsim)
5. [Launch a Build: PYNQ Bitfile and Driver](#build_bitfile_driver)
%% Cell type:markdown id: tags:
## Introduction to `build_dataflow` Tool <a id="intro_build_dataflow"></a>
Since version 0.5b, the FINN compiler has a `build_dataflow` tool. Compared to previous versions, which required setting up all the needed transformations in a Python script, it makes experimenting with dataflow architecture generation easier. The core idea is to specify the relevant build info as a configuration `dict`, which invokes all the necessary steps to make the dataflow build happen. It can be invoked either from the [command line](https://finn-dev.readthedocs.io/en/latest/command_line.html) or with a single Python function call.
In this notebook, we'll use the Python function call to invoke the builds so we can stay inside the Jupyter notebook, but feel free to experiment with reproducing what we do here with the `./run-docker.sh build_dataflow` and `./run-docker.sh build_custom` command-line entry points too, as documented [here](https://finn-dev.readthedocs.io/en/latest/command_line.html).
%% Cell type:markdown id: tags:
## Understanding the Build Configuration: `DataflowBuildConfig` <a id="underst_build_conf"></a>
The build configuration is specified by an instance of `finn.builder.build_dataflow_config.DataflowBuildConfig`. The configuration is a Python [`dataclass`](https://docs.python.org/3/library/dataclasses.html) which can be serialized into or de-serialized from JSON files for persistence, although we'll just set it up in Python here.
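For instance, since `DataflowBuildConfig` is a plain dataclass, a config can be dumped to JSON with nothing but the standard library. The snippet below is a minimal sketch to illustrate the idea (the compiler ships its own persistence helpers, and the field values here are arbitrary):
``` python
import json
import dataclasses
import finn.builder.build_dataflow_config as build_cfg

# construct a config with a couple of fields set (values arbitrary here)
cfg = build_cfg.DataflowBuildConfig(
    output_dir="sketch_output",
    synth_clk_period_ns=10.0,
    generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS],
)

# dataclasses.asdict works on any dataclass; default=str renders enum values as strings
with open("dataflow_build_config.json", "w") as f:
    json.dump(dataclasses.asdict(cfg), f, indent=2, default=str)
```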
There are many options in the configuration to customize different aspects of the build; we'll only cover a few of them in this notebook. You can read the details on all the config options in [the FINN API documentation](https://finn-dev.readthedocs.io/en/latest/source_code/finn.builder.html#finn.builder.build_dataflow_config.DataflowBuildConfig).
Let's go over some of the members of the `DataflowBuildConfig`:
### Output Products <a id="output_prod"></a>
The build can produce many different outputs, and some of them can take a long time (e.g. bitfile synthesis for a large network). When you first start working on generating a new accelerator and exploring the different performance options, you may not want to go all the way to a bitfile. Thus, in the beginning you may just select the estimate reports as the output products. Gradually, you can generate the output products from later stages until you are happy enough with the design to build the full accelerator integrated into a shell.
The output products are controlled by:
* `generate_outputs`: list of output products (of type [`finn.builder.build_dataflow_config.DataflowOutputType`](https://finn-dev.readthedocs.io/en/latest/source_code/finn.builder.html#finn.builder.build_dataflow_config.DataflowOutputType)) that will be generated by the build. Some available options are:
- `ESTIMATE_REPORTS` : report expected resources and performance per layer and for the whole network without any synthesis
- `STITCHED_IP` : create a stream-in stream-out IP design that can be integrated into other Vivado IPI or RTL designs
- `RTLSIM_PERFORMANCE` : use PyVerilator to do a performance/latency test of the `STITCHED_IP` design
- `OOC_SYNTH` : run out-of-context synthesis (just the accelerator itself, without any system surrounding it) on the `STITCHED_IP` design to get post-synthesis FPGA resources and achievable clock frequency
- `BITFILE` : integrate the accelerator into a shell to produce a standalone bitfile
- `PYNQ_DRIVER` : generate a PYNQ Python driver that can be used to launch the accelerator
- `DEPLOYMENT_PACKAGE` : create a folder with the `BITFILE` and `PYNQ_DRIVER` outputs, ready to be copied to the target FPGA platform.
* `output_dir`: the directory where all the generated build outputs listed above will be written.
* `steps`: list of predefined (or custom) build steps FINN will go through. Use `build_dataflow_config.estimate_only_dataflow_steps` to execute only the steps needed for estimation (without any synthesis), and the `build_dataflow_config.default_build_dataflow_steps` otherwise (which is the default value).
### Configuring the Board and FPGA Part <a id="config_fpga"></a>
* `fpga_part`: Xilinx FPGA part to be used for synthesis, can be left unspecified to be inferred from `board` below, or specified explicitly for e.g. out-of-context synthesis.
* `board`: target Xilinx Zynq or Alveo board for generating accelerators integrated into a shell. See the `pynq_part_map` and `alveo_part_map` dicts in [this file](https://github.com/Xilinx/finn-base/blob/dev/src/finn/util/basic.py#L41) for a list of possible boards.
* `shell_flow_type`: the target [shell flow type](https://finn-dev.readthedocs.io/en/latest/source_code/finn.builder.html#finn.builder.build_dataflow_config.ShellFlowType), only needed for generating full bitfiles where the FINN design is integrated into a shell (so only needed if `BITFILE` is selected)
### Configuring the Performance <a id="config_perf"></a>
You can configure the performance (and correspondingly, the FPGA resource footprint) of the generated accelerator in two ways:
1) (basic) Set a target performance and let the compiler figure out the per-node parallelization settings.
2) (advanced) Specify a separate .json as `folding_config_file` that lists the degree of parallelization (as well as other hardware options) for each layer.
This notebook only deals with the basic approach, for which you need to set up:
* `target_fps`: target inference performance in frames per second. Note that target may not be achievable due to specific layer constraints, or due to resource limitations of the FPGA.
* `synth_clk_period_ns`: target clock period in nanoseconds for Vivado synthesis, e.g. `synth_clk_period_ns=5.0` targets a 200 MHz clock (see the quick conversion check below). Note that the target clock period may not be achievable depending on the FPGA part and design complexity.
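%% Cell type:markdown id: tags:
The conversion between clock period and clock frequency is simply f [MHz] = 1000 / t [ns]. A throwaway sketch as a sanity check (not part of the build flow):
%% Cell type:code id: tags:
``` python
def clk_period_ns_to_freq_mhz(period_ns):
    # f [MHz] = 1000 / t [ns]
    return 1000.0 / period_ns

print(clk_period_ns_to_freq_mhz(5.0))   # 200.0 MHz
print(clk_period_ns_to_freq_mhz(10.0))  # 100.0 MHz, the value we'll use below
```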
%% Cell type:markdown id: tags:
## Launch a Build: Only Estimate Reports <a id="build_estimate_report"></a>
First, we'll launch a build that only generates the estimate reports and does not require any synthesis. Note two things below: `generate_outputs` contains only `ESTIMATE_REPORTS`, and `steps` is set to `estimate_only_dataflow_steps`. This skips steps like HLS synthesis to provide a quick estimate from analytical models.
%% Cell type:code id: tags:
``` python
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

model_file = "cybsec-mlp-verified.onnx"
estimates_output_dir = "output_estimates_only"

cfg = build.DataflowBuildConfig(
    output_dir = estimates_output_dir,
    target_fps = 1000000,
    synth_clk_period_ns = 10.0,
    fpga_part = "xc7z020clg400-1",
    steps = build_cfg.estimate_only_dataflow_steps,
    generate_outputs=[
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
    ]
)
build.build_dataflow_cfg(model_file, cfg)
```
%% Output
Building dataflow accelerator from cybsec-mlp-verified.onnx
Intermediate outputs will be generated in /tmp/finn_dev_osboxes
Final outputs will be generated in output_estimates_only
Build log is at output_estimates_only/build_dataflow.log
Running step: step_tidy_up [1/7]
Running step: step_streamline [2/7]
Running step: step_convert_to_hls [3/7]
Running step: step_create_dataflow_partition [4/7]
Running step: step_target_fps_parallelization [5/7]
Running step: step_apply_folding_config [6/7]
Running step: step_generate_estimate_reports [7/7]
Completed successfully
0
%% Cell type:markdown id: tags:
We'll now examine the generated outputs from this build. If we look under the outputs directory, we'll find a subfolder with the generated estimate reports.
%% Cell type:code id: tags:
``` python
! ls {estimates_output_dir}
```
%% Output
build_dataflow.log intermediate_models report time_per_step.json
%% Cell type:code id: tags:
``` python
! ls {estimates_output_dir}/report
```
%% Output
estimate_layer_config_alternatives.json estimate_network_performance.json
estimate_layer_cycles.json op_and_param_counts.json
estimate_layer_resources.json
%% Cell type:markdown id: tags:
We see that various reports have been generated as .json files. Let's examine the contents of the `estimate_network_performance.json` for starters. Here, we can see the analytical estimates for the performance and latency.
%% Cell type:code id: tags:
``` python
! cat {estimates_output_dir}/report/estimate_network_performance.json
```
%% Output
{
  "critical_path_cycles": 272,
  "max_cycles": 80,
  "max_cycles_node_name": "StreamingFCLayer_Batch_0",
  "estimated_throughput_fps": 1250000.0,
  "estimated_latency_ns": 2720.0
}
%% Cell type:markdown id: tags:
Since all of these reports are .json files, we can easily load them into Python for further processing. Let's define a helper function and look at the `estimate_layer_cycles.json` report.
%% Cell type:code id: tags:
``` python
import json

def read_json_dict(filename):
    with open(filename, "r") as f:
        ret = json.load(f)
    return ret
```
%% Cell type:code id: tags:
``` python
read_json_dict(estimates_output_dir + "/report/estimate_layer_cycles.json")
```
%% Output
{'StreamingFCLayer_Batch_0': 80,
'StreamingFCLayer_Batch_1': 64,
'StreamingFCLayer_Batch_2': 64,
'StreamingFCLayer_Batch_3': 64}
%% Cell type:markdown id: tags:
Here, we can see the estimated number of clock cycles each layer will take. Recall that all of these layers run in parallel, so the slowest layer determines the overall throughput of the entire neural network. FINN attempts to parallelize each layer such that they all take a similar number of cycles, and no more than the number of cycles permitted by `target_fps`. Additionally, by summing up all layer cycle estimates, one can obtain an estimate of the overall latency of the whole network.
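To make this concrete, here is a small sketch that re-derives the headline numbers of `estimate_network_performance.json` from the per-layer cycle counts, assuming the 100 MHz clock (`synth_clk_period_ns = 10.0`) we configured above:
``` python
layer_cycles = read_json_dict(estimates_output_dir + "/report/estimate_layer_cycles.json")

fclk_mhz = 1000.0 / 10.0                           # from synth_clk_period_ns = 10.0
max_cycles = max(layer_cycles.values())            # slowest layer: 80 cycles
critical_path_cycles = sum(layer_cycles.values())  # 272 cycles

# throughput is set by the slowest layer, latency by the sum over all layers
print("throughput [fps]:", fclk_mhz * 1e6 / max_cycles)           # 1250000.0
print("latency [ns]:", critical_path_cycles * 1000.0 / fclk_mhz)  # 2720.0
```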
Finally, we can see the layer-by-layer resource estimates in the `estimate_layer_resources.json` report:
%% Cell type:code id: tags:
``` python
read_json_dict(estimates_output_dir + "/report/estimate_layer_resources.json")
```
%% Output
{'StreamingFCLayer_Batch_0': {'BRAM_18K': 27,
'BRAM_efficiency': 0.15432098765432098,
'LUT': 8149,
'URAM': 0,
'URAM_efficiency': 1,
'DSP': 0},
'StreamingFCLayer_Batch_1': {'BRAM_18K': 4,
'BRAM_efficiency': 0.1111111111111111,
'LUT': 1435,
'URAM': 0,
'URAM_efficiency': 1,
'DSP': 0},
'StreamingFCLayer_Batch_2': {'BRAM_18K': 4,
'BRAM_efficiency': 0.1111111111111111,
'LUT': 1435,
'URAM': 0,
'URAM_efficiency': 1,
'DSP': 0},
'StreamingFCLayer_Batch_3': {'BRAM_18K': 1,
'BRAM_efficiency': 0.006944444444444444,
'LUT': 341,
'URAM': 0,
'URAM_efficiency': 1,
'DSP': 0},
'total': {'BRAM_18K': 36.0, 'LUT': 11360.0, 'URAM': 0.0, 'DSP': 0.0}}
%% Cell type:markdown id: tags:
This particular report is useful for determining whether the current configuration will fit into a particular FPGA. If the resource requirements are too high for the FPGA you had in mind, you should consider lowering the `target_fps`.
*Note that the analytical models tend to over-estimate the required resources, since they cannot capture the effects of various synthesis optimizations.*
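%% Cell type:markdown id: tags:
As a rough fit check, we can compare the `total` entry of the resource report against the capacity of the target device. A minimal sketch, assuming the Zynq-7020 (`xc7z020`) we target below, with 53200 LUTs and 280 BRAM_18K:
%% Cell type:code id: tags:
``` python
layer_res = read_json_dict(estimates_output_dir + "/report/estimate_layer_resources.json")

# xc7z020 (Pynq-Z1) capacity, hard-coded here for illustration
available = {"LUT": 53200, "BRAM_18K": 280}

for res, avail in available.items():
    used = layer_res["total"][res]
    print("%s: %d used, %.1f%% of device" % (res, used, 100.0 * used / avail))
```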
%% Cell type:markdown id: tags:
## Launch a Build: Stitched IP, out-of-context synth and rtlsim Performance <a id="build_ip_synth_rtlsim"></a>
Once we have a configuration that gives satisfactory estimates, we can move on to generating the accelerator. We can do this in different ways depending on how we want to integrate the accelerator into a larger system. For instance, if we have a larger streaming system built in Vivado or if we'd like to re-use this generated accelerator as an IP component in other projects, the `STITCHED_IP` output product is a good choice. We can also use the `OOC_SYNTH` output product to get post-synthesis resource and clock frequency numbers for our accelerator.
**NOTE: These next builds will take several minutes since multiple calls to Vivado and a call to the RTL simulator are involved.**
%% Cell type:code id: tags:
``` python
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg
import os
import shutil

model_file = "cybsec-mlp-verified.onnx"
rtlsim_output_dir = "output_ipstitch_ooc_rtlsim"

# delete previous run results if they exist
if os.path.exists(rtlsim_output_dir):
    shutil.rmtree(rtlsim_output_dir)
    print("Previous run results deleted!")

cfg = build.DataflowBuildConfig(
    output_dir = rtlsim_output_dir,
    target_fps = 1000000,
    synth_clk_period_ns = 10.0,
    fpga_part = "xc7z020clg400-1",
    generate_outputs=[
        build_cfg.DataflowOutputType.STITCHED_IP,
        build_cfg.DataflowOutputType.RTLSIM_PERFORMANCE,
        build_cfg.DataflowOutputType.OOC_SYNTH,
    ]
)
build.build_dataflow_cfg(model_file, cfg)
```
%% Output
Building dataflow accelerator from cybsec-mlp-verified.onnx
Intermediate outputs will be generated in /tmp/finn_dev_osboxes
Final outputs will be generated in output_ipstitch_ooc_rtlsim
Build log is at output_ipstitch_ooc_rtlsim/build_dataflow.log
Running step: step_tidy_up [1/15]
Running step: step_streamline [2/15]
Running step: step_convert_to_hls [3/15]
Running step: step_create_dataflow_partition [4/15]
Running step: step_target_fps_parallelization [5/15]
Running step: step_apply_folding_config [6/15]
Running step: step_generate_estimate_reports [7/15]
Running step: step_hls_ipgen [8/15]
Running step: step_set_fifo_depths [9/15]
Running step: step_create_stitched_ip [10/15]
Running step: step_measure_rtlsim_performance [11/15]
Running step: step_make_pynq_driver [12/15]
Running step: step_out_of_context_synthesis [13/15]
Running step: step_synthesize_bitfile [14/15]
Running step: step_deployment_package [15/15]
Completed successfully
0
%% Cell type:markdown id: tags:
Among the output products, we will find the accelerator exported as IP:
%% Cell type:code id: tags:
``` python
! ls {rtlsim_output_dir}/stitched_ip
```
%% Output
all_verilog_srcs.txt finn_vivado_stitch_proj.xpr
finn_vivado_stitch_proj.cache ip
finn_vivado_stitch_proj.hbs make_project.sh
finn_vivado_stitch_proj.hw make_project.tcl
finn_vivado_stitch_proj.ip_user_files vivado.jou
finn_vivado_stitch_proj.srcs vivado.log
%% Cell type:markdown id: tags:
We also have a few reports generated by these output products, different from the ones generated by `ESTIMATE_REPORTS`.
%% Cell type:code id: tags:
``` python
! ls {rtlsim_output_dir}/report
```
%% Output
estimate_layer_resources_hls.json rtlsim_performance.json
ooc_synth_and_timing.json
%% Cell type:markdown id: tags:
In `ooc_synth_and_timing.json` we can find the post-synthesis resource usage and the maximum clock frequency estimate for the accelerator. Note that the clock frequency estimate here tends to be optimistic, since out-of-context synthesis is less constrained.
%% Cell type:code id: tags:
``` python
! cat {rtlsim_output_dir}/report/ooc_synth_and_timing.json
```
%% Output
{
  "vivado_proj_folder": "/tmp/finn_dev_osboxes/synth_out_of_context_wy3b6qf4/results_finn_design_wrapper",
  "LUT": 7073.0,
  "FF": 7534.0,
  "DSP": 0.0,
  "BRAM": 18.0,
  "WNS": 0.632,
  "": 0,
  "fmax_mhz": 106.7463706233988,
  "estimated_throughput_fps": 1334329.6327924852
}
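%% Cell type:markdown id: tags:
The `estimated_throughput_fps` in this report is simply the post-synthesis fmax divided by the cycle count of the slowest layer. A quick sketch to verify this, assuming the 80-cycle bottleneck we saw in `estimate_layer_cycles.json`:
%% Cell type:code id: tags:
``` python
ooc = read_json_dict(rtlsim_output_dir + "/report/ooc_synth_and_timing.json")

max_cycles = 80  # StreamingFCLayer_Batch_0, from estimate_layer_cycles.json
print(ooc["fmax_mhz"] * 1e6 / max_cycles)  # ~1334329.6, matches estimated_throughput_fps
```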
%% Cell type:markdown id: tags:
In `rtlsim_performance.json` we can find the steady-state throughput and latency for the accelerator, as obtained by rtlsim. If the DRAM bandwidth numbers reported here are below what the hardware platform is capable of (i.e. the accelerator is not memory-bound), you can expect the same steady-state throughput in real hardware.
%% Cell type:code id: tags:
``` python
! cat {rtlsim_output_dir}/report/rtlsim_performance.json
```
%% Output
{
  "cycles": 838,
  "runtime[ms]": 0.00838,
  "throughput[images/s]": 954653.9379474939,
  "DRAM_in_bandwidth[Mb/s]": 71.59904534606204,
  "DRAM_out_bandwidth[Mb/s]": 0.11933174224343673,
  "fclk[mhz]": 100.0,
  "N": 8,
  "latency_cycles": 229
}
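%% Cell type:markdown id: tags:
The bandwidth figures follow directly from the throughput and the packed I/O widths. A sketch that reproduces them, assuming the 600-bit packed input and 1-bit output of this MLP from the earlier notebooks (note the values line up with megabytes per second, despite the `Mb/s` label):
%% Cell type:code id: tags:
``` python
perf = read_json_dict(rtlsim_output_dir + "/report/rtlsim_performance.json")

in_bytes_per_frame = 600 / 8   # 600-bit packed input vector, assumed from notebook 1
out_bytes_per_frame = 1 / 8    # single 1-bit prediction per frame

fps = perf["throughput[images/s]"]
print("DRAM in  [MB/s]:", fps * in_bytes_per_frame / 10**6)   # ~71.6
print("DRAM out [MB/s]:", fps * out_bytes_per_frame / 10**6)  # ~0.12
```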
%% Cell type:markdown id: tags:
Finally, let's have a look at `final_hw_config.json`. This is the node-by-node hardware configuration determined by the FINN compiler, including FIFO depths, parallelization settings (PE/SIMD) and other hardware options. If you want to optimize your build further (the "advanced" method mentioned under "Configuring the Performance"), you can pass this .json file as the `folding_config_file` of a new run, using it as a starting point for further exploration and optimization; a sketch follows after the file contents below.
%% Cell type:code id: tags:
``` python
! cat {rtlsim_output_dir}/final_hw_config.json
```
%% Output
{
  "Defaults": {},
  "StreamingFIFO_0": {
    "ram_style": "auto",
    "depth": 32,
    "impl_style": "rtl"
  },
  "StreamingFCLayer_Batch_0": {
    "PE": 32,
    "SIMD": 15,
    "ram_style": "auto",
    "resType": "lut",
    "mem_mode": "decoupled",
    "runtime_writeable_weights": 0
  },
  "StreamingDataWidthConverter_Batch_0": {
    "impl_style": "hls"
  },
  "StreamingFCLayer_Batch_1": {
    "PE": 4,
    "SIMD": 16,
    "ram_style": "auto",
    "resType": "lut",
    "mem_mode": "decoupled",
    "runtime_writeable_weights": 0
  },
  "StreamingDataWidthConverter_Batch_1": {
    "impl_style": "hls"
  },
  "StreamingFCLayer_Batch_2": {
    "PE": 4,
    "SIMD": 16,
    "ram_style": "auto",
    "resType": "lut",
    "mem_mode": "decoupled",
    "runtime_writeable_weights": 0
  },
  "StreamingDataWidthConverter_Batch_2": {
    "impl_style": "hls"
  },
  "StreamingFCLayer_Batch_3": {
    "PE": 1,
    "SIMD": 1,
    "ram_style": "auto",
    "resType": "lut",
    "mem_mode": "decoupled",
    "runtime_writeable_weights": 0
  }
}
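%% Cell type:markdown id: tags:
A minimal sketch of feeding this file back in for the "advanced" flow (hand-edit the JSON between runs to explore other parallelization settings):
%% Cell type:code id: tags:
``` python
cfg_tuned = build.DataflowBuildConfig(
    output_dir = "output_tuned",
    synth_clk_period_ns = 10.0,
    fpga_part = "xc7z020clg400-1",
    # re-use the configuration from the previous run as the starting point;
    # target_fps is omitted, since the folding config now fixes the parallelization
    folding_config_file = rtlsim_output_dir + "/final_hw_config.json",
    generate_outputs=[
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
    ]
)
# uncomment to launch the tuned build:
# build.build_dataflow_cfg(model_file, cfg_tuned)
```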
%% Cell type:markdown id: tags:
## Launch a Build: PYNQ Bitfile and Driver <a id="build_bitfile_driver"></a>
%% Cell type:code id: tags:
``` python
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg
import os
import shutil

model_file = "cybsec-mlp-verified.onnx"
final_output_dir = "output_final"

# delete previous run results if they exist
if os.path.exists(final_output_dir):
    shutil.rmtree(final_output_dir)
    print("Previous run results deleted!")

cfg = build.DataflowBuildConfig(
    output_dir = final_output_dir,
    target_fps = 1000000,
    synth_clk_period_ns = 10.0,
    board = "Pynq-Z1",
    shell_flow_type = build_cfg.ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[
        build_cfg.DataflowOutputType.BITFILE,
        build_cfg.DataflowOutputType.PYNQ_DRIVER,
        build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE,
    ]
)
build.build_dataflow_cfg(model_file, cfg)
```
%% Output
Building dataflow accelerator from cybsec-mlp-verified.onnx
Intermediate outputs will be generated in /tmp/finn_dev_osboxes
Final outputs will be generated in output_final
Build log is at output_final/build_dataflow.log
Running step: step_tidy_up [1/15]
Running step: step_streamline [2/15]
Running step: step_convert_to_hls [3/15]
Running step: step_create_dataflow_partition [4/15]
Running step: step_target_fps_parallelization [5/15]
Running step: step_apply_folding_config [6/15]
Running step: step_generate_estimate_reports [7/15]
Running step: step_hls_ipgen [8/15]
Running step: step_set_fifo_depths [9/15]
Running step: step_create_stitched_ip [10/15]
Running step: step_measure_rtlsim_performance [11/15]
Running step: step_make_pynq_driver [12/15]
Running step: step_out_of_context_synthesis [13/15]
Running step: step_synthesize_bitfile [14/15]
Running step: step_deployment_package [15/15]
Completed successfully
0
%% Cell type:markdown id: tags:
For our final build, the output products include the bitfile (and the accompanying .hwh file, which PYNQ also needs to execute the design correctly on Zynq platforms):
%% Cell type:code id: tags:
``` python
! ls {final_output_dir}/bitfile
```
%% Output
finn-accel.bit finn-accel.hwh
%% Cell type:markdown id: tags:
The generated Python driver lets us execute the accelerator on PYNQ platforms with simple numpy I/O. You can find notebooks showing how to use FINN-generated accelerators at runtime in the [finn-examples](https://github.com/Xilinx/finn-examples) repository.
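%% Cell type:markdown id: tags:
On the board itself, the driver can be used roughly as follows. This is a hypothetical sketch: the class and argument names are assumed from the generated `driver.py`/`driver_base.py` and may differ between FINN versions, so see the finn-examples notebooks for authoritative usage.
%% Cell type:code id: tags:
``` python
# to be run on the PYNQ board, from inside the copied deploy/driver folder
import numpy as np
from driver import io_shape_dict           # I/O shapes baked in at build time
from driver_base import FINNExampleOverlay

accel = FINNExampleOverlay(
    bitfile_name="../bitfile/finn-accel.bit",
    platform="zynq-iodma",
    io_shape_dict=io_shape_dict,
)

dummy_in = np.zeros(io_shape_dict["ishape_normal"], dtype=np.float32)  # placeholder input
out = accel.execute(dummy_in)
```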
%% Cell type:code id: tags:
``` python
! ls {final_output_dir}/driver
```
%% Output
driver.py driver_base.py finn runtime_weights validate.py
%% Cell type:markdown id: tags:
The reports folder contains the post-synthesis resource and timing reports:
%% Cell type:code id: tags:
``` python
! ls {final_output_dir}/report
```
%% Output
estimate_layer_resources_hls.json post_synth_resources.xml
post_route_timing.rpt
%% Cell type:markdown id: tags:
Finally, we have the `deploy` folder which contains everything you need to copy onto the target board to get the accelerator running:
%% Cell type:code id: tags:
``` python
! ls {final_output_dir}/deploy
```
%% Output
bitfile driver