%% Cell type:markdown id: tags:
# End-to-End FINN Flow for a Simple Convolutional Net
-----------------------------------------------------------------
In this notebook, we will go through the FINN steps needed to take a binarized convolutional network all the way down to a heterogeneous streaming dataflow accelerator running on the FPGA.
It's recommended to go through the simpler [end-to-end notebook for a fully connected network](tfc_end2end_example.ipynb) first, since many steps here are very similar and we will focus on what is done differently for convolutions.
This notebook is quite lengthy, and some of the cells (involving Vivado synthesis) may take up to an hour to finish running. To let you save and resume your progress, we will save the intermediate ONNX models that are generated in the various steps to disk, so that you can jump back directly to where you left off.
%% Cell type:markdown id: tags:
## Quick Introduction to the CNV-w1a1 Network
The particular quantized neural network (QNN) we will be targeting in this notebook is referred to as CNV-w1a1 and it classifies 32x32 RGB images into one of ten CIFAR-10 classes. All weights and activations in this network are quantized to bipolar values (either -1 or +1), with the exception of the input (which is RGB with 8 bits per channel) and the final output (which is 32-bit numbers). It first appeared in the original [FINN paper](https://arxiv.org/abs/1612.07119) from ISFPGA'17 with the name CNV, as a variant of the binarized convolutional network from the [BinaryNet paper](https://arxiv.org/abs/1602.02830), in turn inspired by the VGG-11 topology which was the runner-up for the 2014 [ImageNet Large Scale Visual Recognition Challenge](http://www.image-net.org/challenges/LSVRC/).
You'll have a chance to interactively examine the layers that make up the network in Netron in a moment, so that's enough about the network for now.
%% Cell type:markdown id: tags:
## Quick Recap of the End-to-End Flow
The FINN compiler comes with many *transformations* that modify the ONNX representation of the network according to certain patterns. This notebook will demonstrate a *possible* sequence of such transformations to take a particular trained network all the way down to hardware, as shown in the figure below.
%% Cell type:markdown id: tags:
![](finn-design-flow-example.svg)
%% Cell type:markdown id: tags:
The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into five sections, each represented by a different color and comprising several flow steps. The flow starts in the top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for Vivado HLS synthesis and Vivado IPI stitching (orange section), and finally building a PYNQ overlay bitfile and testing it on a PYNQ board (yellow section).
There is an additional section for functional verification (red section) on the left side of the diagram, which we will not cover in this notebook. For details, please take a look at the verification notebook, which you can find [here](tfc_end2end_verification.ipynb).
This Jupyter notebook is organized based on the sections described above. We will use two helper functions: `showSrc` to show the source code of FINN library calls, and `showInNetron` to show the ONNX model at the current transformation step. The Netron displays are interactive, but they only work when running the notebook actively and not on GitHub (i.e. if you are viewing this on GitHub you'll only see blank squares).
%% Cell type:code id: tags:
``` python
import inspect
from finn.util.visualization import showInNetron

def showSrc(what):
    print("".join(inspect.getsourcelines(what)[0]))

build_dir = "/workspace/finn"
```
%% Cell type:markdown id: tags:
## 1. Brevitas Export, FINN Import and Tidy-Up
Similar to what we did in the TFC-w1a1 end-to-end notebook, we will start by exporting the [pretrained CNV-w1a1 network](https://github.com/maltanar/brevitas_cnv_lfc) to ONNX, importing that into FINN and running the "tidy-up" transformations to have a first look at the topology.
%% Cell type:code id: tags:
``` python
import onnx
from finn.util.test import get_test_model_trained
import brevitas.onnx as bo
from finn.core.modelwrapper import ModelWrapper
from finn.transformation.double_to_single_float import DoubleToSingleFloat
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.fold_constants import FoldConstants
from finn.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
cnv = get_test_model_trained("CNV", 1, 1)
bo.export_finn_onnx(cnv, (1, 3, 32, 32), build_dir + "/end2end_cnv_w1a1_export.onnx")
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_export.onnx")
model = model.transform(DoubleToSingleFloat())
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveReadableTensorNames())
model.save(build_dir + "/end2end_cnv_w1a1_tidy.onnx")
```
%% Output
/workspace/brevitas_cnv_lfc/training_scripts/models/CNV.py:112: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
x = 2.0 * x - torch.tensor([1.0]).to(self.device)
%% Cell type:markdown id: tags:
Now that the model is exported, let's have a look at its layer structure with Netron. Remember that the visualization below is interactive: you can click on the individual nodes and view the layer attributes, trained weights and so on.
%% Cell type:code id: tags:
``` python
showInNetron(build_dir+"/end2end_cnv_w1a1_tidy.onnx")
```
%% Output
Serving '/workspace/finn/end2end_cnv_w1a1_tidy.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7f0cb10a8ef0>
%% Cell type:markdown id: tags:
You can see that the network is composed of a repeating convolution-convolution-maxpool layer pattern to extract features using 3x3 convolution kernels (with weights binarized) and `Sign` activations, followed by fully connected layers acting as the classifier. Also notice the initial `MultiThreshold` layer at the beginning of the network, which is quantizing float inputs to 8-bit ones.
%% Cell type:markdown id: tags:
## 2. How FINN Implements Convolutions: Lowering and Streamlining
In FINN, we implement convolutions with the *lowering* approach: we convert them to matrix-matrix multiply operations, where one of the matrices is generated by sliding a window over the input image. You can read more about the sliding window operator and how convolution lowering works [in this notebook](https://github.com/maltanar/qnn-inference-examples/blob/master/3-convolutional-binarized-gtsrb.ipynb). The streaming dataflow architecture we will end up with is going to look something like this figure from the [FINN-R paper](https://arxiv.org/abs/1809.04570):
![](cnv-mp-fc.png)
Note how the convolution layer looks very similar to the fully connected one in terms of the matrix-vector-threshold unit (MVTU), but now the MVTU is preceded by a sliding window unit that produces the matrix from the input image. All of these building blocks, including the `MaxPool` layer you see in this figure, exist as templated Vivado HLS C++ functions in [finn-hlslib](https://github.com/Xilinx/finn-hlslib).
To target this kind of hardware architecture with our network we'll apply a convolution lowering transformation, in addition to streamlining. You may recall the *streamlining transformation* that we applied to the TFC-w1a1 network, which is a series of mathematical simplifications that allow us to get rid of floating point scaling operations by implementing few-bit activations as thresholding operations. **The current implementation of streamlining is highly network-specific and may not work for your network if its topology is very different than the example network here. We hope to rectify this in future releases.**
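%% Cell type:markdown id: tags:
To make the lowering idea concrete, here is a minimal numpy sketch (purely illustrative, not part of the FINN flow) showing that a small convolution can be computed as a matrix product once the input is unrolled into sliding windows, which is conceptually what the `Im2Col` node introduced below does:
%% Cell type:code id: tags:
``` python
import numpy as np

# a tiny single-channel example: 4x4 input, 3x3 kernel, stride 1, no padding
img = np.arange(16, dtype=np.float32).reshape(4, 4)
kernel = np.ones((3, 3), dtype=np.float32)
# unroll every 3x3 window of the input into one row of the "im2col" matrix
windows = np.array(
    [img[i : i + 3, j : j + 3].flatten() for i in range(2) for j in range(2)]
)
# the convolution output is then simply a matrix product with the flattened kernel
out = windows @ kernel.flatten()
print(out.reshape(2, 2))
```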
%% Cell type:code id: tags:
``` python
from finn.transformation.streamline import Streamline
from finn.transformation.lower_convs_to_matmul import LowerConvsToMatMul
from finn.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
import finn.transformation.streamline.absorb as absorb
from finn.transformation.streamline.reorder import MakeMaxPoolNHWC
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_tidy.onnx")
model = model.transform(Streamline())
model = model.transform(LowerConvsToMatMul())
model = model.transform(MakeMaxPoolNHWC())
model = model.transform(absorb.AbsorbTransposeIntoMultiThreshold())
model = model.transform(ConvertBipolarMatMulToXnorPopcount())
model = model.transform(Streamline())
model.save(build_dir + "/end2end_cnv_w1a1_streamlined.onnx")
```
%% Cell type:markdown id: tags:
We won't go into too much detail about what happens in each transformation and why they are called in this particular order (feel free to visualize the intermediate steps with Netron yourself if you are curious), but here is a brief summary:
* `Streamline` moves floating point scaling and addition operations closer to the input of the nearest thresholding activation and absorbs them into thresholds
* `LowerConvsToMatMul` converts ONNX `Conv` nodes into sequences of `Im2Col, MatMul` nodes as discussed above. `Im2Col` is a custom FINN ONNX high-level node type that implements the sliding window operator.
* `MakeMaxPoolNHWC` and `AbsorbTransposeIntoMultiThreshold` convert the *data layout* of the network into the NHWC data layout that finn-hlslib primitives use. NHWC means the tensor dimensions are ordered as `(N : batch, H : height, W : width, C : channels)` (assuming 2D images). The standard ONNX ops normally use the NCHW layout, but the ONNX intermediate representation itself does not dictate any data layout (see the short illustration after this list).
* You may recall `ConvertBipolarMatMulToXnorPopcount` from the TFC-w1a1 example, which is needed to implement bipolar-by-bipolar (w1a1) networks correctly using finn-hlslib.
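%% Cell type:markdown id: tags:
As a brief aside, below is a minimal numpy illustration (not part of the FINN flow itself) of the difference between the NCHW and NHWC layouts mentioned above:
%% Cell type:code id: tags:
``` python
import numpy as np

# an NCHW tensor as standard ONNX ops would use it: (batch, channels, height, width)
x_nchw = np.zeros((1, 3, 32, 32))
# the same data rearranged into NHWC, the layout finn-hlslib primitives expect
x_nhwc = x_nchw.transpose(0, 2, 3, 1)
print(x_nchw.shape, "->", x_nhwc.shape)
```
%% Cell type:markdown id: tags: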
Let's visualize the streamlined and lowered network with Netron. Observe how all the `Conv` nodes have turned into pairs of `Im2Col, MatMul` nodes, and many nodes including `BatchNorm, Mul, Add` have disappeared, replaced by `MultiThreshold` nodes.
%% Cell type:code id: tags:
``` python
showInNetron(build_dir+"/end2end_cnv_w1a1_streamlined.onnx")
```
%% Output
Stopping http://0.0.0.0:8081
Serving '/workspace/finn/end2end_cnv_w1a1_streamlined.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7f0cb1098f28>
%% Cell type:markdown id: tags:
## 3. Partitioning, Conversion to HLS Layers and Folding
The next steps will be (again) very similar to what we did for the TFC-w1a1 network. We'll first convert the layers that we can put into the FPGA into their HLS equivalents and separate them out into a *dataflow partition*:
%% Cell type:code id: tags:
``` python
import finn.transformation.fpgadataflow.convert_to_hls_layers as to_hls
from finn.transformation.fpgadataflow.create_dataflow_partition import (
    CreateDataflowPartition,
)
from finn.transformation.move_reshape import MoveReshape
from finn.custom_op.registry import getCustomOp
# choose the memory mode for the MVTU units, decoupled or const
mem_mode = "decoupled"
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_streamlined.onnx")
model = model.transform(to_hls.InferBinaryStreamingFCLayer(mem_mode))
model = model.transform(to_hls.InferQuantizedStreamingFCLayer(mem_mode))
model = model.transform(to_hls.InferConvInpGen())
model = model.transform(to_hls.InferStreamingMaxPool())
# get rid of Reshape(-1, 1) operation between hlslib nodes
model = model.transform(MoveReshape())
parent_model = model.transform(CreateDataflowPartition())
parent_model.save(build_dir + "/end2end_cnv_w1a1_dataflow_parent.onnx")
sdp_node = parent_model.get_nodes_by_op_type("StreamingDataflowPartition")[0]
sdp_node = getCustomOp(sdp_node)
dataflow_model_filename = sdp_node.get_nodeattr("model")
# save the dataflow partition with a different name for easier access
dataflow_model = ModelWrapper(dataflow_model_filename)
dataflow_model.save(build_dir + "/end2end_cnv_w1a1_dataflow_model.onnx")
```
%% Cell type:markdown id: tags:
Notice the additional `MoveReshape` transformation that was not used for TFC-w1a1. In the last Netron visualization you may have noticed a `Reshape` operation towards the end of the network, where the convolutional part of the network ends and the fully-connected layers begin. That `Reshape` is essentially a tensor flattening operation, which we can remove for the purposes of hardware implementation. We can examine the contents of the dataflow partition with Netron, and observe the `ConvolutionInputGenerator`, `StreamingFCLayer_Batch` and `StreamingMaxPool_Batch` nodes that implement the sliding window, matrix multiply and maxpool operations in hlslib. *Note that the StreamingFCLayer instances following the ConvolutionInputGenerator nodes are really implementing the convolutions, despite the name. The final three StreamingFCLayer instances implement actual FC layers.*
%% Cell type:code id: tags:
``` python
showInNetron(build_dir + "/end2end_cnv_w1a1_dataflow_model.onnx")
```
%% Output
Stopping http://0.0.0.0:8081
Serving '/workspace/finn/end2end_cnv_w1a1_dataflow_model.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7f0cb063e208>
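%% Cell type:markdown id: tags:
If you prefer a quick textual check over the interactive view, the op types in the partition can also be listed directly (a small sketch that simply reloads the partition we saved above):
%% Cell type:code id: tags:
``` python
# print the op type of every node in the dataflow partition, in graph order
dataflow_model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_dataflow_model.onnx")
print([node.op_type for node in dataflow_model.graph.node])
```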
%% Cell type:markdown id: tags:
Now we have to set the *folding factors* for certain layers to adjust the performance of our accelerator, similar to the TFC-w1a1 example. We'll also set the desired FIFO depths around those layers, which are important to achieve full throughput in the accelerator.
%% Cell type:code id: tags:
``` python
from finn.transformation.fpgadataflow.insert_dwc import InsertDWC
from finn.transformation.fpgadataflow.insert_tlastmarker import InsertTLastMarker
from finn.transformation.fpgadataflow.insert_fifo import InsertFIFO
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_dataflow_model.onnx")
fc_layers = model.get_nodes_by_op_type("StreamingFCLayer_Batch")
# each tuple is (PE, SIMD, in_fifo_depth) for a layer
folding = [
    (16, 3, 128),
    (32, 32, 128),
    (16, 32, 128),
    (16, 32, 128),
    (4, 32, 81),
    (1, 32, 2),
    (1, 4, 2),
    (1, 8, 128),
    (5, 1, 3),
]
for fcl, (pe, simd, ififodepth) in zip(fc_layers, folding):
    fcl_inst = getCustomOp(fcl)
    fcl_inst.set_nodeattr("PE", pe)
    fcl_inst.set_nodeattr("SIMD", simd)
    fcl_inst.set_nodeattr("inFIFODepth", ififodepth)

# use same SIMD values for the sliding window operators
swg_layers = model.get_nodes_by_op_type("ConvolutionInputGenerator")
for i in range(len(swg_layers)):
    swg_inst = getCustomOp(swg_layers[i])
    simd = folding[i][1]
    swg_inst.set_nodeattr("SIMD", simd)
model = model.transform(InsertDWC())
model = model.transform(InsertFIFO())
model = model.transform(InsertTLastMarker())
model = model.transform(GiveUniqueNodeNames())
model.save(build_dir + "/end2end_cnv_w1a1_folded.onnx")
```
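%% Cell type:markdown id: tags:
As a quick sanity check (a sketch assuming the `MH`/`MW`, `PE` and `SIMD` node attributes of `StreamingFCLayer_Batch`), each layer's PE value should divide its output dimension and its SIMD value should divide its input dimension, otherwise the folding is invalid:
%% Cell type:code id: tags:
``` python
# reload the folded model and check that each layer's folding factors divide its matrix dimensions
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_folded.onnx")
for fcl in model.get_nodes_by_op_type("StreamingFCLayer_Batch"):
    fcl_inst = getCustomOp(fcl)
    mh, mw = fcl_inst.get_nodeattr("MH"), fcl_inst.get_nodeattr("MW")
    pe, simd = fcl_inst.get_nodeattr("PE"), fcl_inst.get_nodeattr("SIMD")
    assert mh % pe == 0 and mw % simd == 0, fcl.name
    print("%s: MH=%d PE=%d MW=%d SIMD=%d" % (fcl.name, mh, pe, mw, simd))
```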
%% Cell type:markdown id: tags:
Below we visualize the model in Netron to observe the `StreamingDataWidthConverter` and `StreamingFIFO` nodes that have been inserted into the graph, as well as the folding factors in the `PE` and `SIMD` attributes of each `StreamingFCLayer_Batch`.
%% Cell type:code id: tags:
``` python
showInNetron(build_dir + "/end2end_cnv_w1a1_folded.onnx")
```
%% Output
Stopping http://0.0.0.0:8081
Serving '/workspace/finn/end2end_cnv_w1a1_folded.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7f0cb1098748>
%% Cell type:markdown id: tags:
Our network is now ready and we can start with the hardware generation.
%% Cell type:markdown id: tags:
## 4. Hardware Generation
From this point onward, the steps we have to follow do not depend on the particular network and will be exactly the same as the TFC-w1a1 example. We first proceed with HLS synthesis, **which may take 10-20 minutes depending on your host computer**.
%% Cell type:code id: tags:
``` python
from finn.transformation.fpgadataflow.codegen_ipgen import CodeGen_ipgen
from finn.transformation.fpgadataflow.hlssynth_ipgen import HLSSynth_IPGen
from finn.util.basic import pynq_part_map
test_pynq_board = "Pynq-Z1"
test_fpga_part = pynq_part_map[test_pynq_board]
target_clk_ns = 5
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_folded.onnx")
model = model.transform(CodeGen_ipgen(test_fpga_part, target_clk_ns))
model = model.transform(HLSSynth_IPGen())
model.save(build_dir + "/end2end_cnv_w1a1_ipgen.onnx")
```
%% Cell type:markdown id: tags:
Once the HLS synthesis is complete, we can stitch together the generated IP blocks into a larger IP that is the implementation of our network:
%% Cell type:code id: tags:
``` python
from finn.transformation.fpgadataflow.replace_verilog_relpaths import (
    ReplaceVerilogRelPaths,
)
from finn.transformation.fpgadataflow.codegen_ipstitch import CodeGen_ipstitch
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_ipgen.onnx")
model = model.transform(ReplaceVerilogRelPaths())
model = model.transform(CodeGen_ipstitch(test_fpga_part))
model.save(build_dir + "/end2end_cnv_w1a1_ipstitch.onnx")
```
%% Cell type:markdown id: tags:
Finally, we create a PYNQ project that includes the hardware "shell" that will support our accelerator, including the data movers, and run Vivado synthesis, **which may take around 30 minutes depending on your host computer.**
*If you'd like to watch the progress, you can open the generated project file (printed below) with the Vivado GUI.*
%% Cell type:code id: tags:
``` python
from finn.transformation.fpgadataflow.make_pynq_proj import MakePYNQProject
from finn.transformation.fpgadataflow.synth_pynq_proj import SynthPYNQProject
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_ipstitch.onnx")
model = model.transform(MakePYNQProject(test_pynq_board))
vivado_proj = model.get_metadata_prop("vivado_pynq_proj")
print("Vivado synthesis project is at %s/resizer.xpr" % vivado_proj)
model.save(build_dir + "/end2end_cnv_w1a1_pynqproj.onnx")
```
%% Output
Vivado synthesis project is at /tmp/finn_dev_maltanar/vivado_pynq_proj_96qtjweo/resizer.xpr
%% Cell type:code id: tags:
``` python
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_pynqproj.onnx")
model = model.transform(SynthPYNQProject())
model.save(build_dir + "/end2end_cnv_w1a1_synth.onnx")
```
%% Cell type:markdown id: tags:
## 5. Deployment and Remote Execution
Now that we're done with the hardware generation, we can generate a Python driver for the accelerator and copy the necessary files onto our PYNQ board.
%% Cell type:code id: tags:
``` python
import os
from finn.transformation.fpgadataflow.make_pynq_driver import MakePYNQDriver
from finn.transformation.fpgadataflow.make_deployment import DeployToPYNQ
# set up the following values according to your own environment
# FINN will use ssh to deploy and run the generated accelerator
ip = os.getenv("PYNQ_IP", "192.168.1.99")
username = os.getenv("PYNQ_USERNAME", "xilinx")
password = os.getenv("PYNQ_PASSWORD", "xilinx")
port = os.getenv("PYNQ_PORT", 22)
target_dir = os.getenv("PYNQ_TARGET_DIR", "/home/xilinx/finn")
model = ModelWrapper(build_dir + "/end2end_cnv_w1a1_synth.onnx")
model = model.transform(MakePYNQDriver())
model = model.transform(DeployToPYNQ(ip, port, username, password, target_dir))
model.save(build_dir + "/end2end_cnv_w1a1_pynq_deploy.onnx")
```
%% Cell type:code id: tags:
``` python
! sshpass -p {password} ssh {username}@{ip} -p {port} 'ls -l {target_dir}/*'
```
%% Output
total 4260
-rw-r--r-- 1 xilinx xilinx 6380 May 7 15:14 driver.py
drwxr-xr-x 4 xilinx xilinx 4096 May 7 15:14 finn
-rw-r--r-- 1 xilinx xilinx 4045675 May 7 15:14 resizer.bit
-rw-r--r-- 1 xilinx xilinx 302015 May 7 15:14 resizer.hwh
%% Cell type:markdown id: tags:
We only have two more steps to be able to remotely execute the deployed bitfile with some test data from the CIFAR-10 dataset. Let's load up some test data that comes bundled with FINN -- and before you ask, that's supposed to be a cat (CIFAR-10 class number 3).
%% Cell type:code id: tags:
``` python
import pkg_resources as pk
import matplotlib.pyplot as plt
import numpy as np
fn = pk.resource_filename("finn", "data/cifar10/cifar10-test-data-class3.npz")
x = np.load(fn)["arr_0"].astype(np.float32)
x = x / 255
plt.imshow(x.reshape(3, 32, 32).transpose(1, 2, 0))
```
%% Output
<matplotlib.image.AxesImage at 0x7f0c2b2c6908>
%% Cell type:markdown id: tags:
Recall that we partitioned our original network into a parent graph that contained the non-synthesizable nodes and a child graph that contained the bulk of the network, which we turned into a bitfile. We'll load up the parent graph and modify the `StreamingDataflowPartition` node so that it points to the deployed ONNX model.
%% Cell type:code id: tags:
``` python
# point to the PYNQ-deployed model as the StreamingDataflowPartition in the parent
parent_model = ModelWrapper(build_dir+"/end2end_cnv_w1a1_dataflow_parent.onnx")
sdp_node = parent_model.get_nodes_by_op_type("StreamingDataflowPartition")[0]
sdp_node = getCustomOp(sdp_node)
sdp_node.set_nodeattr("model", build_dir + "/end2end_cnv_w1a1_pynq_deploy.onnx")
parent_model.save(build_dir+"/end2end_cnv_w1a1_dataflow_parent_with_remote_bitfile_exec.onnx")
```
%% Cell type:markdown id: tags:
Finally, we can call `execute_onnx` on the parent graph, which will internally call remote execution with the bitfile once the `StreamingDataflowPartition` node is reached, grab the results, then continue executing the last portion of the network.
%% Cell type:code id: tags:
``` python
import numpy as np
from finn.core.onnx_exec import execute_onnx
iname = parent_model.graph.input[0].name
oname = parent_model.graph.output[0].name
ishape = parent_model.get_tensor_shape(iname)
input_dict = {iname: x.reshape(ishape)}
ret = execute_onnx(parent_model, input_dict, True)
```
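%% Cell type:markdown id: tags:
Since we passed `True` as the third argument, `execute_onnx` returns the full execution context rather than just the graph outputs, so `ret` contains an entry for every tensor in the parent graph. A quick peek at the final output (a small sketch):
%% Cell type:code id: tags:
``` python
# the output tensor of the parent graph holds the ten CIFAR-10 class scores
print(ret[oname].shape)
```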
%% Cell type:markdown id: tags:
We'll pass the output of the network through a softmax function to interpret it as probabilities, and plot the per-class probabilities as a bar chart.
%% Cell type:code id: tags:
``` python
def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()
logits = ret[oname].flatten()
prob = softmax(logits)
classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
plt.figure(figsize=(20, 3))
plt.bar(classes, prob)
```
%% Output
<BarContainer object of 10 artists>
%% Cell type:markdown id: tags:
We see that the network correctly predicts this as a class 3 ("cat") with high probability. This concludes our tutorial on how to take a convolutional BNN all the way down to hardware with FINN, and execute it remotely on a PYNQ board.
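%% Cell type:markdown id: tags:
As a final programmatic check (a small sketch, not part of the original flow), we can confirm that the top-1 prediction indeed corresponds to the "cat" class:
%% Cell type:code id: tags:
``` python
# the test image is from CIFAR-10 class 3 ("cat"), so the argmax over the class probabilities should match
top1 = int(np.argmax(prob))
print("Top-1 prediction: %s (class %d)" % (classes[top1], top1))
assert top1 == classes.index("cat")
```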
%% Cell type:code id: tags:
``` python
```