Commit 6be464a4 authored by Yaman Umuroglu

[Notebook] add option to use prequantized dataset for cybsec MLP

parent 01431f34
%% Cell type:markdown id: tags:
# Verify Exported ONNX Model in FINN
**Important: This notebook depends on the 1-train-mlp-with-brevitas notebook, because we are using the ONNX model that was exported there. So please make sure the needed .onnx file is generated before you run this notebook.**
**Also remember to 'close and halt' any other FINN notebooks, since Netron visualizations use the same port.**
In this notebook we will show how to import the network we trained in Brevitas and verify it in the FINN compiler.
This verification process can actually be done at various stages in the compiler [as explained in this notebook](../bnn-pynq/tfc_end2end_verification.ipynb) but for this example we'll only consider the first step: verifying the exported high-level FINN-ONNX model.
Once this model is successfully verified, we'll generate an FPGA accelerator from it in the next notebook.
%% Cell type:code id: tags:
``` python
import onnx
import torch
```
%% Cell type:markdown id: tags:
**This is important -- always import onnx before torch**. This is a workaround for a [known bug](https://github.com/onnx/onnx/issues/2394).
%% Cell type:markdown id: tags:
## Outline
-------------
1. [Import model and visualize in Netron](#brevitas_import_visualization)
2. [Network preparation: Tidy-up transformations](#network_preparations)
3. [Load the dataset and Brevitas model](#load_dataset)
4. [Compare FINN and Brevitas execution](#compare_brevitas)
%% Cell type:markdown id: tags:
# 1. Import model and visualize in Netron <a id="brevitas_import_visualization"></a>
Now that we have the model in .onnx format, we can work with it using FINN. To import it into FINN, we'll use the [`ModelWrapper`](https://finn.readthedocs.io/en/latest/source_code/finn.core.html#finn.core.modelwrapper.ModelWrapper). It is a wrapper around the ONNX model which provides several helper functions to make it easier to work with the model.
%% Cell type:code id: tags:
``` python
from finn.core.modelwrapper import ModelWrapper
model_file_path = "cybsec-mlp.onnx"
model_for_sim = ModelWrapper(model_file_path)
```
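%% Cell type:markdown id: tags:
To get a feel for the wrapper, here is a small sketch (using only helpers and attributes that appear later in this notebook) that inspects the imported graph:
%% Cell type:code id: tags:
``` python
# peek at the imported graph via ModelWrapper
print("Number of nodes: %d" % len(model_for_sim.graph.node))
print("Graph inputs: %s" % str([i.name for i in model_for_sim.graph.input]))
print("Graph outputs: %s" % str([o.name for o in model_for_sim.graph.output]))
```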
%% Cell type:markdown id: tags:
To visualize the exported model, we can use Netron, a visualizer for neural networks that allows interactive investigation of network properties. For example, you can click on individual nodes to view their properties.
%% Cell type:code id: tags:
``` python
from finn.util.visualization import showInNetron
showInNetron(model_file_path)
```
%% Output
Serving 'cybsec-mlp.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7fc1fc950748>
%% Cell type:markdown id: tags:
# 2. Network preparation: Tidy-up transformations <a id="network_preparations"></a>
Before running the verification, we need to prepare our FINN-ONNX model: in particular, all intermediate tensors need to have statically defined shapes. To do this, we apply a set of "tidy-up" transformations to the model to make it easier to process. You can read more about these transformations in [this notebook](../bnn-pynq/tfc_end2end_example.ipynb).
%% Cell type:code id: tags:
``` python
from finn.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames, RemoveStaticGraphInputs
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.fold_constants import FoldConstants
model_for_sim = model_for_sim.transform(InferShapes())
model_for_sim = model_for_sim.transform(FoldConstants())
model_for_sim = model_for_sim.transform(GiveUniqueNodeNames())
model_for_sim = model_for_sim.transform(GiveReadableTensorNames())
model_for_sim = model_for_sim.transform(InferDataTypes())
model_for_sim = model_for_sim.transform(RemoveStaticGraphInputs())
```
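%% Cell type:markdown id: tags:
As a quick sanity check that the tidy-up worked, we can confirm that the intermediate tensors now carry readable names and static shapes (a small sketch reusing the `ModelWrapper` helpers seen above):
%% Cell type:code id: tags:
``` python
# after tidy-up, intermediate tensors should have readable names and static shapes
for t in model_for_sim.graph.value_info[:5]:
    print(t.name, model_for_sim.get_tensor_shape(t.name))
```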
%% Cell type:markdown id: tags:
**Would the FINN compiler still work if we didn't do this?** The compilation step in the next notebook applies these transformations internally and would work fine, but we're going to use FINN's verification capabilities below and these require the tidy-up transformations.
There's one more thing we'll do: we will mark the input tensor datatype as bipolar, which will be used by the compiler later on.
*In the near future it will be possible to add this information to the model while exporting, instead of having to add it manually.*
%% Cell type:code id: tags:
``` python
from finn.core.datatype import DataType
finnonnx_in_tensor_name = model_for_sim.graph.input[0].name
finnonnx_out_tensor_name = model_for_sim.graph.output[0].name
print("Input tensor name: %s" % finnonnx_in_tensor_name)
print("Output tensor name: %s" % finnonnx_out_tensor_name)
finnonnx_model_in_shape = model_for_sim.get_tensor_shape(finnonnx_in_tensor_name)
print("Input tensor shape: %s" % str(finnonnx_model_in_shape))
model_for_sim.set_tensor_datatype(finnonnx_in_tensor_name, DataType.BIPOLAR)
print("Input tensor datatype: %s" % str(model_for_sim.get_tensor_datatype(finnonnx_in_tensor_name)))
verified_model_filename = "cybsec-mlp-verified.onnx"
model_for_sim.save(verified_model_filename)
```
%% Output
Input tensor name: global_in
Output tensor name: global_out
Input tensor shape: [1, 600]
Input tensor datatype: DataType.BIPOLAR
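%% Cell type:markdown id: tags:
As an aside, FINN's `DataType` enum also lets us query properties of the datatype we just set. A minimal sketch, assuming the helper methods provided by `finn.core.datatype`:
%% Cell type:code id: tags:
``` python
# BIPOLAR is a 1-bit type whose only legal values are -1 and +1
print("bitwidth:", DataType.BIPOLAR.bitwidth())
print("min/max: ", DataType.BIPOLAR.min(), DataType.BIPOLAR.max())
print("allows 0?", DataType.BIPOLAR.allowed(0))
```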
%% Cell type:markdown id: tags:
Let's view our ready-to-go model. Some changes to note:
* all intermediate tensors now have their shapes specified (indicated by numbers next to the arrows going between layers)
* the datatype on the input tensor is set to DataType.BIPOLAR (click on the `global_in` node to view properties)
%% Cell type:code id: tags:
``` python
showInNetron(verified_model_filename)
```
%% Output
Stopping http://0.0.0.0:8081
Serving 'cybsec-mlp-verified.onnx' at http://0.0.0.0:8081
<IPython.lib.display.IFrame at 0x7fc280154278>
%% Cell type:markdown id: tags:
# 3. Load the Dataset and the Brevitas Model <a id="load_dataset"></a>
We'll use some example data from the quantized UNSW-NB15 dataset (from the previous notebook) as inputs for the verification.
Recall that the quantized values from the dataset are 593-bit binary {0, 1} vectors, whereas our exported model takes 600-bit bipolar {-1, +1} vectors, so we'll have to preprocess the data a bit before we can use it for verifying the ONNX model. As a preview, here is that preprocessing in miniature on a dummy vector (the same pad-then-rescale steps we'll apply in the FINN helper function further below):
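%% Cell type:code id: tags:
``` python
import numpy as np
# dummy binary {0, 1} input with the dataset's 593 features
dummy = np.zeros((1, 593), dtype=np.float32)
dummy[0, 0] = 1
padded = np.pad(dummy, [(0, 0), (0, 7)])  # pad 593 -> 600 with zeros
bipolar = 2 * padded - 1                  # {0, 1} -> {-1, +1}
print(padded.shape, bipolar[0, :3])
```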
%% Cell type:code id: tags:
``` python
import numpy as np
from torch.utils.data import TensorDataset

def get_preqnt_dataset(data_dir: str, train: bool):
    # load the pre-quantized (binarized) UNSW-NB15 data
    unsw_nb15_data = np.load(data_dir + "/unsw_nb15_binarized.npz")
    if train:
        partition = "train"
    else:
        partition = "test"
    part_data = unsw_nb15_data[partition].astype(np.float32)
    part_data = torch.from_numpy(part_data)
    # last column is the label, the rest are input features
    part_data_in = part_data[:, :-1]
    part_data_out = part_data[:, -1]
    return TensorDataset(part_data_in, part_data_out)

n_verification_inputs = 100
test_quantized_dataset = get_preqnt_dataset(".", False)
input_tensor = test_quantized_dataset.tensors[0][:n_verification_inputs]
input_tensor.shape
```
%% Output
torch.Size([100, 593])
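%% Cell type:markdown id: tags:
The exported model expects bipolar inputs, but the loaded samples should still be binary {0, 1} at this point. A one-line check (plain PyTorch, nothing FINN-specific):
%% Cell type:code id: tags:
``` python
# all values in a quantized sample should be 0 or 1
print(torch.unique(input_tensor[0]))
```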
%% Cell type:markdown id: tags:
Let's also bring up the MLP we trained in Brevitas from the previous notebook. We'll compare its outputs to what is generated by FINN.
%% Cell type:code id: tags:
``` python
input_size = 593
hidden1 = 64
hidden2 = 64
hidden3 = 64
weight_bit_width = 2
act_bit_width = 2
num_classes = 1
from brevitas.nn import QuantLinear, QuantReLU
import torch.nn as nn
brevitas_model = nn.Sequential(
    QuantLinear(input_size, hidden1, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden1),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden1, hidden2, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden2),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden2, hidden3, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden3),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden3, num_classes, bias=True, weight_bit_width=weight_bit_width)
)
# replace this with your trained network checkpoint if you're not
# using the pretrained weights
trained_state_dict = torch.load("state_dict.pth")["models_state_dict"][0]
# Uncomment the following line if you previously chose to train the network yourself
#trained_state_dict = torch.load("state_dict_self-trained.pth")
brevitas_model.load_state_dict(trained_state_dict, strict=False)
```
%% Output
IncompatibleKeys(missing_keys=[], unexpected_keys=[])
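%% Cell type:markdown id: tags:
As a quick sanity check on what we just loaded, we can count the model's parameters (standard PyTorch parameter counting, nothing Brevitas-specific):
%% Cell type:code id: tags:
``` python
# total parameters in the Brevitas MLP
num_params = sum(p.numel() for p in brevitas_model.parameters())
print("Parameters: %d" % num_params)
```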
%% Cell type:code id: tags:
``` python
def inference_with_brevitas(current_inp):
    brevitas_output = brevitas_model.forward(current_inp)
    # apply sigmoid + threshold
    brevitas_output = torch.sigmoid(brevitas_output)
    brevitas_output = (brevitas_output.detach().numpy() > 0.5) * 1
    # convert output to bipolar
    brevitas_output = 2 * brevitas_output - 1
    return brevitas_output
```
%% Cell type:markdown id: tags:
# 4. Compare FINN & Brevitas execution <a id="compare_brevitas"></a>
%% Cell type:markdown id: tags:
Let's make helper functions to execute the same input with Brevitas and FINN. For FINN, we'll use the [`finn.core.onnx_exec`](https://finn.readthedocs.io/en/latest/source_code/finn.core.html#finn.core.onnx_exec.execute_onnx) function to execute the exported FINN-ONNX on the inputs.
%% Cell type:code id: tags:
``` python
def inference_with_finn_onnx(current_inp):
    # convert input to numpy for FINN
    current_inp = current_inp.detach().numpy()
    # add padding and re-scale to bipolar
    current_inp = np.pad(current_inp, [(0, 0), (0, 7)])
    current_inp = 2 * current_inp - 1
    # reshape to expected input (add 1 for batch dimension)
    current_inp = current_inp.reshape(finnonnx_model_in_shape)
    # create the input dictionary
    input_dict = {finnonnx_in_tensor_name: current_inp}
    # run with FINN's execute_onnx
    output_dict = oxe.execute_onnx(model_for_sim, input_dict)
    # get the output tensor
    finn_output = output_dict[finnonnx_out_tensor_name]
    return finn_output
```
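%% Cell type:markdown id: tags:
Before the full sweep, a single-sample smoke test of both helpers can catch setup problems early. Note the call to `brevitas_model.eval()`: the model contains Dropout layers, which would otherwise make the Brevitas output nondeterministic.
%% Cell type:code id: tags:
``` python
import finn.core.onnx_exec as oxe
brevitas_model.eval()  # disable dropout so outputs are deterministic
sample = input_tensor[0].reshape((1, 593))
print("Brevitas:", inference_with_brevitas(sample))
print("FINN:    ", inference_with_finn_onnx(sample))
```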
%% Cell type:markdown id: tags:
Now we can call our inference helper functions for each input and compare the outputs.
%% Cell type:code id: tags:
``` python
import finn.core.onnx_exec as oxe
import numpy as np
from tqdm import trange
verify_range = trange(n_verification_inputs, desc="FINN execution", position=0, leave=True)
brevitas_model.eval()
ok = 0
nok = 0
for i in verify_range:
    # run in Brevitas with PyTorch tensor
    current_inp = input_tensor[i].reshape((1, 593))
    brevitas_output = inference_with_brevitas(current_inp)
    finn_output = inference_with_finn_onnx(current_inp)
    # compare the outputs
    ok += 1 if finn_output == brevitas_output else 0
    nok += 1 if finn_output != brevitas_output else 0
    verify_range.set_description("ok %d nok %d" % (ok, nok))
    verify_range.refresh()  # to show the update immediately
```
%% Output
ok 100 nok 0: 100%|██████████| 100/100 [00:46<00:00, 2.17it/s]
%% Cell type:code id: tags:
``` python
if ok == n_verification_inputs:
    print("Verification succeeded. Brevitas and FINN-ONNX execution outputs are identical")
else:
    print("Verification failed. Brevitas and FINN-ONNX execution outputs are NOT identical")
```
%% Output
Verification succeeded. Brevitas and FINN-ONNX execution outputs are identical
%% Cell type:markdown id: tags:
This concludes our second notebook. In the next one, we'll take the ONNX model we just verified all the way down to FPGA hardware with the FINN compiler.
%% Cell type:code id: tags:
``` python
```