Commit 909e2dcb authored by auphelia

[notebook - modelwrapper] Fixed some spelling mistakes
%% Cell type:markdown id: tags:
# FINN - ModelWrapper
--------------------------------------
<font size="3"> This notebook is about the ModelWrapper class within FINN.
The following showSrc function is used to print the source code of function calls in this Jupyter notebook:</font>
%% Cell type:code id: tags:
``` python
import inspect

def showSrc(what):
    print("".join(inspect.getsourcelines(what)[0]))
```
%% Cell type:markdown id: tags:
## General Information
------------------------------
* <font size="3"> wrapper around ONNX ModelProto that exposes several utility
functions for graph manipulation and exploration </font>
* <font size="3"> ModelWrapper instance takes ONNX ModelProto and `make_deepcopy` flag as input </font>
* <font size="3"> ONNX ModelProto can either be a string with the path to a stored .onnx file on disk, or serialized bytes </font>
* <font size="3"> `make_deepcopy` is by default False but can be set to True if a (deep) copy should be created </font>
%% Cell type:markdown id: tags:
### Create a ModelWrapper instance
<font size="3">Here we use a premade ONNX file on disk to load up the ModelWrapper, but this could have been produced from e.g. a trained Brevitas PyTorch model. See [this notebook](brevitas-network-import.ipynb) for more details.</font>
%% Cell type:code id: tags:
``` python
from finn.core.modelwrapper import ModelWrapper
model = ModelWrapper("LFCW1A1.onnx")
```
%% Cell type:markdown id: tags:
### Access the ONNX GraphProto through ModelWrapper
<font size="3">ModelWrapper is a thin wrapper around the ONNX protobuf, and it offers a range of helper functions as well as direct access to the underlying protobuf. The `.model` member gives access to the full ONNX ModelProto, whereas `.graph` gives access to the GraphProto, as follows:</font>
%% Cell type:code id: tags:
``` python
# access the ONNX ModelProto
modelproto = model.model
print("ModelProto IR version is %d" % modelproto.ir_version)
# the graph
graphproto = model.graph
print("GraphProto top-level outputs are %s" % str(graphproto.output))
# the node list
nodes = model.graph.node
print("There are %d nodes in the graph" % len(nodes))
print("The first node is \n%s" % str(nodes[0]))
```
%% Output
ModelProto IR version is 4
GraphProto top-level outputs are [name: "60"
type {
tensor_type {
elem_type: 1
shape {
dim {
dim_value: 1
}
dim {
dim_value: 10
}
}
}
}
]
There are 29 nodes in the graph
The first node is
input: "0"
output: "21"
op_type: "Shape"
%% Cell type:markdown id: tags:
### Helper functions for tensors
<font size="3"> Every input and output of every node in the ONNX model is represented as a tensor with several properties (i.e. name, shape, data type). ModelWrapper provides some utility functions to work with these tensors. </font>
%% Cell type:markdown id: tags:
##### Get all tensor names
<font size="3">Produces a list of all tensor names (inputs, activations, weights, outputs...) in the graph.</font>
%% Cell type:code id: tags:
``` python
# get all tensor names
tensor_list = model.get_all_tensor_names()
print(tensor_list)
```
%% Output
['0', 'features.3.weight', 'features.3.bias', 'features.3.running_mean', 'features.3.running_var', 'features.7.weight', 'features.7.bias', 'features.7.running_mean', 'features.7.running_var', 'features.11.weight', 'features.11.bias', 'features.11.running_mean', 'features.11.running_var', '20', '23', '28', '30', '33', '34', '41', '42', '49', '50', '57', '58', '60']
%% Cell type:markdown id: tags:
##### Producer and consumer of a tensor
<font size="3">A tensor can have a producer node and/or a consumer node in the ONNX model. ModelWrapper provides two helper functions to access these nodes; they are shown below.
A tensor may have no producer or consumer node, for example if the tensor represents a constant that is already set. In that case `None` is returned.</font>
%% Cell type:code id: tags:
``` python
# pick a tensor from the list and find its producer and consumer (each returns a node)
tensor_name = tensor_list[25]
print("Producer node of tensor {}:".format(tensor_name))
print(model.find_producer(tensor_name))
tensor_name = tensor_list[0]
print("Consumer node of tensor {}:".format(tensor_name))
print(model.find_consumer(tensor_name))
print("Producer of tensor 0: %s" % str(model.find_producer("0")))
```
%% Output
Producer node of tensor 60:
input: "59"
input: "58"
output: "60"
op_type: "Mul"
Consumer node of tensor 0:
input: "0"
output: "21"
op_type: "Shape"
Producer of tensor 0: None
%% Cell type:markdown id: tags:
##### Tensor shape
<font size="3">Each tensor has a specific shape which can be accessed with the following ModelWrapper helper functions.</font>
%% Cell type:code id: tags:
``` python
# get tensor_shape
print("Shape of tensor 0 is %s" % str(model.get_tensor_shape("0")))
```
%% Output
Shape of tensor 0 is [1, 1, 28, 28]
%% Cell type:markdown id: tags:
<font size="3">It is also possible to set the tensor shape with a helper function. The syntax would be the following:
`onnx_model.set_tensor_shape(tensor_name, tensor_shape)`
Optionally, the dtype (container datatype) of the tensor can also be specified as a third argument. By default it is set to TensorProto.FLOAT.
**Important:** dtype should not be confused with FINN data type, which specifies the quantization annotation. See the remarks about FINN-ONNX in [this notebook](finn-basics.ipynb). It is safest to use floating point tensors as the container data type for best compatibility inside FINN.</font>
%% Cell type:markdown id: tags:
##### Tensor FINN DataType
%% Cell type:markdown id: tags:
<font size="3">FINN introduces its [own data types](https://github.com/Xilinx/finn/blob/dev/src/finn/core/datatype.py) because ONNX does not natively support precisions of less than 8 bits. FINN is about quantized neural networks, so precisions of e.g. 4 bits, 3 bits, 2 bits or 1 bit are of interest. To represent the data within FINN, float tensors are used with an additional annotation that specifies the quantized data type of a tensor. The following helper functions deal with this quantization annotation.</font>
%% Cell type:code id: tags:
``` python
# get tensor data type (FINN data type)
print("The FINN DataType of tensor 0 is " + str(model.get_tensor_datatype("0")))
print("The FINN DataType of tensor 32 is " + str(model.get_tensor_datatype("32")))
```
%% Output
The FINN DataType of tensor 0 is DataType.FLOAT32
The FINN DataType of tensor 32 is DataType.BIPOLAR
%% Cell type:markdown id: tags:
<font size="3">In addition to the `get_tensor_datatype()` function, the (FINN) datatype of a tensor can be set using the `set_tensor_datatype(tensor_name, datatype)` function.</font>
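%% Cell type:markdown id: tags:
<font size="3">A minimal sketch (reusing the `model` instance from earlier; the annotation value simply re-applies what we observed above for tensor 32):</font>
%% Cell type:code id: tags:
``` python
from finn.core.datatype import DataType

# annotate tensor "32" as bipolar; this changes only the quantization
# annotation, not the underlying float container values
model.set_tensor_datatype("32", DataType.BIPOLAR)
print(model.get_tensor_datatype("32"))
```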
%% Cell type:markdown id: tags:
##### Tensor initializers
<font size="3">Some tensors have *initializers*, such as tensors that represent constants (e.g. the trained weight values).
ModelWrapper contains two helper functions for this case: one to retrieve the current initializer of a tensor and one to set it. If there is no initializer, `None` is returned.</font>
%% Cell type:code id: tags:
``` python
# get tensor initializer
print("Initializer for tensor 33:\n" + str(model.get_initializer("33")))
print("Initializer for tensor 0:\n" + str(model.get_initializer("0")))
```
%% Output
Initializer for tensor 33:
[[ 1. 1. 1. ... 1. 1. -1.]
[ 1. 1. -1. ... 1. 1. -1.]
[-1. 1. -1. ... -1. 1. -1.]
...
[-1. 1. -1. ... -1. -1. 1.]
[ 1. 1. -1. ... 1. 1. -1.]
[-1. 1. 1. ... -1. -1. 1.]]
Initializer for tensor 0:
None
%% Cell type:markdown id: tags:
<font size="3">Like for the other tensor helper functions there is a corresponding set function (`set_initializer(tensor_name, tensor_value)`).</font>
%% Cell type:markdown id: tags:
### More helper functions
<font size="3">ModelWrapper contains more useful functions; if you are interested, please have a look at the [Python code](https://github.com/Xilinx/finn/blob/dev/src/finn/core/modelwrapper.py) directly. Additionally, the notebooks/ folder contains a Jupyter notebook about transformation passes ([FINN-HowToTransformationPass](FINN-HowToTransformationPass.ipynb)) and one about analysis passes ([FINN-HowToAnalysisPass](FINN-HowToAnalysisPass.ipynb)).</font>
%% Cell type:code id: tags:
``` python
```