Core

Modules

finn.core.datatype

class finn.core.datatype.DataType

Bases: enum.Enum

Enum class that contains FINN data types to set the quantization annotation. ONNX does not support data types smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar.

Assignment of DataTypes to indices is based on the following ordering:

  • unsigned to signed

  • fewer to more bits

Currently supported DataTypes:

BINARY = 1
BIPOLAR = 8
FLOAT32 = 16
INT16 = 14
INT2 = 10
INT3 = 11
INT32 = 15
INT4 = 12
INT8 = 13
TERNARY = 9
UINT16 = 6
UINT2 = 2
UINT3 = 3
UINT32 = 7
UINT4 = 4
UINT8 = 5
allowed(value)

Check whether given value is allowed for this DataType.

  • value (float32): value to be checked

bitwidth()

Returns the number of bits required for this DataType.

get_hls_datatype_str()

Returns the corresponding Vivado HLS datatype name.

get_num_possible_values()

Returns the number of possible values this DataType can take. Only implemented for integer types for now.

get_smallest_possible()

Returns the smallest possible DataType (fewest bits) that can represent the given value. Prefers unsigned integers where possible.

is_integer()

Returns whether this DataType represents integer values only.

max()

Returns the largest possible value allowed by this DataType.

min()

Returns the smallest possible value allowed by this DataType.

signed()

Returns whether this DataType can represent negative numbers.
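To make the semantics of the methods above concrete, here is a minimal, self-contained sketch that mimics the behavior of bitwidth(), signed(), min(), max() and allowed() for two of the integer types. MiniDataType is an invented illustration, not the real finn.core.datatype.DataType, which also covers BIPOLAR, TERNARY, FLOAT32 and the HLS name lookup.

```python
from enum import Enum

class MiniDataType(Enum):
    # Indices follow the documented ordering: unsigned before signed,
    # fewer bits before more bits (values copied from the table above).
    UINT4 = 4
    INT4 = 12

    def signed(self):
        # Signed integer types are named INT<n>, unsigned ones UINT<n>
        return self.name.startswith("INT")

    def bitwidth(self):
        # The bit width is encoded in the type name, e.g. INT4 -> 4 bits
        return int(self.name.lstrip("UINT"))

    def min(self):
        # Two's-complement range for signed types, zero-based for unsigned
        return -(2 ** (self.bitwidth() - 1)) if self.signed() else 0

    def max(self):
        if self.signed():
            return 2 ** (self.bitwidth() - 1) - 1
        return 2 ** self.bitwidth() - 1

    def allowed(self, value):
        # Integer types only allow whole numbers within [min, max]
        return float(value).is_integer() and self.min() <= value <= self.max()
```

For example, MiniDataType.INT4 has bitwidth 4, range [-8, 7], and allowed(3.5) is False because 3.5 is not an integer.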

finn.core.execute_custom_node

finn.core.execute_custom_node.execute_custom_node(node, context, graph)

Call custom implementation to execute a single custom node. Input/output provided via context.

finn.core.modelwrapper

class finn.core.modelwrapper.ModelWrapper(onnx_model_proto, make_deepcopy=False)

Bases: object

A wrapper around ONNX ModelProto that exposes some useful utility functions for graph manipulation and exploration.

analysis(analysis_fxn)

Runs the given analysis_fxn on this model and returns the resulting dict.

check_all_tensor_shapes_specified()

Checks whether all tensors have a specified shape (ValueInfo). The ONNX standard allows for intermediate activations to have no associated ValueInfo, but FINN expects this.

check_compatibility()

Checks this model for FINN compatibility:

  • no embedded subgraphs

  • all tensor shapes are specified, including activations

  • all constants are initializers

find_consumer(tensor_name)

Finds and returns the node that consumes the tensor with given name. Currently only works for linear graphs.

find_producer(tensor_name)

Finds and returns the node that produces the tensor with given name. Currently only works for linear graphs.

get_all_tensor_names()

Returns a list of all (input, output and value_info) tensor names in the graph.

get_initializer(tensor_name)

Gets the initializer value for tensor with given name, if any.

get_metadata_prop(key)

Returns the value associated with the metadata_prop with the given key, or None if no such key exists.

get_tensor_datatype(tensor_name)

Returns the FINN DataType of tensor with given name.

get_tensor_fanout(tensor_name)

Returns the number of nodes that consume the tensor with the given name as input.

get_tensor_shape(tensor_name)

Returns the shape of tensor with given name, if it has ValueInfoProto.

get_tensor_valueinfo(tensor_name)

Returns ValueInfoProto of tensor with given name, if it has one.

property graph

Returns the graph of the model.

make_empty_exec_context()

Creates an empty execution context for this model.

The execution context is a dictionary of all tensors used for the inference computation. Any initializer values will be taken into account, all other tensors will be zero.
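The following self-contained sketch illustrates how such a context can be assembled. The tensor names, shapes, and the initializer are invented for illustration; the real implementation derives them from the ONNX graph's ValueInfo and initializers.

```python
import numpy as np

# Hypothetical tensor shapes and one hypothetical initializer ("weights")
tensor_shapes = {"inp": (1, 4), "weights": (4, 2), "out": (1, 2)}
initializers = {"weights": np.ones((4, 2), dtype=np.float32)}

execution_context = {}
for name, shape in tensor_shapes.items():
    if name in initializers:
        # Initializer values are taken into account...
        execution_context[name] = initializers[name]
    else:
        # ...all other tensors start out as zeros
        execution_context[name] = np.zeros(shape, dtype=np.float32)
```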

make_new_valueinfo_name()

Returns a name that can be used for a new value_info.

property model

Returns the model.

rename_tensor(old_name, new_name)

Renames a tensor from old_name to new_name.

save(filename)

Saves the wrapper ONNX ModelProto into a file with given name.

set_initializer(tensor_name, tensor_value)

Sets the initializer value for tensor with given name.

set_metadata_prop(key, value)

Sets metadata property with given key to the given value.

set_tensor_datatype(tensor_name, datatype)

Sets the FINN DataType of tensor with given name.

set_tensor_shape(tensor_name, tensor_shape, dtype=1)

Assigns shape in ValueInfoProto for tensor with given name.

transform(transformation, make_deepcopy=True)

Applies given Transformation repeatedly until no more changes can be made and returns a transformed ModelWrapper instance.

If make_deepcopy is specified, operates on a new (deep) copy of the model.
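The "apply until no more changes can be made" contract can be sketched with a fixpoint loop. The toy transformation below operates on a plain list rather than an ONNX graph and is purely illustrative; it mirrors the convention that a FINN Transformation's apply() reports whether it changed anything, so the wrapper knows when to stop reapplying it.

```python
def remove_adjacent_duplicates(items):
    """Toy 'transformation': drop one adjacent duplicate per pass."""
    for i in range(len(items) - 1):
        if items[i] == items[i + 1]:
            # Return (transformed result, was_changed) like a Transformation
            return items[:i] + items[i + 1:], True
    return items, False  # nothing left to do

def transform(items, transformation):
    # Reapply the transformation until it reports no further changes,
    # mirroring ModelWrapper.transform's repeat-until-fixpoint behavior.
    changed = True
    while changed:
        items, changed = transformation(items)
    return items
```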

finn.core.onnx_exec

finn.core.onnx_exec.compare_execution(model_a, model_b, input_dict, compare_fxn=<function <lambda>>)

Executes two ONNX models and compares their outputs using the given function.

compare_fxn should take two tensors as input and return a Boolean.
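A compare_fxn of the expected shape could look like the sketch below. The function name is invented; np.isclose is used here to tolerate small floating-point differences between the two executions.

```python
import numpy as np

def outputs_match(tensor_a, tensor_b):
    # Elementwise closeness check, collapsed to a single Boolean
    return bool(np.isclose(tensor_a, tensor_b).all())
```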

finn.core.onnx_exec.execute_node(node, context, graph)

Executes a single node using onnxruntime, a custom function, or (for dataflow partitions) remote execution or rtlsim.

Input/output provided via context.

finn.core.onnx_exec.execute_onnx(model, input_dict, return_full_exec_context=False)

Executes given ONNX ModelWrapper with given named inputs.

If return_full_exec_context is False, a dict of named outputs is returned as indicated by the model.graph.output.

If return_full_exec_context is True, the full set of tensors used by the execution (including inputs, weights, activations and final outputs) will be returned as a dict.
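The relationship between the two return modes can be sketched as follows. The tensor names and values are invented for illustration: the output dict is simply the full execution context restricted to the names listed in model.graph.output.

```python
# Hypothetical full execution context after running a model
full_exec_context = {
    "inp": [1, 2],        # model input
    "weights": [3, 4],    # initializer
    "hidden0": [11, 22],  # intermediate activation
    "out": [33, 44],      # listed in model.graph.output
}
graph_output_names = ["out"]

# With return_full_exec_context=False, only the named graph outputs remain
output_dict = {name: full_exec_context[name] for name in graph_output_names}
```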

finn.core.onnx_exec.execute_onnx_and_make_model(model, input_dict)

Executes the given ONNX ModelWrapper with the given named inputs and returns a new ModelWrapper where an initializer is provided for each tensor as taken from the execution. This new model is useful for debugging, since it contains all the intermediate activation values.

finn.core.remote_exec

finn.core.remote_exec.remote_exec(model, execution_context)

Executes the given model remotely on the PYNQ board. The metadata properties related to the PYNQ board have to be set. The execution context contains the input values.

finn.core.rtlsim_exec

finn.core.rtlsim_exec.rtlsim_exec(model, execution_context)

Uses PyVerilator to execute the given model with stitched IP. The execution context contains the input values.