diff --git a/docs/finn/index.rst b/docs/finn/index.rst
index 969b8afad5d259f8e3e3d49e95d52c93917a5e01..4a9452c7274c81dae9460eb17638638dab6963cb 100644
--- a/docs/finn/index.rst
+++ b/docs/finn/index.rst
@@ -25,7 +25,7 @@ What is FINN?
 More FINN Resources
 ===================
 
-* `List of publications <https://github.com/Xilinx/finn/blob/dev/docs/publications.md>`_
+* `List of publications <https://github.com/Xilinx/finn/blob/master/docs/publications.md>`_
 
 * `Roadmap <https://github.com/Xilinx/finn/projects/1>`_
 
diff --git a/docs/finn/tutorials.rst b/docs/finn/tutorials.rst
index 961fea47dda433a947a5caf65da22a50e0a97483..cd852d82b51424c947f3b86a0f8056df0bdee831 100644
--- a/docs/finn/tutorials.rst
+++ b/docs/finn/tutorials.rst
@@ -13,19 +13,19 @@ Basics
 The notebooks in this folder should give a basic insight into FINN, how to get started and the basic concepts.
 
-* `0_getting_started <https://github.com/Xilinx/finn/blob/dev/notebooks/basics/0_getting_started.ipynb>`_
-
+* `0_getting_started <https://github.com/Xilinx/finn/blob/master/notebooks/basics/0_getting_started.ipynb>`_
+
   * This notebook corresponds to the chapter :ref:`getting_started` and gives an overview how to start working with FINN.
 
-* `1_how_to_work_with_onnx <https://github.com/Xilinx/finn/blob/dev/notebooks/basics/1_how_to_work_with_onnx.ipynb>`_
+* `1_how_to_work_with_onnx <https://github.com/Xilinx/finn/blob/master/notebooks/basics/1_how_to_work_with_onnx.ipynb>`_
 
   * This notebook can help you to learn how to create and manipulate a simple ONNX model, also by using FINN
 
-* `2_modelwrapper <https://github.com/Xilinx/finn/blob/dev/notebooks/basics/2_modelwrapper.ipynb>`_
+* `2_modelwrapper <https://github.com/Xilinx/finn/blob/master/notebooks/basics/2_modelwrapper.ipynb>`_
 
   * This notebook corresponds to the section :ref:`modelwrapper` in the chapter about internals.
 
-* `3_brevitas_network_import <https://github.com/Xilinx/finn/blob/dev/notebooks/basics/3_brevitas_network_import.ipynb>`_
+* `3_brevitas_network_import <https://github.com/Xilinx/finn/blob/master/notebooks/basics/3_brevitas_network_import.ipynb>`_
 
   * This notebook shows how to import a brevitas network and prepare it for the FINN flow.
 
@@ -34,19 +34,19 @@ Internals
 The notebooks in this folder are more developer oriented. They should help you to get familiar with the principles in FINN and how to add new content regarding these concepts.
 
-* `0_custom_analysis_pass <https://github.com/Xilinx/finn/blob/dev/notebooks/internals/0_custom_analysis_pass.ipynb>`_
+* `0_custom_analysis_pass <https://github.com/Xilinx/finn/blob/master/notebooks/internals/0_custom_analysis_pass.ipynb>`_
 
   * This notebook explains what an analysis pass is and how to write one for FINN.
 
-* `1_custom_transformation_pass <https://github.com/Xilinx/finn/blob/dev/notebooks/internals/1_custom_transformation_pass.ipynb>`_
+* `1_custom_transformation_pass <https://github.com/Xilinx/finn/blob/master/notebooks/internals/1_custom_transformation_pass.ipynb>`_
 
   * This notebook explains what a transformation pass is and how to write one for FINN.
 
-* `2_custom_op <https://github.com/Xilinx/finn/blob/dev/notebooks/internals/2_custom_op.ipynb>`_
+* `2_custom_op <https://github.com/Xilinx/finn/blob/master/notebooks/internals/2_custom_op.ipynb>`_
 
   * This notebooks explains what a custom operation/node is and how to create one for FINN.
 
-* `3_verify_hls_custom_op <https://github.com/Xilinx/finn/blob/dev/notebooks/internals/3_verify_hls_custom_op.ipynb>`_
+* `3_verify_hls_custom_op <https://github.com/Xilinx/finn/blob/master/notebooks/internals/3_verify_hls_custom_op.ipynb>`_
 
   * This notebook shows the functional verification flow for hls custom operations/nodes.
 
@@ -55,6 +55,4 @@ End-to-End Flow
 This notebook shows the FINN end-to-end flow step by step using an example of a simple, binarized, fully-connected network trained on the MNIST data set. Starting with the brevitas export and taking this particular network all the way down to hardware by using a specific sequence of transformations.
 
-* `tfc_end2end_example <https://github.com/Xilinx/finn/blob/dev/notebooks/end2end_example/tfc_end2end_example.ipynb>`_
-
-
+* `tfc_end2end_example <https://github.com/Xilinx/finn/blob/master/notebooks/end2end_example/tfc_end2end_example.ipynb>`_
 
diff --git a/notebooks/basics/0_getting_started.ipynb b/notebooks/basics/0_getting_started.ipynb
index 57f8e5947dfb2ff8525cc0691cc6dd41e15c4bf7..07b2a2ba6d5a21be15de5c4061500d83b2aefdf3 100644
--- a/notebooks/basics/0_getting_started.ipynb
+++ b/notebooks/basics/0_getting_started.ipynb
@@ -72,13 +72,13 @@
     "\n",
     "FINN uses ONNX is a specific way that we refer to as FINN-ONNX, and not all ONNX graphs are supported by FINN (and vice versa). Here is a list of key points to keep in mind:\n",
     "\n",
-    "* *Custom quantization annotations but data stored as float.* ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN uses the `quantization_annotation` field in ONNX to annotate tensors with their [FINN DataType](https://github.com/Xilinx/finn/blob/dev/src/finn/core/datatype.py) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.\n",
+    "* *Custom quantization annotations but data stored as float.* ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN uses the `quantization_annotation` field in ONNX to annotate tensors with their [FINN DataType](https://github.com/Xilinx/finn/blob/master/src/finn/core/datatype.py) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.\n",
     "\n",
     "* *Custom operations/nodes.* FINN uses many custom operations (`op_type` in ONNX NodeProto) that are not defined in the ONNX operator schema. These custom nodes are marked with `domain=\"finn\"` in the protobuf to identify them as such. These nodes can represent specific operations that we need for low-bit networks, or operations that are specific to a particular hardware backend.\n",
     "\n",
-    "* *Custom ONNX execution flow* To verify correct operation of FINN-ONNX graphs, FINN provides its own [ONNX execution flow](https://github.com/Xilinx/finn/blob/dev/src/finn/core/onnx_exec.py). This flow supports the standard set of ONNX operations as well as the custom FINN operations. *Important:* this execution flow is *only* meant for checking the correctness of models after applying transformations, and *not* for high performance inference. \n",
+    "* *Custom ONNX execution flow* To verify correct operation of FINN-ONNX graphs, FINN provides its own [ONNX execution flow](https://github.com/Xilinx/finn/blob/master/src/finn/core/onnx_exec.py). This flow supports the standard set of ONNX operations as well as the custom FINN operations. *Important:* this execution flow is *only* meant for checking the correctness of models after applying transformations, and *not* for high performance inference. \n",
     "\n",
-    "* *ModelWrapper* FINN provides a [`ModelWrapper`](https://github.com/Xilinx/finn/blob/dev/src/finn/core/modelwrapper.py) class as a thin wrapper around ONNX to make it easier to analyze and manipulate ONNX graphs. This wrapper provides many helper functions, while still giving full access to the ONNX protobuf representation. \n",
+    "* *ModelWrapper* FINN provides a [`ModelWrapper`](https://github.com/Xilinx/finn/blob/master/src/finn/core/modelwrapper.py) class as a thin wrapper around ONNX to make it easier to analyze and manipulate ONNX graphs. This wrapper provides many helper functions, while still giving full access to the ONNX protobuf representation. \n",
     "\n",
     "[Netron](https://lutzroeder.github.io/netron/) is very useful for visualizing ONNX models, including FINN-ONNX models."
    ]
@@ -89,9 +89,9 @@
    "source": [
     "## More FINN Resources\n",
     "\n",
-    "* **[List of publications](https://github.com/Xilinx/finn/blob/dev/docs/publications.md)**\n",
+    "* **[List of publications](https://github.com/Xilinx/finn/blob/master/docs/publications.md)**\n",
     "* **[Roadmap](https://github.com/Xilinx/finn/projects/1)**\n",
-    "* **[Status of example networks](https://github.com/Xilinx/finn/blob/dev/docs/example-networks.md)**\n",
+    "* **[Status of example networks](https://github.com/Xilinx/finn/blob/master/docs/example-networks.md)**\n",
     "\n",
     "\n",
     "\n"
diff --git a/notebooks/end2end_example/tfc_end2end_example.ipynb b/notebooks/end2end_example/tfc_end2end_example.ipynb
index 3505d6f5180be96b778c01a0d3b2365ca2e38e4d..6fad8193c10a8ac8d2857f6c89b93f292ebb0a9c 100644
--- a/notebooks/end2end_example/tfc_end2end_example.ipynb
+++ b/notebooks/end2end_example/tfc_end2end_example.ipynb
@@ -713,7 +713,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can use the higher-level [HLSCustomOp](https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/fpgadataflow/__init__.py) wrappers for these nodes. These wrappers provide easy access to specific properties of these nodes, such as the folding factors (PE and SIMD). Let's have a look at which node attributes are defined by the CustomOp wrapper, and adjust the SIMD and PE attributes."
+    "We can use the higher-level [HLSCustomOp](https://github.com/Xilinx/finn/blob/master/src/finn/custom_op/fpgadataflow/__init__.py) wrappers for these nodes. These wrappers provide easy access to specific properties of these nodes, such as the folding factors (PE and SIMD). Let's have a look at which node attributes are defined by the CustomOp wrapper, and adjust the SIMD and PE attributes."
    ]
   },
   {
diff --git a/notebooks/internals/2_custom_op.ipynb b/notebooks/internals/2_custom_op.ipynb
index 7e91d8c4048b3c3ffb547fe9fdaa7fa726263f8d..9aaef9d42ccde42a8f3a0213f1c287a8d72c164a 100644
--- a/notebooks/internals/2_custom_op.ipynb
+++ b/notebooks/internals/2_custom_op.ipynb
@@ -306,7 +306,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "<font size=\"3\">`make_shape_compatible_op`: To use the flow of FINN, the transformation pass [infer_shapes](https://github.com/Xilinx/finn/blob/dev/src/finn/transformation/infer_shapes.py) is applied to the graphs in various places. In order for this transformation to be applied to CustomOps, they must first be converted to standard ONNX nodes with the same shape behavior. This means, nodes where the relationship between input and output shape is the same. \n",
+    "<font size=\"3\">`make_shape_compatible_op`: To use the flow of FINN, the transformation pass [infer_shapes](https://github.com/Xilinx/finn/blob/master/src/finn/transformation/infer_shapes.py) is applied to the graphs in various places. In order for this transformation to be applied to CustomOps, they must first be converted to standard ONNX nodes with the same shape behavior. This means, nodes where the relationship between input and output shape is the same. \n",
     "\n",
     "This is done at this point. Since the output shape of a multithreshold node is the same as the input shape, it can be replaced by a `\"Relu\"` node from the standard node library of onnx.</font>"
    ]
@@ -322,7 +322,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "<font size=\"3\">`execute_node`: This function allows the execution of the node, depending on the CustomOp a different functionality has to be implemented. In the case of the multithreshold node the input values and the thresholds are first extracted and after the attributes for the output scaling have been retrieved, the output is calculated with the help of a separate function. For more details regarding this function please take a look in the code [here](https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/multithreshold.py). </font>"
+    "<font size=\"3\">`execute_node`: This function allows the execution of the node, depending on the CustomOp a different functionality has to be implemented. In the case of the multithreshold node the input values and the thresholds are first extracted and after the attributes for the output scaling have been retrieved, the output is calculated with the help of a separate function. For more details regarding this function please take a look in the code [here](https://github.com/Xilinx/finn/blob/master/src/finn/custom_op/multithreshold.py). </font>"
    ]
   },
   {
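Reviewer note on this dev-to-master link rename: a quick way to confirm that no `blob/dev/` GitHub links were missed is to scan the docs and notebooks for the old URL prefix. The Python sketch below is a hypothetical helper, not part of the patch; the scanned file extensions and the assumption that it runs from the FINN repository root are illustrative choices.

```python
# Hypothetical helper (not part of this patch): list any remaining
# dev-branch links after the dev -> master rename. Assumes it is run from
# the FINN repository root and that links live in .rst, .md and .ipynb files.
import pathlib

OLD_PREFIX = "github.com/Xilinx/finn/blob/dev/"
NEW_PREFIX = "github.com/Xilinx/finn/blob/master/"


def find_stale_links(root: pathlib.Path):
    """Yield (path, line_number, line) for every remaining dev-branch link."""
    for path in sorted(root.rglob("*")):
        if path.suffix not in {".rst", ".md", ".ipynb"}:
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (OSError, UnicodeDecodeError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if OLD_PREFIX in line:
                yield path, lineno, line.strip()


if __name__ == "__main__":
    stale = list(find_stale_links(pathlib.Path(".")))
    for path, lineno, line in stale:
        print(f"{path}:{lineno}: {line[:100]}")
    print(f"{len(stale)} stale link(s) left; expected prefix is {NEW_PREFIX!r}")
```

Scanning .rst, .md and .ipynb files together mirrors the files touched by this diff, since the same GitHub links appear both in the Sphinx docs and inside notebook markdown cells.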