diff --git a/notebooks/9-FINN-EndToEndFlow.ipynb b/notebooks/9-FINN-EndToEndFlow.ipynb
index f88b1984881e5cb3ea6b72be872a5f83ad6dbf69..39f5177fd8398efc5961a7e2492e958092c6ad71 100644
--- a/notebooks/9-FINN-EndToEndFlow.ipynb
+++ b/notebooks/9-FINN-EndToEndFlow.ipynb
@@ -13,7 +13,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 6,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -164,7 +164,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 1,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -226,9 +226,94 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Streamlining"
+    "The transformations can be imported and applied as follows."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from finn.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames\n",
+    "from finn.transformation.infer_shapes import InferShapes\n",
+    "from finn.transformation.infer_datatypes import InferDataTypes\n",
+    "from finn.transformation.fold_constants import FoldConstants\n",
+    "\n",
+    "model = model.transform(InferShapes())\n",
+    "model = model.transform(FoldConstants())\n",
+    "model = model.transform(GiveUniqueNodeNames())\n",
+    "model = model.transform(GiveReadableTensorNames())\n",
+    "model = model.transform(InferDataTypes())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The result of these transformations can be viewed in Netron after the model has been saved again. Clicking on the individual nodes now shows, for example, that each node has been given a readable name. Constant folding has also collapsed the upper part of the graph, so the first node is now \"Reshape\"."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "Stopping http://0.0.0.0:8081\n",
+      "Serving 'lfc_w1_a1.onnx' at http://0.0.0.0:8081\n"
+     ]
+    }
+   ],
+   "source": [
+    "model.save(\"lfc_w1_a1.onnx\")\n",
+    "netron.start(\"lfc_w1_a1.onnx\", port=8081, host=\"0.0.0.0\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>\n"
+      ],
+      "text/plain": [
+       "<IPython.core.display.HTML object>"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "%%html\n",
+    "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Streamlining\n",
+    "Streamlining is a transformation that bundles several sub-transformations. The goal of streamlining is to eliminate floating-point operations by moving them around the graph, collapsing them into a single operation and, as a final step, transforming them into multithreshold nodes. For the theoretical background see arXiv:1709.04060.\n",
+    "\n",
+    "The streamlining transformation is applied in the following cell."
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
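Note on the streamlining step: the code cell added at the end of this hunk is left empty in this revision. Below is a minimal sketch of what that cell could contain, assuming FINN exposes its composite streamlining transformation as finn.transformation.streamline.Streamline and that `model` is the ModelWrapper prepared in the earlier cells; the import path, the save call and the output filename are illustrative assumptions, not part of this diff.

    # Sketch only: apply the composite streamlining transformation, which moves
    # floating-point scale/bias operations through the graph, collapses them,
    # and absorbs them into MultiThreshold nodes.
    from finn.transformation.streamline import Streamline  # assumed import path

    model = model.transform(Streamline())
    # Save the streamlined graph so it can be inspected in Netron, as above.
    model.save("lfc_w1_a1_streamlined.onnx")  # hypothetical filename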