From dec3aa75fb212197114fae486301b6b9371c608b Mon Sep 17 00:00:00 2001
From: auphelia <jakobapk@web.de>
Date: Thu, 7 May 2020 08:58:32 +0100
Subject: [PATCH] [Notebook] Update and rerun basics Jupyter notebooks

---
 .../basics/0_how_to_work_with_onnx.ipynb      | 72 +++++++++----------
 .../basics/1_brevitas_network_import.ipynb    |  6 +-
 2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/notebooks/basics/0_how_to_work_with_onnx.ipynb b/notebooks/basics/0_how_to_work_with_onnx.ipynb
index 4b4fc4569..58f53c329 100644
--- a/notebooks/basics/0_how_to_work_with_onnx.ipynb
+++ b/notebooks/basics/0_how_to_work_with_onnx.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "# FINN - How to work with ONNX\n",
     "\n",
-    "This notebook should give an overview of ONNX ProtoBuf, help to create and manipulate an ONNX model and use FINN functions to work with it. There may be overlaps to notebook [ModelWrapper](2_modelwrapper.ipynb), but this notebook will give an overview about the handling of ONNX models in FINN."
+    "This notebook gives an overview of the ONNX protobuf format and shows how to create and manipulate an ONNX model, and how to use FINN functions to work with it."
    ]
   },
   {
@@ -14,7 +14,7 @@
    "metadata": {},
    "source": [
     "## Outline\n",
-    "* #### How to create a simple model\n",
+    "* #### How to create a simple ONNX model\n",
     "* #### How to manipulate an ONNX model"
    ]
   },
@@ -22,7 +22,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### How to create a simple model\n",
+    "### How to create a simple ONNX model\n",
     "\n",
     "To explain how to create an ONNX model a simple example with mathematical operations is used. All nodes are from the [standard operations library of ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md).\n",
     "\n",
@@ -93,7 +93,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The names of the inputs and outputs of the nodes give already an idea of the structure of the resulting graph. In order to integrate the nodes into a graph environment, the inputs and outputs of the graph have to be specified first. In ONNX all data edges are processed as tensors. So with the helper function tensor value infos are created for the input and output tensors of the graph. Float from ONNX is used as data type. "
+    "The names of the inputs and outputs of the nodes already give an idea of the structure of the resulting graph. In order to integrate the nodes into a graph environment, the inputs and outputs of the graph have to be specified first. In ONNX, all data edges are processed as tensors, so tensor value infos are created for the input and output tensors of the graph using the ONNX helper functions. ONNX float is used as the data type. "
    ]
   },
   {
@@ -159,14 +159,14 @@
    "outputs": [],
    "source": [
     "onnx_model = onnx.helper.make_model(graph, producer_name=\"simple-model\")\n",
-    "onnx.save(onnx_model, 'simple_model.onnx')"
+    "onnx.save(onnx_model, '/tmp/simple_model.onnx')"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To visualize the created model, [netron](https://github.com/lutzroeder/netron) can be used. Netron is a visualizer for neural network, deep learning and machine learning models."
+    "To visualize the created model, [netron](https://github.com/lutzroeder/netron) can be used. Netron is a visualizer for neural network, deep learning and machine learning models. FINN provides a utility function for visualization with Netron, which we import and use in the following."
    ]
   },
   {
@@ -189,7 +189,7 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Serving 'simple_model.onnx' at http://0.0.0.0:8081\n"
+      "Serving '/tmp/simple_model.onnx' at http://0.0.0.0:8081\n"
      ]
     },
     {
@@ -206,7 +206,7 @@
        "        "
       ],
       "text/plain": [
-       "<IPython.lib.display.IFrame at 0x7fb9303c7b38>"
+       "<IPython.lib.display.IFrame at 0x7fcdfc956b70>"
       ]
      },
      "execution_count": 7,
@@ -215,7 +215,7 @@
     }
    ],
    "source": [
-    "showInNetron('simple_model.onnx')"
+    "showInNetron('/tmp/simple_model.onnx')"
    ]
   },
   {
@@ -284,7 +284,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To run the model and calculate the output, [onnxruntime](https://github.com/microsoft/onnxruntime) can be used. ONNX Runtime is a performance-focused complete scoring engine for Open Neural Network Exchange (ONNX) models from Microsoft. The `.InferenceSession` function is used to create a session of the model and `.run` is used to execute the model."
+    "To run the model and calculate the output, [onnxruntime](https://github.com/microsoft/onnxruntime) can be used. ONNX Runtime is a performance-focused complete scoring engine for ONNX models from Microsoft. The `.InferenceSession` function is used to create a session of the model and `.run` is used to execute the model."
    ]
   },
   {
@@ -316,16 +316,16 @@
      "output_type": "stream",
      "text": [
       "The output of the ONNX model is: \n",
-      "[[ 1. 16.  3. 10.]\n",
-      " [ 5. 17. 17. 13.]\n",
-      " [ 3. 11. 10. 17.]\n",
-      " [ 9.  2.  4.  8.]]\n",
+      "[[22. 13. 21.  8.]\n",
+      " [ 0.  8. 11.  1.]\n",
+      " [ 3. 12.  8.  2.]\n",
+      " [ 0.  6.  1.  4.]]\n",
       "\n",
       "The output of the reference function is: \n",
-      "[[ 1. 16.  3. 10.]\n",
-      " [ 5. 17. 17. 13.]\n",
-      " [ 3. 11. 10. 17.]\n",
-      " [ 9.  2.  4.  8.]]\n",
+      "[[22. 13. 21.  8.]\n",
+      " [ 0.  8. 11.  1.]\n",
+      " [ 3. 12.  8.  2.]\n",
+      " [ 0.  6.  1.  4.]]\n",
       "\n",
       "The results are the same!\n"
      ]
@@ -364,7 +364,7 @@
    "source": [
     "In the following we assume that we do not know the appearance of the model, so we first try to identify whether there are two consecutive adders in the graph and then convert them into a sum node. \n",
     "\n",
-    "Here we make use of FINN. FINN provides a thin wrapper around the model which provides several additional helper functions to manipulate the graph. The code can be found [here](https://github.com/Xilinx/finn/blob/master/src/finn/core/modelwrapper.py) and you can find a more detailed description in the notebook [ModelWrapper](2_modelwrapper.ipynb)."
+    "Here we make use of FINN, which provides a thin wrapper around the model with several additional helper functions to manipulate the graph. The code can be found [here](https://github.com/Xilinx/finn/blob/master/src/finn/core/modelwrapper.py)."
    ]
   },
   {
@@ -656,17 +656,17 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 24,
+   "execution_count": 25,
    "metadata": {},
    "outputs": [],
    "source": [
     "onnx_model1 = onnx.helper.make_model(graph, producer_name=\"simple-model1\")\n",
-    "onnx.save(onnx_model1, 'simple_model1.onnx')"
+    "onnx.save(onnx_model1, '/tmp/simple_model1.onnx')"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 25,
+   "execution_count": 26,
    "metadata": {},
    "outputs": [
     {
@@ -675,7 +675,7 @@
      "text": [
       "\n",
       "Stopping http://0.0.0.0:8081\n",
-      "Serving 'simple_model1.onnx' at http://0.0.0.0:8081\n"
+      "Serving '/tmp/simple_model1.onnx' at http://0.0.0.0:8081\n"
      ]
     },
     {
@@ -692,16 +692,16 @@
        "        "
       ],
       "text/plain": [
-       "<IPython.lib.display.IFrame at 0x7fb93018f9e8>"
+       "<IPython.lib.display.IFrame at 0x7fcdfc130cc0>"
       ]
      },
-     "execution_count": 25,
+     "execution_count": 26,
      "metadata": {},
      "output_type": "execute_result"
     }
    ],
    "source": [
-    "showInNetron('simple_model1.onnx')"
+    "showInNetron('/tmp/simple_model1.onnx')"
    ]
   },
   {
@@ -713,7 +713,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 26,
+   "execution_count": 27,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -723,7 +723,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 27,
+   "execution_count": 28,
    "metadata": {},
    "outputs": [
     {
@@ -731,16 +731,16 @@
      "output_type": "stream",
      "text": [
       "The output of the manipulated ONNX model is: \n",
-      "[[ 1. 16.  3. 10.]\n",
-      " [ 5. 17. 17. 13.]\n",
-      " [ 3. 11. 10. 17.]\n",
-      " [ 9.  2.  4.  8.]]\n",
+      "[[22. 13. 21.  8.]\n",
+      " [ 0.  8. 11.  1.]\n",
+      " [ 3. 12.  8.  2.]\n",
+      " [ 0.  6.  1.  4.]]\n",
       "\n",
       "The output of the reference function is: \n",
-      "[[ 1. 16.  3. 10.]\n",
-      " [ 5. 17. 17. 13.]\n",
-      " [ 3. 11. 10. 17.]\n",
-      " [ 9.  2.  4.  8.]]\n",
+      "[[22. 13. 21.  8.]\n",
+      " [ 0.  8. 11.  1.]\n",
+      " [ 3. 12.  8.  2.]\n",
+      " [ 0.  6.  1.  4.]]\n",
       "\n",
       "The results are the same!\n"
      ]
diff --git a/notebooks/basics/1_brevitas_network_import.ipynb b/notebooks/basics/1_brevitas_network_import.ipynb
index ca148e9eb..0abf671a5 100644
--- a/notebooks/basics/1_brevitas_network_import.ipynb
+++ b/notebooks/basics/1_brevitas_network_import.ipynb
@@ -12,7 +12,7 @@
     "2. Call Brevitas FINN-ONNX export and visualize with Netron\n",
     "3. Import into FINN and call cleanup transformations\n",
     "\n",
-    "We'll use the following showSrc function to print the source code for function calls in the Jupyter notebook:"
+    "We'll use the following utility functions in this Jupyter notebook: `showSrc()` to print the source code of function calls, and `showInNetron()` to visualize a network with Netron:"
    ]
   },
   {
@@ -442,7 +442,7 @@
        "        "
       ],
       "text/plain": [
-       "<IPython.lib.display.IFrame at 0x7f86cdb6e5f8>"
+       "<IPython.lib.display.IFrame at 0x7f3d330b6ac8>"
       ]
      },
      "execution_count": 9,
@@ -624,7 +624,7 @@
        "        "
       ],
       "text/plain": [
-       "<IPython.lib.display.IFrame at 0x7f86cdb6ec18>"
+       "<IPython.lib.display.IFrame at 0x7f3d3380aef0>"
       ]
      },
      "execution_count": 15,
-- 
GitLab
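
The graph manipulation these notebooks walk through replaces two consecutive Add nodes with a single Sum node. The numerical equivalence that the notebook's reference function checks can be sketched in plain NumPy (a minimal sketch with hypothetical inputs; the notebook itself builds the graph via `onnx.helper` and executes it with onnxruntime):

```python
import numpy as np

# Hypothetical stand-ins for the graph's input tensors (the notebook
# uses random 4x4 float tensors as well).
a = np.random.randint(0, 10, size=(4, 4)).astype(np.float32)
b = np.random.randint(0, 10, size=(4, 4)).astype(np.float32)
c = np.random.randint(0, 10, size=(4, 4)).astype(np.float32)

# Two consecutive Add nodes: out = Add(Add(a, b), c)
two_adds = (a + b) + c

# One Sum node over all three inputs: out = Sum(a, b, c)
one_sum = np.sum(np.stack([a, b, c]), axis=0)

# The manipulated graph must produce the same result as the original.
assert np.array_equal(two_adds, one_sum)
print("The results are the same!")
```

This is why the patch's rerun outputs print identical matrices before and after the manipulation: the transformation changes the graph's structure, not its function.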