diff --git a/notebooks/9-FINN-EndToEndFlow.ipynb b/notebooks/9-FINN-EndToEndFlow.ipynb
index 6d1360aace7fed6a7c08ca385d37e7f2ffd85042..9e5761b833d5542655e323cbd5b26bc53dd1cf63 100644
--- a/notebooks/9-FINN-EndToEndFlow.ipynb
+++ b/notebooks/9-FINN-EndToEndFlow.ipynb
@@ -354,7 +354,76 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "As can be seen, several transformations are involved in the streamlining transformation. There are move and collapse transformations. In the last step the operations are transformed into multithresholds. The involved transformations can be viewed in detail [here](https://github.com/Xilinx/finn/tree/dev/src/finn/transformation/streamline). After each transformation, three of the basic transformations (`GiveUniqueNodeNames`, `GiveReadableTensorNames` and `InferDataTypes`) are applied to the model as clean up."
+    "As can be seen, several transformations are involved in the streamlining transformation. There are move and collapse transformations. In the last step the operations are transformed into multithresholds. The involved transformations can be viewed in detail [here](https://github.com/Xilinx/finn/tree/dev/src/finn/transformation/streamline). After each transformation, three of the basic transformations (`GiveUniqueNodeNames`, `GiveReadableTensorNames` and `InferDataTypes`) are applied to the model as clean up.\n",
+    "\n",
+    "After streamlining the network looks as follows."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "Stopping http://0.0.0.0:8081\n",
+      "Serving 'lfc_w1_a1.onnx' at http://0.0.0.0:8081\n"
+     ]
+    }
+   ],
+   "source": [
+    "model = model.transform(Streamline())\n",
+    "model.save(\"lfc_w1_a1.onnx\")\n",
+    "netron.start(\"lfc_w1_a1.onnx\", port=8081, host=\"0.0.0.0\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>\n"
+      ],
+      "text/plain": [
+       "<IPython.core.display.HTML object>"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "%%html\n",
+    "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>"
+   ]
+  },
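+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For reference, the clean-up step that `Streamline()` applies after each sub-transformation can also be invoked manually. The cell below is only an illustrative sketch: it assumes that `GiveUniqueNodeNames` and `GiveReadableTensorNames` are provided by `finn.transformation.general` and `InferDataTypes` by `finn.transformation.infer_datatypes`, matching the module layout used in this notebook. Re-running them here is redundant but harmless, since they only rename nodes and tensors and re-annotate datatypes."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# illustrative sketch: the clean-up trio that Streamline() already applies after each sub-transformation\n",
+    "from finn.transformation.general import GiveUniqueNodeNames, GiveReadableTensorNames\n",
+    "from finn.transformation.infer_datatypes import InferDataTypes\n",
+    "\n",
+    "model = model.transform(GiveUniqueNodeNames())      # give every node a unique, enumerated name\n",
+    "model = model.transform(GiveReadableTensorNames())  # give tensors human-readable names based on their producers\n",
+    "model = model.transform(InferDataTypes())           # re-infer FINN datatype annotations for intermediate tensors"
+   ]
+  },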
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Our example network is a quantized network with 1 bit precision. For this reason, after streamlining, the resulting bipolar matrix multiplications are converted into xnorpopcount operations. This transformation produces operations that are again collapsed and converted into thresholds. This procedure is shown below. After these transformations, the nodes can be converted to HLS layers for further processing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from finn.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount\n",
+    "import finn.transformation.streamline.absorb as absorb\n",
+    "from finn.transformation.streamline.round_thresholds import RoundAndClipThresholds\n",
+    "\n",
+    "model = model.transform(ConvertBipolarMatMulToXnorPopcount())\n",
+    "model = model.transform(absorb.AbsorbAddIntoMultiThreshold())\n",
+    "model = model.transform(absorb.AbsorbMulIntoMultiThreshold())\n",
+    "model = model.transform(RoundAndClipThresholds())"
    ]
   },
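+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check, the operation types present in the graph can be listed after these transformations; the bipolar matrix multiplications should now show up as XNOR-popcount nodes next to the multi-thresholds. The cell below is only a sketch and uses the standard ONNX protobuf accessors exposed through `model.graph`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# sanity check: list the distinct op types in the transformed graph\n",
+    "# (an XnorPopcountMatMul entry is expected alongside MultiThreshold and shape/layout ops)\n",
+    "print(set(node.op_type for node in model.graph.node))"
+   ]
+  },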
   {