diff --git a/notebooks/FCLayer_graph.onnx b/notebooks/FCLayer_graph.onnx
new file mode 100644
index 0000000000000000000000000000000000000000..efefcd681bfda4d72fd9be9d15f5069c05184a37
Binary files /dev/null and b/notebooks/FCLayer_graph.onnx differ
diff --git a/notebooks/FINN-CodeGenerationAndCompilation.ipynb b/notebooks/FINN-CodeGenerationAndCompilation.ipynb
index 922693c8e9e12cc799b07db4bf30400cd56f803d..0183ce9a6d9aac7882fde6d53873712026f55720 100644
--- a/notebooks/FINN-CodeGenerationAndCompilation.ipynb
+++ b/notebooks/FINN-CodeGenerationAndCompilation.ipynb
@@ -6,7 +6,218 @@
    "source": [
     "# FINN - Code Generation and Compilation\n",
     "-----------------------------------------------------------------\n",
-    "This notebook is about code generation and compilation to enable execution of FINN "
+    "<font size=\"3\">This notebook is about code generation and compilation to enable execution of FINN custom operation nodes. </font>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Outline\n",
+    "-------------\n",
+    "* <font size=\"3\">Example model</font>\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Example model\n",
+    "<font size=\"3\">To show the code generation and compilation of a node, an example model with a streaming fclayer node is first created. To learn more about FINN custom operation nodes, please take a look at the notebook *FINN-CustomOps*.\n",
+    "\n",
+    "First, `TensorProto` and `helper` are imported from ONNX. These functions can be used to create tensors, nodes, graphs and models in ONNX. In addition, functions from `util` and the classes `DataType` and `ModelWrapper` are needed. More information about `DataType` and `ModelWrapper` can be found in the Jupyter notebook *FINN-ModelWrapper*.</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from onnx import TensorProto, helper\n",
+    "import finn.core.utils as util\n",
+    "from finn.core.datatype import DataType\n",
+    "from finn.core.modelwrapper import ModelWrapper"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">Then all parameters that are needed to create a streaming fclayer are set. To keep the example clear, small values are chosen.</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "idt = wdt = odt = DataType.BIPOLAR\n",
+    "mw = 8\n",
+    "mh = 8\n",
+    "pe = 4\n",
+    "simd = 4\n",
+    "wmem = mw * mh // (pe * simd)\n",
+    "nf = mh // pe\n",
+    "sf = mw // simd\n"
+   ]
+  },
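The folding parameters above determine the streamed tensor shapes. With the small values chosen here, the arithmetic works out as follows (plain Python, mirroring the notebook cell):

```python
# Folding arithmetic for the StreamingFCLayer example (values from the cell above).
mw = mh = 8        # matrix width (input size) and height (output size)
pe = simd = 4      # processing elements and SIMD lanes
wmem = mw * mh // (pe * simd)  # weight-memory depth: 64 weights over 16 multipliers
nf = mh // pe                  # neuron fold: output groups streamed per image
sf = mw // simd                # synapse fold: input groups streamed per image
print(wmem, nf, sf)
```

This prints `4 2 2`: the 8x8 weight matrix is spread over 4x4 = 16 multipliers, each holding 4 weights, and both input and output are streamed in 2 folded groups.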
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">A `tensor_value_info` is created for all tensors involved. In this case there is one tensor for the weights in addition to the input and output tensors. Then an input list is created containing the two inputs (`\"inp\"` and `\"weights\"`).</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "inp = helper.make_tensor_value_info(\"inp\", TensorProto.FLOAT, [1, sf, simd])\n",
+    "weights = helper.make_tensor_value_info(\"weights\", TensorProto.FLOAT, [mw, mh])\n",
+    "outp = helper.make_tensor_value_info(\"outp\", TensorProto.FLOAT, [1, nf, pe])\n",
+    "node_inp_list = [\"inp\", \"weights\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">Now the node can be created. The operation type is set to `\"StreamingFCLayer_Batch\"` and the remaining attributes are set accordingly. The attributes relevant for code generation and compilation are:</font>\n",
+    "* <font size=\"3\">**`domain=\"finn\"`**: specifies that the created node is a FINN custom op</font>\n",
+    "* <font size=\"3\">**`backend=\"fpgadataflow\"`**: specifies that the node corresponds to a function in the finn-hls library</font>\n",
+    "* <font size=\"3\">**`code_gen_dir`**: specifies the path to the directory containing the generated C++ files (set during code generation)</font>\n",
+    "* <font size=\"3\">**`executable_path`**: specifies the path to the executable created from those files (set during compilation)</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "FCLayer_node = helper.make_node(\n",
+    "        \"StreamingFCLayer_Batch\",\n",
+    "        node_inp_list,\n",
+    "        [\"outp\"],\n",
+    "        domain=\"finn\",\n",
+    "        backend=\"fpgadataflow\",\n",
+    "        code_gen_dir=\"\",\n",
+    "        executable_path=\"\",\n",
+    "        resType=\"ap_resource_lut()\",\n",
+    "        MW=mw,\n",
+    "        MH=mh,\n",
+    "        SIMD=simd,\n",
+    "        PE=pe,\n",
+    "        WMEM=wmem,\n",
+    "        TMEM=0,\n",
+    "        inputDataType=idt.name,\n",
+    "        weightDataType=wdt.name,\n",
+    "        outputDataType=odt.name,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">The node is packed into a graph, and the inputs and outputs of the graph are set.</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "graph = helper.make_graph(\n",
+    "        nodes=[FCLayer_node], name=\"fclayer_graph\", inputs=[inp], outputs=[outp]\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">A model is now created from the graph, which is then converted into a ModelWrapper object for further processing in FINN. Afterwards, the ModelWrapper internal functions can be used to set the FINN data types and the initializer for the weights. Since this is an example, the weights are not taken from training; instead, random values are generated using the utility function `gen_finn_dt_tensor()`. This function takes a FINN datatype and a shape and generates a tensor with values of that datatype in the desired shape.</font>\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model = helper.make_model(graph, producer_name=\"fclayer-model\")\n",
+    "model = ModelWrapper(model)\n",
+    "\n",
+    "model.set_tensor_datatype(\"inp\", idt)\n",
+    "model.set_tensor_datatype(\"outp\", odt)\n",
+    "model.set_tensor_datatype(\"weights\", wdt)\n",
+    "W = util.gen_finn_dt_tensor(wdt, (mw, mh))\n",
+    "model.set_initializer(\"weights\", W)\n"
+   ]
+  },
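For `DataType.BIPOLAR`, the generated weights take values in {-1, +1}. A hypothetical stand-in for this step, assuming only NumPy (the helper name `gen_bipolar_tensor` is invented for illustration and is not part of FINN):

```python
import numpy as np

def gen_bipolar_tensor(shape, seed=42):
    """Hypothetical stand-in for util.gen_finn_dt_tensor(DataType.BIPOLAR, shape):
    draws uniformly from {-1, +1} as float32."""
    rng = np.random.RandomState(seed)
    return rng.choice([-1.0, 1.0], size=shape).astype(np.float32)

W = gen_bipolar_tensor((8, 8))
assert set(np.unique(W)) <= {-1.0, 1.0}
print(W.shape)  # (8, 8)
```

In the notebook itself, `util.gen_finn_dt_tensor(wdt, (mw, mh))` plays this role for any FINN datatype, not just BIPOLAR.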
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">The model is saved, and then Netron is used to visualize the resulting model.</font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model.save(\"FCLayer_graph.onnx\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "Stopping http://0.0.0.0:8081\n",
+      "Serving 'FCLayer_graph.onnx' at http://0.0.0.0:8081\n"
+     ]
+    }
+   ],
+   "source": [
+    "import netron\n",
+    "netron.start('FCLayer_graph.onnx', port=8081, host=\"0.0.0.0\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>\n"
+      ],
+      "text/plain": [
+       "<IPython.core.display.HTML object>"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "%%html\n",
+    "<iframe src=\"http://0.0.0.0:8081/\" style=\"position: relative; width: 100%;\" height=\"400\"></iframe>"
    ]
   },
   {