diff --git a/notebooks/FINN-CustomOps.ipynb b/notebooks/FINN-CustomOps.ipynb
index 9ac32369bf0da1b1f317a0dda0109fdc4828ce50..def670e46ff50a539df6a5e00788749c396bbd57 100644
--- a/notebooks/FINN-CustomOps.ipynb
+++ b/notebooks/FINN-CustomOps.ipynb
@@ -43,8 +43,9 @@
    "source": [
     "## Outline\n",
     "---------------------------\n",
-    "* <font size=\"3\">FINN-ONNX node</font>\n",
+    "* <font size=\"3\">Basic FINN-ONNX node</font>\n",
     "* <font size=\"3\">CustomOp class</font>\n",
+    "* <font size=\"3\">HLS FINN-ONNX node</font>\n",
     "* <font size=\"3\">HLSCustomOp class</font>"
    ]
   },
@@ -52,37 +53,30 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## FINN-ONNX node\n",
+    "## Basic FINN-ONNX node\n",
     "\n",
-    "<font size=\"3\">To create a FINN-ONNX node you can use the helper function of ONNX. Because it is an ONNX NodeProtobuf, but with several additional attributes. </font>\n",
+    "<font size=\"3\">To create a FINN-ONNX node you can use the ONNX helper function, because a FINN-ONNX node is an ONNX NodeProto with several additional attributes. The procedure is shown below using a multithreshold node as an example. </font>\n",
     "\n",
-    "`FCLayer_node = helper.make_node(\n",
-    "    \"StreamingFCLayer_Batch\",\n",
-    "    node_inp_list,\n",
-    "    node_outp_list,\n",
+    "`multithreshold_node = helper.make_node(\n",
+    "    \"MultiThreshold\",\n",
+    "    [\"v\", \"thresholds\"],\n",
+    "    [\"out\"],\n",
     "    domain=\"finn\",\n",
-    "    backend=\"fpgadataflow\",\n",
-    "    code_gen_dir=\"\",\n",
-    "    executable_path=\"\",\n",
-    "    resType=\"ap_resource_lut()\",\n",
-    "    MW=mw,\n",
-    "    MH=mh,\n",
-    "    SIMD=simd,\n",
-    "    PE=pe,\n",
-    "    inputDataType=<FINN DataType>,\n",
-    "    weightDataType=<FINN DataType>,\n",
-    "    outputDataType=<FINN DataType>,\n",
-    "    ActVal=actval,\n",
-    "    binaryXnorMode=<0/1>,\n",
-    "    noActivation=<0/1>\n",
-    ")`"
+    "    out_scale=2.0,\n",
+    "    out_bias=-1.0,\n",
+    "    out_dtype=\"\",\n",
+    ")`\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "<font size=\"3\">`\"StreamingFCLayer_Batch\"` describes the op_type, then the inputs and outputs are declared. This is still like building a default onnx node without additional attributes. But since this is a custom op node of FINN, the attribute `domain=\"finn\"` must be set. The streaming fc layer is a custom op from the finn-hls library, this information is set in the node using the `backend` attribute. To execute a custom op from the finn-hls library, the corresponding c++ code must be created and an executable must be produced. Where the generated code is stored is specified in the `code_gen_dir` attribute and `executable_path` specifies the path to the produced executable. In addition to the data types of the input and output tensors, the node also contains various other attributes resulting from the parameters of the corresponding finn-hls library function. More detailed information can be found in the documentation of [finn-hlslib](github.com/Xilinx/finn-hlslib).</font>"
+    "<font size=\"3\">The `helper.make_node` function takes the op_type as its first argument, in this case *MultiThreshold*. Then the inputs and outputs are passed. Besides the data input, the multithreshold node has an additional input for the threshold values. \n",
+    "\n",
+    "The next attribute (`domain`) specifies that this is a FINN-ONNX node. It must be set to `\"finn\"` so that functions working with FINN-ONNX nodes can directly recognize the node as a CustomOp. The attributes `out_scale` and `out_bias` are special multithreshold attributes that manipulate the output value. `out_dtype` contains the output data type.\n",
+    "    \n",
+    "**Note**: each FINN-ONNX node has its own special attributes, which must be set correctly to ensure proper processing.</font>"
    ]
   },
   {
@@ -96,7 +90,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [
     {
@@ -191,7 +185,131 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "<font size=\"3\">When instantiating the class, the ONNX node is passed to access all attributes of the node within the class. This is accompanied by the functions `get_nodeattr()`and `set_nodeattr()`, which each instance of this class has. Furthermore 4 abstract methods are implemented, which are described in more detail in the comments in the code. </font>"
+    "<font size=\"3\">When instantiating the class, the ONNX node is passed to access all attributes of the node within the class. This is accompanied by the functions `get_nodeattr()` and `set_nodeattr()`, which each instance of this class has. Furthermore, four abstract methods are implemented, which are described in more detail in the comments in the code and are explained below using the multithreshold node as an example. </font>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "class MultiThreshold(CustomOp):\n",
+      "    def get_nodeattr_types(self):\n",
+      "        return {\n",
+      "            \"out_dtype\": (\"s\", True, \"\"),\n",
+      "            \"out_scale\": (\"f\", False, 1.0),\n",
+      "            \"out_bias\": (\"f\", False, 0.0),\n",
+      "        }\n",
+      "\n",
+      "    def make_shape_compatible_op(self):\n",
+      "        node = self.onnx_node\n",
+      "        return helper.make_node(\"Relu\", [node.input[0]], [node.output[0]])\n",
+      "\n",
+      "    def infer_node_datatype(self, model):\n",
+      "        node = self.onnx_node\n",
+      "        odt = self.get_nodeattr(\"out_dtype\")\n",
+      "        model.set_tensor_datatype(node.output[0], DataType[odt])\n",
+      "\n",
+      "    def execute_node(self, context, graph):\n",
+      "        node = self.onnx_node\n",
+      "        # save inputs\n",
+      "        v = context[node.input[0]]\n",
+      "        thresholds = context[node.input[1]]\n",
+      "        # retrieve attributes if output scaling is used\n",
+      "        out_scale = self.get_nodeattr(\"out_scale\")\n",
+      "        out_bias = self.get_nodeattr(\"out_bias\")\n",
+      "        # calculate output\n",
+      "        output = multithreshold(v, thresholds, out_scale, out_bias)\n",
+      "        # setting context according to output\n",
+      "        context[node.output[0]] = output\n",
+      "\n"
+     ]
+    }
+   ],
+   "source": [
+    "from finn.custom_op.multithreshold import MultiThreshold\n",
+    "showSrc(MultiThreshold)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\"> `get_nodeattr_types`: returns a dict of the permitted attributes for the node. For each of the special multithreshold attributes, it contains a triple with the following values. </font>\n",
+    "* <font size=\"3\">`dtype`: indicates which member of the ONNX AttributeProto will be utilized </font>\n",
+    "* <font size=\"3\">`require`: indicates whether this attribute is required </font>\n",
+    "* <font size=\"3\">`default_value`: indicates the default value that will be used if the attribute is not set </font>"
+   ]
+  },
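To illustrate how such `(dtype, required, default)` triples behave, here is a small self-contained sketch of an attribute lookup. This is not the FINN implementation, only the lookup logic the triples imply:

```python
# The triples from get_nodeattr_types above: (dtype, required, default_value).
nodeattr_types = {
    "out_dtype": ("s", True, ""),     # string, required, no usable default
    "out_scale": ("f", False, 1.0),   # float, optional, defaults to 1.0
    "out_bias": ("f", False, 0.0),    # float, optional, defaults to 0.0
}

def get_nodeattr(set_attrs, name):
    """Return the set value, fall back to the default, or raise if a
    required attribute was never set (sketch of the lookup logic)."""
    dtype, required, default = nodeattr_types[name]
    if name in set_attrs:
        return set_attrs[name]
    if required:
        raise Exception("Required attribute %s is not set" % name)
    return default

attrs = {"out_dtype": "UINT32"}       # only the required attribute is set
print(get_nodeattr(attrs, "out_scale"))  # → 1.0 (falls back to the default)
```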
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">`make_shape_compatible_op`: To use the flow of FINN, the transformation pass [infer_shapes](https://github.com/Xilinx/finn/blob/dev/src/finn/transformation/infer_shapes.py) is applied to the graphs in various places. In order for this transformation to be applied to CustomOps, they must first be converted to standard ONNX nodes with the same shape behavior, i.e. nodes whose output shape depends on the input shape in the same way. \n",
+    "\n",
+    "This is done at this point. Since the output shape of a multithreshold node is the same as its input shape, it can be replaced by a `\"Relu\"` node from the standard ONNX node library.</font>"
+   ]
+  },
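The shape-compatibility property can be checked quickly with NumPy: a Relu (elementwise max with zero) always produces an output of the same shape as its input, which is exactly the property the replacement relies on:

```python
import numpy as np

x = np.random.randn(2, 4, 8)      # arbitrary input tensor
relu_out = np.maximum(x, 0)       # ONNX Relu semantics: elementwise max(x, 0)

# Relu preserves the input shape, just like MultiThreshold does,
# so it is a valid stand-in for shape inference.
assert relu_out.shape == x.shape
```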
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">`infer_node_datatype`: sets the output tensor data type according to the attribute `out_dtype` </font>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">`execute_node`: This function allows the execution of the node; depending on the CustomOp, a different functionality has to be implemented. In the case of the multithreshold node, the input values and the thresholds are first extracted, and after the attributes for the output scaling have been retrieved, the output is calculated with the help of a separate function. For more details regarding this function, please take a look at the code [here](https://github.com/Xilinx/finn/blob/dev/src/finn/custom_op/multithreshold.py). </font>"
+   ]
+  },
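A simplified, hedged sketch of what such a multithreshold computation can look like: each input value is compared against a set of thresholds, and the number of thresholds crossed, scaled and shifted by `out_scale` and `out_bias`, becomes the output. This is an illustration of the idea only; the actual FINN function (linked above) additionally handles per-channel thresholds and tensor layouts.

```python
import numpy as np

def multithreshold_sketch(v, thresholds, out_scale=1.0, out_bias=0.0):
    """Illustrative sketch, not the FINN implementation: count how many
    thresholds each value meets or exceeds, then scale and shift."""
    counts = (v[..., np.newaxis] >= thresholds).sum(axis=-1)
    return out_scale * counts + out_bias

v = np.array([0.3, 1.2, 2.7])
thresholds = np.array([0.5, 1.5, 2.5])
print(multithreshold_sketch(v, thresholds, out_scale=2.0, out_bias=-1.0))
# → [-1.  1.  5.]
```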
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">FINN has a subset of CustomOps that correspond to the [finn-hls](https://finn-hlslib.readthedocs.io/en/latest/) library. In the next part of the Jupyter notebook these are described in more detail. </font>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## HLS FINN-ONNX node\n",
+    "\n",
+    "<font size=\"3\">The creation of an HLS FINN-ONNX node looks very similar to the creation of a basic FINN-ONNX node, but three new attributes are introduced that are necessary to enable the processing of HLS FINN-ONNX nodes in FINN.</font>\n",
+    "\n",
+    "`FCLayer_node = helper.make_node(\n",
+    "    \"StreamingFCLayer_Batch\",\n",
+    "    node_inp_list,\n",
+    "    node_outp_list,\n",
+    "    domain=\"finn\",\n",
+    "    backend=\"fpgadataflow\",\n",
+    "    code_gen_dir=\"\",\n",
+    "    executable_path=\"\",\n",
+    "    resType=\"ap_resource_lut()\",\n",
+    "    MW=mw,\n",
+    "    MH=mh,\n",
+    "    SIMD=simd,\n",
+    "    PE=pe,\n",
+    "    inputDataType=<FINN DataType>,\n",
+    "    weightDataType=<FINN DataType>,\n",
+    "    outputDataType=<FINN DataType>,\n",
+    "    ActVal=actval,\n",
+    "    binaryXnorMode=<0/1>,\n",
+    "    noActivation=<0/1>\n",
+    ")`"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font size=\"3\">`\"StreamingFCLayer_Batch\"` describes the op_type, then the inputs and outputs are declared. This is still like building a standard ONNX node without additional attributes. But since this is a CustomOp node of FINN, the attribute `domain=\"finn\"` must be set. The streaming fc layer is a custom op from the [finn-hls](https://finn-hlslib.readthedocs.io/en/latest/) library; this information is set in the node using the `backend` attribute. To execute a custom op from the [finn-hls](https://finn-hlslib.readthedocs.io/en/latest/) library, the corresponding C++ code must be created and an executable must be produced. Where the generated code is stored is specified by the `code_gen_dir` attribute, and `executable_path` specifies the path to the produced executable. In addition to the data types of the input and output tensors, the node also contains various other attributes resulting from the parameters of the corresponding [finn-hls](https://finn-hlslib.readthedocs.io/en/latest/) library function. More detailed information can be found in the documentation of [finn-hlslib](https://finn-hlslib.readthedocs.io/en/latest/).</font>"
    ]
   },
   {
@@ -200,7 +318,7 @@
    "source": [
     "## HLSCustomOp class\n",
     "\n",
-    "<font size=\"3\">If it is a node from the finn-hls library another class is used which is derived from the CustomOp class:</font>"
+    "<font size=\"3\">If it is a node from the [finn-hls](https://finn-hlslib.readthedocs.io/en/latest/) library another class is used which is derived from the CustomOp class:</font>"
    ]
   },
   {