diff --git a/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb b/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
index b628fa455a27649791c2b6f72409b85f71f7c704..a2747e3921dc8e5a8427b4d5d9b7f143a57b018f 100644
--- a/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
+++ b/notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
@@ -63,7 +63,7 @@
     "from finn.util.visualization import showInNetron\n",
     "import os\n",
     "    \n",
-    "build_dir = os.environ[\"FINN_ROOT\"]"
+    "build_dir = os.environ[\"FINN_BUILD_DIR\"]"
    ]
   },
   {
@@ -120,7 +120,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "You can see that the network is composed of a repeating convolution-convolution-maxpool layer pattern to extract features using 3x3 convolution kernels (with weights binarized) and `Sign` activations, followed by fully connected layers acting as the classifier. Also notice the initial `MultiThreshold` layer at the beginning of the network, which is quantizing float inputs to 8-bit ones."
+    "You can see that the network is composed of a repeating convolution-convolution-maxpool layer pattern to extract features using 3x3 convolution kernels (with weights binarized), followed by fully connected layers acting as the classifier. Also notice the initial `MultiThreshold` layer at the beginning of the network, which is quantizing float inputs to 8-bit ones."
    ]
   },
   {
@@ -202,7 +202,9 @@
     "Note how the convolution layer looks very similar to the fully connected one in terms of the matrix-vector-threshold unit (MVTU), but now the MVTU is preceded by a sliding window unit that produces the matrix from the input image. All of these building blocks, including the `MaxPool` layer you see in this figure, exist as templated Vivado HLS C++ functions in [finn-hlslib](https://github.com/Xilinx/finn-hlslib).\n",
     "\n",
     "\n",
-    "To target this kind of hardware architecture with our network we'll apply a convolution lowering transformation, in addition to streamlining. You may recall the *streamlining transformation* that we applied to the TFC-w1a1 network, which is a series of mathematical simplifications that allow us to get rid of floating point scaling operations by implementing few-bit activations as thresholding operations. **The current implementation of streamlining is highly network-specific and may not work for your network if its topology is very different than the example network here. We hope to rectify this in future releases.**"
+    "To target this kind of hardware architecture with our network we'll apply a convolution lowering transformation, in addition to streamlining. You may recall the *streamlining transformation* that we applied to the TFC-w1a1 network, which is a series of mathematical simplifications that allow us to get rid of floating point scaling operations by implementing few-bit activations as thresholding operations. \n",
+    "\n",
+    "**The current implementation of streamlining is highly network-specific and may not work for your network if its topology is very different than the example network here. We hope to rectify this in future releases.**"
    ]
   },
   {
@@ -422,12 +424,37 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "test_pynq_board = \"Pynq-Z2\"\n",
+    "test_pynq_board = \"Pynq-Z1\"\n",
     "target_clk_ns = 10\n",
     "\n",
     "from finn.transformation.fpgadataflow.make_zynq_proj import ZynqBuild\n",
     "model = ModelWrapper(build_dir+\"/end2end_cnv_w1a1_folded.onnx\")\n",
-    "model = model.transform(ZynqBuild(platform = test_pynq_board, period_ns = target_clk_ns))\n",
+    "model = model.transform(ZynqBuild(platform = test_pynq_board, period_ns = target_clk_ns))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "After the `ZynqBuild` we run one additional transformation to generate a PYNQ driver for the accelerator."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from finn.transformation.fpgadataflow.make_pynq_driver import MakePYNQDriver\n",
+    "model = model.transform(MakePYNQDriver(\"zynq-iodma\"))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
     "model.save(build_dir + \"/end2end_cnv_w1a1_synth.onnx\")"
    ]
   },
@@ -437,7 +464,7 @@
    "source": [
     "## 5. Deployment and Remote Execution\n",
     "\n",
-    "Now that we're done with the hardware generation, we can generate a Python driver for accelerator and copy the necessary files onto our PYNQ board.\n",
+    "Now that we're done with the hardware generation, we can copy the necessary files onto our PYNQ board.\n",
     "\n",
     "**Make sure you've [set up the SSH keys for your PYNQ board](https://finn-dev.readthedocs.io/en/latest/getting_started.html#pynq-board-first-time-setup) before executing this step.**"
    ]
@@ -452,7 +479,7 @@
     "\n",
     "# set up the following values according to your own environment\n",
     "# FINN will use ssh to deploy and run the generated accelerator\n",
-    "ip = os.getenv(\"PYNQ_IP\", \"192.168.2.99\")\n",
+    "ip = \"192.168.2.99\"\n",
     "username = os.getenv(\"PYNQ_USERNAME\", \"xilinx\")\n",
     "password = os.getenv(\"PYNQ_PASSWORD\", \"xilinx\")\n",
     "port = os.getenv(\"PYNQ_PORT\", 22)\n",
@@ -612,13 +639,6 @@
    "source": [
     "We see that the final top-1 accuracy is 84.19%, which is very close to the 84.22% reported on the [BNN-PYNQ accuracy table in Brevitas](https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq). "
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {