diff --git a/docker/Dockerfile.finn b/docker/Dockerfile.finn
index 6036f2e744f53dfaf287b97d2789bb20bdd9d9f7..850d637de4d1384231a90dc5cdca532cb4dec5fd 100644
--- a/docker/Dockerfile.finn
+++ b/docker/Dockerfile.finn
@@ -80,7 +80,7 @@ RUN pip install jupyter==1.0.0
 RUN pip install markupsafe==2.0.1
 RUN pip install matplotlib==3.3.1 --ignore-installed
 RUN pip install pytest-dependency==0.5.1
-RUN pip install sphinx==3.1.2
+RUN pip install sphinx==5.0.2
 RUN pip install sphinx_rtd_theme==0.5.0
 RUN pip install pytest-xdist[setproctitle]==2.4.0
 RUN pip install pytest-parallel==0.1.0
diff --git a/docs/finn/brevitas_export.rst b/docs/finn/brevitas_export.rst
index 408b14fd2b6c99ce3ec128a0361a25b3f2c193a5..304aa30854118e1ebd3258169ee4698a873e8689 100644
--- a/docs/finn/brevitas_export.rst
+++ b/docs/finn/brevitas_export.rst
@@ -8,7 +8,7 @@ Brevitas Export
    :scale: 70%
    :align: center
 
-FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/brevitas_examples/bnn_pynq>`_. Brevitas provides an export of a quantized network in ONNX representation in several flavors.
+FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training, and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_. Brevitas can export a quantized network to ONNX in several flavors.
 Two of the Brevitas-exported ONNX variants can be ingested by FINN:
 
    * FINN-ONNX: Quantized weights exported as tensors with additional attributes to mark low-precision datatypes. Quantized activations exported as MultiThreshold nodes.
diff --git a/docs/finn/command_line.rst b/docs/finn/command_line.rst
index 54ffca9430a57ed4513ce822afbe0f1642b77404..12e01db5544e847a775d330929d1eea916cae74e 100644
--- a/docs/finn/command_line.rst
+++ b/docs/finn/command_line.rst
@@ -41,7 +41,7 @@ To use it, first create a folder with the necessary configuration and model file
 2. Put your ONNX model to be converted under ``dataflow_build_dir/model.onnx``.
    The filename is important and must exactly be ``model.onnx``.
 3. Create a JSON file with the build configuration. It must be named ``dataflow_build_dir/dataflow_build_config.json``.
-   Read more about the build configuration options on :py:mod:``finn.builder.build_dataflow_config.DataflowBuildConfig``.
+   Read more about the build configuration options on :py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig`.
    You can find an example .json file under ``src/finn/qnn-data/build_dataflow/dataflow_build_config.json``
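+
+   A minimal sketch of such a file (illustrative values; the field names follow
+   ``DataflowBuildConfig`` and the example file above is the complete reference):
+
+   .. code-block:: json
+
+      {
+        "output_products": ["estimate_reports", "bitfile"],
+        "synth_clk_period_ns": 10.0,
+        "board": "Pynq-Z1",
+        "shell_flow_type": "vivado_zynq"
+      }
+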
 4. (Optional) create a JSON file with the folding configuration. It must be named ``dataflow_build_dir/folding_config.json``.
    You can find an example .json file under ``src/finn/qnn-data/build_dataflow/folding_config.json``.
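+
+   A folding configuration sets parallelization attributes per node, along these
+   lines (node names are illustrative and depend on your model):
+
+   .. code-block:: json
+
+      {
+        "Defaults": {},
+        "MatrixVectorActivation_0": {"PE": 4, "SIMD": 8}
+      }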
@@ -55,7 +55,7 @@ Now you can invoke the simple dataflow build as follows:
   ./run-docker.sh build_dataflow <path/to/dataflow_build_dir/>
 
 Depending on the chosen output products, the dataflow build will run for a while
-as it go through numerous steps:
+as it goes through numerous steps:
 
 .. code-block:: none
 
diff --git a/docs/finn/developers.rst b/docs/finn/developers.rst
index 2e05761d1fc1b9a23abb29f7bc062cf99a8acf5c..b152dfef66d0eb47e086d3c5cd51174c5df52128 100644
--- a/docs/finn/developers.rst
+++ b/docs/finn/developers.rst
@@ -84,7 +84,6 @@ The finn.dev image is built and launched as follows:
 
-4. Entrypoint script (docker/finn_entrypoint.sh) upon launching container performs the following:
+4. Upon launching the container, the entrypoint script (docker/finn_entrypoint.sh) performs the following:
 
-  * Do `pip install` on the dependency git repos at specified commits.
   * Source Vivado settings64.sh from specified path to make vivado and vivado_hls available.
   * Download PYNQ board files into the finn root directory, unless they already exist.
-  * Source Vitits settings64.sh if Vitis is mounted.
+  * Source Vitis settings64.sh if Vitis is mounted.
@@ -92,7 +91,7 @@ The finn.dev image is built and launched as follows:
 5. Depending on the arguments to run-docker.sh a different application is launched. run-docker.sh notebook launches a Jupyter server for the tutorials, whereas run-docker.sh build_custom and run-docker.sh build_dataflow trigger a dataflow build (see documentation). Running without arguments yields an interactive shell. See run-docker.sh for other options.
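+
+   For example:
+
+   .. code-block:: bash
+
+     ./run-docker.sh notebook        # Jupyter server for the tutorials
+     ./run-docker.sh build_dataflow <path/to/dataflow_build_dir/>  # dataflow build
+     ./run-docker.sh                 # interactive shell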
 
 (Re-)launching builds outside of Docker
-======================================
+========================================
 
 It is possible to launch builds for FINN-generated HLS IP and stitched-IP folders outside of the Docker container.
 This may be necessary for visual inspection of the generated designs inside the Vivado GUI, if you run into licensing
@@ -122,16 +121,16 @@ The checks are configured in .pre-commit-config.yaml under the repo root.
 Testing
 =======
 
-Tests are vital to keep FINN running.  All the FINN tests can be found at https://github.com/Xilinx/finn/tree/master/tests.
+Tests are vital to keep FINN running.  All the FINN tests can be found at https://github.com/Xilinx/finn/tree/main/tests.
 These tests can be roughly grouped into three categories:
 
- * Unit tests: targeting unit functionality, e.g. a single transformation. Example: https://github.com/Xilinx/finn/blob/master/tests/transformation/streamline/test_sign_to_thres.py tests the expected behavior of the `ConvertSignToThres` transformation pass.
+ * Unit tests: targeting unit functionality, e.g. a single transformation. Example: https://github.com/Xilinx/finn/blob/main/tests/transformation/streamline/test_sign_to_thres.py tests the expected behavior of the `ConvertSignToThres` transformation pass.
 
- * Small-scale integration tests: targeting a group of related classes or functions that to test how they behave together. Example: https://github.com/Xilinx/finn/blob/master/tests/fpgadataflow/test_convert_to_hls_conv_layer.py sets up variants of ONNX Conv nodes that are first lowered and then converted to FINN HLS layers.
+ * Small-scale integration tests: targeting a group of related classes or functions to test how they behave together. Example: https://github.com/Xilinx/finn/blob/main/tests/fpgadataflow/test_convert_to_hls_conv_layer.py sets up variants of ONNX Conv nodes that are first lowered and then converted to FINN HLS layers.
 
- * End-to-end tests: testing a typical 'end-to-end' compilation flow in FINN, where one end is a trained QNN and the other end is a hardware implementation. These tests can be quite large and are typically broken into several steps that depend on prior ones. Examples: https://github.com/Xilinx/finn/tree/master/tests/end2end
+ * End-to-end tests: testing a typical 'end-to-end' compilation flow in FINN, where one end is a trained QNN and the other end is a hardware implementation. These tests can be quite large and are typically broken into several steps that depend on prior ones. Examples: https://github.com/Xilinx/finn/tree/main/tests/end2end
 
-Additionally, finn-base, brevitas and finn-hlslib also include their own test suites.
+Additionally, qonnx, brevitas and finn-hlslib include their own test suites.
 The full FINN compiler test suite
 (which will take several hours to run and require a PYNQ board) can be executed
 by:
diff --git a/docs/finn/end_to_end_flow.rst b/docs/finn/end_to_end_flow.rst
index a51d56d771384fddbc51271a074748e23ec8295c..bc5c5230718bcc8dd50334cc1f20c3c84c012ca4 100644
--- a/docs/finn/end_to_end_flow.rst
+++ b/docs/finn/end_to_end_flow.rst
@@ -11,7 +11,7 @@ As you can see in the picture, FINN has a high modularity and has the property t
 
-The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into five sections, each of it includes several flow steps. The flow starts in top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for the Vivado HLS and Vivado IPI (orange section). There is also a section for testing and verification in software (red section) and the hardware generation and deployment on the PYNQ board (yellow section).
+The white fields show the state of the network representation in the respective step. The colored fields represent the transformations that are applied to the network to achieve a certain result. The diagram is divided into five sections, each of which includes several flow steps. The flow starts in the top left corner with Brevitas export (green section), followed by the preparation of the network (blue section) for Vivado HLS and Vivado IPI (orange section). There is also a section for testing and verification in software (red section) and for hardware generation and deployment on the PYNQ board (yellow section).
 
-This example flow is covered in the `end2end_example <https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example>`_ Jupyter notebooks.
+This example flow is covered in the `end2end_example <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example>`_ Jupyter notebooks.
 For a more detailed overview about the different flow sections, please have a look at the corresponding pages:
 
 .. toctree::
diff --git a/docs/finn/example_networks.rst b/docs/finn/example_networks.rst
index 3f1ae0d603b18e8467477ea6e44863a02dee467b..ee58926578df58fab7264a22aa915e527b7edc4a 100644
--- a/docs/finn/example_networks.rst
+++ b/docs/finn/example_networks.rst
@@ -13,22 +13,16 @@ compiler.
 End-to-end Integration tests
 ============================
 
-The FINN compiler uses `several pre-trained QNNs <https://github.com/Xilinx/brevitas/tree/master/brevitas_examples/bnn_pynq>`_
+The FINN compiler uses `several pre-trained QNNs <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_
 that serve as both examples and testcases.
 
 * TFC, SFC, LFC... are fully-connected networks trained on the MNIST dataset
 * CNV is a convolutional network trained on the CIFAR-10 dataset
 * w\_a\_ refers to the quantization used for the weights (w) and activations (a) in bits
 
-These networks are built end-to-end as part of the `FINN integration tests <https://github.com/Xilinx/finn/blob/master/tests/end2end/test_end2end_bnn_pynq.py>`_ ,
-and the key performance indicators (FPGA resource, frames per second...) are
-automatically posted to the dashboard below.
+These networks are built end-to-end as part of the `FINN integration tests <https://github.com/Xilinx/finn/blob/main/tests/end2end/test_end2end_bnn_pynq.py>`_,
+and the key performance indicators (FPGA resource, frames per second...) are
+checked automatically as part of these tests.
-To implement a new network, you can use the `integration test code <https://github.com/Xilinx/finn/blob/dev/tests/end2end/test_end2end_bnn_pynq.py>`_
+To implement a new network, you can use the `integration test code <https://github.com/Xilinx/finn/blob/main/tests/end2end/test_end2end_bnn_pynq.py>`_
 as a starting point, as well as the `relevant Jupyter notebooks
-<https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example/bnn-pynq>`_.
-
-.. image:: https://firebasestorage.googleapis.com/v0/b/drive-assets.google.com.a.appspot.com/o/Asset%20-%20Drive%20Icon512.png?alt=media
-  :width: 50px
-  :align: left
-
-`FINN end-to-end integration tests dashboard on Google Drive <https://bit.ly/finn-end2end-dashboard>`_
+<https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example/bnn-pynq>`_.
diff --git a/docs/finn/faq.rst b/docs/finn/faq.rst
index 3ddd13664432ceefdd0379004d856abd096f93ff..ef4457f53a8391621c54a70e29780c833a52aaf3 100644
--- a/docs/finn/faq.rst
+++ b/docs/finn/faq.rst
@@ -1,8 +1,8 @@
 .. _faq:
 
-***********************
+***************************
 Frequently Asked Questions
-***********************
+***************************
 
 Can't find the answer to your question here? Check `FINN GitHub Discussions <https://github.com/Xilinx/finn/discussions>`_.
 
@@ -100,7 +100,7 @@ Which data layout do FINN-generated accelerators use? Big-endian? Little-endian?
     If you need to do this manually, first examine how the `FINN PYNQ Python drivers <https://github.com/Xilinx/finn-examples/blob/main/finn_examples/driver.py#L379>`_ do this – notice how the input data is
     first reshaped to create the “folded input shape” that reflects the word size of the first layer based on how much it
     was parallelized, then data packing is applied to obtain a raw byte array (with some reversals going on) that can be
-    fed directly to the hardware. Another example of this is the `npy_to_rtlsim_input <https://github.com/Xilinx/finn-base/blob/dev/src/finn/util/data_packing.py#L289>`_ function, which converts npy arrays to lists of Python arbitrary-precision integers that we feed into pyverilator for rtl simulation:
+    fed directly to the hardware. Another example of this is the `npy_to_rtlsim_input <https://github.com/Xilinx/finn-base/blob/dev/src/finn/util/data_packing.py#L289>`_ function, which converts npy arrays to lists of Python arbitrary-precision integers that we feed into pyverilator for rtl simulation.
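+
+    A condensed sketch of that function's usage (the argument values here are assumptions;
+    check the linked source for the exact signature):
+
+    .. code-block:: python
+
+        from qonnx.core.datatype import DataType
+        from finn.util.data_packing import npy_to_rtlsim_input
+
+        # pack the innermost dimension of a .npy file of 1-bit values into
+        # arbitrary-precision Python integers, each padded to 8 bits
+        inputs_rtlsim = npy_to_rtlsim_input("input.npy", DataType["BINARY"], 8)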
 
 Why does FIFO sizing take so long for my network? Is something wrong?
     The automatic FIFO sizing in FINN can take quite long. It unfortunately doesn’t really parallelize on multiple cores since
diff --git a/docs/finn/getting_started.rst b/docs/finn/getting_started.rst
index 3e730924c032765ebf8f58afaa9ae2e694fb3d11..8a8a803a3d9c2a0bc780e8fd6b33cd20060a28a6 100644
--- a/docs/finn/getting_started.rst
+++ b/docs/finn/getting_started.rst
@@ -8,7 +8,7 @@ Quickstart
 ==========
 
 1. Install Docker to run `without root <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_
-2. Set up ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables pointing respectively to the Xilinx tools installation directory and version (e.g. ``FINN_XILINX_PATH=/opt/Xilinx`` and ``FINN_XILINX_VERSION=2020.1``)
+2. Set up ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables pointing respectively to the Xilinx tools installation directory and version (e.g. ``FINN_XILINX_PATH=/opt/Xilinx`` and ``FINN_XILINX_VERSION=2022.1``)
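+
+   For example (a typical setup; adjust both values to your installation):
+
+   .. code-block:: bash
+
+      export FINN_XILINX_PATH=/opt/Xilinx
+      export FINN_XILINX_VERSION=2022.1
+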
 3. Clone the FINN compiler from the repo: ``git clone https://github.com/Xilinx/finn/`` and go into the directory where it is cloned
 4. Execute ``./run-docker.sh quicktest`` to verify your installation.
 5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup` or :ref:`Alveo first-time setup` for board setup.
@@ -47,7 +47,7 @@ by using the "advanced mode" described in the :ref:`command_line` section.
 
 Running FINN in Docker
 ======================
-FINN only running inside a Docker container, and comes with a script to easily build and launch the container. If you are not familiar with Docker, there are many excellent `online resources <https://docker-curriculum.com/>`_ to get started.
+FINN runs inside a Docker container and comes with a script to easily build and launch the container. If you are not familiar with Docker, there are many excellent `online resources <https://docker-curriculum.com/>`_ to get started.
 You may want to review the :ref:`General FINN Docker tips` and :ref:`Environment variables` as well.
 If you want to use prebuilt images, read :ref:`Using a prebuilt image`.
-The ``run-docker.sh`` script that can be launched in the following modes:
+The ``run-docker.sh`` script can be launched in the following modes:
@@ -82,9 +82,11 @@ FINN comes with numerous Jupyter notebook tutorials, which you can launch with:
   bash ./run-docker.sh notebook
 
 This will launch the `Jupyter notebook <https://jupyter.org/>`_ server inside a Docker container, and print a link on the terminal that you can open in your browser to run the FINN notebooks or create new ones.
-.. note:: The link will look something like this (the token you get will be different):
-http://127.0.0.1:8888/?token=f5c6bd32ae93ec103a88152214baedff4ce1850d81065bfc.
-The ``run-docker.sh`` script forwards ports 8888 for Jupyter and 8081 for Netron, and launches the notebook server with appropriate arguments.
+
+.. note::
+  The link will look something like this (the token you get will be different):
+  http://127.0.0.1:8888/?token=f5c6bd32ae93ec103a88152214baedff4ce1850d81065bfc.
+  The ``run-docker.sh`` script forwards ports 8888 for Jupyter and 8081 for Netron, and launches the notebook server with appropriate arguments.
 
 
 Environment variables
@@ -94,7 +96,7 @@ Prior to running the `run-docker.sh` script, there are several environment varia
 These are summarized below:
 
 * (required) ``FINN_XILINX_PATH`` points to your Xilinx tools installation on the host (e.g. ``/opt/Xilinx``)
-* (required) ``FINN_XILINX_VERSION`` sets the Xilinx tools version to be used (e.g. ``2020.1``)
+* (required) ``FINN_XILINX_VERSION`` sets the Xilinx tools version to be used (e.g. ``2022.1``)
 * (required for Alveo) ``PLATFORM_REPO_PATHS`` points to the Vitis platform files (DSA).
 * (required for Alveo) ``XRT_DEB_VERSION`` specifies the .deb to be installed for XRT inside the container (see default value in ``run-docker.sh``).
 * (optional) ``NUM_DEFAULT_WORKERS`` (default 4) specifies the degree of parallelization for the transformations that can be run in parallel, potentially reducing build time
@@ -121,7 +123,7 @@ General FINN Docker tips
 ************************
 * Several folders including the root directory of the FINN compiler and the ``FINN_HOST_BUILD_DIR`` will be mounted into the Docker container and can be used to exchange files.
 * Do not use ``sudo`` to launch the FINN Docker. Instead, setup Docker to run `without root <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_.
-* If you want a new terminal on an already-running container, you can do this with `docker exec -it <name_of_container> bash`.
+* If you want a new terminal on an already-running container, you can do this with ``docker exec -it <name_of_container> bash``.
 * The container is spawned with the `--rm` option, so make sure that any important files you created inside the container are either in the finn compiler folder (which is mounted from the host computer) or otherwise backed up.
 
 Using a prebuilt image
@@ -138,8 +140,10 @@ If you are having trouble building the Docker image or need offline access, you
 
 Supported FPGA Hardware
 =======================
-**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by  `PYNQ <https://pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Ultra96, ZCU102 and ZCU104 boards.
-As of FINN v0.4b we also have preliminary support for `Xilinx Alveo boards <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ using PYNQ and Vitis, see instructions below for Alveo setup.
+**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_. For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Ultra96, ZCU102 and ZCU104 boards.
+
+.. warning::
+  Previous FINN versions (v0.4b - v0.7) had preliminary support for `Xilinx Alveo boards <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ using PYNQ and Vitis 2020.1; the Alveo setup instructions below apply to those older versions. Please note that the new release, based on Vitis 2022.1, does not support automatic deployment on Alveo cards.
 
 **Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system. It's up to you to take the FINN-generated accelerator (what we call "stitched IP" in the tutorials), wire it up to your FPGA design and send/receive neural network data to/from the accelerator.
 
@@ -163,6 +167,9 @@ Continue on the host side (replace the ``<PYNQ_IP>`` and ``<PYNQ_USERNAME>`` wit
 
 Alveo first-time setup
 **********************
+.. warning::
+  The new FINN release, based on Vitis 2022.1, does not support Alveo cards out of the box. If you are looking for an Alveo build flow inside FINN, use an older FINN version (v0.4b - v0.7) together with Vitis 2020.1.
+
 We use *host* to refer to the PC running the FINN Docker environment, which will build the accelerator+driver and package it up, and *target* to refer to the PC where the Alveo card is installed. These two can be the same PC, or connected over the network -- FINN includes some utilities to make it easier to test on remote PCs too. Prior to first usage, you need to set up both the host and the target in the following manner:
 
 On the target side:
@@ -201,11 +208,10 @@ System Requirements
 
 * Ubuntu 18.04 with ``bash`` installed
 * Docker `without root <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_
-* A working Vivado 2020.1 installation
+* A working Vitis/Vivado 2022.1 installation
 * ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables correctly set, see `Quickstart`_
 * *(optional)* `Vivado/Vitis license`_ if targeting non-WebPack FPGA parts.
 * *(optional)* A PYNQ board with a network connection, see `PYNQ board first-time setup`_
-* *(optional)* An Alveo board, and a working Vitis 2020.1 installation if you want to use Vitis and Alveo (see `Alveo first-time setup`_ )
 
 We also recommend running the FINN compiler on a system with sufficiently
 strong hardware:
diff --git a/docs/finn/hw_build.rst b/docs/finn/hw_build.rst
index d03fc400bde90da905c45d408c95badc85b7d6ec..e1e5411adb4078636ddd4c0087245f8c2a58c372 100644
--- a/docs/finn/hw_build.rst
+++ b/docs/finn/hw_build.rst
@@ -9,12 +9,18 @@ Hardware Build and Deployment
    :align: center
 
 A model where all layers have been converted to HLS layers can be processed by
-FINN to build a bitfile targeting either a Zynq or Alveo system.
+FINN to build a bitfile and driver targeting a Zynq system or to generate a Vivado IP Integrator (IPI)
+design with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system.
+
+.. warning::
+    With the new FINN release, we no longer offer out-of-the-box support for Alveo cards.
+    Please use an older FINN version (v0.4b - v0.7) and Vitis 2020.1 if you want to use `VitisBuild`. The description of `VitisBuild` below is still valid for those older versions.
+
 
 Hardware Build
 ==============
 
-Internally, the hardware build consists of the following steps:
+Internally, the hardware build for Zynq devices consists of the following steps:
 
 1. Driver generation
 2. DMA and DWC node insertion
@@ -22,12 +28,9 @@ Internally, the hardware build consists of the following steps:
 4. FIFO insertion and IP generation
 5. Vivado/Vitis project generation and synthesis
 
-.. note:: **In previous FINN releases it was necessary to step through the
-individual sub-steps for hardware build manually by calling each transformation.
-The hardware build transformations `ZynqBuild` and `VitisBuild` now execute all
-necessary sub-transformations. For more control over the build process, the
-transformations listed below can still be called individually.
-**
+.. note::
+  In previous FINN releases it was necessary to step through the individual hardware build sub-steps manually by calling each transformation. The hardware build transformation `ZynqBuild` now executes all necessary sub-transformations. For more control over the build process, the transformations listed below can still be called individually.
+
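+For example, the full Zynq build can be triggered with a single transformation call
+(a sketch; board name and clock period are placeholders):
+
+.. code-block:: python
+
+  from finn.transformation.fpgadataflow.make_zynq_proj import ZynqBuild
+
+  # runs all hardware build sub-steps and produces a bitfile
+  model = model.transform(ZynqBuild(platform="Pynq-Z1", period_ns=10))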
 
 Driver Generation
 ------------------
@@ -60,9 +63,7 @@ This is accomplished by the :py:mod:`finn.transformation.fpgadataflow.floorplan.
 and :py:mod:`finn.transformation.fpgadataflow.create_dataflow_partition.CreateDataflowPartition`
 transformations.
 
-.. note:: **For Vitis, each partition will be compiled as a separate kernel,
-and linked together afterwards. For Zynq, each partition will become an IP
-block. **
+.. note:: For Vitis, each partition will be compiled as a separate kernel and the kernels are linked together afterwards. For Zynq, each partition will become an IP block.
 
 
 FIFO Insertion and IP Generation
diff --git a/docs/finn/img/repo-structure.png b/docs/finn/img/repo-structure.png
index 05031ff9a5500c3302a36ea88309b3707bc5d108..704e5e5bdab8d51d88f5a18893153b5c0827f755 100644
Binary files a/docs/finn/img/repo-structure.png and b/docs/finn/img/repo-structure.png differ
diff --git a/docs/finn/index.rst b/docs/finn/index.rst
index 751b105bb4ec35c880664e85a9550207e8a1f076..c13bf81cec949498fd6ebdf971b23535c47f3ef1 100644
--- a/docs/finn/index.rst
+++ b/docs/finn/index.rst
@@ -33,9 +33,7 @@ More FINN Resources
 
 * `The FINN examples repository <https://github.com/Xilinx/finn-examples>`_
 
-* `List of publications <https://github.com/Xilinx/finn/blob/master/docs/publications.md>`_
-
-* `Roadmap <https://github.com/Xilinx/finn/projects/1>`_
+* `List of publications <https://xilinx.github.io/finn/publications>`_
 
 .. toctree::
    :maxdepth: 5
diff --git a/docs/finn/internals.rst b/docs/finn/internals.rst
index e28874145d6d61232b0d63b0e53e4dd5dcdc4cfc..0b33affc76484d2175a336b188661550731ca1ab 100644
--- a/docs/finn/internals.rst
+++ b/docs/finn/internals.rst
@@ -1,8 +1,8 @@
 .. _internals:
 
-*********
+**********
 Internals
-*********
+**********
 
 Intermediate Representation: QONNX and FINN-ONNX
 ================================================
@@ -14,16 +14,18 @@ FINN uses `ONNX <https://github.com/onnx/onnx>`_ as an intermediate representati
 Custom Quantization Annotations
 ===============================
 
-ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN uses the quantization_annotation field in ONNX to annotate tensors with their FINN DataType (:py:mod:`qonnx.core.datatype.DataType`) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.
+ONNX does not support datatypes smaller than 8-bit integers, whereas in FINN we are interested in smaller integers down to ternary and bipolar. To make this work, FINN-ONNX uses the quantization_annotation field in ONNX to annotate tensors with their FINN DataType (:py:mod:`qonnx.core.datatype.DataType`) information. However, all tensors are expected to use single-precision floating point (float32) storage in FINN. This means we store even a 1-bit value as floating point for the purposes of representation. The FINN compiler flow is responsible for eventually producing a packed representation for the target hardware, where the 1-bit is actually stored as 1-bit.
 
 Note that FINN uses floating point tensors as a carrier data type to represent integers. Floating point arithmetic can introduce rounding errors, e.g. (int_num * float_scale) / float_scale is not always equal to int_num.
 When using the custom ONNX execution flow, FINN will attempt to sanitize any rounding errors for integer tensors. See (:py:mod:`qonnx.util.basic.sanitize_quant_values`) for more information.
 This behavior can be disabled (not recommended!) by setting the environment variable SANITIZE_QUANT_TENSORS=0.
 
+.. note:: In QONNX, quantization is represented differently; for details, please check the `QONNX repository <https://github.com/fastmachinelearning/qonnx>`_.
+
 Custom Operations/Nodes
 =======================
 
-FINN uses many custom operations (op_type in ONNX NodeProto) that are not defined in the ONNX operator schema. These custom nodes are marked with domain="finn.*" in the protobuf to identify them as such. These nodes can represent specific operations that we need for low-bit networks, or operations that are specific to a particular hardware backend. To get more familiar with custom operations and how they are created, please take a look in the Jupyter notebook about CustomOps (see chapter :ref:`tutorials` for details) or directly in the module :py:mod:`finn.custom_op`.
+FINN uses many custom operations (op_type in ONNX NodeProto) that are not defined in the ONNX operator schema. These custom nodes are marked with domain="finn.*" or domain="qonnx.*" in the protobuf to identify them as such. These nodes can represent specific operations that we need for low-bit networks, or operations that are specific to a particular hardware backend. To get more familiar with custom operations and how they are created, please take a look in the Jupyter notebook about CustomOps (see chapter :ref:`tutorials` for details) or directly in the module :py:mod:`finn.custom_op`.
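+
+As a small illustration, a custom node is just an ONNX NodeProto with a custom op_type
+and domain (the attribute shown follows the MultiThreshold schema):
+
+.. code-block:: python
+
+  from onnx import helper
+
+  # MultiThreshold is a QONNX custom op, hence the qonnx.* domain
+  mt_node = helper.make_node(
+      "MultiThreshold",
+      ["inp", "thresholds"],
+      ["out"],
+      domain="qonnx.custom_op.general",
+      out_dtype="UINT4",
+  )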
 
 .. note:: See the description of `this PR <https://github.com/Xilinx/finn-base/pull/6>`_ for more on how the operator wrapper library is organized.
 
@@ -118,7 +120,7 @@ As mentioned above there are FINN DataTypes additional to the container datatype
   # set tensor datatype of third tensor in model tensor list
   from qonnx.core.datatype import DataType
 
-  finn_dtype = DataType.BIPOLAR
+  finn_dtype = DataType["BIPOLAR"]
   model.set_tensor_datatype(tensor_list[2], finn_dtype)
 
 ModelWrapper contains two helper functions for tensor initializers, one to determine the current initializer and one to set the initializer of a tensor. If there is no initializer, None is returned.
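+
+As a minimal sketch (assuming ``model`` is a ModelWrapper instance and the tensor name exists in the graph):
+
+.. code-block:: python
+
+  import numpy as np
+
+  # returns the initializer as a numpy array, or None if unset
+  weights = model.get_initializer("MatMul_0_param0")
+  # set (or overwrite) the initializer of the tensor
+  model.set_initializer("MatMul_0_param0", np.ones((4, 4), dtype=np.float32))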
@@ -147,15 +149,17 @@ A transformation passes changes (transforms) the given model, it gets the model
 .. _mem_mode:
 
 MatrixVectorActivation *mem_mode*
-===========================
+==================================
 
-FINN supports two types of the so-called *mem_mode* attrıbute for the node MatrixVectorActivation. This mode controls how the weight values are accessed during the execution. That means the mode setting has direct influence on the resulting circuit. Currently two settings for the *mem_mode* are supported in FINN:
+FINN supports three settings of the so-called *mem_mode* attribute for the MatrixVectorActivation node. This mode controls how the weight values are accessed during execution, so the setting has a direct influence on the resulting circuit. The three supported settings are:
 
 * "const"
 
 * "decoupled"
 
-The following picture shows the idea behind the two modes.
+* "external"
+
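+The attribute can be set per node, e.g. as follows (a sketch, assuming ``node`` is a
+MatrixVectorActivation node in the graph):
+
+.. code-block:: python
+
+  from qonnx.custom_op.registry import getCustomOp
+
+  # select how this node's weights are accessed at runtime
+  getCustomOp(node).set_nodeattr("mem_mode", "decoupled")
+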
+The following picture shows the idea behind the "const" and "decoupled" modes.
 
 .. image:: img/mem_mode.png
    :scale: 55%
diff --git a/docs/finn/nw_prep.rst b/docs/finn/nw_prep.rst
index 8d0403fc9bb6a45fae60f14c0fb0acf862792abb..566eda5bac38855e9ed8edfdf53193bb6c025256 100644
--- a/docs/finn/nw_prep.rst
+++ b/docs/finn/nw_prep.rst
@@ -17,7 +17,7 @@ Various transformations are involved in the network preparation. The following i
 Tidy-up transformations
 =======================
 
-These transformations do not appear in the diagram above, but are applied in many steps in the FINN flow to postprocess the model after a transformation and/or prepare it for the next transformation. They ensure that all information is set and behave like a "tidy-up". These transformations are the following:
+These transformations do not appear in the diagram above, but are applied in many steps in the FINN flow to postprocess the model after a transformation and/or prepare it for the next one. They ensure that all necessary information is set, acting as a "tidy-up". These transformations are located in the `QONNX repository <https://github.com/fastmachinelearning/qonnx>`_ and can be imported from there:
 
 * :py:mod:`qonnx.transformation.general.GiveReadableTensorNames` and :py:mod:`qonnx.transformation.general.GiveUniqueNodeNames`
 
@@ -35,7 +35,7 @@ After this transformation the ONNX model is streamlined and contains now custom
 Convert to HLS Layers
 =====================
 
-Pairs of binary XNORPopcountMatMul layers are converted to MatrixVectorActivation layers and following Multithreshold layers are absorbed into the Matrix-Vector-Activate-Unit (MVAU). The result is a model consisting of a mixture of HLS and non-HLS layers. For more details, see :py:mod:`finn.transformation.fpgadataflow.convert_to_hls_layers`. The MVAU can be implemented in two different modes, *const* and *decoupled*, see chapter :ref:`mem_mode`.
+In this step, standard or custom layers are converted to HLS layers. HLS layers are layers that directly correspond to a finn-hlslib function call. For example, pairs of binary XNORPopcountMatMul and MultiThreshold layers are converted to MatrixVectorActivation layers. The result is a model consisting of a mixture of HLS and non-HLS layers. For more details, see :py:mod:`finn.transformation.fpgadataflow.convert_to_hls_layers`. The MatrixVectorActivation layer can be implemented in three different modes: *const*, *decoupled* (see chapter :ref:`mem_mode`) and *external*.
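+
+A typical invocation looks like this (a sketch; the pass name is one of several in the
+module and is assumed from the MatrixVectorActivation rename in the current release):
+
+.. code-block:: python
+
+  import finn.transformation.fpgadataflow.convert_to_hls_layers as to_hls
+
+  # convert binary XNORPopcountMatMul (+ MultiThreshold) pairs to MatrixVectorActivation
+  model = model.transform(to_hls.InferBinaryMatrixVectorActivation("decoupled"))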
 
 Dataflow Partitioning
 =====================
@@ -47,4 +47,4 @@ Folding
 
-To adjust the folding, the values for PE and SIMD can be increased to achieve also an increase in the performance. The result can be verified using the same simulation flow as for the network with maximum folding (*cppsim* using C++), for details please have a look at chapter :ref:`verification`.
+To adjust the folding, the values for PE and SIMD can be increased to also increase performance. The result can be verified using the same simulation flow as for the network with maximum folding (*cppsim* using C++); for details, please have a look at chapter :ref:`verification`.
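+
+For example (a sketch, assuming ``fc_node`` is a MatrixVectorActivation node in the model):
+
+.. code-block:: python
+
+  from qonnx.custom_op.registry import getCustomOp
+
+  fc_inst = getCustomOp(fc_node)
+  fc_inst.set_nodeattr("PE", 4)    # neuron folding
+  fc_inst.set_nodeattr("SIMD", 8)  # synapse folding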
 
-The result is a network of HLS layers with desired folding and it can be passed to :ref:`vivado_synth`.
+The result is a network of HLS layers with the desired folding, which can be passed to :ref:`hw_build`.
diff --git a/docs/finn/source_code/finn.analysis.rst b/docs/finn/source_code/finn.analysis.rst
index 1de42ac32bc62ce71e039f63168302b22711f454..f2321dbee7ee0ba98d7b982202ae4918e0973489 100644
--- a/docs/finn/source_code/finn.analysis.rst
+++ b/docs/finn/source_code/finn.analysis.rst
@@ -15,26 +15,26 @@ Submodules
 Analysis Passes
 ===============
 
-finn.analysis.base
+qonnx.analysis.base
 -----------------------------
 
-.. automodule:: finn.analysis.base
+.. automodule:: qonnx.analysis.base
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.analysis.inference\_cost
------------------------------
+qonnx.analysis.inference\_cost
+-------------------------------
 
-.. automodule:: finn.analysis.inference_cost
+.. automodule:: qonnx.analysis.inference_cost
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.analysis.topology
+qonnx.analysis.topology
 -----------------------------
 
-.. automodule:: finn.analysis.topology
+.. automodule:: qonnx.analysis.topology
    :members:
    :undoc-members:
    :show-inheritance:
diff --git a/docs/finn/source_code/finn.core.rst b/docs/finn/source_code/finn.core.rst
index 2e2a8532c6419198c5075a08bef5207b39d4658b..4e3de458e153871d1d5969442af5940dc1673ecd 100644
--- a/docs/finn/source_code/finn.core.rst
+++ b/docs/finn/source_code/finn.core.rst
@@ -5,7 +5,7 @@ Core
 Modules
 =======
 
-finn.core.data\_layout
+qonnx.core.data\_layout
 -------------------------
 
 .. automodule:: qonnx.core.data_layout
@@ -21,10 +21,10 @@ qonnx.core.datatype
    :undoc-members:
    :show-inheritance:
 
-finn.core.execute\_custom\_node
+qonnx.core.execute\_custom\_node
 --------------------------------------
 
-.. automodule:: finn.core.execute_custom_node
+.. automodule:: qonnx.core.execute_custom_node
    :members:
    :undoc-members:
    :show-inheritance:
diff --git a/docs/finn/source_code/finn.custom_op.fpgadataflow.rst b/docs/finn/source_code/finn.custom_op.fpgadataflow.rst
index 7de038248d418e1964effd7678bc1cad4cb48c14..cc56ea603e589d7000fe5b2b2943e67cdb90c884 100644
--- a/docs/finn/source_code/finn.custom_op.fpgadataflow.rst
+++ b/docs/finn/source_code/finn.custom_op.fpgadataflow.rst
@@ -22,7 +22,7 @@ finn.custom\_op.fpgadataflow.addstreams\_batch
    :show-inheritance:
 
 finn.custom\_op.fpgadataflow.channelwise\_op\_batch
------------------------------------------------
+-----------------------------------------------------
 
 .. automodule:: finn.custom_op.fpgadataflow.channelwise_op_batch
    :members:
@@ -55,7 +55,7 @@ finn.custom\_op.fpgadataflow.downsampler
    :show-inheritance:
 
 finn.custom\_op.fpgadataflow.duplicatestreams\_batch
------------------------------------------------
+-------------------------------------------------------
 
 .. automodule:: finn.custom_op.fpgadataflow.duplicatestreams_batch
    :members:
@@ -71,7 +71,7 @@ finn.custom\_op.fpgadataflow.fmpadding\_batch
    :show-inheritance:
 
 finn.custom\_op.fpgadataflow.globalaccpool\_batch
------------------------------------------------
+---------------------------------------------------
 
 .. automodule:: finn.custom_op.fpgadataflow.globalaccpool_batch
    :members:
@@ -160,7 +160,7 @@ finn.custom\_op.fpgadataflow.templates
    :show-inheritance:
 
 finn.custom\_op.fpgadataflow.thresholding\_batch
------------------------------------------------
+-------------------------------------------------------
 
 .. automodule:: finn.custom_op.fpgadataflow.thresholding_batch
    :members:
@@ -185,7 +185,7 @@ finn.custom\_op.fpgadataflow.upsampler
    :show-inheritance:
 
 finn.custom\_op.fpgadataflow.vectorvectoractivation
------------------------------------------------
+-----------------------------------------------------
 
 .. automodule:: finn.custom_op.fpgadataflow.vectorvectoractivation
    :members:
diff --git a/docs/finn/source_code/finn.custom_op.rst b/docs/finn/source_code/finn.custom_op.rst
index 3e91eff9a16b3dedf0e1682c79d6f8022ebe0db8..20d90a7bb596d6ce5638d9b2d9bae8a5c7e5c723 100644
--- a/docs/finn/source_code/finn.custom_op.rst
+++ b/docs/finn/source_code/finn.custom_op.rst
@@ -17,12 +17,12 @@ Custom Op Nodes
 Base Class
 ----------
 
-.. automodule:: finn.custom_op.base
+.. automodule:: qonnx.custom_op.base
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.registry
+qonnx.custom\_op.registry
 -------------------------
 
 .. automodule:: qonnx.custom_op.registry
diff --git a/docs/finn/source_code/finn.rst b/docs/finn/source_code/finn.rst
index 607ac636a43d88150493eebb86b1e568b38b681a..5547a46623d4cd80b82ac334ae082f9a99b7e8dd 100644
--- a/docs/finn/source_code/finn.rst
+++ b/docs/finn/source_code/finn.rst
@@ -3,7 +3,7 @@ FINN API
 ********
 The FINN sources are divided into different modules. They are listed below.
 
-.. note:: **Some of these functions and modules are located in the `finn-base` repository.**
+.. note:: Some of these functions and modules are located in the ``qonnx`` repository.
 
 Modules
 =======
diff --git a/docs/finn/source_code/finn.transformation.qonnx.rst b/docs/finn/source_code/finn.transformation.qonnx.rst
index 8320e19efb81dd5a52f750e22e280f41070bf48c..1332639b1d694ce7c230b8926edfc82f2521e580 100644
--- a/docs/finn/source_code/finn.transformation.qonnx.rst
+++ b/docs/finn/source_code/finn.transformation.qonnx.rst
@@ -1,4 +1,4 @@
-***********************
+************************
 Transformation - QONNX
 ************************
 
diff --git a/docs/finn/source_code/finn.transformation.rst b/docs/finn/source_code/finn.transformation.rst
index acd09993472d56bc3b9c4db49042601e4cef7547..6a28eeedb2aa547ba80677864ae9fb8c6aa64097 100644
--- a/docs/finn/source_code/finn.transformation.rst
+++ b/docs/finn/source_code/finn.transformation.rst
@@ -25,7 +25,7 @@ Base Class
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.batchnorm\_to\_affine
+qonnx.transformation.batchnorm\_to\_affine
 ------------------------------------------------
 
 .. automodule:: qonnx.transformation.batchnorm_to_affine
@@ -33,55 +33,55 @@ finn.transformation.batchnorm\_to\_affine
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.bipolar\_to\_xnor
+qonnx.transformation.bipolar\_to\_xnor
 --------------------------------------------
 
-.. automodule:: finn.transformation.bipolar_to_xnor
+.. automodule:: qonnx.transformation.bipolar_to_xnor
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.change\_3d\_tensors\_to\_4d
+qonnx.transformation.change\_3d\_tensors\_to\_4d
 ------------------------------------------------
 
-.. automodule:: finn.transformation.change_3d_tensors_to_4d
+.. automodule:: qonnx.transformation.change_3d_tensors_to_4d
   :members:
   :undoc-members:
   :show-inheritance:
 
-finn.transformation.change\_datalayout
+qonnx.transformation.change\_datalayout
 --------------------------------------------
 
-.. automodule:: finn.transformation.change_datalayout
+.. automodule:: qonnx.transformation.change_datalayout
   :members:
   :undoc-members:
   :show-inheritance:
 
-finn.transformation.create\_generic\_partitions
+qonnx.transformation.create\_generic\_partitions
 ------------------------------------------------
 
-.. automodule:: finn.transformation.create_generic_partitions
+.. automodule:: qonnx.transformation.create_generic_partitions
   :members:
   :undoc-members:
   :show-inheritance:
 
-finn.transformation.double\_to\_single\_float
+qonnx.transformation.double\_to\_single\_float
 ----------------------------------------------------
 
-.. automodule:: finn.transformation.double_to_single_float
+.. automodule:: qonnx.transformation.double_to_single_float
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.extend\_partition
+qonnx.transformation.extend\_partition
 ------------------------------------------
 
-.. automodule:: finn.transformation.extend_partition
+.. automodule:: qonnx.transformation.extend_partition
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.extract\_conv\_bias
+qonnx.transformation.extract\_conv\_bias
 ------------------------------------------
 
 .. automodule:: qonnx.transformation.extract_conv_bias
@@ -90,7 +90,7 @@ finn.transformation.extract\_conv\_bias
    :show-inheritance:
 
 
-finn.transformation.fold\_constants
+qonnx.transformation.fold\_constants
 ------------------------------------------
 
 .. automodule:: qonnx.transformation.fold_constants
@@ -98,7 +98,7 @@ finn.transformation.fold\_constants
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.gemm\_to\_matmul
+qonnx.transformation.gemm\_to\_matmul
 ------------------------------------------
 
 .. automodule:: qonnx.transformation.gemm_to_matmul
@@ -114,7 +114,7 @@ qonnx.transformation.general
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.infer\_data\_layouts
+qonnx.transformation.infer\_data\_layouts
 -------------------------------------------
 
 .. automodule:: qonnx.transformation.infer_data_layouts
@@ -122,7 +122,7 @@ finn.transformation.infer\_data\_layouts
   :undoc-members:
   :show-inheritance:
 
-finn.transformation.infer\_datatypes
+qonnx.transformation.infer\_datatypes
 -------------------------------------------
 
 .. automodule:: qonnx.transformation.infer_datatypes
@@ -130,7 +130,7 @@ finn.transformation.infer\_datatypes
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.infer\_shapes
+qonnx.transformation.infer\_shapes
 ----------------------------------------
 
 .. automodule:: qonnx.transformation.infer_shapes
@@ -138,7 +138,7 @@ finn.transformation.infer\_shapes
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.insert\_topk
+qonnx.transformation.insert\_topk
 ---------------------------------------
 
 .. automodule:: qonnx.transformation.insert_topk
@@ -146,15 +146,15 @@ finn.transformation.insert\_topk
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.lower\_convs\_to\_matmul
+qonnx.transformation.lower\_convs\_to\_matmul
 ---------------------------------------------------
 
-.. automodule:: finn.transformation.lower_convs_to_matmul
+.. automodule:: qonnx.transformation.lower_convs_to_matmul
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.transformation.make\_input\_chanlast
+qonnx.transformation.make\_input\_chanlast
 ------------------------------------------
 
 .. automodule:: qonnx.transformation.make_input_chanlast
@@ -162,7 +162,7 @@ finn.transformation.make\_input\_chanlast
   :undoc-members:
   :show-inheritance:
 
-finn.transformation.merge\_onnx\_models
+qonnx.transformation.merge\_onnx\_models
 ----------------------------------------
 
 .. automodule:: qonnx.transformation.merge_onnx_models
diff --git a/docs/finn/source_code/finn.util.rst b/docs/finn/source_code/finn.util.rst
index aec42ae905445947a59cb256f55eda2070347edf..8dffa016327c3bbe50f21278c859c83556b2b213 100644
--- a/docs/finn/source_code/finn.util.rst
+++ b/docs/finn/source_code/finn.util.rst
@@ -5,24 +5,33 @@ Util
 Utility Modules
 ===============
 
-finn.util.basic
+qonnx.util.basic
 ----------------------
 
-.. automodule:: finn.util.basic
+.. automodule:: qonnx.util.basic
    :members:
    :undoc-members:
    :show-inheritance:
 
+
 qonnx.util.config
-----------------
+--------------------
 
 .. automodule:: qonnx.util.config
   :members:
   :undoc-members:
   :show-inheritance:
 
+finn.util.basic
+----------------------
+
+.. automodule:: finn.util.basic
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 finn.util.create
-----------------
+------------------
 
 .. automodule:: finn.util.create
   :members:
@@ -63,11 +72,10 @@ finn.util.imagenet
   :undoc-members:
   :show-inheritance:
 
-
-finn.util.onnx
+qonnx.util.onnx
 ---------------------
 
-.. automodule:: finn.util.onnx
+.. automodule:: qonnx.util.onnx
    :members:
    :undoc-members:
    :show-inheritance:
diff --git a/docs/finn/source_code/finn.custom_op.general.rst b/docs/finn/source_code/qonnx.custom_op.general.rst
similarity index 75%
rename from docs/finn/source_code/finn.custom_op.general.rst
rename to docs/finn/source_code/qonnx.custom_op.general.rst
index dfca29a8f3b6836e2af3fb566e0394eb920c2f6e..84609971edf4ce22696ca131bb9fc4494b3a12c6 100644
--- a/docs/finn/source_code/finn.custom_op.general.rst
+++ b/docs/finn/source_code/qonnx.custom_op.general.rst
@@ -5,7 +5,7 @@ Custom Op - General
 General Custom Ops
 ===================
 
-finn.custom\_op.general.bipolar_quant
+qonnx.custom\_op.general.bipolar_quant
 --------------------------------------
 
 .. automodule:: qonnx.custom_op.general.bipolar_quant
@@ -13,15 +13,15 @@ finn.custom\_op.general.bipolar_quant
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.debugmarker
------------------------------------
+qonnx.custom\_op.general.debugmarker
+------------------------------------
 
 .. automodule:: qonnx.custom_op.general.debugmarker
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.genericpartition
+qonnx.custom\_op.general.genericpartition
 -----------------------------------------
 
 .. automodule:: qonnx.custom_op.general.genericpartition
@@ -29,15 +29,15 @@ finn.custom\_op.general.genericpartition
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.im2col
-------------------------------
+qonnx.custom\_op.general.im2col
+-------------------------------
 
 .. automodule:: qonnx.custom_op.general.im2col
    :members:
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.maxpoolnhwc
+qonnx.custom\_op.general.maxpoolnhwc
 ------------------------------------
 
 .. automodule:: qonnx.custom_op.general.maxpoolnhwc
@@ -45,7 +45,7 @@ finn.custom\_op.general.maxpoolnhwc
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.multithreshold
+qonnx.custom\_op.general.multithreshold
 ---------------------------------------
 
 .. automodule:: qonnx.custom_op.general.multithreshold
@@ -53,7 +53,7 @@ finn.custom\_op.general.multithreshold
    :undoc-members:
    :show-inheritance:
 
-finn.custom\_op.general.quant
+qonnx.custom\_op.general.quant
 ------------------------------
 
 .. automodule:: qonnx.custom_op.general.quant
@@ -61,15 +61,15 @@ finn.custom\_op.general.quant
   :undoc-members:
   :show-inheritance:
 
-finn.custom\_op.general.quantavgpool2d
---------------------------------------
+qonnx.custom\_op.general.quantavgpool2d
+---------------------------------------
 
 .. automodule:: qonnx.custom_op.general.quantavgpool2d
   :members:
   :undoc-members:
   :show-inheritance:
 
-finn.custom\_op.general.trunc
+qonnx.custom\_op.general.trunc
 ------------------------------
 
 .. automodule:: qonnx.custom_op.general.trunc
@@ -77,7 +77,7 @@ finn.custom\_op.general.trunc
   :undoc-members:
   :show-inheritance:
 
-finn.custom\_op.general.xnorpopcount
+qonnx.custom\_op.general.xnorpopcount
 -------------------------------------
 
 .. automodule:: qonnx.custom_op.general.xnorpopcount
diff --git a/docs/finn/tutorials.rst b/docs/finn/tutorials.rst
index 4c260ecfb1b25448b4b8e1fe71d8c257cd7e31ff..110f77c5b10d2415ac2d2ff7b716567ec5cb76fa 100644
--- a/docs/finn/tutorials.rst
+++ b/docs/finn/tutorials.rst
@@ -5,7 +5,7 @@ Tutorials
 *********
 
 FINN provides several Jupyter notebooks that can help to get familiar with the basics, the internals and the end-to-end flow in FINN.
-All Jupyter notebooks can be found in the repo in the `notebook folder <https://github.com/Xilinx/finn/tree/master/notebooks>`_.
+All Jupyter notebooks can be found in the repo in the `notebook folder <https://github.com/Xilinx/finn/tree/main/notebooks>`_.
 
 Basics
 ======
@@ -23,7 +23,7 @@ The notebooks in this folder should give a basic insight into FINN, how to get s
 End-to-End Flow
 ===============
 
-There are two groups of notebooks currently available under `the end2end_example directory <https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example>`_ :
+There are two groups of notebooks currently available under `the end2end_example directory <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example>`_ :
 
 * ``cybersecurity`` shows how to train a quantized MLP with Brevitas and deploy it with FINN using the :ref:`command_line` build system.
 
diff --git a/docs/finn/verification.rst b/docs/finn/verification.rst
index 7c636941ad5b8d3d95a152f78e883f6f4782a2f0..e1a9ac4b31ebaebbc3dfcb672b5ead2c0fd8a806 100644
--- a/docs/finn/verification.rst
+++ b/docs/finn/verification.rst
@@ -8,7 +8,7 @@ Functional Verification
    :scale: 70%
    :align: center
 
-This part of the flow is covered by the Jupyter notebook about the verification of a simple fully-connected network, which you can find in the `end2end notebook folder <https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb>`_.
+This part of the flow is covered by the Jupyter notebook about the verification of a simple fully-connected network, which you can find in the `end2end notebook folder <https://github.com/Xilinx/finn/blob/main/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb>`_.
 
 When the network is transformed it is important to verify the functionality to make sure the transformation did not change the behaviour of the model. There are multiple ways of verification that can be applied in different stages of the network inside FINN. All can be accessed using the execution function in module :py:mod:`finn.core.onnx_exec`. The execution happens in most cases node by node, which supports networks that have a mixture of standard ONNX nodes, custom nodes and HLS custom nodes. A single node can be executed using one or more of the following methods:
 
diff --git a/docs/finn/vivado_synth.rst b/docs/finn/vivado_synth.rst
deleted file mode 100644
index ca8b8ad655df7b227441f020aca6d629ce1b6afc..0000000000000000000000000000000000000000
--- a/docs/finn/vivado_synth.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-.. _vivado_synth:
-
-*************************
-Vivado HLS and Vivado IPI
-*************************
-
-.. image:: img/vivado-synth.png
-   :scale: 70%
-   :align: center
-
-In this step the system is handed over to Vivado. To do this, IP blocks are created from each layer using Vivado HLS and then stitched together using Vivado IP Integrator. This creates a Vivado design of the entire network. The design can be verified using `PyVerilator <https://github.com/maltanar/pyverilator>`_ either on the network with the unstitched IP blocks or on the stitched IP. The generated verilog files are passed to PyVerilator and in this way the model can be emulated. This procedure is called *rtlsim* in FINN flow and details can be found in the chapter :ref:`verification`.
-
-Once the model is in the form of a stitched IP, it can be passed to the next flow step :ref:`pynq_deploy`.