{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train a Quantized MLP on UNSW-NB15 with Brevitas"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font color=\"red\">**Live FINN tutorial:** We recommend clicking **Cell -> Run All** when you start reading this notebook for \"latency hiding\".</font>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this notebook, we will show how to create, train and export a quantized Multi Layer Perceptron (MLP) with quantized weights and activations with [Brevitas](https://github.com/Xilinx/brevitas).\n",
    "Specifically, the task at hand will be to label network packets as normal or suspicious (e.g. originating from an attacker, virus, malware or otherwise) by training on a quantized variant of the UNSW-NB15 dataset. \n",
    "\n",
    "**You won't need a GPU to train the neural net.** This MLP will be small enough to train on a modern x86 CPU, so no GPU is required to follow this tutorial  Alternatively, we provide pre-trained parameters for the MLP if you want to skip the training entirely.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A quick introduction to the task and the dataset\n",
    "\n",
    "*The task:* The goal of [*network intrusion detection*](https://ieeexplore.ieee.org/abstract/document/283931) is to identify, preferably in real time, unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. This may be achieved by a mix of techniques, and machine-learning (ML) based techniques are increasing in popularity. \n",
    "\n",
    "*The dataset:* Several datasets are available for use in ML-based methods for intrusion detection.\n",
    "The **UNSW-NB15** is one such dataset created by the Australian Centre for Cyber Security (ACCS) to provide a comprehensive network based data set which can reflect modern network traffic scenarios. You can find more details about the dataset on [its homepage](https://www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-NB15-Datasets/).\n",
    "\n",
    "*Performance considerations:* FPGAs are commonly used for implementing high-performance packet processing systems that still provide a degree of programmability. To avoid introducing bottlenecks on the network, the DNN implementation must be capable of detecting malicious ones at line rate, which can be millions of packets per second, and is expected to increase further as next-generation networking solutions provide increased\n",
    "throughput. This is a good reason to consider FPGA acceleration for this particular use-case."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Outline\n",
    "-------------\n",
    "\n",
    "* [Load the UNSW_NB15 Dataset](#load_dataset) \n",
    "* [Define the Quantized MLP Model](#define_quantized_mlp)\n",
    "* [Define Train and Test  Methods](#train_test)\n",
    "    * [(Option 1) Train the Model from Scratch](#train_scratch)\n",
    "    * [(Option 2) Load Pre-Trained Parameters](#load_pretrained)\n",
    "* [Network Surgery Before Export](#network_surgery)\n",
    "* [Export to FINN-ONNX](#export_finn_onnx)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import onnx\n",
    "import torch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**This is important -- always import onnx before torch**. This is a workaround for a [known bug](https://github.com/onnx/onnx/issues/2394)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load the UNSW_NB15 Dataset <a id='load_dataset'></a>\n",
    "\n",
    "### Dataset Quantization <a id='dataset_qnt'></a>\n",
    "\n",
    "The goal of this notebook is to train a Quantized Neural Network (QNN) to be later deployed as an FPGA accelerator generated by the FINN compiler. Although we can choose a variety of different precisions for the input, [Murovic and Trost](https://ev.fe.uni-lj.si/1-2-2019/Murovic.pdf) have previously shown we can actually binarize the inputs and still get good (90%+) accuracy."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will create a binarized representation for the dataset by following the procedure defined by Murovic and Trost, which we repeat briefly here:\n",
    "\n",
    "* Original features have different formats ranging from integers, floating numbers to strings.\n",
    "* Integers, which for example represent a packet lifetime, are binarized with as many bits as to include the maximum value. \n",
    "* Another case is with features formatted as strings (protocols), which are binarized by simply counting the number of all different strings for each feature and coding them in the appropriate number of bits.\n",
    "* Floating-point numbers are reformatted into fixed-point representation.\n",
    "* In the end, each sample is transformed into a 593-bit wide binary vector. \n",
    "* All vectors are labeled as bad (0) or normal (1)\n",
    "\n",
    "Following Murovic and Trost's open-source implementation provided as a Matlab script [here](https://github.com/TadejMurovic/BNN_Deployment/blob/master/cybersecurity_dataset_unswb15.m), we've created a [Python version](dataloader_quantized.py).\n",
    "\n",
    "<font color=\"red\">**Live FINN tutorial:** Downloading the original dataset and quantizing it can take some time, so we provide a download link to the pre-quantized version for your convenience. </font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "--2022-01-13 11:50:50--  https://zenodo.org/record/4519767/files/unsw_nb15_binarized.npz?download=1\n",
      "Resolving zenodo.org (zenodo.org)... 137.138.76.77\n",
      "Connecting to zenodo.org (zenodo.org)|137.138.76.77|:443... connected.\n",
      "HTTP request sent, awaiting response... 200 OK\n",
      "Length: 13391907 (13M) [application/octet-stream]\n",
      "Saving to: 'unsw_nb15_binarized.npz'\n",
      "\n",
      "     0K .......... .......... .......... .......... ..........  0% 1.71M 7s\n",
      "    50K .......... .......... .......... .......... ..........  0% 2.20M 7s\n",
      "   100K .......... .......... .......... .......... ..........  1% 1.72M 7s\n",
      "   150K .......... .......... .......... .......... ..........  1% 2.59M 6s\n",
      "   200K .......... .......... .......... .......... ..........  1% 1.63M 7s\n",
      "   250K .......... .......... .......... .......... ..........  2% 1.97M 7s\n",
      "   300K .......... .......... .......... .......... ..........  2% 2.56M 6s\n",
      "   350K .......... .......... .......... .......... ..........  3% 1.10M 7s\n",
      "   400K .......... .......... .......... .......... ..........  3% 2.69M 7s\n",
      "   450K .......... .......... .......... .......... ..........  3% 2.03M 6s\n",
      "   500K .......... .......... .......... .......... ..........  4% 2.05M 6s\n",
      "   550K .......... .......... .......... .......... ..........  4% 1.97M 6s\n",
      "   600K .......... .......... .......... .......... ..........  4% 2.19M 6s\n",
      "   650K .......... .......... .......... .......... ..........  5% 2.31M 6s\n",
      "   700K .......... .......... .......... .......... ..........  5% 1.51M 6s\n",
      "   750K .......... .......... .......... .......... ..........  6% 2.82M 6s\n",
      "   800K .......... .......... .......... .......... ..........  6% 2.35M 6s\n",
      "   850K .......... .......... .......... .......... ..........  6% 1.08M 6s\n",
      "   900K .......... .......... .......... .......... ..........  7% 1.94M 6s\n",
      "   950K .......... .......... .......... .......... ..........  7% 1.69M 6s\n",
      "  1000K .......... .......... .......... .......... ..........  8% 2.32M 6s\n",
      "  1050K .......... .......... .......... .......... ..........  8% 2.34M 6s\n",
      "  1100K .......... .......... .......... .......... ..........  8% 2.58M 6s\n",
      "  1150K .......... .......... .......... .......... ..........  9%  949K 6s\n",
      "  1200K .......... .......... .......... .......... ..........  9% 1.41M 6s\n",
      "  1250K .......... .......... .......... .......... ..........  9% 4.16M 6s\n",
      "  1300K .......... .......... .......... .......... .......... 10% 2.29M 6s\n",
      "  1350K .......... .......... .......... .......... .......... 10% 2.14M 6s\n",
      "  1400K .......... .......... .......... .......... .......... 11% 2.21M 6s\n",
      "  1450K .......... .......... .......... .......... .......... 11% 2.22M 6s\n",
      "  1500K .......... .......... .......... .......... .......... 11% 1.79M 6s\n",
      "  1550K .......... .......... .......... .......... .......... 12%  990K 6s\n",
      "  1600K .......... .......... .......... .......... .......... 12% 2.12M 6s\n",
      "  1650K .......... .......... .......... .......... .......... 12% 1.81M 6s\n",
      "  1700K .......... .......... .......... .......... .......... 13% 2.50M 6s\n",
      "  1750K .......... .......... .......... .......... .......... 13% 2.39M 6s\n",
      "  1800K .......... .......... .......... .......... .......... 14%  993K 6s\n",
      "  1850K .......... .......... .......... .......... .......... 14% 3.66M 6s\n",
      "  1900K .......... .......... .......... .......... .......... 14% 2.05M 6s\n",
      "  1950K .......... .......... .......... .......... .......... 15% 1.83M 6s\n",
      "  2000K .......... .......... .......... .......... .......... 15% 2.60M 6s\n",
      "  2050K .......... .......... .......... .......... .......... 16% 2.04M 6s\n",
      "  2100K .......... .......... .......... .......... .......... 16% 2.24M 6s\n",
      "  2150K .......... .......... .......... .......... .......... 16% 2.02M 6s\n",
      "  2200K .......... .......... .......... .......... .......... 17% 1.54M 6s\n",
      "  2250K .......... .......... .......... .......... .......... 17% 1.06M 6s\n",
      "  2300K .......... .......... .......... .......... .......... 17% 3.33M 6s\n",
      "  2350K .......... .......... .......... .......... .......... 18% 1.77M 6s\n",
      "  2400K .......... .......... .......... .......... .......... 18% 1.44M 6s\n",
      "  2450K .......... .......... .......... .......... .......... 19% 2.65M 6s\n",
      "  2500K .......... .......... .......... .......... .......... 19% 1.78M 6s\n",
      "  2550K .......... .......... .......... .......... .......... 19% 1.39M 6s\n",
      "  2600K .......... .......... .......... .......... .......... 20% 2.13M 5s\n",
      "  2650K .......... .......... .......... .......... .......... 20% 1.77M 5s\n",
      "  2700K .......... .......... .......... .......... .......... 21% 1.60M 5s\n",
      "  2750K .......... .......... .......... .......... .......... 21%  677K 6s\n",
      "  2800K .......... .......... .......... .......... .......... 21% 5.16M 6s\n",
      "  2850K .......... .......... .......... .......... .......... 22% 1.72M 5s\n",
      "  2900K .......... .......... .......... .......... .......... 22% 13.3M 5s\n",
      "  2950K .......... .......... .......... .......... .......... 22% 2.50M 5s\n",
      "  3000K .......... .......... .......... .......... .......... 23% 1.37M 5s\n",
      "  3050K .......... .......... .......... .......... .......... 23%  734K 5s\n",
      "  3100K .......... .......... .......... .......... .......... 24% 9.25M 5s\n",
      "  3150K .......... .......... .......... .......... .......... 24% 1.18M 5s\n",
      "  3200K .......... .......... .......... .......... .......... 24% 60.6M 5s\n",
      "  3250K .......... .......... .......... .......... .......... 25% 1.64M 5s\n",
      "  3300K .......... .......... .......... .......... .......... 25% 1.76M 5s\n",
      "  3350K .......... .......... .......... .......... .......... 25% 1.54M 5s\n",
      "  3400K .......... .......... .......... .......... .......... 26% 1.56M 5s\n",
      "  3450K .......... .......... .......... .......... .......... 26% 1018K 5s\n",
      "  3500K .......... .......... .......... .......... .......... 27% 1.55M 5s\n",
      "  3550K .......... .......... .......... .......... .......... 27% 1.21M 5s\n",
      "  3600K .......... .......... .......... .......... .......... 27% 2.78M 5s\n",
      "  3650K .......... .......... .......... .......... .......... 28% 2.79M 5s\n",
      "  3700K .......... .......... .......... .......... .......... 28% 2.10M 5s\n",
      "  3750K .......... .......... .......... .......... .......... 29% 1.44M 5s\n",
      "  3800K .......... .......... .......... .......... .......... 29% 2.38M 5s\n",
      "  3850K .......... .......... .......... .......... .......... 29% 2.87M 5s\n",
      "  3900K .......... .......... .......... .......... .......... 30% 1.56M 5s\n",
      "  3950K .......... .......... .......... .......... .......... 30% 2.50M 5s\n",
      "  4000K .......... .......... .......... .......... .......... 30% 1.25M 5s\n",
      "  4050K .......... .......... .......... .......... .......... 31% 2.16M 5s\n",
      "  4100K .......... .......... .......... .......... .......... 31% 1.62M 5s\n",
      "  4150K .......... .......... .......... .......... .......... 32% 3.11M 5s\n",
      "  4200K .......... .......... .......... .......... .......... 32% 1.25M 5s\n",
      "  4250K .......... .......... .......... .......... .......... 32% 5.60M 5s\n",
      "  4300K .......... .......... .......... .......... .......... 33% 1.64M 5s\n",
      "  4350K .......... .......... .......... .......... .......... 33% 1.17M 5s\n",
      "  4400K .......... .......... .......... .......... .......... 34% 2.21M 5s\n",
      "  4450K .......... .......... .......... .......... .......... 34% 1.32M 5s\n"
     ]
    }
   ],
   "source": [
    "! wget -O unsw_nb15_binarized.npz https://zenodo.org/record/4519767/files/unsw_nb15_binarized.npz?download=1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can extract the binarized numpy arrays from the .npz archive and wrap them as a PyTorch `TensorDataset` as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Samples in each set: train = 175341, test = 82332\n",
      "Shape of one input sample: torch.Size([593])\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from torch.utils.data import TensorDataset\n",
    "\n",
    "def get_preqnt_dataset(data_dir: str, train: bool):\n",
    "    unsw_nb15_data = np.load(data_dir + \"/unsw_nb15_binarized.npz\")\n",
    "    if train:\n",
    "        partition = \"train\"\n",
    "    else:\n",
    "        partition = \"test\"\n",
    "    part_data = unsw_nb15_data[partition].astype(np.float32)\n",
    "    part_data = torch.from_numpy(part_data)\n",
    "    part_data_in = part_data[:, :-1]\n",
    "    part_data_out = part_data[:, -1]\n",
    "    return TensorDataset(part_data_in, part_data_out)\n",
    "\n",
    "train_quantized_dataset = get_preqnt_dataset(\".\", True)\n",
    "test_quantized_dataset = get_preqnt_dataset(\".\", False)\n",
    "\n",
    "print(\"Samples in each set: train = %d, test = %s\" % (len(train_quantized_dataset), len(test_quantized_dataset))) \n",
    "print(\"Shape of one input sample: \" +  str(train_quantized_dataset[0][0].shape))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up DataLoader\n",
    "\n",
    "Following either option, we now have access to the quantized dataset. We will wrap the dataset in a PyTorch `DataLoader` for easier access in batches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import DataLoader, Dataset\n",
    "\n",
    "batch_size = 1000\n",
    "\n",
    "# dataset loaders\n",
    "train_quantized_loader = DataLoader(train_quantized_dataset, batch_size=batch_size, shuffle=True)\n",
    "test_quantized_loader = DataLoader(test_quantized_dataset, batch_size=batch_size, shuffle=False)    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Input shape for 1 batch: torch.Size([1000, 593])\n",
      "Label shape for 1 batch: torch.Size([1000])\n"
     ]
    }
   ],
   "source": [
    "count = 0\n",
    "for x,y in train_quantized_loader:\n",
    "    print(\"Input shape for 1 batch: \" + str(x.shape))\n",
    "    print(\"Label shape for 1 batch: \" + str(y.shape))\n",
    "    count += 1\n",
    "    if count == 1:\n",
    "        break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Define a PyTorch Device <a id='define_pytorch_device'></a> \n",
    "\n",
    "GPUs can significantly speed-up training of deep neural networks. We check for availability of a GPU and if so define it as target device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Target device: cuda\n"
     ]
    }
   ],
   "source": [
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "print(\"Target device: \" + str(device))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Define the Quantized MLP Model <a id='define_quantized_mlp'></a>\n",
    "\n",
    "We'll now define an MLP model that will be trained to perform inference with quantized weights and activations.\n",
    "For this, we'll use the quantization-aware training (QAT) capabilities offered by [Brevitas](https://github.com/Xilinx/brevitas).\n",
    "\n",
    "Our MLP will have four fully-connected (FC) layers in total: three hidden layers with 64 neurons, and a final output layer with a single output, all using 2-bit weights. We'll use 2-bit quantized ReLU activation functions, and apply batch normalization between each FC layer and its activation.\n",
    "\n",
    "In case you'd like to experiment with different quantization settings or topology parameters, we'll define all these topology settings as variables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_size = 593      \n",
    "hidden1 = 64      \n",
    "hidden2 = 64\n",
    "hidden3 = 64\n",
    "weight_bit_width = 2\n",
    "act_bit_width = 2\n",
    "num_classes = 1    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can define our MLP using the layer primitives provided by Brevitas:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (0): QuantLinear(\n",
       "    in_features=593, out_features=64, bias=True\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (output_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (weight_quant): WeightQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (tensor_quant): RescalingIntQuant(\n",
       "        (int_quant): IntQuant(\n",
       "          (float_to_int_impl): RoundSte()\n",
       "          (tensor_clamp_impl): TensorClampSte()\n",
       "          (delay_wrapper): DelayWrapper(\n",
       "            (delay_impl): _NoDelay()\n",
       "          )\n",
       "        )\n",
       "        (scaling_impl): StatsFromParameterScaling(\n",
       "          (parameter_list_stats): _ParameterListStats(\n",
       "            (first_tracked_param): _ViewParameterWrapper(\n",
       "              (view_shape_impl): OverTensorView()\n",
       "            )\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsMax()\n",
       "            )\n",
       "          )\n",
       "          (stats_scaling_impl): _StatsScaling(\n",
       "            (affine_rescaling): Identity()\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_scaling_pre): Identity()\n",
       "          )\n",
       "        )\n",
       "        (int_scaling_impl): IntScaling()\n",
       "        (zero_point_impl): ZeroZeroPoint(\n",
       "          (zero_point): StatelessBuffer()\n",
       "        )\n",
       "        (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "          (bit_width): StatelessBuffer()\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (bias_quant): BiasQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "  )\n",
       "  (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "  (2): Dropout(p=0.5, inplace=False)\n",
       "  (3): QuantReLU(\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (act_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "        (activation_impl): ReLU()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClamp()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "            (stats_input_view_shape_impl): OverTensorView()\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsPercentile()\n",
       "            )\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_inplace_preprocess): Identity()\n",
       "            (restrict_preprocess): Identity()\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (4): QuantLinear(\n",
       "    in_features=64, out_features=64, bias=True\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (output_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (weight_quant): WeightQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (tensor_quant): RescalingIntQuant(\n",
       "        (int_quant): IntQuant(\n",
       "          (float_to_int_impl): RoundSte()\n",
       "          (tensor_clamp_impl): TensorClampSte()\n",
       "          (delay_wrapper): DelayWrapper(\n",
       "            (delay_impl): _NoDelay()\n",
       "          )\n",
       "        )\n",
       "        (scaling_impl): StatsFromParameterScaling(\n",
       "          (parameter_list_stats): _ParameterListStats(\n",
       "            (first_tracked_param): _ViewParameterWrapper(\n",
       "              (view_shape_impl): OverTensorView()\n",
       "            )\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsMax()\n",
       "            )\n",
       "          )\n",
       "          (stats_scaling_impl): _StatsScaling(\n",
       "            (affine_rescaling): Identity()\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_scaling_pre): Identity()\n",
       "          )\n",
       "        )\n",
       "        (int_scaling_impl): IntScaling()\n",
       "        (zero_point_impl): ZeroZeroPoint(\n",
       "          (zero_point): StatelessBuffer()\n",
       "        )\n",
       "        (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "          (bit_width): StatelessBuffer()\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (bias_quant): BiasQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "  )\n",
       "  (5): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "  (6): Dropout(p=0.5, inplace=False)\n",
       "  (7): QuantReLU(\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (act_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "        (activation_impl): ReLU()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClamp()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "            (stats_input_view_shape_impl): OverTensorView()\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsPercentile()\n",
       "            )\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_inplace_preprocess): Identity()\n",
       "            (restrict_preprocess): Identity()\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (8): QuantLinear(\n",
       "    in_features=64, out_features=64, bias=True\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (output_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (weight_quant): WeightQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (tensor_quant): RescalingIntQuant(\n",
       "        (int_quant): IntQuant(\n",
       "          (float_to_int_impl): RoundSte()\n",
       "          (tensor_clamp_impl): TensorClampSte()\n",
       "          (delay_wrapper): DelayWrapper(\n",
       "            (delay_impl): _NoDelay()\n",
       "          )\n",
       "        )\n",
       "        (scaling_impl): StatsFromParameterScaling(\n",
       "          (parameter_list_stats): _ParameterListStats(\n",
       "            (first_tracked_param): _ViewParameterWrapper(\n",
       "              (view_shape_impl): OverTensorView()\n",
       "            )\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsMax()\n",
       "            )\n",
       "          )\n",
       "          (stats_scaling_impl): _StatsScaling(\n",
       "            (affine_rescaling): Identity()\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_scaling_pre): Identity()\n",
       "          )\n",
       "        )\n",
       "        (int_scaling_impl): IntScaling()\n",
       "        (zero_point_impl): ZeroZeroPoint(\n",
       "          (zero_point): StatelessBuffer()\n",
       "        )\n",
       "        (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "          (bit_width): StatelessBuffer()\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (bias_quant): BiasQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "  )\n",
       "  (9): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "  (10): Dropout(p=0.5, inplace=False)\n",
       "  (11): QuantReLU(\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (act_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "        (activation_impl): ReLU()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClamp()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "            (stats_input_view_shape_impl): OverTensorView()\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsPercentile()\n",
       "            )\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_inplace_preprocess): Identity()\n",
       "            (restrict_preprocess): Identity()\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (12): QuantLinear(\n",
       "    in_features=64, out_features=1, bias=True\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (output_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (weight_quant): WeightQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (tensor_quant): RescalingIntQuant(\n",
       "        (int_quant): IntQuant(\n",
       "          (float_to_int_impl): RoundSte()\n",
       "          (tensor_clamp_impl): TensorClampSte()\n",
       "          (delay_wrapper): DelayWrapper(\n",
       "            (delay_impl): _NoDelay()\n",
       "          )\n",
       "        )\n",
       "        (scaling_impl): StatsFromParameterScaling(\n",
       "          (parameter_list_stats): _ParameterListStats(\n",
       "            (first_tracked_param): _ViewParameterWrapper(\n",
       "              (view_shape_impl): OverTensorView()\n",
       "            )\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsMax()\n",
       "            )\n",
       "          )\n",
       "          (stats_scaling_impl): _StatsScaling(\n",
       "            (affine_rescaling): Identity()\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_scaling_pre): Identity()\n",
       "          )\n",
       "        )\n",
       "        (int_scaling_impl): IntScaling()\n",
       "        (zero_point_impl): ZeroZeroPoint(\n",
       "          (zero_point): StatelessBuffer()\n",
       "        )\n",
       "        (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "          (bit_width): StatelessBuffer()\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (bias_quant): BiasQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from brevitas.nn import QuantLinear, QuantReLU\n",
    "import torch.nn as nn\n",
    "\n",
    "# Setting seeds for reproducibility\n",
    "torch.manual_seed(0)\n",
    "\n",
    "model = nn.Sequential(\n",
    "      QuantLinear(input_size, hidden1, bias=True, weight_bit_width=weight_bit_width),\n",
    "      nn.BatchNorm1d(hidden1),\n",
    "      nn.Dropout(0.5),\n",
    "      QuantReLU(bit_width=act_bit_width),\n",
    "      QuantLinear(hidden1, hidden2, bias=True, weight_bit_width=weight_bit_width),\n",
    "      nn.BatchNorm1d(hidden2),\n",
    "      nn.Dropout(0.5),\n",
    "      QuantReLU(bit_width=act_bit_width),\n",
    "      QuantLinear(hidden2, hidden3, bias=True, weight_bit_width=weight_bit_width),\n",
    "      nn.BatchNorm1d(hidden3),\n",
    "      nn.Dropout(0.5),\n",
    "      QuantReLU(bit_width=act_bit_width),\n",
    "      QuantLinear(hidden3, num_classes, bias=True, weight_bit_width=weight_bit_width)\n",
    ")\n",
    "\n",
    "model.to(device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the MLP's output is not yet quantized. Even though we want the final output of our MLP to be a binary (0/1) value indicating the classification, we've only defined a single-neuron FC layer as the output. While training the network we'll pass that output through a sigmoid function as part of the loss criterion, which [gives better numerical stability](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html). Later on, after we're done training the network, we'll add a quantization node at the end before we export it to FINN."
   ]
  },
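  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (a minimal sketch, not part of the tutorial flow): `BCEWithLogitsLoss` applied to raw logits is equivalent to applying a sigmoid followed by `BCELoss`, just implemented in a numerically more stable way, which is why we can leave the sigmoid out of the model itself:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "logits = torch.randn(8, 1)                     # raw single-neuron outputs\n",
    "targets = torch.randint(0, 2, (8, 1)).float()  # 0/1 labels\n",
    "\n",
    "loss_fused = nn.BCEWithLogitsLoss()(logits, targets)\n",
    "loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)\n",
    "print(torch.allclose(loss_fused, loss_manual))  # True (up to floating-point error)\n",
    "```"
   ]
  },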
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Define Train and Test  Methods  <a id='train_test'></a>\n",
    "The train and test methods will use a `DataLoader`, which feeds the model with a new predefined batch of training data in each iteration, until the entire training data is fed to the model. Each repetition of this process is called an `epoch`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train(model, train_loader, optimizer, criterion):\n",
    "    losses = []\n",
    "    # ensure model is in training mode\n",
    "    model.train()    \n",
    "    \n",
    "    for i, data in enumerate(train_loader, 0):        \n",
    "        inputs, target = data\n",
    "        inputs, target = inputs.to(device), target.to(device)\n",
    "        optimizer.zero_grad()   \n",
    "                \n",
    "        # forward pass\n",
    "        output = model(inputs.float())\n",
    "        loss = criterion(output, target.unsqueeze(1))\n",
    "        \n",
    "        # backward pass + run optimizer to update weights\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        \n",
    "        # keep track of loss value\n",
    "        losses.append(loss.data.cpu().numpy()) \n",
    "           \n",
    "    return losses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "def test(model, test_loader):    \n",
    "    # ensure model is in eval mode\n",
    "    model.eval() \n",
    "    y_true = []\n",
    "    y_pred = []\n",
    "   \n",
    "    with torch.no_grad():\n",
    "        for data in test_loader:\n",
    "            inputs, target = data\n",
    "            inputs, target = inputs.to(device), target.to(device)\n",
    "            output_orig = model(inputs.float())\n",
    "            # run the output through sigmoid\n",
    "            output = torch.sigmoid(output_orig)  \n",
    "            # compare against a threshold of 0.5 to generate 0/1\n",
    "            pred = (output.detach().cpu().numpy() > 0.5) * 1\n",
    "            target = target.cpu().float()\n",
    "            y_true.extend(target.tolist()) \n",
    "            y_pred.extend(pred.reshape(-1).tolist())\n",
    "        \n",
    "    return accuracy_score(y_true, y_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train the QNN <a id=\"train_qnn\"></a>\n",
    "\n",
    "We provide two options for training below: you can opt for training the model from scratch (slower) or use a pre-trained model (faster). The first option will give more insight into how the training process works, while the second option will likely give better accuracy."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## (Option 1, slower) Train the Model from Scratch <a id=\"train_scratch\"></a>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we start training our MLP we need to define some hyperparameters. Moreover, in order to monitor the loss function evolution over epochs, we need to define a method for it. As mentioned earlier, we'll use a loss criterion which applies a sigmoid function during the training phase (`BCEWithLogitsLoss`). For the testing phase, we're manually computing the sigmoid and thresholding at 0.5."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_epochs = 10\n",
    "lr = 0.001 \n",
    "\n",
    "def display_loss_plot(losses, title=\"Training loss\", xlabel=\"Iterations\", ylabel=\"Loss\"):\n",
    "    x_axis = [i for i in range(len(losses))]\n",
    "    plt.plot(x_axis,losses)\n",
    "    plt.title(title)\n",
    "    plt.xlabel(xlabel)\n",
    "    plt.ylabel(ylabel)\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# loss criterion and optimizer\n",
    "criterion = nn.BCEWithLogitsLoss().to(device)\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Training loss = 0.132592 test accuracy = 0.808884: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [02:06<00:00, 12.67s/it]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from sklearn.metrics import accuracy_score\n",
    "from tqdm import tqdm, trange\n",
    "\n",
    "# Setting seeds for reproducibility\n",
    "torch.manual_seed(0)\n",
    "np.random.seed(0)\n",
    "\n",
    "running_loss = []\n",
    "running_test_acc = []\n",
    "t = trange(num_epochs, desc=\"Training loss\", leave=True)\n",
    "\n",
    "for epoch in t:\n",
    "        loss_epoch = train(model, train_quantized_loader, optimizer,criterion)\n",
    "        test_acc = test(model, test_quantized_loader)\n",
    "        t.set_description(\"Training loss = %f test accuracy = %f\" % (np.mean(loss_epoch), test_acc))\n",
    "        t.refresh() # to show immediately the update           \n",
    "        running_loss.append(loss_epoch)\n",
    "        running_test_acc.append(test_acc)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAY4AAAEWCAYAAABxMXBSAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8/fFQqAAAACXBIWXMAAAsTAAALEwEAmpwYAAAoYklEQVR4nO3dfXQd9X3n8fdHV4+W5Hv9IIPRFbENNEBBcroOaQLkYbPpgWw3TtrTLTRLSAtL2LO0TTc5Dds/stntPpCEJE27JCwhtOxpUjbbwIYmTkhCSGhLk9oQ25gHY9c8WH6UbWxJfpAs6bt/3JF9LV/Z99oa3yvp8zpHZ2Z+M7+5v7kYfTS/mfmNIgIzM7Ny1VW7AWZmNr04OMzMrCIODjMzq4iDw8zMKuLgMDOzijg4zMysIg4OszMg6buSbp7qbStswzsl9U71fs1Op77aDTA7VyQNFi3OAYaA0WT5IxHxtXL3FRHXp7Gt2XTg4LBZIyLaxuclvQLcGhE/nLidpPqIGDmXbTObTtxVZbPeeJePpE9I2gn8uaR5kr4tqU/S68l8vqjOjyXdmsx/WNLfSbo72fZlSdef4bZLJT0paUDSDyXdI+kvyzyOy5LP2i/pOUnvK1r3XknPJ/vdJunjSfnC5Nj2S9on6W8l+feCnZL/gZgVnA/MB94A3Ebh/40/T5YvBA4D//MU9d8CbAQWAp8BvipJZ7Dt14F/BBYAnwJuKqfxkhqAvwG+DywCfhf4mqQ3Jpt8lUJ3XDtwBfCjpPxjQC/QAZwH/BHgcYjslBwcZgVjwH+KiKGIOBwReyPimxFxKCIGgP8GvOMU9V+NiK9ExCjwILCYwi/isreVdCHwZuCTETEcEX8HPFpm+38ZaAPuSur+CPg2cGOy/ihwuaS5EfF6RDxTVL4YeENEHI2Ivw0PYGen4eAwK+iLiCPjC5LmSPpfkl6V1A88CeQkZSapv3N8JiIOJbNtFW57AbCvqAxga5ntvwDYGhFjRWWvAp3J/K8D7wVelfQTSW9Nyj8LbAa+L2mLpDvL/DybxRwcZgUT/8r+GPBG4C0RMRd4e1I+WffTVNgBzJc0p6isq8y624GuCdcnLgS2AUTE6ohYSaEb6/8B30jKByLiYxGxDPhXwH+Q9O6zOwyb6RwcZqW1U7iusV/SfOA/pf2BEfEqsAb4lKTG5KzgX5VZ/WfAQeAPJTVIemdS96FkXx+UlI2Io0A/yW3Ikn5V0sXJNZbx8tGSn2CWcHCYlfYnQAuwB/gp8L1z9LkfBN4K7AX+K/B/KDxvckoRMQy8D7ieQpu/BHwoIl5MNrkJeCXpdrsd+DdJ+SXAD4FB4B+AL0XEj6fqYGxmkq+DmdUuSf8HeDEiUj/jMSuXzzjMaoikN0u6SFKdpOuAlRSuSZjVDD85blZbzgcepvAcRy/w7yLi59VtktmJ3FVlZmYVcVeVmZlVZFZ0VS1cuDCWLFlS7WaYmU0rTz/99J6I6JhYPiuCY8mSJaxZs6bazTAzm1YkvVqq3F1VZmZWEQeHmZlVxMFhZmYVcXCYmVlFHBxmZlYRB4eZmVXEwWFmZhVxcJzCExt386Ufb652M8zMaoqD4xSe2ryHP/nhJoZHxk6/sZnZLOHgOIXufI7hkTFe2jVQ7aaYmdUMB8cp9ORzAKzdur+q7TAzqyWpBoek6yRtlLRZ0p0l1q+UtF7SWklrJF1zurqS5kv6gaRNyXReWu3vmt/CvDkNrO/dn9ZHmJlNO6kFh6QMcA+FdyBfDtwo6fIJmz0O9ETEcuB3gPvLqHsn8HhEXJLUPymQpvAY6M7nWN97IK2PMDObdtI847gK2BwRWyJiGHiIwmswj4mIwTj+JqlWIMqouxJ4MJl/EHh/eocAPV05Xto1wKHhkTQ/xsxs2kgzODqBrUXLvUnZCSR9QNKLwHconHWcru55EbEDIJkuKvXhkm5Lur/W9PX1nfFB9OSzjAVs2NZ/xvswM5tJ0gwOlSg76T21EfFIRFxK4czhjyupeyoRcV9ErIiIFR0dJ72HpGzdyQVyX+cwMytIMzh6ga6i5TywfbKNI+JJ4CJJC09Td5ekxQDJdPdUNnqijvYmOnMtvrPKzCyRZnCsBi6RtFRSI3AD8GjxBpIulqRk/peARmDvaeo+CtyczN8MfCvFYwCgO5/1BXIzs0Rqr46NiBFJdwCPARnggYh4TtLtyfp7gV8HPiTpKHAY+M3kYnnJusmu7wK+IekW4DXgN9I6hnHd+Rzf3bCT1w8OM6+1Me2PMzOraam+czwiVgGrJpTdWzT/aeDT5dZNyvcC757alp5aT1cWgHW9+3nnG0teizczmzX85HgZruzMIuHuKjMzHBxlaW9uYNnCVt9ZZWaGg6NsPV051m49wPHnFc3MZicHR5l68jn2DA6x48CRajfFzKyqHBxl6s4XLpC7u8rMZjsHR5kuWzyXhoxYu9UXyM1sdnNwlKm5IcOl58/1GYeZzXoOjgp057M823uAsTFfIDez2cvBUYGerhwDQyNs2XOw2k0xM6saB0cFejxSrpmZg6MSFy9qY05jxk+Qm9ms5uCoQKZOXNGZ9RDrZjarOTgq1JPP8vyOfoZHxqrdFDOzqnBwVKg7n2N4ZIyNOweq3RQzs6pwcFRoeVcOKAyxbmY2Gzk4KpSf18K8OQ2+s8rMZi0HR4Uk0Z3Psc5Dj5jZLOXgOAM9XTk27R7g0PBItZtiZnbOOTjOQE8+y1jAhm391W6Kmdk55+A4A93JE+Tr/DyHmc1CqQaHpOskbZS0WdKdJdZ/UNL65OcpST1J+RslrS366Zf00WTdpyRtK1r33jSPoZSO9iY6cy2+s8rMZqX6tHYsKQPcA7wH6AVWS3o0Ip4v2uxl4B0R8bqk64H7gLdExEZgedF+tgGPFNX7QkTcnVbby9Gdz3roETObldI847gK2BwRWyJiGHgIWFm8QUQ8FRGvJ4s/BfIl9vNu4J8i4tUU21qxnq4cr+07xL6Dw9VuipnZOZVmcHQCW4uWe5OyydwCfLdE+Q3AX00ouyPp3npA0rxSO5N0m6Q1ktb09fVV0u6y+FWyZjZbpRkcKlFW8g1Ikt5FITg+MaG8EXgf8H+Lir8MXEShK2sH8LlS+4yI+yJiRUSs6OjoqLjxp3NlZxYJd1eZ2ayTZnD0Al1Fy3lg+8SNJHUD9wMrI2LvhNXXA89ExK7xgojYFRGjETEGfIVCl9g5197cwEUdbb6zysxmnTSDYzVwiaSlyZnDDcCjxRtIuhB4GLgpIl4qsY8bmdBNJWlx0eIHgA1T2uoKdOezrOs9QIRfJWtms0dqwRERI8AdwGPAC8A3IuI5SbdLuj3Z7JPAAuBLya21a8brS5pD4Y6shyfs+jOSnpW0HngX8AdpHcPp9ORz7BkcYseBI9VqgpnZOZfa7bgAEbEKWDWh7N6i+VuBWyepe4hCqEwsv2mKm3nGesZHyt26nwtyLdVtjJnZOeInx8/CZYvbaciIdb5AbmaziIPjLDTVZ7j0/Lm+JdfMZhUHx1nq6crybO8BxsZ8gdz
MZgcHx1nqzucYGBphy56D1W6Kmdk54eA4Sz3JSLnurjKz2cLBcZYuXtTGnMaMHwQ0s1nDwXGWMnXiis6s76wys1nDwTEFevJZnt/Rz/DIWLWbYmaWOgfHFOjpyjE8MsbGnQPVboqZWeocHFNg/AK53whoZrOBg2MK5Oe1MG9Og++sMrNZwcExBSTR05Vj3VZfIDezmc/BMUW68zk27R7g4NBItZtiZpYqB8cU6clnGQvYsM1nHWY2szk4pkj3sSfIHRxmNrM5OKZIR3sTnbkW31llZjOeg2MK9XRlHRxmNuM5OKZQdz7H1n2H2XdwuNpNMTNLjYNjCnXns4BHyjWzmS3V4JB0naSNkjZLurPE+g9KWp/8PCWpp2jdK5KelbRW0pqi8vmSfiBpUzKdl+YxVOLKziwSfp7DzGa01IJDUga4B7geuBy4UdLlEzZ7GXhHRHQDfwzcN2H9uyJieUSsKCq7E3g8Ii4BHk+Wa0J7cwMXdbT5jMPMZrQ0zziuAjZHxJaIGAYeAlYWbxART0XE68niT4F8GftdCTyYzD8IvH9qmjs1uvOFIdYj/CpZM5uZ0gyOTmBr0XJvUjaZW4DvFi0H8H1JT0u6raj8vIjYAZBMF5XamaTbJK2RtKavr++MDuBMLO/KsWdwiO0HjpyzzzQzO5fSDA6VKCv5Z7ikd1EIjk8UFV8dEb9Eoavr30t6eyUfHhH3RcSKiFjR0dFRSdWzcuxBQL8R0MxmqDSDoxfoKlrOA9snbiSpG7gfWBkRe8fLI2J7Mt0NPEKh6wtgl6TFSd3FwO5UWn+GLlvcTkNGfiOgmc1YaQbHauASSUslNQI3AI8WbyDpQuBh4KaIeKmovFVS+/g88CvAhmT1o8DNyfzNwLdSPIaKNdVnuGzxXL+D3MxmrPq0dhwRI5LuAB4DMsADEfGcpNuT9fcCnwQWAF+SBDCS3EF1HvBIUlYPfD0ivpfs+i7gG5JuAV4DfiOtYzhT3fks3/r5dsbGgrq6Uj12ZmbTV2rBARARq4BVE8ruLZq/Fbi1RL0tQM/E8mTdXuDdU9vSqdWdz/GXP32NLXsOcvGitmo3x8xsSvnJ8RQs78oBuLvKzGYkB0cKLupoY05jxg8CmtmM5OBIQaZOXNGZ9Z1VZjYjOThSsrwrx/Pb+xkeGat2U8zMppSDIyXd+SzDo2Ns3DlQ7aaYmU0pB0dKepInyP1iJzObaRwcKcnPa2F+a6PvrDKzGcfBkRJJdOezrPcFcjObYRwcKerO59i0e4CDQyPVboqZ2ZRxcKRoeVeWsYAN23zWYWYzh4MjRceGWHd3lZnNIA6OFC1sa6Iz18Ja31llZjOIgyNlPV1ZDz1iZjOKgyNl3fkcW/cdZt/B4Wo3xcxsSjg4UuYHAc1spnFwpOzKfBYJ1m/1BXIzmxkcHClra6rnoo42X+cwsxnDwXEO9ORzrOvdT0RUuylmZmfNwXEO9HRl2TM4zPYDR6rdFDOzs+bgOAeOPQjoAQ/NbAZINTgkXSdpo6TNku4ssf6DktYnP09J6knKuyQ9IekFSc9J+v2iOp+StE3S2uTnvWkew1S4bHE7DRn5QUAzmxHq09qxpAxwD/AeoBdYLenRiHi+aLOXgXdExOuSrgfuA94CjAAfi4hnJLUDT0v6QVHdL0TE3Wm1fao11We4bPFc31llZjNCWWccklol1SXzvyDpfZIaTlPtKmBzRGyJiGHgIWBl8QYR8VREvJ4s/hTIJ+U7IuKZZH4AeAHoLPegalF3Psuz2w4wNuYL5GY2vZXbVfUk0CypE3gc+G3gL05TpxPYWrTcy6l/+d8CfHdioaQlwJuAnxUV35F0bz0gaV6pnUm6TdIaSWv6+vpO09T09eRzDA6NsGXPYLWbYmZ2VsoNDkXEIeDXgD+LiA8Al5+uTomykn9uS3oXheD4xITyNuCbwEcjoj8p/jJwEbAc2AF8rtQ+I+K+iFgRESs6OjpO09T09XTlAFjn7iozm+bKDg5JbwU+CHwnKTvd9ZFeoKtoOQ9sL7HjbuB+YGVE7C0qb6AQGl+LiIfHyyNiV0SMRsQY8BUKXWI176KONuY0ZvwgoJlNe+UGx0eB/wg8EhHPSVoGPHGaOquBSyQtldQI3AA8WryBpAuBh4GbIuKlonIBXwVeiIjPT6izuGjxA8CGMo+hqjJ14srOLGv9bg4zm+bKuqsqIn4C/AQguUi+JyJ+7zR1RiTdATwGZIAHktC5PVl/L/BJYAHwpUJWMBIRK4CrgZuAZyWtTXb5RxGxCviMpOUUur1eAT5S9tFWWU9Xjr/4+1cYHhmjsd6P0JjZ9FRWcEj6OnA7MAo8DWQlfT4iPnuqeskv+lUTyu4tmr8VuLVEvb+j9DUSIuKmctpci7rzWYZHx9i4c4Ar89lqN8fM7IyU+2fv5cnF6fdTCIILKZwRWAXGh1j3g4BmNp2VGxwNycXq9wPfioijTHKHlE0uP6+F+a2NHnrEzKa1coPjf1G4ntAKPCnpDUD/KWvYSSTRnc+y3hfIzWwaKys4IuJPI6IzIt4bBa8C70q5bTNSTz7Hpt0DHBwaqXZTzMzOSLlDjmQlfX78SWxJn6Nw9mEV6unKMhawYZvPOsxseiq3q+oBYAD418lPP/DnaTVqJjs2xLq7q8xsmip3dNyLIuLXi5b/c9HzFVaBhW1NdOZafGeVmU1b5Z5xHJZ0zfiCpKuBw+k0aebr6cp66BEzm7bKPeO4HfjfksafWnsduDmdJs18Pfkcq57dyb6Dw8xvbax2c8zMKlLuXVXrIqIH6Aa6I+JNwD9PtWUz2Ph1jnU+6zCzaaiiAZMior9oePP/kEJ7ZoUr81kk/EZAM5uWzmakvZJjSdnptTXVc3FHm884zGxaOpvg8JAjZ6E7n2N9734i/DWa2fRyyuCQNCCpv8TPAHDBOWrjjNTTlWXP4DDbDxypdlPMzCpyyruqIqL9XDVkthkfKXfd1v105lqq2xgzswr4bUJVcunidhoy8nUOM5t2HBxV0lSf4bLFc31nlZlNOw6OKurJ53h22wHGxnyB3MymDwdHFXXnswwOjbBlz2C1m2JmVjYHRxX1dOUAWOfuKjObRlINDknXSdooabOkO0us/6Ck9cnPU5J6TldX0nxJP5C0KZnOS/MY0nRRRxutjRlfIDezaSW14JCUAe4BrgcuB26UdPmEzV4G3hER3cAfA/eVUfdO4PGIuAR4PFmeljJ14orOLOv8bg4zm0bSPOO4CtgcEVsiYhh4CFhZvEFEPBURryeLPwXyZdRdCTyYzD8IvD+9Q0hfT1eOF7b3MzwyVu2mmJmVJc3g6AS2Fi33JmWTuQX4bhl1z4uIHQDJdFGpnUm6bfxVt319fWfQ/HOjJ59jeHSMF3f2n35jM7MakGZwlBoEseR9p5LeRSE4PlFp3clExH0RsSIiVnR0dFRS9ZzqzhdeceLuKjObLtIMjl6gq2g5D2yfuJGkbuB+YGVE7C2j7i5Ji5O6i4HdU9zucyo/r4X5rY2s37q/2k0xMytLmsGxGrhE0lJJjcANwKPFG0i6EHgYuCkiXiqz7q
Mcf/vgzcC3UjyG1EmiJ5/1nVVmNm2U++rYikXEiKQ7gMeADPBARDwn6fZk/b3AJ4EFwJckAYwk3Usl6ya7vgv4hqRbgNeA30jrGM6V7nyOn7zUx8GhEVqbUvtPYmY2JVL9LRURq4BVE8ruLZq/Fbi13LpJ+V7g3VPb0urq6coyFrBh2wHesmxBtZtjZnZKfnK8Bvgd5GY2nTg4asDCtiY6cy2+s8rMpgUHR41Y3lV4layZWa1zcNSI7nyWrfsOs3dwqNpNMTM7JQdHjRi/zrF+m7urzKy2OThqxJX5LBJ+I6CZ1TwHR41oa6rn4o4231llZjXPwVFDuvOFC+QRfpWsmdUuB0cNWd6VZc/gMNv2H652U8zMJuXgqCHHLpD7eQ4zq2EOjhpy6eJ2GjLydQ4zq2kOjhrSVJ/h8sVzWech1s2shjk4akx3PseGbf2MjfkCuZnVJgdHjenOZxkcGmHLnsFqN8XMrCQHR41Z3pUDYK0fBDSzGuXgqDHLOtpobcx4wEMzq1kOjhqTqRNXdGY9xLqZ1SwHRw1a3pXjhe39DI+MVbspZmYncXDUoO58juHRMV7c2V/tppiZnSTV4JB0naSNkjZLurPE+ksl/YOkIUkfLyp/o6S1RT/9kj6arPuUpG1F696b5jFUQ3c+C+DuKjOrSakFh6QMcA9wPXA5cKOkyydstg/4PeDu4sKI2BgRyyNiOfDPgEPAI0WbfGF8fUSsSusYqiU/r4UFrY1+ENDMalKaZxxXAZsjYktEDAMPASuLN4iI3RGxGjh6iv28G/iniHg1vabWFkl057O+s8rMalKawdEJbC1a7k3KKnUD8FcTyu6QtF7SA5Lmlaok6TZJaySt6evrO4OPra7ufI5NuwcZHBqpdlPMzE6QZnCoRFlF42hIagTeB/zfouIvAxcBy4EdwOdK1Y2I+yJiRUSs6OjoqORja8LyrhwRsMGvkjWzGpNmcPQCXUXLeWB7hfu4HngmInaNF0TErogYjYgx4CsUusRmnPEL5O6uMrNak2ZwrAYukbQ0OXO4AXi0wn3cyIRuKkmLixY/AGw4q1bWqAVtTXTmWnxnlZnVnPq0dhwRI5LuAB4DMsADEfGcpNuT9fdKOh9YA8wFxpJbbi+PiH5Jc4D3AB+ZsOvPSFpOodvrlRLrZ4zlXTnfWWVmNSe14ABIbpVdNaHs3qL5nRS6sErVPQQsKFF+0xQ3s2Z157N859kd7B0cYkFbU7WbY2YG+MnxmtaTjJS73hfIzayGODhq2BWdWSTcXWVmNcXBUcPamuq5uKON9b5AbmY1xMFR43q6cqzv3U+EXyVrZrXBwVHjevJZ9gwOs23/4Wo3xcwMcHDUvO58DsDdVWZWMxwcNe7Sxe00ZupY5yfIzaxGpPoch529pvoMPV1ZHnzqFSLgtrcvY6Gf6TCzKvIZxzTwhd9cznuvWMz9f7uFaz/9BP9j1QvsHRyqdrPMbJbSbLhbZ8WKFbFmzZpqN+Os/VPfIH/2+CYeXbedpvoMH3rbG7jt2mV+qtzMUiHp6YhYcVK5g2P62bx7kD/7USFAWhoyfOitS7jt7cuY39pY7aaZ2Qzi4JhBwTFu8+4B/vTxzfzN+kKA3Py2Jdx27TLmOUDMbAo4OGZgcIzbvHuALz6+mW+v386chgwfvnoJt17jADGzs+PgmMHBMW7TrgG++PgmvvPsDlob6/nw25Zw67VLyc1xgJhZ5RwcsyA4xr00HiDrd9DWVM9vJ2cg2TkN1W6amU0jDo5ZFBzjNu4c4E+TM5D2JEBucYCYWZkcHLMwOMa9uLOfP318E6ue3VkIkGuWcss1S8m2OEDMbHIOjlkcHONe2FEIkO9u2El7cz2/c/VSfscBYmaTcHA4OI55fns/X3z8JR57bhftzfXcck0hQOY2O0DM7DgHh4PjJM9tP8AXf7iJ7z+/i7nN9dx67TI+fPUSB4iZAZMHR6pjVUm6TtJGSZsl3Vli/aWS/kHSkKSPT1j3iqRnJa2VtKaofL6kH0jalEznpXkMM9kvXpDlvg+t4Nu/ew1vWbaAz//gJa799BP82eObGDhytNrNM7MaldoZh6QM8BLwHqAXWA3cGBHPF22zCHgD8H7g9Yi4u2jdK8CKiNgzYb+fAfZFxF1JGM2LiE+cqi0+4yjPhm0H+JMfbuKHL+wi29LAv712KTe/bQntPgMxm5WqccZxFbA5IrZExDDwELCyeIOI2B0Rq4FK/rxdCTyYzD9IIXRsClzRmeX+m1fwN3dcw5uXzOPu77/EtZ95gnue2Mzg0Ei1m2dmNSLN4OgEthYt9yZl5Qrg+5KelnRbUfl5EbEDIJkuKlVZ0m2S1kha09fXV2HTZ7cr81nuv/nNPHrH1fyzC+fx2cc2cs2nf8Q9T2zmpV0D7D807Hegm81iab7ISSXKKvltc3VEbE+6s34g6cWIeLLcyhFxH3AfFLqqKvhcS3Tnc3z1w29m3db9fPHxTXz2sY189rGNADTW19HR1sR5c5tY1N7MorlNLGovzHcUzS9obaSurtQ/BTObrtIMjl6gq2g5D2wvt3JEbE+muyU9QqHr60lgl6TFEbFD0mJg9xS22Uro6crxwIffzIs7+3lp1yC7+4/QNzDErv4j7B4YYnPfIE/90x76j5zcnZWpEwvbGlnU3sx5c5voaG8uhMp44CTzC9uaaMj4vWJm00GawbEauETSUmAbcAPwW+VUlNQK1EXEQDL/K8B/SVY/CtwM3JVMvzXVDbfSLj1/LpeeP3fS9UeOjtI3MMTugSPs7h9i94T53tcP8/PX9rP34PBJdSWYP6eRjvYmFs1NAqW9ifPmnhg0He1NNDdk0jxMMzuN1IIjIkYk3QE8BmSAByLiOUm3J+vvlXQ+sAaYC4xJ+ihwObAQeETSeBu/HhHfS3Z9F/ANSbcArwG/kdYxWGWaGzJ0zZ9D1/w5p9zu6OgYewaHSoZL30DhLOalnQP0DQ4xOnZyL+Pc5nouyLWwONvM4lwLnePz2cL8edkmmuodLmZp8QOAVrPGxoJ9h4aPdYn19RdCZlf/EDsOHGHHgcNs33+Y1w+dfFPewrYmOnOFMFmca07C5fh8R1uTr72YncZkt+Om2VVldlbq6sTCtsL1j188xXaHh0eTEDnC9gOH2bH/CNv3H2b7gcNs7hvkyU19HBoePaFOfZ04P9vMBUmYXJBr4YLkrOWCXAsX5JrJtjSQnPWaWREHh017LY0ZlnW0sayjreT6iKD/8AjbkzOU7QcKwbIjmX/61ddZ9ewOjo6eePbd0pDhgiRUFmfHw6U4aFpoaXSXmM0+Dg6b8SSRndNAdk4Dly0ufXF/bCzYMzjEtv2H2ZEEy/b9SXfYgSO8uLOPvoGhk+qdP7eZJQvnsHRhG8sWtrJkYStLF7Zy4fw5NNb7LjGbmRwcZhS6xRbNbWbR3GbeNMk2wyNj7Oo/koTLYba9fpiX9xzi5T2DfG/DjhOutdQJuubPYcmCQpAs6yhMlyxo5YJcCxlfX7FpzMFhVqbG+rpT3jW2/9AwL+85yMt7DvLKnoNsSebXvLKPg0XXWBrr6
1iyIAmVjlaWLWxl6cI2liycQ0dbk6+rWM1zcJhNkdycRt50YSNvuvDEAZsjgr6BIbYkgfJyEipb9hzkxxv7GB4dO7ZtW1N94cwk6fIq7v7yC7esVjg4zFImHe8G++VlC05YNzoWbN9/+KRQWbd1P99Zv53ix1gWtDaWDJULci00ZESdRKZOZCTfamypcnCYVVGmTse6v97xCx0nrBsaGWXrvkPHrqOMd4P97aY+/vrp3rL2XQgRjoVJpk7U1x0PmWNhUyfqRImy4/OZ8Tp1IjNh27o6Machw4K2Jha2NbKgrZGFbU0saC0sz2tt9JAyM4iDw6xGNdVnuHhROxcvagfOO2HdwaGRwrWUvQfZeeAIo2PBaARjY8HoGMfnIwrrkp+xOHE6OgZjEYyMjdeNk+qObzsyNsbQSDAaHNv22H4iODg0wr6Dwyfd1jwuN6chCZMkVNoaWdDadCxkCoFTWG5vqve1nhrm4DCbhlqb6rmiM8sVndlqN+UEEUH/kRH2Dg6x9+AwewaG2HNwuLA8OMzeg0PsGRzmhZ397B0c5sDh0q/iaczUHT9rSQJm/ExmQWsTC9uPB9D81kbf+nyOOTjMbMpIItvSQLalgWUdp99+eGSM1w8NsycJlmPTg0nQJAG0adcgfYNDDI+MldzP3Ob6YyGTbWmkrSlDW3M9rU31tDcVpsXzbc31tDUVflqTqW+RLp+Dw8yqprG+jvPmNnPe3ObTbhsRDA6NnHDmcjxsCmc2ewYKD3EeHBrh4NAIA0Mjk4bNRC0NmQmBkjkxXJrraWusPymQZmMIOTjMbFqQRHtzA+3NDSxZ2Fp2veGRMQ4OjTCY/IwHysGhEQaPjJeNMjh0lMGh0WPbDB4ZYfv+I8eXh0YYqiCEWhozNNXX0VRfR3NDMj8+rc/Q3HDitKmhjuZj0+PbNpdZp6m+7pxdF3JwmNmM1lhfR2N94c6us3V0tBBCA0dGODh8PHiOh8sog8m6w8OjDI2MMjQyxpGjx6eDQyPsGRwurDs6dmx6ZGR00hsLKjnW4rBpbsjw3z9wJVctnX/Wx17MwWFmVqaGTB25OY3k5px9CJUyOhYMTwia4unQyChHjp44HQ+d4unQyBhDSZ22pqn/Ne/gMDOrEZk60dKYqflRl30Pm5mZVcTBYWZmFXFwmJlZRVINDknXSdooabOkO0usv1TSP0gakvTxovIuSU9IekHSc5J+v2jdpyRtk7Q2+XlvmsdgZmYnSu3iuKQMcA/wHqAXWC3p0Yh4vmizfcDvAe+fUH0E+FhEPCOpHXha0g+K6n4hIu5Oq+1mZja5NM84rgI2R8SWiBgGHgJWFm8QEbsjYjVwdEL5joh4JpkfAF4AOlNsq5mZlSnN4OgEthYt93IGv/wlLQHeBPysqPgOSeslPSBpXumaZmaWhjSDo9Sz7xU9FimpDfgm8NGI6E+KvwxcBCwHdgCfm6TubZLWSFrT19dXyceamdkppPkAYC/QVbScB7aXW1lSA4XQ+FpEPDxeHhG7irb5CvDtUvUj4j7gvmS7PkmvVtT64xYCe86w7kzk7+M4fxcn8vdxopnwfbyhVGGawbEauETSUmAbcAPwW+VUVGGkrq8CL0TE5yesWxwRO5LFDwAbTre/iChjgOdJ27ImIlacaf2Zxt/Hcf4uTuTv40Qz+ftILTgiYkTSHcBjQAZ4ICKek3R7sv5eSecDa4C5wJikjwKXA93ATcCzktYmu/yjiFgFfEbScgrdXq8AH0nrGMzM7GSpjlWV/KJfNaHs3qL5nRS6sCb6O0pfIyEibprKNpqZWWX85Pjp3VftBtQYfx/H+bs4kb+PE83Y70MRZzf+u5mZzS4+4zAzs4o4OMzMrCIOjlM43SCNs8WpBp2czSRlJP1cUslniWYTSTlJfy3pxeTfyVur3aZqkfQHyf8nGyT9laTmardpqjk4JlE0SOP1FG4RvlHS5dVtVdWMDzp5GfDLwL+fxd9Fsd+nMI6awReB70XEpUAPs/R7kdRJYeDWFRFxBYVHEW6obqumnoNjcqcdpHG28KCTJ5OUB/4lcH+121JtkuYCb6fw0C4RMRwR+6vaqOqqB1ok1QNzqGDEjOnCwTG5KRmkcaaZZNDJ2ehPgD8ExqrcjlqwDOgD/jzpurtfUmu1G1UNEbENuBt4jcJYegci4vvVbdXUc3BM7qwHaZxpJhl0ctaR9KvA7oh4utptqRH1wC8BX46INwEHgVl5TTAZrXslsBS4AGiV9G+q26qp5+CY3FkN0jjTTDbo5Cx1NfA+Sa9Q6ML855L+srpNqqpeoDcixs9C/5pCkMxG/wJ4OSL6IuIo8DDwtiq3aco5OCZ3bJBGSY0ULnA9WuU2VcWpBp2cjSLiP0ZEPiKWUPh38aOImHF/VZYrGTpoq6Q3JkXvBp4/RZWZ7DXglyXNSf6/eTcz8EaBVMeqms4mG6Sxys2qlquZfNBJM4DfBb6W/JG1BfjtKrenKiLiZ5L+GniGwt2IP2cGDj3iIUfMzKwi7qoyM7OKODjMzKwiDg4zM6uIg8PMzCri4DAzs4o4OMzKIGkwmS6R9FtTvO8/mrD81FTu32yqOTjMKrMEqCg4kpGWT+WE4IiIGfeksc0sDg6zytwFXCtpbfLehYykz0paLWm9pI8ASHpn8g6TrwPPJmX/T9LTybsabkvK7qIwkupaSV9LysbPbpTse4OkZyX9ZtG+f1z0/ouvJU8pI+kuSc8nbbn7nH87Niv4yXGzytwJfDwifhUgCYADEfFmSU3A30saHw31KuCKiHg5Wf6diNgnqQVYLembEXGnpDsiYnmJz/o1YDmF91ssTOo8max7E/CLFMZP+3vgaknPAx8ALo2IkJSb2kM3K/AZh9nZ+RXgQ8lQLD8DFgCXJOv+sSg0AH5P0jrgpxQG0LyEU7sG+KuIGI2IXcBPgDcX7bs3IsaAtRS60PqBI8D9kn4NOHSWx2ZWkoPD7OwI+N2IWJ78LC16/8LBYxtJ76QwcupbI6KHwhhGp3ulaKmh/ccNFc2PAvURMULhLOebwPuB71VwHGZlc3CYVWYAaC9afgz4d8mw80j6hUleYpQFXo+IQ5IupfAK3nFHx+tP8CTwm8l1lA4Kb9n7x8kalrwvJZsMPvlRCt1cZlPO1zjMKrMeGEm6nP6Cwru2lwDPJBeo+yj8tT/R94DbJa0HNlLorhp3H7Be0jMR8cGi8keAtwLrKLxE7A8jYmcSPKW0A9+S1EzhbOUPzugIzU7Do+OamVlF3FVlZmYVcXCYmVlFHBxmZlYRB4eZmVXEwWFmZhVxcJiZWUUcHGZmVpH/D8AmKCm8mmFhAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "%matplotlib inline\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "loss_per_epoch = [np.mean(loss_per_epoch) for loss_per_epoch in running_loss]\n",
    "display_loss_plot(loss_per_epoch)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEWCAYAAAB8LwAVAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8/fFQqAAAACXBIWXMAAAsTAAALEwEAmpwYAAAsHElEQVR4nO3deXzU1b3/8dcnCZCwhX0NEFRcEAQxooBWrbUFrxZrXXCt2lurrdq99ba/2+X2tret9VZb7eVaC1o3agXrhkCXWxVcStg3USQBQljCkhCWrPP5/TFfdAiTZAKZfJPM+/l4zCPzXc53PjMPmM+cc77nHHN3RERE6koLOwAREWmdlCBERCQuJQgREYlLCUJEROJSghARkbiUIEREJC4lCBERiUsJQto8M9sf84iY2aGY7RuO4Xr/MLN/TUasIm1JRtgBiBwvd+96+LmZFQL/6u5/DS+i5DKzDHevCTsOaf9Ug5B2y8zSzOxeM/vAzHab2bNm1is4lmlmTwb7S81ssZn1N7OfAOcDDwU1kIfqufafzGy7mZWZ2etmdnrMsSwzu9/MNgXHF5pZVnDsPDN7M3jNLWZ2S7D/iFqLmd1iZgtjtt3Mvmxm7wPvB/seDK6xz8yWmNn5Meenm9l3g/deHhwfYmYPm9n9dd7LS2b21eP+wKXdUYKQ9uwe4ArgAmAQsBd4ODj2OSAbGAL0Bu4ADrn794A3gLvcvau731XPtV8FRgD9gKXAUzHHfgmcBUwEegHfBiJmNjQo9xugLzAWWN6E93MFcA4wMtheHFyjF/A08CczywyOfR24DrgU6A7cBhwEHgeuM7M0ADPrA1wMPNOEOCRFqIlJ2rMvEv2iLwIwsx8Cm83sJqCaaGI4yd1XAkuacmF3n3H4eXDdvWaWDZQT/TI+1923Bqe8GZx3A/BXdz/8Zbw7eCTqv9x9T0wMT8Ycu9/M/h9wCrAC+Ffg2+6+Pji+4vBrmlkZ0aTwF2Aa8A9339GEOCRFqAYh7dkw4PmgOacUWAfUAv2BJ4D5wCwzKzazX5hZh0QuGjTf/CxovtkHFAaH+gSPTOCDOEWH1LM/UVvqxPENM1sXNGOVEq0R9UngtR4Hbgye30j0sxA5ihKEtGdbgCnu3iPmkenuW9292t1/5O4jiTYFXQbcHJRrbIrj64GpwCeIfinnBvsN2AVUACfWE0+8/QAHgM4x2wPinPNhXEF/w3eAa4Ce7t4DKAtiaOy1ngSmmtkY4DTgz/WcJylOCULas+nAT8xsGICZ9TWzqcHzi8xstJmlA/uINjnVBuV2ACc0cN1uQCXR5qHOwE8PH3D3CDAD+G8zGxTUNiaYWSei/RSfMLNrzCzDzHqb2dig6HLgSjPrbGYnAZ9v5L11A2qAEiDDzL5PtK/hsEeBH5vZCIs6w8x6BzEWEe2/eAKY7e6HGnktSVFKENKePQi8CCwws3LgbaKdvBD9hf4c0eSwDniN6C/rw+WuMrO9ZvbrONf9A7AJ2AqsDa4b65vAKqJfwnuAnwNp7r6ZaKfxN4L9y4ExQZlfAVVEk9PjHNnpHc98oh3e7wWxVHBkE9R/A88CC4L3+HsgK+b448Bo1LwkDTAtGCSSeszsY0QTYm5Q6xE5imoQIikm6Iz/CvCokoM0RAlCJIWY2WlAKTAQeCDUYKTVUxOTiIjEpRqEiIjE1a5GUvfp08dzc3PDDkNEpM1YsmTJLnfvG+9Yu0oQubm55Ofnhx2GiEibYWab6jumJiYREYlLCUJEROJSghARkbiUIEREJC4lCBERiUsJQkRE4lKCEBGRuNrVOAgRkZaye38lz+YXURv5aL7DujMXxZvI6OhzvMHjca9T56TOnTK444L61oc6dkoQIiLH4IG/vs8Tb9c7xizpzD563qdrJyUIEZHWoOxgNc8tKeKz43L42WdHH3Xc6m6bNXL86NeoWyYMSe2DMLPJZrbezDaY2b1xjmeb2UtmtsLM1pjZrYmWFREJyx/zN3OoupbPnzecDulpRz0y6jzS0+yIR1qdh9nRj9YgaQkiWOv3YWAKMBK4zsxG1jnty8Badx8DXAjcb2YdEywrItLiamojPP7mJs49oRcjB3VvvEAblswaxHhgg7tvdPcqYBYwtc45DnSzaLrsSnSd3poEy4qItLgFa3ewtfQQt00aHnYoSZfMBDGYIxdRLwr2xXoIOA0oJrrI+1eCJRATKSsi0uJmLCxgaK/OXHxa/7BDSbpkJoh4jWh179b6FLAcGASMBR4ys+4Jlo2+iNntZpZvZvklJSXHHq2ISCNWFpWSv2kvt0zMJT2tdfQTJFMyE0QRMCRmO4doTSHWrcAcj9oAFACnJlgWAHd/xN3z3D2vb9+4a16IiDSLmYsK6dopg6vzcsIOpUUkM0EsBkaY2XAz6whMA16sc85m4GIAM+sPnAJsTLCsiEiL2bGvgpdXFnN1Xg7dMjuEHU6LSNo4CHevMbO7gPlAOjDD3deY2R3B8enAj4HHzGwV0Wal77j7LoB4ZZMVq4hIY558exM1EeeWiblhh9JikjpQzt3nAnPr7Jse87wY+GSiZUVEwlBRXctT72zmE6f1Z1jvLmGH02I0WZ+ISCNeXF7MngNVKXFraywlCBGRBrg7MxYVcNrA7px7Qq+ww2lRShAiIg1464PdvLu9nFsn5baaKTBaihKEiEgDZiwqoHeXjnx6zKCwQ2lxShAiIvUo3HWAv727kxvOHUZmh/Sww2lxShAiIvV47M1CMtKMG88dGnYooVCCEBGJY19FNX/K38LlYwbRr1tm2OGEQglCRCSOZxdv4UBVbcrd2hpLCUJEpI7aiPPYm4WMz+3FqMHZYYcTGiUIEZE6/rJ2B0V7D3HbeblhhxIqJQgRkTpmLiogp2cWl4wcEHYooVKCEBGJsXprGe8U7EmZNR8aogQhIhJj5qJCOndM5+q8IY2f3M4pQYiIBHaWV/DSimKuPiuH7KzUWPOhIUoQIiKBp97eTFVthFtS+NbWWEoQIiJAZU0tT72ziYtP7cfwPqmz5kNDlCBERICXVmxj1/4qbjtPtYfDlCBEJOW5OzMWFnBK/25MPLF32OG0GkoQIpLy3inYw9pt+1JyzYeGKEGISMqbsbCAnp07cMWZg8MOpVVRghCRlLZ590H+sm4HN5yTmms+NCSpCcLMJpvZejPbYGb3xjn+LTNbHjxWm1mtmfUKjhWa2argWH4y4xSR1PX4W4Wkm3HThGFhh9LqZCTrwmaWDjwMXAIUAYvN7EV3X3v4HHe/D7gvOP9y4GvuvifmMhe5+65kxSgiqa28opo/Lt7CZWcMpH/31FzzoSHJrEGMBza4+0Z3rwJmAVMbOP864JkkxiMicoTnlhSxv7KGWzUwLq5kJojBwJaY7aJg31HMrDMwGZgds9uBBWa2xMxur+9FzOx2M8s3s/ySkpJmCFtEUsHhNR/OGtaTMUN6hB1Oq5TMBBHvXjGv59zLgUV1mpcmufs4YArwZTP7WLyC7v6Iu+e5e17fvn2PL2IRSRl/f3cnm3YfTOkV4xqTzARRBMR
Oh5gDFNdz7jTqNC+5e3HwdyfwPNEmKxGRZjFzUQGDsjP51On9ww6l1UpmglgMjDCz4WbWkWgSeLHuSWaWDVwAvBCzr4uZdTv8HPgksDqJsYpIClm3bR9vfrCbz03MJSNdd/vXJ2l3Mbl7jZndBcwH0oEZ7r7GzO4Ijk8PTv0MsMDdD8QU7w88H4xozACedvd5yYpVRFLLzEUFZHVIZ9rZQ8MOpVVLWoIAcPe5wNw6+6bX2X4MeKzOvo3AmGTGJiKpadf+Sv68vJhr8nLI7qw1HxqiupWIpJSn39lMVU2EWyaqc7oxShAikjKqaiI88fYmLjylLyf16xp2OK2eEoSIpIxXVhVTUl6pW1sTpAQhIinB3fn9wgJO6teV80f0CTucNkEJQkRSQv6mvazeqjUfmkIJQkRSwoyFBWRndeDKM3PCDqXNUIIQkXavaO9B5q/ZzvXnDCWro9Z8SJQShIi0e394axNmxs1a86FJlCBEpF07UFnDM//czJRRAxiYnRV2OG2KEoSItGuzlxZRXlHDbefp1tamUoIQkXYrEnFmLipk7JAejBvaM+xw2hwlCBFpt157r4SCXQdUezhGShAi0m7NWFTAgO6ZTBk1IOxQ2iQlCBFpl97bUc4b7+/ipgnD6KA1H46JPjURaZdmLiqgU0Ya14/Xmg/HSglCRNqdPQeqmLN0K1eOy6Fnl45hh9NmKUGISLvzzD83U1kT4bZJuWGH0qYpQYhIu1JdG+EPbxVy/og+jOjfLexw2jQlCBFpV+au2saOfZW6tbUZKEGISLvh7sxYWMAJfbpwwYi+YYfT5ilBiEi7sXRzKSuKyrh1Ui5paVrz4XglNUGY2WQzW29mG8zs3jjHv2Vmy4PHajOrNbNeiZQVEalrxqICumdmcOU4rfnQHJKWIMwsHXgYmAKMBK4zs5Gx57j7fe4+1t3HAv8GvObuexIpKyISq7j0EPNWb+e68UPp0ikj7HDahWTWIMYDG9x9o7tXAbOAqQ2cfx3wzDGWFZEU94e3NgFw88TccANpR5KZIAYDW2K2i4J9RzGzzsBkYPYxlL3dzPLNLL+kpOS4gxaRtudgVXTNh0+d3p/BPbTmQ3NJZoKI10Pk9Zx7ObDI3fc0tay7P+Luee6e17ev7loQSUVzlm6l7FA1t03Sra3NKZkJoggYErOdAxTXc+40PmpeampZEUlh0TUfCjgjJ5uzhmnNh+ZUb0+OmV2ZQPkKd59bz7HFwAgzGw5sJZoEro/zOtnABcCNTS0rIvLGhl18UHKAB64di5lubW1ODXX1/w54gfjNPYd9DIibINy9xszuAuYD6cAMd19jZncEx6cHp34GWODuBxorm+B7EpEUMmNhAf26deLS0QPDDqXdaShBvOrutzVU2MyebOh4ULuYW2ff9DrbjwGPJVJWRCTWhp3lvPZeCd+45GQ6Zmjcb3Or9xN19xvrO9aUc0REkmXmokI6ZqRx/Tla8yEZEk65ZnaSmT1pZrPNbEIygxIRaUzpwSpmLy3iM2MH07trp7DDaZca6qTOdPeKmF0/Bn5A9HbTPwFjkxuaiEj9Zi3eQkV1hFvPyw07lHaroRrES2Z2U8x2NZAbPGqTGJOISIOqayM8/mYhk07qzakDuocdTrvVUIKYDGSb2TwzOx/4JtG7lqYAN7REcCIi8cxfs51tZRXcOlED45Kp3iYmd68FHjKzJ4DvAwOBf3f3D1oqOBGReGYsLGBY7858/NR+YYfSrjXUB3EO8C2gCvgpcAj4iZkVAT9297KWCVFEUklVTYTtZRUUlx1iW9khiksrKC49RHHpIbaVVbC19BDlFTX88PKRWvMhyRoaBzEduAroCvyvu08CppnZBcCzwKdaID4RaUciEWfXgUqKSyvYVnqIrcGXfnHpIYqDv7v2V+J1Zl7r2bkDg3pkkdOzM+cM78VJ/boybbxubU22hhJELdEO6c5EaxEAuPtrwGvJDUtE2qJ9FdXRX/qlFcGXf+zzCraVHaK69shv/6wO6QzskcngHlmcckpfBvXIYlB2FoN6ZDGwRyaDsrPI6pge0jtKbQ0liOuBLxJNDje3TDgi0prt2l/JezvK2VZ65K/+w01B+ytrjjg/Pc0Y0D2TgdmZjB3SgymjBzC4RxYDs7MYFHz59+jcQXMotVINdVK/B3yjBWMRkVZsxZZSrvnft6isiXy4r3eXjgzskUlu7y5MPLEPA7MzozWAHtG//bplkq5+gjaroU7ql939soYKJ3KOiLR9ew5UceeTS+jTtRM/++xocnp2ZmB2Jpkd1PTTnjXUxHSemb3YwHEjul60iLRjtRHnK7OWsWt/Fc/dOYEzcnqEHZK0kIYSRCJrQFc1foqItGUP/PU93nh/F/915WglhxTTUB+E7lQSSXF/XbuD3/x9A9fk5TDt7CGNF5B2RROoi0hchbsO8LVnl3P6oO78x9RRutMoBSlBiMhRDlXVcseTS0gzY/qNZ6kzOkU1miDM7DIzUyIRSRHuzvf+vIr1O8p5YNpYhvTqHHZIEpJEvvinAe+b2S/M7LRkByQi4Xrqnc3MWbqVez4+gotO0WR4qazRBBEsK3om8AEw08zeMrPbzaxb0qMTkRa1bPNefvTSGi44uS9fuXhE2OFIyBJqOnL3fcBsYBbRab8/Ayw1s7sbKmdmk81svZltMLN76znnQjNbbmZrzOy1mP2FZrYqOJaf8DsSkWOye38lX3pqKf27Z/LgtLGaKVUaHAcBgJldDtwGnAg8AYx3951m1hlYB/ymnnLpwMPAJUARsNjMXnT3tTHn9AB+C0x2981mVrc+e5G772r62xKRpqiNOPfMWsbuA1XMuXMiPTp3DDskaQUaTRDA1cCv3P312J3uftDMbmug3Hhgg7tvBDCzWUQH362NOed6YI67bw6uubMpwYtI87h/wXoWbdjNLz57BqMGZ4cdjrQSiTQx/QD45+ENM8sys1wAd/9bA+UGA1titouCfbFOBnqa2T/MbImZxc4a68CCYP/tCcQpIsdgwZrt/PYfHzDt7CFco8FwEiORGsSfgIkx27XBvrMbKRevAbPOMiBkAGcBFwNZwFtm9nYwk+wkdy8Omp3+Ymbv1q3FAATJ43aAoUO1gIhIUxTsOsA3nl3B6MHZ/PDTp4cdjrQyidQgMtw9dsGgKiCRBsoiIPbnSA5QHOecee5+IOhreB0YE7xOcfB3J/A80Saro7j7I+6e5+55ffv2TSAsEQE4WFXDnU8uIT3d+O0N4zQYTo6SSIIoMbNPH94ws6lAIh3Hi4ERZjbczDoSHU9Rd3bYF4DzzSwj6PQ+B1hnZl0O30ZrZl2ATwKrE3hNEUmAu/O951ezfkc5D047U4PhJK5EmpjuAJ4ys4eINhttIYEV5ty9xszuAuYD6cAMd19jZncEx6e7+zozmwesBCLAo+6+2sxOAJ4P5n7JAJ5293nH8P5EJI4n3t7E88u28vVLTuaCk1XzlvjM664OXt+JZl2D88uTG9Kxy8vL8/x8DZkQacjSzXu59n/f4vwRfXn05jyNd0hxZrbE3fPiHUukBoGZ/QtwOp
B5eEZHd/+PZotQRFrErv2VfOnJpQzIzuRX12gwnDQskYFy04HOwEXAo8BVxNz2KiJtQ01thLufXsbeg1XMvnMi2Z07hB2StHKJdFJPdPebgb3u/iNgAkfenSQibcAvF7zHWxt3859XjNJgOElIIgmiIvh70MwGAdXA8OSFJCLNbd7q7Ux/7QOuP2coV+fp950kJpE+iJeCOZPuA5YSHez2u2QGJSLNZ2PJfr75pxWMycnmB5ePDDscaUMaTBDBQkF/c/dSYLaZvQxkuntZSwQnIsfnYFUNdzy5hA7pxm9vPItOGRoMJ4lrsInJ3SPA/THblUoOIm2Du3Pv7FW8v3M/v77uTAb3yAo7JGljEumDWGBmnzWtWC7Spjz+ZiEvrijmG5eczPkjNBhOmi6RPoivA12AGjOrIDqa2t29e1IjE5FjtmTTHv7zlXV84rR+fOnCk8IOR9qoRhOEu2tpUZE2pKQ8ujLc4J5Z3K/BcHIcEhko97F4++NNvS0i4aqpjXDX00spO1TNzFvGk52lwXBy7BJpYvpWzPNMotNuLwE+npSIROSY/WL+et4p2MP9V49h5CC1AsvxSaSJ6fLYbTMbAvwiaRGJyDF5ddU2Hnl9IzeeO5TPnpUTdjjSDiRyF1NdRcCo5g5ERI7dhp37+dZzKxkzpAf/fpkGw0nzSKQP4jd8tFRoGjAWWJHEmESkCQ5URgfDdcxI439uGKfBcNJsEumDiF1goQZ4xt0XJSkeEWkCd+c7s1eysWQ/T3z+HAZpMJw0o0QSxHNAhbvXAphZupl1dveDyQ1NRBozc1EhL6/cxrcnn8Kkk/qEHY60M4n0QfwNiP1ZkgX8NTnhiEiiFhfu4adz13HJyP7cecGJYYcj7VAiCSLT3fcf3giea4VzkRDt3FfBl55aSk7PLO6/ZgyaCUeSIZEEccDMxh3eMLOzgEPJC0lEGlJdG+Gup5dRXlHN9JvOonumBsNJciTSB/FV4E9mVhxsDwSuTVpEItKgn7/6Lv8s3MMD147l1AEaDCfJ02gNwt0XA6cCdwJfAk5z9yWJXNzMJpvZejPbYGb31nPOhWa23MzWmNlrTSkrkmpeWbmNRxcW8LkJw7jizMFhhyPtXKMJwsy+DHRx99XuvgroamZfSqBcOvAwMAUYCVxnZiPrnNMD+C3waXc/Hbg60bIiqWbDznK+9dwKxg3twff+Rf8dJPkS6YP4QrCiHADuvhf4QgLlxgMb3H2ju1cBs4Cpdc65Hpjj7puDa+9sQlmRlLG/soYvPrGErA7pPHzDODpmHMskCCJNk8i/srTYxYKCX/cdEyg3GNgSs10U7It1MtDTzP5hZkvM7OYmlD0cz+1mlm9m+SUlJQmEJdK2rCwq5YtP5FOw6wC/uf5MBmZrMJy0jEQ6qecDz5rZdKJTbtwBzEugXLz77rzOdgZwFnAx0fEVb5nZ2wmWje50fwR4BCAvLy/uOSJtTU1thPlrdjBzUQH5m/bSpWM6P75iFBNP1GA4aTmJJIjvALcT7aQ2YAHwuwTKFQFDYrZzgOI45+xy9wNEb6d9HRiTYFmRdqf0YBWzFm/hD28WUlxWwZBeWfz7ZSO5Oi9Ht7NKi0tkuu8IMD14YGbnAb8BvtxI0cXACDMbDmwFphHtc4j1AvCQmWUQbbY6B/gV8G4CZaUdc3cOVtXSpVMiv2Havvd3lDPzzULmLC2iojrChBN688NPn87Fp/UnXSvCSUgS+t9nZmOB64iOfygA5jRWxt1rzOwuok1U6cAMd19jZncEx6e7+zozmwesBCLAo+6+OnjNo8o29c1J2/Wjl9byh7cKOWd4by4dPYBPnT6Aft0zww6rWUUizmvvlzBjYQFvvL+LjhlpXDF2ELdOGs5pAzW+QcJn7vGb7c3sZKK/3K8DdgN/BL7p7sNaLrymycvL8/z8/MZPlFZt3upt3PHkUs4f0Yfi0kN8UHIAMzh7WC+mjB7A5FED2nRH7YHKGmYvLeKxRYVs3HWAft06cfOEYVw3fii9u3YKOzxJMWa2xN3z4h5rIEFEgDeAz7v7hmDfRnc/IWmRHicliLavaO9BLn3wDXL7dOG5OybSMSON93eUM3fVdl5dvY13t5cDMG5oDy4dPZDJowaQ07NtTA22Zc9B/vBWIbMWb6G8ooYxOdncdt5wpowaqNtWJTTHmiA+Q7QGMZHoXUuziDYBDU9WoMdLCaJtq6mNMO2Rt3l3ezmv3HMew3p3OeqcD0r2M2/1duau2saa4n0AjMnJZsrogUwZNSBumTC5O/8s2MPMRYUsWLsdM2PKqAHcOmk444b20CR7ErpjShAxhbsAVxBtavo48DjwvLsvaOY4j5sSRNt2/4L1/ObvG3hw2limjm18GolNuw/w6urtvLpqGyuKygA4fVD3D2sWJ/btmuyQ61VRXctLK4qZuaiQtdv20aNzB64bP5Sbzh2mRX2kVTmuBFHnQr2ITodxrbt/vJniazZKEG3Xmx/s4oZH3+GqcTncd/WYJpcv2nvww5rF0s2lAJzSvxtTRg/g0tEDGdGva4v8Wt9ZXsGTb2/m6Xc2sWt/FSP6deXWScP5zJmDyeqopUCl9Wm2BNHaKUG0Tbv3VzLlwTfompnBy3efR+eOx3dr67ayQ8xfvZ25q7ezuHAP7nBi3y5cOnogU0YN5LSB3Zo9WawqKmPmogJeWllMda3z8VP7ceukXM47qY+akaRVU4KQVsvd+fzj+Sx8fxfPf3kipw/Kbtbr7yyvYP6aHby6ahtvb9xNxCG3d2emjB7IpaMGMmpw92P+Aq+pjbBgbXS08+LCvXTumM41eUP43MRchvdpXX0hIvVRgpBW6/cLC/jxy2v50adP53MTc5P6Wrv3V7Jg7Q7mrtrGmx/spjbi5PTMCmoWAxg7JLFO47KD1cxavJk/vLWJraWHyOmZxS0Tc7nm7CEa7SxtjhKEtEqrisq48n8WceEp/XjkprNatClm74Eq/rIuWrNYuGEX1bXOoOxMJo8ayJTRAzhraE/S6oxg3rCznJmLCpmzdCuHqms594Re3DppOJ/QaGdpw5QgpNXZX1nDZb9+g8qaCHPvOZ+eXRKZIDg5yg5V87d1O5i7ajuvv19CVU2Eft06MXnUAKaMGkhFTS0zFxXy+nsldMxIY+qYQdwyKbfZm8NEwqAEIa3O1/+4nD8v38ozXziXc07oHXY4HyqvqObv7+7k1VXb+b/1O6msiQDQt1snbjp3GNefM5Q+Gu0s7UhDCSI1ZkKTVmX2kiLmLNvKVz8xolUlB4BumR2YOnYwU8cO5kBlDa+9V4I7XDKyv0Y7S8pRgpAWtbFkP//+wmrGD+/F3R8fEXY4DerSKYNLRw8MOwyR0OgnkbSYyppa7n5mGR0z0nhw2lh17Iq0cqpBSIv5+avrWVO8j9/dnNemZ2MVSRWqQUiL+Nu6HcxYVMAtE3O5ZGT/sMMRkQQoQUjSbS+r4Jt/WsFpA7tz75RTww5HRBKkBCFJVRtxvvrHZVTWRHjo+jPJ7KAJ60TaCvVBSFI9/H8beHvjHu676oxQp98WkaZTDUKSZnHhH
h7463tMHTuIq87KCTscEWkiJQhJitKDVXzlmWUM6dWZ/7xilKa8FmmD1MQkzc7d+c7slZTsr2T2nRPpphlORdqkpNYgzGyyma03sw1mdm+c4xeaWZmZLQ8e3485Vmhmq4L9mmCpDXny7U3MX7ODb3/qVM7I6RF2OCJyjJJWgzCzdOBh4BKgCFhsZi+6+9o6p77h7pfVc5mL3H1XsmKU5rdu2z5+/Mo6Lji5L58/b3jY4YjIcUhmDWI8sMHdN7p7FTALmJrE15OQHayq4e5nlpGd1YH7rxlz1HoKItK2JDNBDAa2xGwXBfvqmmBmK8zsVTM7PWa/AwvMbImZ3V7fi5jZ7WaWb2b5JSUlzRO5HJMfvbiWD0r288C1YzUltkg7kMxO6ng/H+suPrEUGObu+83sUuDPwOEpPie5e7GZ9QP+YmbvuvvrR13Q/RHgEYiuB9Fs0UuTvLSimD/mb+FLF57IpJP6hB2OiDSDZNYgioAhMds5QHHsCe6+z933B8/nAh3MrE+wXRz83Qk8T7TJSlqhzbsP8t05qxg3tAdfu+TksMMRkWaSzASxGBhhZsPNrCMwDXgx9gQzG2DBDfJmNj6IZ7eZdTGzbsH+LsAngdVJjFWOUXVthLtnLQODB6edSYd0Da0RaS+S1sTk7jVmdhcwH0gHZrj7GjO7Izg+HbgKuNPMaoBDwDR3dzPrDzwf5I4M4Gl3n5esWOXY/XLBelZsKeW3N4xjSK/OYYcjIs0oqQPlgmajuXX2TY95/hDwUJxyG4ExyYxNjt/r75Xwv69t5PpzhmrlNZF2SO0Bckx2llfw9WeXc3L/rnz/spFhhyMiSaCpNqTJIhHnG8+uoLyihqe/cK6m8BZpp1SDkCZ75I2NvPH+Ln5w+emc3L9b2OGISJIoQUiTLNu8l1/OX8+/jB7IdeOHNF5ARNosJQhJ2L6Kau5+Zhn9u2fy0ytHawpvkXZOfRCSEHfnu3NWsa2sgme/OIHsLE3hLdLeqQYhCXk2fwsvr9zG1y85mbOG9Qw7HBFpAUoQ0qgNO8v5wYtrmHRSb+684MSwwxGRFqIEIQ2qqK7lrqeX0aVjBr+6Zqym8BZJIeqDkAb95JV1vLu9nJm3nk2/7plhhyMiLUg1CKnXvNXbeeLtTXzh/OFcdEq/sMMRkRamBCFxbS09xLefW8EZOdl861Onhh2OiIRACUKOUlMb4SvPLCPi8OtpZ9IxQ/9MRFKR+iDkKL/+2/vkb9rLg9PGktunS9jhiEhIlCBake1lFew9WEVNrVNVG6GmNkJ1rVNdGwkeTk0kQlVNhJqIf7ivujZCdU2E6mBfvHKHn3907WBfxKNla6PXrKqJUFx2iKvOymHq2HhLiItIqlCCaAUOVNbwi3nv8vhbm477Wh3T08hINzqkp9Eh+Ht4+4hjaWl06pBG1/Q0MtLS6JhhZKSl0SE9jX7dO3HXRSc1wzsTkbZMCSJkb27YxXfmrKRo7yE+N2EY557Q+8Mv9egX+kdf9B3iPM+I+eLPSDPNjyQizUYJIiTlFdX816vv8vQ7mxnepwvPfnECZ+f2CjssEZEPKUGE4LX3Svi32SvZvq+CL5w/nK9fcgpZHbXojoi0LkoQLajsUDU/eWUtz+YXcWLfLjx350TGDdXEdyLSOiX1Bnczm2xm681sg5ndG+f4hWZWZmbLg8f3Ey3b1vxt3Q4++avXmL10K3deeCKv3HO+koOItGpJq0GYWTrwMHAJUAQsNrMX3X1tnVPfcPfLjrFsq1d6sIofvbSW55dt5ZT+3fjdzXmckdMj7LBERBqVzCam8cAGd98IYGazgKlAIl/yx1O21Zi3ejv/78+rKT1YxT0Xj+Cui07SqGQRaTOSmSAGA1titouAc+KcN8HMVgDFwDfdfU0TyrZKu/dX8oMX1/Dyym2MHNidx287m9MHZYcdlohIkyQzQcS7Id/rbC8Fhrn7fjO7FPgzMCLBstEXMbsduB1g6NChxxxsc3ll5Ta+/8Jq9lVU8/VLTubOC0+kQ7pqDSLS9iQzQRQBQ2K2c4jWEj7k7vtins81s9+aWZ9EysaUewR4BCAvLy9uEmkJJeWVfP+F1by6ejtn5GTz1FXncOqA7mGFIyJy3JKZIBYDI8xsOLAVmAZcH3uCmQ0Adri7m9l4ondV7QZKGyvbWrg7Lywv5ocvreFgVS3fmXwqXzh/OBmqNYhIG5e0BOHuNWZ2FzAfSAdmuPsaM7sjOD4duAq408xqgEPANHd3IG7ZZMV6rHbsq+B7z6/ir+t2cubQHtx31Rmc1K9b2GGJiDQLi34ftw95eXmen5+f9Ndxd55bUsSPX15LZU2Eb37yFG47bzjpWq9ZRNoYM1vi7nnxjmkkdRMVlx7iu8+v4h/rSzg7tyc//+wZnNC3a9hhiYg0OyWIBLk7sxZv4SevrKM24vzw8pHcPCGXNNUaRKSdUoJIwJY9B/m3OatYuGEXE07ozc8/ewZDe3cOOywRkaRSgmhAJOI89c4mfvbquwD85xWjuH78UNUaRCQlKEHUY9PuA3xn9kre3riH80f04b+uHE1OT9UaRCR1KEHUEYk4j71ZyH3z15ORZvzsytFce/YQrdQmIilHCSLGxpL9fPu5leRv2stFp/Tlp1eOZmB2VthhiYiEQgkCqI04v1+4kfsXvEenjDTuv3oMV44brFqDiKS0lE8QZQer+dzMf7J8SymfOK0/P/3MKPp1zww7LBGR0KV8guielcGw3p25dVIunx4zSLUGEZFAyicIM+PBaWeGHYaISKujKUdFRCQuJQgREYlLCUJEROJSghARkbiUIEREJC4lCBERiUsJQkRE4lKCEBGRuNrVmtRmVgJsOsbifYBdzRhOW6bP4kj6PI6kz+Mj7eGzGObufeMdaFcJ4niYWX59C3enGn0WR9LncSR9Hh9p75+FmphERCQuJQgREYlLCeIjj4QdQCuiz+JI+jyOpM/jI+36s1AfhIiIxKUahIiIxKUEISIicaV8gjCzyWa23sw2mNm9YccTJjMbYmb/Z2brzGyNmX0l7JjCZmbpZrbMzF4OO5awmVkPM3vOzN4N/o1MCDumMJnZ14L/J6vN7Bkza3drFad0gjCzdOBhYAowErjOzEaGG1WoaoBvuPtpwLnAl1P88wD4CrAu7CBaiQeBee5+KjCGFP5czGwwcA+Q5+6jgHRgWrhRNb+UThDAeGCDu2909ypgFjA15JhC4+7b3H1p8Lyc6BfA4HCjCo+Z5QD/AjwadixhM7PuwMeA3wO4e5W7l4YaVPgygCwzywA6A8Uhx9PsUj1BDAa2xGwXkcJfiLHMLBc4E3gn5FDC9ADwbSASchytwQlACTAzaHJ71My6hB1UWNx9K/BLYDOwDShz9wXhRtX8Uj1BWJx9KX/fr5l1BWYDX3X3fWHHEwYzuwzY6e5Lwo6llcgAxgH/4+5nAgeAlO2zM7OeRFsbhgODgC5mdmO4UTW/VE8QRcCQmO0c2mE1sSnMrAPR5PCUu88JO54QTQI+bWaFRJseP25mT4YbUqiKgCJ3P1yjfI5owkhVnwAK3L3E3auB
OcDEkGNqdqmeIBYDI8xsuJl1JNrJ9GLIMYXGzIxoG/M6d//vsOMJk7v/m7vnuHsu0X8Xf3f3dvcLMVHuvh3YYmanBLsuBtaGGFLYNgPnmlnn4P/NxbTDTvuMsAMIk7vXmNldwHyidyHMcPc1IYcVpknATcAqM1se7Puuu88NLyRpRe4Gngp+TG0Ebg05ntC4+ztm9hywlOjdf8toh9NuaKoNERGJK9WbmEREpB5KECIiEpcShIiIxKUEISIicSlBiIhIXEoQIgEz2x/8zTWz65v52t+ts/1mc15fJBmUIESOlgs0KUEEMwM35IgE4e7tbtSttD9KECJH+xlwvpktD+b8Tzez+8xssZmtNLMvApjZhcH6GU8Dq4J9fzazJcE6AbcH+35GdNbP5Wb2VLDvcG3FgmuvNrNVZnZtzLX/EbP+wlPBiF3M7GdmtjaI5Zct/ulIykjpkdQi9bgX+Ka7XwYQfNGXufvZZtYJWGRmh2fuHA+McveCYPs2d99jZlnAYjOb7e73mtld7j42zmtdCYwlur5Cn6DM68GxM4HTic4PtgiYZGZrgc8Ap7q7m1mP5n3rIh9RDUKkcZ8Ebg6mH3kH6A2MCI79MyY5ANxjZiuAt4lOBDmChp0HPOPute6+A3gNODvm2kXuHgGWE2362gdUAI+a2ZXAweN8byL1UoIQaZwBd7v72OAxPGbu/wMfnmR2IdFZPie4+xii8/M0tgxlvCnnD6uMeV4LZLh7DdFay2zgCmBeE96HSJMoQYgcrRzoFrM9H7gzmAodMzu5nsVysoG97n7QzE4lumzrYdWHy9fxOnBt0M/Rl+iqbf+sL7BgrY7sYALFrxJtnhJJCvVBiBxtJVATNBU9RnQt5lxgadBRXEL013td84A7zGwlsJ5oM9NhjwArzWypu98Qs/95YAKwguhiVd929+1BgomnG/CCmWUSrX187ZjeoUgCNJuriIjEpSYmERGJSwlCRETiUoIQEZG4lCBERCQuJQgREYlLCUJEROJSghARkbj+P2aNBHvv68ZiAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "acc_per_epoch = [np.mean(acc_per_epoch) for acc_per_epoch in running_test_acc]\n",
    "display_loss_plot(acc_per_epoch, title=\"Test accuracy\", ylabel=\"Accuracy [%]\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8088835446727882"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test(model, test_quantized_loader)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save the Brevitas model to disk\n",
    "torch.save(model.state_dict(), \"state_dict_self-trained.pth\")"
   ]
  },
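  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you come back to this notebook later, the parameters saved above can be restored into the same model definition with `load_state_dict`. Below is a minimal sketch of how to do that; loading onto CPU keeps it device-agnostic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: restore the self-trained parameters saved above into the model.\n",
    "# map_location=\"cpu\" keeps this working whether or not a GPU is available.\n",
    "self_trained_state_dict = torch.load(\"state_dict_self-trained.pth\", map_location=\"cpu\")\n",
    "model.load_state_dict(self_trained_state_dict)"
   ]
  },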
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## (Option 2, faster) Load Pre-Trained Parameters <a id=\"load_pretrained\"></a>\n",
    "\n",
    "Instead of training from scratch, you can also use pre-trained parameters we provide here. These parameters should achieve ~91.9% test accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<All keys matched successfully>"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "# Make sure the model is on CPU before loading a pretrained state_dict\n",
    "model = model.cpu()\n",
    "\n",
    "# Load pretrained weights\n",
    "trained_state_dict = torch.load(\"state_dict.pth\")[\"models_state_dict\"][0]\n",
    "\n",
    "model.load_state_dict(trained_state_dict, strict=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.9188772287810328"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Move the model back to it's target device\n",
    "model.to(device)\n",
    "\n",
    "# Test for accuracy\n",
    "test(model, test_quantized_loader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Why do these parameters give better accuracy vs training from scratch?** Even with the topology and quantization fixed, achieving good accuracy on a given dataset requires [*hyperparameter tuning*](https://towardsdatascience.com/hyperparameters-optimization-526348bb8e2d) and potentially running training for a long time. The \"training from scratch\" example above is only intended as a quick example, whereas the pretrained parameters are obtained from a longer training run using the [determined.ai](https://determined.ai/) platform for hyperparameter tuning."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Network Surgery Before Export <a id=\"network_surgery\"></a>\n",
    "\n",
    "Sometimes, it's desirable to make some changes to our trained network prior to export (this is known in general as \"network surgery\"). This depends on the model and is not generally necessary, but in this case we want to make a couple of changes to get better results with FINN."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Move the model to CPU before surgery\n",
    "model = model.cpu()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by padding the input. Our input vectors are 593-bit, which will make folding (parallelization) for the first layer a bit tricky since 593 is a prime number. So we'll pad the weight matrix of the first layer with seven 0-valued columns to work with an input size of 600 instead. When using the modified network we'll similarly provide inputs padded to 600 bits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(64, 593)"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from copy import deepcopy\n",
    "\n",
    "modified_model = deepcopy(model)\n",
    "\n",
    "W_orig = modified_model[0].weight.data.detach().numpy()\n",
    "W_orig.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(64, 600)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# pad the second (593-sized) dimensions with 7 zeroes at the end\n",
    "W_new = np.pad(W_orig, [(0,0), (0,7)])\n",
    "W_new.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([64, 600])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "modified_model[0].weight.data = torch.from_numpy(W_new)\n",
    "modified_model[0].weight.shape"
   ]
  },
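  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check (a minimal sketch, not part of the original flow): since the seven extra weight columns are zero and the corresponding extra input elements will also be zero, the padded first layer should produce exactly the same output as the original one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check (sketch): zero-valued weight columns combined with\n",
    "# zero-padded inputs should leave the first layer's output unchanged.\n",
    "x_orig = torch.randn(1, 593)\n",
    "x_padded = torch.nn.functional.pad(x_orig, (0, 7))  # append 7 zero elements\n",
    "with torch.no_grad():\n",
    "    out_orig = model[0](x_orig)\n",
    "    out_padded = modified_model[0](x_padded)\n",
    "print(torch.allclose(out_orig, out_padded))  # expected: True"
   ]
  },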
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we'll modify the expected input/output ranges. In FINN, we prefer to work with bipolar {-1, +1} instead of binary {0, 1} values. To achieve this, we'll create a \"wrapper\" model that handles the pre/postprocessing as follows:\n",
    "\n",
    "* on the input side, we'll pre-process by (x + 1) / 2 in order to map incoming {-1, +1} inputs to {0, 1} ones which the trained network is used to. Since we're just multiplying/adding a scalar, these operations can be [*streamlined*](https://finn.readthedocs.io/en/latest/nw_prep.html#streamlining-transformations) by FINN and implemented with no extra cost.\n",
    "\n",
    "* on the output side, we'll add a binary quantizer which maps everthing below 0 to -1 and everything above 0 to +1. This is essentially the same behavior as the sigmoid we used earlier, except the outputs are bipolar instead of binary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "CybSecMLPForExport(\n",
       "  (pretrained): Sequential(\n",
       "    (0): QuantLinear(\n",
       "      in_features=593, out_features=64, bias=True\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (output_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (weight_quant): WeightQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClampSte()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): StatsFromParameterScaling(\n",
       "            (parameter_list_stats): _ParameterListStats(\n",
       "              (first_tracked_param): _ViewParameterWrapper(\n",
       "                (view_shape_impl): OverTensorView()\n",
       "              )\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsMax()\n",
       "              )\n",
       "            )\n",
       "            (stats_scaling_impl): _StatsScaling(\n",
       "              (affine_rescaling): Identity()\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_scaling_pre): Identity()\n",
       "            )\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "      (bias_quant): BiasQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "    )\n",
       "    (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    (2): Dropout(p=0.5, inplace=False)\n",
       "    (3): QuantReLU(\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (act_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "          (activation_impl): ReLU()\n",
       "          (tensor_quant): RescalingIntQuant(\n",
       "            (int_quant): IntQuant(\n",
       "              (float_to_int_impl): RoundSte()\n",
       "              (tensor_clamp_impl): TensorClamp()\n",
       "              (delay_wrapper): DelayWrapper(\n",
       "                (delay_impl): _NoDelay()\n",
       "              )\n",
       "            )\n",
       "            (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "              (stats_input_view_shape_impl): OverTensorView()\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsPercentile()\n",
       "              )\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_inplace_preprocess): Identity()\n",
       "              (restrict_preprocess): Identity()\n",
       "            )\n",
       "            (int_scaling_impl): IntScaling()\n",
       "            (zero_point_impl): ZeroZeroPoint(\n",
       "              (zero_point): StatelessBuffer()\n",
       "            )\n",
       "            (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "              (bit_width): StatelessBuffer()\n",
       "            )\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (4): QuantLinear(\n",
       "      in_features=64, out_features=64, bias=True\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (output_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (weight_quant): WeightQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClampSte()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): StatsFromParameterScaling(\n",
       "            (parameter_list_stats): _ParameterListStats(\n",
       "              (first_tracked_param): _ViewParameterWrapper(\n",
       "                (view_shape_impl): OverTensorView()\n",
       "              )\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsMax()\n",
       "              )\n",
       "            )\n",
       "            (stats_scaling_impl): _StatsScaling(\n",
       "              (affine_rescaling): Identity()\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_scaling_pre): Identity()\n",
       "            )\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "      (bias_quant): BiasQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "    )\n",
       "    (5): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    (6): Dropout(p=0.5, inplace=False)\n",
       "    (7): QuantReLU(\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (act_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "          (activation_impl): ReLU()\n",
       "          (tensor_quant): RescalingIntQuant(\n",
       "            (int_quant): IntQuant(\n",
       "              (float_to_int_impl): RoundSte()\n",
       "              (tensor_clamp_impl): TensorClamp()\n",
       "              (delay_wrapper): DelayWrapper(\n",
       "                (delay_impl): _NoDelay()\n",
       "              )\n",
       "            )\n",
       "            (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "              (stats_input_view_shape_impl): OverTensorView()\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsPercentile()\n",
       "              )\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_inplace_preprocess): Identity()\n",
       "              (restrict_preprocess): Identity()\n",
       "            )\n",
       "            (int_scaling_impl): IntScaling()\n",
       "            (zero_point_impl): ZeroZeroPoint(\n",
       "              (zero_point): StatelessBuffer()\n",
       "            )\n",
       "            (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "              (bit_width): StatelessBuffer()\n",
       "            )\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (8): QuantLinear(\n",
       "      in_features=64, out_features=64, bias=True\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (output_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (weight_quant): WeightQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClampSte()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): StatsFromParameterScaling(\n",
       "            (parameter_list_stats): _ParameterListStats(\n",
       "              (first_tracked_param): _ViewParameterWrapper(\n",
       "                (view_shape_impl): OverTensorView()\n",
       "              )\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsMax()\n",
       "              )\n",
       "            )\n",
       "            (stats_scaling_impl): _StatsScaling(\n",
       "              (affine_rescaling): Identity()\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_scaling_pre): Identity()\n",
       "            )\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "      (bias_quant): BiasQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "    )\n",
       "    (9): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    (10): Dropout(p=0.5, inplace=False)\n",
       "    (11): QuantReLU(\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (act_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "          (activation_impl): ReLU()\n",
       "          (tensor_quant): RescalingIntQuant(\n",
       "            (int_quant): IntQuant(\n",
       "              (float_to_int_impl): RoundSte()\n",
       "              (tensor_clamp_impl): TensorClamp()\n",
       "              (delay_wrapper): DelayWrapper(\n",
       "                (delay_impl): _NoDelay()\n",
       "              )\n",
       "            )\n",
       "            (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "              (stats_input_view_shape_impl): OverTensorView()\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsPercentile()\n",
       "              )\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_inplace_preprocess): Identity()\n",
       "              (restrict_preprocess): Identity()\n",
       "            )\n",
       "            (int_scaling_impl): IntScaling()\n",
       "            (zero_point_impl): ZeroZeroPoint(\n",
       "              (zero_point): StatelessBuffer()\n",
       "            )\n",
       "            (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "              (bit_width): StatelessBuffer()\n",
       "            )\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (12): QuantLinear(\n",
       "      in_features=64, out_features=1, bias=True\n",
       "      (input_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (output_quant): ActQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "      (weight_quant): WeightQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "        (tensor_quant): RescalingIntQuant(\n",
       "          (int_quant): IntQuant(\n",
       "            (float_to_int_impl): RoundSte()\n",
       "            (tensor_clamp_impl): TensorClampSte()\n",
       "            (delay_wrapper): DelayWrapper(\n",
       "              (delay_impl): _NoDelay()\n",
       "            )\n",
       "          )\n",
       "          (scaling_impl): StatsFromParameterScaling(\n",
       "            (parameter_list_stats): _ParameterListStats(\n",
       "              (first_tracked_param): _ViewParameterWrapper(\n",
       "                (view_shape_impl): OverTensorView()\n",
       "              )\n",
       "              (stats): _Stats(\n",
       "                (stats_impl): AbsMax()\n",
       "              )\n",
       "            )\n",
       "            (stats_scaling_impl): _StatsScaling(\n",
       "              (affine_rescaling): Identity()\n",
       "              (restrict_clamp_scaling): _RestrictClampValue(\n",
       "                (clamp_min_ste): ScalarClampMinSte()\n",
       "                (restrict_value_impl): FloatRestrictValue()\n",
       "              )\n",
       "              (restrict_scaling_pre): Identity()\n",
       "            )\n",
       "          )\n",
       "          (int_scaling_impl): IntScaling()\n",
       "          (zero_point_impl): ZeroZeroPoint(\n",
       "            (zero_point): StatelessBuffer()\n",
       "          )\n",
       "          (msb_clamp_bit_width_impl): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "        )\n",
       "      )\n",
       "      (bias_quant): BiasQuantProxyFromInjector(\n",
       "        (_zero_hw_sentinel): StatelessBuffer()\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (qnt_output): QuantIdentity(\n",
       "    (input_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "    )\n",
       "    (act_quant): ActQuantProxyFromInjector(\n",
       "      (_zero_hw_sentinel): StatelessBuffer()\n",
       "      (fused_activation_quant_proxy): FusedActivationQuantProxy(\n",
       "        (activation_impl): Identity()\n",
       "        (tensor_quant): ClampedBinaryQuant(\n",
       "          (scaling_impl): ParameterFromRuntimeStatsScaling(\n",
       "            (stats_input_view_shape_impl): OverTensorView()\n",
       "            (stats): _Stats(\n",
       "              (stats_impl): AbsPercentile()\n",
       "            )\n",
       "            (restrict_clamp_scaling): _RestrictClampValue(\n",
       "              (clamp_min_ste): ScalarClampMinSte()\n",
       "              (restrict_value_impl): FloatRestrictValue()\n",
       "            )\n",
       "            (restrict_inplace_preprocess): Identity()\n",
       "            (restrict_preprocess): Identity()\n",
       "          )\n",
       "          (bit_width): BitWidthConst(\n",
       "            (bit_width): StatelessBuffer()\n",
       "          )\n",
       "          (zero_point): StatelessBuffer()\n",
       "          (delay_wrapper): DelayWrapper(\n",
       "            (delay_impl): _NoDelay()\n",
       "          )\n",
       "          (tensor_clamp_impl): TensorClamp()\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from brevitas.core.quant import QuantType\n",
    "from brevitas.nn import QuantIdentity\n",
    "\n",
    "\n",
    "class CybSecMLPForExport(nn.Module):\n",
    "    def __init__(self, my_pretrained_model):\n",
    "        super(CybSecMLPForExport, self).__init__()\n",
    "        self.pretrained = my_pretrained_model\n",
    "        self.qnt_output = QuantIdentity(quant_type=QuantType.BINARY, bit_width=1, min_val=-1.0, max_val=1.0)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        # assume x contains bipolar {-1,1} elems\n",
    "        # shift from {-1,1} -> {0,1} since that is the\n",
    "        # input range for the trained network\n",
    "        x = (x + torch.tensor([1.0]).to(x.device)) / 2.0  \n",
    "        out_original = self.pretrained(x)\n",
    "        out_final = self.qnt_output(out_original)   # output as {-1,1}     \n",
    "        return out_final\n",
    "\n",
    "model_for_export = CybSecMLPForExport(modified_model)\n",
    "model_for_export.to(device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_padded_bipolar(model, test_loader):    \n",
    "    # ensure model is in eval mode\n",
    "    model.eval() \n",
    "    y_true = []\n",
    "    y_pred = []\n",
    "   \n",
    "    with torch.no_grad():\n",
    "        for data in test_loader:\n",
    "            inputs, target = data\n",
    "            inputs, target = inputs.to(device), target.to(device)\n",
    "            # pad inputs to 600 elements\n",
    "            input_padded = torch.nn.functional.pad(inputs, (0,7,0,0))\n",
    "            # convert inputs to {-1,+1}\n",
    "            input_scaled = 2 * input_padded - 1\n",
    "            # run the model\n",
    "            output = model(input_scaled.float())\n",
    "            y_pred.extend(list(output.flatten().cpu().numpy()))\n",
    "            # make targets bipolar {-1,+1}\n",
    "            expected = 2 * target.float() - 1\n",
    "            expected = expected.cpu().numpy()\n",
    "            y_true.extend(list(expected.flatten()))\n",
    "        \n",
    "    return accuracy_score(y_true, y_pred)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.9188772287810328"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_padded_bipolar(model_for_export, test_quantized_loader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Export to FINN-ONNX <a id=\"export_finn_onnx\" ></a>\n",
    "\n",
    "\n",
    "[ONNX](https://onnx.ai/) is an open format built to represent machine learning models, and the FINN compiler expects an ONNX model as input. We'll now export our network into ONNX to be imported and used in FINN for the next notebooks. Note that the particular ONNX representation used for FINN differs from standard ONNX, you can read more about this [here](https://finn.readthedocs.io/en/latest/internals.html#intermediate-representation-finn-onnx).\n",
    "\n",
    "You can see below how we export a trained network in Brevitas into a FINN-compatible ONNX representation. Note how we create a `QuantTensor` instance with dummy data to tell Brevitas how our inputs look like, which will be used to set the input quantization annotation on the exported model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model saved to cybsec-mlp-ready.onnx\n"
     ]
    }
   ],
   "source": [
    "import brevitas.onnx as bo\n",
    "from brevitas.quant_tensor import QuantTensor\n",
    "\n",
    "ready_model_filename = \"cybsec-mlp-ready.onnx\"\n",
    "input_shape = (1, 600)\n",
    "\n",
    "# create a QuantTensor instance to mark input as bipolar during export\n",
    "input_a = np.random.randint(0, 1, size=input_shape).astype(np.float32)\n",
    "input_a = 2 * input_a - 1\n",
    "scale = 1.0\n",
    "input_t = torch.from_numpy(input_a * scale)\n",
    "input_qt = QuantTensor(\n",
    "    input_t, scale=torch.tensor(scale), bit_width=torch.tensor(1.0), signed=True\n",
    ")\n",
    "\n",
    "#Move to CPU before export\n",
    "model_for_export.cpu()\n",
    "\n",
    "# Export to ONNX\n",
    "bo.export_finn_onnx(\n",
    "    model_for_export, export_path=ready_model_filename, input_t=input_qt\n",
    ")\n",
    "\n",
    "print(\"Model saved to %s\" % ready_model_filename)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## View the Exported ONNX in Netron\n",
    "\n",
    "Let's examine the exported ONNX model with [Netron](https://github.com/lutzroeder/netron), which is a visualizer for neural networks and allows interactive investigation of network properties. For example, you can click on the individual nodes and view the properties. Particular things of note:\n",
    "\n",
    "* The input tensor \"0\" is annotated with `quantization: finn_datatype: BIPOLAR`\n",
    "* The input preprocessing (x + 1) / 2 is exported as part of the network (initial `Add` and `Div` layers)\n",
    "* Brevitas `QuantLinear` layers are exported to ONNX as `MatMul`. We've exported the padded version; shape of the first MatMul node's weight parameter is 600x64\n",
    "* The weight parameters (second inputs) for MatMul nodes are annotated with `quantization: finn_datatype: INT2`\n",
    "* The quantized activations are exported as `MultiThreshold` nodes with `domain=finn.custom_op.general`\n",
    "* There's a final `MultiThreshold` node with threshold=0 to produce the final bipolar output (this is the `qnt_output` from `CybSecMLPForExport`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from finn.util.visualization import showInNetron\n",
    "\n",
    "showInNetron(ready_model_filename)"
   ]
  },
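  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional, purely programmatic complement to the Netron view, the sketch below loads the exported file with the `onnx` package and prints the node types and the shapes of the stored (initializer) tensors. Assuming the weights are kept as initializers, as is typical for FINN-ONNX exports, the padded 600x64 first-layer weight should show up among them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional programmatic check (sketch); Netron above is the primary viewer.\n",
    "exported = onnx.load(ready_model_filename)\n",
    "\n",
    "# List the node types in the exported graph (MatMul, MultiThreshold, Add, Div, ...)\n",
    "print(\"Node types:\", [node.op_type for node in exported.graph.node])\n",
    "\n",
    "# Print the shapes of the stored tensors; the padded first-layer weight\n",
    "# is expected to appear with dims (600, 64).\n",
    "for init in exported.graph.initializer:\n",
    "    print(init.name, tuple(init.dims))"
   ]
  },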
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## That's it! <a id=\"thats_it\" ></a>\n",
    "You created, trained and tested a quantized MLP that is ready to be loaded into FINN, congratulations! You can now proceed to the next notebook."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}