diff --git a/docs/index.md b/docs/index.md
index 8213e6b637b0df788a3368b52ead7c2e591359c1..d8025be0a864f6eab832d6c9992983e6001de5d4 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -11,23 +11,31 @@ It is not
 intended to be a generic DNN accelerator like xDNN, but rather a tool for
 exploring the design space of DNN inference accelerators on FPGAs.
 <br><br>
-A new, more modular version of FINN is currently under development on GitHub, and we welcome contributions from the community!
-<br>
+A new, more modular version of FINN is currently under development <a href="https://github.com/Xilinx/finn" target="_blank">on GitHub</a>, and we welcome contributions from the community!
+
 
 ## Quickstart
 
-Depending on what you would like to do, we have
-different suggestions on where to get started:
+Depending on what you would like to do, we have different suggestions on where to get started:
 
-* **I want to try out premade accelerators on real hardware.** Head over to <a href="https://github.com/Xilinx/BNN-PYNQ" target="_blank">BNN-PYNQ</a> repository to try out some image
+* **I want to try out prebuilt QNN accelerators on real hardware.** Head over to the <a href="https://github.com/Xilinx/BNN-PYNQ" target="_blank">BNN-PYNQ</a> repository for image
 classification accelerators, or to <a href="https://github.com/Xilinx/LSTM-PYNQ" target="_blank">LSTM-PYNQ</a>
 to try optical character recognition with LSTMs.
-* **I want to try the full design flow.** The <a href="https://github.com/Xilinx/FINN" target="_blank">FINN</a> repository
-contains the Python toolflow that goes from a trained, quantized Caffe network
-to an accelerator running on real hardware.
-* **I want to train new quantized networks for FINN.** Have a look <a href="https://github.com/Xilinx/BNN-PYNQ/tree/master/bnn/src/training" target="_blank">here</a>, at
-[this presentation](https://drive.google.com/open?id=17oorGvtUbdFd-o1OzSuxGCSrWsvm_S2ftC1UC2FLtuE)
-for an example with Fashion-MNIST, or <a href="https://github.com/Xilinx/pytorch-ocr" target="_blank">here</a> for quantized
-LSTMs with PyTorch.
+* **I want to train new quantized networks for FINN.** Check out <a href="https://github.com/Xilinx/brevitas" target="_blank">Brevitas</a>,
+our PyTorch library for training quantized networks (see the sketch after this list). The Brevitas-to-FINN part of the flow is coming soon!
 * **I want to understand how it all fits together.** Check out our [publications](#publications),
 particularly the <a href="https://arxiv.org/abs/1612.07119" target="_blank">FINN paper at FPGA'17</a> and the <a href="https://arxiv.org/abs/1809.04570" target="_blank">FINN-R paper in ACM TRETS</a>.
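+
+For a quick impression of what a quantized network definition might look like, here is a minimal sketch assuming Brevitas's `QuantLinear` and `QuantReLU` layers (layer sizes and bit widths below are purely illustrative):
+
+```python
+import torch.nn as nn
+from brevitas.nn import QuantLinear, QuantReLU
+
+# Illustrative 2-bit MLP on flattened 784-dim inputs; pick sizes and bit widths for your own task.
+model = nn.Sequential(
+    QuantLinear(784, 256, bias=True, weight_bit_width=2),
+    QuantReLU(bit_width=2),
+    QuantLinear(256, 10, bias=True, weight_bit_width=2),
+)
+```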