diff --git a/README.md b/README.md
index 609995df32aacdc073ca2013019c512e944e467e..bc1b537aea7249e5f2e3ced5451c9cf3d88003ff 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,16 @@
 # FINN
 
-Fast, Scalable Quantized Neural Network Inference on FPGAs
+## <img src="https://raw.githubusercontent.com/Xilinx/finn/master/docs/img/finn-logo.png" width="128"/> Fast, Scalable Quantized Neural Network Inference on FPGAs
 
 [![Gitter](https://badges.gitter.im/xilinx-finn/community.svg)](https://gitter.im/xilinx-finn/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
-
-
-## Description
-
+[![ReadTheDocs](https://readthedocs.org/projects/finn/badge/?version=latest&style=plastic)](http://finn.readthedocs.io/)
 
 FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network.
-For more information, please visit the [project page](https://xilinx.github.io/finn/).
+For more general information about FINN, please visit the [project page](https://xilinx.github.io/finn/).
+
+## Getting Started
 
-A new, more modular version of FINN is currently under development on GitHub, and we welcome contributions from the community!
-Stay tuned for more updates.
+Please see the [Getting Started](https://finn.readthedocs.io/en/latest/getting_started.html) page for more information on installation, requirements, and how to run FINN in different modes.
 
 ## Old version