intended to be a generic DNN accelerator like xDNN, but rather a tool for
exploring the design space of DNN inference accelerators on FPGAs.
<br><br>
A new, more modular version of FINN is currently under development <a href="https://github.com/Xilinx/finn">on GitHub</a>, and we welcome contributions from the community!
## Quickstart
Depending on what you would like to do, we have different suggestions on where to get started:
* **I want to try out prebuilt QNN accelerators on real hardware.** Head over to the <a href="https://github.com/Xilinx/BNN-PYNQ" target="_blank">BNN-PYNQ</a> repository to try out some image
classification accelerators, or to <a href="https://github.com/Xilinx/LSTM-PYNQ" target="_blank">LSTM-PYNQ</a>
to try optical character recognition with LSTMs.
* **I want to try the full design flow.** The <a href="https://github.com/Xilinx/FINN" target="_blank">FINN</a> repository
contains the Python toolflow that goes from a trained, quantized Caffe network
to an accelerator running on real hardware.
* **I want to train new quantized networks for FINN.** Check out <a href="https://github.com/Xilinx/brevitas" target="_blank">Brevitas</a>,
our PyTorch library for training quantized networks. The Brevitas-to-FINN part of the flow is coming soon!
* **I want to understand how it all fits together.** Check out our [publications](#publications),
particularly the <a href="https://arxiv.org/abs/1612.07119" target="_blank">FINN paper at FPGA'17</a> and the <a href="https://arxiv.org/abs/1809.04570" target="_blank">FINN-R paper in ACM TRETS</a>.
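To give a flavor of what the binarized accelerators above compute: with weights and activations constrained to {-1, +1} and encoded as bits (1 for +1, 0 for -1), a dot product reduces to an XNOR followed by a popcount, which is the core trick behind FINN-style BNN datapaths. The sketch below is purely illustrative plain Python (the function name and bit packing are our own, not FINN or BNN-PYNQ API):

```python
# Illustrative sketch of the XNOR-popcount dot product used by
# binarized neural networks. Vectors over {-1, +1} are packed as bit
# masks: bit value 1 encodes +1, bit value 0 encodes -1.

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks."""
    # XNOR sets a bit wherever the two signs agree.
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    # Each agreement contributes +1, each disagreement -1.
    return 2 * matches - n

# Example: a = [+1, -1, +1, +1] packed as 0b1101 (element 0 in the LSB),
#          w = [+1, +1, -1, +1] packed as 0b1011.
# True dot product: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
print(binary_dot(0b1101, 0b1011, 4))  # -> 0
```

On an FPGA this maps to a wide XNOR gate plus a popcount tree per output, which is why binarized layers are so cheap in LUTs compared to multiply-accumulate datapaths.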