FINN
Fast, Scalable Quantized Neural Network Inference on FPGAs
Description
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. For more information, please visit the project page.
A new, more modular version of FINN is currently under development on GitHub, and we welcome contributions from the community! Stay tuned for more updates.
Old version
We previously released an early-stage prototype of a toolflow that took in Caffe-HWGQ binarized network descriptions and produced dataflow architectures. You can find it in the v0.1 branch of this repository. Please be aware that this version is deprecated and unsupported. The master branch does not share history with the v0.1 branch, so for all practical purposes it should be treated as a separate repository.
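For reference, a minimal sketch of inspecting that prototype, assuming a standard clone of this repository with the default `origin` remote:

```
# From a clone of this repository: switch to the deprecated v0.1 prototype.
# v0.1 shares no history with master, so treat it as a separate codebase.
git checkout v0.1
```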