FINN
====
**Fast, Scalable Quantized Neural Network Inference on FPGAs**
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. For more general information about FINN, please visit the project page.
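To give a flavour of the dataflow compiler style, below is a minimal sketch of loading a quantized ONNX model and rewriting it through FINN's graph transformations. The module paths and transformation names are assumptions based on the FINN Python package layout and may differ between releases; the file names are hypothetical.

```python
# Minimal sketch of FINN's transformation-driven flow (module paths assumed).
from finn.core.modelwrapper import ModelWrapper
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.fold_constants import FoldConstants

# Load a quantized network exported to FINN's ONNX-based format
model = ModelWrapper("quantized_net.onnx")  # hypothetical file name

# Each compiler pass is applied as a graph-to-graph transformation
model = model.transform(InferShapes())
model = model.transform(FoldConstants())

# Save the tidied model for further hardware-oriented transformations
model.save("quantized_net_tidy.onnx")
```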
Getting Started
---------------
Please see the Getting Started page for more information on installation, requirements and how to run FINN in different modes.
Old version
-----------
We previously released an early-stage prototype of a toolflow that took in Caffe-HWGQ binarized network descriptions and produced dataflow architectures. You can find it in the v0.1 branch of this repository. Please be aware that this version is deprecated and unsupported. The master branch does not share history with the v0.1 branch, so it should be treated as a separate repository for all purposes.