Authored by Yaman Umuroglu
* [Build] add a deployment_package step

* [Build] typo fix in build examples

* [Build] fix deployment_package step

* [Analysis] ensure python integers in fpgadataflow analysis passes

* [Build] report generation and other minor improvements

* [Docs] update build_dataflow docs

* [Build] latency est. fix

* [Build] check if fold config is None

* [Build] add ooc synthesis step

* [Deps] update finn-base

* [Util] out_of_context_synth: remove remote, use launch_process_helper

* [Build] include all outputs in examples configs

* [Docs] update build flow docs

* [Deps] update finn-base

* [Util] bugfix in launch_process_helper call

* [Docker] use interactive mode for builds

* [Build] enable pdb debugging for builds

* [Refactor] move build functions to own submodule

* [Test] build_dataflow: fix expected files

* [Build] report estimated resource total

* [Infra] remove old eggs

* [HLSCustomOp] introduce get_op_counts

only implemented for MVAU and VVAU for now

* [HLSCustomOp] extend get_op_counts to include params too

* [Analysis] introduce op_and_param_counts pass

* [Build] generate op/param counts as part of estimates + add doc

* [HLSCustomOp] assert if ap_int_max_w is too large

* [StreamingFC] fix ap_int_max_w calculation

* [Build] minor fix in step_generate_estimate_reports
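Taken together, these commits round out the build_dataflow flow: estimate reports that now include op/param counts, an out-of-context synthesis step, and a deployment_package step. The sketch below shows how such a flow might be driven; the module paths, config fields, and output-type names are assumptions based on the build_dataflow docs of later FINN revisions, not necessarily this exact commit.

```python
# Hypothetical sketch of driving the build_dataflow flow; module paths and
# names are assumptions from later FINN builder docs, not this exact commit.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_tfc_w1a1",      # where reports and artifacts land
    synth_clk_period_ns=10.0,          # target clock period for synthesis
    board="Pynq-Z1",                   # target board for bitfile/deployment
    generate_outputs=[
        # estimate reports, now including op and param counts
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
        # the new out-of-context synthesis step
        build_cfg.DataflowOutputType.OOC_SYNTH,
        # bitfile + driver, bundled by the new deployment_package step
        build_cfg.DataflowOutputType.BITFILE,
        build_cfg.DataflowOutputType.PYNQ_DRIVER,
        build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE,
    ],
)

# run all steps of the default flow on the given ONNX model
build.build_dataflow_cfg("tfc_w1a1.onnx", cfg)
```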

FINN


FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. It is not intended to be a generic DNN accelerator like xDNN, but rather a tool for exploring the design space of DNN inference accelerators on FPGAs.

A new, more modular version of FINN is currently under development on GitHub, and we welcome contributions from the community!

Quickstart

Depending on what you would like to do, we have different suggestions on where to get started:

  • I want to try out prebuilt QNN accelerators on real hardware. Head over to the BNN-PYNQ repository to try out some image classification accelerators, or to the LSTM-PYNQ repository to try optical character recognition with LSTMs.
  • I want to train new quantized networks for FINN. Check out Brevitas, our PyTorch library for training quantized networks (see the Brevitas sketch after this list). The Brevitas-to-FINN part of the flow is coming soon!
  • I want to understand the computations involved in quantized inference. Check out these Jupyter notebooks on QNN inference. The repo contains simple NumPy/Python layer implementations and a few pretrained QNNs for instructive purposes (see the NumPy sketch after this list).
  • I want to understand how it all fits together. Check out our publications, particularly the FINN paper at FPGA'17 and the FINN-R paper in ACM TRETS.
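As a flavor of the Brevitas flow mentioned above, here is a minimal sketch of a small network with 2-bit weights and activations. The layer names and keyword arguments (weight_bit_width, bit_width) follow Brevitas's documented quantized layers, but treat the exact signatures as assumptions that may vary across versions.

```python
# Minimal Brevitas (PyTorch) sketch: 2-bit weights and activations.
# NOTE: exact layer signatures are assumptions; check your Brevitas version.
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantLinear, QuantReLU

class TinyQNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            QuantConv2d(1, 16, 3, weight_bit_width=2),  # 2-bit conv weights
            QuantReLU(bit_width=2),                     # 2-bit activations
        )
        # 28x28 input -> 26x26 after a 3x3 conv with no padding
        self.classifier = QuantLinear(16 * 26 * 26, 10, bias=True,
                                      weight_bit_width=2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```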
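As for the QNN inference notebooks above: the core idea they illustrate is that quantized layers reduce to integer dot products followed by thresholding. Below is a hypothetical NumPy sketch of a binarized fully-connected layer in that spirit; it is illustrative only, not FINN's actual implementation.

```python
# Hypothetical NumPy sketch of a binarized fully-connected layer,
# in the spirit of the QNN inference notebooks (not FINN library code).
import numpy as np

def binarized_fc(x, W, thresholds):
    # W and x hold {-1, +1} values, so W @ x is an integer accumulator;
    # comparing against per-output thresholds re-binarizes the activations
    acc = W @ x
    return np.where(acc >= thresholds, +1, -1)

# toy usage: 4 inputs, 3 outputs, zero thresholds (i.e. a sign function)
rng = np.random.default_rng(0)
x = rng.choice([-1, +1], size=4)
W = rng.choice([-1, +1], size=(3, 4))
print(binarized_fc(x, W, thresholds=np.zeros(3)))
```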