Commit 369d998a authored by auphelia's avatar auphelia
Merge branch 'dev' of https://github.com/Xilinx/finn into dev

parents d1855d74 b045a114
@@ -23,8 +23,3 @@ repos:
args: ['--fix=no']
- id: flake8
args: ['--max-line-length=88'] # default of Black
- repo: https://github.com/pre-commit/mirrors-isort
rev: v4.3.4
hooks:
- id: isort
# FINN
## <img src=https://raw.githubusercontent.com/Xilinx/finn/master/docs/img/finn-logo.png width=128/> Fast, Scalable Quantized Neural Network Inference on FPGAs
<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/master/docs/img/finn-stack.png" alt="drawing" style="margin-right: 20px" width="250"/>
[![Gitter](https://badges.gitter.im/xilinx-finn/community.svg)](https://gitter.im/xilinx-finn/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![ReadTheDocs](https://readthedocs.org/projects/finn/badge/?version=latest&style=plastic)](http://finn.readthedocs.io/)
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network
inference on FPGAs.
It specifically targets <a href="https://github.com/maltanar/qnn-inference-examples" target="_blank">quantized neural
networks</a>, with emphasis on
generating dataflow-style architectures customized for each network.
The resulting FPGA accelerators can yield very high classification rates, or conversely be run with a slow clock for very low power consumption.
The framework is fully open-source in order to give a higher degree of flexibility, and is intended to enable neural network research spanning several layers of the software/hardware abstraction stack.
For more general information about FINN, please visit the [project page](https://xilinx.github.io/finn/), check out the [publications](https://xilinx.github.io/finn/publications) or some of the [demos](https://xilinx.github.io/finn/demos).
## Getting Started
Please see the [Getting Started](https://finn.readthedocs.io/en/latest/getting_started.html) page for more information on requirements, installation, and how to run FINN in different modes. Due to the complex nature of the dependencies of the project, we only support Docker-based deployment at this time.
## What's New in FINN?
* **2020-02-27:** FINN v0.2b (beta) is released, which is a clean-slate reimplementation of the framework. Currently only fully-connected networks are supported for the end-to-end flow. Please see the release blog post for a summary of the key features.
## Documentation
You can view the documentation on [readthedocs](https://finn.readthedocs.io) or build them locally using `python setup.py doc` from inside the Docker container. Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/master/notebooks), which we recommend running from inside Docker for a better experience.
## Community
We have a [gitter channel](https://gitter.im/xilinx-finn/community) where you can ask questions. You can use the GitHub issue tracker to report bugs, but please don't file issues to ask questions as this is better handled in the gitter channel. We also heartily welcome contributors to the project but do not yet have guidelines in place for this, so if you are interested just get in touch over gitter.
## Description
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network.
For more information, please visit the [project page](https://xilinx.github.io/finn/).
## Citation
The current implementation of the framework is based on the following publications. Please consider citing them if you find FINN useful.
@article{blott2018finn,
title={FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks},
author={Blott, Michaela and Preu{\ss}er, Thomas B and Fraser, Nicholas J and Gambardella, Giulio and O’brien, Kenneth and Umuroglu, Yaman and Leeser, Miriam and Vissers, Kees},
journal={ACM Transactions on Reconfigurable Technology and Systems (TRETS)},
volume={11},
number={3},
pages={1--23},
year={2018},
publisher={ACM New York, NY, USA}
}
@inproceedings{finn,
author = {Umuroglu, Yaman and Fraser, Nicholas J. and Gambardella, Giulio and Blott, Michaela and Leong, Philip and Jahre, Magnus and Vissers, Kees},
title = {FINN: A Framework for Fast, Scalable Binarized Neural Network Inference},
booktitle = {Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays},
series = {FPGA '17},
year = {2017},
pages = {65--74},
publisher = {ACM}
}
## Old version
# Status for FINN example networks
| | Basic test | TFC-w1a1 | TFC-w1a2 | CNV-w1a1 | CNV-w1a2 | CNV-w2a2 |
|--------------------------- |------------ |---------- |---------- |---------- |---------- |---------- |
| Export/Import | x | x | x | x | | |
| Streamlining | x | x | x | | | |
| Convert to HLS layers | x | x | | | | |
| npysim | x | x | | | | |
| Stitched IPI design | x | x | | | | |
| rtlsim | x | | | | | |
| Hardware test | x | | | | | |
This page has moved to:
https://finn-dev.readthedocs.io/en/latest/example_networks.html
.. _example_networks:
****************
Example Networks
****************
FINN uses `several pre-trained QNNs <https://github.com/maltanar/brevitas_cnv_lfc>`_ that serve as examples and testcases.
You can find a status summary below for each network.
* TFC, SFC, LFC... are fully-connected networks trained on the MNIST dataset
* CNV is a convolutional network trained on the CIFAR-10 dataset
* w\_a\_ refers to the quantization used for the weights (w) and activations (a) in bits
The rows in the table are different steps of the FINN end-to-end flow.
If a particular network is supported for a particular step in the current FINN
version, this is indicated by an x mark in the table.
+-----------------------+------------+----------+----------+----------+----------+----------+
| FINN step | Basic test | TFC-w1a1 | TFC-w1a2 | CNV-w1a1 | CNV-w1a2 | CNV-w2a2 |
+-----------------------+------------+----------+----------+----------+----------+----------+
| Export/Import | x | x | x | x | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| Streamlining | x | x | x | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| Convert to HLS layers | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| Stitched IP | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| Hardware test | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| npysim | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| rtlsim node-by-node | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
| rtlsim stitched IP | x | x | | | | |
+-----------------------+------------+----------+----------+----------+----------+----------+
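The w\_a\_ naming convention described above can be parsed mechanically. A minimal sketch, assuming a hypothetical helper (`parse_qnn_name` is not part of FINN; it only illustrates the naming scheme):

```python
import re

def parse_qnn_name(name):
    """Parse example-network names like 'TFC-w1a2' into
    (topology, weight_bits, activation_bits).

    Hypothetical helper illustrating the w_a_ naming convention:
    w<N> is the weight bit width, a<M> the activation bit width.
    """
    m = re.fullmatch(r"(\w+)-w(\d+)a(\d+)", name)
    if m is None:
        raise ValueError(f"not a w_a_ style name: {name}")
    return m.group(1), int(m.group(2)), int(m.group(3))

print(parse_qnn_name("TFC-w1a2"))  # ('TFC', 1, 2)
```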
@@ -26,10 +26,8 @@ More FINN Resources
===================
* `List of publications <https://github.com/Xilinx/finn/blob/dev/docs/publications.md>`_
* `Roadmap <https://github.com/Xilinx/finn/projects/1>`_
* `Status of example networks <https://github.com/Xilinx/finn/blob/dev/docs/example-networks.md>`_
.. toctree::
:maxdepth: 5
@@ -38,6 +36,7 @@ More FINN Resources
getting_started
tutorials
end_to_end_flow
example_networks
internals
source_code/finn
genindex
@@ -9,10 +9,6 @@ if [ -z "$PYNQ_IP" ];then
echo "Please set the PYNQ_IP env.var. to enable PYNQ deployment tests."
fi
if [ -z "$PYNQ_BOARD" ];then
echo "Please set the PYNQ_BOARD env.var. to enable PYNQ deployment tests."
fi
DOCKER_GID=$(id -g)
DOCKER_GNAME=$(id -gn)
DOCKER_UNAME=$(id -un)
@@ -34,6 +30,7 @@ DOCKER_INST_NAME=$(echo "$DOCKER_INST_NAME" | tr '[:upper:]' '[:lower:]')
: ${NETRON_PORT=8081}
: ${PYNQ_USERNAME="xilinx"}
: ${PYNQ_PASSWORD="xilinx"}
: ${PYNQ_BOARD="Pynq-Z1"}
: ${PYNQ_TARGET_DIR="/home/xilinx/$DOCKER_INST_NAME"}
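The `: ${VAR="default"}` lines above use the shell's assign-default parameter expansion: `:` is a no-op command, and `${VAR="default"}` assigns the default to `VAR` only if it is currently unset, so values exported by the caller win. A minimal sketch (variable names chosen only for illustration):

```shell
# Assign-default expansion: the default applies only when the
# variable is unset.
unset PYNQ_BOARD
: ${PYNQ_BOARD="Pynq-Z1"}
echo "$PYNQ_BOARD"          # Pynq-Z1

PYNQ_BOARD="Ultra96"
: ${PYNQ_BOARD="Pynq-Z1"}   # already set, so the default is ignored
echo "$PYNQ_BOARD"          # Ultra96
```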
# Absolute path to this script, e.g. /home/user/bin/foo.sh
@@ -4,6 +4,7 @@ from pkgutil import get_data
import pytest
import numpy as np
# as of Feb'20 there is a bug that segfaults ONNX shape inference if we
# import pytorch before onnx, so we make sure to import onnx first
import onnx # NOQA
@@ -171,7 +172,6 @@ def test_end2end_tfc_verify_all():
golden = ModelWrapper(build_dir + "/end2end_tfc_w1_a1_streamlined.onnx")
iname = golden.graph.input[0].name
oname = golden.graph.output[0].name
ishape = golden.get_tensor_shape(iname)
raw_i = get_data("finn", "data/onnx/mnist-conv/test_data_set_0/input_0.pb")
input_tensor = onnx.load_tensor_from_string(raw_i)
x = nph.to_array(input_tensor)
@@ -229,6 +229,8 @@ def test_end2end_tfc_deploy_on_pynq():
model = ModelWrapper(build_dir + "/end2end_tfc_w1_a1_pynq_driver.onnx")
try:
ip = os.environ["PYNQ_IP"] # no default for this one; skip if not defined
if ip == "":
pytest.skip("PYNQ board IP address not specified")
username = os.getenv("PYNQ_USERNAME", "xilinx")
password = os.getenv("PYNQ_PASSWORD", "xilinx")
target_dir = os.getenv("PYNQ_TARGET_DIR", "/home/xilinx/finn")
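The hunks above repeat the same guard: read `PYNQ_IP` and skip the test when it is unset or empty. The pattern can be factored into a helper; a minimal sketch (`skip_if_no_pynq_ip` is hypothetical, not part of the FINN test suite):

```python
import os

import pytest

def skip_if_no_pynq_ip():
    """Skip the calling test when the PYNQ_IP environment variable
    is unset or empty; otherwise return the board IP.

    Hypothetical helper mirroring the inline pattern used in the
    FINN end-to-end tests above.
    """
    ip = os.environ.get("PYNQ_IP", "")
    if ip == "":
        pytest.skip("PYNQ board IP address not specified")
    return ip
```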
@@ -244,7 +246,6 @@ def test_end2end_tfc_run_on_pynq():
golden = ModelWrapper(build_dir + "/end2end_tfc_w1_a1_streamlined.onnx")
iname = golden.graph.input[0].name
oname = golden.graph.output[0].name
ishape = golden.get_tensor_shape(iname)
raw_i = get_data("finn", "data/onnx/mnist-conv/test_data_set_0/input_0.pb")
input_tensor = onnx.load_tensor_from_string(raw_i)
x = nph.to_array(input_tensor)
@@ -259,6 +260,8 @@ def test_end2end_tfc_run_on_pynq():
oname = parent_model.graph.output[0].name
try:
ip = os.environ["PYNQ_IP"] # NOQA
if ip == "":
pytest.skip("PYNQ board IP address not specified")
# produce results with npysim
sdp_node = getCustomOp(parent_model.graph.node[2])
sdp_node.set_nodeattr(
@@ -274,6 +274,8 @@ def test_fpgadataflow_ipstitch_pynq_deployment_folder():
)
try:
ip = os.environ["PYNQ_IP"] # no default for this one; skip if not defined
if ip == "":
pytest.skip("PYNQ board IP address not specified")
username = os.getenv("PYNQ_USERNAME", "xilinx")
password = os.getenv("PYNQ_PASSWORD", "xilinx")
target_dir = os.getenv("PYNQ_TARGET_DIR", "/home/xilinx/finn")
@@ -305,6 +307,8 @@ def test_fpgadataflow_ipstitch_remote_execution():
)
try:
ip = os.environ["PYNQ_IP"] # NOQA
if ip == "":
pytest.skip("PYNQ board IP address not specified")
idt = DataType.INT2
x = gen_finn_dt_tensor(idt, (1, 4))
input_dict = {"inp": x}