Commit 3dc48c57 authored by Hendrik Borras

Applied new pre-commit-config to all files, non-critical changes only

parent 4dca934f
Showing 143 additions and 138 deletions
@@ -12,14 +12,15 @@
#
import os
import sys
sys.path.insert(0, os.path.abspath('../../src/'))
sys.path.insert(0, os.path.abspath("../../src/"))
# -- Project information -----------------------------------------------------
project = 'FINN'
copyright = '2020, Xilinx'
author = 'Y. Umuroglu and J. Petri-Koenig'
project = "FINN"
copyright = "2020, Xilinx"
author = "Y. Umuroglu and J. Petri-Koenig"
# -- General configuration ---------------------------------------------------
@@ -27,17 +28,16 @@ author = 'Y. Umuroglu and J. Petri-Koenig'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
]
extensions.append('sphinx.ext.autodoc')
extensions = []
extensions.append("sphinx.ext.autodoc")
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# -- Options for HTML output -------------------------------------------------
@@ -45,11 +45,11 @@ exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme = "sphinx_rtd_theme"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ["_static"]
master_doc = 'index'
master_doc = "index"
@@ -9,25 +9,25 @@ Frequently Asked Questions
Can I install FINN out of the Docker container?
===============================================
We do not support running FINN outside of the Docker container at the moment. This is due
to the high complexity of the FINN project dependencies.
Since FINN uses ONNX, can I compile any model from the ONNX Model Zoo to an FPGA accelerator?
=============================================================================================
The short answer is no. FINN uses ONNX in a specific (non-standard) way, including custom layer
types and quantization annotations. Networks must first be quantized using Brevitas and exported
to FINN-ONNX before they can be converted to FPGA accelerators.
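As a rough illustration of that flow (a minimal sketch, not part of this commit: the file names are placeholders and only tidy-up transformations imported elsewhere in this diff are used), a Brevitas-exported FINN-ONNX model is loaded into FINN's ModelWrapper and cleaned up before any hardware build steps:

from finn.core.modelwrapper import ModelWrapper
from finn.transformation.fold_constants import FoldConstants
from finn.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.infer_shapes import InferShapes

# "model.onnx" stands in for a network quantized in Brevitas and exported
# in FINN-ONNX form; each transform() call returns a new ModelWrapper.
model = ModelWrapper("model.onnx")
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveReadableTensorNames())
model = model.transform(InferDataTypes())
model.save("model_tidy.onnx")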
Can I deploy custom NNs with arbitrary precisions and layers using FINN?
=========================================================================
Yes, though the effort required and quality of results will vary.
Although we do support arbitrary
precision, the way we create the hardware isn't typically practical for more than
4 bits, or for very large networks on a single FPGA.
In terms of layers, only a subset of the quantized layers covered by the various FINN examples
are currently supported.
It is possible to add support for new layers, though we don't have tutorials for this in place
just yet.
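To give a feel for what adding a new layer involves (a hedged sketch: MyNewOp_Batch and its attribute are hypothetical, and the real HLSCustomOp base class has further methods for code generation and shape/datatype inference that a complete implementation must provide), a new fpgadataflow layer is a subclass of HLSCustomOp registered under its op type name, mirroring the custom_op dictionary touched later in this diff:

from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp


class MyNewOp_Batch(HLSCustomOp):
    """Hypothetical custom fpgadataflow layer (skeleton only)."""

    def get_nodeattr_types(self):
        # declare node attributes as (type, required, default) tuples
        my_attrs = {"PE": ("i", True, 0)}
        my_attrs.update(super().get_nodeattr_types())
        return my_attrs


# registered alongside the built-in layers, e.g.:
# custom_op["MyNewOp_Batch"] = MyNewOp_Batch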
@@ -35,16 +35,16 @@ just yet.
Does FINN only work with the example networks?
==============================================
FINN isn't restricted to the example networks;
rather, it's restricted to certain patterns (e.g. certain layer types and their combinations).
The current best practice for custom networks is to take a working network and gradually modify it.
What is the expected background for using FINN?
===============================================
Some general knowledge of Python, Docker, machine learning with neural networks and Jupyter notebooks
is expected.
Our goal is to shape the tool so that no hardware/FPGA background
should be necessary, although having some knowledge will give better results.
What operating systems are supported by FINN?
@@ -66,6 +66,6 @@ What board do you recommend to start working with FINN?
Our preferred target platforms are those supported by `PYNQ <http://www.pynq.io/board.html>`_.
For those boards we can offer end-to-end (DNN-to-bitstream) deployment;
see the `finn-examples <https://github.com/Xilinx/finn-examples>`_ repository for some examples.
However, FINN also supports Vivado IP Integrator designs. The IPs connect using AXI stream (FIFO)
in-and-out interfaces. This means the generated IP can be integrated onto any Xilinx FPGA board,
though you will have to do the system integration manually.
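For boards without PYNQ support, the stitched-IP route looks roughly like this (a sketch assuming the model already consists only of fpgadataflow layers; the FPGA part, clock period and file name are placeholders, and the exact transformation arguments are assumptions based on the modules imported elsewhere in this diff):

from finn.core.modelwrapper import ModelWrapper
from finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP
from finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP
from finn.transformation.fpgadataflow.prepare_ip import PrepareIP
from finn.transformation.general import GiveUniqueNodeNames

fpga_part = "xc7z020clg400-1"  # placeholder Zynq-7020 part
clk_ns = 10.0  # placeholder target clock period in nanoseconds

model = ModelWrapper("dataflow_model.onnx")  # placeholder file name
model = model.transform(GiveUniqueNodeNames())
model = model.transform(PrepareIP(fpga_part, clk_ns))  # generate per-layer HLS code
model = model.transform(HLSSynthIP())  # synthesize each layer to an IP core
model = model.transform(CreateStitchedIP(fpga_part, clk_ns))
# the stitched IP can then be dropped into a Vivado IP Integrator block design
# and driven through its AXI stream in/out interfaces.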
@@ -36,4 +36,4 @@ for (( i=0; i<$NBLOCKS; i++ ))
do
START=$(( 1 + $i * 1024 ))
tail -n +$START $1 | head -n 1024 >> memblock_$i.dat
done
\ No newline at end of file
done
@@ -26,12 +26,12 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch
import pandas as pd
import math
import numpy as np
import pandas as pd
import torch
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder
import math
# quantize the UNSW_NB15 dataset and convert it to binary vectors
# reimplementation
@@ -112,7 +112,7 @@ class UNSW_NB15_quantized(torch.utils.data.Dataset):
def round_like_matlab_number(self, n: np.float64) -> int:
"""Round the input "n" like matlab uint32(n) cast (which also rounds) e.g.
0.5->1; 1.5->2; 2.3->2; 2.45->2 """
0.5->1; 1.5->2; 2.3->2; 2.45->2"""
if n - math.floor(n) < 0.5:
return math.floor(n)
return math.ceil(n)
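A short illustration of why this helper exists (a standalone copy of the method above, compared against CPython's built-in round(), which uses banker's rounding rather than rounding halves up):

import math


def round_like_matlab_number(n):
    # same logic as the method above, without the class context
    if n - math.floor(n) < 0.5:
        return math.floor(n)
    return math.ceil(n)


print(round(0.5), round_like_matlab_number(0.5))  # 0 vs 1
print(round(2.5), round_like_matlab_number(2.5))  # 2 vs 3
print(round(2.3), round_like_matlab_number(2.3))  # 2 vs 2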
......
@@ -27,9 +27,9 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import argparse
import numpy as np
from driver import io_shape_dict
from driver_base import FINNExampleOverlay
import numpy as np
def make_unsw_nb15_test_batches(bsize, dataset_root):
......
@@ -35,10 +35,11 @@
PyScaffold helps you to put up the scaffold of your new Python project.
Learn more under: https://pyscaffold.org/
"""
import sys
from pkg_resources import VersionConflict, require
from setuptools import setup
import sys
try:
require("setuptools>=38.3")
except VersionConflict:
......
@@ -26,8 +26,8 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from finn.util.fpgadataflow import is_fpgadataflow_node
from finn.custom_op.registry import getCustomOp
from finn.util.fpgadataflow import is_fpgadataflow_node
def floorplan_params(model):
......
@@ -25,8 +25,8 @@
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import warnings
import os
import warnings
import xml.etree.ElementTree as ET
import finn.custom_op.registry as registry
......
@@ -29,9 +29,9 @@
import os
import xml.etree.ElementTree as ET
from finn.transformation.move_reshape import _is_fpgadataflow_node
from finn.core.modelwrapper import ModelWrapper
from finn.custom_op.registry import getCustomOp
from finn.transformation.move_reshape import _is_fpgadataflow_node
def post_synth_res(model, override_synth_report_filename=None):
......
@@ -26,14 +26,15 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import List, Optional, Any
from finn.util.basic import pynq_part_map, alveo_part_map
from finn.transformation.fpgadataflow.vitis_build import VitisOptStrategy
from enum import Enum
import numpy as np
import os
from dataclasses import dataclass
from dataclasses_json import dataclass_json
import os
import numpy as np
from enum import Enum
from typing import Any, List, Optional
from finn.transformation.fpgadataflow.vitis_build import VitisOptStrategy
from finn.util.basic import alveo_part_map, pynq_part_map
class ShellFlowType(str, Enum):
......
@@ -26,78 +26,78 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from finn.core.modelwrapper import ModelWrapper
import os
import json
import numpy as np
import os
from copy import deepcopy
from shutil import copy, copytree
import finn.transformation.fpgadataflow.convert_to_hls_layers as to_hls
import finn.transformation.streamline.absorb as absorb
from finn.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
from finn.transformation.fold_constants import FoldConstants
from finn.transformation.general import (
ApplyConfig,
GiveReadableTensorNames,
GiveUniqueNodeNames,
RemoveUnusedTensors,
RemoveStaticGraphInputs,
)
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.streamline import Streamline
from finn.transformation.infer_data_layouts import InferDataLayouts
from finn.transformation.move_reshape import RemoveCNVtoFCFlatten
from finn.transformation.lower_convs_to_matmul import LowerConvsToMatMul
from finn.transformation.streamline.reorder import MakeMaxPoolNHWC
from shutil import copy, copytree
from finn.transformation.fpgadataflow.insert_dwc import InsertDWC
from finn.transformation.fpgadataflow.insert_fifo import InsertFIFO
from finn.transformation.fpgadataflow.prepare_ip import PrepareIP
from finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP
from finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP
from finn.transformation.fpgadataflow.set_fifo_depths import (
InsertAndSetFIFODepths,
RemoveShallowFIFOs,
)
from finn.transformation.fpgadataflow.make_zynq_proj import ZynqBuild
from finn.transformation.fpgadataflow.vitis_build import VitisBuild
from finn.transformation.fpgadataflow.make_pynq_driver import MakePYNQDriver
from finn.transformation.fpgadataflow.set_folding import SetFolding
from finn.transformation.fpgadataflow.create_dataflow_partition import (
CreateDataflowPartition,
)
from finn.transformation.fpgadataflow.replace_verilog_relpaths import (
ReplaceVerilogRelPaths,
)
from finn.custom_op.registry import getCustomOp
from finn.analysis.fpgadataflow.dataflow_performance import dataflow_performance
from finn.analysis.fpgadataflow.exp_cycles_per_layer import exp_cycles_per_layer
from finn.analysis.fpgadataflow.res_estimation import (
res_estimation,
res_estimation_complete,
)
from finn.analysis.fpgadataflow.hls_synth_res_estimation import hls_synth_res_estimation
from finn.analysis.fpgadataflow.op_and_param_counts import (
aggregate_dict_keys,
op_and_param_counts,
)
from finn.analysis.fpgadataflow.dataflow_performance import dataflow_performance
from finn.analysis.fpgadataflow.hls_synth_res_estimation import hls_synth_res_estimation
from finn.util.config import extract_model_config_to_json
from finn.transformation.fpgadataflow.synth_ooc import SynthOutOfContext
from finn.analysis.fpgadataflow.res_estimation import (
res_estimation,
res_estimation_complete,
)
from finn.builder.build_dataflow_config import (
DataflowBuildConfig,
DataflowOutputType,
ShellFlowType,
VerificationStepType,
)
from finn.transformation.fpgadataflow.annotate_cycles import AnnotateCycles
from finn.core.modelwrapper import ModelWrapper
from finn.core.onnx_exec import execute_onnx
import numpy as np
from finn.util.test import execute_parent
from finn.transformation.fpgadataflow.prepare_cppsim import PrepareCppSim
from finn.core.throughput_test import throughput_test_rtlsim
from finn.custom_op.registry import getCustomOp
from finn.transformation.bipolar_to_xnor import ConvertBipolarMatMulToXnorPopcount
from finn.transformation.fold_constants import FoldConstants
from finn.transformation.fpgadataflow.annotate_cycles import AnnotateCycles
from finn.transformation.fpgadataflow.compile_cppsim import CompileCppSim
from finn.transformation.fpgadataflow.set_exec_mode import SetExecMode
from finn.transformation.fpgadataflow.create_dataflow_partition import (
CreateDataflowPartition,
)
from finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP
from finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP
from finn.transformation.fpgadataflow.insert_dwc import InsertDWC
from finn.transformation.fpgadataflow.insert_fifo import InsertFIFO
from finn.transformation.fpgadataflow.make_pynq_driver import MakePYNQDriver
from finn.transformation.fpgadataflow.make_zynq_proj import ZynqBuild
from finn.transformation.fpgadataflow.prepare_cppsim import PrepareCppSim
from finn.transformation.fpgadataflow.prepare_ip import PrepareIP
from finn.transformation.fpgadataflow.prepare_rtlsim import PrepareRTLSim
from finn.core.throughput_test import throughput_test_rtlsim
from copy import deepcopy
from finn.transformation.fpgadataflow.replace_verilog_relpaths import (
ReplaceVerilogRelPaths,
)
from finn.transformation.fpgadataflow.set_exec_mode import SetExecMode
from finn.transformation.fpgadataflow.set_fifo_depths import (
InsertAndSetFIFODepths,
RemoveShallowFIFOs,
)
from finn.transformation.fpgadataflow.set_folding import SetFolding
from finn.transformation.fpgadataflow.synth_ooc import SynthOutOfContext
from finn.transformation.fpgadataflow.vitis_build import VitisBuild
from finn.transformation.general import (
ApplyConfig,
GiveReadableTensorNames,
GiveUniqueNodeNames,
RemoveStaticGraphInputs,
RemoveUnusedTensors,
)
from finn.transformation.infer_data_layouts import InferDataLayouts
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.lower_convs_to_matmul import LowerConvsToMatMul
from finn.transformation.move_reshape import RemoveCNVtoFCFlatten
from finn.transformation.streamline import Streamline
from finn.transformation.streamline.reorder import MakeMaxPoolNHWC
from finn.util.config import extract_model_config_to_json
from finn.util.test import execute_parent
def verify_step(
......
@@ -26,6 +26,8 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from finn.custom_op.fpgadataflow.addstreams_batch import AddStreams_Batch
from finn.custom_op.fpgadataflow.channelwise_op_batch import ChannelwiseOp_Batch
from finn.custom_op.fpgadataflow.convolutioninputgenerator import (
ConvolutionInputGenerator,
)
@@ -33,28 +35,26 @@ from finn.custom_op.fpgadataflow.convolutioninputgenerator1d import (
ConvolutionInputGenerator1D,
)
from finn.custom_op.fpgadataflow.downsampler import DownSampler
from finn.custom_op.fpgadataflow.streamingfclayer_batch import StreamingFCLayer_Batch
from finn.custom_op.fpgadataflow.streamingmaxpool_batch import StreamingMaxPool_Batch
from finn.custom_op.fpgadataflow.streamingfifo import StreamingFIFO
from finn.custom_op.fpgadataflow.tlastmarker import TLastMarker
from finn.custom_op.fpgadataflow.duplicatestreams_batch import DuplicateStreams_Batch
from finn.custom_op.fpgadataflow.fmpadding_batch import FMPadding_Batch
from finn.custom_op.fpgadataflow.globalaccpool_batch import GlobalAccPool_Batch
from finn.custom_op.fpgadataflow.iodma import IODMA
from finn.custom_op.fpgadataflow.labelselect_batch import LabelSelect_Batch
from finn.custom_op.fpgadataflow.pool_batch import Pool_Batch
from finn.custom_op.fpgadataflow.streamingdataflowpartition import (
StreamingDataflowPartition,
)
from finn.custom_op.fpgadataflow.streamingdatawidthconverter_batch import (
StreamingDataWidthConverter_Batch,
)
from finn.custom_op.fpgadataflow.globalaccpool_batch import GlobalAccPool_Batch
from finn.custom_op.fpgadataflow.pool_batch import Pool_Batch
from finn.custom_op.fpgadataflow.fmpadding_batch import FMPadding_Batch
from finn.custom_op.fpgadataflow.streamingfclayer_batch import StreamingFCLayer_Batch
from finn.custom_op.fpgadataflow.streamingfifo import StreamingFIFO
from finn.custom_op.fpgadataflow.streamingmaxpool_batch import StreamingMaxPool_Batch
from finn.custom_op.fpgadataflow.thresholding_batch import Thresholding_Batch
from finn.custom_op.fpgadataflow.addstreams_batch import AddStreams_Batch
from finn.custom_op.fpgadataflow.labelselect_batch import LabelSelect_Batch
from finn.custom_op.fpgadataflow.duplicatestreams_batch import DuplicateStreams_Batch
from finn.custom_op.fpgadataflow.tlastmarker import TLastMarker
from finn.custom_op.fpgadataflow.vector_vector_activate_batch import (
Vector_Vector_Activate_Batch,
)
from finn.custom_op.fpgadataflow.channelwise_op_batch import ChannelwiseOp_Batch
from finn.custom_op.fpgadataflow.iodma import IODMA
from finn.custom_op.fpgadataflow.streamingdataflowpartition import (
StreamingDataflowPartition,
)
custom_op = dict()
......
@@ -26,13 +26,13 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import numpy as np
import os
import warnings
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from onnx import TensorProto, helper
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
......
@@ -26,12 +26,12 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from math import ceil
import os
import numpy as np
import os
import warnings
from math import ceil
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from finn.util.data_packing import (
@@ -39,9 +39,8 @@ from finn.util.data_packing import (
numpy_to_hls_code,
rtlsim_output_to_npy,
)
from . import templates
import warnings
from . import templates
# ONNX i/o tensor shape assumptions for channelwise ops:
# input 0 is the input tensor, shape (..., NumChannels)
@@ -217,7 +216,7 @@ class ChannelwiseOp_Batch(HLSCustomOp):
return 0
def lut_estimation(self):
"""Calculates LUT cost, taking memory resource type into account """
"""Calculates LUT cost, taking memory resource type into account"""
# TODO add in/out FIFO contributions
style = self.get_nodeattr("ram_style")
P = self.get_nodeattr("PE")
@@ -490,7 +489,9 @@ class ChannelwiseOp_Batch(HLSCustomOp):
numReps = numInputVectors[0]
self.code_gen_dict["$DEFINES$"] = [
"""#define NumChannels1 {}\n#define PE1 {}\n#define numReps {}""".format(
self.get_nodeattr("NumChannels"), self.get_nodeattr("PE"), numReps,
self.get_nodeattr("NumChannels"),
self.get_nodeattr("PE"),
numReps,
)
]
@@ -533,7 +534,9 @@ class ChannelwiseOp_Batch(HLSCustomOp):
self.code_gen_dict["$DOCOMPUTE$"] = [
"""Thresholding_Batch<{}, NumChannels1, PE1, {}, {}>
(in0, out, threshs, numReps);""".format(
imgdim, tmpl_args["TSrcI"], tmpl_args["TDstI"],
imgdim,
tmpl_args["TSrcI"],
tmpl_args["TDstI"],
)
]
......
@@ -26,15 +26,14 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import math
import numpy as np
import os
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from finn.custom_op.general.im2col import compute_conv_output_dim
from onnx import TensorProto, helper
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
# ONNX i/o tensor shape assumptions for ConvolutionInputGenerator:
......
@@ -26,15 +26,14 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import math
import numpy as np
import os
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from finn.custom_op.general.im2col import compute_conv_output_dim
from onnx import TensorProto, helper
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
# This operation should only be used for 1D convolutions. Either the
......
import os
import numpy as np
import os
import warnings
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
import warnings
class DownSampler(HLSCustomOp):
......
@@ -26,13 +26,13 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import numpy as np
import os
import warnings
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from onnx import helper, TensorProto
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
......
import os
import numpy as np
import os
import warnings
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
import warnings
class FMPadding_Batch(HLSCustomOp):
......
@@ -26,13 +26,13 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import numpy as np
import os
import warnings
from onnx import TensorProto, helper
from finn.core.datatype import DataType
from finn.custom_op.fpgadataflow.hlscustomop import HLSCustomOp
from onnx import TensorProto, helper
from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
......