Branches (9):
  • accl (default)
  • adegendt/accl_clone
  • dev
  • full_accl_support
  • georg/accl-build
  • georg/accl-complete
  • georg/accl-finn
  • georg/end2end
  • main (protected)
Commit history (newest first; the graph's date axis did not survive extraction):
  • [Refactor] use get_accumulator_dt_cands as part of DataType rf.
  • [Deps] update finn-base
  • Added support for moving scalar ops, which are not flat.
  • [Pack, Test] fix npy<>stream packing for fixed pt, add test
  • [Refactor] Datatype.X -> DataType["X"]
  • [Deps] update finn-base to get refactored dtypes+fixed-point
  • Small refactoring for QONNX activation handlers.
  • Add optional support for QONNX ingestion in the dataflow builder.
  • Update qonnx commit.
  • [Test] update expected artifacts in test_build_dataflow_directory
  • [Build] add cfg option to spec rtlsim perf batch size
  • [Build] save autogenerated folding config to .json
  • Added mobilenet to test_QONNX_to_FINN test.
  • Correct for fp accuracy issues during Quant constant folding.
  • Added support for converting Gemm to MatMul.
  • Updated QONNX commit.
  • Catch unsupported constant folding for SCALED datatypes.
  • Added support for FoldQuantWeights into Add nodes.
  • Updated QONNX commit.
  • Merge pull request #379 from jterry-x/master
  • Merge pull request #382 from Xilinx/fix/fclk_override
  • [Stitch] always enable user-resolve mode for bus interfaces
  • [Zynq] fix auto fclk setting req mode for Zynq UltraScale+
  • Made node insertion optional if nothing would change.
  • Automatically move scalar and bias node into MultiThreshold node where possible.
  • Moved removal of FINN datatypes into main QONNX to FINN transformation.
  • Moved QONNX transformations to a new folder and moved activation handlers to their own file.
  • Fixed a potential bug where removing the FINN datatype on a tensor would also remove other quantization information.
  • Resolved UserWarnings after QONNX to FINN conversion.
  • Added support for catching UserWarnings emitted by FINN during onnx execution as errors.
  • Updated architecture families to all current types, minus the CPLDs
  • Added analysis test for QONNX to FINN conversion checking for Quant nodes.
  • Added test for QONNX to FINN conversion with FINN sample models.
  • Added Copyright disclaimer.
  • Added rudimentary doc strings.
  • Moved allowed_predecessors definition into handler classes.
  • Added overall transformation for converting QONNX to FINN-ONNX.
  • Added compatibility check for identity and relu activation conversion.
  • Added support for constant folding Quant weight nodes with per-channel scaling for convolutions.
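One refactor in the log above, `Datatype.X -> DataType["X"]` (paired with "update finn-base to get refactored dtypes+fixed-point"), trades attribute-style enum access for string-keyed lookup. A minimal, hypothetical sketch of why that helps: a fixed enum can only offer the members it was written with, while a lookup can construct parametric datatypes (e.g. arbitrary integer widths) on demand. This is an illustration of the pattern only, not FINN's actual `DataType` implementation:

```python
# Hypothetical sketch of the enum-to-lookup refactor ("Datatype.X -> DataType['X']").
# FINN's real DataType lives in finn-base/qonnx; names and fields here are invented.

class DataTypeMeta(type):
    # __getitem__ on the metaclass enables class-level subscription: DataType["INT8"].
    def __getitem__(cls, name):
        return cls.resolve(name)

class DataType(metaclass=DataTypeMeta):
    # A few fixed members, as an attribute-style enum would have had.
    _fixed = {"BINARY": (0, 1), "UINT8": (0, 255)}

    def __init__(self, name, min_val, max_val):
        self.name, self.min, self.max = name, min_val, max_val

    @classmethod
    def resolve(cls, name):
        if name in cls._fixed:
            lo, hi = cls._fixed[name]
            return cls(name, lo, hi)
        if name.startswith("INT") and name[3:].isdigit():
            # Parametric widths (INT2, INT5, ...) are built on demand --
            # something a hard-coded attribute enum cannot do.
            bits = int(name[3:])
            return cls(name, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        raise KeyError(name)

print(DataType["INT8"].min)  # -128
print(DataType["INT5"].max)  # 15
```

The same string-keyed scheme extends naturally to fixed-point types (e.g. a name carrying total and fractional bit counts), which matches the "refactored dtypes+fixed-point" dependency bump in the log.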