Select Git revision
  • accl (default)
  • adegendt/accl_clone
  • dev
  • full_accl_support
  • georg/accl-build
  • georg/accl-complete
  • georg/accl-finn
  • georg/end2end
  • main (protected)
Commit history (newest first):
  • [CustomOp] increase AP_INT_MAX_W to 16384
  • [CustomOp] use -O3 in npysim compilation for faster runs
  • [Test] call InferStreamingMaxPool in cnv-w1a1 HLS conv test
  • [Transform] add InferStreamingMaxPool
  • [End2end notebook] Add new finn flow diagram
  • [CustomOp, Test] rewrite StreamingMaxPool and test suite
  • [SWG] minor fix in num output elems calc
  • [End2end notebooks] Change FINNFlow diagram and text accordingly
  • [StreamingFC] Change streaming MVAU interface back to version before
  • [Test] call InferConvInpGen as part of cnv-w1a1 HLS test
  • [Transform] add a first v. of InferConvInpGen
  • [Transform] respect inner dim internal order while lowering
  • [SWG] fix get_number_output_values
  • [SWG] add an extra SIMD=IFM check
  • [Test] add multichan test cases for swg
  • [Test] fix conv lowering test
  • [CustomOp] use standard data un/packing for ConvInpGen exec
  • [StreamingFC] take changes from npysim branch
  • [StreamingFC] Change streaming MVAU interface
  • [Docker] Change finn-hlslib version temporarily to version from T.Alonso
  • [CustomOp] change ConvInpGen layout, various shape fixes
  • [StreamingFC] Fixed bug in saving of weight .npy file
  • [CustomOp] add shape and type inference to ConvInpGen
  • [Test] npysim/rtlsim as param for test_fpgadataflow_slidingwindow
  • [Transform] call InferDataTypes after bipolar2xnor
  • [Test] use model later on in transforms as golden for cnv-w1a1
  • [Test] use smaller PE/SIMD values in cnv-w1a1 conversion test
  • [StreamingFC] numReps = num inp vectors, reread decoupled weights
  • [Util] support numReps in npy<>apintstream adapters
  • [Test] switch to decoupled weights for cnv-w1a1 hls test
  • Merge branch 'feature/weight_streamers_npysim' of https://github.com/Xilinx/finn into feature/cnv_w1a1_convert_to_hls_layers
  • [Transform] set numInpVectors when inferring StreamingFC
  • [StreamingFC] more i/o shape fixes to support mat-mat
  • [StreamingFC] consider num vecs when producing i/o folded shape
  • [StreamingFC] use get_folded_shape fxns throughout
  • [StreamingFC] Unflip weight tensor for .dat files (rtlsim)
  • [Test] add more steps in cnvw1a1 HLS conversion test
  • [Transform] relax matmul shape in ConvertToHLSLayers
  • [Transform] handle upstream MT finding in bipolar2xnor
  • [DataTypeInf] infer odt=idt for certain node types