- Jun 28, 2021
  - Javier M. Duarte authored
- Jun 25, 2021
  - Hendrik Borras authored
- May 10, 2021
  - Yaman Umuroglu authored
- Apr 22, 2021
  - Yaman Umuroglu authored
- Mar 24, 2021
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
  - Peter Lehnhardt authored
- Mar 23, 2021
  - Yaman Umuroglu authored
- Feb 26, 2021
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored:
    * [Notebook] tentative PYNQ deployment section
    * [Notebook] cybsec-3: workaround for driver bug
    * [Deps] update finn-base to get faster binary packing in driver
    * [Notebook] cybsec-3: add optional deployment sections
    (A sketch of what driver-side binary packing means follows this entry.)
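The finn-base update above speeds up the binary packing that the generated PYNQ driver applies to its inputs. As a rough illustration of the idea only, and not the finn-base implementation (whose function names and data layout may differ), packing bipolar {-1, +1} features into bytes can be sketched like this:

```python
import numpy as np

# Hypothetical helper, not the finn-base code: pack a batch of bipolar {-1, +1}
# inputs into uint8 words, 8 elements per byte, which is the kind of
# driver-side "binary packing" the [Deps] commit above refers to.
def pack_bipolar_inputs(x):
    bits = (x > 0).astype(np.uint8)    # map {-1, +1} -> {0, 1}
    return np.packbits(bits, axis=-1)  # 8 bits per output byte, MSB first

batch = np.random.choice([-1, 1], size=(10, 600)).astype(np.int8)
packed = pack_bipolar_inputs(batch)
print(packed.shape)  # (10, 75): 600 binary features -> 75 bytes per sample
```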
- Feb 24, 2021
  - Yaman Umuroglu authored
- Feb 23, 2021
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
- Feb 22, 2021
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
- Feb 21, 2021
  - Yaman Umuroglu authored:
    * [Notebook] start custom op nb
    * [Notebook] update custom_op notebook
    * [Notebook] update the custom op notebook to reflect new reg system
    * [Notebook] Added more descriptive text into the notebook (#278)
    * [Web] update publications
    * Build step: measure rtlsim performance (#262)
    * [Build] introduce step_measure_rtlsim_performance
    * [Docs] document the rtlsim performance step
    * [Docs] add comments on CPU/RAM/storage recommendations
    * [Build, Test] enable rtlsim perf as part of build test too
    * [Build] print where intermediate outputs are generated
    * [Docs] remove ghpages content, add note
    * [Docker] use -c continue for build_custom mode
    * [Docs] minor fixes
    * Driver data packing + improvements (#261)
    * [Driver] add driver_base.py as own template file + comments
    * [Driver] also move validation to own template + use in transform
    * [Driver] more comments
    * [Driver] suggested updates from PYNQ team + async mode exec_on_buffers
    * [Driver] allow smaller batchsize in execute_on_buffers
    * [Driver] optimize buffer alloc a bit
    * [Driver] wait condition fix
    * [Deps] update finn-base
    * [Deps] update finn-base
    * [Driver] enable fast_mode, expose more benchmarks
    * [Deps] update finn-base
    * [Driver] handle case with no rt weights
    * [Build] auto-exit build_custom if no errors
    * [Driver] get Alveo clock during test
    * Add report_utilization to stitched project tcl script (#243)
    * [Vitis] use -mode batch for report gen Vivado launch
    * Feature/cybersecurity notebook (#259)
    * Created cybersecurity notebook and downloaded data with wget
    * Reorganized the cybersecurity notebook
    * Read the entire dataset as a PyTorch tensor and trained a simple MLP on the UNSW_NB15 dataset
    * small changes to markdown
    * Training and testing have the same integer encoding
    * Added debugger tool
    * added loss visualization
    * added 1-hot encoder and separated dataloader. Added 1-hot encoder with scikit-learn; separated the dataloader into a Python file.
    * changes to get 99.998% accuracy
    * changed loss plot
    * accuracy at 70% after 50 epochs; however, loss is not ok
    * got 75% accuracy
    * see loss
    * added scheduled statistics
    * added iterator over all possible parameters
    * updates on automation debugging model error
    * Delete cybersecurity-checkpoint.ipynb
    * Delete cybersecurity_2-checkpoint.ipynb
    * Delete exemplofixe-checkpoint.ipynb
    * Delete UNSW_NB15_testing-set.csv
    * Delete UNSW_NB15_train.csv
    * Delete UNSW_NB15_training-set.csv
    * Delete UNSW_NB15_val.csv
    * general cleanup
    * general cleanup
    * updates with 75.7238% accuracy
    * added quantization of the dataset
    * debugging the quantization of the dataset
    * added plots for the debugging
    * debugging the quantization. Show the differences
    * update debugging
    * updates on debugging
    * updates on debugging quantization
    * updates on debug, uint32 df added
    * updates on debugging
    * added 2 pictures for the debugging
    * Added quantization of the dataset
    * added results for training the model with the quantized dataset
    * added quantization of the dataset
    * added new notebook with model definition with Brevitas
    * cleaning up documents
    * modified the model definition
    * Changed the loss function
    * Added debug with pdb
    * Successfully created neural network with Brevitas
    * Correctly quantize the dataset
    * Added export of onnx model
    * Added FINN validation of the Brevitas model
    * improved dataloader
    * general cleanup
    * General Cleanup
    * General Cleanup
    * Completed verifying the FINN model against Brevitas
    * Added new layer to MLP - Debugging
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    * verified that model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 inputs
    * verified that model with new layer and input shifted to accept {-1,1} outputs the same in Brevitas (input is in {-1,+1}) and in FINN (input is in {-1,+1}) for the 82332 inputs
    * General cleanup and added text
    * General cleanup: improved text
    * General cleanup: fixed text typos
    * General cleanup: Added text
    * Delete cybersecurity.ipynb
    * Delete dataloader.py
    * Rename cybersecurity_Brevitas_1bit.ipynb to 1-cybersecurity-Brevitas-1bit.ipynb
    * Rename cybersecurity_Brevitas_Verification.ipynb to 2-cybersecurity-finn-verification.ipynb
    * Added the last notebook
    * Added last notebook describing the finn build
    * Added changed parameters to see differences
    * Added changed parameters and added text
    * Fixed typo
    * [Notebooks] reorganize into folders, add README for cybsec
    * [Notebook] add license header and refs to cybsec dataset quantizer
    * [Notebooks] rename cybsec notebook files
    * [Notebook] first pass thru cybsec part 1
    * [Notebook] refactor more of cybsec part 1
    * [Notebook] add option to use pretrained weights
    * fixed outline and typo
    * [Notebook] update cybsec notebook #2 and gitignore
    * [Notebook] start refactoring cybsec part 3
    * [ConvertToHLS] allow out_scale=2 for bipolar MT
    * [Build] add alternative set of steps for estimation only
    * [Transform] attempt to handle padding for IODMAs
    * [Transform] explicitly ignore IODMA nodes for InsertDWC
    * [Notebook] full pass over cybsec notebook 3
    * [Util] move vivado utils into finn-base
    * fixed outline
    * [HLSCustomOp] better err msg on ipgen failure
      Co-authored-by: Alina Vasilciuc <alinav@xlnx.xilinx.com>
      Co-authored-by: Yaman Umuroglu <yamanu@xilinx.com>
    * [Notebook] add README, remove mobilenet notebook
    * [Docs] round of updates
    * [Docs] make stack image local
    * [Docs] apidocs updates
    * [Docs] update README for v0.5b
    * [Release] merge dev into master for v0.5b
    * [Docs] bring back missing img
    * [Docs] bring back missing images
    * Switch hlslib comparison functions (#263)
    * [Test] add lfc to end2end tests
    * [Deps] update hlslib to get comp:: fxns
    * [HLS] use comp:: comparators instead of std::
    * [VVAU] hlslib now uses inner prod dim instead of K
    * [Thres] manually work around vivado_hls bug for T[0][0]=0
    * [Infra] add .vscode to .gitignore
    * [Build] separate HLS codegen and ipgen steps (#265)
    * [Docs] first version of developer docs
    * [Notebook] fix broken resource paths
    * Fix Im2Col attributes for newer finn-base (#276)
    * [Deps] update finn-base
    * [Refactor] fix im2col props for updated finn-base
    * [Docs] Add FAQ page to the Documentation (#273)
    * [Docs] Add FAQ page into the Docs
    * [Docs] some minor text format changes.
    * [Docs] updates to FAQ
      Co-authored-by: Yaman Umuroglu <yaman.umuroglu@xilinx.com>
    * [Notebook] Added more descriptive text into the notebook
    * Fix ZCU102 support in Vivado shell project (#280)
    * [ConvertToHLS] fix conversion for bipolar outputs
    * [Notebook] updates to custom op notebook
      Co-authored-by: Yaman Umuroglu <yamanu@xilinx.com>
      Co-authored-by: Tobi-Alonso <tobi.alonso@gmail.com>
      Co-authored-by: alinavalinav <60705229+alinavalinav@users.noreply.github.com>
      Co-authored-by: Alina Vasilciuc <alinav@xlnx.xilinx.com>
      Co-authored-by: Yaman Umuroglu <yaman.umuroglu@xilinx.com>
      Co-authored-by: Felix Jentzsch <45395194+fpjentzsch@users.noreply.github.com>
      Co-authored-by: jalezeta <51440887+jalezeta@users.noreply.github.com>
      Co-authored-by: Tobi-Alonso <tobi.alonso@gmail.com>
      Co-authored-by: alinavalinav <60705229+alinavalinav@users.noreply.github.com>
      Co-authored-by: Alina Vasilciuc <alinav@xlnx.xilinx.com>
      Co-authored-by: Felix Jentzsch <45395194+fpjentzsch@users.noreply.github.com>
    (A sketch of a build-flow configuration that exercises step_measure_rtlsim_performance follows this entry.)
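Several of the [Build] commits in this release revolve around the finn.builder flow and the new step_measure_rtlsim_performance step. Below is a minimal sketch of how such a build might be configured; the config fields, step names and enum values follow the FINN build API of this period and may differ in other releases, and "model.onnx" plus the FPGA part number are placeholders.

```python
# Rough sketch of the finn.builder flow these [Build] commits refer to,
# including the new rtlsim performance measurement on the stitched IP.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="build_output",
    synth_clk_period_ns=10.0,
    fpga_part="xc7z020clg400-1",   # placeholder part
    target_fps=100000,
    generate_outputs=[
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
        build_cfg.DataflowOutputType.STITCHED_IP,
        # requests the rtlsim performance report produced by
        # step_measure_rtlsim_performance
        build_cfg.DataflowOutputType.RTLSIM_PERFORMANCE,
    ],
)
build.build_dataflow_cfg("model.onnx", cfg)
```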
- Feb 18, 2021
  - jalezeta authored
  - Yaman Umuroglu authored
- Feb 08, 2021
  - Yaman Umuroglu authored
- Feb 04, 2021
  - jalezeta authored:
    * Move Python package installations into the finn_dev Dockerfile
    * Delete previous run results to avoid "File exists" error.
    * [Docker] pin new packages to versions
      Co-authored-by: Yaman Umuroglu <yaman.umuroglu@xilinx.com>
- Feb 01, 2021
  - Yaman Umuroglu authored
  - Hendrik Borras authored
- Dec 17, 2020
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
  - alinavalinav authored:
    * Created cybersecurity notebook and downloaded data with wget
    * Reorganized the cybersecurity notebook
    * Read the entire dataset as a PyTorch tensor and trained a simple MLP on the UNSW_NB15 dataset
    * small changes to markdown
    * Training and testing have the same integer encoding
    * Added debugger tool
    * added loss visualization
    * added 1-hot encoder and separated dataloader. Added 1-hot encoder with scikit-learn; separated the dataloader into a Python file.
    * changes to get 99.998% accuracy
    * changed loss plot
    * accuracy at 70% after 50 epochs; however, loss is not ok
    * got 75% accuracy
    * see loss
    * added scheduled statistics
    * added iterator over all possible parameters
    * updates on automation debugging model error
    * Delete cybersecurity-checkpoint.ipynb
    * Delete cybersecurity_2-checkpoint.ipynb
    * Delete exemplofixe-checkpoint.ipynb
    * Delete UNSW_NB15_testing-set.csv
    * Delete UNSW_NB15_train.csv
    * Delete UNSW_NB15_training-set.csv
    * Delete UNSW_NB15_val.csv
    * general cleanup
    * general cleanup
    * updates with 75.7238% accuracy
    * added quantization of the dataset
    * debugging the quantization of the dataset
    * added plots for the debugging
    * debugging the quantization. Show the differences
    * update debugging
    * updates on debugging
    * updates on debugging quantization
    * updates on debug, uint32 df added
    * updates on debugging
    * added 2 pictures for the debugging
    * Added quantization of the dataset
    * added results for training the model with the quantized dataset
    * added quantization of the dataset
    * added new notebook with model definition with Brevitas
    * cleaning up documents
    * modified the model definition
    * Changed the loss function
    * Added debug with pdb
    * Successfully created neural network with Brevitas
    * Correctly quantize the dataset
    * Added export of onnx model
    * Added FINN validation of the Brevitas model
    * improved dataloader
    * general cleanup
    * General Cleanup
    * General Cleanup
    * Completed verifying the FINN model against Brevitas
    * Added new layer to MLP - Debugging
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    * verified that model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 inputs
    * verified that model with new layer and input shifted to accept {-1,1} outputs the same in Brevitas (input is in {-1,+1}) and in FINN (input is in {-1,+1}) for the 82332 inputs
    * General cleanup and added text
    * General cleanup: improved text
    * General cleanup: fixed text typos
    * General cleanup: Added text
    * Delete cybersecurity.ipynb
    * Delete dataloader.py
    * Rename cybersecurity_Brevitas_1bit.ipynb to 1-cybersecurity-Brevitas-1bit.ipynb
    * Rename cybersecurity_Brevitas_Verification.ipynb to 2-cybersecurity-finn-verification.ipynb
    * Added the last notebook
    * Added last notebook describing the finn build
    * Added changed parameters to see differences
    * Added changed parameters and added text
    * Fixed typo
    * [Notebooks] reorganize into folders, add README for cybsec
    * [Notebook] add license header and refs to cybsec dataset quantizer
    * [Notebooks] rename cybsec notebook files
    * [Notebook] first pass thru cybsec part 1
    * [Notebook] refactor more of cybsec part 1
    * [Notebook] add option to use pretrained weights
    * fixed outline and typo
    * [Notebook] update cybsec notebook #2 and gitignore
    * [Notebook] start refactoring cybsec part 3
    * [ConvertToHLS] allow out_scale=2 for bipolar MT
    * [Build] add alternative set of steps for estimation only
    * [Transform] attempt to handle padding for IODMAs
    * [Transform] explicitly ignore IODMA nodes for InsertDWC
    * [Notebook] full pass over cybsec notebook 3
    * [Util] move vivado utils into finn-base
    * fixed outline
    * [HLSCustomOp] better err msg on ipgen failure
      Co-authored-by: Alina Vasilciuc <alinav@xlnx.xilinx.com>
      Co-authored-by: Yaman Umuroglu <yamanu@xilinx.com>
    (A sketch of the Brevitas MLP and FINN-ONNX export described above follows this entry.)
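The notebook work above builds a quantized MLP for UNSW_NB15 in Brevitas, adds a QuantIdentity input quantizer so the network accepts bipolar {-1, +1} inputs, and exports the model to FINN-ONNX for verification. A minimal sketch under assumed layer sizes and bit widths (not the notebook's exact values):

```python
# Minimal sketch of a Brevitas quantized MLP along the lines described above.
# Layer sizes and bit widths are illustrative; the input QuantIdentity is shown
# with default settings, whereas the notebook configures it as a 1-bit bipolar
# {-1, +1} quantizer.
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU

num_features = 600   # assumed width after one-hot encoding the UNSW_NB15 features
hidden = 64
w_bits = a_bits = 2  # illustrative weight / activation bit widths

model = nn.Sequential(
    QuantIdentity(),  # input quantizer (1-bit bipolar in the notebook)
    QuantLinear(num_features, hidden, bias=True, weight_bit_width=w_bits),
    nn.BatchNorm1d(hidden),
    QuantReLU(bit_width=a_bits),
    QuantLinear(hidden, 1, bias=True, weight_bit_width=w_bits),
)

# Export to FINN-ONNX for the verification notebook; this was the export entry
# point of that era (newer Brevitas releases moved export under brevitas.export).
import brevitas.onnx as bo
bo.export_finn_onnx(model, input_shape=(1, num_features), export_path="cybsec_mlp.onnx")
```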
- Dec 01, 2020
  - Yaman Umuroglu authored:
    * [Util] Add mobilenet to test.py in util
    * [Test] Add first draft of brevitas export unit test into test suite
    * [Test] Shorten import of mobilenet and add example picture
    * [Docker] Temporarily set brevitas commit version to auphelia brevitas fork
    * [Test] Add transformation and execution function to mobilenet test
    * [Docker] Change brevitas repo back to Xilinx repo
    * [Test] Add transformations and execution to mobilenet test
    * [Docker] Update brevitas commit version
    * [Test] Insert Topk into mobilenet test
    * [Test] Use Top5 to verify mobilenet functionality of execution in FINN
    * [CustomOp] Remove rounding in QuantAvgPool2d
    * [Test] Add first streamlining transformations to mobilenet test
    * [Notebook] Add end2end notebook for mobilenet-v1
    * [Test] Add transformation to streamline mobilenet-v1
    * [Test & Notebook] Update streamlining and lowering of mobilenet-v1
    * [Test] Add test setup for move flatten transformation
    * [Test] Add tidy up trafo mobilenet test
    * [Streamline] Add reorder fct to move flatten past matmul, mul or add
    * [Test] Update test to check functional verification after MoveFlatten trafo
    * [Test] Add new trafos to mobilenet test
    * [Streamline] Add drafts for move transformations to reorder trafos
    * [Test] Delete obsolete test and update mobilenet test
    * [Streamline] Fix missing return in MoveTransposePastScalarMul
    * [Test] Update mobilenet test with new transformations
    * [Test] Add mul value to make model outputs comparable (mobilenet-v1)
    * [Test & Notebook] Update mobilenet-v1 streamlining
    * [Test] Add preprocessing as exportable PyTorch module for mobilenet and merge models
    * [Util] Add PyTorch modules for imagenet normalize preprocessing
    * [Util] Add functions to resize and centercrop a PIL image
    * [Test] Refactor mobilenet test
    * [Test] Set input finn dtype and fix bug with saving onnx checkpoints
    * [Test] First draft of end2end test mobilenet (prepare model for flow)
    * [Test] Add streamlining and lowering to end2end mobilenet test
    * [Test] Add hls conversion and dataflow partitioning to mobilenet end2end test
    * [Transform] ConvertToHLSLayers add support for QuantAvgPool2d with data layout NHWC
    * [Test] Save golden output for end2end mobilenet
    * [Test] Add folding and draft for verification to end2end mobilenet
    * [Transform] Fix bug in insertion of pool batch node
    * [Test] Add time measurement to end2end mobilenet
    * [Test] Add ip gen and rtlsim to end2end mobilenet test
    * [Transform] Add missing import HLS conversion
    * [Test] Clean dataflow partition of mobilenet before saving
    * [Docker] mount imgnet val if specified
    * [Util] add some ImageNet val utils
    * [Test] add validation test for MobileNet-v1
    * [Util] support logging QuantTensors in forward hook
    * [Test] add debug option for tensorwise comparison in validate_mobilenet
    * [Test] Delete streamlining part from mobilenet export test
    * [Test] pre-commit test_brevitas_mobilenet
    * [Util] Set resample=0 in PIL resize function
    * [Doc] document the imagenet val env.var
    * [Test] mark mobilenet val test as xfail
    * [Test] correct typo in MobileNet-v1 val test
    * [Test] fix MobileNet-v1 validation test for multiple imgs
    * [Util] fix get_val_images for ImageNet validation
    * [Util] more ImageNet testing utils
    * [Test] use new utils in MobileNet-v1 tests
    * [Util] update ImageNet utils to use torchvision utils
    * [Test] test preproc only in test_brevitas_mobilenet_preproc
    * [Util] add option to control get_val_images order
    * [Test] different classes for mobilenet comparison
    * [StreamingFC] clip thresholds larger than acc
    * [VVAU] add accumulator minimization and threshold clipping
    * [HLSCustomOp] clip thresholds on both sides if needed
    * [Transform] call acc minimization for VVAU too
    * [Test] reorder tests for end2end mobilenet
    * [Test] fixes to MobileNet validation after merge
    * [Test] MobileNet-v1: temp fix for export + add fifo set and build
    * [Transform] fix num inp vectors for InferLabelSelect
    * [Test] MobileNet: bring back labelselect, use dataflow partition
    * [Deps] update Brevitas to get mobilenet export fix
    * [Test] bring back export for mobilenet-v1 end2end
    * [Test] MobileNet-v1: add extra_fold, reorder tests
    * [Test] MobileNet-v1: additional marks + bugfix
    * [Test] MobileNet-v1: fix build dir
    * [LabelSelect] fix cppsim bug
    * [SetFIFODepths] allow overriding auto for large FIFOs
    * [Test] MobileNet-v1: add more config options to mnv1 end2end test
    * [Vitis] enable Vivado physopts with PERFORMANCE_BEST
    * [Test] MobileNet-v1 end2end: aim for higher perf
    * [Build] add more build options + minor improvements, including Vitis build strategy, large FIFO mem mode + ability to spec custom fifo depths
    * [Docker] minor improvements in run-docker.sh
    * [Docker] new attempt at handling XRT deps
    * [Test] mark semi-failing MNv1 tests as xfail
    * [Infra] fix entrypoint script working dirs
    * [Build] allow specifying fxns as build steps
    * [Build] print build log location
    * [InsertFIFO] allow creating shallow FIFOs if desired
    * [Build] create shallow FIFOs to use ApplyConfig, then remove as needed
    * [Infra] use abspath for Dockerfile
    * Revert "[Infra] use abspath for Dockerfile"; this reverts commit 010fb910b140e7539e1599862681a4d520171388.
    * [Infra] better solution for run-docker.sh from outside
    * [HLSCustomOp] add directory check after running IPGenBuilder
    * [Build] rename to step_set_fifo_depths, fix non-auto depth case
    * [Build] typo fix
      Co-authored-by: auphelia <jakobapk@web.de>
    (A sketch of the streamlining transform pattern used here follows this entry.)
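Much of the MobileNet-v1 work above is expressed as FINN graph transformations applied through ModelWrapper.transform(). A sketch of that pattern, with transformation module paths as they appeared in FINN around this period (some later moved to finn-base/qonnx) and placeholder filenames:

```python
# Sketch of the ModelWrapper.transform(...) pattern the MobileNet-v1
# streamlining commits above build on; not the exact test flow.
from finn.core.modelwrapper import ModelWrapper
from finn.transformation.general import GiveUniqueNodeNames
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.infer_shapes import InferShapes
from finn.transformation.streamline import Streamline
from finn.transformation.lower_convs_to_matmul import LowerConvsToMatMul

model = ModelWrapper("mobilenet_v1_export.onnx")   # placeholder filename
model = model.transform(InferShapes())
model = model.transform(GiveUniqueNodeNames())
# Streamline applies a set of absorb/reorder transforms; the MobileNet flow
# layers extra reorder steps on top, e.g. moving Flatten past MatMul/Mul/Add.
model = model.transform(Streamline())
model = model.transform(LowerConvsToMatMul())
model = model.transform(InferDataTypes())
model.save("mobilenet_v1_streamlined.onnx")
```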
- Oct 28, 2020
  - Yaman Umuroglu authored:
    * [Refactor] use getCustomOp instead of direct registry access
    * [Refactor] move HLSCustomOp base to own file
    * [Refactor] register all HLSCustomOps in new style
    * [Refactor] use correct domain for custom ops acc. to new style
    * [Deps] update finn-base to get new-style customop domains
    * [Refactor] more domain fixes
    * [Test] fix ipstitch expected io values in rtlsim
    * [Deps] update finn-base and brevitas
    * [Docs] link to CustomOp reorg PR
    (A sketch of the new-style getCustomOp access follows this entry.)
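A sketch of what the new-style access looks like in practice: getCustomOp() resolves the wrapper class from the node's domain, and node attributes are then read and written through it. The op type check, the folding attribute names ("PE", "SIMD") and the filenames are illustrative, not taken from a specific test.

```python
# Sketch of the new-style custom op access mentioned above: wrap a node with
# getCustomOp() instead of touching the registry directly.
from finn.core.modelwrapper import ModelWrapper
from finn.custom_op.registry import getCustomOp

model = ModelWrapper("dataflow_model.onnx")  # placeholder filename
for node in model.graph.node:
    if node.op_type == "StreamingFCLayer_Batch":
        inst = getCustomOp(node)   # resolved via the node's (new-style) domain
        inst.set_nodeattr("PE", 4)
        inst.set_nodeattr("SIMD", 8)
        print(node.name, "PE =", inst.get_nodeattr("PE"))
model.save("dataflow_model_folded.onnx")
```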
- Oct 05, 2020
  - Yaman Umuroglu authored
- Sep 21, 2020
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
- Sep 11, 2020
  - Yaman Umuroglu authored:
    called prior to transformations now by default
- Sep 10, 2020
  - Yaman Umuroglu authored
- Sep 09, 2020
  - Yaman Umuroglu authored
- Sep 03, 2020
  - Yaman Umuroglu authored
- Aug 20, 2020
  - Yaman Umuroglu authored
  - Yaman Umuroglu authored
- Aug 18, 2020
  - Yaman Umuroglu authored