    Feature/cybersecurity notebook (#259) · 004475a4
    alinavalinav authored
    
    * Created cybersecurity notebook and downloaded data with wget
    
    * Reorganized the cybersecurity notebook
    
    * Read the entire dataset as a PyTorch tensor and trained a simple MLP on the UNSW_NB15 dataset
    
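    A minimal sketch of this step, assuming the column layout of the UNSW-NB15 release; the file name, layer widths, and training settings are placeholders rather than the notebook's exact values:

    ```python
    import pandas as pd
    import torch
    import torch.nn as nn

    # Load the UNSW-NB15 training split; the file name is a placeholder.
    df = pd.read_csv("UNSW_NB15_training-set.csv")

    # Keep only numeric columns for this sketch and use the binary "label"
    # column (0 = normal, 1 = attack) as the target.
    features = torch.tensor(
        df.select_dtypes("number").drop(columns=["label"]).values,
        dtype=torch.float32,
    )
    labels = torch.tensor(df["label"].values, dtype=torch.float32).unsqueeze(1)

    # A simple MLP; the layer widths are illustrative only.
    mlp = nn.Sequential(
        nn.Linear(features.shape[1], 64),
        nn.ReLU(),
        nn.Linear(64, 64),
        nn.ReLU(),
        nn.Linear(64, 1),
    )

    opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for epoch in range(10):
        opt.zero_grad()
        loss = loss_fn(mlp(features), labels)
        loss.backward()
        opt.step()
    ```
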
    * small changes to markdown
    
    * Training and test sets now use the same integer encoding
    
    * Added debugger tool
    
    * added loss visualization
    
    * added one-hot encoder and separated dataloader
    
    Added one-hot encoder with scikit-learn.
    Separated the dataloader into a Python file.
    
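    A sketch of what the separate dataloader module could look like at this point; the categorical column names and batch size are assumptions, and in practice the encoder fitted on the training split should be reused for the test split so both share the same encoding:

    ```python
    # dataloader.py (name matches the module later deleted in this branch)
    import numpy as np
    import pandas as pd
    import torch
    from sklearn.preprocessing import OneHotEncoder
    from torch.utils.data import DataLoader, TensorDataset

    def make_loader(csv_path, batch_size=1024):
        df = pd.read_csv(csv_path)
        cat_cols = ["proto", "service", "state"]  # nominal UNSW-NB15 columns (assumption)
        enc = OneHotEncoder(handle_unknown="ignore")
        onehot = enc.fit_transform(df[cat_cols]).toarray()
        numeric = df.drop(columns=cat_cols + ["label"]).select_dtypes("number").values
        x = torch.tensor(np.concatenate([numeric, onehot], axis=1), dtype=torch.float32)
        y = torch.tensor(df["label"].values, dtype=torch.float32).unsqueeze(1)
        return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)
    ```
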
    * changes to get 99.998% accuracy
    
    * changed loss plot
    
    * accuracy at 70% after 50 epochs
    
    However, the loss curve still does not look right
    
    * got 75% accuracy
    
    * see loss
    
    * added scheduled statistics
    
    * added iterator over all possible parameters
    
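    Iterating over all parameter combinations can be done with itertools.product; the grid below is purely illustrative:

    ```python
    import itertools

    # Hypothetical search space; the notebook's actual grid is not recorded here.
    param_grid = {
        "lr": [1e-2, 1e-3, 1e-4],
        "hidden_size": [64, 128],
        "batch_size": [256, 1024],
    }

    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        # train_and_evaluate(params) would run one configuration; placeholder name.
        print(params)
    ```
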
    * updates on automation
    
    debugging model error
    
    * Delete cybersecurity-checkpoint.ipynb
    
    * Delete cybersecurity_2-checkpoint.ipynb
    
    * Delete exemplofixe-checkpoint.ipynb
    
    * Delete UNSW_NB15_testing-set.csv
    
    * Delete UNSW_NB15_train.csv
    
    * Delete UNSW_NB15_training-set.csv
    
    * Delete UNSW_NB15_val.csv
    
    * general cleanup
    
    * general cleanup
    
    * updates with 75.7238% accuracy
    
    * added quantization of the dataset
    
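    One possible per-column quantization scheme, sketched with NumPy; the bit width and min/max scaling are assumptions, not necessarily the notebook's exact method:

    ```python
    import numpy as np

    def quantize_columns(x, bits=8):
        """Map each column of float features to unsigned integers in [0, 2**bits - 1]."""
        x = np.asarray(x, dtype=np.float64)
        col_min = x.min(axis=0)
        col_max = x.max(axis=0)
        scale = np.where(col_max > col_min, col_max - col_min, 1.0)
        q = np.round((x - col_min) / scale * (2**bits - 1))
        return q.astype(np.uint32)  # uint32, matching the debug dataframe mentioned above
    ```
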
    * debugging the quantization of the dataset
    
    * added plots for the debugging
    
    * debugging the quantization. Show the differences
    
    * update debugging
    
    * updates on debugging
    
    * updates on debugging quantization
    
    * updates on debugging, uint32 dataframe added
    
    * updates on debugging
    
    * added 2 pictures for the debugging
    
    * Added quantization of the dataset
    
    * added results for training the model with the quantized dataset
    
    * added quantization of the dataset
    
    * added new notebook with the model definition in Brevitas
    
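    A sketch of a low-bit Brevitas MLP of the kind defined here; the layer widths, bit widths, and use of BatchNorm are assumptions:

    ```python
    import torch.nn as nn
    from brevitas.nn import QuantLinear, QuantReLU

    def make_quant_mlp(in_features, hidden=64, weight_bits=1, act_bits=1):
        return nn.Sequential(
            QuantLinear(in_features, hidden, bias=True, weight_bit_width=weight_bits),
            nn.BatchNorm1d(hidden),
            QuantReLU(bit_width=act_bits),
            QuantLinear(hidden, hidden, bias=True, weight_bit_width=weight_bits),
            nn.BatchNorm1d(hidden),
            QuantReLU(bit_width=act_bits),
            QuantLinear(hidden, 1, bias=True, weight_bit_width=weight_bits),
        )
    ```
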
    * cleaning up documents
    
    * modified the model definition
    
    * Changed the loss function
    
    * Added debug with pdb
    
    * Successfully created neural network with Brevitas
    
    * Correctly quantize the dataset
    
    * Added export of onnx model
    
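    The export step, assuming the export_finn_onnx helper used by FINN tutorials of this period (newer Brevitas versions expose export functions under brevitas.export); the one-layer model below is a placeholder for the trained MLP:

    ```python
    import brevitas.onnx as bo
    from brevitas.nn import QuantLinear

    # Placeholder model standing in for the trained Brevitas MLP from the sketch above.
    in_features = 64
    model = QuantLinear(in_features, 1, bias=True, weight_bit_width=1)
    model.eval()

    # Export to FINN-ONNX; helper name and keyword arguments follow the FINN
    # tutorials of this period and may differ in newer Brevitas releases.
    bo.export_finn_onnx(model, input_shape=(1, in_features), export_path="cybsec-mlp.onnx")
    ```
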
    * Added FINN validation of the Brevitas model
    
    * improved dataloader
    
    * general cleanup
    
    * General Cleanup
    
    * General Cleanup
    
    * Completed verifying the FINN model against Brevitas
    
    * Added new layer to MLP - Debugging
    
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    
    * verified that the MLP model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 test inputs
    
    * verified that model with new layer (QuantIdentity) outputs the same in Brevitas and in FINN for all 82332 inputs
    
    * verified that the model with the new layer and the input shifted to {-1,+1} produces the same outputs in Brevitas and in FINN for all 82332 test inputs
    
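    A sketch of the Brevitas-vs-FINN comparison; the module paths follow the FINN API of this period, and test_inputs and model are placeholders for the quantized {-1,+1} test set and the trained Brevitas network:

    ```python
    import numpy as np
    import torch
    from finn.core.modelwrapper import ModelWrapper
    from finn.core.onnx_exec import execute_onnx

    finn_model = ModelWrapper("cybsec-mlp.onnx")
    iname = finn_model.graph.input[0].name
    oname = finn_model.graph.output[0].name

    # test_inputs: numpy array of shape (N, in_features), already in {-1, +1};
    # model: the trained Brevitas network. Both are placeholders here.
    for x in test_inputs:
        inp = x.reshape(1, -1).astype(np.float32)
        finn_out = execute_onnx(finn_model, {iname: inp})[oname]
        brevitas_out = model(torch.from_numpy(inp)).detach().numpy()
        assert np.isclose(finn_out, brevitas_out, atol=1e-3).all()
    ```
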
    * General cleanup and added text
    
    * General cleanup: improved text
    
    * General cleanup: fixed text typos
    
    * General cleanup: Added text
    
    * Delete cybersecurity.ipynb
    
    * Delete dataloader.py
    
    * Rename cybersecurity_Brevitas_1bit.ipynb to 1-cybersecurity-Brevitas-1bit.ipynb
    
    * Rename cybersecurity_Brevitas_Verification.ipynb to 2-cybersecurity-finn-verification.ipynb
    
    * Added the last notebook
    
    * Added the last notebook describing the FINN build
    
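    The build described in the last notebook is driven by a DataflowBuildConfig; the board, clock period, and target throughput below are placeholders, and generate_outputs can be limited to ESTIMATE_REPORTS for the estimation-only flow mentioned further down:

    ```python
    import finn.builder.build_dataflow as build
    import finn.builder.build_dataflow_config as build_cfg

    cfg = build_cfg.DataflowBuildConfig(
        output_dir="output_cybsec_mlp",       # placeholder output directory
        target_fps=1000000,                   # placeholder throughput target
        synth_clk_period_ns=10.0,             # placeholder clock period
        board="Pynq-Z1",                      # placeholder board name
        shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
        generate_outputs=[
            build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
            build_cfg.DataflowOutputType.BITFILE,
        ],
    )
    build.build_dataflow_cfg("cybsec-mlp.onnx", cfg)
    ```
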
    * Added changed parameters to see differences
    
    * Added changed parameters and added text
    
    * Fixed typo
    
    * [Notebooks] reorganize into folders, add README for cybsec
    
    * [Notebook] add license header and refs to cybsec dataset quantizer
    
    * [Notebooks] rename cybsec notebook files
    
    * [Notebook] first pass thru cybsec part 1
    
    * [Notebook] refactor more of cybsec part 1
    
    * [Notebook] add option to use pretrained weights
    
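    The pretrained-weights option can look like the following; the checkpoint file name is a placeholder and "model" is the Brevitas MLP from the earlier sketch:

    ```python
    import torch

    # Option to start from pretrained weights instead of training from scratch.
    USE_PRETRAINED = True
    if USE_PRETRAINED:
        state_dict = torch.load("cybsec-mlp-pretrained.pt", map_location="cpu")
        model.load_state_dict(state_dict)
    ```
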
    * fixed outline and typo
    
    * [Notebook] update cybsec notebook #2 and gitignore
    
    * [Notebook] start refactoring cybsec part 3
    
    * [ConvertToHLS] allow out_scale=2 for bipolar MT
    
    * [Build] add alternative set of steps for estimation only
    
    * [Transform] attempt to handle padding for IODMAs
    
    * [Transform] explicitly ignore IODMA nodes for InsertDWC
    
    * [Notebook] full pass over cybsec notebook 3
    
    * [Util] move vivado utils into finn-base
    
    * fixed outline
    
    * [HLSCustomOp] better err msg on ipgen failure
    
    Co-authored-by: Alina Vasilciuc <alinav@xlnx.xilinx.com>
    Co-authored-by: Yaman Umuroglu <yamanu@xilinx.com>