# Predicting eye gaze with DL
## Model Configuration
Please configure the `config.py` file correctly before running `main.py`:
- `config['data_dir']`: the directory where you stored the data
- `config['model']`: the model you want to use; choose between 'cnn', 'eegnet', 'inception', 'xception', or 'deepeye'
- `config['downsampled']`: True to use 125 data points per second instead of 500; default is False
- `config['split']`: True to run a clustered version of the model; please keep it False, as the clustered version is inefficient. The cluster used in this case is defined in the file `cluster.py`
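As an illustration, the options above could be set like this (a hypothetical sketch of `config.py`, assuming it exposes a plain dictionary named `config`; the actual file in the repository may differ):

```python
# Hypothetical sketch of config.py based on the options described above;
# the real file may look different. The data path is an assumed example.
config = {}

config['data_dir'] = './data'   # directory where you stored the data
config['model'] = 'cnn'         # 'cnn', 'eegnet', 'inception', 'xception', or 'deepeye'
config['downsampled'] = False   # True: 125 data points per second instead of 500
config['split'] = False         # keep False; the clustered version is inefficient
```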
## Parameter Tuning
Please find the files which are related to our model:
|
| - InceptionTime -- inception.py
| - Xception -- xception.py
| - ConvNet.py
You can find the architecture of our models in these files. For `CNN`, `DeepEye`, `DeepEyeRNN`, `InceptionTime`, and `Xception`, you should tune the parameters by editing the `ConvNet.py` file and adjusting attributes such as `self.nb_filter` accordingly.
For the `EEGNet` model, you should look directly into the `eegNet.py` file and tune the parameters there.
Please make sure that the TensorFlow version is 2.x before training the model.
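For concreteness, the kind of attribute you would adjust in `ConvNet.py` might look like the following hypothetical sketch. Only `self.nb_filter` is named in this README; the other attribute names and the defaults (taken from the best setting reported below) are assumptions, not the repository's actual code:

```python
# Hypothetical sketch of tunable attributes, in the spirit of ConvNet.py;
# defaults mirror the best DeepEye3 setting reported later in this README.
class ConvNet:
    def __init__(self, nb_filter=32, depth=9, kernel_size=40):
        self.nb_filter = nb_filter      # filters per convolutional layer
        self.depth = depth              # number of stacked convolutional blocks
        self.kernel_size = kernel_size  # temporal kernel length

net = ConvNet(nb_filter=64)  # e.g. doubling the filters when tuning
```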
## Running in the Cluster
For [ETH Leonhard](https://scicomp.ethz.ch/wiki/Python_on_Leonhard) users, you can follow these steps:
1. Please use the command `module load gcc/6.3.0 python_gpu/3.8.5 hdf5/1.10.1` before training the model.
2. Edit the _data directory_, where you saved the datasets, and the _run directory_, where the logs and outputs are saved when you run the code.
3. Use the following command to run the model: `bsub -n 10 -W 4:00 -o experiment_[your_model] -R "rusage[mem=5000, ngpus_excl_p=1]" python ../main.py`
## Reading the Results
A summary of the model is printed in the `experiment_[your_model]` file generated by the Leonhard cluster.
After each run, a directory with the name `'number'_config['model']` is created under `~/runs`, where `'number'` refers to the number of the training run (different for each run of `main.py`). It contains the weights of the model, a CSV file with the fitting history, the best validation score, and an accuracy and loss plot of the training.
__Note:__ If you want to run the model locally, make sure you have PyTorch and TensorFlow 2.x installed.
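The run-directory naming scheme described above could be sketched as follows (a hypothetical reconstruction; the function name and `base` parameter are illustrative, not the project's actual code):

```python
import os

# Hypothetical sketch of the naming described above: each training run gets
# a directory <number>_<config['model']> under ~/runs.
def run_directory(number, model, base='~/runs'):
    return os.path.join(os.path.expanduser(base), f'{number}_{model}')
```

For example, the third run of the 'cnn' model would land in `~/runs/3_cnn`.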
## DeepEye3 Tuning (deprecated)
nb_filter: [32, 64]
Large depth causes overfitting, same for the number of filters. Kernel size seem
The best setting is **nb_filter == 32, depth == 9, kernel_size == 40, res_jump == 3**