# DL-Project

Predicting eye gaze with DL
Models that we try: CNN, InceptionTime, EEGNet, DeepEye.



## Model Configuration

Please configure `config.py` correctly before running `main.py`:

- `config['data_dir']`: the directory where you stored the data
- `config['model']`: the model to use; choose between `'cnn'`, `'eegnet'`, `'inception'`, `'xception'` or `'deepeye'`
- `config['downsampled']`: set to `True` to use 125 data points per second instead of 500 (default: `False`)
- `config['split']`: set to `True` to run a clustered version of the model; please keep it `False`, as the clustered version is inefficient
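A minimal `config.py` sketch matching the keys above; the data path is a placeholder you must adapt, and the defaults simply mirror the descriptions:

```python
# Sketch of config.py -- adjust 'data_dir' to your own setup.
config = {
    'data_dir': './data',    # directory where you stored the data (placeholder)
    'model': 'inception',    # 'cnn', 'eegnet', 'inception', 'xception' or 'deepeye'
    'downsampled': False,    # True -> 125 data points per second instead of 500
    'split': False,          # keep False; the clustered version is inefficient
}
```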


## Parameter Tuning

Please find the files related to our models below:

	|
	| - CNN -- CNN.py
	| - DeepEye -- deepeye.py
	| - DeepEyeRNN -- deepeyeRNN.py
	| - EEGNet -- eegNet.py
	| - InceptionTime -- inception.py
	| - Xception -- xception.py
	| - ConvNet.py
You can find the architecture of our models in these files. For `CNN`, `DeepEye`, `DeepEyeRNN`, `InceptionTime`, and `Xception`, tune the parameters by looking into the `ConvNet.py` file and adjusting them (e.g. `self.nb_filter`) accordingly.

For the `EEGNet` model, look directly into the `eegNet.py` file and tune the parameters there.
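As a rough illustration of what "tuning" means here (the real `ConvNet.py` contains the full model logic; apart from `self.nb_filter`, the attribute names below are assumptions, not the actual code):

```python
# Hypothetical sketch of the shared hyperparameters in ConvNet.py.
# Only self.nb_filter is named in this README; the rest are illustrative.
class ConvNet:
    def __init__(self, nb_filter=32, depth=9, kernel_size=40):
        self.nb_filter = nb_filter      # filters per convolutional layer
        self.depth = depth              # number of convolutional blocks
        self.kernel_size = kernel_size  # temporal kernel length

# Tuning = changing these values, e.g. doubling the filters for one experiment:
model = ConvNet(nb_filter=64)
```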


## Running in the Cluster

For [ETH Leonhard](https://scicomp.ethz.ch/wiki/Python_on_Leonhard) users, you can follow these steps:
	1. Please use the command `module load gcc/6.3.0 python_gpu/3.8.5 hdf5/1.10.1` before training the model.
	2. Edit the data directory and the run directory so that they point to where you saved the datasets and where the logs and outputs should be written.
	3. Use the following command to run the model: `bsub -n 10 -W 4:00 -o experiment_[your_model] -R "rusage[mem=5000, ngpus_excl_p=1]" python ../main.py`
__Note:__ If you want to run the model locally, make sure you have PyTorch and TensorFlow 2.x installed.


## Reading the Results

A summary of the model is printed in the `experiment_[your_model]` file generated by the Leonhard cluster.

After each run, a directory named `'number'_config['model']` is created under the run directory, where 'number' is the time stamp of the training (different for each run of `main.py`). It contains the model weights, a `.csv` file with the fitting history, the best validation score, and an accuracy and loss plot of the training.
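If you want to inspect the fitting history programmatically, a small helper along these lines can be used. The file name `history.csv` and the column name `val_accuracy` are assumptions on our part; check the actual files in your run directory:

```python
import csv

def best_validation_score(history_csv, column="val_accuracy"):
    """Return the best value of `column` over all epochs in a fit-history CSV.

    Assumes a Keras-style CSV with one row per epoch; adapt `column`
    to whatever the history file in your run directory actually contains.
    """
    with open(history_csv, newline="") as f:
        return max(float(row[column]) for row in csv.DictReader(f))
```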


## DeepEye3 Tuning (deprecated)

nb_filter: [32, 64]

depth: [9, 12, 20]

kernel_size: [40, 20]

residual_jump: [3, 4]

A large depth causes overfitting, and so does a large number of filters. The kernel size seems to have little effect on validation performance. A residual jump of 4 (i.e. `depth % (res_jump) == (res_jump - 1)`) does not work well for our task, but I think it would be useful for future tasks.

The best setting is **nb_filter == 32, depth == 9, kernel_size == 40, res_jump == 3**.
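The residual-jump rule above can be illustrated with a small sketch. Treating layers as 0-indexed is an assumption on our part; the actual shortcut placement lives in `ConvNet.py`:

```python
# Illustration of the residual-jump rule (0-indexing assumed):
# a shortcut ends at every layer i with i % res_jump == res_jump - 1.
def residual_layers(depth, res_jump):
    """Indices of layers that close a residual block under the rule above."""
    return [i for i in range(depth) if i % res_jump == res_jump - 1]

residual_layers(9, 3)   # -> [2, 5, 8]: three evenly spaced shortcuts
residual_layers(9, 4)   # -> [3, 7]
```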