# Hand-eye-coordination (HEC or EHC)
This README provides everything needed to use the framework to recognize hand-eye-coordination (HEC) patterns in eye-tracking videos.
Note: The code has been developed and tested only on Windows 10 Education.
## License
The code and the models in this repo are released under the [MIT License](https://gitlab.ethz.ch/pdz/eye-hand-coordination/-/blob/master/LICENSE).
## Installation
1. Download and install Anaconda.
2. Create the conda environment from the provided `.yml` file: `conda env create -f HEC_CNN_env.yml`
3. Activate the environment: `conda activate HEC_CNN_env`

To export the current environment back to the file later, run `conda env export > HEC_CNN_env.yml`.
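To confirm that the environment was created and is active, the standard conda commands below can be used:

```bash
conda env list     # HEC_CNN_env should appear, with '*' marking the active environment
python --version   # reports the Python version pinned in HEC_CNN_env.yml
```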
## Citation
If you use our code in your research or wish to refer to the baseline results, please use the following BibTeX entry.
*(BibTeX entry to be added before release.)*
## Usage
From the project root, open a command line and run:
`python main.py`

Per the project structure below, generated figures (e.g., accuracy, loss, and confusion-matrix plots) land in `reports/figures` and predictions in `reports/predictions`.
## Structure of the project
```
definitiony.py
LICENSE
main.py
opti.py
README.md
data
|
-- datasets
| |
| -- dadaset_gt
| | |
| | - behaviour_ground_truth_name_base{i}.txt
| |
| -- extracted_images
| | |
| | -- test
| | -- train_val
| |
| -- filled_values_segment
| | |
| | -- test
| | -- train_val
| |
| -- masked_videos
| | |
| | -- blacked_mask_videos
| | | |
| | | - name_base{i}_black.avi
| | |
| | -- labels_mask
| | | |
| | | - name_base{i}.csv
| | |
| | -- original_mask_videos
| | | |
| | | - name_base{i}_masked.avi
| |
| - test_filled_values_id_label_map.csv
| - test_segment_dataset.csv
| - train_filled_values_id_label_map.csv
| - train_segment_dataset.csv
|
-- raw
| |
| -- gaze
| | |
| | - name_base{i}.txt
| |
| -- ground_truth
| | |
| | - behaviour_ground_truth_name_base{i}.csv
| |
| -- labels
| | |
| | - labels_yps.json
| |
| -- video_times
| | |
| | - video_times.txt
| |
| -- videos
| | |
| | - name_base{i}.avi
logs
models
|
-- ThreeDCNN
| |
| -- classification_NN
| |
| -- temp_train
|
-- TwoDCNN
|
-- mrcnn
| |
| - mask_rcnn_hands.h5
| - mask_rcnn_yps.h5
reports
|
-- figures
| |
| -- acc
| -- loss
| -- ClassifiactionRep
| -- ComparisonPredTrue
| -- ConfusionMat
| -- temp_acc_loss
|
-- predictions
src
|
-- ThreeDCNN
| |
| -- dataset_creation
| | |
| | - __init__.py
| | - create_segments.py
| | - extract_features.py
| | - utils.py
| |
| -- models
| | |
| | -- classifiaction_network
| | | |
| | | - __init__.py
| | | - classifiaction_model.py
| | | - train_model.py
| | | - utils.py
| | |
| | -- data_generator
| | | |
| | | - __init__.py
| | | - ThreeDimCNN_datagenerator.py
| | |
| | -- post_processing
| | | |
| | | - __init__.py
| | | - post_process.py
| | | - utils.py
| | |
| | -- prediction
| | | |
| | | - __init__.py
| | | - predict.py
| | | - utils.py
| |
| - __init__.py
|
-- TwoDCNN
| |
| -- models
| | |
| | - 2DCNN_inference.py
| | - __init__.py
| | - makse_mask_gaze_video.py
| | - utils.py
|
-- mrcnn
| |
| - __init__.py
| - config.py
| - LICENSE
| - model.py
| - parallel_model.py
| - utils.py
| - visualize.py
venv
|
- HEC_CNN_env.yml
```
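As a sanity check before a first run, a small helper like the following can verify that the expected directory layout is in place. This script is not part of the repository; it is a minimal sketch derived from the tree above, and the chosen subset of paths is an assumption.

```python
from pathlib import Path

# Hypothetical helper (not part of the repo): verifies that the directories
# listed under "Structure of the project" exist before running main.py.
EXPECTED_DIRS = [
    "data/datasets/extracted_images/test",
    "data/datasets/extracted_images/train_val",
    "data/raw/gaze",
    "data/raw/ground_truth",
    "data/raw/videos",
    "models/mrcnn",
    "reports/figures",
    "reports/predictions",
]

def check_layout(root: str = ".") -> None:
    """Raise if any expected directory is missing under the project root."""
    missing = [d for d in EXPECTED_DIRS if not Path(root, d).is_dir()]
    if missing:
        raise FileNotFoundError(f"Missing expected directories: {missing}")

if __name__ == "__main__":
    check_layout()
    print("Project layout looks complete.")
```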
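The file names under `src/mrcnn` match Matterport's open-source Mask R-CNN implementation, so the weight files in `models/mrcnn` (e.g., `mask_rcnn_hands.h5`, presumably for hand segmentation) can likely be loaded as sketched below. This is only a sketch under that assumption: the import path is inferred from the tree, and the config values (`NAME`, `NUM_CLASSES`, etc.) are placeholders that must match those actually used in this project.

```python
import skimage.io
from src.mrcnn import model as modellib  # import path assumed from the tree above
from src.mrcnn.config import Config

class HandsInferenceConfig(Config):
    # Placeholder values: NAME and NUM_CLASSES must match the training config.
    NAME = "hands"
    NUM_CLASSES = 1 + 1       # background + hand (assumption)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1        # detect() expects GPU_COUNT * IMAGES_PER_GPU images

# Build the model in inference mode and load the pretrained hand weights.
config = HandsInferenceConfig()
model = modellib.MaskRCNN(mode="inference", config=config, model_dir="logs")
model.load_weights("models/mrcnn/mask_rcnn_hands.h5", by_name=True)

# Run detection on a single frame ("frame.png" is a hypothetical input);
# each result dict contains bounding boxes, class IDs, scores, and masks.
image = skimage.io.imread("frame.png")
r = model.detect([image], verbose=0)[0]
print(r["rois"].shape, r["masks"].shape, r["scores"])
```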