Whenever you want to analyze a new video, this step is required.
The assumption is made that a study was done using the SMI EyeTracking glasses and the BeGaze software for postprocessing.
In order to use this algorithm make sure you have the following available somewhere on your local machine:
* The original video of the participant without the overlaid gaze point
* The raw data file exported as a CSV from the BeGaze software, covering only the corresponding video. Make sure that the following columns are included (a validation sketch follows this list):
* 'Category Binocular'
* 'Point of Regard Binocular X [px]'
* 'Point of Regard Binocular Y [px]'
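It can pay off to verify the export before running the pipeline. A minimal sketch, assuming `pandas` is available and a comma-separated export; the file name and helper are hypothetical, not part of this repository, and the column list covers only the columns shown above (extend it if your setup needs more):
```
import pandas as pd

# Columns this pipeline expects in the BeGaze raw-data export
REQUIRED_COLUMNS = [
    "Category Binocular",
    "Point of Regard Binocular X [px]",
    "Point of Regard Binocular Y [px]",
]

def check_begaze_export(csv_path):
    """Fail early if the BeGaze export is missing a required column."""
    df = pd.read_csv(csv_path)  # adjust sep= if your export is tab-separated
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required BeGaze columns: {missing}")
    print(f"{csv_path}: all required columns present")

check_begaze_export("raw_data.csv")  # hypothetical file name
```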
You can now proceed by executing the following scripts. They will set up the rel…
```
python3 prep.py
```
* Errors while copying might be due to misplaced slashes; try `\\` instead of `/` (see the sketch after this list)
* Always give the file path directly to the desired file, i.e. ending with .avi, .csv, or .png
* If `python3` doesn't work, try `python` instead
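If the slash errors persist, one way to double-check your paths before feeding them to prep.py is a short `pathlib` sketch (the paths below are placeholders for your own files):
```
from pathlib import Path

# pathlib accepts both "/" and "\\" separators on Windows, so a path
# pasted straight from Explorer works without manual slash fixing.
video_path = Path(r"C:\study\participant01\recording.avi")  # hypothetical path
csv_path = Path("C:/study/participant01/raw_data.csv")      # hypothetical path

for p in (video_path, csv_path):
    if not p.exists():
        raise FileNotFoundError(f"Expected input file not found: {p}")
    print(p.resolve())  # fully resolved absolute path, safe to paste into prep.py
```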
If everything was done accordingly, you should now see the following:
<pre>
…
</pre>
This script will generate the following output files.
The .csv file is used for further processing by aDAM. The .json file is used to train a MaskRCNN model.

To visualize the processing steps:
* uncomment the `show_images` calls throughout tracking.py

To achieve better results during processing (see the sketch after this list):
* change the tolerance for filtered lines in `get_lines_filtered_by_length` in lsd_helper
* change the ratios in `check_shape` in check_helper and in `get_segmented_by_angle_kmeans` in lsd_helper
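For orientation, such a length filter conceptually looks like the sketch below. This is an illustration of the idea, not the repository's actual implementation, and the real signature in lsd_helper may differ; raising the tolerance discards more short, noisy segments, while lowering it keeps more detail:
```
import numpy as np

def get_lines_filtered_by_length(lines, min_length=30.0):
    """Keep only segments longer than min_length pixels.

    lines: array of shape (N, 4) with rows (x1, y1, x2, y2),
    as produced by a line segment detector (LSD).
    """
    lines = np.asarray(lines, dtype=float)
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    return lines[lengths >= min_length]
```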
### 7. Gaze Mapping
The actual gaze mapping is performed on the data generated in the previous step, i.e. the .csv files just mentioned.
This script will generate the following outputs:
* A .csv file with the mapped gaze points in the reference coordinate system
* text_files/eval_mapped_gaze_nameOfTheDevice_nameOfTheMethod.csv
* In case of a malfunction, delete all provisional outputs before running the program again (a cleanup sketch follows this list)
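A small cleanup sketch for that case; it only covers the eval files named above, so adapt the glob pattern to any other provisional outputs your run created:
```
from pathlib import Path

# Remove provisional outputs so a fresh run starts from a clean state.
for f in Path("text_files").glob("eval_mapped_gaze_*.csv"):
    print(f"removing {f}")
    f.unlink()
```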
### 8. Screen Matching
Based on the screens extracted in the previous step, screen matching is performed.
This script will generate the following output:
> N.B.: As the frame rate is higher than the rate at which the screens can change their content, ambiguous screens might appear that cannot be matched. To account for this, the results are filtered.
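One common way to realize such a filter is a sliding majority vote over neighbouring frames. The sketch below illustrates the idea and is not necessarily how this repository implements it:
```
from collections import Counter

def filter_ambiguous(labels, window=5):
    """Smooth per-frame screen labels with a sliding majority vote,
    so single ambiguous frames are overruled by their neighbours."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        neighbourhood = labels[max(0, i - half):i + half + 1]
        smoothed.append(Counter(neighbourhood).most_common(1)[0][0])
    return smoothed

print(filter_ambiguous(["A", "A", "?", "A", "A", "B", "B", "?", "B", "B"]))
```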
### 9. Post Processing
For the results, we need the following folder structure. Please create it, either by hand or with the sketch after this block:
<pre>
|-- Evaluation
|-- postprocessing
|-- nameOfTheDevice_SLSD
|-- nameOfTheDevice_FLSD
</pre>
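A few lines of Python that create exactly the structure shown above (replace nameOfTheDevice with your device name):
```
from pathlib import Path

device = "nameOfTheDevice"  # replace with your device name
for method in ("SLSD", "FLSD"):
    folder = Path("Evaluation", "postprocessing", f"{device}_{method}")
    folder.mkdir(parents=True, exist_ok=True)
    print(f"created {folder}")
```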
If this is the first time you are using this repository, you need to define the areas of interest of the device in order to run the post processing.
```
$ python3 create_aoi.py --device=device-name
```
If no previously trained network and weights exist, this will allow you to prepare for the fully automatic evaluation. After one semi-automatic evaluation, or once enough data points have been generated, navigate to the device folder:
```
cd device
```