To be able to have a look into the source code, a source code editor such as VSC is recommended.
The code can also be run in a Jupyter Notebook environment; most files exist in both versions. For optimization and evaluation, refer to the Python scripts; for further development, the notebooks might be helpful.
An Anaconda virtual environment can be helpful. To set it up, run the following in the Anaconda prompt:
```
conda create -n nameOfEnvironment pip python=3.7
```
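Afterwards, activate the environment so that the packages below are installed into it:
```
conda activate nameOfEnvironment
```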
To install all the necessary packages, please use:
```
pip install --user --requirement requirements.txt
```
### 2. The following folders should exist in the git directory
<pre>
|-- assets
    |-- name of the device
        |-- layout
        |-- screens
|-- Videos (Video files to be analyzed)
|-- Raw Data (Raw data exported from BeGaze)
</pre>
In case any of these folders are missing, please create them manually (this can happen, as Git does not track empty folders).
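The missing folders can also be created with a short Python snippet; this is a minimal sketch assuming the nested layout shown above, with `nameOfTheDevice` as a placeholder for the actual device name:

```python
import os

# Folders expected by the pipeline, per the listing above.
# "nameOfTheDevice" is a placeholder for the actual device name.
folders = [
    "assets/nameOfTheDevice/layout",
    "assets/nameOfTheDevice/screens",
    "Videos",
    "Raw Data",
]

for folder in folders:
    os.makedirs(folder, exist_ok=True)  # creates parents; no error if it already exists
```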
### 3. Load initial data
Whenever you want to analyze a new video, this step is required.
The assumption is made that the study was conducted using the SMI Eye Tracking Glasses and the BeGaze software for post-processing.
In order to use this algorithm, make sure you have the exported video and raw data available somewhere on your local machine.
If everything was done accordingly, you should now see the following:
<pre>
|-- assets
    |-- name of the device
        |-- layout
            -- corner_coords.json
            -- layout.png
        |-- screens
            -- screen1.png
            -- screen2.png
            ...
|-- Videos
    -- video_nameOfTheDevice.avi
|-- Raw_Data
</pre>
If any of these files or folders are missing, just create them manually.
> N.B.: The preparation steps are optimized for macOS. Issues might arise on other platforms.
### 4. Raw data synchronization
The video from BeGaze has 24 fps, whereas the gaze points are sampled at 60 Hz. This script synchronizes the data and writes the output to:
* text_files/gaze_pts_nameOfTheDevice.csv
```
python3 raw_data_synchronization.py --device=name of the device
```
This step needs to be done for every new video and raw data set.
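The script's internals are not shown in this excerpt. As an illustration of the underlying idea (pairing each 24 fps frame with the temporally closest 60 Hz gaze sample), a sketch could look like this; function and variable names are ours, not the script's:

```python
# Illustrative sketch only, not the script's actual code: pair each 24 fps
# video frame with the 60 Hz gaze sample whose timestamp is closest.
def nearest_gaze_per_frame(frame_times, gaze_times, gaze_points):
    """All timestamps are assumed sorted ascending and given in seconds."""
    synced = []
    j = 0
    for t in frame_times:
        # advance while the next gaze sample is at least as close to this frame
        while j + 1 < len(gaze_times) and abs(gaze_times[j + 1] - t) <= abs(gaze_times[j] - t):
            j += 1
        synced.append(gaze_points[j])
    return synced

# Example: 24 fps frame times over one second vs. 60 Hz gaze samples.
frames = [i / 24 for i in range(24)]
gaze_t = [i / 60 for i in range(60)]
gaze_xy = [(i, i) for i in range(60)]  # dummy gaze coordinates
print(nearest_gaze_per_frame(frames, gaze_t, gaze_xy)[:3])
```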
### 5. Tracking
This script detects and tracks the screen of the device in the video.
With an existing trained neural network, this can be done fully automatically.
This is referred to as the FLSD method.
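The tracking internals are not shown in this excerpt. As a rough illustration of frame-to-frame point tracking, not necessarily the method aDAM uses, pyramidal Lucas-Kanade optical flow in OpenCV looks like this (the video path and initial corner points are placeholders):

```python
import cv2
import numpy as np

# Rough illustration of frame-to-frame point tracking with pyramidal
# Lucas-Kanade optical flow; not necessarily what aDAM does internally.
cap = cv2.VideoCapture("Videos/video_nameOfTheDevice.avi")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Hypothetical initial screen-corner points, shape (4, 1, 2), float32.
pts = np.array([[[100, 100]], [[500, 100]], [[500, 400]], [[100, 400]]], np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    prev_gray = gray  # the updated pts are the corner estimates for this frame
cap.release()
```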
To achieve better results during the process:
* Change the ratios in `check_helper`'s `check_shape` and in `lsd_helper`'s `get_segmented_by_angle_kmeans`.
### 6. Gaze Mapping
The actual gaze mapping is done with the data generated in the previous step, i.e. the .csv files just mentioned.
For the results, the corresponding output folder structure needs to exist; please create it beforehand.
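The mapping procedure is not spelled out in this excerpt. A minimal sketch of the general idea, projecting a gaze point from video-frame coordinates onto the layout image via a homography built from the four tracked screen corners, could look like this (all coordinates and sizes are placeholders):

```python
import cv2
import numpy as np

# Illustration only: map a gaze point from video-frame coordinates onto the
# layout image. Corner coordinates and layout size below are placeholders.
corners_in_frame = np.float32([[120, 80], [520, 90], [510, 390], [110, 380]])
layout_w, layout_h = 800, 600  # assumed size of layout.png
corners_in_layout = np.float32([[0, 0], [layout_w, 0], [layout_w, layout_h], [0, layout_h]])

H = cv2.getPerspectiveTransform(corners_in_frame, corners_in_layout)
gaze = np.float32([[[300, 200]]])           # one gaze point, shape (1, 1, 2)
mapped = cv2.perspectiveTransform(gaze, H)  # the same point in layout coordinates
print(mapped)
```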
The gaze-mapping script will generate several outputs, among them the extracted screens used in the next step.
* In case of malfunction, delete all provisional outputs before running the program again.
### 7. Screen Matching
Based on the screens extracted in the previous step, screen matching is performed.
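The matching procedure itself is not shown in this excerpt. One plausible approach, not necessarily the project's, is feature matching against the reference screenshots; here is a sketch using ORB (all file paths are placeholders):

```python
import cv2

# Illustration only: compare one extracted screen crop against the reference
# screenshots using ORB features. All file paths are placeholders.
orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

crop = cv2.imread("extracted_screen.png", cv2.IMREAD_GRAYSCALE)
_, crop_des = orb.detectAndCompute(crop, None)

best_name, best_score = None, -1
for name in ["screen1.png", "screen2.png"]:  # the reference screens
    ref = cv2.imread(f"assets/name of the device/screens/{name}", cv2.IMREAD_GRAYSCALE)
    _, ref_des = orb.detectAndCompute(ref, None)
    matches = bf.match(crop_des, ref_des)    # more matches = better fit
    if len(matches) > best_score:
        best_name, best_score = name, len(matches)
print("best match:", best_name)
```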
Running the screen-matching script will generate the matching results as output.
> N.B.: As the frame rate is faster than the screens' ability to change their content, ambiguous screens might appear that cannot be matched. To account for this, the results are filtered.
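The filtering strategy is not specified in this excerpt. One common choice, an assumption on our part, is a sliding-window majority vote over the matched labels:

```python
from collections import Counter

# Assumed filtering strategy (the README does not specify one): replace each
# matched label by the majority label within a small sliding window.
def majority_filter(labels, window=5):
    half = window // 2
    return [
        Counter(labels[max(0, i - half):i + half + 1]).most_common(1)[0][0]
        for i in range(len(labels))
    ]

# A lone ambiguous "B" between runs of "A" is smoothed away.
print(majority_filter(["A", "A", "B", "A", "A", "C", "C", "C", "C"]))
```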
### 8. Post Processing
For the results, we need the following folder structure. Please proceed and create it:
<pre>
|-- Evaluation
    |-- postprocessing
</pre>
This will generate the following output:
* Evaluation/postprocessing/ will be filled with PNG files
* text_files/screens_postprocessing_nameOfTheDevice_nameOfTheMethod.csv
### 9. Train a non-existing network
Any evaluation for a specific device done with aDAM initially requires a semi-automatic evaluation (referred to as SLSD): whenever the tracked points get lost, the analyst needs to reinitialize aDAM. However, while doing so, data is generated on the go that can be used to train a Mask R-CNN model. This model can then substitute the analyst's work: in case the tracked points get lost, Mask R-CNN looks for the device and reinitializes by itself.
In order to use this, a network needs to be trained first.
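The training setup is not shown in this excerpt. As an illustration only (the project may use a different Mask R-CNN implementation entirely), fine-tuning a torchvision Mask R-CNN looks roughly like this; the data loader is a placeholder:

```python
import torch
import torchvision

# Illustration only: fine-tune a torchvision Mask R-CNN on the data collected
# during SLSD runs; the project's actual training code may differ.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)  # background + device
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

data_loader = []  # placeholder: a DataLoader yielding (images, targets) batches

model.train()
for images, targets in data_loader:
    loss_dict = model(images, targets)  # returns a dict of loss tensors in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```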