## aDAM

![aDAM example](assets/GazeMapping.gif) ![aDAM examples](assets/ScreenMatching.gif)

The two steps of aDAM: mapping gaze data from a dynamic stimulus onto a static stimulus (above) and screen matching (below)


#### Last update 2021
## User Guide
### 1. Installing the tool

To have a look into the source code, a source code editor such as VSCode (https://visualstudio.microsoft.com/) is very helpful.

The code can also be run in a Jupyter Notebook environment; most files exist in both versions. For performance and evaluation, use the Python scripts; for further development, the notebooks might be helpful.

An Anaconda virtual environment can be helpful. To set it up, run the following in the Anaconda prompt:
```
conda create -n nameOfEnvironment pip python=3.7
conda activate nameOfEnvironment
```

To install all the necessary packages, please use:
```
pip install --user --requirement requirements.txt
```

### 2. The following folders should exist in the git directory
<pre>
|-- assets
    |-- name of the device
        |-- layout
        |-- screens
    |-- Videos (Video files to be analyzed)
    |-- Raw_Data (Raw data exported from BeGaze)

|-- device (for MaskRCNN training purposes)
    |-- dataset

|-- Evaluation (all evaluation images are put here)
    |-- mapped
    |-- postprocessing
    |-- warped

|-- Helper (helper files)

|-- logs (logs from training CNN and aDAM)
    |-- LSD (Log files from tracking.py)

|-- mrcnn (folder with MaskRCNN implementation)

|-- text_files (all csv and json output files are written here)

|-- weights (.h5 files from training the network)
</pre>

If any of these folders are missing, please create them manually (this can happen because git does not track empty folders).
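If you prefer to script this, a minimal sketch (not part of the repository) could look like the following; the device folder name is a placeholder, and only the typically empty data and output folders are listed, since code folders such as mrcnn and Helper ship with the repository:

```python
# Illustrative sketch -- recreates the typically empty folders listed above.
from pathlib import Path

device = "nameOfTheDevice"  # placeholder, use your device name

folders = [
    f"assets/{device}/layout",
    f"assets/{device}/screens",
    "assets/Videos",
    "assets/Raw_Data",
    "device/dataset",
    "Evaluation/mapped",
    "Evaluation/postprocessing",
    "Evaluation/warped",
    "logs/LSD",
    "text_files",
    "weights",
]

for folder in folders:
    Path(folder).mkdir(parents=True, exist_ok=True)  # no error if it already exists
```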


### 3. Load initial data
This step is required whenever you want to analyze a new video.
It is assumed that the study was conducted with the SMI Eye Tracking Glasses and that the BeGaze software was used for postprocessing.
In order to use this algorithm, make sure you have the following available somewhere on your local machine:
* The original video of the participant without the overlaid gaze point
* The raw data file exported from BeGaze as CSV for the corresponding video only. Make sure that the following columns are included (a quick check is sketched after this list):
    * 'Category Binocular'
    * 'Point of Regard Binocular X [px]'
    * 'Point of Regard Binocular Y [px]'
    * 'Time of Day [h:m:s:ms]'
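If you want to double-check the export before running the preparation step, a minimal sketch could look like this (it assumes pandas is installed and that BeGaze wrote a comma-separated file; adjust the separator if your export differs):

```python
import pandas as pd

# Path to your exported raw data file -- adjust to your machine.
raw_data_path = "path/to/raw_data.csv"

required_columns = [
    "Category Binocular",
    "Point of Regard Binocular X [px]",
    "Point of Regard Binocular Y [px]",
    "Time of Day [h:m:s:ms]",
]

# BeGaze exports are sometimes tab- or semicolon-separated; adjust `sep` if needed.
df = pd.read_csv(raw_data_path)

missing = [col for col in required_columns if col not in df.columns]
if missing:
    print("Missing columns:", missing)
else:
    print("All required columns are present.")
```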

You can now proceed by executing the following script. It will set up the folders relevant to your evaluation.
```
python3 prep.py
```
* Copying errors might be due to misplaced slashes; try `\\` instead of `/`.
* Always give the file path directly to the desired file, ending in .avi, .csv or .png.
* If `python3` does not work, try `python` instead.

If everything was done correctly, you should now see the following:
<pre>
|-- assets
    |-- name of the device
        |-- layout
            -- corner_coords.json
            -- layout.png
        |-- screens
            -- screen1.png
            -- screen2.png
            ...
    |-- Videos
        -- video_nameOfTheDevice.avi
    |-- Raw_Data
        -- raw_data_nameOfTheDevice.csv
</pre>

If any of these files or folders are missing, just create them manually.

> N.B.: The preparation steps are optimized for macOS. Issues might arise on other platforms.

### 4. Raw data synchronization
The video from BeGaze runs at 24 fps, whereas the gaze points are sampled at 60 Hz. This script synchronizes the two and writes the output to:
* text_files/gaze_pts_nameOfTheDevice.csv

```
python3 raw_data_synchronization.py --device=name of the device
```

This step needs to be done for every new video and raw data set.
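raw_data_synchronization.py implements this step; purely as an illustration of the underlying idea (the column names, start-time assumption and output path below are guesses, not the script's actual logic), each 24 fps frame can be assigned the 60 Hz gaze sample closest in time:

```python
import pandas as pd

# Illustrative sketch only -- not the actual logic of raw_data_synchronization.py.
gaze = pd.read_csv("assets/Raw_Data/raw_data_nameOfTheDevice.csv")

# Convert the BeGaze time-of-day strings (h:m:s:ms) into timedeltas.
gaze["time"] = pd.to_timedelta(
    gaze["Time of Day [h:m:s:ms]"].str.replace(r":(\d+)$", r".\1", regex=True)
)

fps = 24
n_frames = 1000  # number of frames in the video (assumed to be known)

# Assume the video starts at the first gaze sample and frames are evenly spaced.
frame_times = gaze["time"].iloc[0] + pd.to_timedelta(
    [i / fps for i in range(n_frames)], unit="s"
)

# For every frame, pick the gaze sample with the nearest timestamp (60 Hz -> 24 fps).
frames = pd.DataFrame({"frame": range(n_frames), "time": frame_times})
synced = pd.merge_asof(frames, gaze.sort_values("time"), on="time", direction="nearest")
synced.to_csv("text_files/gaze_pts_sketch.csv", index=False)  # placeholder output path
```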

### 5. Tracking
This script tracks and detects the screen of the device in the video.
With an existing trained neural network this can be done fully automatically; this is referred to as the FLSD method.

For first-time use, the user has to reset the tracking whenever the device is out of the frame; this is referred to as the SLSD method.

To execute this, please use the following script with the appropriate flags.
```ssh
$ python3 tracking.py
```
```ssh
flags:
 --device=name of the device
 --method=name of the method
 --showMore=show additional images
 --saveResult=save the results
 --startFrame=<int-OPTIONAL>
 --endFrame=<int-OPTIONAL>
 --ratio=ratio of height to width
 --backlight=whether the screen is backlit
 --weights=True if weights exist (i.e. MaskRCNN was trained)
```
This script will generate the following output files.
* text_files/tracked_pts_nameOfTheDevice_nameOfTheMethod.csv
* text_files/tracked_pts_nameOfTheDevice_nameOfTheMethod.json

The .csv is used for further processing by aDAM. The .json file is used to train a MaskRCNN model.

To visualize processing steps:
* Uncomment the show_images calls throughout tracking.py.

To achieve better results during processing (the sketch below illustrates the general idea):
* Change the tolerance for filtered lines in get_lines_filtered_by_length in lsd_helper.
* Change the ratios in check_shape in check_helper and in get_segmented_by_angle_kmeans in lsd_helper.
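
The helpers mentioned above live in this repository; purely as an illustration of the kind of length-based line filtering they tune (using OpenCV's built-in LSD detector, which tracking.py does not necessarily use and which requires an OpenCV build that includes it), a sketch could look like this:

```python
import cv2
import numpy as np

# Illustrative sketch only -- the actual filtering lives in lsd_helper /
# check_helper and may differ in detail.
frame = cv2.imread("frame.png")  # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

lsd = cv2.createLineSegmentDetector()
lines, _, _, _ = lsd.detect(gray)  # each entry is [[x1, y1, x2, y2]]

# Keep only segments above a minimum length -- this is the kind of tolerance
# worth tuning for better results.
min_length = 40  # pixels, assumed value
filtered = []
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if np.hypot(x2 - x1, y2 - y1) >= min_length:
            filtered.append(line)

print(f"kept {len(filtered)} of {0 if lines is None else len(lines)} segments")
```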


### 6. Gaze Mapping
The actual gaze mapping is done using the data generated in the previous step, i.e. the .csv files just mentioned.

For the results, the following folder structure is needed. Please create it:
<pre>
|-- Evaluation
    |-- mapped
        |-- nameOfTheDevice_SLSD
        |-- nameOfTheDevice_FLSD
    |-- warped
        |-- nameOfTheDevice_SLSD
        |-- nameOfTheDevice_FLSD
</pre>

e.g.
<pre>
|-- Evaluation
    |-- mapped
        |-- hamilton_SLSD
        |-- hamilton_FLSD
    |-- warped
        |-- hamilton_SLSD
        |-- hamilton_FLSD
</pre>

```
$ python3 mapping.py --device=device-name --method=SLSD or FLSD --saveResults=bool --showMore=bool
```

This script will calculate the transformation matrix based on the reference coordinates (saved in the preparation step and to be confirmed again) and the four screen corner coordinates tracked for every frame by tracking.py (saved as a .csv file).
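
Conceptually this is a perspective transform between the four tracked screen corners and the corners of the static reference layout, applied to every gaze point. A minimal OpenCV sketch (the corner values are made up for illustration; mapping.py's actual implementation may differ):

```python
import cv2
import numpy as np

# Four tracked screen corners in the video frame (from tracking.py), in pixels.
# Values are placeholders for illustration.
src_corners = np.float32([[412, 208], [869, 221], [851, 537], [398, 520]])

# The corresponding corners of the static layout / reference image.
dst_corners = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

# Homography that warps video-frame coordinates into the reference system.
H = cv2.getPerspectiveTransform(src_corners, dst_corners)

# Map a gaze point recorded in video coordinates into the reference image.
gaze_point = np.float32([[[640, 360]]])  # shape (1, 1, 2) as required
mapped = cv2.perspectiveTransform(gaze_point, H)
print("mapped gaze point:", mapped[0, 0])

# The same homography can also warp the whole frame (the "warped screen" output).
# warped = cv2.warpPerspective(frame, H, (1280, 800))
```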

This script will generate the following outputs:
* An image of the warped screen content
    * Evaluation/warped/nameOfTheDevice_nameOfTheMethod/warped_screen_frame_nb.png
    * This is used for further processing
* A .csv file with the mapped gaze points in the reference coordinate system
    * text_files/eval_mapped_gaze_nameOfTheDevice_nameOfTheMethod.csv

* In case of a malfunction, delete all provisional outputs before running the program again.

### 7. Screen Matching
Screen matching is performed based on the screens extracted in the previous step.

```ssh
$ python3 screen_matching_parallel.py --device=device-name --saveResults=bool --cores=nb_cores
```

This script is optimized to run in parallel. The --cores flag lets you choose the number of cores you want to use.

Two different implementations exist:
* one using the licensed algorithm SIFT
* one using the license-free algorithm BRISK

If you do not have the opencv_contrib_modules installed, you will not be able to execute the SIFT algorithm.

The SIFT implementation will not be supported further, so please use BRISK.
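
As a rough illustration of feature-based matching with BRISK (not the repository's exact implementation; the file paths below are placeholders), one can match a warped screen against each reference screen and keep the reference with the most good matches:

```python
import cv2

# Illustrative sketch only -- screen_matching_parallel.py may differ in detail.
warped = cv2.imread("path/to/warped_screen.png", cv2.IMREAD_GRAYSCALE)  # placeholder
reference = cv2.imread("assets/nameOfTheDevice/screens/screen1.png", cv2.IMREAD_GRAYSCALE)

brisk = cv2.BRISK_create()
kp1, des1 = brisk.detectAndCompute(warped, None)
kp2, des2 = brisk.detectAndCompute(reference, None)

# BRISK produces binary descriptors, so Hamming distance is the right metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe-style ratio test: keep only distinctive matches; the reference screen
# with the most good matches would be reported as the best match.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(f"{len(good)} good matches with screen1.png")
```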

This script will generate the following output:
* text_files/screen_matching_nameOfTheDevice_nameOfTheMethod.csv
* text_files/filtered_screen_matching_nameOfTheDevice_nameOfTheMethod.csv

> N.B.: As the frame rate is higher than the rate at which the screen can change its content, ambiguous screens might appear that cannot be matched. To account for this, the results are filtered.

### 8. Post Processing
For the results, the following folder structure is needed. Please create it:
<pre>
|-- Evaluation
    |-- postprocessing
        |-- nameOfTheDevice_SLSD
        |-- nameOfTheDevice_FLSD
</pre>
If this is the first time you are using this repository, you need to define the areas of interest (AOIs) of the device in order to run the post processing.
```
$ python3 create_aoi.py --device=device-name
```
Please select all the areas of interest you want via drag and drop.

This should generate an aois.json file in the corresponding device folder inside assets.
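
The exact structure of aois.json is defined by create_aoi.py; purely as an illustration of how the AOIs and the mapped gaze points can be combined in post processing (the JSON layout and CSV column names below are hypothetical, not the repository's actual format):

```python
import json
import pandas as pd

# Hypothetical file layouts for illustration only -- inspect the real files to
# see how create_aoi.py and mapping.py actually name their fields.
with open("assets/nameOfTheDevice/aois.json") as f:
    aois = json.load(f)  # assumed: {"aoi_name": [x, y, width, height], ...}

gaze = pd.read_csv("text_files/eval_mapped_gaze_nameOfTheDevice_SLSD.csv")  # assumed columns: x, y

for name, (x, y, w, h) in aois.items():
    inside = gaze[(gaze["x"] >= x) & (gaze["x"] < x + w) &
                  (gaze["y"] >= y) & (gaze["y"] < y + h)]
    print(f"{name}: {len(inside)} gaze points")
```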

The visualization of the results can then be run with:

```ssh
$ python3 postprocessing.py --device=device-name --method=SLSD or FLSD
```

In order to get post processing information for the screen matching data, please run:
```ssh
$ python3 screen_post.py --device=device-name
```

This will generate the following output:
* Evaluation/postprocessing/ will be filled with png files
* text_files/screens_postprocessing_nameOfTheDevice_nameOfTheMethod.csv

### 9. Train a new network
Any evaluation of a specific device done with aDAM first needs a semi-automatic evaluation (referred to as SLSD). That means that whenever the tracked points get lost, the analyst has to reinitialize aDAM. However, during this evaluation data is generated on the fly that can be used to train a Mask R-CNN model. This model can then substitute the analyst's work: whenever the tracked points get lost, Mask R-CNN looks for the device and reinitializes the tracking by itself.

In order to use this, a network needs to be trained. If no previously trained network and weights exist, this is how you prepare for the fully automatic evaluation: after one semi-automatic evaluation, or once enough data points have been generated, navigate to the device folder and follow the instructions there.
```
cd device
```