# Data files and directories common in repo root
datasets/
logs/
*.h5
results/
temp/
test/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Visual Studio Code
.vscode
# PyCharm
.idea/
# Dropbox
.dropbox.attr
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# dotenv
.env
# virtualenv
.venv
venv/
ENV/
# cGOM
## Automating Areas Of Interest Analysis in Mobile Eye Tracking Experiments based on Machine Learning
### User Guide
1. Install the tool. To do so, clone the public git repo from
http://pdz-git.ethz.ch/stehess/cGOM.git into your target directory.
Note that Python and pip must already be installed.
A source code editor such as Atom (https://atom.io/) is helpful for inspecting the source code.
Then install all requirements:
```sh
$ pip install -r requirements.txt
```
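The full setup might then look like this (a sketch, assuming git and Python 3 with pip are already available on the PATH):
```sh
$ git clone http://pdz-git.ethz.ch/stehess/cGOM.git
$ cd cGOM
$ pip install -r requirements.txt
```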
2. The following folders should exist in the git directory (a sketch of the resulting layout follows this list):
- data_sets
Datasets for training and validation can be added to this folder. The sets must be structured in the same way as the included examples.
- gaze
Contains the text file with information about the fixations. Note that it must have the same name as the corresponding video file.
- images
An empty folder that serves as the target directory for the images extracted later.
- labels
Contains a json file with the labels. This is automatically generated.
- misc
Contains functions to extract frames from a video, create masked videos, or manually label images.
- mrcnn
Contains everything required for Mask R-CNN to work properly.
- toolbox
Contains all functions described in this work.
- videos
Contains a video from the eye tracking camera. Note that it must have the same name as the corresponding gaze file.
- weights
Contains the COCO weights, and two other files containing the weights of the partially trained agent and the fully trained agent.
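The resulting directory layout should then look roughly like this (a sketch; only the folders listed above are shown):
```
cGOM/
├── data_sets/
├── gaze/
├── images/
├── labels/
├── misc/
├── mrcnn/
├── toolbox/
├── videos/
└── weights/
```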
3. Navigate to the misc folder and run the script extract_images_from_video.py with the corresponding parameters to extract frames from a video. Note that a gaze file can be included so that frames are only generated from fixations.
```sh
$ python extract_images_from_video.py --video_path ../videos/Name-of-the-video.avi --output_dir ../images --num_images 60 --gaze_path ../gaze/Name-of-the-gaze-file.txt
```
4. The folder images should contain around 100 frames. Navigate into this folder and create a new folder in it called Object1_Object2. Drag and drop all images into that folder. Note that object 1 and object 2 are placeholders and should be replaced by the names of your objects of interest. You may include more objects of interest by following the same logic, e.g. Object1_Object2_Object3. This procedure is necessary to use the included labelling tool for the training images in step 5.
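For example, with two objects of interest named cup and pen (hypothetical names), the folder could be created like this:
```sh
$ cd images
$ mkdir cup_pen
$ mv *.JPG cup_pen/
```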
5. Navigate to the folder toolbox and inspect all the default configuration files. Make changes if required. Then call the script make_masks.py:
```sh
$ python make_masks.py
```
Note that this tool currently only works on Linux or macOS. If you are using Windows, please continue with step 8.
6. A window with an image and a number of sliders will appear. The sliders can be set to 0, 1, and 2. Now adjust the sliders so that all masks corresponding to object1 are set to 1 and all masks corresponding to object2 are set to 2.
Setting a mask to 0 disables it, and setting all masks to 0 moves the image to a separate unmasks folder. Note that all masks for an image must exist; otherwise the image should be discarded to the unmasks folder, since incomplete masks can cause difficulties during training. You should be able to label about half of all images with this method.
7. Now navigate to the data_sets folder. There should be a new subfolder called Object1_Object2. Move the unmasks folder out of it and place it at a destination of your choice.
8. Open via.html from misc. Load about 90% of the images from unmasks into it; these will form the training set.
9. On the first frame, draw a polygon around the first object. Then open Region Attributes and replace [Add New] with label. Then label the newly created polygon with the name of the corresponding object, e.g. cup or pen.
10. Repeat the above process for all images. Finally click on Save as JSON under Annotation.
11. Repeat steps 9 and 10 and create a validation set with the remaining images.
12. Navigate back to data_sets and create a new folder Object1_Object2_1 in it. In Object1_Object2_1, place the two folders train and val containing the previously labeled images together with the corresponding via_region_data.json files. All in all, the folder structure with all its content should look similar to Object1_Object2 (a sketch follows). Also check the content of the via_region_data.json files.
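A sketch of the expected dataset structure (image file names are examples):
```
data_sets/
└── Object1_Object2_1/
    ├── train/
    │   ├── image_0.JPG
    │   ├── ...
    │   └── via_region_data.json
    └── val/
        ├── image_1.JPG
        ├── ...
        └── via_region_data.json
```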
13. Now call the training function from the toolbox:
```sh
$ python make_train.py
```
14. Once training is over, extract the weights file from the newly created logs folder, rename it according to your preferences, and place it in the weights folder.
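For example, assuming the standard Mask R-CNN checkpoint naming inside the logs folder (the exact folder and file names depend on your training run and are hypothetical here):
```sh
$ cp logs/gaze20180615T1200/mask_rcnn_gaze_0030.h5 ../weights/w_my_study.h5
```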
15. Now call the inference function:
```sh
$ python make_gaze.py
```
This will generate a new folder outputs in which you can find the results of the method.
Please note that many adjustments can be made in the configuration files, which may render parts of the above description obsolete. However, there are some requirements:
- Image folders for labeling must contain a subfolder with the label names, as described above. The algorithm obtains the labels from the folder names.
- Dataset folders must contain a train and val folder, each containing a file named via_region_data.json.
- via_region_data.json must follow the format of the JSON files generated by via.html, where each polygon must be labeled (a minimal sketch follows this list).
- If there are two datasets with the same labels, call them Object1_Object2, Object1_Object2_1, Object1_Object2_2, etc. Labels that recur across datasets, for instance pen in cup_pen and lamp_pen, are handled by the algorithm.
- Data sets from former studies are included in the training of the neural network if they are in the data_sets folder. Please remove data sets that should not be included in the training set.
- The video file and the gaze file must have the same name. Multiple video and gaze files can be processed automatically, provided each video has a gaze file with a matching name.
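A minimal sketch of the expected via_region_data.json structure, with a single image and one labeled polygon (values are placeholders):
```json
{
  "image_0.JPG118233": {
    "fileref": "",
    "size": 118233,
    "filename": "image_0.JPG",
    "base64_img_data": "",
    "file_attributes": {},
    "regions": {
      "0": {
        "shape_attributes": {
          "name": "polygon",
          "all_points_x": [578, 602, 589],
          "all_points_y": [592, 713, 736]
        },
        "region_attributes": {"label": "cup"}
      }
    }
  }
}
```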
{"image_0.JPG118233":{"fileref":"","size":118233,"filename":"image_0.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[578,602,589,599,612,628,649,668,665,655,645,661,648,641,611,604,598,587,587,578],"all_points_y":[592,713,736,746,740,807,834,832,817,797,733,715,705,710,577,575,562,564,575,592]},"region_attributes":{"label":"shot"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[672,672,693,683,689,705,708,722,730,730,743,743,728,699,689,685,673,675,672],"all_points_y":[595,595,695,713,722,722,747,738,738,713,712,696,696,587,581,571,571,584,595]},"region_attributes":{"label":"shot"}},"2":{"shape_attributes":{"name":"polygon","all_points_x":[373,387,372,372,390,397,409,404,443,466,460,441,458,458,441,416,390,373],"all_points_y":[642,819,823,833,843,839,941,955,960,945,935,833,820,809,807,636,625,642]},"region_attributes":{"label":"shot"}},"3":{"shape_attributes":{"name":"polygon","all_points_x":[292,292,296,315,322,299,299,238,232,216,214,231,231,251,271,292],"all_points_y":[618,618,824,824,841,850,955,957,860,856,843,827,618,604,607,618]},"region_attributes":{"label":"shot"}},"4":{"shape_attributes":{"name":"polygon","all_points_x":[141,141,140,158,171,171,122,120,48,48,9,4,4,34,58,104,141],"all_points_y":[615,615,897,897,918,925,940,958,958,954,955,947,920,908,609,584,615]},"region_attributes":{"label":"shot"}},"5":{"shape_attributes":{"name":"polygon","all_points_x":[214,196,4,6,185,185,191,205,205,246,275,295,303,322,318,308,259,219,214],"all_points_y":[528,510,555,476,433,411,411,423,436,430,430,421,409,476,484,474,487,497,528]},"region_attributes":{"label":"shot"}},"6":{"shape_attributes":{"name":"polygon","all_points_x":[550,567,577,594,621,706,723,729,760,738,720,699,695,685,676,669,671,636,636,619,605,560,550],"all_points_y":[121,376,430,447,447,431,424,404,142,94,84,84,67,56,41,26,1,19,43,65,98,105,121]},"region_attributes":{"label":"bottle"}}}},"image_1.JPG75845":{"fileref":"","size":75845,"filename":"image_1.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[115,132,112,110,19,1,3,26,26,115],"all_points_y":[806,814,747,760,787,772,863,878,849,806]},"region_attributes":{"label":"shot"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[333,359,377,394,417,495,501,471,471,486,486,490,470,457,441,431,441,434,419,323,327,363,402,402,380,380,335,333],"all_points_y":[473,685,728,743,743,713,701,495,486,471,439,436,419,424,404,384,366,355,350,366,382,377,377,394,416,441,453,473]},"region_attributes":{"label":"bottle"}},"2":{"shape_attributes":{"name":"polygon","all_points_x":[507,507,560,578,507,497,500,473,484,488,507],"all_points_y":[446,474,474,775,736,736,698,488,477,441,446]},"region_attributes":{"label":"shot"}},"3":{"shape_attributes":{"name":"polygon","all_points_x":[511,520,591,591,511],"all_points_y":[823,955,954,893,823]},"region_attributes":{"label":"shot"}}}},"image_2.JPG109481":{"fileref":"","size":109481,"filename":"image_2.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[923,812,797,816,930,943,957,957,1079,1066,1066,945,937,923],"all_points_y":[649,629,617,598,611,594,601,617,635,649,661,648,662,649]},"region_attributes":{"label":"shot"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[1002,930,913,884,903,883,859,861,877,906,917,917,938,965,987,977,1069,1064,1002],"all_points_y":[662,718,713,733,755,775,780,807,830,839,832,817,799,819,800,775,69
1,666,662]},"region_attributes":{"label":"shot"}},"2":{"shape_attributes":{"name":"polygon","all_points_x":[1068,1034,1081,1099,1108,1068],"all_points_y":[601,624,634,629,619,601]},"region_attributes":{"label":"shot"}},"3":{"shape_attributes":{"name":"polygon","all_points_x":[884,933,960,975,982,975,977,893,881,884],"all_points_y":[282,345,337,349,336,325,313,198,194,282]},"region_attributes":{"label":"shot"}}}},"image_3.JPG93037":{"fileref":"","size":93037,"filename":"image_3.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[235,300,325,329,327,231,184,235],"all_points_y":[702,736,726,708,693,639,638,702]},"region_attributes":{"label":"shot"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[662,662,642,634,665,665,692,699,685,672,671,658,685,728,722,720,689,685,705,722,722,703,698,683,679,662],"all_points_y":[299,555,558,570,582,594,605,645,696,715,775,790,803,797,780,631,591,565,544,548,299,290,242,242,289,299]},"region_attributes":{"label":"shot"}}}},"image_4.JPG111003":{"fileref":"","size":111003,"filename":"image_4.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[19,185,211,279,330,345,345,330,225,202,152,128,95,84,88,67,0,0,47,57,47,14,19],"all_points_y":[595,816,820,793,772,755,735,688,450,439,454,433,426,404,383,369,383,420,420,437,484,505,595]},"region_attributes":{"label":"bottle"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[718,716,682,682,666,666,726,735,718],"all_points_y":[824,869,864,810,792,780,796,806,824]},"region_attributes":{"label":"shot"}},"2":{"shape_attributes":{"name":"polygon","all_points_x":[682,682,716,716,706,703,695,682],"all_points_y":[689,746,732,696,688,679,679,689]},"region_attributes":{"label":"shot"}}}},"image_5.JPG112906":{"fileref":"","size":112906,"filename":"image_5.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[460,463,471,515,599,618,622,652,634,624,599,588,575,575,584,578,551,505,471,457,464,493,524,541,541,530,515,515,477,460],"all_points_y":[325,551,602,625,614,607,595,364,312,300,302,272,259,242,226,212,206,205,205,212,226,228,238,242,258,275,289,299,303,325]},"region_attributes":{"label":"bottle"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[282,303,290,290,313,312,310,325,340,343,333,333,347,347,333,310,302,295,285,289,282],"all_points_y":[723,817,827,843,833,840,854,860,851,843,839,829,824,813,810,709,705,692,696,708,723]},"region_attributes":{"label":"shot"}},"2":{"shape_attributes":{"name":"polygon","all_points_x":[40,1,1,11,26,31,41,36,40],"all_points_y":[710,783,693,688,688,683,686,696,710]},"region_attributes":{"label":"shot"}},"3":{"shape_attributes":{"name":"polygon","all_points_x":[1008,1064,1084,1099,1119,1119,1119,1093,1038,1021,1008,1008],"all_points_y":[642,725,746,728,726,712,692,692,625,625,631,642]},"region_attributes":{"label":"shot"}}}},"image_6.JPG108538":{"fileref":"","size":108538,"filename":"image_6.JPG","base64_img_data":"","file_attributes":{},"regions":{"0":{"shape_attributes":{"name":"polygon","all_points_x":[743,743,757,783,816,830,841,854,824,787,755,743],"all_points_y":[663,663,849,822,822,826,810,809,642,628,631,663]},"region_attributes":{"label":"shot"}},"1":{"shape_attributes":{"name":"polygon","all_points_x":[194,268,283,329,397,426,429,392,372,357,322,306,282,282,280,273,248,219,188,155,155,182,235,246,246,241,234,194,192,194],"all_points_y":[488,712,726,725,706,689,671,451
,396,389,394,373,363,345,325,318,316,316,319,335,352,349,349,355,374,396,419,426,466,488]},"region_attributes":{"label":"bottle"}}}}}
{"BG": 0, "shot": 1, "bottle": 2}
"""
ST: 'cGOM'
Bachmann David, Hess Stephan & Julian Wolf (SV)
pdz, ETH Zürich
2018
This file contains all functions to extract images from a video, optionally only from fixations.
"""
# Global imports
import cv2
import argparse
import random
import skimage.io
import os
import numpy as np
# Local imports
from utils import read_gaze
def extract_images_from_video(args):
# Read video
video = cv2.VideoCapture(args.video_path)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = video.get(cv2.CAP_PROP_FPS)
# Either no gaze file is provided, then each frame is chosen at random from the entire video
    if args.gaze_path is None:
# Calculate the probabilities in order to end up with config.num_images
num_frames = video.get(cv2.CAP_PROP_FRAME_COUNT)
p = args.num_images / num_frames
# Write the corresponding frames
i = 0
video_flag = True
        while video_flag:
            video_flag, frame = video.read()
            # Skip the final failed read, where no frame is returned
            if not video_flag:
                break
            if random.random() < p:
                skimage.io.imsave(os.path.join(args.output_dir, 'image_' + str(i) + '.JPG'), np.flip(frame, axis=2))
                i += 1
# If a gaze file is provided, we only pull a frame from fixations
else:
# Read the gaze file and make it iterable
gaze = read_gaze(args.gaze_path, max_res=(height, width))
p = args.num_images / len(gaze)
gaze = iter(gaze)
# Random frame
gaze_entry = next(gaze)
(t_start, t_end, x, y) = list(gaze_entry.values())
rand_frame = np.random.uniform(t_start, t_end) * fps
rand_frame = int(rand_frame)
i = 0
f_count = 0
video_flag = True
while video_flag:
video_flag, frame = video.read()
f_count += 1
# If the random frame corresponds to the frame ID we inspect it
if rand_frame == f_count:
# However, the frame is only kept with probability p
if random.random() < p:
skimage.io.imsave(os.path.join(args.output_dir, 'image_' + str(i) + '.JPG'), np.flip(frame, axis=2))
i += 1
                # Get the next random frame; stop cleanly once the gaze file is exhausted
                gaze_entry = next(gaze, None)
                if gaze_entry is None:
                    break
                (t_start, t_end, x, y) = list(gaze_entry.values())
                rand_frame = np.random.uniform(t_start, t_end) * fps
                rand_frame = int(rand_frame)
video.release()
if __name__ == '__main__':
# Get config
parser = argparse.ArgumentParser(description='Extract a number of random frames from a video')
parser.add_argument('--video_path', required=True)
parser.add_argument('--output_dir', required=True)
parser.add_argument('--num_images', type=int, required=True)
parser.add_argument('--gaze_path', default=None)
args = parser.parse_args()
# Extract frames
extract_images_from_video(args)
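# Example invocation (a sketch; paths are placeholders, run from the misc folder):
#   python extract_images_from_video.py --video_path ../videos/Name-of-the-video.avi \
#       --output_dir ../images --num_images 60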
"""
ST: 'cGOM'
Bachmann David, Hess Stephan & Julian Wolf (SV)
pdz, ETH Zürich
2018
This file contains all functions to create a gaze video.
"""
# Global imports
import os
import sys
import warnings
import argparse
import cv2
import numpy as np
# Local imports
import utils
# Import Mask RCNN from parent directory
sys.path.append('..')
from mrcnn.config import Config
from mrcnn import model as modellib, visualize
# Suppress warnings
warnings.filterwarnings('ignore', message='Anti-aliasing will be enabled by default in skimage 0.15 to')
# Derived config class
class GazeConfig(Config):
# Name
NAME = "gaze"
# Number of GPUs
GPU_COUNT = 1
# Number of images per GPU
IMAGES_PER_GPU = 1
def make_mask_gaze_video(model, args, classes):
# Video capture
video = cv2.VideoCapture(args.video_path)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = video.get(cv2.CAP_PROP_FPS)
# Video writer
name = os.path.basename(args.video_path).split('.')[0]
writer = cv2.VideoWriter(name + '_masked.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (width, height))
# Read the gaze file and make it iterable
gaze = utils.read_gaze(args.gaze_path, max_res=(height, width))
gaze = iter(gaze)
# Get the first entry from the gaze file
gaze_entry = next(gaze)
(t_start, t_end, x, y) = list(gaze_entry.values())
f_start = int(t_start * fps)
f_end = int(t_end * fps)
# Go through the frames of the video
f_count = 0
video_flag = True
    while video_flag:
        # Read a frame
        video_flag, frame = video.read()
        # Stop once the video is exhausted; the failed read returns no frame
        if not video_flag:
            break
        f_count += 1
# Detect masks
frame = np.flip(frame, axis=2).copy()
r = model.detect([frame], verbose=0)[0]
masks = r['masks']
scores = r['scores']
class_ids = r['class_ids']
num_masks = masks.shape[-1]
# Apply the masks
for i in range(num_masks):
id = class_ids[i]
color = (float(id == 1), float(id == 2), float(id > 2))
frame = visualize.apply_mask(frame, masks[:, :, i], color)
# If a frame is within our random array we keep it
if f_start < f_count <= f_end:
assert video_flag, 'A time outside the scope of the video has been selected. This should not happen.'
# Check where the gaze point lies
max_score = 0.
max_class = 0
for i in range(num_masks):
                if masks[y, x, i] and scores[i] > max_score:
max_score = scores[i]
max_class = class_ids[i]
# Write the class onto the frame
label = classes[max_class]
text = 'label: %s' %label
cv2.putText(frame, text, (int(frame.shape[0] / 50.), int(frame.shape[0] / 50.)), cv2.FONT_HERSHEY_PLAIN,
frame.shape[0] / 700., (255, 0, 0))
# Draw the circle
cv2.circle(frame, (x, y), int(frame.shape[0] / 100.), (255, 0, 0), thickness=int(frame.shape[0] / 100.))
# Check if we are leaving the fixation
            # Exhausting the gaze file causes the last frame not to be written, which is acceptable.
if f_count == f_end:
gaze_entry = next(gaze, 'break')
if gaze_entry == 'break':
break
(t_start, t_end, x, y) = list(gaze_entry.values())
f_start = int(t_start * fps)
f_end = int(t_end * fps)
# Write the frame
writer.write(np.flip(frame, axis=2))
writer.release()
video.release()
if __name__ == '__main__':
# Args
    parser = argparse.ArgumentParser(description='Make a fancy video containing masks, gaze point, and detections')
parser.add_argument('--video_path', required=True)
parser.add_argument('--gaze_path', required=True)
parser.add_argument('--log_dir', default='./logs')
parser.add_argument('--weights_path', default='../weights/w_pilot.h5')
parser.add_argument('--label_dir', default='../labels')
    parser.add_argument('--detection_min_confidence', default=0.9, type=float)
    args = parser.parse_args()
# Read the labels
LABEL_DIR = os.path.join(args.label_dir, 'labels.json')
classes = list(utils.load(LABEL_DIR).keys())
    # Configs for the model; use a distinct name so the derived class does not shadow GazeConfig
    class InferenceConfig(GazeConfig):
        NUM_CLASSES = len(classes)
        DETECTION_MIN_CONFIDENCE = args.detection_min_confidence
    config = InferenceConfig()
config.display()
# Load the model
model = modellib.MaskRCNN(mode="inference", config=config, model_dir=args.log_dir)
# Load weights
if args.weights_path == 'last':
model.load_weights(model.find_last(), by_name=True)
else:
model.load_weights(args.weights_path, by_name=True)
# Make the video
make_mask_gaze_video(model, args, classes)
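# Example invocation (a sketch; substitute this file's actual name, paths are placeholders;
# the flags correspond to the argparse definitions above):
#   python make_mask_gaze_video.py --video_path ../videos/Name-of-the-video.avi \
#       --gaze_path ../gaze/Name-of-the-gaze-file.txt --weights_path ../weights/w_pilot.h5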
## [1.0.6] - June 15, 2018
* a patch from Stefan Mihaila which requires polygon shape to have at least 3 points.
* rectangles can now be resized from edges
* added POLYLINE shape
* image file list can be filtered using regular expression
* renamed methods and variables
- _via_reload_img_table : _via_reload_img_fn_list_table
- reload_img_table() : reload_img_fn_list_table()
- _via_loaded_img_table_html : _via_loaded_img_fn_list_table_html
## [1.0.5] - January 16, 2018
* (code contributions from Stefan Mihaila) via.js codebase improvement, wider web browser support (IE 10, IE 11 and Opera 12)
* added Contributors.md file to record contributions to VIA codebase
* removed 'localStorage.clear()' to avoid SecurityError in Safari browser (issue 85 and 108)
## [1.0.4] - October 17, 2017
* fixed polygon copy/paste/resize issue (issue 107)
## [1.0.3] - August 07, 2017
* CSV export now does not add extra comma to each line (issue 103)
## [1.0.2] - August 04, 2017
* removed free resize of ellipse from any edge (issue 100)
* fixed free resize of rectangle (issue 101)
* fixed 1-pixel bug (first set image space coordinate, then set canvas coordinate. see issue 96) for region resize and move
* press Ctrl while resizing to preserve the aspect ratio of rectangle (issue 98)
* fixed issue with CSV files containing newline character \r or \r\n (issue 102)
* top menu bar remains consistent even when the user scrolls the window
## [1.0.1] - June 11, 2017
* fixed issue 33 : Annotations cannot be imported from file of type application/vnd.ms-excel
* fixed issue 96 : A major bug in how canvas coordinates are computed
## [1.0.0] - April 04, 2017
* file-attributes support added (useful for weakly supervised learning)
* spreadsheet like editor for region and file attributes
* visualization of loaded image list improved
* user annotation data cached in browser's localStorage (for data recovery on browser crash)
* zoom in/out support
* improved performance using multi-layered canvas for image and annotations
* new user interface layout (added toolbar on top navigation panel)
* added Getting Started guide and License to help menu
* CSV import/export now conforms to RFC 4180 standard
* added some basic unit tests
* added support for point regions (useful for landmark annotations)
## [1.0.0-beta] - 2017-03-15
* beta release for VIA 1.0.0
## [0.1b] - 2016-10-24
* first release of VGG image annotator
* supports following region shape: rectangle, circle, ellipse, polygon
* contains basic image region operations such as move, resize, delete
* Ctrl a/c/v to select all, copy and paste image regions
* import/export of region data from/to text file in csv,json format
* display list of loaded images
# Contributors to VIA project
We welcome all forms of contributions (code update, documentation, etc) from users.
These contributions must adhere to the existing [license](LICENSE) of VIA project.
Here is the list of current contributions to VIA project.
* Stefan Mihaila (@smihaila, 01 Feb. 2018, updates to via-1.0.5)
01. a patch from Stefan Mihaila which requires polygon shape to have at least 3 points.
* Stefan Mihaila (@smihaila, 15 Jan. 2018, updates to via-1.0.4)
01. Added "use strict";
02. Added the "var _via_current_x = 0; var _via_current_y = 0;" global vars.
03. Replaced any Set() object (_via_region_attributes, _via_file_attributes) with a standard dictionary object.
04. Replaced any Map() object (ImageMetadata.file_attributes, ImageRegion.shape_attributes and ImageRegion.region_attributes) with a standard dictionary object.
05. Made most of the switch() statements more readable and fixed potential bugs caused by unintended "fall-through" (i.e. lack of "break") statements.
06. Added missing semi-colon (;) expression terminators.
07. Replaced any use of "for (var key of collection_name.keys()) {}" block (combined with collection_name.get(key) inside the block) with "for (var key in collection_name) {}" (combined with collection_name[key] inside the block).
08. Gave a more intuitive name to certain local var names.
09. Commented out unused local vars.
10. Removed un-necessary intermediary local vars.
11. Made certain local vars inside functions, to be more sub-scoped / to reflect their exact use.
12. Added missing "var variable_name" declarations.
13. Leveraged the Object.keys(collection_name).length property instead of the Map.size and Set.size properties.
14. Replaced "==" and "!=" with their more precise / identity operators (=== and !==).
15. Simplified some function implementations, using direct "return expression" statements.
16. Fixed spelling errors in comments, string values, variable names and function names.
Copyright (c) 2016-2018, Abhishek Dutta, Visual Geometry Group, Oxford University.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF