Commit a6ffc631 authored by nstorni

Utils adjustments, readme update

parent fd99e767
@@ -61,6 +61,9 @@ module load gcc/4.8.5 python_cpu/3.7.1
python $HOME/world-models/utils/ --imagetype=RGB
On the cluster:
bsub -o $HOME/job_logs -n 4 -R "rusage[mem=4096]" "python $HOME/world-models/utils/ --imagetype=RGBA"
Move some images from the train folder to the test folder (not truly random: it simply takes every file whose name ends with 1, which should be about 10% of the train set).
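The split above can be sketched in Python (a minimal sketch: the "filename ends with 1" rule follows the description above, while the function name and paths are illustrative, not from the repository):

```python
import shutil
from pathlib import Path

def split_train_test(train_dir: str, test_dir: str) -> int:
    """Move every image whose stem ends with '1' from train to test.

    Not a random split: in a uniformly numbered dataset, filenames
    ending in 1 are roughly 10% of the files.
    """
    train, test = Path(train_dir), Path(test_dir)
    test.mkdir(parents=True, exist_ok=True)
    moved = 0
    for img in sorted(train.iterdir()):
        if img.is_file() and img.stem.endswith("1"):
            shutil.move(str(img), str(test / img.name))
            moved += 1
    return moved
```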
@@ -81,7 +84,7 @@ cp world-models/template_config.json training_configs/train_config1.json
Modify train_config1.json for your training run; you can then test the configuration by starting the training on the local Leonhard instance:
$HOME/world-models/ --modeldir --modelconfigdir $HOME/training_configs/train_config1.json
$HOME/world-models/ --modeldir --modelconfigdir $HOME/training_configs/train_config1.json
If there are no errors, interrupt the training with CTRL+C; you can now submit it to the cluster.
@@ -104,6 +107,11 @@ You can access tensorboard in your browser at localhost:6993
The training job will save the models, samples and logs in the $SCRATCH/experiments folder.
You can delete failed runs from the experiments directory with the following script; the first argument is the experiments subdirectory name and the second is the run name (read it from TensorBoard):
$HOME/world-models/utils/ shvae_temporalmask lwtvae_l3_D20200112T183921
## 5. Generate Moving MNIST dataset
You can generate a custom toy dataset of MNIST digits moving on a black frame and bouncing off the borders with the following script:
@@ -125,7 +133,37 @@ For large datasets, run the script on the cluster:
bsub -o $HOME/job_logs -n 4 -R "rusage[mem=4096]" "$HOME/world-models/utils/ --num_videos 10000 --num_videos_val 300 --num_frames 10 --digits_dim 40 --frame_dim 256"
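The bouncing motion behind such a generator can be sketched as a position/velocity update with reflection at the frame borders (a minimal sketch, not the project's actual script; `frame_dim` and `digit_dim` mirror the `--frame_dim` and `--digits_dim` flags above, and the function name is illustrative):

```python
def step(pos, vel, frame_dim=256, digit_dim=40):
    """Advance a digit one frame, reflecting its velocity at the borders.

    pos/vel are per-axis lists (e.g. [x, y]); the digit occupies a
    digit_dim x digit_dim patch, so its top-left corner must stay in
    [0, frame_dim - digit_dim] on each axis.
    """
    limit = frame_dim - digit_dim
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        p += v
        if p < 0:                      # bounced off the low border
            p, v = -p, -v
        elif p > limit:                # bounced off the high border
            p, v = 2 * limit - p, -v
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel
```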
## 6. Generate interframe differences dataset
mkdir $SCRATCH/data
mkdir $SCRATCH/data/$DATASET_NAME/video
mkdir $SCRATCH/data/$DATASET_NAME/train
mkdir $SCRATCH/data/$DATASET_NAME/train/nolabel
mkdir $SCRATCH/data/$DATASET_NAME/val
mkdir $SCRATCH/data/$DATASET_NAME/val/nolabel
mkdir $SCRATCH/data/$DATASET_NAME/test
mkdir $SCRATCH/data/$DATASET_NAME/test/nolabel
This structure is required for the dataset loader to load each picture (it expects a folder containing subfolders for each class, in our case we have only one class "nolabel").
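The same layout can be created in one go with Python's pathlib (a sketch equivalent to the mkdir calls above; the root path and function name are illustrative):

```python
from pathlib import Path

def make_dataset_dirs(root: str, name: str) -> None:
    """Create the video/train/val/test layout expected by the loader."""
    base = Path(root) / name
    (base / "video").mkdir(parents=True, exist_ok=True)
    for split in ("train", "val", "test"):
        # One dummy class subfolder per split, since ImageFolder-style
        # loaders expect one subdirectory per class.
        (base / split / "nolabel").mkdir(parents=True, exist_ok=True)
```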
Load one video to $SCRATCH/data/$DATASET_NAME/video using winSCP or similar.
E.g. copy video files from a local directory (ON YOUR LOCAL MACHINE) to the target directory on the cluster:
scp *.mp4 <user>@<cluster-login-node>:$SCRATCH/data/mini_mice_dataset/video
Generate interframe differences:
module purge
module load gcc/4.8.5 python_gpu/3.7.1
python $HOME/world-models/video_frame/
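The idea of interframe differences can be sketched with NumPy (a minimal sketch of the technique, not the truncated script above): subtracting consecutive frames leaves only the pixels that changed between them.

```python
import numpy as np

def interframe_differences(frames: np.ndarray) -> np.ndarray:
    """Return absolute differences between consecutive frames.

    frames: uint8 array of shape (T, H, W); output has shape (T-1, H, W).
    Cast to a signed type first so the subtraction does not wrap around.
    """
    diffs = np.diff(frames.astype(np.int16), axis=0)
    return np.abs(diffs).astype(np.uint8)
```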
# PyTorch implementation of "WorldModels"
# Make the bash file executable: chmod u+x
echo "Loading modules"
echo $1
rm -r $SCRATCH/experiments/shvae_temporalmask/tensorboard_logs/$1
rm -r $SCRATCH/experiments/shvae_temporalmask/models/$1
echo $2
rm -r $SCRATCH/experiments/$1/tensorboard_logs/$2
rm -r $SCRATCH/experiments/$1/models/$2
@@ -4,4 +4,4 @@ echo "Loading modules"
module purge
module load gcc/4.8.5 python_cpu/3.7.1
echo "Starting tensorboard"
tensorboard --logdir $SCRATCH/$1/tensorboard_logs
\ No newline at end of file
tensorboard --logdir $SCRATCH/$1/tensorboard_logs --port 7656
\ No newline at end of file