Commit 1e7f6b60 authored by Rafael Daetwyler's avatar Rafael Daetwyler

updated readme

parent b4db30ea
@@ -125,8 +125,31 @@

For large datasets, run the script on the cluster:
```
bsub -o $HOME/job_logs -n 4 -R "rusage[mem=4096]" "$HOME/world-models/utils/ --num_videos 10000 --num_videos_val 300 --num_frames 10 --digits_dim 40 --frame_dim 256"
```
## 6. Running the RNN
Once you have the latent variables from the VAE, you can use them to train the RNN.
First, create a folder for the RNN:
```
mkdir $SCRATCH/data/lstm
```
Afterwards, copy the dataset (containing the latent variables) into this folder. If you need to upload them from your local machine, use the following command:
```
scp $$USERNAME/data/lstm
```
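As a reminder of the general `scp` form (the username, hostname, and paths below are placeholders for illustration, not the actual cluster details), the pieces fit together like this:

```shell
# Assemble an scp upload command from its parts; user, host, and paths
# are placeholders for illustration only.
build_scp_cmd() {
  local src="$1" user="$2" host="$3" dest="$4"
  # -r copies directories recursively; the destination is user@host:path
  echo "scp -r $src $user@$host:$dest"
}

# Prints: scp -r ./data/lstm $USERNAME@cluster.example.com:$SCRATCH/data/lstm
build_scp_cmd ./data/lstm "\$USERNAME" cluster.example.com "\$SCRATCH/data/lstm"
```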
Then copy the RNN config template into your training_configs folder:
```
cd $HOME
cp world-models/template_config_rnn.json training_configs/rnn_config1.json
```
Edit the config file to fit your parameters. Then do a test run to check that the script works:
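A stray comma or quote while editing is easy to miss and will only surface when the job fails; piping the file through `python3 -m json.tool` catches syntax errors up front. A minimal sketch (the file and field names here are made up for illustration, not the real template keys):

```shell
# Sanity-check a config file's JSON syntax before submitting a job.
# The config contents below are illustrative placeholders.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "sequence_length": 10,
  "hidden_units": 256
}
EOF
# python3 -m json.tool exits non-zero on malformed JSON
python3 -m json.tool "$cfg" > /dev/null && echo "config OK"
rm -f "$cfg"
```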
```
$HOME/world-models/ --modeldir --modelconfigdir $HOME/training_configs/rnn_config1.json
```
If no errors appear, abort the run and submit the job to the cluster:
```
bsub -o job_logs -n 4 -R "rusage[ngpus_excl_p=1,mem=4096]" "$HOME/world-models/ --modeldir --modelconfigdir $HOME/training_configs/rnn_config1.json"
```
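Once the job is queued, LSF's standard tools let you track it: `bjobs` lists your jobs and their states, `bpeek <jobid>` shows the stdout of a running job, and `bkill <jobid>` cancels one. A small guarded sketch (a no-op on machines without LSF):

```shell
# Check on submitted jobs; guarded so this is safe to run anywhere.
if command -v bjobs >/dev/null 2>&1; then
  bjobs   # list your jobs and their states (PEND/RUN/DONE)
else
  echo "LSF not available on this machine"
fi
```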
# PyTorch implementation of "WorldModels"