Commit 6b8022df authored by Rafael Dätwyler

Added dockerfile and updated readme

parent d6abd9ce
FROM python:3.7
# System packages: git, build tools (swig, g++, cmake) needed to compile box2d-py, and xvfb/python-opengl for headless rendering of gym environments
RUN apt-get update && apt-get install -y git swig g++ cmake xvfb python-opengl
# Python dependencies
RUN pip install torch \
    torchvision \
    cma \
    argparse \
    gym \
    box2d-py \
    tqdm \
    numpy
# The project directory is mounted here at runtime (see the README)
WORKDIR /app
CMD ["bash"]
@@ -5,13 +5,29 @@ Paper: Ha and Schmidhuber, "World Models", 2018. https://doi.org/10.5281/zenodo.
## Prerequisites
The implementation is based on Python 3 and PyTorch; check the PyTorch website [here](https://pytorch.org) for installation instructions. The rest of the requirements are listed in the [requirements file](requirements.txt).

First, clone the project files from GitLab (you will need to enter your credentials):
```bash
git clone https://gitlab.ethz.ch/deep-learning-rodent/world-models.git
```
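By default, `git clone` places the files in a directory named after the repository; the remaining steps are run from there (a small convenience step, assuming the default clone directory name):
```bash
cd world-models
```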
From the project directory, execute the following command to build the Docker image; this installs all dependencies inside the image. It might take a while, but you only need to do this once. (If you prefer not to use Docker, the requirements can instead be installed directly with `pip3 install -r requirements.txt`.)
```bash
docker build -t deep-learning:worldmodels .
```
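As a quick sanity check that the build succeeded, you can list the image and confirm that the preinstalled packages import (a hedged example; it relies only on the image tag defined above):
```bash
# Confirm the image exists
docker images deep-learning

# Confirm PyTorch and Gym import inside the container
docker run --rm deep-learning:worldmodels python -c "import torch, gym; print(torch.__version__, gym.__version__)"
```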
To start the container, run the following command in the project directory, depending on your OS.

Windows (PowerShell):
```bash
docker run -it --rm -v ${pwd}:/app deep-learning:worldmodels
```

Linux:
```bash
docker run -it --rm -v $(pwd):/app deep-learning:worldmodels
```
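The `-v` flag mounts the project directory into the container at `/app` (the image's working directory), so changes to the code on the host are immediately visible inside the container. If you have an NVIDIA GPU and the NVIDIA Container Toolkit installed (an assumption; the image itself does not require a GPU), the device can be exposed to the container like this:
```bash
docker run -it --rm --gpus all -v $(pwd):/app deep-learning:worldmodels
```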
## Running the world models
To run the model, start the Docker container (see above) and execute the commands inside the container.
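For example, a training run inside the container might look like the sketch below. The script name `train.py` is purely illustrative (the project's actual entry points are not shown in this excerpt); `xvfb-run` provides the virtual display that the image's `xvfb`/`python-opengl` packages are installed for:
```bash
# Hypothetical entry point -- replace train.py with the project's actual training script
xvfb-run -s "-screen 0 1400x900x24" python train.py
```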
The model is composed of three parts:
1. A Variational Auto-Encoder (VAE), whose task is to compress the input images into a compact latent representation.
2. A Mixture-Density Recurrent Network (MDN-RNN), which models the world dynamics by predicting the next latent state from the current latent state and action.
3. A Controller, a simple policy trained with CMA-ES that maps the latent state and the RNN's hidden state to an action.