(`--build-arg env_det_num_threads=6` and `--build-arg env_det_verbose=1` are optional and override the provided default parameters.)
The docker container expects data and models in its own `/opt/data` and `/opt/models` directories respectively.
The directories need to be mounted via docker `-v`. For simplicity and speed, the environment variables `det_data` and `det_models` can be set on the host system to point to the desired directories. To run:
```bash
docker run --gpus all -v /path/to/data/on/pc:/opt/data -v /path/to/models/on/pc:/opt/models -it nndetection:0.1 /bin/bash
```
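For example, the `det_data` and `det_models` variables could be set like this before starting the container (the paths below are hypothetical; point them at your actual data and model folders):

```shell
# Hypothetical host paths; adjust to your system.
export det_data=/home/user/nndet_data
export det_models=/home/user/nndet_models
```

Adding these lines to `~/.bashrc` makes them persist across shell sessions.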
If nnDetection is already configured on the host PC, the following command can be used to start the container with the correct paths:
```bash
docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it nndetection:0.1 /bin/bash
```
After activating the environment via `. /activate` inside the container, training or inference scripts can be executed with the usual commands (see below).
Warning:
1. The current PyTorch versions do not support the 3D convolution speed-up, so a PyTorch build compiled from source will run faster than this container.
2. When running a training inside the container it is necessary to [increase the shared memory](https://stackoverflow.com/questions/30210362/how-to-increase-the-size-of-the-dev-shm-in-docker-container) (via `--shm-size`).
I tested the following configuration on my local workstation:
```bash
docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash
```
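To verify that the shared-memory setting took effect, the mounted size of `/dev/shm` can be checked from inside the container (this is a generic Linux check, not an nnDetection-specific command):

```shell
# Report the size of /dev/shm inside the container;
# it should match the value passed via --shm-size (e.g. 24G).
df -h /dev/shm
```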