## Create a new container with Apex
**Dockerfile** installs the latest Apex on top of an existing image. Run
```
docker build -t image_with_apex .
```
By default, **Dockerfile** uses NVIDIA's Pytorch container as the base image,
which requires an NVIDIA GPU Cloud (NGC) account. If you don't have an NGC account, you can sign up for free by following the instructions [here](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html#generating-api-key).
Alternatively, you can supply your own base image via the `BASE_IMAGE` build-arg.
Any `BASE_IMAGE` you supply must have Pytorch and CUDA installed.
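For example, a build that overrides the base image might look like the following; the tag shown is purely illustrative (note the `-devel` suffix, which indicates the CUDA toolkit and NVCC are included):

```
# The tag below is an illustrative example of a -devel image, not a recommendation.
docker build --build-arg BASE_IMAGE=pytorch/pytorch:1.3-cuda10.1-cudnn7-devel -t image_with_apex .
```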
**base_images.md** provides guidance on base images to use in the `FROM <base image>` line of **Dockerfile**.
If you want to rebuild your image, and force the latest Apex to be cloned and installed, make any small change to the `SHA` variable in **Dockerfile**.
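This works because Docker caches build layers, and the clone-and-install layer is only re-run when something earlier in the file changes. As an illustration only (the actual declaration in your copy of **Dockerfile** may differ), the variable might look like:

```
# Hypothetical line from Dockerfile; editing the value invalidates Docker's
# layer cache, forcing the subsequent clone and install of Apex to re-run.
ARG SHA=1
```

Changing `1` to any other value forces a fresh clone on the next build.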
**Warning:**
Currently, Pytorch's default non-devel image on Dockerhub,
[pytorch/pytorch:0.4_cuda9_cudnn7](https://hub.docker.com/r/pytorch/pytorch/tags/), contains Pytorch installed from prebuilt binaries. It does not contain NVCC, and is therefore not an eligible candidate for `<base image>`.
## Install Apex in a running container
Instead of building a new container, it is also possible to clone Apex on bare metal and mount the cloned repo into your container at launch by running, for example,
```
docker run --runtime=nvidia -it --rm --ipc=host -v /bare/metal/apex:/apex/in/container <base image>
```
then go to /apex/in/container within the running container and run `python setup.py install`.
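Concretely, inside the running container the install amounts to the following sketch:

```
# Run inside the container started above.
cd /apex/in/container
python setup.py install
```

Apex's setup has historically also accepted optional extension flags (e.g. `--cuda_ext --cpp_ext`); check the Apex README for the flags supported by your checkout.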
As with **Dockerfile**, `<base image>` must have Pytorch and CUDA installed.
If you have an NGC account, you can use Nvidia's official Pytorch container
```
nvcr.io/nvidia/pytorch:18.04-py3
```
as `<base image>`.
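For instance, substituting this image into the `docker run` command above gives:

```
docker run --runtime=nvidia -it --rm --ipc=host -v /bare/metal/apex:/apex/in/container nvcr.io/nvidia/pytorch:18.04-py3
```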
An alternative is to first
[build a local Pytorch image](https://github.com/pytorch/pytorch#docker-image) using Pytorch's Dockerfile on Github, running `docker build` from the root of your cloned Pytorch repo.
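A sketch of that build, assuming the Dockerfile sits at `docker/pytorch/Dockerfile` in the repo (this path is an assumption and may vary across Pytorch versions):

```
# Run from the root of the cloned Pytorch repo; the -f path is an assumption.
docker build -t my_pytorch_image -f docker/pytorch/Dockerfile .
```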
`my_pytorch_image` will contain CUDA, and can be used as `<base image>`.