Unverified Commit feae3851 authored by eqy, committed by GitHub

Minor `README.md` edit + docs update from @crcrpar (#1334)



* update ngc link and dockerhub container tag

* update

* update

* update

* Update README.md
Co-authored-by: Masaki Kozuki <mkozuki@nvidia.com>
parent 7950a82d
# Introduction
This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in Pytorch.
Some of the code here will be included in upstream Pytorch eventually.
The intent of Apex is to make up-to-date utilities available to users as quickly as possible.
## Full API Documentation: [https://nvidia.github.io/apex](https://nvidia.github.io/apex)
...@@ -98,49 +96,48 @@ amp.load_state_dict(checkpoint['amp'])
Note that we recommend restoring the model using the same `opt_level`. Also note that we recommend calling the `load_state_dict` methods after `amp.initialize`.
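For illustration, a minimal sketch of that ordering; the model, optimizer, and checkpoint path below are placeholders, not part of this diff:

```python
import torch
from apex import amp

# Placeholder model and optimizer; any module/optimizer pair works the same way.
model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Initialize amp first, with the same opt_level used when the checkpoint was saved.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

# Then restore the model, optimizer, and amp states.
checkpoint = torch.load("amp_checkpoint.pt")  # hypothetical checkpoint path
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
amp.load_state_dict(checkpoint["amp"])
```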
# Installation
## Containers
NVIDIA PyTorch Containers are available on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch.
The containers come with all the custom extensions available at the moment.

See [the NGC documentation](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html) for details such as:
- how to pull a container
- how to run a pulled container
- release notes
## From Source
To install Apex from source, we recommend using the nightly Pytorch obtainable from https://github.com/pytorch/pytorch.

The latest stable release obtainable from https://pytorch.org should also work.
### Linux
For performance and full functionality, we recommend installing Apex with
CUDA and C++ extensions via
```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
Apex also supports a Python-only build via
```bash
pip install -v --disable-pip-version-check --no-cache-dir ./
```
A Python-only build omits:
- Fused kernels required to use `apex.optimizers.FusedAdam`.
- Fused kernels required to use `apex.normalization.FusedLayerNorm` and `apex.normalization.FusedRMSNorm`.
- Fused kernels that improve the performance and numerical stability of `apex.parallel.SyncBatchNorm`.
- Fused kernels that improve the performance of `apex.parallel.DistributedDataParallel` and `apex.amp`.
`DistributedDataParallel`, `amp`, and `SyncBatchNorm` will still be usable, but they may be slower.
Pyprof support has been moved to its own [dedicated repository](https://github.com/NVIDIA/PyProf).
Pyprof is deprecated in Apex and the pyprof directory will be removed by the end of June 2022.
### [Experimental] Windows
`pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .` may work if you were able to build Pytorch from source
on your system. A Python-only build via `pip install -v --no-cache-dir .` is more likely to work.
If you installed Pytorch in a Conda environment, make sure to install Apex in that same environment.
# Base image must at least have pytorch and CUDA installed.
ARG BASE_IMAGE=nvcr.io/nvidia/pytorch:22.02-py3
FROM $BASE_IMAGE
ARG BASE_IMAGE
RUN echo "Installing Apex on top of ${BASE_IMAGE}"
...
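A hypothetical build invocation for this Dockerfile, assuming it is run from the directory containing it; the image name and the base-image override are illustrative:

```bash
# Build Apex on top of the default base image declared in the Dockerfile...
docker build -t apex-dev .

# ...or override the base image via the BASE_IMAGE build argument.
docker build --build-arg BASE_IMAGE=nvcr.io/nvidia/pytorch:22.02-py3 -t apex-dev .
```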