"vscode:/vscode.git/clone" did not exist on "731ea53c803ce00fbfd7531b4f1c6bf263a1f6b4"
Unverified commit 5042dc00, authored by Shaden Smith and committed by GitHub

drafting Jekyll webpage (#143)

parent d6bc44bf
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
import os
import sys
# -- Project information -----------------------------------------------------
project = 'DeepSpeed'
copyright = '2020, Microsoft AI & Research'
author = 'Microsoft AI & Research'
# The full version, including alpha/beta/rc tags
release = '0.1.0'
master_doc = 'index'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'recommonmark',
'sphinx_rtd_theme',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# GitHub integration
html_context = {
"display_github": True,
"github_user": "microsoft",
"github_repo": "DeepSpeed",
"github_version": "master",
"conf_py_path": "/docs/code-docs/source/",
}
# Mock imports so we don't have to install torch to build the docs.
from unittest.mock import MagicMock
sys.path.insert(0, os.path.abspath('../../../'))
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return MagicMock()
MOCK_MODULES = [
'torch',
'torch.utils',
'torch.utils.data',
'torch.utils.data.distributed',
'torch._utils',
'torch.cuda',
'torch.nn.modules',
'torch.nn',
'torch.distributed',
'torch.distributed.distributed_c10d',
'torch.optim',
'torch._six'
]
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
deepspeed.pt package
====================
Submodules
----------
deepspeed.pt.deepspeed\_config module
-------------------------------------
.. automodule:: deepspeed.pt.deepspeed_config
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_constants module
----------------------------------------
.. automodule:: deepspeed.pt.deepspeed_constants
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_csr\_tensor module
------------------------------------------
.. automodule:: deepspeed.pt.deepspeed_csr_tensor
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_dataloader module
-----------------------------------------
.. automodule:: deepspeed.pt.deepspeed_dataloader
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_fused\_lamb module
------------------------------------------
.. automodule:: deepspeed.pt.deepspeed_fused_lamb
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_launch module
-------------------------------------
.. automodule:: deepspeed.pt.deepspeed_launch
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_light module
------------------------------------
.. automodule:: deepspeed.pt.deepspeed_light
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_lr\_schedules module
--------------------------------------------
.. automodule:: deepspeed.pt.deepspeed_lr_schedules
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_run module
----------------------------------
.. automodule:: deepspeed.pt.deepspeed_run
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_timer module
------------------------------------
.. automodule:: deepspeed.pt.deepspeed_timer
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_utils module
------------------------------------
.. automodule:: deepspeed.pt.deepspeed_utils
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.deepspeed\_zero\_optimizer module
----------------------------------------------
.. automodule:: deepspeed.pt.deepspeed_zero_optimizer
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.fp16\_optimizer module
-----------------------------------
.. automodule:: deepspeed.pt.fp16_optimizer
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.fp16\_unfused\_optimizer module
--------------------------------------------
.. automodule:: deepspeed.pt.fp16_unfused_optimizer
:members:
:undoc-members:
:show-inheritance:
deepspeed.pt.loss\_scaler module
--------------------------------
.. automodule:: deepspeed.pt.loss_scaler
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: deepspeed.pt
:members:
:undoc-members:
:show-inheritance:
deepspeed package
=================
Subpackages
-----------
.. toctree::
deepspeed.pt
Module contents
---------------
.. automodule:: deepspeed
:members:
:undoc-members:
:show-inheritance:
DeepSpeed
=========
.. toctree::
:maxdepth: 2
:caption: Contents:
modules
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
deepspeed
=========
.. toctree::
:maxdepth: 4
deepspeed
# PyTorch DeepSpeed Config JSON Documentation
## REQUIRED DeepSpeed Config JSON Parameters
***train\_batch\_size***: [integer]
| Description | Example |
| ------------------------------------------------------------ | ------- |
| The effective training batch size. This is the number of data samples that leads to one step of model update. ***train\_batch\_size*** is the product of the batch size that a single GPU processes in one forward/backward pass (a.k.a., ***train\_micro\_batch\_size\_per\_gpu***), the gradient accumulation steps (a.k.a., ***gradient\_accumulation\_steps***), and the number of GPUs. | `32` |
## OPTIONAL DeepSpeed Config JSON Parameters
### Batch Size Related Parameters
***train\_micro\_batch\_size\_per\_gpu***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ---------------------------- |
| Batch size to be processed by one GPU in one step (without gradient accumulation). When specified, ***gradient\_accumulation\_steps*** is automatically calculated using ***train\_batch\_size*** and the number of GPUs. Should not be concurrently specified with ***gradient\_accumulation\_steps*** in the configuration JSON. | ***train\_batch\_size*** value |
***gradient\_accumulation\_steps***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Number of training steps to accumulate gradients before averaging and applying them. This feature is sometimes useful to improve scalability since it results in less frequent communication of gradients between steps. Another impact of this feature is the ability to train with larger batch sizes per GPU. When specified, ***train\_micro\_batch\_size\_per\_gpu*** is automatically calculated using ***train\_batch\_size*** and the number of GPUs. Should not be concurrently specified with ***train\_micro\_batch\_size\_per\_gpu*** in the configuration JSON. | `1` |
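To make the relationship between these three values concrete, here is a small sketch in Python. The GPU count and per-GPU values below are illustrative assumptions, not DeepSpeed defaults:

```python
# Sketch of the batch-size relationship described above.
# The GPU count and per-GPU values are assumed examples, not DeepSpeed defaults.
num_gpus = 8
train_micro_batch_size_per_gpu = 2   # samples per GPU per forward/backward pass
gradient_accumulation_steps = 2      # passes accumulated before each model update

# The effective batch size aggregates all three factors.
train_batch_size = (train_micro_batch_size_per_gpu
                    * gradient_accumulation_steps
                    * num_gpus)
print(train_batch_size)  # 32
```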
### Optimizer Parameters
***optimizer***: [dictionary]
| Fields | Value | Example |
| ------ | ------------------------------------------------------------ | ------------------------------ |
| type | The optimizer name. DeepSpeed natively supports Adam and LAMB optimizers and will import other optimizers from [torch](https://pytorch.org/docs/stable/optim.html). | `"Adam"` |
| params | Dictionary of parameters to instantiate the optimizer. The parameter names must match the optimizer constructor signature (e.g., for [Adam](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam)). | `{"lr": 0.001, "eps": 1e-8}` |
Example of ***optimizer***
```json
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.001,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
```
### Scheduler Parameters
***scheduler***: [dictionary]
| Fields | Value | Example |
| ------ | ------------------------------------------------------------ | ------------------------------ |
| type | The scheduler name. See [here](https://microsoft.github.io/DeepSpeed/docs/htmlfiles/api/full/pt/deepspeed_lr_schedules.m.html) for a list of supported schedulers. | `"1Cycle"` |
| params | Dictionary of parameters to instantiate the scheduler. The parameter names should match the scheduler constructor signature. | `{"lr": 0.001, "eps": 1e-8}` |
Example of ***scheduler***
```json
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
```
### Communication options
***fp32\_allreduce***: [boolean]
| Description | Default |
| ------------------------------------ | ------- |
| During gradient averaging perform allreduce with 32 bit values | `false` |
***disable\_allgather***: [boolean]
| Description | Default |
| ---------------------------- | ------- |
| Disable allgather when using ZeRO optimizer and instead use broadcast | `false` |
***prescale\_gradients***: [boolean]
| Description | Default |
| -------------------------------------- | ------- |
| Scale gradients before doing allreduce | `false` |
***sparse\_gradients***: [boolean]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Enable sparse compression of [torch.nn.Embedding](https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding) gradients. | `false` |
### FP16 training options
***zero\_optimization***: [boolean]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Enable ZeRO memory optimization wrapper for FP16 Training. Currently compatible only with Adam optimizer. | `false` |
***fp16***: [dictionary]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Configuration for using mixed precision/FP16 training that leverages [NVIDIA's Apex package](https://nvidia.github.io/apex/). An example, including the available dictionary keys is illustrated below. | None |
```json
"fp16": {
"enabled": true,
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
}
```
***fp16:enabled***: [boolean]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***enabled*** is an **fp16** parameter indicating whether or not FP16 training is enabled. | `false` |
***fp16:loss\_scale***: [float]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***loss\_scale*** is an ***fp16*** parameter representing the loss scaling value for FP16 training. The default value of 0.0 results in dynamic loss scaling; otherwise the value will be used for static fixed loss scaling. | `0.0` |
***fp16:initial\_scale\_power***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***initial\_scale\_power*** is an **fp16** parameter representing the power of the initial dynamic loss scale value. The actual loss scale is computed as 2<sup>***initial\_scale\_power***</sup>. | `32` |
***fp16:loss\_scale\_window***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***loss\_scale\_window*** is an **fp16** parameter representing the window over which to raise/lower the dynamic loss scale value. | `1000` |
***fp16:hysteresis***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***hysteresis*** is an **fp16** parameter representing the delay shift in dynamic loss scaling. | `2` |
***fp16:min\_loss\_scale***: [integer]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| ***min\_loss\_scale*** is an **fp16** parameter representing the minimum dynamic loss scale value. | `1` |
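To give a rough intuition for how these dynamic loss scaling parameters interact, below is a simplified sketch. It only illustrates the general idea described above; the class name and update rule are illustrative assumptions, not DeepSpeed's actual loss scaler implementation:

```python
# Simplified illustration of how the fp16 parameters above interact during
# dynamic loss scaling. This is NOT DeepSpeed's implementation, only the idea.
class ToyLossScaler:
    def __init__(self, initial_scale_power=32, loss_scale_window=1000,
                 hysteresis=2, min_loss_scale=1):
        self.scale = 2.0 ** initial_scale_power  # starting dynamic loss scale
        self.window = loss_scale_window          # stable steps before raising the scale
        self.hysteresis = hysteresis             # overflows tolerated before lowering it
        self.min_scale = min_loss_scale
        self._good_steps = 0
        self._overflow_budget = hysteresis

    def update(self, overflow: bool) -> None:
        if overflow:
            self._good_steps = 0
            self._overflow_budget -= 1
            if self._overflow_budget <= 0:
                self.scale = max(self.scale / 2.0, self.min_scale)
                self._overflow_budget = self.hysteresis
        else:
            self._good_steps += 1
            if self._good_steps % self.window == 0:
                self.scale *= 2.0

scaler = ToyLossScaler()
print(scaler.scale)  # 2 ** 32
```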
### Gradient Clipping
***gradient\_clipping***: [float]
| Description | Default |
| ----------------------------------- | ------- |
| Enable gradient clipping with the specified value | `0` |
### Logging
***steps\_per\_print***: [integer]
| Description | Default |
| ----------- | ------- |
| Print train loss every N steps | `10` |
***wall\_clock\_breakdown***: [boolean]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Enable timing of the latency of forward/backward/update training phases | `false` |
***dump\_state***: [boolean]
| Description | Default |
| ------------------------------------------------------------ | ------- |
| Print out state information of DeepSpeed object after initialization | `false` |
# Feature Overview
---
title: "Feature Overview"
layout: single
toc: true
toc_label: "Contents"
---
* [Distributed Training with Mixed Precision](#distributed-training-with-mixed-precision)
* 16-bit mixed precision
......
# Tutorial: 1-Cycle Schedule
This tutorial shows how to implement 1Cycle schedules for learning rate and
momentum in PyTorch.
## 1-Cycle Schedule
Recent research has demonstrated that the slow convergence problems of large
batch size training can be addressed by tuning critical hyperparameters, such
as learning rate and momentum, during training using cyclic and decay
schedules. In DeepSpeed, we have implemented a state-of-the-art schedule called
[1-Cycle](https://arxiv.org/abs/1803.09820) to help data scientists
effectively use larger batch sizes to train their models in PyTorch.
## Prerequisites
To use the 1-cycle schedule for model training, you should satisfy these two requirements:
1. Integrate DeepSpeed into your training script using this
[guide](../../README.md#getting-started).
2. Add the parameters to configure a 1-Cycle schedule to the parameters of your
model. We will define the 1-Cycle parameters below.
## Overview
The 1-cycle schedule operates in two phases, a cycle phase and a decay phase,
which span one iteration over the training data. For concreteness, we will
review how 1-cycle schedule of learning rate works. In the cycle phase,
the learning rate oscillates between a minimum value and a maximum value over a
number of training steps. In the decay phase, the learning rate decays starting
from the minimum value of the cycle phase. An example of 1-cycle learning rate
schedule during model training is illustrated below.
![1cycle_lr](../figures/1cycle_lr.png)
### 1-Cycle Parameters
The 1-Cycle schedule is defined by a number of parameters that allow users to
explore different configurations. The literature recommends tuning learning
rate and momentum concurrently because they are correlated hyperparameters. We
have leveraged this recommendation to reduce the configuration burden by
organizing the 1-cycle parameters into two groups:
1. Global parameters for configuring the cycle and decay phase
2. Local parameters for configuring learning rate and momentum
The global parameters for configuring the 1-cycle phases are:
1. `cycle_first_step_size`: The count of training steps to complete the first step of the cycle phase
2. `cycle_first_stair_count`: The count of updates (or stairs) in the first step of the cycle phase
3. `cycle_second_step_size`: The count of training steps to complete the second step of the cycle phase
4. `cycle_second_stair_count`: The count of updates (or stairs) in the second step of the cycle phase
5. `post_cycle_decay_step_size`: The interval, in training steps, at which to decay the hyperparameter in the decay phase
The local parameters for the hyperparameters are:
**Learning rate**:
1. `cycle_min_lr`: minimum learning rate in cycle phase
2. `cycle_max_lr`: maximum learning rate in cycle phase
3. `decay_lr_rate`: decay rate for learning rate in decay phase
Although appropriate `cycle_min_lr` and `cycle_max_lr` values can be selected
based on experience or expertise, we recommend using the [learning rate
range test](lrrt.md) feature of DeepSpeed to configure them.
**Momentum**
1. `cycle_min_mom`: minimum momentum in cycle phase
2. `cycle_max_mom`: maximum momentum in cycle phase
3. `decay_mom_rate`: decay rate for momentum in decay phase
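To see how these parameters shape the schedule, here is a rough sketch of a 1-cycle learning rate rule. It is only an illustration of the phases described above: the stair-count parameters are ignored, the decay rule is an assumed multiplicative decay, and the function name is hypothetical. It is not DeepSpeed's OneCycle implementation.

```python
# Rough sketch of the learning-rate shape implied by the 1-Cycle parameters above.
# Illustration only: stair counts are ignored and the decay rule is an assumed
# multiplicative decay. This is not DeepSpeed's OneCycle implementation.
def one_cycle_lr(step,
                 cycle_first_step_size=1000,
                 cycle_second_step_size=1000,
                 post_cycle_decay_step_size=1000,
                 cycle_min_lr=0.0001,
                 cycle_max_lr=0.0010,
                 decay_lr_rate=0.001):
    if step < cycle_first_step_size:
        # First half of the cycle phase: ramp from the minimum to the maximum LR.
        frac = step / cycle_first_step_size
        return cycle_min_lr + frac * (cycle_max_lr - cycle_min_lr)
    if step < cycle_first_step_size + cycle_second_step_size:
        # Second half of the cycle phase: ramp back down to the minimum LR.
        frac = (step - cycle_first_step_size) / cycle_second_step_size
        return cycle_max_lr - frac * (cycle_max_lr - cycle_min_lr)
    # Decay phase: decay from the cycle minimum at fixed step intervals.
    intervals = (step - cycle_first_step_size
                 - cycle_second_step_size) // post_cycle_decay_step_size
    return cycle_min_lr * (1.0 - decay_lr_rate) ** intervals

for s in (0, 500, 1000, 1500, 2000, 5000):
    print(s, round(one_cycle_lr(s), 6))
```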
## Required Model Configuration Changes
To illustrate the required model configuration changes to use 1-Cycle schedule
in model training, we will use a schedule with the following properties:
1. A symmetric cycle phase, where each half of the cycle spans the same number
of training steps. For this example, it will take 1000 training steps for the
learning rate to increase from 0.0001 to 0.0010 (10X scale), and then to
decrease back to 0.0001. The momentum will correspondingly cycle between 0.85
and 0.99 in a similar number of steps.
2. A decay phase, where learning rate decays by 0.001 every 1000 steps, while
momentum is not decayed.
Note that these parameters are processed by DeepSpeed as session parameters,
and so should be added to the appropriate section of the model configuration.
### **PyTorch model**
PyTorch versions 1.0.1 and newer provide a feature for implementing schedulers
for hyper-parameters, called [learning rate
schedulers](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html).
We have implemented the 1-Cycle schedule using this feature. You will add a
scheduler entry of type **"OneCycle"** as illustrated below.
```json
"scheduler": {
"type": "OneCycle",
"params": {
"cycle_first_step_size": 1000,
"cycle_first_stair_count": 500,
"cycle_second_step_size": 1000,
"cycle_second_stair_count": 500,
"decay_step_size": 1000,
"cycle_min_lr": 0.0001,
"cycle_max_lr": 0.0010,
"decay_lr_rate": 0.001,
"cycle_min_mom": 0.85,
"cycle_max_mom": 0.99,
"decay_mom_rate": 0.0
}
},
```
## Batch Scaling Example
As an example of how the 1-Cycle schedule can enable effective batch scaling, we
briefly share our experience with an internal model at Microsoft. In this case,
the model was well-tuned for fast convergence (in data samples) on a single
GPU, but was converging slowly to target performance (AUC) when training on 8
GPUs (8X batch size). The plot below shows model convergence with 8 GPUs for
these learning rate schedules:
1. **Fixed**: using an optimal fixed learning rate for 1-GPU training.
2. **LinearScale**: using a fixed learning rate that is 8X of **Fixed**.
3. **1Cycle**: using 1-Cycle schedule.
![model_convergence](../figures/model_convergence.png)
With **1Cycle**, the model converges faster than the other schedules to the
target AUC. In fact, **1Cycle** converges as fast as the optimal 1-GPU
training (not shown). For **Fixed**, convergence is about 5X slower (needs 5X
more data samples). With **LinearScale**, the model diverges because the
learning rate is too high. The plot below illustrates the schedules by
reporting the learning rate values during 8-GPU training.
![lr_schedule](../figures/lr_schedule.png)
We see that the learning rate for **1Cycle** is always larger than **Fixed**
and is briefly larger than **LinearScale** to achieve faster convergence. Also
**1Cycle** lowers the learning rate later during training to avoid model
divergence, in contrast to **LinearScale**. In summary, by configuring an
appropriate 1-Cycle schedule we were able to effectively scale the training batch
size for this model by 8X without loss of convergence speed.
# Tutorial: Megatron-LM GPT2 with DeepSpeed
If you haven't already, we advise you to first read through the [Getting
Started](../../README.md#getting-started) guide before stepping through this
tutorial.
In this tutorial we will be adding DeepSpeed to the Megatron-LM GPT2 model, which
is a large, powerful transformer. Megatron-LM supports model-parallel and multi-node
training. Please see the corresponding paper for more details: [Megatron-LM:
Training Multi-Billion Parameter Language Models Using Model
Parallelism](https://arxiv.org/abs/1909.08053).
First, we discuss data and environment setup and how to train the GPT-2 model with the
original Megatron-LM. Next, we proceed step-by-step in enabling this model to run with
DeepSpeed. Finally, we demonstrate the **_performance gains_**, and **_memory footprint
reduction_** from using DeepSpeed.
## 1 Training GPT-2 with the Original Megatron-LM
The original model code is from
[Megatron-LM](https://github.com/NVIDIA/Megatron-LM). We've copied this repo
under
[DeepSpeedExamples/Megatron-LM/](https://github.com/microsoft/DeepSpeedExamples/tree/master/Megatron-LM)
and made it available as a submodule. To download, execute:
```bash
git submodule update --init --recursive
```
### 1.1 Training Data Setup
* Follow Megatron's [instructions](https://github.com/NVIDIA/Megatron-LM#collecting-gpt2-webtext-data)
to download the webtext data and place a symbolic link under `DeepSpeedExamples/Megatron-LM/data`:
### 1.2 Running Unmodified Megatron-LM GPT2 model
* For a single GPU run:
- change `scripts/pretrain_gpt2.sh` to set its `--train-data` argument to `"webtext"`.
- run `bash scripts/pretrain_gpt2.sh`
* For multiple GPUs and/or nodes run:
- change `scripts/pretrain_gpt2_model_parallel.sh`
- set its `--train-data` argument to `"webtext"`
- `GPUS_PER_NODE` indicates how many GPUs per node are involved in the testing
- `NNODES` indicates how many nodes are involved in the testing
- run `bash scripts/pretrain_gpt2_model_parallel.sh`
## 2 Enabling DeepSpeed
To use DeepSpeed we will modify three files:
* `arguments.py` : Arguments configurations
* `pretrain_gpt2.py` : Main entry point for training
* `utils.py` : Checkpoints saving and loading utilities
### 2.1 Argument Parsing
The first step in applying DeepSpeed is adding DeepSpeed arguments to the
Megatron-LM GPT2 model, using `deepspeed.add_config_arguments()` in
`arguments.py`.
```python
def get_args():
"""Parse all the args."""
parser = argparse.ArgumentParser(description='PyTorch BERT Model')
parser = add_model_config_args(parser)
parser = add_fp16_config_args(parser)
parser = add_training_args(parser)
parser = add_evaluation_args(parser)
parser = add_text_generate_args(parser)
parser = add_data_args(parser)
# Include DeepSpeed configuration arguments
parser = deepspeed.add_config_arguments(parser)
```
### 2.2 Initialization and Training
We modify `pretrain_gpt2.py` to enable training with DeepSpeed.
#### 2.2.1 Initialization
We use `deepspeed.initialize` to create `model_engine`, `optimizer` and LR
`scheduler`. Below is its definition:
```python
def initialize(args,
model,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
mpu=None,
dist_init_required=True,
collate_fn=None):
```
For the Megatron-LM GPT2 model, we initialize DeepSpeed in its
`setup_model_and_optimizer()` function as shown below, passing the raw `model`,
`optimizer`, `args`, `lr_scheduler`, and `mpu`.
```python
def setup_model_and_optimizer(args):
"""Setup model and optimizer."""
model = get_model(args)
optimizer = get_optimizer(model, args)
lr_scheduler = get_learning_rate_scheduler(optimizer, args)
if args.deepspeed:
import deepspeed
print_rank_0("DeepSpeed is enabled.")
model, optimizer, _, lr_scheduler = deepspeed.initialize(
model=model,
optimizer=optimizer,
args=args,
lr_scheduler=lr_scheduler,
mpu=mpu,
dist_init_required=False
)
```
Note that when FP16 is enabled, Megatron-LM GPT2 adds a wrapper to the `Adam`
optimizer. DeepSpeed has its own FP16 Optimizer, so we need to pass the `Adam`
optimizer to DeepSpeed directly without any wrapper. We return the unwrapped
Adam optimizer from `get_optimizer()` when DeepSpeed is enabled.
```python
def get_optimizer(model, args):
"""Setup the optimizer."""
......
# Use Adam.
optimizer = Adam(param_groups,
lr=args.lr, weight_decay=args.weight_decay)
if args.deepspeed:
# fp16 wrapper is not required for DeepSpeed.
return optimizer
```
#### 2.2.2 Using the Training API
The `model` returned by `deepspeed.initialize` is the _DeepSpeed Model Engine_
that we will use to train the model using the forward, backward and step API.
##### Forward Propagation
The forward propagation API is compatible with PyTorch and no change is required.
##### Backward Propagation
Backward propagation is done by calling `backward(loss)` directly on the model engine.
```python
def backward_step(optimizer, model, lm_loss, args, timers):
"""Backward step."""
# Total loss.
loss = lm_loss
# Backward pass.
if args.deepspeed:
model.backward(loss)
else:
optimizer.zero_grad()
if args.fp16:
optimizer.backward(loss, update_master_grads=False)
else:
loss.backward()
```
Zeroing the gradients is handled automatically by DeepSpeed after the weights
have been updated using a mini-batch.
Furthermore, DeepSpeed addresses distributed data parallel and FP16 under the
hood, simplifying code in multiple places.
(A) DeepSpeed also performs gradient averaging automatically at the gradient
accumulation boundaries, so we skip the allreduce communication.
```python
if args.deepspeed:
# DeepSpeed backward propagation already addressed all reduce communication.
# Reset the timer to avoid breaking timer logs below.
timers('allreduce').reset()
else:
torch.distributed.all_reduce(reduced_losses.data)
reduced_losses.data = reduced_losses.data / args.world_size
if not USE_TORCH_DDP:
timers('allreduce').start()
model.allreduce_params(reduce_after=False,
fp32_allreduce=args.fp32_allreduce)
timers('allreduce').stop()
```
(B) We also skip updating master gradients, since DeepSpeed addresses it internally.
```python
# Update master gradients.
if not args.deepspeed:
if args.fp16:
optimizer.update_master_grads()
# Clipping gradients helps prevent the exploding gradient.
if args.clip_grad > 0:
if not args.fp16:
mpu.clip_grad_norm(model.parameters(), args.clip_grad)
else:
optimizer.clip_master_grads(args.clip_grad)
return lm_loss_reduced
```
##### Updating the Model Parameters
The `step()` function in DeepSpeed engine updates the model parameters as well
as the learning rate.
```python
if args.deepspeed:
model.step()
else:
optimizer.step()
# Update learning rate.
if not (args.fp16 and optimizer.overflow):
lr_scheduler.step()
else:
skipped_iter = 1
```
##### Loss Scaling
The GPT2 training script logs the loss scaling value during training. Inside
the DeepSpeed optimizer, this value is stored as `cur_scale`, instead of
`loss_scale` as in Megatron's optimizer. Therefore, we replace it appropriately
in the logging string.
```python
if args.fp16:
log_string += ' loss scale {:.1f} |'.format(
optimizer.cur_scale if args.deepspeed else optimizer.loss_scale)
```
### 2.3 Checkpoints Saving & Loading
The DeepSpeed engine has flexible APIs for checkpoint saving and loading, to handle
the states of both the client model and DeepSpeed's own internals.
```python
def save_checkpoint(self, save_dir, tag, client_state={})
def load_checkpoint(self, load_dir, tag)
```
Applying DeepSpeed requires updating `utils.py`, in which Megatron-LM GPT2 saves and
loads its checkpoints.
A new function `save_ds_checkpoint()` is created for DeepSpeed as shown below; it
collects the client model states and passes them to the DeepSpeed engine by calling
DeepSpeed's `save_checkpoint()`.
```python
def save_ds_checkpoint(iteration, model, args):
"""Save a model checkpoint."""
sd = {}
sd['iteration'] = iteration
# rng states.
if not args.no_save_rng:
sd['random_rng_state'] = random.getstate()
sd['np_rng_state'] = np.random.get_state()
sd['torch_rng_state'] = torch.get_rng_state()
sd['cuda_rng_state'] = torch.cuda.get_rng_state()
sd['rng_tracker_states'] = mpu.get_cuda_rng_tracker().get_states()
model.save_checkpoint(args.save, iteration, client_state = sd)
```
In the Megatron-LM GPT2 `save_checkpoint()` function, add the following lines to
invoke the above function for DeepSpeed.
```python
def save_checkpoint(iteration, model, optimizer,
lr_scheduler, args):
"""Save a model checkpoint."""
if args.deepspeed:
save_ds_checkpoint(iteration, model, args)
else:
......
```
In the `load_checkpoint()` function, use the DeepSpeed checkpoint loading API as below,
and return the states for the client model.
```python
def load_checkpoint(model, optimizer, lr_scheduler, args):
"""Load a model checkpoint."""
iteration, release = get_checkpoint_iteration(args)
if args.deepspeed:
checkpoint_name, sd = model.load_checkpoint(args.load, iteration)
if checkpoint_name is None:
if mpu.get_data_parallel_rank() == 0:
print("Unable to load checkpoint.")
return iteration
else:
......
```
### 2.4 Train scripts
Assuming the webtext data was prepared in the previous step, execute one of the
following commands to start training the Megatron-LM GPT2 model with DeepSpeed
applied.
- Single GPU run
- run `bash scripts/ds_pretrain_gpt2.sh`
- Multiple GPUs/Nodes run
- run `bash scripts/ds_pretrain_gpt2_model_parallel.sh`
## 3 Performance Improvements
DeepSpeed enables training very large models effectively via the advanced [ZeRO
optimizer](https://arxiv.org/abs/1910.02054v2). ZeRO significantly reduces the memory
footprint for training large models which means large models can be trained with i) less
model parallelism and ii) larger batch sizes. A lower model parallelism degree improves
training efficiency by increasing the granularity of the computation such as the matrix
multiplication where performance is directly related to the size of the matrices.
Furthermore, less model parallelism also results in less communication between model
parallel GPUs, which further boosts performance. Larger batch size has a similar effect
of increasing the computational granularity as well as reducing communication, also
resulting in better performance. Therefore, DeepSpeed combines ZeRO-powered data parallelism with
Megatron-LM tensor-slicing model parallelism, which is
significantly faster than using Megatron-LM alone.
The observed performance improvements depend on several factors such as the memory per
GPU, the local GPU interconnect (i.e., PCI-E vs NVLINK vs NVSwitch), the model size,
inter node network interconnect, etc. Below, we show some of the performance improvements
from using DeepSpeed over Megatron on a 16 GPU Low Bandwidth (40 Gbps) cluster and a 400 GPU DGX-2 High Bandwidth (800 Gbps) cluster.
For details please see the [ZeRO Paper](https://arxiv.org/abs/1910.02054v2). We also
present performance improvement on a 64 GPU cluster along with detailed configuration
analysis to show where the improvements come from.
![DeepSpeed-vs-Megatron](../figures/DeepSpeed-vs-Megatron.png)
<p align="center">
<em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of Nvidia Megatron-LM) over using Megatron-LM alone.</em>
</p>
### 3.1 On Low Bandwidth GPU Cluster
The figure above shows that training a 1.5B parameter model with DeepSpeed is
nearly 4x faster than without DeepSpeed on a cluster with 4 nodes, 4 GPUs per
node, and 16 GPUs total. These GPUs have 16 GB of memory each; PCI-E
interconnects the GPUs within a node, and 40 Gbps InfiniBand connects the nodes.
The performance improvement comes from lower model parallelism degree and
larger batch size as discussed earlier. Training the 1.5B parameter model with
Megatron-LM alone requires 4-way model parallelism, and can only fit an effective
batch size of 32 using all 16 GPUs. On the other hand, DeepSpeed does not
require any model-parallelism to train this model, and can support an
effective batch size of 128 without running out of memory, resulting in
significantly higher performance.
### 3.2 On High bandwidth DGX-2 GPU Cluster
Each GPU on the DGX-2 cluster has 32 GB of memory, and the GPUs inside a box are connected via
the high-bandwidth NVSwitch. DGX-2 nodes are connected to each other via an 800 Gbps (8 x 100 Gbps) InfiniBand interconnect. As such, running a 1.5B model on DGX-2 requires less model
parallelism, and the performance improvement from DeepSpeed for this model size is less
significant. However, at larger model sizes, Megatron still requires significantly larger
model parallelism degree, and can only run much smaller batch sizes than DeepSpeed.
Therefore, as the model sizes get larger, DeepSpeed, by combining ZeRO with Megatron model parallelism, starts to significantly outperform
using Megatron-LM alone.
### 3.3 Performance Improvements with Configuration Details
The figure below compares DeepSpeed with Megatron on a 64 GPU cluster with 4
DGX-2 nodes. To give the readers a clear idea of the source of the performance
improvements, we also present the configuration table for both Megatron and
DeepSpeed. It shows the smallest model parallelism degree and the largest batch
size that can be used to train these models without running out of memory. As
discussed above, the tables demonstrate that DeepSpeed runs with smaller model parallelism degree
and achieves better performance.
![DeepSpeed Performance SpeedUp](../figures/megatron-gpt2-perf-test.png)
<p align="center">
<em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of Nvidia Megatron-LM) over using Megatron-LM alone.</em>
</p>
**a ) Megatron-LM GPT2 Baseline**
| | Model Parallelism | Data Parallelism | #gpus | batch size | layers | hidden size | attention heads | samples / sec |
| ---- | ----------------: | ---------------: | ----: | ---------: | -----: | -----------:| --------------: | ------------: |
| 1.5B | 2 | 32 | 64 | 512 | 48 | 1600 | 16 | 128.56 |
| 4B | 4 | 16 | 64 | 128 | 64 | 2304 | 16 | 49.36 |
| 8B | 4 | 16 | 64 | 128 | 72 | 3072 | 24 | 24.57 |
| 20B | 16 | 4 | 64 | 16 | 111 | 3808 | 32 | 3.42 |
**b ) Megatron-LM GPT2 with DeepSpeed**
| | Model Parallelism | Data Parallelism | #gpus | batch size | layers | hidden size | attention heads | samples / sec |
| ---- | ----------------: | ---------------: | ----: | ---------: | -----: | -----------:| --------------: | ------------: |
| 1.5B | 1 | 64 | 64 | 2048 | 48 | 1600 | 16 | 151.35 |
| 4B | 1 | 64 | 64 | 512 | 64 | 2304 | 16 | 75.13 |
| 8B | 2 | 32 | 64 | 512 | 72 | 3072 | 24 | 43.52 |
| 20B | 4 | 16 | 64 | 128 | 111 | 3808 | 32 | 12.65 |
# Tutorial: Learning Rate Range Test
This tutorial shows how to use DeepSpeed to perform learning rate range tests in PyTorch.
## Learning Rate Range Test (LRRT)
The learning rate range test ([LRRT](https://arxiv.org/abs/1803.09820)) is a
method for discovering the largest learning rate values that can be used to
train a model without divergence. Data scientists are often interested in this
information because large learning rates lead to faster model convergence than
small learning rates. Moreover, large learning rates are crucial in learning
rate schedules such as [CLR](https://arxiv.org/abs/1506.01186) and
[1Cycle](https://arxiv.org/abs/1803.09820), which are used to train effectively
with large batch sizes. DeepSpeed provides LRRT for model training in PyTorch
frameworks.
## Prerequisites
To use DeepSpeed's LRRT, you must satisfy the following two conditions:
1. Integrate DeepSpeed into your training script using this
[guide](../../README.md#getting-started).
2. Add the parameters to configure LRRT to the parameters of your model. The
LRRT parameters are defined below.
## LRRT Parameters
LRRT works by linearly increasing the learning rate by a predefined amount, at
predefined intervals. Thus, LRRT is a form of learning rate schedule because it
defines how and when the learning rate should change during model training. To
configure LRRT, you will need to set these parameters:
1. `lr_range_test_min_lr` : The initial learning rate for training `(float)`
2. `lr_range_test_step_size`: The interval for scaling up learning rate,
defined in training steps `(integer)`
3. `lr_range_test_step_rate`: The scaling factor for increasing learning rate
`(float)`
4. `lr_range_test_staircase`: If true, learning rate is changed every
`lr_range_test_step_size` training steps, otherwise learning rate is changed at
every training step `(boolean)`
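As a rough illustration of how these parameters interact, the sketch below applies the linear growth rule described above, using the values from the example configuration further down. The function name is hypothetical and this is an illustration only, not DeepSpeed's LRRangeTest implementation:

```python
import math

# Rough sketch of the linear learning-rate growth described above, using the
# parameter values from the example configuration below. Illustration only,
# not DeepSpeed's LRRangeTest implementation.
def lrrt_lr(step,
            lr_range_test_min_lr=0.0001,
            lr_range_test_step_size=200,
            lr_range_test_step_rate=5,
            lr_range_test_staircase=False):
    interval = step / lr_range_test_step_size
    if lr_range_test_staircase:
        # Only change the learning rate every lr_range_test_step_size steps.
        interval = math.floor(interval)
    return lr_range_test_min_lr * (1 + lr_range_test_step_rate * interval)

for s in (0, 200, 400, 1000):
    print(s, lrrt_lr(s))
```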
## Required Model Configuration Changes
We will illustrate the required model configuration changes using an example LRRT
schedule that:
1. Starts training with an initial learning rate of 0.0001
2. Uses a scaling rate of 5
3. Uses a scaling interval of 200 training steps
4. Scales learning rate at every training step, i.e., does not use staircase
### PyTorch
For PyTorch models, LRRT is implemented as a [learning rate
scheduler](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html),
a feature that is available in PyTorch versions 1.0.1 and newer. Thus, you can
add a `"scheduler"` entry of type `"LRRangeTest"` into your model configuration
as illustrated below:
```json
"scheduler": {
"type": "LRRangeTest",
"params": {
"lr_range_test_min_lr": 0.0001,
"lr_range_test_step_size": 200,
"lr_range_test_step_rate": 5,
"lr_range_test_staircase": false
}
}
```
## Example: Tuning for Large Batch Sizes
We illustrate how LRRT can benefit data scientists with a snippet of our
experience of tuning an internal production model to converge efficiently on
larger batch sizes, as we scaled from one GPU (batch size 512) to four GPUs
(batch size 2048). Our goal was to train the model with the larger batch size
to match the performance of the smaller batch size using the same amount of
data samples. The challenge here is the well known problem of slow convergence
of large batch size training. Our approach was to use a
[1Cycle](Cycle.md) schedule in DeepSpeed to tackle
this problem, and we used LRRT to configure the schedule.
In the plots below, we illustrate using LRRT to discover the maximum learning
rates for effective training with batch size 2048. The plot on the left shows
the impact of large learning rates on validation loss over the first 9000
batches of training. The plot on the right shows the learning rate values
during the same period of training. Using grid search we discover that the
best fixed learning rate for the batch size 2048 is 0.0002. The blue line
(`lr=0.0002`) represents training with this fixed learning rate. We compare the
two LRRT schedules with this fixed learning rate. The orange
(`lr_range_test_step_rate=5`) and gray (`lr_range_test_step_rate=50`) lines
represent training with similar LRRT schedules that differ only in
`lr_range_test_step_rate` values. Although the LRRT schedules start from the
same base learning rate, the gray line's learning rate grows about 10 times
faster than the orange line. Also, the learning rates of the LRRT schedules had
grown larger than that of the blue line within the presented data points. We
subsequently refer to the gray line as the "fast growing" and the orange line as
the "slow growing" LRRT schedule, respectively.
![validation_loss](../figures/loss_and_lr.png)
We make the following observations from this small example.
1. Larger learning rates clearly benefit model performance, up to some point.
The fast growing LRRT schedule achieves validation loss of 0.46 after 3000
batches, which the fixed learning rate does not achieve with 9000 batches. The
slow growing LRRT does not match that score until after 6000 batches; however,
it maintains an increasing performance advantage over the fixed learning rate.
2. There is an upper bound on learning rate values that are useful for training
the model. The fast growing LRRT schedule hits this boundary quickly and
diverges, while the slow growing LRRT will later diverge for the same reason.
LRRT helped us discover these boundaries quickly, using less than 2% of the
training data. These boundaries are useful information for constructing
learning rate schedules.
These observations from LRRT helped us to configure the learning rate
boundaries and the cycle span for a 1Cycle schedule that solves the problem, as
shown below.
```json
"OneCycle": {
"cycle_min_lr": 0.002,
"cycle_max_lr": 0.005,
"cycle_first_step_size": 2000,
"cycle_second_step_size": 2000,
...
}
```
In our experience, these are the four most critical parameters of 1Cycle schedules.
1. We chose to use the slower LRRT schedule (`lr_range_test_step_rate=5`) to
set `cycle_min_lr` because it achieves the best loss and the faster schedule
diverges fairly quickly.
2. We set `cycle_max_lr` to 0.005 even though the plot shows that performance
was still improving at a slightly higher learning rate. This is because we
observed that if we wait until the maximum learning rate, the model could be at
the point of divergence and impossible to recover.
3. Since it takes 8000 batches for the learning rate to reach 0.005, we set
`cycle_first_step_size` (and `cycle_second_step_size`) to 2000, which is the
number of steps it takes four GPUs to process 8000 batches (see the quick check below).
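A quick sanity check of that step count, assuming data parallelism in which each training step consumes one batch per GPU:

```python
# With data parallelism, each training step consumes one batch per GPU,
# so 8000 batches on four GPUs take 8000 / 4 = 2000 steps.
total_batches = 8000
num_gpus = 4
print(total_batches // num_gpus)  # 2000
```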
We hope this brief example sparks your imagination on using LRRT for your own
unique tuning challenges.