---
title: "Feature Overview"
layout: single
permalink: /features/
toc: true
toc_label: "Contents"
---

## Distributed Training with Mixed Precision

### Mixed Precision Training
Enable 16-bit (FP16) training by adding the following to the `deepspeed_config` JSON:
```json
"fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
}
```

### Single-GPU, Multi-GPU, and Multi-Node Training
Easily switch between single-GPU, single-node multi-GPU, or multi-node multi-GPU
execution by specifying resources with a hostfile.
```bash
deepspeed --hostfile=<hostfile> \
	<client_entry.py> <client args> \
	--deepspeed --deepspeed_config ds_config.json
```
The script `<client_entry.py>` will execute on the resources specified in
[`<hostfile>`](/getting-started/#resource-configuration-multi-node).

## Pipeline Parallelism
DeepSpeed provides [pipeline parallelism](/tutorials/pipeline/) for memory-
and communication-efficient training. DeepSpeed supports a hybrid
combination of data, model, and pipeline parallelism and has scaled to over
[one trillion parameters using 3D parallelism]({{ site.press_release_v3 }}).
Pipeline parallelism can also improve communication efficiency and has
accelerated training by up to 7x on low-bandwidth clusters.
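
As a rough illustration (not taken from the tutorial), a sequential model can be expressed as a list of layers and wrapped in DeepSpeed's `PipelineModule`; the layer shapes, stage count, and config path below are placeholders rather than recommended settings.

```python
# Hedged sketch: split a list of layers into pipeline stages.
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

layers = [nn.Linear(1024, 1024) for _ in range(24)] + [nn.Linear(1024, 10)]
model = PipelineModule(layers=layers, num_stages=4)  # 4-way pipeline parallelism

# The config (placeholder path) is expected to define batch sizes, optimizer, etc.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json")

# The resulting pipeline engine is driven with engine.train_batch() rather than
# a manual forward/backward/step loop.
```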


## Model Parallelism
### Support for Custom Model Parallelism
DeepSpeed supports all forms of model parallelism, including tensor-slicing based
approaches such as [Megatron-LM](https://github.com/NVIDIA/Megatron-LM). It does so by only
requiring the model parallelism framework to provide a *model parallelism
unit* (`mpu`) that implements a few bookkeeping functionalities:

```python
mpu.get_model_parallel_rank()
mpu.get_model_parallel_group()
mpu.get_model_parallel_world_size()

mpu.get_data_parallel_rank()
mpu.get_data_parallel_group()
mpu.get_data_parallel_world_size()
```
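
For illustration, the `mpu` object is simply handed to `deepspeed.initialize` through its `mpu` keyword; the `args`, `model`, and `mpu` objects below are placeholders from the client training script.

```python
import deepspeed

# `mpu` is supplied by the model parallelism framework (e.g. Megatron-LM) and
# implements the bookkeeping calls listed above; `args` and `model` come from
# the client training script.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    mpu=mpu)  # lets DeepSpeed set up the correct data-parallel groups
```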

### Integration with Megatron-LM
DeepSpeed is fully compatible with [Megatron](https://github.com/NVIDIA/Megatron-LM).
Please see the [Megatron-LM tutorial](/tutorials/megatron/) for details.


## The Zero Redundancy Optimizer
The Zero Redundancy Optimizer ([ZeRO](https://arxiv.org/abs/1910.02054)) is at
the heart of DeepSpeed and enables large model training at a scale that is
simply not possible with model parallelism alone. When enabled, ZeRO allows
training models with over 13 billion parameters without any model parallelism,
and up to 200 billion parameter models with model parallelism on current
generation hardware.

For more details see the [ZeRO paper](https://arxiv.org/abs/1910.02054) and the
[GPT tutorial](/tutorials/megatron/) on integration with DeepSpeed.

### Optimizer State and Gradient Partitioning
Optimizer State and Gradient Partitioning in ZeRO reduces the memory consumption of the
model states (optimizer states, gradients, and parameters) by 8x compared to standard
data parallelism by partitioning these states across data parallel processes instead of
replicating them.
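
For illustration, a sketch of the corresponding `deepspeed_config` entries, written here as a Python dict (ZeRO stage 1 partitions optimizer states, stage 2 additionally partitions gradients; the batch and bucket sizes are placeholders to be tuned per model):

```python
ds_config = {
    "train_batch_size": 64,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                    # 1: optimizer states, 2: also gradients
        "reduce_bucket_size": 5e8,     # placeholder bucket sizes
        "allgather_bucket_size": 5e8
    }
}
```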

### Activation Partitioning
Activation Partitioning is a memory optimization in ZeRO that can reduce the memory
consumed by activations during model parallel (MP) training. In MP, certain
activations may be required by all MP processes, resulting in a replication of
activations across MP GPUs. Activation Partitioning stores these activations in a
partitioned state once they have been used for computation in the forward propagation. These
activations are allgathered right before they are needed again during the backward propagation.
By storing activations in a partitioned state, ZeRO in DeepSpeed can reduce the activation
memory footprint proportionally to the MP degree.

### Constant Buffer Optimization (CBO)
CBO enables high network and memory throughput while restricting memory usage to a
constant size. For memory- and network-bound operations such as normalization or
allreduce collectives, the performance depends on the size of the operand. Simply fusing
all operands into a single large operand can enable great throughput at the expense of
unnecessary memory overhead. CBO in DeepSpeed fuses smaller operands into a buffer of
approximately a pre-defined size that is large enough to achieve great performance without the
unnecessary memory overhead.

### Contiguous Memory Optimization (CMO)
CMO reduces memory fragmentation during training, preventing out-of-memory errors
due to a lack of contiguous memory. Memory fragmentation is a result of interleaving between
short-lived and long-lived memory objects. During the forward propagation, activation
checkpoints are long-lived but the activations that are recomputed are short-lived. Similarly,
during the backward computation, the activation gradients are short-lived while the parameter
gradients are long-lived. CMO transfers activation checkpoints and parameter gradients
to contiguous buffers, preventing memory fragmentation.

## ZeRO-Offload

ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs. It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state-of-the-art, while retaining high training throughput of over 30 teraflops per GPU.

For more details see the [ZeRO-Offload release blog](https://www.microsoft.com/en-us/research/?p=689370&secret=iSlooB) and the [tutorial](/tutorials/zero-offload/) on integration with DeepSpeed.
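
As a hedged sketch, one way to express this in the config (Python dict form; see the tutorial for the authoritative keys) is an `offload_optimizer` entry under `zero_optimization`:

```python
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        # offload optimizer states and the optimizer step to the host CPU
        "offload_optimizer": {"device": "cpu"}
    }
}
```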

## Additional Memory and Bandwidth Optimizations

### Smart Gradient Accumulation
Gradient accumulation allows running larger batch sizes with limited memory by breaking an
effective batch into several sequential micro-batches, and averaging the parameter
gradients across these micro-batches. Furthermore, instead of averaging the gradients of
each micro-batch across all GPUs, the gradients are averaged locally during each step of
the sequence, and a single `allreduce` is done at the end of the sequence to produce the
averaged gradients for the effective batch across all GPUs. This strategy significantly
reduces the communication involved compared to averaging globally for each
micro-batch, especially when the number of micro-batches per effective batch is large.
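
For example, an effective batch of 1024 samples can be built from micro-batches of 8 with 16 accumulation steps on 8 data-parallel GPUs; a sketch of the relevant config keys, shown as a Python dict (the numbers are illustrative only):

```python
# train_batch_size = train_micro_batch_size_per_gpu
#                    * gradient_accumulation_steps * number_of_GPUs
ds_config = {
    "train_batch_size": 1024,
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 16   # assumes 8 data-parallel GPUs
}
```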

### Communication Overlapping
During back propagation, DeepSpeed can overlap the communication required for averaging
parameter gradients that have already been computed with the ongoing gradient computation.
This computation-communication overlap allows DeepSpeed to achieve higher throughput even
at modest batch sizes.

## Training Features

### Simplified training API
The DeepSpeed core API consists of just a handful of methods:
* initialization: `initialize`
* training: `backward` and `step`
* argument parsing: `add_config_arguments`
* checkpointing: `load_checkpoint` and `save_checkpoint`

DeepSpeed supports most of the features described in this document through these APIs,
along with a `deepspeed_config` JSON file for enabling and disabling the features.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
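
A minimal sketch of how these calls fit together; the model, data loader, parsed arguments, checkpoint directory, and tag are placeholders from the client script:

```python
import deepspeed

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,                      # carries --deepspeed and --deepspeed_config
    model=model,
    model_parameters=model.parameters())

for step, batch in enumerate(trainloader):   # trainloader: client's own data loader
    loss = model_engine(batch)      # forward pass on the wrapped model
    model_engine.backward(loss)     # handles loss scaling, ZeRO hooks, etc.
    model_engine.step()             # optimizer step + learning rate schedule

    if step % 1000 == 0:
        model_engine.save_checkpoint("checkpoints/", tag=f"step{step}")
```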

### Activation Checkpointing API

DeepSpeed's Activation Checkpointing API supports activation checkpoint partitioning,
CPU checkpointing, and contiguous memory optimizations, while also allowing layerwise
profiling. Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
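
A sketch of the corresponding config section (Python dict form of the JSON); which keys you need depends on the features you want, and the names below should be checked against the API doc:

```python
ds_config = {
    "activation_checkpointing": {
        "partition_activations": True,          # partition checkpoints across MP ranks
        "cpu_checkpointing": True,              # offload partitioned checkpoints to CPU
        "contiguous_memory_optimization": True, # copy checkpoints into contiguous buffers
        "profile": True                         # layerwise forward/backward timing
    }
}
```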


### Gradient Clipping
```json
{
  "gradient_clipping": 1.0
}
```
DeepSpeed handles gradient clipping under the hood based on the max gradient norm
specified by the user.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.

### Automatic loss scaling with mixed precision
DeepSpeed internally handles loss scaling for mixed precision training. The parameters
for loss scaling can be specified in the `deepspeed_config` JSON file.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.

## Training Optimizers

### 1-bit Adam, 0/1 Adam and 1-bit LAMB optimizers with up to 26x less communication

DeepSpeed has three communication-efficient optimizers called 1-bit Adam, 0/1 Adam and 1-bit LAMB.
They offer the same convergence as Adam/LAMB, incur up to 26x less communication, which enables
up to 6.6x higher throughput for BERT-Large pretraining and up to 2.7x higher throughput
for SQuAD fine-tuning on bandwidth-limited clusters. For more details on usage and performance,
please refer to the [1-bit Adam tutorial](https://www.deepspeed.ai/tutorials/onebit-adam),
[1-bit Adam blog post](https://www.deepspeed.ai/news/2020/09/09/onebit-adam-blog-post.md),
[0/1 Adam tutorial](https://www.deepspeed.ai/tutorials/zero-one-adam)
and [1-bit LAMB tutorial](https://www.deepspeed.ai/tutorials/onebit-lamb/). For technical details,
please refer to the [1-bit Adam paper](https://arxiv.org/abs/2102.02888), [0/1 Adam paper](https://arxiv.org/abs/2202.06009) and
[1-bit LAMB paper](https://arxiv.org/abs/2104.06069).
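
As a hedged sketch, 1-bit Adam is selected through the optimizer section of the config; the parameter values below are placeholders and should be taken from the tutorial:

```python
ds_config = {
    "train_batch_size": 4096,
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 4e-4,
            "freeze_step": 23000,        # warmup steps before compression starts
            "comm_backend_name": "nccl",
            "cuda_aware": False
        }
    }
}
```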

### Fused Adam optimizer and arbitrary torch.optim.Optimizer
With DeepSpeed, the user can choose to use a high-performance implementation of Adam from
NVIDIA, or any training optimizer that extends torch's `torch.optim.Optimizer` class.
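
For example, any `torch.optim.Optimizer` instance can be handed to `deepspeed.initialize` instead of configuring an optimizer in the JSON file; the model and parsed arguments are placeholders:

```python
import torch
import deepspeed

# Build any torch optimizer over the client model and let DeepSpeed wrap it.
base_optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    optimizer=base_optimizer)
```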

### CPU-Adam: High-Performance vectorized implementation of Adam
We introduce an efficient implementation of the Adam optimizer on CPU that improves the parameter-update
performance by nearly an order of magnitude. We use the AVX SIMD instructions on x86 architectures
for the CPU-Adam implementation, supporting both the AVX-512 and AVX-2 instruction sets. DeepSpeed uses
AVX-2 by default, which can be switched to AVX-512 by setting the build flag `DS_BUILD_AVX512` to 1 when
installing DeepSpeed. Using AVX-512, we observe 5.1x to 6.5x speedups over torch-adam for model sizes
between 1 and 10 billion parameters.

### Memory bandwidth optimized FP16 Optimizer
Mixed precision training is handled by the DeepSpeed FP16 Optimizer. This optimizer not
only handles FP16 training but is also highly efficient. The performance of the weight update
is primarily limited by memory bandwidth, and the achieved bandwidth depends on the size of
the input operands. The FP16 Optimizer is designed to maximize the achievable memory bandwidth
by merging all the parameters of the model into a single large buffer and applying the weight
update in a single kernel.

### Large Batch Training with LAMB Optimizer
<!-- **TODO: port tutorial** -->
DeepSpeed makes it easy to train with large batch sizes by enabling the LAMB Optimizer.
For more details on LAMB, see the [LAMB paper](https://arxiv.org/pdf/1904.00962.pdf).

### Memory-Efficient Training with ZeRO Optimizer
DeepSpeed can train models with up to 13 billion parameters without model parallelism, and
models with up to 200 billion parameters with 16-way model parallelism. This leap in
model size is possible through the memory efficiency achieved via the ZeRO Optimizer. For
more details see the [ZeRO paper](https://arxiv.org/abs/1910.02054).



## Training Agnostic Checkpointing
DeepSpeed can simplify checkpointing for you regardless of whether you are using data
parallel training, model parallel training, mixed-precision training, a mix of these
three, or using the ZeRO optimizer to enable larger model sizes.
Please see the [Getting Started](/getting-started/) guide
and the [core API doc](https://deepspeed.readthedocs.io/) for more details.

## Advanced parameter search
DeepSpeed supports multiple Learning Rate Schedules to enable faster convergence for
large batch scaling.

### Learning Rate Range Test
Please refer to the [Learning Rate Range Test](/tutorials/lrrt/) tutorial.

### 1Cycle Learning Rate Schedule
Please refer to the [1Cycle Learning Rate Schedule](/tutorials/1Cycle/) tutorial.
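
As a hedged sketch, a schedule is selected through the `scheduler` section of the config; the parameter names and values below are assumptions and should be checked against the 1Cycle tutorial:

```python
ds_config = {
    "scheduler": {
        "type": "OneCycle",
        "params": {
            "cycle_min_lr": 1e-5,          # assumed parameter names/values
            "cycle_max_lr": 1e-3,
            "cycle_first_step_size": 1000
        }
    }
}
```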


## Simplified Data Loader
DeepSpeed abstracts away data parallelism and model parallelism from the user when it
comes to data loading. Users simply provide a PyTorch dataset, and DeepSpeed data loader
can automatically handle batch creation appropriately.
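
For illustration, passing a `torch.utils.data.Dataset` to `deepspeed.initialize` via `training_data` returns a distributed data loader whose batch size follows the micro-batch settings in the config; the dataset, model, and parsed arguments are placeholders:

```python
import deepspeed

model_engine, optimizer, trainloader, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    training_data=trainset)          # any PyTorch Dataset

for batch in trainloader:            # batches sized and sharded by DeepSpeed
    loss = model_engine(batch)
    model_engine.backward(loss)
    model_engine.step()
```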

## Curriculum Learning
Please refer to the [Curriculum Learning](/tutorials/curriculum-learning/) tutorial.

## Performance Analysis and Debugging

DeepSpeed provides a set of tools for performance analysis and debugging.

### Wall Clock Breakdown

DeepSpeed provides a detailed breakdown of the time spent
in different parts of the training.
This can be enabled by setting the following in the `deepspeed_config` file.

```json
{
  "wall_clock_breakdown": true
}

```

### Timing Activation Checkpoint Functions

When activation checkpointing is enabled, profiling the forward and backward time of each checkpoint function can be enabled in the `deepspeed_config` file.

```json
{
  "activation_checkpointing": {
    "profile": true
  }
}

```

### Flops Profiler

The DeepSpeed flops profiler measures the time, flops and parameters of a PyTorch model and shows which modules or layers are the bottleneck. When used with the DeepSpeed runtime, the flops profiler can be configured in the `deepspeed_config` file as follows:

```json
{
  "flops_profiler": {
    "enabled": true,
    "profile_step": 1,
    "module_depth": -1,
    "top_modules": 3,
    "detailed": true
  }
}

```
The flops profiler can also be used as a standalone package. Please refer to the [Flops Profiler](/tutorials/flops-profiler) tutorial for more details.


### Autotuning

The DeepSpeed Autotuner uses model information, system information, and heuristics to efficiently tune the ZeRO stage, micro batch size, and other ZeRO configurations. Using the autotuning feature requires no code change from DeepSpeed users. While `"autotuning": {"enabled": true}` is the minimum required to enable autotuning, users can define other parameters to configure the autotuning process. The major parameters and their default values in the autotuning configuration are shown below. Please refer to the [Autotuning](/tutorials/autotuning) tutorial for more details.

```json
{
  "autotuning": {
    "enabled": true,
    "results_dir": null,
    "exps_dir": null,
    "overwrite": false,
    "metric": "throughput",
    "num_nodes": null,
    "num_gpus": null,
    "start_profile_step": 3,
    "end_profile_step": 5,
    "fast": true,
    "num_tuning_micro_batch_sizes": 3,
    "tuner_type": "model_based",
    "tuner_early_stopping": 5,
    "tuner_num_trials": 50,
    "arg_mappings": null
  }
}

```


## Sparse Attention
DeepSpeed offers sparse attention to support long sequences. Please refer to the [Sparse Attention](/tutorials/sparse-attention/) tutorial.

```bash
--deepspeed_sparse_attention
```

```json
"sparse_attention": {
    "mode": "fixed",
    "block": 16,
    "different_layout_per_head": true,
    "num_local_blocks": 4,
    "num_global_blocks": 1,
    "attention": "bidirectional",
    "horizontal_global_attention": false,
    "num_different_global_patterns": 4
}
```

## Mixture of Experts (MoE)
To learn more about training Mixture of Experts (MoE) models with DeepSpeed, please see our [tutorial](https://www.deepspeed.ai/tutorials/mixture-of-experts/).