---
layout: single
toc: true
toc_label: "Contents"
---

DeepSpeed is a deep learning optimization library that makes distributed training easy,
efficient, and effective.

<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>
DeepSpeed can train DL models with over a hundred billion parameters on the current
generation of GPU clusters, while achieving over 10x system performance
compared to the state of the art. Early adopters of DeepSpeed have already produced
a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
establishing a new SOTA in the LM category.

DeepSpeed is an important part of Microsoft’s new
[AI at Scale](https://www.microsoft.com/en-us/research/project/ai-at-scale/)
initiative to enable next-generation AI capabilities at scale; you can find more
information [here](https://innovation.microsoft.com/en-us/exploring-ai-at-scale).

# What's New?
{% assign news = site.posts | where: "sneak_preview", "false" %}
{% for post in news limit:5 %}
  {% if post.link %}
    {% if post.image %}
* [{{ post.date | date: "%Y/%m/%d"  }}] [ {{ post.title }} {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %} ![]({{ post.image }}) ]({{ post.link }})
    {% else %}
* [{{ post.date | date: "%Y/%m/%d"  }}] [{{ post.title }}]({{ post.link }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
    {% endif %}
  {% else %}
* [{{ post.date | date: "%Y/%m/%d"}}] [{{ post.title }}]({{ post.url }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
  {% endif %}
{% endfor %}


# Why DeepSpeed?
Training advanced deep learning models is challenging. Beyond model design,
model scientists also need to set up state-of-the-art training techniques
such as distributed training, mixed precision, gradient accumulation, and
checkpointing. Even then, scientists may not achieve the desired system
performance and convergence rate. Large model sizes are even more challenging:
a large model easily runs out of memory with pure data parallelism, and it is
difficult to use model parallelism. DeepSpeed addresses these challenges to
accelerate model development *and* training.

## Distributed, Effective, and Efficient Training with Ease
The DeepSpeed API is a lightweight wrapper on [PyTorch](https://pytorch.org/). This
means that you can use everything you love in PyTorch and without learning a new
platform. In addition, DeepSpeed manages all of the boilerplate state-of-the-art
training techniques, such as distributed training, mixed precision, gradient
accumulation, and checkpoints so that you can focus on your model development. Most
importantly, you can leverage the distinctive efficiency and effectiveness benefit of
DeepSpeed to boost speed and scale with just a few lines of code changes to your PyTorch
models.
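
To make this concrete, here is a minimal sketch of the typical integration pattern (not a complete or official example): the toy model, dataset, and the `ds_config.json` file name are placeholders, and the optimizer, batch size, and mixed-precision settings are assumed to come from the JSON configuration file passed on the command line.

```python
# Minimal sketch of wrapping an existing PyTorch model with DeepSpeed.
# The model, dataset, and loss below are toy placeholders; the optimizer,
# batch size, fp16, etc. are assumed to be defined in the JSON config file
# supplied via --deepspeed_config.
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)  # set by the launcher
parser = deepspeed.add_config_arguments(parser)            # adds DeepSpeed config flags
args = parser.parse_args()

# Toy stand-ins for a real model and dataset.
model = torch.nn.Linear(512, 512)
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 512),
                                         torch.randn(1024, 512))

# deepspeed.initialize returns an engine that handles distributed training,
# mixed precision, gradient accumulation, and checkpointing internally.
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset)

criterion = torch.nn.MSELoss()
for inputs, targets in trainloader:
    inputs = inputs.to(model_engine.device)    # cast to .half() here if fp16 is enabled
    targets = targets.to(model_engine.device)
    loss = criterion(model_engine(inputs), targets)
    model_engine.backward(loss)                # replaces loss.backward()
    model_engine.step()                        # replaces optimizer.step()/zero_grad()
```

Such a script would typically be launched with the `deepspeed` launcher, for example `deepspeed train.py --deepspeed --deepspeed_config ds_config.json` (the flags shown are the ones added by `add_config_arguments`).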

## Speed
DeepSpeed achieves high performance and fast convergence through a combination of
efficiency optimizations on compute/communication/memory/IO and effectiveness
optimizations on advanced hyperparameter tuning and optimizers. For example:

* <span style="color:dodgerblue">DeepSpeed trains BERT-large to parity in 44
  mins using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs
  (16 DGX-2 boxes).</span>

  **BERT-large Training Times**

  | Devices        | Source    |        Training Time  |
  | -------------- | --------- | ---------------------:|
  | 1024 V100 GPUs | DeepSpeed |             **44** min|
  | 256 V100 GPUs  | DeepSpeed |             **2.4** hr|
  | 64 V100 GPUs   | DeepSpeed |            **8.68** hr|
  | 16 V100 GPUs   | DeepSpeed |           **33.22** hr|

  *BERT code and tutorials will be available soon.*

* DeepSpeed trains GPT2 (1.5 billion parameters) 3.75x faster than the state of the art,
  NVIDIA Megatron-LM, on Azure GPUs.

  *Read more*: [GPT tutorial](/tutorials/megatron/)



## Memory efficiency
DeepSpeed provides memory-efficient data parallelism and enables training models without
model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on
NVIDIA V100 GPUs with 32GB of device memory. In comparison, existing frameworks (e.g.,
PyTorch's Distributed Data Parallel) run out of memory with 1.4 billion parameter models.

DeepSpeed reduces the training memory footprint through a novel solution called Zero
Redundancy Optimizer (ZeRO). Unlike basic data parallelism where memory states are
replicated across data-parallel processes, ZeRO partitions model states and gradients to save
significant memory. It also reduces activation memory and memory fragmentation.
The current implementation (ZeRO-2) reduces memory by up to
8x relative to the state of the art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054), and
in our blog posts related to
[ZeRO-1](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/). <!-- and [ZeRO-2](linklink). -->
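
For illustration, ZeRO is enabled through the `zero_optimization` section of the DeepSpeed configuration. The sketch below assumes ZeRO stage 2 (optimizer state and gradient partitioning); the numeric values are placeholders, not tuned settings.

```python
# Illustrative sketch: a DeepSpeed configuration enabling ZeRO stage 2, i.e.
# partitioning of optimizer states and gradients across data-parallel
# processes. The values are placeholders, not recommended settings.
import json

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},          # enable fp16 mixed precision
    "zero_optimization": {
        "stage": 2,                     # 1 = optimizer states, 2 = + gradients
        "contiguous_gradients": True,   # reduce memory fragmentation
        "overlap_comm": True            # overlap gradient reduction with backward
    }
}

# DeepSpeed reads this file via the --deepspeed_config command-line argument.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```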

With this impressive memory reduction, early adopters of DeepSpeed have already
produced a language model (LM) with over 17B parameters called
<a href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft">
<span style="color:dodgerblue">Turing-NLG</span></a>,
establishing a new SOTA in the LM category.


## Scalability
DeepSpeed supports efficient data parallelism, model parallelism, and their
combination. ZeRO boosts the scaling capability and efficiency further.
* <span style="color:dodgerblue">DeepSpeed provides system support to run models with up to 170 billion parameters,
  10x larger than the state of the art (the 8-billion-parameter NVIDIA GPT and the 11-billion-parameter Google T5).</span>
* <span style="color:dodgerblue">DeepSpeed can run large models more efficiently, up to 10x
  faster, for models of various sizes spanning 1.5B to 170B parameters.</span> More specifically, the data parallelism powered by ZeRO
  is complementary and can be combined with different types of model parallelism. It allows
  DeepSpeed to fit models using a lower degree of model parallelism and a higher batch size, offering
  significant performance gains compared to using model parallelism alone.

  *Read more*: [ZeRO paper](https://arxiv.org/abs/1910.02054),
  and [GPT tutorial](/tutorials/megatron).

![DeepSpeed Speedup](/assets/images/deepspeed-speedup.png)
<p align="center">
<em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of NVIDIA Megatron-LM) over using Megatron-LM alone.</em>
</p>


## Fast convergence for effectiveness
DeepSpeed supports advanced hyperparameter tuning and large batch size
optimizers such as [LAMB](https://arxiv.org/abs/1904.00962). These improve the
effectiveness of model training and reduce the number of samples required to
convergence to desired accuracy.

*Read more*: [Tuning tutorial](/tutorials/1Cycle).
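
As a rough illustration of how this is exposed, large-batch optimizers are selected in the DeepSpeed configuration rather than in model code. The hyperparameter values in the sketch below are placeholders, not the tuned settings from the tutorial.

```python
# Illustrative sketch: selecting the LAMB optimizer for large-batch training
# through the DeepSpeed configuration dict. The values are placeholders only;
# consult the tuning tutorial for settings that reproduce published results.
ds_config = {
    "train_batch_size": 4096,            # large effective batch size
    "optimizer": {
        "type": "Lamb",
        "params": {
            "lr": 2e-3,
            "weight_decay": 0.01
        }
    },
    # A learning-rate schedule (e.g. 1Cycle) can be added via the "scheduler"
    # section of the configuration.
    "fp16": {"enabled": True}
}
```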


## Good Usability
Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not impose limitations on model dimensions (such as the number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to 13 billion parameters, you can conveniently use ZeRO-powered data parallelism without requiring model parallelism, whereas standard data parallelism will run out of memory for models with more than 1.4 billion parameters. In addition, DeepSpeed conveniently supports the flexible combination of ZeRO-powered data parallelism with custom model parallelism, such as the tensor slicing of NVIDIA's Megatron-LM, as sketched below.
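
The sketch below illustrates that combination under stated assumptions: `mpu` denotes the model parallel unit object exposed by a Megatron-LM-style codebase, and `args` and `model` are placeholders from the surrounding training script.

```python
# Hedged sketch: combining ZeRO-powered data parallelism with an existing
# tensor-slicing model parallel implementation. `mpu` is assumed to be the
# model parallel unit from a Megatron-LM style codebase; it exposes the
# process groups DeepSpeed needs, e.g. get_model_parallel_group() and
# get_data_parallel_group(). `args` and `model` come from the training script.
import deepspeed

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,                 # model already partitioned by tensor slicing
    model_parameters=model.parameters(),
    mpu=mpu)                     # ZeRO then partitions optimizer states and
                                 # gradients across the data-parallel group only
```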


## Features

Below we provide a brief feature list; see our detailed [feature overview](/features/) for descriptions and usage.

* [Distributed Training with Mixed Precision](/features/#distributed-training-with-mixed-precision)
    * 16-bit mixed precision
    * Single-GPU/Multi-GPU/Multi-Node
* [Model Parallelism](/features/#model-parallelism)
    * Support for Custom Model Parallelism
    * Integration with Megatron-LM
* [The Zero Redundancy Optimizer (ZeRO)](/features/#the-zero-redundancy-optimizer)
    * Optimizer State and Gradient Partitioning
    * Activation Partitioning
    * Constant Buffer Optimization
    * Contiguous Memory Optimization
* [ZeRO-Offload](/features/#zero-offload)
    * Leverage both CPU/GPU memory for model training
    * Support 10B model training on a single GPU
* [Additional Memory and Bandwidth Optimizations](/features/#additional-memory-and-bandwidth-optimizations)
    * Smart Gradient Accumulation
    * Communication/Computation Overlap
* [Training Features](/features/#training-features)
    * Simplified training API
    * Activation Checkpointing API
    * Gradient Clipping
    * Automatic loss scaling with mixed precision
* [Training Optimizers](/features/#training-optimizers)
    * Fused Adam optimizer and arbitrary `torch.optim.Optimizer`
    * CPU-Adam: High-Performance vectorized Adam
    * Memory bandwidth optimized FP16 Optimizer
    * Large Batch Training with LAMB Optimizer
    * Memory efficient Training with ZeRO Optimizer
* [Training Agnostic Checkpointing](/features/#training-agnostic-checkpointing) (see the sketch after this list)
* [Advanced Parameter Search](/features/#advanced-parameter-search)
    * Learning Rate Range Test
    * 1Cycle Learning Rate Schedule
* [Simplified Data Loader](/features/#simplified-data-loader)
* [Performance Analysis and Debugging](/features/#performance-analysis-and-debugging)
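
For example, the training-agnostic checkpointing listed above reduces to two engine calls. The sketch below assumes `model_engine` is the engine returned by `deepspeed.initialize`; the directory, tag, and client state are placeholders.

```python
# Hedged sketch of DeepSpeed's checkpoint save/load calls. `model_engine` is
# the engine returned by deepspeed.initialize; the directory, tag, and client
# state are placeholders. save_checkpoint stores model, optimizer, and
# scheduler state together so training can be resumed with a single call.
ckpt_dir = "checkpoints"

# Inside the training loop (e.g. every N steps):
model_engine.save_checkpoint(ckpt_dir, tag="step_1000",
                             client_state={"step": 1000})

# When resuming:
load_path, client_state = model_engine.load_checkpoint(ckpt_dir, tag="step_1000")
resume_step = client_state["step"]
```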


# Contributing
DeepSpeed welcomes your contributions! Please see our
[contributing](/contributing/) guide for more details on formatting, testing,
etc.

## Contributor License Agreement
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to, and
actually do, grant us the rights to use your contribution. For details, visit
https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply
follow the instructions provided by the bot. You will only need to do this once across
all repos using our CLA.

## Code of Conduct
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or
comments.

# Publications
1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: Memory Optimization Towards Training A Trillion Parameter Models. [ArXiv:1910.02054](https://arxiv.org/abs/1910.02054)