[![Build Status](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_apis/build/status/microsoft.DeepSpeed?branchName=master)](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_build/latest?definitionId=1&branchName=master)
[![PyPI version](https://badge.fury.io/py/deepspeed.svg)](https://badge.fury.io/py/deepspeed)
[![Documentation Status](https://readthedocs.org/projects/deepspeed/badge/?version=latest)](https://deepspeed.readthedocs.io/en/latest/?badge=latest)
[![License MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/Microsoft/DeepSpeed/blob/master/LICENSE)
[![Docker Pulls](https://img.shields.io/docker/pulls/deepspeed/deepspeed)](https://hub.docker.com/r/deepspeed/deepspeed)

[DeepSpeed](https://www.deepspeed.ai/) is a deep learning optimization
library that makes distributed training easy, efficient, and effective.

<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>

DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU:
* Extreme scale: Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters.
* Extremely memory efficient: With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models.
* Extremely long sequence lengths: DeepSpeed's sparse attention powers input sequences an order of magnitude longer and executes up to 6x faster compared with dense transformers.
* Extremely communication efficient: 3D parallelism improves communication efficiency, allowing users to train multi-billion-parameter models 2–7x faster on clusters with limited network bandwidth. 1-bit Adam reduces communication volume by up to 5x while achieving convergence efficiency similar to Adam, allowing scaling to different types of GPU clusters and networks.

Early adopters of DeepSpeed have already produced
a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
establishing a new SOTA in the LM category.

DeepSpeed is an important part of Microsoft’s new
[AI at Scale](https://www.microsoft.com/en-us/research/project/ai-at-scale/)
initiative to enable next-generation AI capabilities at scale. More information is
available [here](https://innovation.microsoft.com/en-us/exploring-ai-at-scale).

**_For further documentation, tutorials, and technical deep-dives, please see [deepspeed.ai](https://www.deepspeed.ai/)!_**


# News
* [2020/11/12] [Simplified install, JIT compiled ops, PyPI releases, and reduced dependencies](#installation)
* [2020/11/10] [Efficient and robust compressed training through progressive layer dropping](https://www.deepspeed.ai/news/2020/10/28/progressive-layer-dropping-news.html)
* [2020/09/10] [DeepSpeed v0.3: Extreme-scale model training for everyone](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)
  * [Powering 10x longer sequences and 6x faster execution through DeepSpeed Sparse Attention](https://www.deepspeed.ai/news/2020/09/08/sparse-attention-news.html)
  * [Training a trillion parameters with pipeline parallelism](https://www.deepspeed.ai/news/2020/09/08/pipeline-parallelism.html)
  * [Up to 5x less communication and 3.4x faster training through 1-bit Adam](https://www.deepspeed.ai/news/2020/09/08/onebit-adam-news.html)
  * [10x bigger model training on a single GPU with ZeRO-Offload](https://www.deepspeed.ai/news/2020/09/08/ZeRO-Offload.html)
* [2020/08/07] [DeepSpeed Microsoft Research Webinar](https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-On-Demand.html) is now available on-demand


# Table of Contents
| Section                                 | Description                                 |
| --------------------------------------- | ------------------------------------------- |
| [Why DeepSpeed?](#why-deepspeed)        |  DeepSpeed overview                         |
| [Install](#installation)                |  Installation details                       |
| [Features](#features)                   |  Feature list and overview                  |
| [Further Reading](#further-reading)     |  Documentation, tutorials, etc.             |
| [Contributing](#contributing)           |  Instructions for contributing              |
| [Publications](#publications)           |  Publications related to DeepSpeed          |

# Why DeepSpeed?
Training advanced deep learning models is challenging. Beyond model design,
model scientists also need to set up state-of-the-art training techniques
such as distributed training, mixed precision, gradient accumulation, and
checkpointing. Even then, they may not achieve the desired system
performance and convergence rate. Large model sizes are even more challenging:
a large model easily runs out of memory with pure data parallelism, and it is
difficult to use model parallelism. DeepSpeed addresses these challenges to
accelerate model development *and* training.
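
DeepSpeed's "minimal code change" claim centers on `deepspeed.initialize`, which wraps a PyTorch model in a DeepSpeed engine. Below is a minimal sketch of the resulting training loop, mirroring the pattern in the [Getting Started](https://www.deepspeed.ai/getting-started/) guide and assuming a PyTorch `model`, an argument namespace `args` (carrying the DeepSpeed config), and a `data_loader` defined elsewhere:

```python
import deepspeed

# deepspeed.initialize returns a DeepSpeed "engine"; distributed setup,
# mixed precision, and ZeRO behavior are all driven by the JSON config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=model.parameters())

for step, batch in enumerate(data_loader):
    loss = model_engine(batch)     # forward pass through the engine
    model_engine.backward(loss)    # engine-managed backward (loss scaling, etc.)
    model_engine.step()            # optimizer step plus any LR schedule
```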

# Installation

The quickest way to get started with DeepSpeed is via pip. This will install
the latest release of DeepSpeed, which is not tied to specific PyTorch or CUDA
versions. DeepSpeed includes several C++/CUDA extensions that we commonly refer
to as our 'ops'.  By default, all of these extensions/ops will be built
just-in-time (JIT) using [torch's JIT C++ extension loader that relies on
ninja](https://pytorch.org/docs/stable/cpp_extension.html) to build and
dynamically link them at runtime.

```bash
pip install deepspeed
```

After installation, you can validate your install and see which extensions/ops
your machine is compatible with via the DeepSpeed environment report:

```bash
ds_report
```

If you would like to pre-install any of the DeepSpeed extensions/ops (instead
of JIT compiling them) or install pre-compiled ops via PyPI, please see our [advanced
installation instructions](https://www.deepspeed.ai/tutorials/advanced-install/).
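
For example, a minimal sketch of a pre-build install, assuming the `DS_BUILD_OPS` environment variable described in the advanced installation guide applies to your setup:

```bash
# Attempt to pre-compile all compatible ops at install time rather than JIT compiling them.
DS_BUILD_OPS=1 pip install deepspeed
```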

# Features
Below we provide a brief feature list; see our detailed [feature
overview](https://www.deepspeed.ai/features/) for descriptions and usage.

* [Distributed Training with Mixed Precision](https://www.deepspeed.ai/features/#distributed-training-with-mixed-precision)
  * 16-bit mixed precision
  * Single-GPU/Multi-GPU/Multi-Node
* [Model Parallelism](https://www.deepspeed.ai/features/#model-parallelism)
  * Support for Custom Model Parallelism
  * Integration with Megatron-LM
* [Pipeline Parallelism](https://www.deepspeed.ai/tutorials/pipeline/)
  * 3D Parallelism
* [The Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) (see the example configuration after this list)
  * Optimizer State and Gradient Partitioning
  * Activation Partitioning
  * Constant Buffer Optimization
  * Contiguous Memory Optimization
* [ZeRO-Offload](https://www.deepspeed.ai/tutorials/zero-offload/)
  * Leverage both CPU/GPU memory for model training
  * Support 10B model training on a single GPU
* [Ultra-fast dense transformer kernels](https://www.deepspeed.ai/news/2020/05/18/bert-record.html)
* [Sparse attention](https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html)
  * Memory- and compute-efficient sparse kernels
  * Supports sequences 10x longer than dense transformers
  * Flexible support for different sparse structures
* [1-bit Adam](https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html)
  * Custom communication collective
  * Up to 5x communication volume saving
* [Additional Memory and Bandwidth Optimizations](https://www.deepspeed.ai/features/#additional-memory-and-bandwidth-optimizations)
  * Smart Gradient Accumulation
  * Communication/Computation Overlap
* [Training Features](https://www.deepspeed.ai/features/#training-features)
  * Simplified training API
  * Gradient Clipping
  * Automatic loss scaling with mixed precision
* [Training Optimizers](https://www.deepspeed.ai/features/#training-optimizers)
  * Fused Adam optimizer and arbitrary `torch.optim.Optimizer`
  * Memory bandwidth optimized FP16 Optimizer
  * Large Batch Training with LAMB Optimizer
  * Memory efficient Training with ZeRO Optimizer
  * CPU-Adam
* [Training Agnostic Checkpointing](https://www.deepspeed.ai/features/#training-agnostic-checkpointing)
* [Advanced Parameter Search](https://www.deepspeed.ai/features/#advanced-parameter-search)
  * Learning Rate Range Test
  * 1Cycle Learning Rate Schedule
* [Simplified Data Loader](https://www.deepspeed.ai/features/#simplified-data-loader)
* [Performance Analysis and Debugging](https://www.deepspeed.ai/features/#performance-analysis-and-debugging)
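
Most of the features above are enabled through a single JSON configuration file passed to DeepSpeed. The snippet below is an illustrative sketch only: the `train_batch_size`, `fp16`, and `zero_optimization` keys come from the [DeepSpeed JSON configuration](https://www.deepspeed.ai/docs/config-json/) documentation, but the values are placeholders and the valid options depend on the installed version.

```json
{
  "train_batch_size": 16,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
```

Assuming a (hypothetical) training script `train.py` that calls `deepspeed.initialize` and accepts the standard DeepSpeed arguments, the configuration is passed in through the DeepSpeed launcher:

```bash
deepspeed train.py --deepspeed --deepspeed_config ds_config.json
```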



# Further Reading

All DeepSpeed documentation can be found on our website: [deepspeed.ai](https://www.deepspeed.ai/)


| Article                                                                                        | Description                                  |
| ---------------------------------------------------------------------------------------------- | -------------------------------------------- |
| [DeepSpeed Features](https://www.deepspeed.ai/features/)                                       |  DeepSpeed features                          |
| [Getting Started](https://www.deepspeed.ai/getting-started/)                                   |  First steps with DeepSpeed                         |
| [DeepSpeed JSON Configuration](https://www.deepspeed.ai/docs/config-json/)                     |  Configuring DeepSpeed                       |
| [API Documentation](https://deepspeed.readthedocs.io/en/latest/)                               |  Generated DeepSpeed API documentation       |
| [CIFAR-10 Tutorial](https://www.deepspeed.ai/tutorials/cifar-10)                               |  Getting started with CIFAR-10 and DeepSpeed |
| [Megatron-LM Tutorial](https://www.deepspeed.ai/tutorials/megatron/)                           |  Train GPT2 with DeepSpeed and Megatron-LM   |
| [BERT Pre-training Tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/)             |  Pre-train BERT with DeepSpeed |
| [Learning Rate Range Test Tutorial](https://www.deepspeed.ai/tutorials/lrrt/)                  |  Faster training with large learning rates   |
| [1Cycle Tutorial](https://www.deepspeed.ai/tutorials/1Cycle/)                                  |  SOTA learning schedule in DeepSpeed         |



# Contributing
DeepSpeed welcomes your contributions! Please see our
[contributing](CONTRIBUTING.md) guide for more details on formatting, testing,
etc.

## Contributor License Agreement
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to, and
actually do, grant us the rights to use your contribution. For details, visit
https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply
follow the instructions provided by the bot. You will only need to do this once across
all repos using our CLA.

## Code of Conduct
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

# Publications
1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. [arXiv:1910.02054](https://arxiv.org/abs/1910.02054)