[![Build Status](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_apis/build/status/microsoft.DeepSpeed?branchName=master)](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_build/latest?definitionId=1&branchName=master)
[![Documentation Status](https://readthedocs.org/projects/deepspeed/badge/?version=latest)](https://deepspeed.readthedocs.io/en/latest/?badge=latest)
[![License MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/Microsoft/DeepSpeed/blob/master/LICENSE)
[![Docker Pulls](https://img.shields.io/docker/pulls/deepspeed/deepspeed)](https://hub.docker.com/r/deepspeed/deepspeed)

[DeepSpeed](https://www.deepspeed.ai/) is a deep learning optimization
library that makes distributed training easy, efficient, and effective.

<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>

DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU:
* Extreme scale: Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters.
* Extremely memory efficient: With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models.
* Extremely long sequence length: DeepSpeed's sparse attention powers input sequences an order of magnitude longer and executes up to 6x faster compared with dense transformers.
* Extremely communication efficient: 3D parallelism improves communication efficiency, allowing users to train multi-billion-parameter models 2–7x faster on clusters with limited network bandwidth. 1-bit Adam reduces communication volume by up to 5x while achieving convergence efficiency similar to Adam, enabling scaling to different types of GPU clusters and networks.

Early adopters of DeepSpeed have already produced
a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
establishing a new SOTA in the LM category.

DeepSpeed is an important part of Microsoft’s new
[AI at Scale](https://www.microsoft.com/en-us/research/project/ai-at-scale/)
initiative to enable next-generation AI capabilities at scale; you can find more
information [here](https://innovation.microsoft.com/en-us/exploring-ai-at-scale).

**_For further documentation, tutorials, and technical deep-dives please see [deepspeed.ai](https://www.deepspeed.ai/)!_**


# News
* [2020/09/10] [DeepSpeed: Extreme-scale model training for everyone](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)
  * [Powering 10x longer sequences and 6x faster execution through DeepSpeed Sparse Attention](https://www.deepspeed.ai/news/2020/09/08/sparse-attention-news.html)
  * [Training a trillion parameters with pipeline parallelism](https://www.deepspeed.ai/news/2020/09/08/pipeline-parallelism.html)
  * [Up to 5x less communication and 3.4x faster training through 1-bit Adam](https://www.deepspeed.ai/news/2020/09/08/onebit-adam-news.html)
  * [10x bigger model training on a single GPU with ZeRO-Offload](https://www.deepspeed.ai/news/2020/09/08/ZeRO-Offload.html)
* [2020/08/07] [DeepSpeed Microsoft Research Webinar](https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-On-Demand.html) is now available on-demand
* [2020/07/24] [DeepSpeed Microsoft Research Webinar](https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-On-Demand.html) on August 6th, 2020
  [![DeepSpeed webinar](docs/assets/images/webinar-aug2020.png)](https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-Live.html)
* [2020/05/19] [ZeRO-2 & DeepSpeed: Shattering Barriers of Deep Learning Speed & Scale](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/)
* [2020/05/19] [An Order-of-Magnitude Larger and Faster Training with ZeRO-2](https://www.deepspeed.ai/news/2020/05/18/zero-stage2.html)
* [2020/05/19] [The Fastest and Most Efficient BERT Training through Optimized Transformer Kernels](https://www.deepspeed.ai/news/2020/05/18/bert-record.html)
* [2020/02/13] [Turing-NLG: A 17-billion-parameter language model by Microsoft](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
* [2020/02/13] [ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)


# Table of Contents
| Section                                 | Description                                 |
| --------------------------------------- | ------------------------------------------- |
| [Why DeepSpeed?](#why-deepspeed)        |  DeepSpeed overview                         |
| [Features](#features)                   |  DeepSpeed features                         |
| [Further Reading](#further-reading)     |  DeepSpeed documentation, tutorials, etc.   |
| [Contributing](#contributing)           |  Instructions for contributing to DeepSpeed |
| [Publications](#publications)           |  DeepSpeed publications                     |

# Why DeepSpeed?
Training advanced deep learning models is challenging. Beyond model design,
model scientists also need to set up state-of-the-art training techniques
such as distributed training, mixed precision, gradient accumulation, and
checkpointing. Even then, scientists may not achieve the desired system
performance and convergence rate. Large model sizes are even more challenging:
a large model easily runs out of memory with pure data parallelism, and it is
difficult to use model parallelism. DeepSpeed addresses these challenges to
accelerate model development *and* training.
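
To make the minimal code change concrete, here is a sketch of what porting an existing PyTorch training loop to DeepSpeed typically looks like. The toy model and random data below are illustrative stand-ins for your own; see [Getting Started](https://www.deepspeed.ai/getting-started/) for the authoritative walkthrough.

```python
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1,
                    help='passed automatically by the deepspeed launcher')
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed, --deepspeed_config
cmd_args = parser.parse_args()

model = torch.nn.Linear(10, 1)  # stand-in for your model
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 10), torch.randn(256, 1))  # stand-in for your dataset

# deepspeed.initialize wraps the model in an engine that layers in distributed
# data parallelism, mixed precision, ZeRO, etc., as enabled by the JSON config
# (assumed here to define the optimizer and batch size).
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
    args=cmd_args,
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset)

for x, y in trainloader:
    x, y = x.to(model_engine.device), y.to(model_engine.device)
    loss = torch.nn.functional.mse_loss(model_engine(x), y)
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step()
```

The script is then started with the bundled launcher rather than `python`, e.g. `deepspeed train.py --deepspeed --deepspeed_config ds_config.json`.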

# Features

Below we provide a brief feature list; see our detailed [feature
overview](https://www.deepspeed.ai/features/) for descriptions and usage. A
sketch of a configuration that enables several of these features follows the list.

* [Distributed Training with Mixed Precision](https://www.deepspeed.ai/features/#distributed-training-with-mixed-precision)
  * 16-bit mixed precision
  * Single-GPU/Multi-GPU/Multi-Node
* [Model Parallelism](https://www.deepspeed.ai/features/#model-parallelism)
  * Support for Custom Model Parallelism
  * Integration with Megatron-LM
* [Pipeline Parallelism](https://www.deepspeed.ai/tutorials/pipeline/)
  * 3D Parallelism
* [The Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/)
  * Optimizer State and Gradient Partitioning
  * Activation Partitioning
  * Constant Buffer Optimization
  * Contiguous Memory Optimization
* [ZeRO-Offload](https://www.deepspeed.ai/tutorials/zero-offload/)
  * Leverage both CPU and GPU memory for model training
  * Support for training models with over 10B parameters on a single GPU
* [Ultra-fast dense transformer kernels](https://www.deepspeed.ai/news/2020/05/18/bert-record.html)
* [Sparse attention](https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html)
  * Memory- and compute-efficient sparse kernels
  * Support for sequences 10x longer than dense attention
  * Flexible support for different sparse structures
* [1-bit Adam](https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html)
  * Custom communication collective
  * Up to 5x communication volume saving
* [Additional Memory and Bandwidth Optimizations](https://www.deepspeed.ai/features/#additional-memory-and-bandwidth-optimizations)
  * Smart Gradient Accumulation
  * Communication/Computation Overlap
* [Training Features](https://www.deepspeed.ai/features/#training-features)
  * Simplified training API
  * Gradient Clipping
  * Automatic loss scaling with mixed precision
* [Training Optimizers](https://www.deepspeed.ai/features/#training-optimizers)
  * Fused Adam optimizer and arbitrary `torch.optim.Optimizer`
  * Memory bandwidth optimized FP16 Optimizer
  * Large Batch Training with LAMB Optimizer
  * Memory efficient Training with ZeRO Optimizer
  * CPU-Adam
* [Training Agnostic Checkpointing](https://www.deepspeed.ai/features/#training-agnostic-checkpointing)
* [Advanced Parameter Search](https://www.deepspeed.ai/features/#advanced-parameter-search)
  * Learning Rate Range Test
  * 1Cycle Learning Rate Schedule
* [Simplified Data Loader](https://www.deepspeed.ai/features/#simplified-data-loader)
* [Performance Analysis and Debugging](https://www.deepspeed.ai/features/#performance-analysis-and-debugging)
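
Most of these features are switched on declaratively through DeepSpeed's JSON configuration rather than code changes. Below is a minimal sketch of such a configuration, written as a Python dict so it can be annotated; the values are illustrative, and key names may vary across releases, so consult the [configuration documentation](https://www.deepspeed.ai/docs/config-json/) for your version.

```python
# Illustrative DeepSpeed configuration (normally stored as ds_config.json).
# Values are examples, not recommendations.
ds_config = {
    "train_batch_size": 256,           # global batch size across all GPUs
    "gradient_accumulation_steps": 4,  # smart gradient accumulation
    "gradient_clipping": 1.0,          # built-in gradient clipping
    "fp16": {
        "enabled": True                # 16-bit training with automatic loss scaling
    },
    "optimizer": {
        "type": "Adam",                # fused Adam; alternatively pass any
        "params": {"lr": 1e-4}         # torch.optim.Optimizer to deepspeed.initialize
    },
    "zero_optimization": {
        "stage": 2,                    # partition optimizer state and gradients
        "cpu_offload": True            # ZeRO-Offload: keep optimizer state in CPU memory
    }
}
```

Saved as `ds_config.json` and passed to the launcher via `--deepspeed_config`, the configuration is read when `deepspeed.initialize` is called.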



# Further Reading

All DeepSpeed documentation can be found on our website: [deepspeed.ai](https://www.deepspeed.ai/)


| Article                                                                                        | Description                                  |
| ---------------------------------------------------------------------------------------------- | -------------------------------------------- |
| [DeepSpeed Features](https://www.deepspeed.ai/features/)                                       |  DeepSpeed features                          |
| [Getting Started](https://www.deepspeed.ai/getting-started/)                                   |  First steps with DeepSpeed                         |
| [DeepSpeed JSON Configuration](https://www.deepspeed.ai/docs/config-json/)                     |  Configuring DeepSpeed                       |
| [API Documentation](https://deepspeed.readthedocs.io/en/latest/)                               |  Generated DeepSpeed API documentation       |
| [CIFAR-10 Tutorial](https://www.deepspeed.ai/tutorials/cifar-10)                               |  Getting started with CIFAR-10 and DeepSpeed |
| [Megatron-LM Tutorial](https://www.deepspeed.ai/tutorials/megatron/)                           |  Train GPT2 with DeepSpeed and Megatron-LM   |
| [BERT Pre-training Tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/)             |  Pre-train BERT with DeepSpeed |
| [Learning Rate Range Test Tutorial](https://www.deepspeed.ai/tutorials/lrrt/)                  |  Faster training with large learning rates   |
| [1Cycle Tutorial](https://www.deepspeed.ai/tutorials/1Cycle/)                                  |  SOTA learning schedule in DeepSpeed         |



# Contributing
DeepSpeed welcomes your contributions! Please see our
[contributing](CONTRIBUTING.md) guide for more details on formatting, testing,
etc.

## Contributor License Agreement
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to, and
actually do, grant us the rights to use your contribution. For details, visit
https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply
follow the instructions provided by the bot. You will only need to do this once across
all repos using our CLA.

## Code of Conduct
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

# Publications
1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. [arXiv:1910.02054](https://arxiv.org/abs/1910.02054)