Commit c25a91b6 authored by aiss

Merge branch 'ds-v0.9.2-rocm' into 'main'

Ds v0.9.2 rocm

See merge request dcutoolkit/deeplearing/deepspeed!2
parents d1596c94 af82b300
---
layout: archive
---
{{ content }}
{% if paginator %}
{% assign posts = paginator.posts %}
{% else %}
{% assign posts = site.posts %}
{% endif %}
<h2>{{ site.data.ui-text[site.locale].recent_posts | default: "Recent Posts" }}</h2>
{% assign news = posts | where: "sneak_preview", "false" %}
{% for post in news %}
{% include archive-single.html %}
{% if post.image %}
<a href="{{ post.link }}"><img src="{{ post.image }}"></a>
{% endif %}
{% endfor %}
{% include paginator.html %}
@@ -181,7 +181,7 @@ Example of <i>**scheduler**</i>
 ### Communication options
-<i>**communication_data_type**</i>: [boolean]
+<i>**communication_data_type**</i>: [string]
 | Description | Default |
 | ----------------------------------------------------------------------------------------------------------------------------- | ------- |
@@ -250,7 +250,7 @@ Example of <i>**scheduler**</i>
 | Description | Default |
 | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
-| <i>**initial_scale_power**</i> is a **fp16** parameter representing the power of the initial dynamic loss scale value. The actual loss scale is computed as 2<sup><i>**initial_scale_power**</i></sup>. | `32` |
+| <i>**initial_scale_power**</i> is a **fp16** parameter representing the power of the initial dynamic loss scale value. The actual loss scale is computed as 2<sup><i>**initial_scale_power**</i></sup>. | `16` |
 <i>**fp16:loss_scale_window**</i>: [integer]
@@ -692,7 +692,7 @@ Configuring the asynchronous I/O module for offloading parameter and optimizer s
 | Description | Default |
 |---------------------------------------------------------------------------------------------------------------------------| ------- |
-| Whether to run autotuing experiments whose results already exist. Setting it to true would overwrite the existing result. | `false` |
+| Whether to run autotuning experiments whose results already exist. Setting it to true would overwrite the existing result. | `false` |
 <i>**metric**</i>: [string]
@@ -849,7 +849,7 @@ Configuring the asynchronous I/O module for offloading parameter and optimizer s
 | Description | Default |
 | ------------------------------------------------------------- | ------- |
-| Inserts torch.cuda.synchronize() at each checkpoint boundary. | `false` |
+| Inserts get_accelerator().synchronize() at each checkpoint boundary. | `false` |
 <i>**profile**</i>: [boolean]
......
---
title: "Feature Overview"
layout: single
permalink: /features/
toc: true
toc_label: "Contents"
---
## Distributed Training with Mixed Precision
### Mixed Precision Training
Enable 16-bit (FP16) training by adding the following to the `deepspeed_config` JSON:
```json
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
}
```
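Below is a minimal, hypothetical sketch of how such a config (here as an equivalent Python dict) is wired into a training script; the toy model, batch size, and optimizer settings are illustrative and not part of the original docs.
```python
# Minimal sketch (toy model, illustrative values); the fp16 block mirrors the JSON
# above and can equivalently live in ds_config.json.
import torch
import deepspeed

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {
        "enabled": True,
        "loss_scale": 0,          # 0 selects dynamic loss scaling
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}

model = torch.nn.Linear(512, 512)
# The returned engine performs FP16 casting and loss scaling automatically.
model_engine, optimizer, _, _ = deepspeed.initialize(model=model,
                                                     model_parameters=model.parameters(),
                                                     config=ds_config)
```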
### Single-GPU, Multi-GPU, and Multi-Node Training
Easily switch between single-GPU, single-node multi-GPU, or multi-node multi-GPU
execution by specifying resources with a hostfile.
```bash
deepspeed --hostfile=<hostfile> \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
The script `<client_entry.py>` will execute on the resources specified in
[`<hostfile>`](/getting-started/#resource-configuration-multi-node).
## Pipeline Parallelism
DeepSpeed provides [pipeline parallelism](/tutorials/pipeline/) for memory- and
communication-efficient training. DeepSpeed supports a hybrid
combination of data, model, and pipeline parallelism and has scaled to over
[one trillion parameters using 3D parallelism]({{ site.press_release_v3 }}).
Pipeline parallelism can also improve communication efficiency and has
accelerated training by up to 7x on low-bandwidth clusters.
## Model Parallelism
### Support for Custom Model Parallelism
DeepSpeed supports all forms of model parallelism, including tensor-slicing-based
approaches such as [Megatron-LM](https://github.com/NVIDIA/Megatron-LM). It does so by only
requiring the model parallelism framework to provide a *model parallelism
unit* (`mpu`) that implements a few bookkeeping functionalities:
```python
mpu.get_model_parallel_rank()
mpu.get_model_parallel_group()
mpu.get_model_parallel_world_size()
mpu.get_data_parallel_rank()
mpu.get_data_parallel_group()
mpu.get_data_parallel_world_size()
```
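As an illustration, here is a minimal, hypothetical `mpu` sketch built on `torch.distributed` process groups that implements only the bookkeeping calls listed above; the class name and grouping scheme are ours, not DeepSpeed's.
```python
# Hypothetical sketch: adjacent ranks form model-parallel groups, strided ranks form
# data-parallel groups. Assumes torch.distributed is already initialized.
import torch.distributed as dist

class SimpleMPU:
    def __init__(self, model_parallel_size):
        world_size, rank = dist.get_world_size(), dist.get_rank()
        assert world_size % model_parallel_size == 0
        # Model-parallel groups: ranks [0..mp-1], [mp..2mp-1], ...
        for start in range(0, world_size, model_parallel_size):
            group = dist.new_group(list(range(start, start + model_parallel_size)))
            if start <= rank < start + model_parallel_size:
                self._mp_group = group
        # Data-parallel groups: ranks that hold the same model shard.
        for offset in range(model_parallel_size):
            ranks = list(range(offset, world_size, model_parallel_size))
            group = dist.new_group(ranks)
            if rank in ranks:
                self._dp_group = group

    def get_model_parallel_rank(self):
        return dist.get_rank(self._mp_group)

    def get_model_parallel_group(self):
        return self._mp_group

    def get_model_parallel_world_size(self):
        return dist.get_world_size(self._mp_group)

    def get_data_parallel_rank(self):
        return dist.get_rank(self._dp_group)

    def get_data_parallel_group(self):
        return self._dp_group

    def get_data_parallel_world_size(self):
        return dist.get_world_size(self._dp_group)
```
An object exposing this interface is what DeepSpeed expects through the `mpu` argument of `deepspeed.initialize`.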
### Integration with Megatron-LM
DeepSpeed is fully compatible with [Megatron](https://github.com/NVIDIA/Megatron-LM).
Please see the [Megatron-LM tutorial](/tutorials/megatron/) for details.
## The Zero Redundancy Optimizer
The Zero Redundancy Optimizer ([ZeRO](https://arxiv.org/abs/1910.02054)) is at
the heart of DeepSpeed and enables large model training at a scale that is
simply not possible with model parallelism alone. When enabled, ZeRO allows
training models with over 13 billion parameters without any model parallelism,
and up to 200 billion parameter models with model parallelism on current
generation hardware.
For more details see the [ZeRO paper](https://arxiv.org/abs/1910.02054) and the
[GPT tutorial](/tutorials/megatron/) on integration with
DeepSpeed.
### Optimizer State and Gradient Partitioning
Optimizer State and Gradient Partitioning in ZeRO reduces the memory consumption of the
model states (optimizer states, gradients, and parameters) by 8x compared to standard
data parallelism by partitioning these states across data-parallel processes instead of
replicating them.
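The partitioning described above corresponds to ZeRO stages 1 and 2 and is enabled through the `zero_optimization` section of the config. The sketch below is illustrative only; the values are not prescribed defaults.
```python
# Illustrative zero_optimization section (Python-dict form; equivalent JSON works in
# ds_config.json). Stage 1 partitions optimizer states, stage 2 additionally partitions gradients.
ds_config = {
    "train_batch_size": 64,
    "zero_optimization": {
        "stage": 2,
        "contiguous_gradients": True,   # reduce fragmentation of gradient memory
        "overlap_comm": True,           # overlap gradient reduction with backward compute
        "reduce_bucket_size": 5e8,
        "allgather_bucket_size": 5e8
    }
}
```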
### Activation Partitioning
Activation Partitioning is a memory optimization in ZeRO that can reduce the memory
consumed by activations during model parallel training (MP). In MP certain
activations may be required by all MP processes, resulting in a replication of
activations across MP GPUs. Activation Partitioning stores these activations in a
partitioned state once they are used for computation in the forward propagation. These
activations are allgathered right before they are needed again during the backward propagation.
By storing activations in a partitioned state, ZeRO in DeepSpeed can reduce the activation
memory footprint proportional to the MP degree.
### Constant Buffer Optimization (CBO)
CBO enables high network and memory throughput while restricting memory usage to a
constant size. For memory- and network-bound operations such as normalization or
allreduce collectives, the performance depends on the size of the operand. Simply fusing
all operands into a single large operand can enable great throughput at the expense of
unnecessary memory overhead. CBO in DeepSpeed fuses smaller operands into a buffer of
approximately pre-defined size that is large enough to achieve high throughput without the
unnecessary memory overhead.
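The following is a standalone illustration of the idea, not DeepSpeed's internal code: small tensors are packed into a fixed-size flat buffer and reduced one bucket at a time.
```python
# Educational sketch of constant-buffer bucketing; assumes torch.distributed is initialized.
import torch
import torch.distributed as dist

def bucketed_allreduce(tensors, bucket_numel=2**25):
    bucket, filled = [], 0

    def flush():
        nonlocal bucket, filled
        if not bucket:
            return
        flat = torch.cat([t.reshape(-1) for t in bucket])  # fuse into one flat operand
        dist.all_reduce(flat)
        flat /= dist.get_world_size()                       # average rather than sum
        offset = 0
        for t in bucket:                                    # scatter results back in place
            t.copy_(flat[offset:offset + t.numel()].view_as(t))
            offset += t.numel()
        bucket, filled = [], 0

    for t in tensors:
        if filled + t.numel() > bucket_numel:
            flush()
        bucket.append(t)
        filled += t.numel()
    flush()
```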
### Contiguous Memory Optimization (CMO)
CMO reduces memory fragmentation during training, preventing out-of-memory errors
caused by the lack of contiguous memory. Memory fragmentation results from interleaving
short-lived and long-lived memory objects. During the forward propagation, activation
checkpoints are long lived while the recomputed activations are short lived. Similarly,
during the backward computation, the activation gradients are short lived while the parameter
gradients are long lived. CMO transfers activation checkpoints and parameter gradients
to contiguous buffers, preventing memory fragmentation.
## ZeRO-Offload
ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs. It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state-of-the-art, while retaining high training throughput of over 30 teraflops per GPU.
For more details see the [ZeRO-Offload release blog]( https://www.microsoft.com/en-us/research/?p=689370&secret=iSlooB), and [tutorial](/tutorials/zero-offload/) on integration with DeepSpeed.
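A hedged configuration sketch: ZeRO-Offload is driven by the `offload_optimizer` entry under `zero_optimization`; the values below are illustrative.
```python
# Illustrative sketch: ZeRO stage 2 with optimizer states and the optimizer step offloaded to CPU.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": True   # pinned host memory speeds up CPU<->GPU transfers
        }
    }
}
```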
## Additional Memory and Bandwidth Optimizations
### Smart Gradient Accumulation
Gradient accumulation allows running larger batch sizes with limited memory by breaking an
effective batch into several sequential micro-batches, and averaging the parameter
gradients across these micro-batches. Furthermore, instead of averaging the gradients of
each micro-batch across all GPUs, the gradients are averaged locally during each step of
the sequence, and a single `allreduce` is done at the end of the sequence to produce the
averaged gradients for the effective batch across all GPUs. This strategy significantly
reduces the communication involved compared to averaging globally after each
micro-batch, especially when the number of micro-batches per effective batch is large.
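A minimal runnable sketch of this pattern (the toy model and synthetic data are ours): with `gradient_accumulation_steps` set, `backward` accumulates locally and the cross-GPU averaging plus weight update happen only at the boundary micro-batch.
```python
# Sketch with an assumed toy model and random data; DeepSpeed performs the single
# allreduce and the optimizer update only on every 8th micro-batch here.
import torch
import deepspeed

model = torch.nn.Linear(128, 1)
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,   # effective batch = 4 * 8 * data-parallel degree
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}}
}
engine, _, _, _ = deepspeed.initialize(model=model,
                                       model_parameters=model.parameters(),
                                       config=ds_config)

for _ in range(80):                     # 80 micro-batches -> 10 weight updates
    x = torch.randn(4, 128, device=engine.device)
    y = torch.randn(4, 1, device=engine.device)
    loss = torch.nn.functional.mse_loss(engine(x), y)
    engine.backward(loss)               # accumulates gradients between boundaries
    engine.step()                       # no-op until the accumulation boundary
```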
### Communication Overlapping
During back propagation, DeepSpeed can overlap the communication required for averaging
parameter gradients that have already been computed with the ongoing gradient computation.
This computation-communication overlap allows DeepSpeed to achieve higher throughput even
at modest batch sizes.
## Training Features
### Simplified training API
The DeepSpeed core API consists of just a handful of methods:
* initialization: `initialize`
* training: `backward` and `step`
* argument parsing: `add_config_arguments`
* checkpointing: `load_checkpoint` and `save_checkpoint`
DeepSpeed supports most of the features described in this document via these APIs,
along with a `deepspeed_config` JSON file for enabling and disabling the features.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
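To make the shape of that API concrete, here is a hedged end-to-end sketch (toy model and synthetic data are ours); it expects to be launched with `deepspeed <script> --deepspeed --deepspeed_config ds_config.json`.
```python
# Minimal sketch of the core API. ds_config.json is assumed to define at least
# train_batch_size and an optimizer section.
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser()
parser = deepspeed.add_config_arguments(parser)   # adds --deepspeed, --deepspeed_config
args = parser.parse_args()

model = torch.nn.Linear(784, 10)
engine, optimizer, _, _ = deepspeed.initialize(args=args,
                                               model=model,
                                               model_parameters=model.parameters())

for step in range(100):
    x = torch.randn(16, 784, device=engine.device)
    y = torch.randint(0, 10, (16,), device=engine.device)
    loss = torch.nn.functional.cross_entropy(engine(x), y)
    engine.backward(loss)   # replaces loss.backward()
    engine.step()           # replaces optimizer.step() / zero_grad()
```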
### Activation Checkpointing API
DeepSpeed's Activation Checkpointing API supports activation checkpoint partitioning,
cpu checkpointing, and contiguous memory optimizations, while also allowing layerwise
profiling. Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
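These features map onto the `activation_checkpointing` section of the config; the values below are an illustrative sketch rather than recommended settings.
```python
# Illustrative activation_checkpointing section (Python-dict form of the JSON config).
ds_config = {
    "activation_checkpointing": {
        "partition_activations": True,            # shard checkpointed activations across MP ranks
        "contiguous_memory_optimization": True,   # copy checkpoints into a contiguous buffer
        "cpu_checkpointing": False,               # optionally push checkpoints to host memory
        "number_checkpoints": 4,
        "synchronize_checkpoint_boundary": False,
        "profile": True                           # layer-wise forward/backward timing
    }
}
```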
### Gradient Clipping
```json
{
"gradient_clipping": 1.0
}
```
DeepSpeed handles gradient clipping under the hood based on the max gradient norm
specified by the user.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
### Automatic loss scaling with mixed precision
DeepSpeed internally handles loss scaling for mixed precision training. The parameters
for loss scaling can be specified in the `deepspeed_config` JSON file.
Please see the [core API doc](https://deepspeed.readthedocs.io/) for more details.
## Training Optimizers
### 1-bit Adam, 0/1 Adam and 1-bit LAMB optimizers with up to 26x less communication
DeepSpeed has three communication-efficient optimizers called 1-bit Adam, 0/1 Adam and 1-bit LAMB.
They offer the same convergence as Adam/LAMB and incur up to 26x less communication, which enables
up to 6.6x higher throughput for BERT-Large pretraining and up to 2.7x higher throughput
for SQuAD fine-tuning on bandwidth-limited clusters. For more details on usage and performance,
please refer to the [1-bit Adam tutorial](https://www.deepspeed.ai/tutorials/onebit-adam),
[1-bit Adam blog post](https://www.deepspeed.ai/2020/09/08/onebit-adam-blog-post.html),
[0/1 Adam tutorial](https://www.deepspeed.ai/tutorials/zero-one-adam)
and [1-bit LAMB tutorial](https://www.deepspeed.ai/tutorials/onebit-lamb/). For technical details,
please refer to the [1-bit Adam paper](https://arxiv.org/abs/2102.02888), [0/1 Adam paper](https://arxiv.org/abs/2202.06009) and
[1-bit LAMB paper](https://arxiv.org/abs/2104.06069).
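Switching to one of these optimizers is a configuration change. The sketch below shows a hypothetical `OneBitAdam` setup; the hyperparameters are illustrative, see the tutorials for tuned values.
```python
# Illustrative 1-bit Adam configuration; freeze_step runs uncompressed Adam as a warmup
# phase before the compressed-communication phase begins.
ds_config = {
    "train_batch_size": 4096,
    "fp16": {"enabled": True},
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 4e-4,
            "freeze_step": 23000,
            "cuda_aware": False,
            "comm_backend_name": "nccl"
        }
    }
}
```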
### Fused Adam optimizer and arbitrary torch.optim.Optimizer
With DeepSpeed, the user can choose a high-performance implementation of Adam from
NVIDIA, or any training optimizer that extends torch's `torch.optim.Optimizer` class.
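For example (a hedged sketch with a toy model), a client-constructed `torch.optim.Optimizer` can be handed directly to `deepspeed.initialize` instead of declaring an optimizer in the config:
```python
# Sketch: DeepSpeed wraps the user-supplied optimizer rather than constructing its own.
import torch
import deepspeed

model = torch.nn.Linear(64, 64)
client_optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

engine, optimizer, _, _ = deepspeed.initialize(model=model,
                                               optimizer=client_optimizer,
                                               config={"train_batch_size": 8})
```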
### CPU-Adam: High-Performance vectorized implementation of Adam
We introduce an efficient implementation of Adam optimizer on CPU that improves the parameter-update
performance by nearly an order of magnitude. We use the AVX SIMD instructions on Intel-x86 architecture
for the CPU-Adam implementation. We support both AVX-512 and AVX-2 instruction sets. DeepSpeed uses
AVX-2 by default, which can be switched to AVX-512 by setting the build flag `DS_BUILD_AVX512` to 1 when
installing DeepSpeed. With AVX-512, we observe 5.1x to 6.5x speedups over torch-adam for model sizes
between 1 and 10 billion parameters.
### Memory bandwidth optimized FP16 Optimizer
Mixed precision training is handled by the DeepSpeed FP16 Optimizer. This optimizer not
only handles FP16 training but is also highly efficient. The performance of weight update
is primarily dominated by the memory bandwidth, and the achieved memory bandwidth is
dependent on the size of the input operands. The FP16 Optimizer is designed to maximize
the achievable memory bandwidth by merging all the parameters of the model into a single
large buffer, and applying the weight updates in a single kernel, allowing it to achieve
high memory bandwidth.
### Large Batch Training with LAMB Optimizer
<!-- **TODO: port tutorial** -->
DeepSpeed makes it easy to train with large batch sizes by enabling the LAMB Optimizer.
For more details on LAMB, see the [LAMB paper](https://arxiv.org/pdf/1904.00962.pdf).
### Memory-Efficient Training with ZeRO Optimizer
DeepSpeed can train models with up to 13 billion parameters without model parallelism, and
models with up to 200 billion parameters with 16-way model parallelism. This leap in
model size is possible through the memory efficiency achieved via the ZeRO Optimizer. For
more details, see the [ZeRO paper](https://arxiv.org/abs/1910.02054).
## Training Agnostic Checkpointing
DeepSpeed can simplify checkpointing for you regardless of whether you are using data
parallel training, model parallel training, mixed-precision training, a mix of these
three, or the ZeRO optimizer to enable larger model sizes.
Please see the [Getting Started](/getting-started/) guide
and the [core API doc](https://deepspeed.readthedocs.io/) for more details.
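A hedged usage sketch: `save_checkpoint` and `load_checkpoint` also round-trip a user-supplied `client_state` dict, so resuming looks the same regardless of the parallelism or precision settings. The function and directory names below are ours.
```python
# Sketch: engine is the object returned by deepspeed.initialize.
def train_with_checkpointing(engine, num_steps=10_000, ckpt_dir="checkpoints"):
    # load_checkpoint returns (path, client_state); (None, None) if there is nothing to resume.
    _, client_state = engine.load_checkpoint(ckpt_dir)
    start = client_state["step"] if client_state else 0
    for step in range(start, num_steps):
        # ... forward, engine.backward(loss), engine.step() as usual ...
        if step and step % 1000 == 0:
            engine.save_checkpoint(ckpt_dir, client_state={"step": step})
```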
## Advanced parameter search
DeepSpeed supports multiple Learning Rate Schedules to enable faster convergence for
large batch scaling.
### Learning Rate Range Test
Please refer to the [Learning Rate Range Test](/tutorials/lrrt/) tutorial.
### 1Cycle Learning Rate Schedule
Please refer to the [1Cycle Learning Rate Schedule](/tutorials/1Cycle/) tutorial.
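Schedules are selected through the `scheduler` section of the config. As a hedged example, a warmup schedule looks like the sketch below (values illustrative); the 1Cycle and LR range test schedules are configured the same way with their own `type` and `params`.
```python
# Illustrative scheduler section (Python-dict form of the JSON config).
ds_config = {
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": 1e-3,
            "warmup_num_steps": 1000
        }
    }
}
```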
## Simplified Data Loader
DeepSpeed abstracts away data parallelism and model parallelism from the user when it
comes to data loading. Users simply provide a PyTorch dataset, and the DeepSpeed data loader
automatically handles batch creation appropriately.
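A minimal, hypothetical sketch: passing a `torch.utils.data.Dataset` as `training_data` to `deepspeed.initialize` returns a data loader that is already sized and sharded per data-parallel rank (the synthetic dataset below is ours).
```python
# Sketch with a synthetic dataset; the returned train_loader handles batching and sharding.
import torch
import deepspeed
from torch.utils.data import TensorDataset

dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
model = torch.nn.Linear(32, 1)

engine, _, train_loader, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset,
    config={"train_batch_size": 16,
            "optimizer": {"type": "Adam", "params": {"lr": 1e-3}}})

for x, y in train_loader:
    loss = torch.nn.functional.mse_loss(engine(x.to(engine.device)),
                                        y.to(engine.device))
    engine.backward(loss)
    engine.step()
```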
## Curriculum Learning
Please refer to the [Curriculum Learning](/tutorials/curriculum-learning/) tutorial.
## Performance Analysis and Debugging
DeepSpeed provides a set of tools for performance analysis and debugging.
### Wall Clock Breakdown
DeepSpeed provides a detailed breakdown of the time spent
in different parts of the training.
This can be enabled by setting the following in the `deepspeed_config` file.
```json
{
"wall_clock_breakdown": true,
}
```
### Timing Activation Checkpoint Functions
When activation checkpointing is enabled, profiling the forward and backward time of each checkpoint function can be enabled in the `deepspeed_config` file.
```json
{
"activation_checkpointing": {
"profile": true
}
}
```
### Flops Profiler
The DeepSpeed flops profiler measures the time, flops and parameters of a PyTorch model and shows which modules or layers are the bottleneck. When used with the DeepSpeed runtime, the flops profiler can be configured in the `deepspeed_config` file as follows:
```json
{
"flops_profiler": {
"enabled": true,
"profile_step": 1,
"module_depth": -1,
"top_modules": 3,
"detailed": true,
}
}
```
The flops profiler can also be used as a standalone package. Please refer to the [Flops Profiler](/tutorials/flops-profiler) tutorial for more details.
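For standalone use, a hedged sketch follows (toy model, illustrative shapes; assumes a DeepSpeed version where `get_model_profile` accepts `input_shape`):
```python
# Profile a toy model without the DeepSpeed runtime.
import torch
from deepspeed.profiling.flops_profiler import get_model_profile

model = torch.nn.Sequential(torch.nn.Linear(1024, 4096),
                            torch.nn.GELU(),
                            torch.nn.Linear(4096, 1024))

flops, macs, params = get_model_profile(model=model,
                                        input_shape=(8, 1024),  # batch of 8 vectors
                                        print_profile=True,
                                        detailed=True)
```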
### Autotuning
The DeepSpeed Autotuner uses model information, system information, and heuristics to efficiently tune the ZeRO stage, micro batch size, and other ZeRO configurations. Using the autotuning feature requires no code change from DeepSpeed users. While `"autotuning": {"enabled": true}` is the minimum required to enable autotuning, users can define other parameters to configure the autotuning process. The major parameters and their default values in the autotuning configuration are shown below. Please refer to the [Autotuning](/tutorials/autotuning) tutorial for more details.
```json
{
"autotuning": {
"enabled": true,
"results_dir": null,
"exps_dir": null,
"overwrite": false,
"metric": "throughput",
"num_nodes": null,
"num_gpus": null,
"start_profile_step": 3,
"end_profile_step": 5,
"fast": true,
"num_tuning_micro_batch_sizes": 3,
"tuner_type": "model_based",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"arg_mappings": null
}
}
```
## Sparse Attention
DeepSpeed offers sparse attention to support long sequences. Please refer to the [Sparse Attention](/tutorials/sparse-attention/) tutorial, which shows how to enable it with the command-line flag below together with a `sparse_attention` section in the `deepspeed_config` file.
```bash
--deepspeed_sparse_attention
```
```json
"sparse_attention": {
"mode": "fixed",
"block": 16,
"different_layout_per_head": true,
"num_local_blocks": 4,
"num_global_blocks": 1,
"attention": "bidirectional",
"horizontal_global_attention": false,
"num_different_global_patterns": 4
}
```
## Mixture of Experts (MoE)
To learn more about training Mixture of Experts (MoE) models with DeepSpeed, see our [tutorial](https://www.deepspeed.ai/tutorials/mixture-of-experts/).
@@ -364,7 +364,7 @@ They offer the same convergence as Adam/LAMB, incur up to 26x less communication
 up to 6.6x higher throughput for BERT-Large pretraining and up to 2.7x higher throughput
 for SQuAD fine-tuning on bandwidth-limited clusters. For more details on usage and performance,
 please refer to the [1-bit Adam tutorial](https://www.deepspeed.ai/tutorials/onebit-adam),
-[1-bit Adam blog post](https://www.deepspeed.ai/news/2020/09/09/onebit-adam-blog-post.md),
+[1-bit Adam blog post](https://www.deepspeed.ai/2020/09/08/onebit-adam-blog-post.html),
 [0/1 Adam tutorial](https://www.deepspeed.ai/tutorials/zero-one-adam)
 and [1-bit LAMB tutorial](https://www.deepspeed.ai/tutorials/onebit-lamb/). For technical details,
 please refer to the [1-bit Adam paper](https://arxiv.org/abs/2102.02888), [0/1 Adam paper](https://arxiv.org/abs/2202.06009) and
......
...@@ -3,5 +3,5 @@ title: "ZeRO & DeepSpeed: New system optimizations enable training models with o ...@@ -3,5 +3,5 @@ title: "ZeRO & DeepSpeed: New system optimizations enable training models with o
date: 2020-02-13 date: 2020-02-13
link: https://www.microsoft.com/en-us/research/blog/ZeRO-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/ link: https://www.microsoft.com/en-us/research/blog/ZeRO-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/
excerpt: "" excerpt: ""
tags: training ZeRO tags: training ZeRO English
--- ---
...@@ -3,5 +3,5 @@ title: "Turing-NLG: A 17-billion-parameter language model by Microsoft" ...@@ -3,5 +3,5 @@ title: "Turing-NLG: A 17-billion-parameter language model by Microsoft"
date: 2020-02-13 date: 2020-02-13
link: https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ link: https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/
excerpt: "DeepSpeed was used to train the world's largest language model." excerpt: "DeepSpeed was used to train the world's largest language model."
tags: training tags: training English
--- ---
 ---
 title: "ZeRO stage 1 with reduced communication"
 sneak_preview: true
+tags: training ZeRO English
 excerpt: "Partition-aware ZeRO with up to 2x reduction in communication time!"
 ---
......
 ---
 title: "The Fastest and Most Efficient BERT Training through Optimized Transformer Kernels"
 excerpt: ""
-tags: training
 date: 2020-05-19 00:00:00
 toc: false
-tags: training
+tags: training English
 ---
 We introduce new technology to accelerate single GPU performance via kernel
@@ -18,6 +17,6 @@ NVIDIA V100 GPUs**, compared with the best published result of 67 minutes on
 the same number and generation of GPUs.
 * Brief overview, see our [press release](https://www.microsoft.com/en-us/research/blog/ZeRO-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/).
-* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/news/2020/05/27/fastest-bert-training.html).
+* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/2020/05/27/fastest-bert-training.html).
 * Tutorial on how to reproduce our results, see our [BERT pre-training tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/).
 * The source code for our transformer kernels can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed) and BERT pre-training code can be found in the [DeepSpeedExamples repo](https://github.com/microsoft/deepspeedexamples).
@@ -2,6 +2,6 @@
 title: "ZeRO-2 & DeepSpeed: Shattering Barriers of Deep Learning Speed & Scale"
 excerpt: ""
 link: https://www.microsoft.com/en-us/research/blog/ZeRO-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/
-tags: training ZeRO
+tags: training ZeRO English
 date: 2020-05-19 02:00:00
 ---
 ---
 title: "An Order-of-Magnitude Larger and Faster Training with ZeRO-2"
 excerpt: ""
-tags: training ZeRO
+tags: training ZeRO English
 date: 2020-05-19 01:00:00
 toc: false
 ---
......
 ---
 title: "Microsoft DeepSpeed achieves the fastest BERT training time"
 excerpt: ""
-tags: training
+tags: training English
 date: 2020-05-28 00:00:00
 ---
......
 ---
 title: "DeepSpeed Microsoft Research Webinar on August 6th, 2020"
 excerpt: ""
-tags: presentations
+tags: presentations English
 link: https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-On-Demand.html
 image: /assets/images/webinar-aug2020.png
 date: 2020-07-24 00:00:00
......
 ---
 title: "DeepSpeed Microsoft Research Webinar is now on-demand"
 excerpt: ""
-tags: presentations
+tags: presentations English
 link: https://note.microsoft.com/MSR-Webinar-DeepSpeed-Registration-On-Demand.html
 date: 2020-08-07 00:00:00
 ---
 ---
 title: "Powering 10x longer sequences and 6x faster execution through DeepSpeed Sparse Attention"
 excerpt: ""
-tags: training
+tags: training English
 date: 2020-09-09 00:00:00
 toc: false
 ---
@@ -9,6 +9,6 @@ toc: false
 DeepSpeed offers sparse attention kernels, an instrumental technology to support long sequences of model inputs, whether for text, image, or sound. Compared with the classic dense Transformers, it powers an order-of-magnitude longer input sequence and obtains up to 6x faster execution with comparable accuracy. It also outperforms state-of-the-art sparse implementations with 1.5-3x faster execution. Furthermore, our sparse kernels support efficient execution of flexible sparse format and empower users to innovate on their custom sparse structures.
 * Brief overview, see our [press release]({{ site.press_release_v3 }}).
-* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html).
+* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/2020/09/08/sparse-attention.html).
 * Tutorial on how to use sparse attention, see our [Sparse attention tutorial](https://www.deepspeed.ai/tutorials/sparse-attention/).
 * The source code for our sparse attention kernels can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed) and BERT pre-training code using sparse attention can be found in the [DeepSpeedExamples repo](https://github.com/microsoft/deepspeedexamples).
@@ -2,7 +2,7 @@
 title: "10x bigger model training on a single GPU with ZeRO-Offload"
 excerpt: ""
 date: 2020-09-09 00:00:00
-tags: training ZeRO
+tags: training ZeRO English
 toc: false
 ---
......
@@ -2,7 +2,7 @@
 title: "DeepSpeed with 1-bit Adam: 5x less communication and 3.4x faster training"
 excerpt: ""
 date: 2020-09-09 00:00:00
-tags: training
+tags: training English
 ---
 ## 1. Introduction
......
@@ -2,7 +2,7 @@
 title: "Up to 5x less communication and 3.4x faster training through 1-bit Adam"
 excerpt: ""
 date: 2020-09-09 00:00:00
-tags: training
+tags: training English
 toc: false
 ---
@@ -15,6 +15,6 @@ across distributed devices. We introduce a new algorithm - 1-bit Adam - and
 its efficient implementation in DeepSpeed. 1-bit Adam offers the ***same convergence*** as Adam, incurs up to ***5x less communication*** that enables up to ***3.5x higher throughput for BERT-Large pretraining*** and up to ***2.7x higher throughput for SQuAD fine-tuning*** on bandwidth-limited clusters.
 * Brief overview, see our [press release]({{ site.press_release_v3 }}).
-* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html).
+* Detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/2020/09/08/onebit-adam-blog-post.html).
 * Tutorial on how to reproduce our results, see our [1-bit Adam tutorial](/tutorials/onebit-adam/).
 * The source code for 1-bit Adam can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed). The implementation of 1-bit Adam is in [onebit_adam.py](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/fp16/onebit_adam.py) and CUDA-Aware communication for 1-bit Adam is in [custom_collectives.py](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/custom_collectives.py). Example codes to try this feature can be found in the [DeepSpeedExamples repo](https://github.com/microsoft/deepspeedexamples) as shown in the [tutorial](/tutorials/onebit-adam/).
@@ -2,7 +2,7 @@
 title: "Training a Trillion Parameters with Pipeline Parallelism"
 excerpt: ""
 date: 2020-09-09 00:00:00
-tags: training
+tags: training English
 ---
 DeepSpeed includes new support for pipeline parallelism! DeepSpeed's training
......
@@ -2,10 +2,10 @@
 title: "DeepSpeed Sparse Attention"
 excerpt: ""
 date: 2020-09-09 01:00:00
-tags: training inference
+tags: training inference English
 ---
-Attention-based deep learning models such as the transformers are highly effective in capturing relationship between tokens in an input sequence, even across long distances. As a result, they are used with text, image, and sound-based inputs, where the sequence length can be in thousands of tokens. However, despite the effectiveness of attention modules to capture long term dependencies, in practice, their application to long sequence input is limited by compute and memory requirements of the attention computation that grow quadratically, `O(n^2)`, with the sequence length `n`.
+Attention-based deep learning models such as the transformers are highly effective in capturing the relationship between tokens in an input sequence, even across long distances. As a result, they are used with text, image, and sound-based inputs, where the sequence length can be in thousands of tokens. However, despite the effectiveness of attention modules to capture long term dependencies, in practice, their application to long sequence input is limited by compute and memory requirements of the attention computation that grow quadratically, `O(n^2)`, with the sequence length `n`.
 To address this limitation, DeepSpeed offers a suite of sparse attention kernels --an instrumental technology that can reduce the compute and memory requirement of attention computation by orders-of-magnitude via block-sparse computation. The suite not only alleviates the memory bottleneck of attention calculation, but also performs sparse computation efficiently. Its APIs allow convenient integration with any transformer-based models. Along with providing a wide spectrum of sparsity structures, it has the flexibility of handling any user-defined block-sparse structures. More specifically, sparse attention (SA) can be designed to compute local attention between nearby tokens, or global attention via summary tokens computed with local attention. Moreover, SA can also allow random attention, or any combination of local, global, and random attention as shown in the following figure with blue, orange, and green blocks, respectively. As a result, SA decreases the memory footprint to `O(wn)`, in which `1 < w < n` is a parameter, whose value depends on the attention structure.
@@ -27,7 +27,7 @@ In a pre-training experiment, we ran BERT model under three settings: dense, den
 ![Maximum sequence runnable on BERT](/assets/images/sa_maximum_sequence_runnable_on_bert.png){: .align-center}
-* **up to 6.3x faster computation**
+* **Up to 6.3x faster computation**
 We continued the pre-training experiment for different batch sizes and sequence lengths, using [BERT base/large](https://github.com/microsoft/DeepSpeedExamples/tree/master/bing_bert) and [Megatron GPT2](https://github.com/microsoft/DeepSpeedExamples/tree/master/Megatron-LM). In this experiment we let the training to continue for 100 iteration and recorded the average time per last 30 iterations. SA reduces total computation comparing with dense and improves training speed: the boost is higher with increased sequence length and it is up to 6.3x faster for BERT base, 5.3x for BERT large, and 6.1x for GPT2. Following charts show these results.
 ![Training time for BERT base with varying sequence length](/assets/images/sa_bert_base_time_result.png){: .align-center}
@@ -36,14 +36,14 @@ We continued the pre-training experiment for different batch sizes and sequence
 ![Training time for GPT2 with varying sequence length](/assets/images/sa_gpt2_time_result.png){: .align-center}
-* **higher accuracy**
+* **Higher accuracy**
 Related works along the line of sparse attention ([Sparse Transformer](https://arxiv.org/pdf/1904.10509.pdf), [Longformer](https://arxiv.org/pdf/2004.05150.pdf), [BigBird](https://arxiv.org/pdf/2007.14062.pdf)) have shown comparable or higher accuracy than full attention. Our experience is well aligned. In addition to lower memory overhead and faster computation, we also observe cases in production where SA reaches higher accuracy and faster convergence. The following chart illustrates accuracy of training a production model based on BERT for long document comprehension (2,048 sequence length). The experiment is performed in three settings: dense starting from scratch, SA starting from scratch, and SA continued training from a checkpoint of using dense with sequence length of 512. We have observed that, for pre-training from scratch, SA converges faster with higher accuracy comparing with dense. Furthermore, SA continuing training from a pre-trained checkpoint performs even better, with respect to both time and accuracy.
 ![Accuracy of long document comprehension application](/assets/images/sa_long_document_comprehension_result.png){: .align-center}
-* **comparison with state of the art, Longformer**
+* **Comparison with state of the art, Longformer**
 We compared SA with Longformer, a state-of-the-art sparse structure and implementation. In our experiment, SA uses `Fixed` sparsity, and two implementations have comparable accuracy. On system performance, SA outperforms Longformer both in training and inference:
 * **1.47x** faster execution pre-training MLM on Wikitext103
 We ran an experiment following the [notebook](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) offered by Longformer. In this experiment, we pre-train an MLM model using RoBERTa-base checkpoint. This is done on 8 V100-SXM2 GPU. Following table shows the details of the result in which using DeepSpeed Sparse Attention shows 1.47x speed up.
@@ -73,7 +73,7 @@ Through our Long Document Comprehension application we described above, we also
 |32 |1.24 |
 |16 |1.23 |
-* **flexibility to handle any block-sparse structure**
+* **Flexibility to handle any block-sparse structure**
 DeepSpeed Sparse Attention suite does not target at any specific sparse structure but enables model scientists to explore any block sparse structure with efficient system support. Currently, we have added popular sparse structure like:
 * [Fixed](https://arxiv.org/pdf/1904.10509.pdf) (from OpenAI Sparse Transformer)
 * [BigBird](https://arxiv.org/pdf/2007.14062.pdf) (from Google)
......
@@ -2,7 +2,7 @@
 title: "Progressive Layer Dropping"
 excerpt: ""
 date: 2020-10-29 00:00:00
-tags: training
+tags: training English
 toc: false
 ---
......