Unverified commit f2ac7eaf authored by Jeff Rasley, committed by GitHub

ZeRO-2 (#217)



Updates for ZeRO stage 2 + ZeRO stage 1 with reduce-scatter (RS)
Co-authored-by: Tunji Ruwase <olruwase@microsoft.com>
Co-authored-by: Samyam Rajbhandari <samyamr@microsoft.com>
Co-authored-by: Shaden Smith <ShadenTSmith@gmail.com>
Co-authored-by: Elton Zheng <eltonz@microsoft.com>
Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
Co-authored-by: yuxionghe <yuxhe@microsoft.com>
Co-authored-by: Arash Ashari <arashari@microsoft.com>
parent c61e23b4
@@ -8,11 +8,11 @@ DeepSpeed is a deep learning optimization library that makes distributed training
 efficient, and effective.
 <p align="center"><i><b>10x Larger Models</b></i></p>
-<p align="center"><i><b>5x Faster Training</b></i></p>
+<p align="center"><i><b>10x Faster Training</b></i></p>
 <p align="center"><i><b>Minimal Code Change</b></i></p>
 DeepSpeed can train DL models with over a hundred billion parameters on current
-generation of GPU clusters, while achieving over 5x in system performance
+generation of GPU clusters, while achieving over 10x in system performance
 compared to the state-of-art. Early adopters of DeepSpeed have already produced
 a language model (LM) with over 17B parameters called
 [Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
@@ -22,9 +22,9 @@ establishing a new SOTA in the LM category.
 {% assign news = site.posts | where: "sneak_preview", "false" %}
 {% for post in news limit:5 %}
 {% if post.link %}
-* [{{ post.title }}]({{ post.link }})
+* [{{ post.date | date: "%Y/%m/%d" }}] [{{ post.title }}]({{ post.link }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
 {% else %}
-* [{{ post.title }}]({{ post.url }})
+* [{{ post.date | date: "%Y/%m/%d" }}] [{{ post.title }}]({{ post.url }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
 {% endif %}
 {% endfor %}
@@ -54,19 +54,20 @@ DeepSpeed achieves high performance and fast convergence through a combination of
 efficiency optimizations on compute/communication/memory/IO and effectiveness
 optimizations on advanced hyperparameter tuning and optimizers. For example:
-* DeepSpeed trains BERT-large to parity in 14 hours using 64 GPUs (4 DGX-2 boxes) and in
-  3.7 hours using 256 GPUs (16 DGX-2 boxes).
+* <span style="color:dodgerblue">DeepSpeed trains BERT-large to parity in 44
+  mins using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs
+  (16 DGX-2 boxes).</span>
   **BERT-large Training Times**
-  | Devices | Source | Training Time (hours) |
-  | ------------- | --------- | ---------------------:|
-  | 64 TPUs | Google | 96 |
-  | 64 V100 GPUs | DeepSpeed | **14** |
-  | 256 V100 GPUs | NVIDIA | 3.9 |
-  | 256 V100 GPUs | DeepSpeed | **3.7** |
+  | Devices | Source | Training Time |
+  | -------------- | --------- | ---------------------:|
+  | 1024 V100 GPUs | DeepSpeed | **44** min |
+  | 256 V100 GPUs | DeepSpeed | **2.4** hr |
+  | 64 V100 GPUs | DeepSpeed | **8.68** hr |
+  | 16 V100 GPUs | DeepSpeed | **33.22** hr |
-  *BERT Tutorial*: Coming Soon
+  *BERT codes and tutorials will be available soon.*
 * DeepSpeed trains GPT2 (1.5 billion parameters) 3.75x faster than state-of-art, NVIDIA
   Megatron on Azure GPUs.
@@ -77,37 +78,42 @@ optimizations on advanced hyperparameter tuning and optimizers. For example:
 ## Memory efficiency
 DeepSpeed provides memory-efficient data parallelism and enables training models without
-model parallelism. For example, DeepSpeed can train models with up to 6 billion parameters on
+model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on
 NVIDIA V100 GPUs with 32GB of device memory. In comparison, existing frameworks (e.g.,
-PyTorch's Distributed Data Parallel) run out of memory with 1.5 billion parameter models.
+PyTorch's Distributed Data Parallel) run out of memory with 1.4 billion parameter models.
 DeepSpeed reduces the training memory footprint through a novel solution called Zero
 Redundancy Optimizer (ZeRO). Unlike basic data parallelism where memory states are
-replicated across data-parallel processes, ZeRO partitions model states to save
-significant memory. The current implementation (stage 1 of ZeRO) reduces memory by up to
-4x relative to the state-of-art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054).
+replicated across data-parallel processes, ZeRO partitions model states and gradients to save
+significant memory. Furthermore, it also reduces activation memory and fragmented memory.
+The current implementation (ZeRO-2) reduces memory by up to
+8x relative to the state-of-art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054), and
+in our blog posts related to
+[ZeRO-1](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/). <!-- and [ZeRO-2](linklink). -->
 With this impressive memory reduction, early adopters of DeepSpeed have already
 produced a language model (LM) with over 17B parameters called
-[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
+<a href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft">
+<span style="color:dodgerblue">Turing-NLG</span></a>,
 establishing a new SOTA in the LM category.
 ## Scalability
 DeepSpeed supports efficient data parallelism, model parallelism, and their
 combination. ZeRO boosts the scaling capability and efficiency further.
-* DeepSpeed provides system support to run models up to 100 billion parameters,
-  10x larger than the state-of-art (8 billion NVIDIA GPT, 11 billion Google T5).
-* DeepSpeed can run large models more efficiently, up to 6x faster for models with
-  various sizes spanning 1.5B to 100B. More specifically, the data parallelism powered by ZeRO
+* <span style="color:dodgerblue">DeepSpeed provides system support to run models up to 170 billion parameters,
+  10x larger than the state-of-art (8 billion NVIDIA GPT, 11 billion Google T5).</span>
+* <span style="color:dodgerblue">DeepSpeed can run large models more efficiently, up to 10x
+  faster for models with
+  various sizes spanning 1.5B to 170B.</span> More specifically, the data parallelism powered by ZeRO
   is complementary and can be combined with different types of model parallelism. It allows
   DeepSpeed to fit models using lower degree of model parallelism and higher batch size, offering
   significant performance gains compared to using model parallelism alone.
-*Read more*: [technical report](https://arxiv.org/abs/1910.02054),
+*Read more*: [ZeRO paper](https://arxiv.org/abs/1910.02054),
 and [GPT tutorial](/tutorials/megatron).
-![DeepSpeed-vs-Megatron](/assets/images/DeepSpeed-vs-Megatron.png)
+![DeepSpeed Speedup](/assets/images/deepspeed-speedup.png)
 <p align="center">
 <em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of NVIDIA Megatron-LM) over using Megatron-LM alone.</em>
 </p>
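To make the "up to 8x" memory claim above concrete, the following back-of-envelope sketch applies the model-state formulas from the ZeRO paper (2 bytes per parameter for fp16 weights, 2 for fp16 gradients, and roughly 12 for fp32 Adam states). The function name, the 7.5B-parameter model, and the 64-way data-parallel degree are illustrative assumptions, not DeepSpeed's internal accounting.

```python
# Back-of-envelope sketch of per-GPU memory for *model states* under ZeRO,
# following the formulas in the ZeRO paper (https://arxiv.org/abs/1910.02054).
def model_state_bytes_per_gpu(num_params, dp_degree, zero_stage):
    p16, g16, opt32 = 2 * num_params, 2 * num_params, 12 * num_params  # bytes
    if zero_stage == 0:   # plain data parallelism: everything replicated
        return p16 + g16 + opt32
    if zero_stage == 1:   # ZeRO-1: partition optimizer states
        return p16 + g16 + opt32 / dp_degree
    if zero_stage == 2:   # ZeRO-2: also partition gradients
        return p16 + (g16 + opt32) / dp_degree
    raise ValueError("only stages 0-2 are covered in this sketch")

GB = 1024 ** 3
for stage in (0, 1, 2):
    gb = model_state_bytes_per_gpu(7.5e9, dp_degree=64, zero_stage=stage) / GB
    print(f"7.5B params, 64-way data parallel, ZeRO stage {stage}: {gb:.1f} GB/GPU")
# As the data-parallel degree grows, stage 2 approaches a 16/2 = 8x reduction
# over the replicated baseline, matching the figure quoted above.
```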
@@ -123,7 +129,7 @@ convergence to desired accuracy.
 ## Good Usability
-Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to six billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.3 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as tensor slicing of NVIDIA's Megatron-LM.
+Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to 13 billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.4 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as tensor slicing of NVIDIA's Megatron-LM.
 ## Features
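As a concrete illustration of the "few lines of code changes" claim, a minimal training script wired up for DeepSpeed might look like the sketch below. The toy model, synthetic data, and loss are placeholders; the script is meant to be launched with the `deepspeed` launcher and a JSON config (such as the ones added in this commit) passed via `--deepspeed_config`.

```python
# Hypothetical minimal integration sketch; model, data, and loss are placeholders.
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser()
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed / --deepspeed_config
args = parser.parse_args()

model = torch.nn.Linear(1024, 1024)
data = [(torch.randn(8, 1024), torch.randn(8, 1024)) for _ in range(10)]

# The returned engine owns the optimizer, loss scaling, and ZeRO partitioning
# described in the JSON config; with fp16 enabled, inputs also need casting to half.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=model.parameters())

for x, y in data:
    x, y = x.to(model_engine.device), y.to(model_engine.device)
    loss = torch.nn.functional.mse_loss(model_engine(x), y)
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step() + zero_grad()
```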
@@ -137,12 +143,17 @@ overview](features) for descriptions and usage.
 * [Model Parallelism](features.md#model-parallelism)
   * Support for Custom Model Parallelism
   * Integration with Megatron-LM
-* [Memory and Bandwidth Optimizations](features.md#memory-and-bandwidth-optimizations)
-  * The Zero Redundancy Optimizer (ZeRO)
-  * Constant Buffer Optimization (CBO)
+* [The Zero Redundancy Optimizer (ZeRO)](features.md#the-zero-redundancy-optimizer)
+  * Optimizer State and Gradient Partitioning
+  * Activation Partitioning
+  * Constant Buffer Optimization
+  * Contiguous Memory Optimization
+* [Additional Memory and Bandwidth Optimizations](features.md#additional-memory-and-bandwidth-optimizations)
   * Smart Gradient Accumulation
+  * Communication/Computation Overlap
 * [Training Features](features.md#training-features)
   * Simplified training API
+  * Activation Checkpointing API
   * Gradient Clipping
   * Automatic loss scaling with mixed precision
 * [Training Optimizers](features.md#training-optimizers)
......
@@ -56,6 +56,19 @@ class BingBertSquadFuncTestCase(BaseTestCase):
         succ = self.run_test(test_config, 0.01)
         self.assertTrue(succ)
+
+    def test_gpu4_fp16_zero2(self):
+        test_config = {
+            "gpus": 4,
+            "deepspeed": False,
+            "json": "deepspeed_bsz24_fp16_zero2_config.json",
+            "max_steps": 8,
+            "max_epoch_steps": 4,
+            "other_args": "--fp16 --print_steps 1"
+        }
+        succ = self.run_test(test_config, 0.01)
+        self.assertTrue(succ)
+
     def test_gpu1_fp16(self):
         test_config = {
             "gpus": 1,
@@ -151,6 +164,7 @@ class BingBertSquadFuncTestCase(BaseTestCase):
 def suite():
     suite = unittest.TestSuite()
     suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp16'))
+    suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp16_zero2'))
     suite.addTest(BingBertSquadFuncTestCase('test_gpu1_fp16'))
     suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp32'))
     suite.addTest(BingBertSquadFuncTestCase('test_gpu1_fp32'))
......
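For reference, the new ZeRO-2 case can be run on its own through the standard unittest runner; the module name in the import below is illustrative, since the test file's path is not shown in this hunk.

```python
# Illustrative only: adjust the import to the actual test module path.
import unittest
from test_e2e_squad import BingBertSquadFuncTestCase, suite

runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite())                                            # full suite
runner.run(BingBertSquadFuncTestCase("test_gpu4_fp16_zero2"))  # just the ZeRO-2 case
```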
 {
-  "tensorboard": {
-    "enabled": false,
-    "job_name": "MyJob"
-  },
-  "zero_optimization": true,
-  "disable_allgather": false,
-  "allgather_size": 200000,
-  "wall_clock_breakdown": false,
   "train_batch_size": 24,
-  "train_micro_batch_size_per_gpu": 3,
+  "train_micro_batch_size_per_gpu": 6,
   "steps_per_print": 1,
   "optimizer": {
     "type": "Adam",
@@ -21,5 +13,8 @@
   "gradient_clipping": 1.0,
   "fp16": {
     "enabled": true
-  }
+  },
+  "zero_optimization": {
+    "stage": 1
+  }
 }
{
  "train_batch_size": 24,
  "train_micro_batch_size_per_gpu": 6,
  "steps_per_print": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 3e-5,
      "weight_decay": 0.0,
      "bias_correction": false
    }
  },
  "gradient_clipping": 1.0,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
 {
   "train_batch_size": 24,
-  "train_micro_batch_size_per_gpu": 3,
+  "train_micro_batch_size_per_gpu": 6,
   "steps_per_print": 1,
   "optimizer": {
     "type": "Adam",
......
@@ -122,7 +122,7 @@ echo "deepspeed: ${enable_deepspeed}"
 echo "other_args: ${other_args}"
 EFFECTIVE_BATCH_SIZE=${batch_size}
-MAX_GPU_BATCH_SIZE=3
+MAX_GPU_BATCH_SIZE=6
 PER_GPU_BATCH_SIZE=$((EFFECTIVE_BATCH_SIZE/num_gpus))
 if [[ $PER_GPU_BATCH_SIZE -lt $MAX_GPU_BATCH_SIZE ]]; then
     GRAD_ACCUM_STEPS=1
......
@@ -2,10 +2,11 @@
   "train_batch_size": 4,
   "gradient_accumulation_steps": 1,
   "steps_per_print": 1,
-  "zero_optimization": true,
+  "zero_optimization": {
+    "stage": 1
+  },
   "optimizer": {
     "type": "Adam",
-    "legacy_fusion": false,
     "params": {
       "lr": 0.00015,
       "max_grad_norm": 1.0
......
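The config hunks above and below all make the same schema change: `zero_optimization` moves from a boolean flag to an object with an explicit `stage` field (0 disables ZeRO, 1 partitions optimizer states, 2 additionally partitions gradients). A small, hypothetical helper for upgrading old-style configs could look like the following; the function name and file handling are illustrative, not part of DeepSpeed.

```python
# Hypothetical helper illustrating the boolean -> object migration of "zero_optimization".
import json

def upgrade_zero_config(path_in, path_out):
    with open(path_in) as f:
        cfg = json.load(f)
    zero = cfg.get("zero_optimization", False)
    if isinstance(zero, bool):  # old-style flag
        cfg["zero_optimization"] = {"stage": 1 if zero else 0}
    with open(path_out, "w") as f:
        json.dump(cfg, f, indent=2)
```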
{
  "train_batch_size": 4,
  "gradient_accumulation_steps": 1,
  "steps_per_print": 1,
  "zero_optimization": {
    "stage": 2
  },
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015,
      "max_grad_norm": 1.0
    }
  },
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  }
}
@@ -2,12 +2,14 @@
   "train_batch_size": 8,
   "gradient_accumulation_steps": 1,
   "steps_per_print": 1,
-  "zero_optimization": false,
+  "zero_optimization": {
+    "stage": 0
+  },
   "optimizer": {
     "type": "Adam",
-    "legacy_fusion": false,
     "params": {
-      "lr": 0.00015
+      "lr": 0.00015,
+      "max_grad_norm": 1.0
     }
   },
......
@@ -2,10 +2,11 @@
   "train_batch_size": 8,
   "gradient_accumulation_steps": 1,
   "steps_per_print": 1,
-  "zero_optimization": true,
+  "zero_optimization": {
+    "stage": 1
+  },
   "optimizer": {
     "type": "Adam",
-    "legacy_fusion": false,
     "params": {
       "lr": 0.00015,
       "max_grad_norm": 1.0
......
{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "steps_per_print": 1,
  "zero_optimization": {
    "stage": 2
  },
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015,
      "max_grad_norm": 1.0
    }
  },
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "activation_checkpointing": {
    "partition_activations": true,
    "contiguous_memory_optimization": true
  }
}
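The `activation_checkpointing` block above pairs with DeepSpeed's activation checkpointing API (listed as "Activation Checkpointing API" in the feature list earlier in this diff). Below is a rough, unverified sketch of how a training script might opt in: the config filename and the stand-in module are assumptions, and in a real model-parallel setup the first argument to `configure` would be the model-parallel unit rather than `None`.

```python
# Rough sketch (not verified end-to-end); filename and module are placeholders.
import torch
import deepspeed

block = torch.nn.Linear(1024, 1024)  # stands in for a transformer layer

# Pick up partition_activations / contiguous_memory_optimization from the JSON config.
deepspeed.checkpointing.configure(None, deepspeed_config="ds_zero2_config.json")

x = torch.randn(4, 1024, requires_grad=True)
# Recompute the block's activations during backward instead of storing them,
# using DeepSpeed's checkpoint() in place of torch.utils.checkpoint.checkpoint().
y = deepspeed.checkpointing.checkpoint(block, x)
y.sum().backward()
```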