Unverified Commit f2ac7eaf authored by Jeff Rasley, committed by GitHub

ZeRO-2 (#217)



Updates for ZeRO stage 2 + ZeRO stage 1 with reduce-scatter (RS)
Co-authored-by: Tunji Ruwase <olruwase@microsoft.com>
Co-authored-by: Samyam Rajbhandari <samyamr@microsoft.com>
Co-authored-by: Shaden Smith <ShadenTSmith@gmail.com>
Co-authored-by: Elton Zheng <eltonz@microsoft.com>
Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
Co-authored-by: yuxionghe <yuxhe@microsoft.com>
Co-authored-by: Arash Ashari <arashari@microsoft.com>
parent c61e23b4
......@@ -8,11 +8,11 @@ DeepSpeed is a deep learning optimization library that makes distributed trainin
efficient, and effective.
<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>5x Faster Training</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>
DeepSpeed can train DL models with over a hundred billion parameters on the current
generation of GPU clusters, while achieving over 5x in system performance
generation of GPU clusters, while achieving over 10x in system performance
compared to the state of the art. Early adopters of DeepSpeed have already produced
a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
......@@ -22,9 +22,9 @@ establishing a new SOTA in the LM category.
{% assign news = site.posts | where: "sneak_preview", "false" %}
{% for post in news limit:5 %}
{% if post.link %}
* [{{ post.title }}]({{ post.link }})
* [{{ post.date | date: "%Y/%m/%d" }}] [{{ post.title }}]({{ post.link }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
{% else %}
* [{{ post.title }}]({{ post.url }})
* [{{ post.date | date: "%Y/%m/%d"}}] [{{ post.title }}]({{ post.url }}) {% if post.new_post %} <span style="color:dodgerblue">**NEW!**</span> {% endif %}
{% endif %}
{% endfor %}
......@@ -54,19 +54,20 @@ DeepSpeed achieves high performance and fast convergence through a combination o
efficiency optimizations on compute/communication/memory/IO and effectiveness
optimizations on advanced hyperparameter tuning and optimizers. For example:
* DeepSpeed trains BERT-large to parity in 14 hours using 64 GPUs (4 DGX-2 boxes) and in
3.7 hours using 256 GPUs (16 DGX-2 boxes).
* <span style="color:dodgerblue">DeepSpeed trains BERT-large to parity in 44
mins using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs
(16 DGX-2 boxes).</span>
**BERT-large Training Times**
| Devices | Source | Training Time (hours) |
| ------------- | --------- | ---------------------:|
| 64 TPUs | Google | 96 |
| 64 V100 GPUs | DeepSpeed | **14** |
| 256 V100 GPUs | NVIDIA | 3.9 |
| 256 V100 GPUs | DeepSpeed | **3.7** |
| Devices        | Source    | Training Time |
| -------------- | --------- | -------------:|
| 1024 V100 GPUs | DeepSpeed |    **44** min |
| 256 V100 GPUs  | DeepSpeed |    **2.4** hr |
| 64 V100 GPUs   | DeepSpeed |   **8.68** hr |
| 16 V100 GPUs   | DeepSpeed |  **33.22** hr |
*BERT Tutorial*: Coming Soon
*BERT code and tutorials will be available soon.*
* DeepSpeed trains GPT2 (1.5 billion parameters) 3.75x faster than the state-of-the-art NVIDIA
Megatron on Azure GPUs.
......@@ -77,37 +78,42 @@ optimizations on advanced hyperparameter tuning and optimizers. For example:
## Memory efficiency
DeepSpeed provides memory-efficient data parallelism and enables training models without
model parallelism. For example, DeepSpeed can train models with up to 6 billion parameters on
model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on
NVIDIA V100 GPUs with 32GB of device memory. In comparison, existing frameworks (e.g.,
PyTorch's Distributed Data Parallel) run out of memory with 1.5 billion parameter models.
PyTorch's Distributed Data Parallel) run out of memory with 1.4 billion parameter models.
DeepSpeed reduces the training memory footprint through a novel solution called Zero
Redundancy Optimizer (ZeRO). Unlike basic data parallelism where memory states are
replicated across data-parallel processes, ZeRO partitions model states to save
significant memory. The current implementation (stage 1 of ZeRO) reduces memory by up to
4x relative to the state-of-art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054).
replicated across data-parallel processes, ZeRO partitions model states and gradients to save
significant memory. It also reduces activation memory and memory fragmentation.
The current implementation (ZeRO-2) reduces memory by up to
8x relative to the state of the art. You can read more about ZeRO in our [paper](https://arxiv.org/abs/1910.02054), and
in our blog posts related to
[ZeRO-1](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/). <!-- and [ZeRO-2](linklink). -->
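For reference, ZeRO-2 is switched on through the `zero_optimization` section of the DeepSpeed config. A minimal sketch, mirroring the fp16 stage-2 test configs added in this commit (the batch sizes and optimizer settings are illustrative, not prescriptive):

```python
# Minimal DeepSpeed config sketch enabling ZeRO stage 2
# (values mirror the fp16 stage-2 test configs in this commit).
ds_config = {
    "train_batch_size": 24,
    "train_micro_batch_size_per_gpu": 6,
    "optimizer": {"type": "Adam", "params": {"lr": 3e-5}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}
```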
With this impressive memory reduction, early adopters of DeepSpeed have already
produced a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
<a href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft">
<span style="color:dodgerblue">Turing-NLG</span></a>,
establishing a new SOTA in the LM category.
## Scalability
DeepSpeed supports efficient data parallelism, model parallelism, and their
combination. ZeRO boosts the scaling capability and efficiency further.
* DeepSpeed provides system support to run models up to 100 billion parameters,
10x larger than the state-of-art (8 billion NVIDIA GPT, 11 billion Google T5).
* DeepSpeed can run large models more efficiently, up to 6x faster for models with
various sizes spanning 1.5B to 100B. More specifically, the data parallelism powered by ZeRO
* <span style="color:dodgerblue">DeepSpeed provides system support to run models with up to 170 billion parameters,
10x larger than the state of the art (8 billion NVIDIA GPT, 11 billion Google T5).</span>
* <span style="color:dodgerblue">DeepSpeed can run large models more efficiently, up to 10x
faster for models of
various sizes spanning 1.5B to 170B.</span> More specifically, the data parallelism powered by ZeRO
is complementary and can be combined with different types of model parallelism. It allows
DeepSpeed to fit models using a lower degree of model parallelism and a larger batch size, offering
significant performance gains compared to using model parallelism alone, as sketched below.
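A minimal sketch of this combination, assuming a Megatron-style training script that already provides `args`, a model-parallel `model`, and an `mpu` (model-parallel unit) object; the GPT tutorial linked below walks through the real integration:

```python
import deepspeed

# Sketch: layer ZeRO-powered data parallelism on top of an existing
# model-parallel model. `args`, `model`, and `mpu` are assumed to come
# from the surrounding Megatron-style script.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,                            # args.deepspeed_config selects the ZeRO config
    model=model,                          # already split by tensor-slicing model parallelism
    model_parameters=model.parameters(),
    mpu=mpu)                              # informs DeepSpeed of the model-parallel groups
```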
*Read more*: [technical report](https://arxiv.org/abs/1910.02054),
*Read more*: [ZeRO paper](https://arxiv.org/abs/1910.02054),
and [GPT tutorial](/tutorials/megatron).
![DeepSpeed-vs-Megatron](/assets/images/DeepSpeed-vs-Megatron.png)
![DeepSpeed Speedup](/assets/images/deepspeed-speedup.png)
<p align="center">
<em>The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of NVIDIA Megatron-LM) over using Megatron-LM alone.</em>
</p>
......@@ -123,7 +129,7 @@ convergence to desired accuracy.
## Good Usability
Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to six billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.3 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as tensor slicing of NVIDIA's Megatron-LM.
Only a few lines of code need to change for a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to 13 billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism; in contrast, standard data parallelism runs out of memory for models with more than 1.4 billion parameters. In addition, DeepSpeed conveniently supports flexible combinations of ZeRO-powered data parallelism with custom model parallelisms, such as the tensor slicing of NVIDIA's Megatron-LM.
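Concretely, the handful of changed lines usually looks like the following. This is a minimal sketch based on the sanity-test script added in this commit; `args`, `model`, and `data_loader` come from the existing training script, and `args.deepspeed_config` points at a JSON config like the ones in this change:

```python
import deepspeed

# Wrap an existing PyTorch model; the returned engine owns the optimizer,
# fp16 loss scaling, and any ZeRO partitioning declared in the config file.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=model.parameters())

for batch in data_loader:
    loss = model_engine(batch[0], batch[1])  # forward pass is unchanged
    model_engine.backward(loss)              # replaces loss.backward()
    model_engine.step()                      # replaces optimizer.step()
```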
## Features
......@@ -137,12 +143,17 @@ overview](features) for descriptions and usage.
* [Model Parallelism](features.md#model-parallelism)
* Support for Custom Model Parallelism
* Integration with Megatron-LM
* [Memory and Bandwidth Optimizations](features.md#memory-and-bandwidth-optimizations)
* The Zero Redundancy Optimizer (ZeRO)
* Constant Buffer Optimization (CBO)
* [The Zero Redundancy Optimizer (ZeRO)](features.md#the-zero-redundancy-optimizer)
* Optimizer State and Gradient Partitioning
* Activation Partitioning
* Constant Buffer Optimization
* Contiguous Memory Optimization
* [Additional Memory and Bandwidth Optimizations](features.md#additional-memory-and-bandwidth-optimizations)
* Smart Gradient Accumulation
* Communication/Computation Overlap
* [Training Features](features.md#training-features)
* Simplified training API
* Activation Checkpointing API
* Gradient Clipping
* Automatic loss scaling with mixed precision
* [Training Optimizers](features.md#training-optimizers)
......
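Several of the memory features listed above are enabled purely through configuration. A minimal sketch, mirroring the stage-2 GPT-2 test config added in this commit (field values are illustrative):

```python
# Config fragment sketch: ZeRO stage 2 plus activation partitioning and
# contiguous memory optimization (mirrors a stage-2 test config in this commit).
memory_config = {
    "zero_optimization": {"stage": 2},
    "activation_checkpointing": {
        "partition_activations": True,
        "contiguous_memory_optimization": True,
    },
}
```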
......@@ -56,6 +56,19 @@ class BingBertSquadFuncTestCase(BaseTestCase):
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_gpu4_fp16_zero2(self):
test_config = {
"gpus": 4,
"deepspeed": False,
"json": "deepspeed_bsz24_fp16_zero2_config.json",
"max_steps": 8,
"max_epoch_steps": 4,
"other_args": "--fp16 --print_steps 1"
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_gpu1_fp16(self):
test_config = {
"gpus": 1,
......@@ -151,6 +164,7 @@ class BingBertSquadFuncTestCase(BaseTestCase):
def suite():
suite = unittest.TestSuite()
suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp16'))
suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp16_zero2'))
suite.addTest(BingBertSquadFuncTestCase('test_gpu1_fp16'))
suite.addTest(BingBertSquadFuncTestCase('test_gpu4_fp32'))
suite.addTest(BingBertSquadFuncTestCase('test_gpu1_fp32'))
......
{
"tensorboard": {
"enabled": false,
"job_name": "MyJob"
},
"zero_optimization": true,
"disable_allgather": false,
"allgather_size": 200000,
"wall_clock_breakdown": false,
"train_batch_size": 24,
"train_micro_batch_size_per_gpu": 3,
"train_micro_batch_size_per_gpu": 6,
"steps_per_print": 1,
"optimizer": {
"type": "Adam",
......@@ -21,5 +13,8 @@
"gradient_clipping": 1.0,
"fp16": {
"enabled": true
},
"zero_optimization": {
"stage": 1
}
}
{
"train_batch_size": 24,
"train_micro_batch_size_per_gpu": 6,
"steps_per_print": 1,
"optimizer": {
"type": "Adam",
"params": {
"lr": 3e-5,
"weight_decay": 0.0,
"bias_correction": false
}
},
"gradient_clipping": 1.0,
"fp16": {
"enabled": true
},
"zero_optimization": {
"stage": 2
}
}
{
"train_batch_size": 24,
"train_micro_batch_size_per_gpu": 3,
"train_micro_batch_size_per_gpu": 6,
"steps_per_print": 1,
"optimizer": {
"type": "Adam",
......
......@@ -122,7 +122,7 @@ echo "deepspeed: ${enable_deepspeed}"
echo "other_args: ${other_args}"
EFFECTIVE_BATCH_SIZE=${batch_size}
MAX_GPU_BATCH_SIZE=3
MAX_GPU_BATCH_SIZE=6
PER_GPU_BATCH_SIZE=$((EFFECTIVE_BATCH_SIZE/num_gpus))
if [[ $PER_GPU_BATCH_SIZE -lt $MAX_GPU_BATCH_SIZE ]]; then
GRAD_ACCUM_STEPS=1
......
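For concreteness, the batch-size arithmetic in the run script above works out as follows. A small sketch using values from this commit's bsz24 configs; the helper function is illustrative only, and the else branch is an assumption since it lies outside the hunk shown:

```python
# Sketch of the per-GPU batch / gradient-accumulation arithmetic above.
def batch_plan(effective_batch_size, num_gpus, max_gpu_batch_size):
    per_gpu_batch = effective_batch_size // num_gpus
    if per_gpu_batch < max_gpu_batch_size:
        return 1, per_gpu_batch                      # single micro-batch per step
    # Assumed else branch: split into micro-batches of max_gpu_batch_size.
    return per_gpu_batch // max_gpu_batch_size, max_gpu_batch_size

print(batch_plan(24, 4, 6))   # -> (1, 6): 24 samples over 4 GPUs fits in one micro-batch
```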
......@@ -2,10 +2,11 @@
"train_batch_size": 4,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization": {
"stage":1
},
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
{
"train_batch_size": 4,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": {
"stage":2
},
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
}
},
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
}
}
......@@ -2,12 +2,14 @@
"train_batch_size": 8,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": false,
"zero_optimization": {
"stage":0
},
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015
"lr": 0.00015,
"max_grad_norm": 1.0
}
},
......
......@@ -2,10 +2,11 @@
"train_batch_size": 8,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization":{
"stage":1
},
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
{
"train_batch_size": 8,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": {
"stage":2
},
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
}
},
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"activation_checkpointing": {
"partition_activations": true,
"contiguous_memory_optimization": true
}
}
......@@ -2,10 +2,11 @@
"train_batch_size": 4,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization": {
"stage":2
},
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
......@@ -2,11 +2,10 @@
"train_batch_size": 16,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization": 1,
"disable_allgather": true,
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
......@@ -2,11 +2,12 @@
"train_batch_size": 32,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization": {
"stage":1
},
"disable_allgather": true,
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
......@@ -2,11 +2,10 @@
"train_batch_size": 8,
"gradient_accumulation_steps": 1,
"steps_per_print": 1,
"zero_optimization": true,
"zero_optimization": 1,
"disable_allgather": true,
"optimizer": {
"type": "Adam",
"legacy_fusion": false,
"params": {
"lr": 0.00015,
"max_grad_norm": 1.0
......
......@@ -85,6 +85,7 @@ gpt_options=" \
--checkpoint-activations \
--checkpoint-num-layers ${ckpt_num_layers} \
--fp16 \
--cache-dir /tmp/cache_dir \
--log-interval 1 \
${other_args} \
${ds_opt} \
......@@ -92,7 +93,7 @@ gpt_options=" \
"
work_dir="../../../DeepSpeedExamples/Megatron-LM/"
run_cmd="(cd ${work_dir} && deepspeed --num_gpus $gpus pretrain_gpt2.py ${gpt_options})"
run_cmd="(cd ${work_dir} && deepspeed --num_nodes $nodes --num_gpus $gpus pretrain_gpt2.py ${gpt_options})"
echo ${run_cmd}
eval ${run_cmd}
......
......@@ -43,7 +43,7 @@ class GPT2CheckpointTestCase(BaseTestCase):
def tearDown(self):
os.chdir(self.save_dir)
def test_mp4_gpu16_node1_with_zero(self):
def test_mp4_gpu16_node1_with_zero1(self):
test_config = {
"mp": 2,
"gpus": 4,
......@@ -55,12 +55,34 @@ class GPT2CheckpointTestCase(BaseTestCase):
"seq_length": 256,
"heads": 16,
"deepspeed": True,
"tag": "ds_zero",
"tag": "ds_zero1",
"zero": True,
"other_args": "",
"checkpoint_name": "ckpt_mp4_gpu16_w_zero",
"checkpoint_name": "ckpt_mp4_gpu16_w_zero1",
"checkpoint_interval": 1000,
"json": "ds_config_func_bs8.json",
"json": "ds_config_func_bs8_zero1.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp4_gpu16_node1_with_zero2(self):
test_config = {
"mp": 2,
"gpus": 4,
"nodes": 1,
"bs": 8,
"steps": 1100,
"layers": 2,
"hidden_size": 256,
"seq_length": 256,
"heads": 16,
"deepspeed": True,
"tag": "ds_zero2",
"zero": True,
"other_args": "",
"checkpoint_name": "ckpt_mp4_gpu16_w_zero2",
"checkpoint_interval": 1000,
"json": "ds_config_func_bs8_zero2.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
......@@ -184,7 +206,8 @@ class GPT2CheckpointTestCase(BaseTestCase):
def checkpoint_suite():
suite = unittest.TestSuite()
suite.addTest(GPT2CheckpointTestCase('test_mp4_gpu16_node1_with_zero'))
suite.addTest(GPT2CheckpointTestCase('test_mp4_gpu16_node1_with_zero1'))
suite.addTest(GPT2CheckpointTestCase('test_mp4_gpu16_node1_with_zero2'))
suite.addTest(GPT2CheckpointTestCase('test_mp4_gpu16_node1_without_zero'))
return suite
......
......@@ -43,7 +43,7 @@ class GPT2FuncTestCase(BaseTestCase):
def tearDown(self):
os.chdir(self.save_dir)
def test_mp1_gpu1_node1(self):
def test_mp1_gpu1_node1_zero1(self):
test_config = {
"mp": 1,
"gpus": 1,
......@@ -55,13 +55,13 @@ class GPT2FuncTestCase(BaseTestCase):
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs4.json",
"json": "ds_config_func_bs4_zero1.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp1_gpu2_node1(self):
def test_mp1_gpu2_node1_zero1(self):
test_config = {
"mp": 1,
"gpus": 2,
......@@ -73,13 +73,13 @@ class GPT2FuncTestCase(BaseTestCase):
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8.json",
"json": "ds_config_func_bs8_zero1.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp2_gpu4_node1(self):
def test_mp2_gpu4_node1_zero1(self):
test_config = {
"mp": 2,
"gpus": 4,
......@@ -91,7 +91,79 @@ class GPT2FuncTestCase(BaseTestCase):
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8.json",
"json": "ds_config_func_bs8_zero1.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp4_gpu4_node1_zero1(self):
test_config = {
"mp": 4,
"gpus": 4,
"nodes": 1,
"bs": 8,
"steps": 1000,
"layers": 12,
"hidden_size": 768,
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8_zero1.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp1_gpu1_node1_zero2(self):
test_config = {
"mp": 1,
"gpus": 1,
"nodes": 1,
"bs": 4,
"steps": 1000,
"layers": 12,
"hidden_size": 768,
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs4_zero2.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp1_gpu2_node1_zero2(self):
test_config = {
"mp": 1,
"gpus": 2,
"nodes": 1,
"bs": 8,
"steps": 1000,
"layers": 12,
"hidden_size": 768,
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8_zero2.json",
}
succ = self.run_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp2_gpu4_node1_zero2(self):
test_config = {
"mp": 2,
"gpus": 4,
"nodes": 1,
"bs": 8,
"steps": 1000,
"layers": 12,
"hidden_size": 768,
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8_zero2.json",
}
succ = self.run_test(test_config, 0.01)
......@@ -100,7 +172,7 @@ class GPT2FuncTestCase(BaseTestCase):
succ = self.run_partition_activations_test(test_config, 0.01)
self.assertTrue(succ)
def test_mp4_gpu4_node1(self):
def test_mp4_gpu4_node1_zero2(self):
test_config = {
"mp": 4,
"gpus": 4,
......@@ -112,7 +184,7 @@ class GPT2FuncTestCase(BaseTestCase):
"seq_length": 256,
"heads": 12,
"deepspeed": False,
"json": "ds_config_func_bs8.json",
"json": "ds_config_func_bs8_zero2.json",
}
succ = self.run_test(test_config, 0.01)
......@@ -144,11 +216,12 @@ class GPT2FuncTestCase(BaseTestCase):
print("\n")
print("{0}: starting......".format(self.id()))
baseline_prefix = "gpt2_func_"
prefix = "gpt2_partition_activation_"
# baseline run...
test_config["deepspeed"] = False
base_file = self.gen_output_name(test_config, prefix)
base_file = self.gen_output_name(test_config, baseline_prefix)
# skip baseline run if it exists.
if not self.has_loss_data(base_file):
......@@ -159,7 +232,7 @@ class GPT2FuncTestCase(BaseTestCase):
# DeepSpeed run...
test_config["deepspeed"] = True
test_config["other_args"] = "--partition-activations"
test_config["other_args"] = "--deepspeed-activation-checkpointing"
print("{0}: DeepSpeed run.".format(self.id()))
test_file = self.gen_output_name(test_config, prefix)
self.run_gpt2_test(test_config, test_file)
......@@ -217,10 +290,16 @@ class GPT2FuncTestCase(BaseTestCase):
def suite():
suite = unittest.TestSuite()
suite.addTest(GPT2FuncTestCase('test_mp1_gpu1_node1'))
suite.addTest(GPT2FuncTestCase('test_mp1_gpu2_node1'))
suite.addTest(GPT2FuncTestCase('test_mp2_gpu4_node1'))
suite.addTest(GPT2FuncTestCase('test_mp4_gpu4_node1'))
suite.addTest(GPT2FuncTestCase('test_mp1_gpu1_node1_zero1'))
suite.addTest(GPT2FuncTestCase('test_mp1_gpu2_node1_zero1'))
suite.addTest(GPT2FuncTestCase('test_mp2_gpu4_node1_zero1'))
suite.addTest(GPT2FuncTestCase('test_mp4_gpu4_node1_zero1'))
suite.addTest(GPT2FuncTestCase('test_mp1_gpu1_node1_zero2'))
suite.addTest(GPT2FuncTestCase('test_mp1_gpu2_node1_zero2'))
suite.addTest(GPT2FuncTestCase('test_mp2_gpu4_node1_zero2'))
suite.addTest(GPT2FuncTestCase('test_mp4_gpu4_node1_zero2'))
suite.addTest(GPT2FuncTestCase('test_optimizer_scheduler'))
return suite
......
......@@ -29,14 +29,16 @@ def pytest_hack(runner_result):
assert runner_result.wasSuccessful() # fail the test
def test_run():
runner = unittest.TextTestRunner(failfast=True)
# Add test suites here.
pytest_hack(runner.run(Megatron_GPT2.suite()))
pytest_hack(runner.run(Megatron_GPT2.checkpoint_suite()))
pytest_hack(runner.run(BingBertSquad.suite()))
#def test_megatron():
# runner = unittest.TextTestRunner(failfast=True)
# pytest_hack(runner.run(Megatron_GPT2.suite()))
#
#
#def test_megatron_checkpoint():
# runner = unittest.TextTestRunner(failfast=True)
# pytest_hack(runner.run(Megatron_GPT2.checkpoint_suite()))
if __name__ == '__main__':
test_run()
def test_squad():
runner = unittest.TextTestRunner(failfast=True)
pytest_hack(runner.run(BingBertSquad.suite()))
import os
import json
import argparse
import torch
import deepspeed
from torch.utils.data.distributed import DistributedSampler
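# Toy model for the ZeRO sanity check: one linear layer feeding a cross-entropy loss.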
class SimpleModel(torch.nn.Module):
def __init__(self, hidden_dim, empty_grad=False):
super(SimpleModel, self).__init__()
self.linear = torch.nn.Linear(hidden_dim, hidden_dim)
if empty_grad:
self.layers2 = torch.nn.ModuleList([torch.nn.Linear(hidden_dim, hidden_dim)])
self.cross_entropy_loss = torch.nn.CrossEntropyLoss()
def forward(self, x, y):
hidden_dim = x
hidden_dim = self.linear(hidden_dim)
return self.cross_entropy_loss(hidden_dim, y)
def create_config_from_dict(tmpdir, config_dict):
config_path = os.path.join(tmpdir, 'temp_config.json')
with open(config_path, 'w') as fd:
json.dump(config_dict, fd)
return config_path
def get_data_loader(model, total_samples, hidden_dim, device):
batch_size = model.train_micro_batch_size_per_gpu()
train_data = torch.randn(total_samples, hidden_dim, device=device, dtype=torch.half)
train_label = torch.empty(total_samples,
dtype=torch.long,
device=device).random_(hidden_dim)
train_dataset = torch.utils.data.TensorDataset(train_data, train_label)
sampler = DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size,
sampler=sampler)
return train_loader
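# Build the args namespace for deepspeed.initialize(); the --zero flag overrides
# the ZeRO stage in config_dict before the config is written to a temp file.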
def get_args(tmpdir, config_dict):
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
parser.add_argument('--zero', type=int, default=0)
args = parser.parse_args()
config_dict["zero_optimization"]["stage"] = args.zero
print('config_dict["zero_optimization"]', config_dict["zero_optimization"])
config_path = create_config_from_dict(tmpdir, config_dict)
args.deepspeed_config = config_path
return args
def print0(msg):
if torch.distributed.get_rank() == 0:
print(msg, flush=True)
rank = int(os.environ['RANK'])
print('seed:', 2222 + rank)
torch.random.manual_seed(2222 + rank)
config_dict = {
"train_batch_size": 8,
"steps_per_print": 1,
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.00015,
}
},
"fp16": {
"enabled": True,
"initial_scale_power": 15
},
"zero_optimization": {
"stage": 0,
"reduce_bucket_size": 20
}
}
# "initial_scale_power": 15
args = get_args('/tmp/', config_dict)
hidden_dim = 4
model = SimpleModel(hidden_dim, empty_grad=False)
model, _, _, _ = deepspeed.initialize(args=args,
model=model,
model_parameters=model.parameters(),
dist_init_required=True)
def print_params(tag, model):
if torch.distributed.get_rank() == 0:
for n, p in model.named_parameters():
print0("{} {}:{}".format(tag, n, p))
data_loader = get_data_loader(model=model,
total_samples=1000,
hidden_dim=hidden_dim,
device=model.device)
#print_params('pre-train', model)
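# Training loop through the DeepSpeed engine: forward, engine-managed backward
# (fp16 loss scaling and ZeRO gradient handling), then optimizer step.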
for n, batch in enumerate(data_loader):
loss = model(batch[0], batch[1])
if torch.distributed.get_rank() == 0:
print("LOSS:", loss.item())
model.backward(loss)
model.step()
#print_params('step={}'.format(n), model)
if n == 5: break
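This sanity script is meant to be launched with the `deepspeed` launcher used elsewhere in this change, e.g. `deepspeed --num_gpus 2 <this script> --zero 2` (an assumed invocation, not part of the commit); the `--zero` flag selects the ZeRO stage that `get_args` writes into the config before `deepspeed.initialize` runs.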