Unverified Commit 580dd87c authored by Stas Bekman, committed by GitHub

[Deepspeed] add support for bf16 mode (#14569)



* [WIP] add support for bf16 mode

* prep for bf16

* prep for bf16

* fix; zero2/bf16 is ok

* check bf16 is available

* test fixes

* enable zero3_bf16

* config files

* docs

* split stage_dtype; merge back to non-dtype-specific config file

* fix doc

* cleanup

* cleanup

* bfloat16 => bf16 to match the PR changes

* s/zero_gather_fp16_weights_on_model_save/zero_gather_16bit_weights_on_model_save/; s/save_fp16_model/save_16bit_model/

* test fixes/skipping

* move

* fix

* Update docs/source/main_classes/deepspeed.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* backticks

* cleanup

* cleanup

* cleanup

* new version

* add note about grad accum in bf16
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
parent c1f209da
@@ -367,7 +367,7 @@ cat <<'EOT' > ds_config_zero3.json
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     },
     "gradient_accumulation_steps": "auto",
@@ -652,7 +652,7 @@ The following is an example of configuration for ZeRO stage 3:
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     }
 }
 ```
@@ -691,7 +691,7 @@ The following configuration values depend on the model's hidden size:
 therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended
 values. But, of course, feel free to set these explicitly as well.

-`stage3_gather_fp16_weights_on_model_save` enables model fp16 weights consolidation when model gets saved. With large
+`stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when model gets saved. With large
 models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if
 you plan to resume the training. Watch out for future updates that will remove this limitation and make things more
 flexible.
@@ -760,8 +760,8 @@ The following configuration example enables NVMe to offload both optimizer state
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
-    }
+        "stage3_gather_16bit_weights_on_model_save": true
+    },
 }
 ```
@@ -966,7 +966,7 @@ Here is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`:
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     },
     "gradient_accumulation_steps": "auto",
@@ -1029,7 +1029,7 @@ values look like, but we highly recommend using the one with multiple `auto` set
         "stage3_param_persistence_threshold": 1e4,
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     },
     "steps_per_print": 2000,
@@ -1232,6 +1232,7 @@ the much more efficient tf32 format for some operations, but the results will st
 benchmarks, please, see [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes
 instructions on how to disable this automatic conversion if for some reason you prefer not to use it.

+With the 🤗 Trainer you can use `--tf32` to enable it, or disable it with `--tf32 0` or `--no_tf32`. If neither flag is passed, PyTorch's default is used.
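
As an aside (not part of this commit), the global PyTorch switches that a flag like `--tf32` would toggle are documented backend settings:

```python
import torch

# On Ampere-class GPUs, PyTorch can run fp32 matmuls and cuDNN convolutions
# in TF32. These are the documented global switches:
torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions

# Set both to False to force strict fp32 arithmetic instead.
```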
@@ -1241,7 +1242,9 @@ instructions on how to disable this automatic conversion if for some reason you

 You can use automatic mixed precision with either a pytorch-like AMP way or the apex-like way:

-To configure pytorch AMP-like mode set:
+### fp16
+
+To configure pytorch AMP-like mode with fp16 (float16) set:

 ```json
 {
@@ -1259,7 +1262,7 @@ To configure pytorch AMP-like mode set:
 and the [`Trainer`] will automatically enable or disable it based on the value of
 `args.fp16_backend`. The rest of config values are up to you.

-This mode gets enabled when `--fp16 --fp16_backend amp` command line args are passed.
+This mode gets enabled when `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed.

 You can also enable/disable this mode explicitly:
@@ -1281,6 +1284,43 @@ configuration.

 Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options).

+### bf16
+
+If bf16 (bfloat16) is desired instead of fp16 then the following configuration section should be used:
+
+```json
+{
+    "bf16": {
+        "enabled": "auto"
+    }
+}
+```
+
+bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling.
+
+This mode gets enabled when `--bf16` or `--bf16_full_eval` command line args are passed.
+
+You can also enable/disable this mode explicitly:
+
+```json
+{
+    "bf16": {
+        "enabled": true
+    }
+}
+```
+
+<Tip>
+
+As of `deepspeed==0.6.0` the bf16 support is new and experimental.
+
+If you use [gradient accumulation](#gradient-accumulation) with bf16 enabled, be aware that gradients are accumulated in bf16; because of this format's low precision the accumulation can be lossy, which may not be what you want.
+
+</Tip>
+
+### apex
+
 To configure apex AMP-like mode set:

 ```json
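
To make the two bf16 claims above concrete (same dynamic range as fp32, but low precision that makes accumulation lossy), here is a small illustrative snippet; it is an aside, not part of this commit:

```python
import torch

# Dynamic range: bf16 keeps fp32's 8 exponent bits, while fp16 has only 5.
print(torch.finfo(torch.float32).max)   # ~3.40e38
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38 -> same range, no loss scaling needed
print(torch.finfo(torch.float16).max)   # 65504.0  -> overflows easily

# Precision: bf16 has only 7 mantissa bits, so accumulating many small,
# gradient-sized values is lossy -- the bf16 sum stalls once the increment
# falls below half a ulp, while the fp32 sum stays accurate.
acc_bf16 = torch.zeros(1, dtype=torch.bfloat16)
acc_fp32 = torch.zeros(1, dtype=torch.float32)
for _ in range(1000):
    acc_bf16 += 0.001
    acc_fp32 += 0.001
print(acc_fp32.item())  # ~1.0
print(acc_bf16.item())  # noticeably short of 1.0 (roughly 0.5 in practice)
```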
@@ -1411,15 +1451,14 @@ When a model is saved under ZeRO-2, you end up having the normal `pytorch_model.
 they are only the fp16 version of the weights.

 Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs,
-therefore `"stage3_gather_fp16_weights_on_model_save": true` is required to get the `Trainer` to save the fp16
-version of the weights. If this setting is `False` ``pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict`` it
-won't be possible to load it back.
+therefore `"stage3_gather_16bit_weights_on_model_save": true` is required to get the `Trainer` to save the fp16
+version of the weights. If this setting is `False`, `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict` it wouldn't be possible to load it back.

 ```json
 {
     "zero_optimization": {
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     }
 }
 ```
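
If the 16-bit gather is disabled, the saved checkpoint can still be consolidated after training. A minimal sketch of the in-Python route (assuming a finished ZeRO run whose DeepSpeed checkpoint sits under `output_dir`, and a deepspeed version that ships the `zero_to_fp32` utilities):

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Consolidate the partitioned ZeRO shards into a single fp32 state_dict.
# "output_dir" is assumed to contain the checkpoint written by save_checkpoint().
state_dict = get_fp32_state_dict_from_zero_checkpoint("output_dir")

# The consolidated weights can then be loaded into a freshly constructed model:
# model.load_state_dict(state_dict)
```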
...
@@ -45,7 +45,7 @@
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     },
     "gradient_accumulation_steps": "auto",
...
@@ -98,7 +98,7 @@ _deps = [
     "cookiecutter==1.7.2",
     "dataclasses",
     "datasets",
-    "deepspeed>=0.5.9",
+    "deepspeed>=0.6.0",
     "fairscale>0.3",
     "faiss-cpu",
     "fastapi",
...
@@ -73,7 +73,7 @@ class HfDeepSpeedConfig:
         # zero stage - this is done as early as possible, before model is created, to allow
         # ``is_deepspeed_zero3_enabled`` query and getting to the early deepspeed config object
-        # during ``zero.Init()`` which needs whether fp16 is enabled, dtype, etc.
+        # during ``zero.Init()`` which needs to know the dtype, and some other hparams.
         self._stage = self.get_value("zero_optimization.stage", -1)

         # offload
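
For orientation, the `zero.Init()` mentioned in the comment constructs the model with parameters already partitioned, which is why the dtype must be known that early. A rough sketch of the call shape; the `dtype` keyword here is an assumption about the deepspeed API of this era, so check your version's signature:

```python
import deepspeed
import torch
import torch.nn as nn

# Assumed call shape: zero.Init partitions parameters as they are created,
# before any fp16/bf16 engine wrapping, so it needs the target dtype up front.
with deepspeed.zero.Init(dtype=torch.bfloat16):  # dtype kwarg: assumption
    model = nn.Linear(1024, 1024)  # parameters come out already sharded across ranks
```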
@@ -169,10 +169,12 @@ class HfTrainerDeepSpeedConfig(HfDeepSpeedConfig):
     def __init__(self, config_file_or_dict):
         super().__init__(config_file_or_dict)
-        self._dtype = torch.float16
+        self._dtype = None
         self.mismatches = []

     def dtype(self):
+        if self._dtype is None:
+            raise ValueError("trainer_config_process() wasn't called yet to tell dtype")
         return self._dtype

     def fill_match(self, ds_key_long, hf_val, hf_key=None, must_match=True):
@@ -228,26 +230,33 @@ class HfTrainerDeepSpeedConfig(HfDeepSpeedConfig):
         # total_num_steps - will get set in trainer_config_finalize

         # fp16
-        if args.fp16:
+        if args.fp16 or args.fp16_full_eval:
             fp16_backend = "apex" if args.fp16_backend == "apex" else "amp"
         else:
             fp16_backend = None

         # amp: similar to the pytorch native amp - it has a bunch of optional params but we won't set
         # any here unless the user did the work
-        self.fill_match("fp16.enabled", fp16_backend == "amp", "fp16+fp16_backend(amp)")
+        self.fill_match(
+            "fp16.enabled",
+            ((args.fp16 or args.fp16_full_eval) and fp16_backend == "amp"),
+            "fp16|fp16_full_eval+fp16_backend(amp)",
+        )

         # apex: delegates amp work to apex (which needs to be available), but it cannot be used with any
         # ZeRO features
         self.fill_match("amp.enabled", fp16_backend == "apex", "fp16+fp16_backend(apex)")
         self.fill_match("amp.opt_level", args.fp16_opt_level, "fp16_opt_level")

-        # only if we have an explicit fp16.enabled = False then it's fp32, if it's True or this
-        # whole config section is missing then the fallback is fp16
-        if self.is_false("fp16.enabled"):
+        self.fill_match("bf16.enabled", (args.bf16 or args.bf16_full_eval), "bf16|bf16_full_eval")
+
+        # deepspeed's default mode is fp16 unless there is a config that says differently
+        if self.is_true("bf16.enabled"):
+            self._dtype = torch.bfloat16
+        elif self.is_false("fp16.enabled"):
             self._dtype = torch.float32
-        # later there will be other dtypes besides just fp16 and fp32
-        # also not quite sure what dtype should be under apex, defaulting to fp16 for now
+        else:
+            self._dtype = torch.float16

     def trainer_config_finalize(self, args, model, num_training_steps):
         """
...
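Read on its own, the dtype selection in the hunk above reduces to a three-way precedence rule. A standalone sketch (the function name is illustrative, not the actual `HfTrainerDeepSpeedConfig` API):

```python
import torch

def select_dtype(bf16_enabled, fp16_enabled):
    # Mirrors the precedence above: an explicit bf16.enabled=True wins,
    # an explicit fp16.enabled=False means full fp32, and everything else
    # falls back to fp16 -- DeepSpeed's default mode.
    if bf16_enabled is True:
        return torch.bfloat16
    elif fp16_enabled is False:
        return torch.float32
    else:
        return torch.float16

assert select_dtype(True, None) is torch.bfloat16
assert select_dtype(None, False) is torch.float32
assert select_dtype(None, None) is torch.float16  # the default
```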
@@ -8,7 +8,7 @@ deps = {
     "cookiecutter": "cookiecutter==1.7.2",
     "dataclasses": "dataclasses",
     "datasets": "datasets",
-    "deepspeed": "deepspeed>=0.5.9",
+    "deepspeed": "deepspeed>=0.6.0",
     "fairscale": "fairscale>0.3",
     "faiss-cpu": "faiss-cpu",
     "fastapi": "fastapi",
...
@@ -1687,7 +1687,7 @@ class Trainer:
             self.save_model(output_dir, _internal_call=True)
             if self.deepspeed:
                 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed
-                # config `stage3_gather_fp16_weights_on_model_save` is True
+                # config `stage3_gather_16bit_weights_on_model_save` is True
                 self.deepspeed.save_checkpoint(output_dir)

         # Save optimizer and scheduler
@@ -2101,12 +2101,12 @@ class Trainer:
                     # logger.info(f"deepspeed zero3: removing {file}, see zero_to_fp32.py to recover weights")
                     os.remove(file)

-            # now save the real model if stage3_gather_fp16_weights_on_model_save=True
+            # now save the real model if stage3_gather_16bit_weights_on_model_save=True
             # if false it will not be saved.
             # This must be called on all ranks
-            if not self.deepspeed.save_fp16_model(output_dir, WEIGHTS_NAME):
+            if not self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME):
                 logger.warning(
-                    "deepspeed.save_fp16_model didn't save the model, since stage3_gather_fp16_weights_on_model_save=false. "
+                    "deepspeed.save_16bit_model didn't save the model, since stage3_gather_16bit_weights_on_model_save=false. "
                     "Saving the full checkpoint instead, use zero_to_fp32.py to recover weights"
                 )
                 self.deepspeed.save_checkpoint(output_dir)
...
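When the warning above fires, only the full ZeRO checkpoint lands on disk, and `zero_to_fp32.py` (or its library equivalent) is the recovery path. A hedged sketch using the helper that the script wraps (assuming it exists in your deepspeed version):

```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

# Hypothetical recovery for the fallback branch above: "model" must be the
# same architecture that was trained; "output_dir" holds the full checkpoint
# written by save_checkpoint().
model = build_model()  # placeholder: construct your model class here
model = load_state_dict_from_zero_checkpoint(model, "output_dir")
# The weights come back consolidated in fp32.
```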
@@ -8,6 +8,10 @@
         "min_loss_scale": 1
     },

+    "bf16": {
+        "enabled": "auto"
+    },
+
     "optimizer": {
         "type": "AdamW",
         "params": {
...
@@ -8,6 +8,10 @@
         "min_loss_scale": 1
     },

+    "bf16": {
+        "enabled": "auto"
+    },
+
     "optimizer": {
         "type": "AdamW",
         "params": {
@@ -45,7 +49,7 @@
         "stage3_param_persistence_threshold": "auto",
         "stage3_max_live_parameters": 1e9,
         "stage3_max_reuse_distance": 1e9,
-        "stage3_gather_fp16_weights_on_model_save": true
+        "stage3_gather_16bit_weights_on_model_save": true
     },
     "gradient_accumulation_steps": "auto",
...
This diff is collapsed.
@@ -205,8 +205,19 @@ task_cmds = make_task_cmds()

 ZERO2 = "zero2"
 ZERO3 = "zero3"

 stages = [ZERO2, ZERO3]

+# future preparation:
+# for now test just fp16, as these tests are quite slow
+# FP16 = "fp16"
+# BF16 = "bf16"
+#
+# dtypes = [FP16]
+# so just hardcoding --fp16 for now
+# if is_torch_bf16_available():
+#     dtypes += [BF16]
+

 def parameterized_custom_name_func(func, param_num, param):
     # customize the test name generator function as we want both params to appear in the sub-test
...
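The `is_torch_bf16_available()` check anticipated by the commented-out block above boils down to a capability probe; a minimal sketch (assuming PyTorch >= 1.10, where `torch.cuda.is_bf16_supported()` exists -- the real transformers helper checks a bit more):

```python
import torch

def is_torch_bf16_available():
    # bf16 needs an Ampere-or-newer GPU and CUDA >= 11, which
    # torch.cuda.is_bf16_supported() probes on the current device.
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()

# dtypes = ["fp16"] + (["bf16"] if is_torch_bf16_available() else [])
```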