"examples/vscode:/vscode.git/clone" did not exist on "49bee0aea44ef29c08d48f818f356275ef223da8"
Unverified Commit 4d461067 authored by Stas Bekman's avatar Stas Bekman Committed by GitHub
Browse files

[Trainer] tf32 arg doc (#16674)



* [Trainer] tf32 arg doc

* Update src/transformers/training_args.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
parent f4d4f0a1
@@ -247,8 +247,10 @@ class TrainingArguments:
             Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm
             metric values.
         tf32 (`bool`, *optional*):
-            Whether to enable tf32 mode, available in Ampere and newer GPU architectures. This is an experimental API
-            and it may change.
+            Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. The default value depends
+            on PyTorch's version default of `torch.backends.cuda.matmul.allow_tf32`. For more details please refer to
+            the [TF32](https://huggingface.co/docs/transformers/performance#tf32) documentation. This is an
+            experimental API and it may change.
         local_rank (`int`, *optional*, defaults to -1):
             Rank of the process during distributed training.
         xpu_backend (`str`, *optional*):
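For context, a minimal sketch of how the documented flag might be used. The `tf32=True` argument and `torch.backends.cuda.matmul.allow_tf32` are named in the docstring above; the explicit low-level assignment and the `output_dir` value are included only for illustration, and the snippet assumes a recent PyTorch build and an Ampere or newer GPU.

# Sketch only: requesting TF32 matmuls through the Trainer API.
import torch
from transformers import TrainingArguments

# `tf32=True` asks the Trainer to allow TF32 math; if left as None, the
# PyTorch version default of `torch.backends.cuda.matmul.allow_tf32` applies.
args = TrainingArguments(output_dir="tmp_trainer", tf32=True)

# The equivalent low-level PyTorch switch the docstring refers to:
torch.backends.cuda.matmul.allow_tf32 = True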