Unverified commit ec9b18f6 authored by jeffhataws, committed by GitHub

Fix --bf16 option support for Neuron after PR #22300 (#22307)

This PR fixes the `RuntimeError: No CUDA GPUs are available` error raised
when running with the `--bf16` option on Neuron.

Related PRs:
https://github.com/huggingface/transformers/pull/20684
https://github.com/huggingface/transformers/pull/22300
parent aef488c5
```diff
@@ -588,7 +588,12 @@ class Trainer:
         if args.fp16 or args.bf16:
             if args.half_precision_backend == "auto":
-                if args.device == torch.device("cpu"):
+                if is_torch_neuroncore_available():
+                    if args.fp16:
+                        raise ValueError("Tried to use `fp16` but this option is not yet supported on Neuron.")
+                    else:
+                        args.half_precision_backend = "cpu_amp"
+                elif args.device == torch.device("cpu"):
                     if args.fp16:
                         raise ValueError("Tried to use `fp16` but it is not supported on cpu")
                     elif _is_native_cpu_amp_available:
```
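The change can be summarized as: when a NeuronCore is detected, `bf16` is routed to the `cpu_amp` backend instead of falling through to CUDA AMP, which previously triggered the "No CUDA GPUs are available" error. Below is a minimal standalone sketch of that selection logic, not the actual `Trainer` code; the function name `select_half_precision_backend` and the boolean parameters standing in for `is_torch_neuroncore_available()` and `_is_native_cpu_amp_available` are hypothetical simplifications for illustration.

```python
def select_half_precision_backend(fp16, bf16, device, neuron_available,
                                  cpu_amp_available=True):
    """Sketch of the Trainer's `half_precision_backend == "auto"` branch.

    `neuron_available` stands in for `is_torch_neuroncore_available()`;
    `cpu_amp_available` stands in for `_is_native_cpu_amp_available`.
    """
    if not (fp16 or bf16):
        return None  # no mixed precision requested
    if neuron_available:
        # New branch added by this PR: Neuron supports bf16 via CPU AMP,
        # but fp16 is not yet supported there.
        if fp16:
            raise ValueError(
                "Tried to use `fp16` but this option is not yet supported on Neuron."
            )
        return "cpu_amp"
    if device == "cpu":
        if fp16:
            raise ValueError("Tried to use `fp16` but it is not supported on cpu")
        if cpu_amp_available:
            return "cpu_amp"
    # Previous default path: CUDA AMP, which raised
    # "RuntimeError: No CUDA GPUs are available" on Neuron before this fix.
    return "cuda_amp"
```

For example, `select_half_precision_backend(fp16=False, bf16=True, device="xla", neuron_available=True)` now resolves to `"cpu_amp"` rather than reaching the CUDA path.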