Unverified commit d35f7296, authored Mar 21, 2023 by Yanming W, committed by GitHub Mar 21, 2023

Restore fp16 support on xla gpu device (#22300)

parent 67c2dbdb
Changes: 1 changed file with 1 addition and 1 deletion (+1 / -1)

src/transformers/trainer.py
@@ -598,7 +598,7 @@ class Trainer:
             logger.info(f"Using {args.half_precision_backend} half precision backend")
 
         self.do_grad_scaling = False
-        if (args.fp16 or args.bf16) and not (args.deepspeed or is_sagemaker_mp_enabled() or is_torch_tpu_available()):
+        if (args.fp16 or args.bf16) and not (args.deepspeed or is_sagemaker_mp_enabled()):
             # deepspeed and SageMaker Model Parallel manage their own half precision
             if args.half_precision_backend == "cuda_amp":
                 self.use_cuda_amp = True
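For context, here is a minimal, self-contained sketch of why this one-line change restores fp16 on an XLA GPU device. It is not the Trainer code itself; the variable names mirror the diff, and the boolean values are illustrative assumptions (torch_xla installed, so the TPU-availability check returns True even though the accelerator is a GPU).

# Sketch under assumptions, not Trainer code: how the guard evaluates
# on an XLA GPU run where --fp16 is requested and torch_xla is installed.
fp16, bf16 = True, False          # user asked for fp16
deepspeed = False                 # no DeepSpeed
sagemaker_mp = False              # no SageMaker Model Parallel
torch_tpu_available = True        # torch_xla present => True even on an XLA GPU

# Old guard: the XLA check blocks AMP setup, so fp16 was silently skipped.
old_guard = (fp16 or bf16) and not (deepspeed or sagemaker_mp or torch_tpu_available)

# New guard: only backends that manage their own half precision are excluded,
# so cuda_amp can be enabled again on XLA GPU devices.
new_guard = (fp16 or bf16) and not (deepspeed or sagemaker_mp)

print(old_guard, new_guard)  # False True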