chenpangpang / transformers — Commits

Commit 59dcea3f (unverified), authored Jul 31, 2023 by Younes Belkada, committed via GitHub on Jul 31, 2023
[`PreTrainedModel`] Wrap `cuda` and `to` method correctly (#25206)
wrap `cuda` and `to` method correctly
Parent: 67b85f24
Showing 1 changed file with 3 additions and 1 deletion (+3 −1).
src/transformers/modeling_utils.py (+3 −1)

@@ -25,7 +25,7 @@ import tempfile
 import warnings
 from contextlib import contextmanager
 from dataclasses import dataclass
-from functools import partial
+from functools import partial, wraps
 from typing import Any, Callable, Dict, List, Optional, Tuple, Union

 import torch
@@ -1912,6 +1912,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
         mem = mem + mem_bufs
         return mem

+    @wraps(torch.nn.Module.cuda)
     def cuda(self, *args, **kwargs):
         # Checks if the model has been loaded in 8-bit
         if getattr(self, "is_quantized", False):
@@ -1922,6 +1923,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
         else:
             return super().cuda(*args, **kwargs)

+    @wraps(torch.nn.Module.to)
     def to(self, *args, **kwargs):
         # Checks if the model has been loaded in 8-bit
         if getattr(self, "is_quantized", False):
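The point of decorating the overrides with `functools.wraps` is to copy the parent `torch.nn.Module` method's metadata (`__name__`, `__doc__`, etc.) onto the wrappers, so that `help(model.to)` and doc tooling still show PyTorch's documentation instead of an undocumented override. A minimal sketch of the effect — `Base` and `Child` are hypothetical stand-ins, not the transformers classes:

```python
from functools import wraps


class Base:
    def to(self, *args, **kwargs):
        """Move and/or cast the parameters and buffers."""
        return self


class Child(Base):
    # Hypothetical stand-in for the PreTrainedModel override:
    # @wraps copies Base.to's __name__ and __doc__ onto the wrapper.
    @wraps(Base.to)
    def to(self, *args, **kwargs):
        # The override can add checks (e.g. refusing casts on quantized
        # models) while keeping the parent's documented interface.
        return super().to(*args, **kwargs)


print(Child.to.__name__)  # "to"
print(Child.to.__doc__)   # "Move and/or cast the parameters and buffers."
```

Without the decorator, `Child.to.__doc__` would be `None`, which is the regression this commit fixes for `cuda` and `to`.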