Commit 0b5024ce in chenpangpang/transformers (unverified)
Authored Sep 20, 2023 by Younes Belkada; committed via GitHub on Sep 20, 2023
Parent: f94c9b3d

[`Trainer`] Refactor trainer + bnb logic (#26248)

* refactor trainer + bnb logic
* remove logger.info
* oops
Showing 1 changed file with 16 additions and 12 deletions.
src/transformers/trainer.py (+16, -12)

@@ -402,19 +402,23 @@ class Trainer:
                     " to `True` to avoid any unexpected behavior such as device placement mismatching."
                 )
 
+        _is_peft_model = is_peft_available() and isinstance(model, PeftModel)
+        _is_quantized_and_base_model = getattr(model, "is_quantized", False) and not getattr(
+            model, "_hf_peft_config_loaded", False
+        )
+
         # At this stage the model is already loaded
-        if getattr(model, "is_quantized", False) and not getattr(model, "_hf_peft_config_loaded", False):
-            if getattr(model, "_is_quantized_training_enabled", False):
-                logger.info(
-                    "The model is quantized. To train this model you need to add additional modules"
-                    " inside the model such as adapters using `peft` library and freeze the model weights. Please"
-                    " check the examples in https://github.com/huggingface/peft for more details."
-                )
-            else:
-                raise ValueError(
-                    "The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit"
-                    " model, please make sure that you have installed `bitsandbytes>=0.37.0`. "
-                )
+        if _is_quantized_and_base_model and not _is_peft_model:
+            raise ValueError(
+                "You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of"
+                " the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft"
+                " for more details"
+            )
+        elif _is_quantized_and_base_model and not getattr(model, "_is_quantized_training_enabled", False):
+            raise ValueError(
+                "The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit"
+                " model, please make sure that you have installed `bitsandbytes>=0.37.0`. "
+            )
 
         # Setup Sharded DDP training
         self.sharded_ddp = None
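For reference, the setup the new guard points users toward is the one named in its error message: attach trainable adapters on top of the quantized model with the `peft` library. Below is a minimal sketch of that path, not part of this commit, assuming `peft` and `bitsandbytes` are installed; the checkpoint name and LoRA hyperparameters are illustrative.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit; 8-bit loading marks the model as quantized,
# which is what the `is_quantized` check in the hunk above picks up.
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # illustrative checkpoint
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # freeze the quantized base weights

# Wrapping the model in a PeftModel is what makes `_is_peft_model` True, so the
# "purely quantized models" ValueError above is not raised.
model = get_peft_model(base, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
model.print_trainable_parameters()

With this refactor, passing the bare 8-bit `base` straight to `Trainer` fails fast with the first `ValueError` above, rather than depending only on the `_is_quantized_training_enabled` flag.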