chenpangpang / transformers · Commits

Unverified commit a71def02, authored Apr 08, 2024 by Younes Belkada, committed by GitHub on Apr 08, 2024
Trainer / Core : Do not change init signature order (#30126)

* Update trainer.py
* fix copies
parent 1897874e
Showing 1 changed file with 4 additions and 4 deletions.
src/transformers/trainer.py (+4, -4) @ a71def02
...
@@ -304,9 +304,6 @@ class Trainer:
             The tokenizer used to preprocess the data. If provided, will be used to automatically pad the inputs to the
             maximum length when batching inputs, and it will be saved along the model to make it easier to rerun an
             interrupted training or reuse the fine-tuned model.
-        image_processor ([`BaseImageProcessor`], *optional*):
-            The image processor used to preprocess the data. If provided, it will be saved along the model to make it easier
-            to rerun an interrupted training or reuse the fine-tuned model.
         model_init (`Callable[[], PreTrainedModel]`, *optional*):
             A function that instantiates the model to be used. If provided, each call to [`~Trainer.train`] will start
             from a new instance of the model as given by this function.
...
@@ -331,6 +328,9 @@ class Trainer:
             by this function will be reflected in the predictions received by `compute_metrics`.
             Note that the labels (second parameter) will be `None` if the dataset does not have them.
+        image_processor ([`BaseImageProcessor`], *optional*):
+            The image processor used to preprocess the data. If provided, it will be saved along the model to make it easier
+            to rerun an interrupted training or reuse the fine-tuned model.
     Important attributes:
...
@@ -361,12 +361,12 @@ class Trainer:
         train_dataset: Optional[Union[Dataset, IterableDataset]] = None,
         eval_dataset: Optional[Union[Dataset, Dict[str, Dataset]]] = None,
         tokenizer: Optional[PreTrainedTokenizerBase] = None,
-        image_processor: Optional["BaseImageProcessor"] = None,
         model_init: Optional[Callable[[], PreTrainedModel]] = None,
         compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,
         callbacks: Optional[List[TrainerCallback]] = None,
         optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
         preprocess_logits_for_metrics: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None,
+        image_processor: Optional["BaseImageProcessor"] = None,
     ):
         if args is None:
             output_dir = "tmp_trainer"
...
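The point of the change above is that the new `image_processor` parameter is appended at the end of `Trainer.__init__` rather than inserted between `tokenizer` and `model_init`, where it had briefly been placed. A minimal sketch (with hypothetical function names, not the real `Trainer` API) of why inserting a parameter mid-signature breaks existing positional callers, while appending it does not:

```python
# Sketch of the backward-compatibility issue this commit fixes.
# Names (init_v1, init_broken, init_fixed) are illustrative only.

def init_v1(model=None, tokenizer=None, model_init=None):
    """Original signature that existing callers were written against."""
    return (model, tokenizer, model_init)

def init_broken(model=None, tokenizer=None, image_processor=None, model_init=None):
    """New parameter inserted mid-signature: positional calls shift."""
    return (model, tokenizer, model_init)

def init_fixed(model=None, tokenizer=None, model_init=None, image_processor=None):
    """New parameter appended at the end: old positional calls keep working."""
    return (model, tokenizer, model_init)

def factory():
    return "fresh-model"

# An existing caller passing arguments positionally against the old signature:
assert init_v1("m", "tok", factory) == ("m", "tok", factory)

# With the parameter inserted mid-signature, `factory` silently lands in
# image_processor and model_init becomes None:
assert init_broken("m", "tok", factory) == ("m", "tok", None)

# With the parameter appended at the end, the old call keeps its meaning:
assert init_fixed("m", "tok", factory) == ("m", "tok", factory)
```

This is why the fix moves `image_processor` after `preprocess_logits_for_metrics`: keyword callers are unaffected either way, but positional callers of the released signature would otherwise get their arguments silently reassigned.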