"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "b09912c8f452ac485933ac0f86937aa01de3c398"
Unverified commit 60d5f8f9 authored by Zach Mueller, committed by GitHub

🚨🚨🚨Deprecate `evaluation_strategy` to `eval_strategy`🚨🚨🚨 (#30190)

* Alias

* Note alias

* Tests and src

* Rest

* Clean

* Change typing?

* Fix tests

* Deprecation versions
parent c86d020e
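
The change keeps the old argument working for now: `evaluation_strategy` becomes a deprecated alias that is forwarded to the new `eval_strategy` field on `TrainingArguments`. Below is a minimal, hypothetical sketch of how such an alias is typically wired up; the class name, defaults, and warning text are illustrative, not the exact code from this PR.

```py
# Hypothetical sketch of a backward-compatible alias; the real TrainingArguments
# in this PR handles many more fields and follows its own deprecation schedule.
import warnings
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainingArgumentsSketch:
    # New, preferred argument name.
    eval_strategy: str = "no"
    # Deprecated spelling, kept temporarily so existing configs keep working.
    evaluation_strategy: Optional[str] = None

    def __post_init__(self):
        if self.evaluation_strategy is not None:
            warnings.warn(
                "`evaluation_strategy` is deprecated; use `eval_strategy` instead.",
                FutureWarning,
            )
            # Forward the old value to the new field.
            self.eval_strategy = self.evaluation_strategy


# Old call sites still work but now emit a FutureWarning:
args = TrainingArgumentsSketch(evaluation_strategy="epoch")
assert args.eval_strategy == "epoch"
```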
@@ -215,7 +215,7 @@ pip install transformers datasets evaluate
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
@@ -317,7 +317,7 @@ pip install -q datasets transformers evaluate
... per_device_train_batch_size=2,
... per_device_eval_batch_size=2,
... save_total_limit=3,
-... evaluation_strategy="steps",
+... eval_strategy="steps",
... save_strategy="steps",
... save_steps=20,
... eval_steps=20,
@@ -185,7 +185,7 @@ tokenized_imdb = imdb.map(preprocess_function, batched=True)
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
@@ -211,7 +211,7 @@ Once you are logged in to your Hugging Face account, you can upload the model and share it with the community
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_billsum_model",
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
@@ -288,7 +288,7 @@ Log in to your Hugging Face account to upload the model and share it with the community
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
@@ -209,7 +209,7 @@ pip install transformers datasets evaluate sacrebleu
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_opus_books_model",
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
@@ -358,7 +358,7 @@ You should probably TRAIN this model on a down-stream task to be able to use it
>>> args = TrainingArguments(
... new_model_name,
... remove_unused_columns=False,
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=batch_size,
@@ -129,12 +129,12 @@ rendered properly in your Markdown viewer.
... return metric.compute(predictions=predictions, references=labels)
```
-To monitor evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments so the evaluation metrics are reported at the end of each epoch:
+To monitor evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments so the evaluation metrics are reported at the end of each epoch:
```py
>>> from transformers import TrainingArguments, Trainer
->>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+>>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```
### Training[[trainer]]
@@ -180,7 +180,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="./results",
-... evaluation_strategy="epoch",
+... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
@@ -146,13 +146,13 @@ all 🤗 Transformers models return logits).
... return metric.compute(predictions=predictions, references=labels)
```
-If you want to track your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter
+If you want to track your evaluation metrics during fine-tuning, specify the `eval_strategy` parameter
in your training arguments so that the model takes the evaluation metric into account at the end of each epoch:
```py
>>> from transformers import TrainingArguments
->>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+>>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```
### Trainer
@@ -288,7 +288,7 @@ The Wav2Vec2 tokenizer is only trained on uppercase characters, so you need to make sure the text matches the tokenizer's vocabulary
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
-... evaluation_strategy="steps",
+... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
@@ -125,12 +125,12 @@ rendered properly in your Markdown viewer.
... return metric.compute(predictions=predictions, references=labels)
```
-If you want to monitor evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metrics at the end of each `epoch`:
+If you want to monitor evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments to report the evaluation metrics at the end of each `epoch`:
```py
>>> from transformers import TrainingArguments, Trainer
->>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+>>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```
### Trainer
@@ -490,7 +490,7 @@ python3 xla_spawn.py --num_cores ${NUM_TPUS} run_mlm.py --output_dir="./runs" \
--do_train \
--do_eval \
--logging_steps="500" \
---evaluation_strategy="epoch" \
+--eval_strategy="epoch" \
--report_to="tensorboard" \
--save_strategy="no"
```
@@ -538,7 +538,7 @@ python3 -m torch.distributed.launch --nproc_per_node ${NUM_GPUS} run_mlm.py \
--do_train \
--do_eval \
--logging_steps="500" \
---evaluation_strategy="steps" \
+--eval_strategy="steps" \
--report_to="tensorboard" \
--save_strategy="no"
```
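
The script hunks here and below change only the flag spelling. These example scripts build their command-line interface with `HfArgumentParser`, which exposes each `TrainingArguments` dataclass field as a flag of the same name, so renaming the field renames the flag. A small illustrative sketch follows; the `ToyArgs` dataclass is hypothetical, not the real argument class.

```py
# Sketch: HfArgumentParser turns dataclass fields into CLI flags of the same
# name, which is why --evaluation_strategy becomes --eval_strategy in scripts.
from dataclasses import dataclass, field

from transformers import HfArgumentParser


@dataclass
class ToyArgs:
    # Illustrative field only; mirrors the renamed TrainingArguments field.
    eval_strategy: str = field(default="no", metadata={"help": "When to run evaluation."})


parser = HfArgumentParser(ToyArgs)
(toy,) = parser.parse_args_into_dataclasses(args=["--eval_strategy", "steps"])
print(toy.eval_strategy)  # "steps"
```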
@@ -18,7 +18,7 @@ python finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train --do_eval --do_predict \
---evaluation_strategy steps \
+--eval_strategy steps \
--predict_with_generate \
--n_val 1000 \
"$@"
@@ -20,7 +20,7 @@ python xla_spawn.py --num_cores $TPU_NUM_CORES \
finetune_trainer.py \
--learning_rate=3e-5 \
--do_train --do_eval \
---evaluation_strategy steps \
+--eval_strategy steps \
--prediction_loss_only \
--n_val 1000 \
"$@"
@@ -271,7 +271,7 @@ def main():
max_source_length=data_args.max_source_length,
prefix=model.config.prefix or "",
)
-if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
+if training_args.do_eval or training_args.eval_strategy != EvaluationStrategy.NO
else None
)
test_dataset = (
@@ -32,7 +32,7 @@ python finetune_trainer.py \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN \
--val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
--do_train --do_eval --do_predict \
---evaluation_strategy steps \
+--eval_strategy steps \
--predict_with_generate --logging_first_step \
--task translation --label_smoothing_factor 0.1 \
"$@"
@@ -33,7 +33,7 @@ python xla_spawn.py --num_cores $TPU_NUM_CORES \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN \
--val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
--do_train --do_eval \
---evaluation_strategy steps \
+--eval_strategy steps \
--prediction_loss_only \
--task translation --label_smoothing_factor 0.1 \
"$@"
@@ -34,6 +34,6 @@ python finetune_trainer.py \
--logging_first_step \
--max_target_length 56 --val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
--do_train --do_eval --do_predict \
---evaluation_strategy steps \
+--eval_strategy steps \
--predict_with_generate --sortish_sampler \
"$@"
@@ -29,7 +29,7 @@ python finetune_trainer.py \
--num_train_epochs 6 \
--save_steps 25000 --eval_steps 25000 --logging_steps 1000 \
--do_train --do_eval --do_predict \
---evaluation_strategy steps \
+--eval_strategy steps \
--predict_with_generate --logging_first_step \
--task translation \
"$@"