"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "021887682224daf29264f98c759a45e88c82e244"
Commit f991daed authored by Stas Bekman, committed via GitHub

defensive programming + expand/correct README (#10295)

parent 9e147d31
For the old `finetune_trainer.py` and related utils, see `examples/legacy/seq2seq`.
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `MBartForConditionalGeneration`
- `FSMTForConditionalGeneration` (translation only)
- `T5ForConditionalGeneration`
`run_seq2seq.py` is a lightweight example of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.

For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets.html#json-files; you will also find examples of these below.
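For instance, such a jsonlines dataset can be loaded directly with the 🤗 Datasets library to check which columns it exposes before launching the script. This is a minimal sketch, not part of `run_seq2seq.py`; the file names are placeholders:

```python
# Minimal sketch: load custom jsonlines files with 🤗 Datasets.
# "train.json" and "val.json" are placeholder file names.
from datasets import load_dataset

raw_datasets = load_dataset(
    "json",
    data_files={"train": "train.json", "validation": "val.json"},
)
print(raw_datasets["train"].column_names)  # e.g. ['text', 'summary']
```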
### Summarization
Here is an example on a summarization task:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--task summarization \
--dataset_name xsum \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
The CNN/DailyMail dataset is another commonly used dataset for the task of summarization. To use it, replace `--dataset_name xsum` with `--dataset_name cnn_dailymail --dataset_config "3.0.0"`.
And here is how you would use it on your own files, after adjusting the values for the arguments
`--train_file`, `--validation_file`, `--text_column` and `--summary_column` to match your setup:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--task summarization \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
The task of summarization supports custom CSV and JSONLINES formats.
#### Custom CSV Files
If it's a CSV file, the training and validation files should have a column for the input texts and a column for the summaries.

If the CSV file has just two columns, as in the following example:
```csv
text,summary
"I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder","I'm sitting in a room where I'm waiting for something to happen"
"I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.","I'm a gardener and I'm a big fan of flowers."
"Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share","It's that time of year again."
```
The first column is assumed to be the text and the second the summary.

If the CSV file has multiple columns, you can specify the names of the columns to use:
```bash
--text_column text_column_name \
--summary_column summary_column_name \
```
For example, if the columns were:
```csv
id,date,text,summary
```
and you wanted to select only `text` and `summary`, then you'd pass these additional arguments:
```bash
--text_column text \
--summary_column summary \
```
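If you are not sure which columns a CSV file actually exposes, you can inspect it with 🤗 Datasets before launching training. A minimal sketch; `my_data.csv` is a placeholder file name:

```python
# Sketch: inspect a custom CSV before choosing --text_column / --summary_column.
# "my_data.csv" is a placeholder file name.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "my_data.csv"})["train"]
print(ds.column_names)   # e.g. ['id', 'date', 'text', 'summary']
print(ds[0]["text"])     # first input document
print(ds[0]["summary"])  # its reference summary
```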
#### Custom JSONLINES Files
The second supported format is jsonlines. Here is an example of a custom jsonlines data file.
```json
{"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"}
{"text": "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.", "summary": "I'm a gardener and I'm a big fan of flowers."}
{"text": "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share", "summary": "It's that time of year again."}
```
The training and validation files should have a column for the input texts and a column for the summaries.

As with the CSV files, by default the first value will be used as the text record and the second as the summary record. You can therefore use any key names for the entries; in this example, `text` and `summary` were used.

And as with the CSV files, you can specify which values to select from the file by explicitly passing the corresponding key names. In our example this again would be:
```bash
--text_column text \
--summary_column summary \
```
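To produce a jsonlines file like the one above from Python, write one JSON object per line. The sketch below uses only the standard library; the records are placeholders:

```python
# Sketch: write a jsonlines file for the summarization task.
# Each line is one JSON object; its keys become the selectable columns.
import json

records = [
    {"text": "First input document ...", "summary": "Its summary."},
    {"text": "Second input document ...", "summary": "Another summary."},
]
with open("train.json", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```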
### Translation
Here is an example of translation fine-tuning with T5:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--task translation_en_to_ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--source_prefix "translate English to Romanian: " \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
And the same with MBart:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path facebook/mbart-large-en-ro \
--do_train \
--do_eval \
--task translation_en_to_ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--source_lang en_XX \
--target_lang ro_RO \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
Note that, depending on the model used, additional language-specific command-line arguments are sometimes required. Specifically:
* MBart models require:
```
--source_lang en_XX \
--target_lang ro_RO \
```
* T5 requires:
```
--source_prefix "translate English to Romanian: "
```
* other models require neither.
Also, if you switch to a different language pair, make sure to adjust the source and target values in all command line arguments.
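Conceptually, the T5 `--source_prefix` is just prepended to every source sentence before tokenization. The following is a simplified sketch of that preprocessing step, not the script's exact code:

```python
# Simplified sketch of how a T5 source prefix is applied during preprocessing.
# `examples` mimics one record batch from a wmt16-style translation dataset.
prefix = "translate English to Romanian: "
examples = {
    "translation": [
        {"en": "Hello, world!", "ro": "Salut, lume!"},
    ]
}
inputs = [prefix + pair["en"] for pair in examples["translation"]]
targets = [pair["ro"] for pair in examples["translation"]]
print(inputs)  # ['translate English to Romanian: Hello, world!']
```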
And here is how you would use the translation fine-tuning on your own files, after adjusting the
values for the arguments `--train_file` and `--validation_file` to match your setup:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--task translation_en_to_ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--source_prefix "translate English to Romanian: " \
--train_file path_to_jsonlines_file \
--validation_file path_to_jsonlines_file \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
The task of translation supports only custom JSONLINES files, with each line being a dictionary with a key `"translation"` and its value another dictionary whose keys are the two languages of the pair. For example:
```json
{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
```
Here the languages are Romanian (`ro`) and English (`en`).
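Such a file is easy to generate from Python as well; a standard-library sketch using the two pairs above:

```python
# Sketch: build a translation jsonlines file with nested
# {"translation": {"en": ..., "ro": ...}} records.
import json

pairs = [
    ("Others have dismissed him as a joke.", "Alții l-au numit o glumă."),
    ("And some are holding out for an implosion.", "Iar alții așteaptă implozia."),
]
with open("train.json", "w", encoding="utf-8") as f:
    for en, ro in pairs:
        f.write(json.dumps({"translation": {"en": en, "ro": ro}}, ensure_ascii=False) + "\n")
```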
If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name wmt14-en-de-pre-processed`, as follows:
```bash
python examples/seq2seq/run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--task translation_en_to_de \
--dataset_name wmt14-en-de-pre-processed \
--source_prefix "translate English to German: " \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500
```
The defensive-programming part of the change is in `run_seq2seq.py`, inside `main()` (excerpt; the enclosing `if`/`else` context is restored around the new checks):
```python
if data_args.task.startswith("summarization"):
    if data_args.text_column is None:
        text_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
    else:
        text_column = data_args.text_column
        if text_column not in column_names:
            raise ValueError(
                f"'--text_column' value '{data_args.text_column}' needs to be one of: {', '.join(column_names)}"
            )
    if data_args.summary_column is None:
        summary_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
    else:
        summary_column = data_args.summary_column
        if summary_column not in column_names:
            raise ValueError(
                f"'--summary_column' value '{data_args.summary_column}' needs to be one of: {', '.join(column_names)}"
            )
else:
    # Get the language codes for input/target.
    lang_search = re.match("translation_([a-z]+)_to_([a-z]+)", data_args.task)
```
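With these checks in place, a mistyped column name (say, `--text_column txt` against the CSV example above) fails immediately with a `ValueError` listing the valid column names, rather than surfacing later as an opaque `KeyError` deep inside preprocessing.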