<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

## Summarization

This directory contains examples for fine-tuning and evaluating transformers on summarization tasks.
Please tag @patil-suraj with any issues/unexpected behaviors, or send a PR!
For deprecated `bertabs` instructions, see [`bertabs/README.md`](https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertabs/README.md).
For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq).

### Supported Architectures

- `BartForConditionalGeneration`
- `FSMTForConditionalGeneration` (translation only)
- `MBartForConditionalGeneration`
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `T5ForConditionalGeneration`
- `MT5ForConditionalGeneration`

`run_summarization.py` is a lightweight example of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.

For custom datasets in `jsonlines` format, please see https://huggingface.co/docs/datasets/loading_datasets.html#json-files; you will also find examples of these below.

## With Trainer

Here is an example on a summarization task:
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Only the T5 models (`t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b`) require the additional argument `--source_prefix "summarize: "`.
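As a rough illustration (not the script's exact internals), the source prefix is simply prepended to each input document before tokenization, since T5 was pre-trained with task prefixes. The variable names below are hypothetical:

```python
# Sketch of what --source_prefix does: prepend the task prefix to every
# input text before it is tokenized. Names here are illustrative only.
prefix = "summarize: "
articles = [
    "The quick brown fox jumped over the lazy dog.",
    "Another news article goes here.",
]
inputs = [prefix + article for article in articles]
print(inputs[0])  # → summarize: The quick brown fox jumped over the lazy dog.
```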

We use the CNN/DailyMail dataset in this example because `t5-small` was trained on it, so one can get good scores even when fine-tuning on a very small sample.

The Extreme Summarization (XSum) dataset is another commonly used dataset for summarization. To use it, replace `--dataset_name cnn_dailymail --dataset_config "3.0.0"` with `--dataset_name xsum`.

And here is how you would use it on your own files, after adjusting the values for the arguments
`--train_file`, `--validation_file`, `--text_column` and `--summary_column` to match your setup:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --overwrite_output_dir \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --predict_with_generate
```

The task of summarization supports custom CSV and JSONLINES formats.

#### Custom CSV Files

If using a CSV file, the training and validation files should have a column for the input texts and a column for the summaries.

If the csv file has just two columns as in the following example:

```csv
text,summary
"I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder","I'm sitting in a room where I'm waiting for something to happen"
"I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.","I'm a gardener and I'm a big fan of flowers."
"Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share","It's that time of year again."
```

The first column is assumed to be for `text` and the second for the summary.
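Because article texts usually contain commas, the CSV fields must be quoted, which Python's standard `csv` module handles automatically. A minimal sketch of producing such a two-column file (the file name and contents are just examples):

```python
import csv

# Write a two-column CSV in the expected format; csv.writer quotes any
# field containing commas or newlines, so free-form article text is safe.
rows = [
    ("text", "summary"),
    ("A long article, with commas, goes here.", "A short summary."),
]
with open("train.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read it back to check that the structure round-trips.
with open("train.csv", newline="") as f:
    parsed = list(csv.reader(f))
print(parsed[0])  # → ['text', 'summary']
```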

If the csv file has multiple columns, you can then specify the names of the columns to use:

```bash
    --text_column text_column_name \
    --summary_column summary_column_name \
```

For example, if the columns were:

```csv
id,date,text,summary
```

and you wanted to select only `text` and `summary`, then you'd pass these additional arguments:

```bash
    --text_column text \
    --summary_column summary \
```

#### Custom JSONLINES Files

The second supported format is jsonlines. Here is an example of a jsonlines custom data file.


```json
{"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"}
{"text": "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.", "summary": "I'm a gardener and I'm a big fan of flowers."}
{"text": "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share", "summary": "It's that time of year again."}
```

As with the CSV files, by default the first value will be used as the text record and the second as the summary record, so you can use any key names for the entries; in this example, `text` and `summary` were used.
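A minimal sketch of producing such a file with Python's standard `json` module, one JSON object per line (the file name and key names below are just examples — arbitrary keys would then be passed via `--text_column` / `--summary_column`):

```python
import json

# Each line is an independent JSON object; "document" and "abstract" are
# arbitrary key names chosen for illustration.
records = [
    {"document": "A long article goes here.", "abstract": "A short summary."},
    {"document": "Another article.", "abstract": "Another summary."},
]
with open("train.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back: one json.loads per line recovers the records.
with open("train.json") as f:
    parsed = [json.loads(line) for line in f]
print(parsed[0]["abstract"])  # → A short summary.
```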

And as with the CSV files, you can specify which values to select from the file by explicitly passing the corresponding key names. In our example this would again be:

```bash
    --text_column text \
    --summary_column summary \
```

## With Accelerate

Based on the script [`run_summarization_no_trainer.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).

Like `run_summarization.py`, this script allows you to fine-tune any of the supported models on a summarization task. The main difference is that this script exposes the bare training loop, so you can quickly experiment and add any customization you would like.

It offers fewer options than the script with `Trainer` (for instance, you can easily change the options for the optimizer or the dataloaders directly in the script), but it still runs in a distributed setup and on TPUs, and supports mixed precision by means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally after installing it:

```bash
pip install accelerate
```

then

```bash
python run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.