"docs/source/en/model_doc/auto.mdx" did not exist on "8406fa6dd538c6e1b5a218b119e8efd771023112"
serialization.mdx 19.2 KB
Newer Older
Sylvain Gugger's avatar
Sylvain Gugger committed
1
2
3
4
5
6
7
8
9
10
11
12
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.
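
To make the operator graph concrete, here is a minimal sketch that inspects an exported
graph with the `onnx` Python package (it assumes you already have a serialized
`model.onnx` file, such as the one produced later in this guide):

```python
>>> import onnx

>>> # Load a serialized ONNX model and collect the standardized operators
>>> # (MatMul, Add, Softmax, ...) that make up its computational graph
>>> onnx_model = onnx.load("model.onnx")
>>> op_types = {node.op_type for node in onnx_model.graph.node}
```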

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

<Tip>

You can also export 🤗 Transformers models with the [`optimum.exporters.onnx` package](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model)
from 🤗 Optimum.

Once exported, a model can be:

- Optimized for inference via techniques such as quantization and graph optimization.
- Run with ONNX Runtime via [`ORTModelForXXX` classes](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort),
which follow the same `AutoModel` API as the one you are used to in 🤗 Transformers.
- Run with [optimized inference pipelines](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines),
which have the same API as the [`pipeline`] function in 🤗 Transformers.

To explore all these features, check out the [🤗 Optimum library](https://github.com/huggingface/optimum).

</Tip>
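
As a sketch of what the Optimum workflow looks like (assuming 🤗 Optimum is installed
with its ONNX Runtime extras; depending on your version, the keyword that triggers the
on-the-fly conversion is `export=True` or the older `from_transformers=True`):

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer, pipeline

>>> # Convert the PyTorch checkpoint to ONNX and run it with ONNX Runtime
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> classifier = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
>>> result = classifier("ONNX Runtime makes inference fast!")
```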

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- Chinese-CLIP
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- EfficientNet
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GPT-Sw3
- GroupViT
- I-BERT
- ImageGPT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MEGA
- MobileBERT
- MobileNetV1
- MobileNetV2
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- PoolFormer
- RemBERT
- ResNet
- RoBERTa
- RoBERTa-PreLayerNorm
- RoFormer
- SegFormer
- SqueezeBERT
- SwiftFormer
- Swin Transformer
- T5
- Table Transformer
- Vision Encoder decoder
- ViT
- Whisper
- X-MOD
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

<Tip>

The recommended way of exporting a model is now to use
[`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli).
Don't worry, it is very similar to `transformers.onnx`!

</Tip>

To export a 🤗 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent> <pt>

```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt> <tf>

```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf> </frameworkcontent>

## Selecting features for different model tasks

<Tip>

The recommended way of exporting a model is now to use `optimum.exporters.onnx`.
You can check the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#selecting-a-task)
to learn how to select a task.

</Tip>
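
For instance, a sketch of an Optimum CLI invocation for a text classification export
looks like the following (the output directory name is hypothetical, and note that
Optimum task names do not always match the `transformers.onnx` feature names;
`text-classification` is the Optimum-side counterpart of `sequence-classification`):

```bash
optimum-cli export onnx --model distilbert-base-uncased-finetuned-sst-2-english \
                        --task text-classification onnx_optimum/
```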

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass`:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output names from the fine-tuned model are `logits`
instead of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint
earlier. This is expected since the fine-tuned model has a sequence classification head.
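
As a quick sanity check, here is a minimal sketch that runs the exported classifier with
ONNX Runtime and turns the `logits` into a predicted label id (the id-to-label mapping
lives in the checkpoint's `config.json`):

```python
>>> from onnxruntime import InferenceSession
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> session = InferenceSession("onnx/model.onnx")
>>> inputs = tokenizer("ONNX is a breeze!", return_tensors="np")
>>> # The classification head emits logits of shape (batch_size, num_labels)
>>> logits = session.run(output_names=["logits"], input_feed=dict(inputs))[0]
>>> predicted_class_id = int(logits.argmax(axis=-1))
```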

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (keys and values in the attention blocks)
that can be used for fast autoregressive decoding.

</Tip>
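
For example, GPT-2 supports the `causal-lm-with-past` feature, so a sketch of exporting
it with cached key/value support looks like this (the resulting graph takes
`past_key_values` as extra inputs):

```bash
python -m transformers.onnx --model=gpt2 --feature=causal-lm-with-past onnx/
```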

<Tip>

For `VisionEncoderDecoder` type models, the encoder and decoder parts are
exported separately as two ONNX files named `encoder_model.onnx` and `decoder_model.onnx` respectively.

</Tip>


## Exporting a model for an unsupported architecture

<Tip>

If you wish to contribute by adding support for a model that cannot currently be exported, you should first check if it is
supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/package_reference/configuration#supported-architectures),
and if it is not, [contribute to 馃 Optimum](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/contribute)
directly.

</Tip>

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export:

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value indicates the axis of
that input. For DistilBERT, we can see that two inputs are required: `input_ids` and
`attention_mask`. Both inputs have the same shape, `(batch_size, sequence_length)`,
which is why we see the same axes used in the configuration.

<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`. This
ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.

</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```
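
You can then check that the declared inputs are exactly what the custom class defined
above returns:

```python
>>> print(onnx_config.inputs)
OrderedDict([('input_ids', {0: 'batch', 1: 'sequence'}), ('attention_mask', {0: 'batch', 1: 'sequence'})])
```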

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the `outputs` property follows the same structure as the inputs; it returns an
`OrderedDict` of named outputs and their shapes. The output structure is linked to the
choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data.

</Tip>
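
When the external data files sit next to `model.onnx`, `onnx.load` picks them up
automatically; loading them explicitly is only needed when they live elsewhere. A
minimal sketch (the directory name is hypothetical):

```python
>>> import onnx
>>> from onnx.external_data_helper import load_external_data_for_model

>>> # Load the graph without its >2GB weight payload...
>>> onnx_model = onnx.load("model.onnx", load_external_data=False)
>>> # ...then attach the weights from the directory that holds them
>>> load_external_data_for_model(onnx_model, "path/to/external_data_dir/")
```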

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
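
The tolerance used above comes from the configuration object itself, so you can inspect
it directly (for the default DistilBERT configuration this should print `1e-05`):

```python
>>> print(onnx_config.atol_for_validation)
1e-05
```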

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py` file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.