<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

<Tip>

Once exported, a model can be optimized for inference via techniques such as
quantization and pruning. If you are interested in optimizing your models to run with
maximum efficiency, check out the [🤗 Optimum
library](https://github.com/huggingface/optimum).

</Tip>

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GroupViT
- I-BERT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MobileBERT
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- ResNet
- RoBERTa
- RoFormer
- SegFormer
- SqueezeBERT
- Swin Transformer
- T5
- ViT
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

To export a 🤗 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.
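
Since the exported graph is just a collection of standardized operators, you can inspect
the intermediate representation directly. Below is a minimal sketch, assuming the `onnx`
Python package is available in your environment:

```python
>>> import onnx

>>> # Every node in the exported graph corresponds to a standardized ONNX operator
>>> onnx_model = onnx.load("onnx/model.onnx")
>>> op_types = {node.op_type for node in onnx_model.graph.node}
```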

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent> <pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt> <tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf> </frameworkcontent>

## Selecting features for different model tasks

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass`:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |
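
If you prefer to resolve this mapping programmatically, the `FeaturesManager` class
(covered next) exposes a helper for it. A minimal sketch, assuming
`get_model_class_for_feature()` resolves a feature name to its auto class:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> # Look up the auto class associated with a given feature
>>> model_class = FeaturesManager.get_model_class_for_feature("sequence-classification")
>>> print(model_class.__name__)
AutoModelForSequenceClassification
```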

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output name from the fine-tuned model is `logits`
instead of the `last_hidden_state` output we saw with the `distilbert-base-uncased`
checkpoint earlier. This is expected since the fine-tuned model has a sequence
classification head.
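
As with the base model earlier, the exported classifier can be run with ONNX Runtime.
Here is a short sketch, assuming the export above was saved to `onnx/model.onnx`:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> session = InferenceSession("onnx/model.onnx")
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> # The session now returns classification logits instead of hidden states
>>> logits = session.run(output_names=["logits"], input_feed=dict(inputs))[0]
>>> print(logits.shape)
(1, 2)
```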

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (keys and values in the attention blocks)
that can be used for fast autoregressive decoding (see the sketch after this tip).

</Tip>
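
For example, here is a minimal sketch of enabling the past key values for GPT-2,
assuming its ready-made `GPT2OnnxConfig` (ONNX configuration objects are covered in
detail in the next section):

```python
>>> from transformers import AutoConfig
>>> from transformers.models.gpt2 import GPT2OnnxConfig

>>> config = AutoConfig.from_pretrained("gpt2")
>>> # `with_past` instantiates the configuration with `use_past=True`
>>> onnx_config_with_past = GPT2OnnxConfig.with_past(config, task="causal-lm")
>>> print(onnx_config_with_past.use_past)
True
```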


## Exporting a model for an unsupported architecture

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export:

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value indicates the axis of
that input. For DistilBERT, we can see that two inputs are required: `input_ids` and
`attention_mask`. Both inputs have the shape `(batch_size, sequence_length)`, which is
why we see the same axes used in the configuration.
<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`.
This ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.
</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```
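
You can check that the `inputs` property returns the mapping we defined above:

```python
>>> print(onnx_config.inputs)
OrderedDict([('input_ids', {0: 'batch', 1: 'sequence'}), ('attention_mask', {0: 'batch', 1: 'sequence'})])
```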

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the `outputs` property follows the same structure as the inputs; it returns
an `OrderedDict` of named outputs and their shapes. The output structure is linked to
the choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```
<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data; a short sketch follows this tip.
</Tip>
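
If you do end up with external data files, they need to remain alongside `model.onnx`.
Here is a minimal sketch of loading them explicitly, assuming the model was exported to
the current directory as above:

```python
>>> import onnx
>>> from onnx.external_data_helper import load_external_data_for_model

>>> # Load the graph structure first, then pull in the externally stored tensors
>>> onnx_model = onnx.load("model.onnx", load_external_data=False)
>>> load_external_data_for_model(onnx_model, ".")
```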

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
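
You can also inspect these dummy inputs yourself. A small sketch, assuming the tokenizer
and `onnx_config` from the previous steps:

```python
>>> from transformers import TensorType

>>> # Dummy inputs mirror the keys declared in the configuration's `inputs` property
>>> dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
>>> print(list(dummy_inputs.keys()))
['input_ids', 'attention_mask']
```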

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py`
file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.