<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

<Tip>

Once exported, a model can be optimized for inference via techniques such as
quantization and pruning. If you are interested in optimizing your models to run with
maximum efficiency, check out the [🤗 Optimum
library](https://github.com/huggingface/optimum).

</Tip>

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.
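
To make this concrete, here is a minimal sketch using the `onnx` Python package that
walks the operators in such a graph (it assumes you already have an exported file, such
as the `onnx/model.onnx` produced later in this guide):

```python
>>> import onnx

>>> # Load a previously exported model (the path here is just for illustration)
>>> onnx_model = onnx.load("onnx/model.onnx")
>>> # Each node in the graph is one standardized ONNX operator (MatMul, Add, ...)
>>> print({node.op_type for node in onnx_model.graph.node})
```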

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GroupViT
- I-BERT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MobileBERT
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- ResNet
- RoBERTa
- RoFormer
- SegFormer
- SqueezeBERT
- Swin Transformer
- T5
- Vision Encoder decoder
- ViT
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

To export a 馃 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent>
<pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt>
<tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf>
</frameworkcontent>

## Selecting features for different model tasks

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass` (the sketch after the table shows how to query
this mapping programmatically):

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |
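
The mapping in this table can also be queried programmatically. As a minimal sketch
using the `FeaturesManager` described next, here's how one might look up the auto class
behind a feature:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> # Resolve the auto class used to load models for a given feature (PyTorch by default)
>>> model_class = FeaturesManager.get_model_class_for_feature("sequence-classification")
>>> print(model_class.__name__)
AutoModelForSequenceClassification
```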

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output name from the fine-tuned model is `logits` instead
of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint earlier.
This is expected since the fine-tuned model has a sequence classification head.

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (keys and values in the attention blocks)
that can be used for fast autoregressive decoding.

</Tip>
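
As a sketch of what this means in practice, you can instantiate a ready-made
configuration with its past-key-values variant enabled and inspect the extra inputs it
declares. Here we assume GPT-2's `GPT2OnnxConfig` and the `use_past` flag it inherits
from `OnnxConfigWithPast`:

```python
>>> from transformers import AutoConfig
>>> from transformers.models.gpt2 import GPT2OnnxConfig

>>> config = AutoConfig.from_pretrained("gpt2")
>>> # use_past=True corresponds to the `causal-lm-with-past` feature
>>> onnx_config_with_past = GPT2OnnxConfig(config, task="causal-lm", use_past=True)
>>> # The inputs now include one past key/value entry per attention layer
>>> print(list(onnx_config_with_past.inputs.keys()))
```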

<Tip>

For `VisionEncoderDecoder` type models, the encoder and decoder parts are
exported separately as two ONNX files named `encoder_model.onnx` and `decoder_model.onnx` respectively.

</Tip>

## Exporting a model for an unsupported architecture

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export
(a minimal import sketch follows the list):

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]
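
All three classes are importable from `transformers.onnx`; as a minimal sketch of the
imports you'd start from when writing your own configuration:

```python
>>> # The three base classes for custom ONNX configurations
>>> from transformers.onnx import OnnxConfig, OnnxConfigWithPast, OnnxSeq2SeqConfigWithPast
```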

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value indicates the dynamic
axes of that input. For DistilBERT, we can see that two inputs are required:
`input_ids` and `attention_mask`. These inputs have the same shape,
`(batch_size, sequence_length)`, which is why we see the same axes used in the
configuration.

<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`.
This ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.

</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the outputs property follows the same structure as the inputs; it returns an
`OrderedDict` of named outputs and their shapes. The output structure is linked to the
choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data.

</Tip>
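
As a sketch of the approach described in that documentation (the directory layout below
is an assumption for illustration), the external tensor data can be pulled in explicitly
after loading the graph:

```python
>>> import onnx
>>> from onnx.external_data_helper import load_external_data_for_model

>>> # Load the graph without its external tensors...
>>> onnx_model = onnx.load("onnx/model.onnx", load_external_data=False)
>>> # ...then attach the tensor data stored alongside the model file
>>> load_external_data_for_model(onnx_model, "onnx/")
```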

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
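
If you want to inspect the dummy inputs that drive this comparison, a minimal sketch
reusing the `onnx_config` and `tokenizer` from the previous steps:

```python
>>> from transformers import TensorType

>>> # The same method that validate_model_outputs() relies on under the hood
>>> dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
>>> print({name: tensor.shape for name, tensor in dummy_inputs.items()})
```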

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py`
file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.