<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

<Tip>

Once exported, a model can be optimized for inference via techniques such as
quantization and pruning. If you are interested in optimizing your models to run with
maximum efficiency, check out the [🤗 Optimum
library](https://github.com/huggingface/optimum).

</Tip>

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).
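
For example, once a model has been exported (as shown later in this guide), you can
peek at the standardized operators that make up its graph with the official `onnx`
package. This is an illustrative sketch that assumes an exported file named
`model.onnx` already exists:

```python
>>> import onnx

>>> # Load the exported graph and collect the set of ONNX operators it uses
>>> onnx_model = onnx.load("model.onnx")
>>> op_types = sorted({node.op_type for node in onnx_model.graph.node})
```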

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GroupViT
- I-BERT
- ImageGPT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MobileBERT
- MobileNetV2
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- ResNet
- RoBERTa
- RoFormer
- SegFormer
- SqueezeBERT
- Swin Transformer
- T5
- Table Transformer
- Vision Encoder decoder
- ViT
- Whisper
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

To export a 🤗 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
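
Recent versions of ONNX Runtime also let you state explicitly which execution provider
to run on. As a sketch (the `CPUExecutionProvider` ships with the default `onnxruntime`
package, while GPU providers require the corresponding builds):

```python
>>> session = InferenceSession("onnx/model.onnx", providers=["CPUExecutionProvider"])
```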

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent> <pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt> <tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf> </frameworkcontent>

## Selecting features for different model tasks

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass`:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |
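
If you prefer to resolve this mapping programmatically, the exporter's
`FeaturesManager` exposes a helper for it. The following is a sketch that assumes the
`get_model_class_for_feature()` method is available in your version of 🤗 Transformers:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> # Look up the auto class associated with a feature (PyTorch framework by default)
>>> model_class = FeaturesManager.get_model_class_for_feature("sequence-classification")
>>> print(model_class.__name__)
AutoModelForSequenceClassification
```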

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output names from the fine-tuned model are `logits`
instead of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint
earlier. This is expected since the fine-tuned model has a sequence classification head.
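
As a quick sanity check (a sketch reusing the export above), you can run the classifier
with ONNX Runtime and confirm that it produces logits of shape `(batch_size,
num_labels)`:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> session = InferenceSession("onnx/model.onnx")
>>> # One input sentence and two SST-2 labels give logits of shape (1, 2)
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> logits = session.run(output_names=["logits"], input_feed=dict(inputs))[0]
>>> print(logits.shape)
(1, 2)
```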

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (key and values in the attention blocks)
that can be used for fast autoregressive decoding.

</Tip>
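
To see the effect, here is a sketch that builds a GPT-2 ONNX configuration with
`use_past=True` (assuming `GPT2OnnxConfig` follows the same pattern as the DistilBERT
configuration shown elsewhere in this guide):

```python
>>> from transformers import AutoConfig
>>> from transformers.models.gpt2 import GPT2OnnxConfig

>>> config = AutoConfig.from_pretrained("gpt2")
>>> onnx_config_with_past = GPT2OnnxConfig(config, task="causal-lm", use_past=True)
>>> # In addition to `input_ids` and `attention_mask`, the inputs now contain
>>> # `past_key_values.*` entries, one key/value pair per attention layer
>>> input_names = list(onnx_config_with_past.inputs.keys())
```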

<Tip>

For `VisionEncoderDecoder` type models, the encoder and decoder parts are
exported separately as two ONNX files named `encoder_model.onnx` and `decoder_model.onnx` respectively.

</Tip>

## Exporting a model for an unsupported architecture

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export:

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value maps the dynamic axes of
that input to symbolic names. For DistilBERT, we can see that two inputs are required:
`input_ids` and `attention_mask`. These inputs both have the shape `(batch_size,
sequence_length)`, which is why we see the same axes used in the configuration.

<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`. This
ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.

</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the `outputs` property follows the same structure as the inputs; it returns an
`OrderedDict` of named outputs and their shapes. The output structure is linked to the
choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data.

</Tip>
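
As a sketch of what handling such a model can look like (assuming a reasonably recent
`onnx` package), you can re-save it with all external tensors consolidated into a single
side file:

```python
>>> import onnx

>>> # External data stored next to the model file is picked up automatically
>>> onnx_model = onnx.load("model.onnx")
>>> onnx.save_model(
...     onnx_model,
...     "model_consolidated.onnx",
...     save_as_external_data=True,
...     all_tensors_to_one_file=True,
...     location="model_consolidated.onnx_data",
... )
```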

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
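
If you want to inspect the agreement yourself, a rough manual check (a sketch reusing
`tokenizer`, `base_model`, and `onnx_path` from the export step) compares the raw
outputs directly:

```python
>>> import numpy as np
>>> import torch
>>> from onnxruntime import InferenceSession

>>> encoded = tokenizer("Checking the ONNX export!", return_tensors="pt")
>>> with torch.no_grad():
...     reference = base_model(**encoded).last_hidden_state.numpy()
>>> session = InferenceSession(str(onnx_path))
>>> onnx_output = session.run(["last_hidden_state"], {k: v.numpy() for k, v in encoded.items()})[0]
>>> max_diff = np.abs(reference - onnx_output).max()  # typically between 1e-6 and 1e-4
```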

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py`
file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.