<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

<Tip>

Once exported, a model can be optimized for inference via techniques such as
quantization and pruning. If you are interested in optimizing your models to run with
maximum efficiency, check out the [🤗 Optimum
library](https://github.com/huggingface/optimum).

</Tip>

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.
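
To make this concrete, here is a minimal sketch of how you could inspect the operators
in such a graph with the `onnx` Python package (it assumes a `model.onnx` file like the
one we export later in this guide):

```python
>>> import onnx

>>> # Load the serialized graph (the intermediate representation)
>>> onnx_model = onnx.load("model.onnx")
>>> # Every node is one of the standardized ONNX operators, e.g. MatMul, Add or Softmax
>>> print({node.op_type for node in onnx_model.graph.node})
```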

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GroupViT
- I-BERT
- ImageGPT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MobileBERT
- MobileNetV1
- MobileNetV2
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- ResNet
- RoBERTa
- RoFormer
- SegFormer
- SqueezeBERT
- Swin Transformer
- T5
- Table Transformer
- Vision Encoder decoder
- ViT
- Whisper
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

To export a 🤗 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent>
<pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt>
<tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf>
</frameworkcontent>

## Selecting features for different model tasks

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass`:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output names from the fine-tuned model are `logits`
instead of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint
earlier. This is expected since the fine-tuned model has a sequence classification head.
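
As with the base model, the exported classifier can be run with ONNX Runtime. The
following is a small sketch (assuming the export command above), where we feed the
`logits` output through `argmax` to obtain predicted class ids:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> session = InferenceSession("onnx/model.onnx")
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> logits = session.run(output_names=["logits"], input_feed=dict(inputs))[0]
>>> # Index of the highest-scoring class for each example in the batch
>>> print(logits.argmax(axis=-1))
```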

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (keys and values in the attention blocks)
that can be used for fast autoregressive decoding.

</Tip>
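
For example, assuming `causal-lm-with-past` is listed among the supported features for
`gpt2` (you can check with the [`~transformers.onnx.FeaturesManager`] snippet above),
such an export can be requested on the command line:

```bash
python -m transformers.onnx --model=gpt2 --feature=causal-lm-with-past onnx/
```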

<Tip>

For `VisionEncoderDecoder` type models, the encoder and decoder parts are
exported separately as two ONNX files named `encoder_model.onnx` and `decoder_model.onnx` respectively.

</Tip>
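
For example, exporting an image captioning checkpoint such as
[nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning)
(used here purely as an illustration, and assuming the `vision2seq-lm` feature is
available for this architecture) should produce both files:

```bash
python -m transformers.onnx --model=nlpconnect/vit-gpt2-image-captioning --feature=vision2seq-lm onnx/
```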

## Exporting a model for an unsupported architecture

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export:

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value indicates the axis of
that input. For DistilBERT, we can see that two inputs are required: `input_ids` and
`attention_mask`. Both inputs have the shape `(batch_size, sequence_length)`, which is
why we see the same axes used in the configuration.

<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`.
This ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.

</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the outputs property follows the same structure as the inputs; it returns an
`OrderedDict` of named outputs and their shapes. The output structure is linked to the
choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well-formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data.

</Tip>
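
As a minimal sketch (assuming the export placed its external tensor files next to
`model.onnx` in the current directory), the graph and its weights can be loaded
explicitly like this:

```python
>>> import onnx
>>> from onnx.external_data_helper import load_external_data_for_model

>>> # Load the graph structure only, then attach the externally stored tensors
>>> onnx_model = onnx.load("model.onnx", load_external_data=False)
>>> load_external_data_for_model(onnx_model, ".")
```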

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
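
As a final smoke test, you can run the exported model with ONNX Runtime, reusing the
`tokenizer` and the `onnx_outputs` names from the export step above (a sketch, assuming
the validation succeeded):

```python
>>> from onnxruntime import InferenceSession

>>> session = InferenceSession("model.onnx")
>>> inputs = tokenizer("Validating the custom export!", return_tensors="np")
>>> # onnx_outputs holds the output names defined in the ONNX configuration
>>> outputs = session.run(onnx_outputs, input_feed=dict(inputs))
```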

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py`
file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.