<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Export to ONNX

If you need to deploy 🤗 Transformers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export 🤗 Transformers
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

<Tip>

Once exported, a model can be optimized for inference via techniques such as
quantization and pruning. If you are interested in optimizing your models to run with
maximum efficiency, check out the [🤗 Optimum
library](https://github.com/huggingface/optimum).

</Tip>

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.
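
For example, you can peek at the operators in an exported graph with the
[`onnx`](https://github.com/onnx/onnx) Python package. This is a minimal sketch,
assuming a `model.onnx` file has already been exported as described later in this guide:

```python
>>> import onnx

>>> # Load an exported model and collect the standardized ONNX operators
>>> # (e.g. MatMul, Add, Softmax) that make up its computational graph
>>> onnx_model = onnx.load("onnx/model.onnx")
>>> print({node.op_type for node in onnx_model.graph.node})
```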

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).
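
As a sketch of this interoperability, an ONNX model exported from PyTorch can be
converted into a TensorFlow graph with the third-party
[onnx-tf](https://github.com/onnx/onnx-tensorflow) package (the exact API below is an
assumption and may differ across `onnx-tf` versions):

```python
>>> import onnx
>>> from onnx_tf.backend import prepare  # third-party onnx-tf package

>>> # Convert an ONNX model (e.g. one exported from PyTorch) to TensorFlow
>>> onnx_model = onnx.load("onnx/model.onnx")
>>> tf_rep = prepare(onnx_model)
>>> tf_rep.export_graph("tf_model")  # writes a TensorFlow SavedModel
```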

馃 Transformers provides a [`transformers.onnx`](main_classes/onnx) package that enables
you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are
designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- BLOOM
- CamemBERT
- Chinese-CLIP
- CLIP
- CodeGen
- Conditional DETR
- ConvBERT
- ConvNeXT
- Data2VecText
- Data2VecVision
- DeBERTa
- DeBERTa-v2
- DeiT
- DETR
- DistilBERT
- ELECTRA
- ERNIE
- FlauBERT
- GPT Neo
- GPT-J
- GroupViT
- I-BERT
- ImageGPT
- LayoutLM
- LayoutLMv3
- LeViT
- Longformer
- LongT5
- M2M100
- Marian
- mBART
- MobileBERT
- MobileNetV1
- MobileNetV2
- MobileViT
- MT5
- OpenAI GPT-2
- OWL-ViT
- Perceiver
- PLBart
- RemBERT
- ResNet
- RoBERTa
- RoFormer
- SegFormer
- SqueezeBERT
- Swin Transformer
- T5
- Table Transformer
- Vision Encoder decoder
- ViT
- Whisper
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
- YOLOS

In the next two sections, we'll show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.

## Exporting a model to ONNX

To export a 馃 Transformers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

You should see the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this
example, it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging
Face Hub or one that's stored locally.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a
look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can
export a pure TensorFlow checkpoint from the [Keras
organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, you'll need to have the model's weights and
tokenizer files stored in a directory. For example, we can load and save a checkpoint as
follows:

<frameworkcontent> <pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt> <tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model`
argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf> </frameworkcontent>

## Selecting features for different model tasks

Each ready-made configuration comes with a set of _features_ that enable you to export
models for different types of tasks. As shown in the table below, each feature is
associated with a different `AutoClass`:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |

For each configuration, you can find the list of supported features via the
[`~transformers.onnx.FeaturesManager`]. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument in the
`transformers.onnx` package. For example, to export a text-classification model we can
pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

This displays the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'logits'})
        - Validating ONNX Model output "logits":
                -[✓] (2, 2) matches (2, 2)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Notice that in this case, the output name from the fine-tuned model is `logits`
instead of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint
earlier. This is expected since the fine-tuned model has a sequence classification head.
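
You can run the exported classifier with ONNX Runtime in the same way as before, this
time requesting `logits`. The NumPy post-processing below is just one way to turn the
logits into a label:

```python
>>> import numpy as np
>>> from onnxruntime import InferenceSession
>>> from transformers import AutoConfig, AutoTokenizer

>>> checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> session = InferenceSession("onnx/model.onnx")
>>> inputs = tokenizer("ONNX Runtime is great!", return_tensors="np")
>>> logits = session.run(output_names=["logits"], input_feed=dict(inputs))[0]
>>> # Map the highest-scoring class index to its label via the model config
>>> config = AutoConfig.from_pretrained(checkpoint)
>>> print(config.id2label[int(np.argmax(logits, axis=-1)[0])])
```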

<Tip>

The features that have a `with-past` suffix (like `causal-lm-with-past`) correspond to
model classes with precomputed hidden states (key and values in the attention blocks)
that can be used for fast autoregressive decoding.

</Tip>
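
To see how a `with-past` feature changes a model's ONNX signature, you can instantiate a
configuration with past key/values enabled. Here's a sketch using GPT-2, whose ONNX
configuration supports this (the exact input names may vary across versions):

```python
>>> from transformers import AutoConfig
>>> from transformers.models.gpt2 import GPT2OnnxConfig

>>> config = AutoConfig.from_pretrained("gpt2")
>>> onnx_config_with_past = GPT2OnnxConfig.with_past(config, task="causal-lm")
>>> # The inputs now include past key/value tensors for each attention block
>>> print(list(onnx_config_with_past.inputs.keys()))
```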

<Tip>

For `VisionEncoderDecoder` type models, the encoder and decoder parts are
exported separately as two ONNX files named `encoder_model.onnx` and `decoder_model.onnx` respectively.

</Tip>
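
For example, after exporting such a model you can create a separate ONNX Runtime session
for each part. A minimal sketch, assuming both files were exported to the `onnx/`
directory:

```python
>>> from onnxruntime import InferenceSession

>>> encoder_session = InferenceSession("onnx/encoder_model.onnx")
>>> decoder_session = InferenceSession("onnx/decoder_model.onnx")
>>> # Inspect the input names each part expects
>>> print([inp.name for inp in encoder_session.get_inputs()])
>>> print([inp.name for inp in decoder_session.get_inputs()])
```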

## Exporting a model for an unsupported architecture

If you wish to export a model whose architecture is not natively supported by the
library, there are three main steps to follow:

1. Implement a custom ONNX configuration.
2. Export the model to ONNX.
3. Validate the outputs of the PyTorch and exported models.

In this section, we'll look at how DistilBERT was implemented to show what's involved
with each step.

### Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide three abstract classes that
you should inherit from, depending on the type of model architecture you wish to export:

* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing
implementation in the `configuration_<model_name>.py` file of a similar architecture.

</Tip>

Since DistilBERT is an encoder-based model, its configuration inherits from
`OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Every configuration object must implement the `inputs` property and return a mapping,
where each key corresponds to an expected input, and each value indicates the axis of
that input. For DistilBERT, we can see that two inputs are required: `input_ids` and
`attention_mask`. Both inputs have the same shape, `(batch_size, sequence_length)`,
which is why the same axes appear for each of them in the configuration.
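
For instance, instantiating the configuration above and printing its `inputs` property
shows the mapping directly:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> print(DistilBertOnnxConfig(config).inputs)
OrderedDict([('input_ids', {0: 'batch', 1: 'sequence'}), ('attention_mask', {0: 'batch', 1: 'sequence'})])
```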

<Tip>

Notice that the `inputs` property for `DistilBertOnnxConfig` returns an `OrderedDict`. This
ensures that the inputs are matched with their relative position within the
`PreTrainedModel.forward()` method when tracing the graph. We recommend using an
`OrderedDict` for the `inputs` and `outputs` properties when implementing custom ONNX
configurations.

</Tip>

Once you have implemented an ONNX configuration, you can instantiate it by providing the
base model's configuration as follows:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Notice that the outputs property follows the same structure as the inputs; it returns an
`OrderedDict` of named outputs and their shapes. The output structure is linked to the
choice of feature that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` feature that corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different feature to the `task` argument when you initialize the ONNX
configuration. For example, if we wished to export DistilBERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

All of the base properties and methods associated with [`~onnx.config.OnnxConfig`] and
the other configuration classes can be overridden if needed. Check out [`BartOnnxConfig`]
for an advanced example.

</Tip>

### Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `transformers.onnx` package.
This function expects the ONNX configuration, along with the base model and tokenizer,
and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of
the keys defined in the `inputs` and `outputs` properties of the configuration. Once the
model is exported, you can test that the model is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created
during the export. This is _expected_ because ONNX uses [Protocol
Buffers](https://developers.google.com/protocol-buffers/) to store the model and these
have a size limit of 2GB. See the [ONNX
documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) for
instructions on how to load models with external data.

</Tip>
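
By default, `onnx.load()` resolves external data from the model's own directory. If you
need explicit control, a sketch of loading the graph and its external weights separately
(assuming the weight files sit next to `model.onnx`) looks like:

```python
>>> import onnx
>>> from onnx.external_data_helper import load_external_data_for_model

>>> # Load the graph without its weights, then pull in the external data files
>>> onnx_model = onnx.load("model.onnx", load_external_data=False)
>>> load_external_data_for_model(onnx_model, ".")
```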

### Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree
within some absolute tolerance. Here we can use the `validate_model_outputs()` function
provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the [`~transformers.onnx.OnnxConfig.generate_dummy_inputs`] method to
generate inputs for the base and exported model, and the absolute tolerance can be
defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4
range, although anything smaller than 1e-3 is likely to be OK.
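
To see what the validation feeds to both models, you can call the same dummy-input
generation yourself. A small sketch, reusing the `onnx_config` and `tokenizer` from the
previous steps:

```python
>>> from transformers import TensorType

>>> # Generate the dummy PyTorch inputs used to compare the two models
>>> dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
>>> print({name: tuple(tensor.shape) for name, tensor in dummy_inputs.items()})  # e.g. {'input_ids': (2, 8), 'attention_mask': (2, 8)}
```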

## Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions
from the community! If you would like to contribute your addition to the library, you
will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py`
file
* Include the model architecture and corresponding features in
  [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT was
contributed](https://github.com/huggingface/transformers/pull/14868/files) to get an
idea of what's involved.