- 22 Dec, 2021 1 commit
Michael Benayoun authored
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)" This reverts commit 0f4e39c5. * is_torch_available test to avoid failing imports * sorting parameterize parameters to solve ERROR gw0 gw1 * tests fix * tests fix * GPT2 with past fix * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially * Removed onnx file * Implemented suggestions * Fixed __init__ to resolve conflict with master * Remove commented import
-
- 08 Dec, 2021 2 commits
Michael Benayoun authored
* Added support for other features for already supported models
* Partial support for causal and seq2seq models
* OnnxSeq2SeqConfigWithPast to support seq2seq models
* Parameterized the ONNX tests
* Restored run_mlm.py
* [WIP] BART update
* BART and MBART
* Added comments
* Another sequence length for the past_key_values
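OnnxSeq2SeqConfigWithPast is the piece that lets an encoder-decoder export also consume and produce past_key_values. A minimal sketch of the difference, assuming BartOnnxConfig (added around this commit) subclasses it and that the class name, import path, and with_past helper match this version of transformers:

```python
# Minimal sketch: build the plain seq2seq ONNX config for BART and its "with past"
# variant, then compare the declared inputs. BartOnnxConfig and its import path are
# assumptions based on this version of transformers.
from transformers import BartConfig
from transformers.models.bart.configuration_bart import BartOnnxConfig

config = BartConfig()

onnx_config = BartOnnxConfig(config, task="seq2seq-lm")
onnx_config_with_past = BartOnnxConfig.with_past(config, task="seq2seq-lm")

print(list(onnx_config.inputs))            # input_ids, attention_mask, decoder_* ...
print(list(onnx_config_with_past.inputs))  # ... plus past_key_values.* entries
```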
- 21 Sep, 2021 1 commit
Nishant Prabhu authored
* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* Cleanup
* Removed the regression/ folder
* Fixed import error
* Removed unnecessary import statements
* Changed max_2d_positions from a class variable to an instance variable of the config class
* Use the super class generate_dummy_inputs method
* Add support for Masked LM, sequence classification and token classification
* Removed unnecessary import and method
* Fixed code styling
* Raise an error if PyTorch is not installed

Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
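For context, a minimal sketch of driving the exporter with the LayoutLM config added here. The LayoutLMOnnxConfig name and import path are assumptions based on this version of transformers, and the checkpoint is only an example; the LayoutLM-specific config is what declares the extra bbox input.

```python
# Minimal sketch of exporting LayoutLM with the new ONNX config. Class name and
# import paths are assumed from this transformers version; checkpoint is an example.
from pathlib import Path

from transformers import AutoTokenizer, LayoutLMModel
from transformers.models.layoutlm.configuration_layoutlm import LayoutLMOnnxConfig
from transformers.onnx import export

checkpoint = "microsoft/layoutlm-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = LayoutLMModel.from_pretrained(checkpoint)

# The LayoutLM config declares the extra `bbox` (2D position) input on top of the
# usual input_ids / attention_mask / token_type_ids.
onnx_config = LayoutLMOnnxConfig(model.config)
print(list(onnx_config.inputs))

export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("layoutlm.onnx"))
```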
-
- 09 Aug, 2021 1 commit
Lysandre Debut authored
* Add MBART to models exportable with ONNX
* unittest mock
* Add tests
* Misc fixes
-
- 06 Aug, 2021 2 commits
Lysandre Debut authored
-
Michael Benayoun authored
T5 with past ONNX export, and more explicit past_key_values input and output names for the ONNX model

Authored-by: Michael Benayoun <michael@huggingface.co>
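A quick way to see what "more explicit past_key_values names" means is to inspect the exported graph. The sketch below uses only the official onnx package; the file name is a placeholder for a T5 model already exported with past, and the exact tensor names depend on the ONNX config.

```python
# Minimal sketch: list the input/output names of an already-exported
# T5-with-past model. "t5-with-past.onnx" is a placeholder path.
import onnx

onnx_model = onnx.load("t5-with-past.onnx")

# With explicit naming, each past key/value tensor shows up individually,
# e.g. past_key_values.0.decoder.key on the input side and
# present.0.decoder.key on the output side (exact names depend on the config).
print([inp.name for inp in onnx_model.graph.input])
print([out.name for out in onnx_model.graph.output])
```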
-
- 05 Aug, 2021 1 commit
Michael Benayoun authored
GPT-Neo ONNX export and task/feature refactoring

Authored-by: Michael Benayoun <michael@huggingface.co>
-
- 29 Jul, 2021 1 commit
Funtowicz Morgan authored
* Raise an issue if the PyTorch version is < 1.8.0
* Attempt to add a test to ensure it correctly raises
* Missing docstring
* Second attempt: patch with a string absolute import
* Let's do the call before checking it was called ...
* Use the correct function ...
* Raise ImportError and AssertionError respectively when torch cannot be found and when the torch version is not sufficient
* Correct path mock patching
* Relax the constraint for torch_onnx_dict_inputs to ge instead of eq
* Style
* Split each version requirement for torch
* Let's compare versions directly
* Import torch_version after checking that PyTorch is installed
* @require_torch
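The guard described above boils down to a version check at export time. A minimal sketch of that pattern, not the library's actual code; the helper name and messages are illustrative:

```python
# Minimal sketch of the guard described in the commit above: refuse to run the ONNX
# export unless PyTorch is installed and recent enough. Helper name and messages
# are illustrative, not the library's actual code.
from packaging import version

MINIMUM_TORCH_VERSION = version.parse("1.8.0")

def ensure_torch_is_usable() -> None:
    try:
        import torch
    except ImportError as e:
        # No torch at all -> ImportError, as the commit describes.
        raise ImportError("PyTorch is required to export a model to ONNX.") from e

    torch_version = version.parse(torch.__version__.split("+")[0])  # drop local tags like +cu111
    if torch_version < MINIMUM_TORCH_VERSION:
        # torch present but too old -> AssertionError, as the commit describes.
        raise AssertionError(
            f"ONNX export requires torch >= {MINIMUM_TORCH_VERSION}, found {torch_version}."
        )

ensure_torch_is_usable()
```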
-
- 16 Jul, 2021 1 commit
Funtowicz Morgan authored
* Set model in eval mode when exporting to ONNX
* Disable T5 for now
* Disable T5 with past too
* Style
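Putting the model in eval mode before tracing is what keeps dropout and similar layers deterministic in the exported graph. A minimal, self-contained sketch with a toy module rather than a real transformer:

```python
# Minimal sketch of why eval mode matters for export: dropout must be inactive when
# the graph is traced. A tiny stand-in module is used instead of a real transformer.
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Dropout(p=0.5))
model.eval()  # what the commit adds: export from eval mode, not training mode

dummy_input = torch.randn(1, 8)
torch.onnx.export(
    model,
    dummy_input,
    "tiny.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=12,
)
```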
-
- 08 Jul, 2021 1 commit
Funtowicz Morgan authored
* Laying down the building stones for more flexible ONNX export capabilities
* Ability to provide a map of config keys to override before exporting
* Makes it possible to export BART with/without past keys
* Supports simple mathematical syntax for OnnxVariable.repeated
* Effectively apply value overrides from the ONNX config to the model
* Supports export with additional features, such as with-past for seq2seq
* Store the output path directly in the args for uniform usage
* Make BART_ONNX_CONFIG_* constants and fix imports
* Support the BERT model
* Use the tokenizer for more flexibility in defining the inputs of a model
* Add TODO as a reminder to provide the batch/sequence_length as CLI args
* Enable optimizations to be done on the model
* Enable GPT2 + past
* Improve model validation with outputs containing nested structures
* Enable RoBERTa
* Enable ALBERT (requires opset >= 12)
* BERT-like models require opset >= 12
* Remove double printing
* Enable XLM-RoBERTa
* Enable DistilBERT
* Disable optimization by default
* Fix missing setattr when applying optimizer_features
* Add a value field to OnnxVariable to define constant inputs (not coming from the tokenizer)
* Add T5 support
* Simplify model type retrieval
* Example exporting the token_classification pipeline for DistilBERT
* Refactoring into the `transformers.onnx` package
* Solve circular dependency & __main__
* Remove unnecessary imports in `__init__`
* Licences
* Use @Narsil's suggestion to forward the model's configuration to the OnnxConfig to avoid interpolation
* ONNX export v2 fixes (#12388): tiny fixes, remove `convert_pytorch` from onnxruntime-less runtimes, correct reference to model
* Style
* Fix "Copied from"
* LongFormer ONNX config
* Removed optimizations
* Remove bad merge relics
* Remove unused constants
* Remove some deleted constants from imports
* Fix unittest to remove usage of the PyTorch model for onnx.utils
* Fix DistilBERT export
* Enable the ONNX export test for supported models
* Style
* Fix lint
* Enable all supported default models
* GPT2 only has one output
* Fix bad property name when overriding config
* Added unittests and docstrings
* Disable with_past tests for now
* Enable output validation for the default export
* Remove graph optimization levels
* Last commit with on-going past commented out
* Style
* Disabled `with_past` for now
* Remove unused imports
* Remove the framework argument
* Remove the TFPreTrainedModel reference
* Add documentation
* Add onnxruntime tests to CircleCI
* Add test
* Rename `convert_pytorch` to `export`
* Use OrderedDict for dummy inputs
* WIP Wav2Vec2
* Revert "WIP Wav2Vec2" (reverts commit f665efb04c92525c3530e589029f0ae7afdf603e)
* Style
* Use OrderedDict for I/O
* Style
* Specify OrderedDict documentation
* Style :)

Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
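Taken together, the pieces introduced here form a small workflow: build an OnnxConfig for the model, call transformers.onnx.export (the renamed convert_pytorch), then validate the ONNX Runtime outputs against the PyTorch reference. A minimal sketch, assuming the transformers 4.x import paths and signatures; the checkpoint is only an example and onnxruntime must be installed for the validation step.

```python
# Minimal sketch of the export-and-validate flow from this refactoring. Import paths
# and signatures are assumed from the transformers 4.x docs; checkpoint is an example.
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.models.distilbert.configuration_distilbert import DistilBertOnnxConfig
from transformers.onnx import export, validate_model_outputs

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

onnx_config = DistilBertOnnxConfig(model.config)
onnx_path = Path("distilbert.onnx")

# `export` replaces the old `convert_pytorch` helper and returns the names of the
# inputs/outputs that were matched on the model.
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path
)

# Compare ONNX Runtime outputs with the PyTorch reference within a tolerance.
validate_model_outputs(
    onnx_config, tokenizer, model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
)
```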
-