    [RFC] Laying down building stone for more flexible ONNX export capabilities (#11786) · 2aa3cd93
    Funtowicz Morgan authored
    
    
    * Laying down building stone for more flexible ONNX export capabilities
    
* Ability to provide a map of config keys to override before exporting.
    
    * Makes it possible to export BART with/without past keys.
    
    * Supports simple mathematical syntax for OnnxVariable.repeated
    
    * Effectively apply value override from onnx config for model
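
    For context, the override surfaces as a `values_override` property on the ONNX config, and the exporter applies each entry to `model.config` via `setattr` before tracing. A minimal sketch, assuming the property name and `self._config` attribute of the merged `transformers.onnx.OnnxConfig` (the subclass name is hypothetical):

    ```python
    from collections import OrderedDict
    from typing import Any, Mapping, Optional

    from transformers.onnx import OnnxConfig


    class DecoderOnnxConfig(OnnxConfig):  # hypothetical subclass, for illustration only
        @property
        def inputs(self) -> Mapping[str, Mapping[int, str]]:
            return OrderedDict([("input_ids", {0: "batch", 1: "sequence"})])

        @property
        def values_override(self) -> Optional[Mapping[str, Any]]:
            # The exporter applies this as setattr(model.config, key, value) before tracing
            if hasattr(self._config, "use_cache"):
                return {"use_cache": False}
            return None
    ```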
    
    * Supports export with additional features such as with-past for seq2seq
    
* Store the output path directly in the args for uniform usage throughout.
    
    * Make BART_ONNX_CONFIG_* constants and fix imports.
    
    * Support BERT model.
    
    * Use tokenizer for more flexibility in defining the inputs of a model.
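
    A usage sketch of the tokenizer-driven dummy inputs (class location and constructor call follow the merged layout; treat the exact spelling as an assumption at this point in the history):

    ```python
    from transformers import AutoConfig, AutoTokenizer, TensorType
    from transformers.models.distilbert import DistilBertOnnxConfig

    config = AutoConfig.from_pretrained("distilbert-base-uncased")
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    onnx_config = DistilBertOnnxConfig(config)

    # The tokenizer builds real, correctly shaped encodings, so input names and
    # dtypes match what the model's forward pass expects.
    dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
    ```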
    
* Add TODO as a reminder to provide the batch/sequence_length as CLI args
    
    * Enable optimizations to be done on the model.
    
    * Enable GPT2 + past
    
    * Improve model validation with outputs containing nested structures
    
    * Enable Roberta
    
    * Enable Albert
    
    * Albert requires opset >= 12
    
* BERT-like models require opset >= 12
    
    * Remove double printing.
    
    * Enable XLM-Roberta
    
    * Enable DistilBERT
    
    * Disable optimization by default
    
    * Fix missing setattr when applying optimizer_features
    
    * Add value field to OnnxVariable to define constant input (not from tokenizers)
    
    * Add T5 support.
    
    * Simplify model type retrieval
    
    * Example exporting token_classification pipeline for DistilBERT.
    
    * Refactoring to package `transformers.onnx`
    
    * Solve circular dependency & __main__
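
    As a usage note, the package layout is what allows invoking the exporter as a module, e.g. `python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/` (flag spelling per the documentation added in this PR; `--opset` and `--atol` are also accepted).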
    
    * Remove unnecessary imports in `__init__`
    
    * Licences
    
    * Use @Narsil's suggestion to forward the model's configuration to the ONNXConfig to avoid interpolation.
    
    * Onnx export v2 fixes (#12388)
    
    * Tiny fixes
    Remove `convert_pytorch` from onnxruntime-less runtimes
    Correct reference to model
    
    * Style
    
    * Fix Copied from
    
    * LongFormer ONNX config.
    
    * Removed optimizations
    
* Remove bad merge relics.
    
    * Remove unused constants.
    
    * Remove some deleted constants from imports.
    
    * Fix unittest to remove usage of PyTorch model for onnx.utils.
    
    * Fix distilbert export
    
* Enable ONNX export tests for supported models.
    
    * Style.
    
    * Fix lint.
    
    * Enable all supported default models.
    
    * GPT2 only has one output
    
    * Fix bad property name when overriding config.
    
    * Added unittests and docstrings.
    
    * Disable with_past tests for now.
    
    * Enable outputs validation for default export.
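
    Validation compares ONNX Runtime outputs against the reference PyTorch model within an absolute tolerance. A sketch, assuming the argument order of the merged `transformers.onnx.validate_model_outputs`; the local variables are those produced around the export call:

    ```python
    from pathlib import Path

    from transformers.onnx import validate_model_outputs

    # `onnx_config`, `tokenizer`, `model` come from the export step;
    # `onnx_outputs` is the list of output names the exporter returned.
    validate_model_outputs(
        onnx_config,
        tokenizer,
        model,
        Path("onnx/model.onnx"),
        onnx_outputs,
        atol=1e-5,  # illustrative tolerance, not a recommended value
    )
    ```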
    
* Remove graph optimization levels.
    
* Last commit with the on-going `with_past` work commented out.
    
    * Style.
    
    * Disabled `with_past` for now
    
    * Remove unused imports.
    
    * Remove framework argument
    
    * Remove TFPreTrainedModel reference
    
    * Add documentation
    
    * Add onnxruntime tests to CircleCI
    
    * Add test
    
    * Rename `convert_pytorch` to `export`
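
    After the rename, the Python-level entry point reads roughly as below (arguments passed positionally per the merged signature; the model choice is illustrative, with opset 12 matching the BERT-like requirement noted above):

    ```python
    from pathlib import Path

    from transformers import AutoModel, AutoTokenizer
    from transformers.models.distilbert import DistilBertOnnxConfig
    from transformers.onnx import export

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModel.from_pretrained("distilbert-base-uncased")
    onnx_config = DistilBertOnnxConfig(model.config)

    # Traces the model with tokenizer-generated dummy inputs, writes the ONNX
    # graph, and returns the matched input and output names.
    onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, 12, Path("onnx/model.onnx"))
    ```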
    
    * Use OrderedDict for dummy inputs
    
    * WIP Wav2Vec2
    
    * Revert "WIP Wav2Vec2"
    
    This reverts commit f665efb04c92525c3530e589029f0ae7afdf603e.
    
    * Style
    
    * Use OrderedDict for I/O
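
    Concretely, each per-model ONNX config declares its I/O as ordered mappings from tensor name to dynamic axes, and `OrderedDict` keeps the declaration order aligned with the order in which tensors are fed to the exporter. A sketch mirroring the merged per-model configs (the class name is hypothetical):

    ```python
    from collections import OrderedDict
    from typing import Mapping

    from transformers.onnx import OnnxConfig


    class EncoderOnnxConfig(OnnxConfig):  # hypothetical name, for illustration only
        @property
        def inputs(self) -> Mapping[str, Mapping[int, str]]:
            # Keys are ONNX tensor names; values map axis index -> symbolic dimension
            return OrderedDict(
                [
                    ("input_ids", {0: "batch", 1: "sequence"}),
                    ("attention_mask", {0: "batch", 1: "sequence"}),
                ]
            )

        @property
        def outputs(self) -> Mapping[str, Mapping[int, str]]:
            return OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
    ```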
    
    * Style.
    
    * Specify OrderedDict documentation.
    
    * Style :)

    Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
    Co-authored-by: Lysandre Debut <lysandre@huggingface.co>