1. 06 May, 2022 1 commit
  2. 04 May, 2022 1 commit
  3. 26 Apr, 2022 1 commit
  4. 25 Apr, 2022 2 commits
  5. 22 Apr, 2022 1 commit
  6. 19 Apr, 2022 1 commit
  7. 12 Apr, 2022 1 commit
  8. 01 Apr, 2022 1 commit
  9. 25 Mar, 2022 1 commit
  10. 23 Mar, 2022 1 commit
    • Reorganize file utils (#16264) · 4975002d
      Sylvain Gugger authored
      * Split file_utils in several submodules
      
      * Fixes
      
      * Add back more objects
      
      * More fixes
      
      * Who exactly decided to import that from there?
      
      * Second suggestion from code review
      
      * Revert wrong move
      
      * Fix imports
      
      * Adapt all imports
      
      * Adapt all imports everywhere
      
      * Revert this import, will fix in a separate commit
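
      A minimal import sketch of what the reorganization above means downstream: utilities move to
      submodules under transformers.utils, while the old transformers.file_utils module is kept as a
      re-export shim ("Add back more objects"). The helper picked here, is_torch_available, is just an
      illustrative example.

```python
# Sketch only: both import paths are expected to keep working after the split.
from transformers.utils import is_torch_available  # new location after the reorg
from transformers import file_utils                # legacy module, kept as a shim

print(is_torch_available())             # True when torch is installed
print(file_utils.is_torch_available())  # old import path still resolves
```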
  11. 14 Mar, 2022 1 commit
  12. 10 Mar, 2022 1 commit
  13. 09 Mar, 2022 1 commit
    • Add ONNX export for ViT (#15658) · 50dd314d
      lewtun authored
      * Add ONNX support for ViT
      
      * Refactor to use generic preprocessor
      
      * Add vision dep to tests
      
      * Extend ONNX slow tests to ViT
      
      * Add dummy image generator
      
      * Use model_type to determine modality
      
      * Add deprecation warnings for tokenizer argument
      
      * Add warning when overwriting the preprocessor
      
      * Add optional args to docstrings
      
      * Add minimum PyTorch version to OnnxConfig
      
      * Refactor OnnxConfig class variables from CONSTANT_NAME to snake_case
      
      * Add reasonable value for default atol
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
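
      The commit above wires ViT into the transformers.onnx exporter and swaps the tokenizer argument
      for a generic preprocessor. A minimal sketch of that export path; the checkpoint name and the
      ViTOnnxConfig import location are assumptions for illustration, not taken from the commit.

```python
# Sketch under assumptions: checkpoint and import path are illustrative.
from pathlib import Path

from transformers import AutoFeatureExtractor, AutoModel
from transformers.models.vit import ViTOnnxConfig
from transformers.onnx import export

checkpoint = "google/vit-base-patch16-224"  # assumed example checkpoint
preprocessor = AutoFeatureExtractor.from_pretrained(checkpoint)  # generic preprocessor, not a tokenizer
model = AutoModel.from_pretrained(checkpoint)

onnx_config = ViTOnnxConfig(model.config)
# export() returns the matched ONNX input and output names
onnx_inputs, onnx_outputs = export(
    preprocessor, model, onnx_config, onnx_config.default_onnx_opset, Path("vit.onnx")
)
```

      The "reasonable value for default atol" from the last bullets is what the exporter's validation
      step uses when comparing PyTorch and ONNX Runtime outputs for the exported model.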
  14. 02 Mar, 2022 1 commit
  15. 23 Feb, 2022 1 commit
  16. 10 Feb, 2022 1 commit
  17. 08 Feb, 2022 1 commit
  18. 07 Feb, 2022 1 commit
  19. 11 Jan, 2022 1 commit
    • Adds IBERT to models exportable with ONNX (#14868) · c4fa908f
      Virus authored
      * Add IBertOnnxConfig and tests
      
      * add all the supported features for IBERT and remove outputs in IbertOnnxConfig
      
      * use OnnxConfig
      
      * fix codestyle
      
      * remove serialization.rst
      
      * codestyle
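
      Once a model type is registered like this, the features it can be exported with are queryable
      through the FeaturesManager used by the transformers.onnx package. A small sketch; the printed
      feature list is whatever the installed version registers, not asserted here.

```python
# Sketch: list the ONNX export features registered for I-BERT.
from transformers.onnx.features import FeaturesManager

ibert_features = FeaturesManager.get_supported_features_for_model_type("ibert")
print(list(ibert_features))  # e.g. "default", "masked-lm", "sequence-classification", ...
```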
  20. 23 Dec, 2021 1 commit
    • Add ONNX support for MarianMT models (#14586) · 6b655cc6
      lewtun authored
      * First commit to add MarianMT to ONNX
      
      * Now MarianModel.forward() automatically generates decoder_input_ids, like BartModel.forward()
      
      * Adjusted MarianOnnxConfig.inputs and outputs to work with seq2seq-lm feature
      
      * Style fix
      
      * Added support for other features for already supported models
      
      * Partial support for causal and seq2seq models
      
      * Partial support for causal and seq2seq models
      
      * Add default task for MarianMT ONNX
      
      * Remove automatic creation of decoder_input_ids
      
      * Extend inputs and outputs for MarianMT ONNX config
      
      * Add MarianMT to ONNX unit tests
      
      * Refactor
      
      * OnnxSeq2SeqConfigWithPast to support seq2seq models
      
      * Parameterized the onnx tests
      
      * Restored run_mlm.py
      
      * Restored run_mlm.py
      
      * [WIP] BART update
      
      * BART and MBART
      
      * Add past_key_values and fix dummy decoder inputs
      
      Using a sequence length of 1 in generate_dummy_outputs() produces large discrepancies, presumably due to some hidden optimisations.
      
      * Refactor MarianOnnxConfig to remove custom past_key_values logic
      
      * Fix quality
      
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Refactor Marian export to account for base changes
      
      * Fix copies
      
      * Implemented suggestions
      
      * Extend support for causal LM
      
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Implemented suggestions
      
      * Fixed __init__ to resolve conflict with master
      
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Implemented suggestions
      
      * Fixed __init__ to resolve conflict with master
      
      * Remove commented import
      
      * Remove ONNX model
      
      * Remove redundant class method
      
      * Tidy up imports
      
      * Fix quality
      
      * Refactor dummy input function
      
      * Add copied from statements to Marian config functions
      
      * Remove false copied from comments
      
      * Fix copy from comment
      Co-authored-by: Massimiliano Bruni <massimiliano.bruni@hcl.com>
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
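
      The OnnxSeq2SeqConfigWithPast base class introduced above is what lets encoder-decoder models
      such as MarianMT export the with-past variants. A minimal sketch of driving that path through
      the library's FeaturesManager; the checkpoint and the feature name are assumptions for
      illustration.

```python
# Sketch under assumptions: checkpoint and feature name are illustrative, not from the commit.
from pathlib import Path

from transformers import AutoTokenizer
from transformers.onnx import export
from transformers.onnx.features import FeaturesManager

checkpoint = "Helsinki-NLP/opus-mt-en-de"  # assumed MarianMT checkpoint
feature = "seq2seq-lm-with-past"           # assumed feature name for the with-past export

model = FeaturesManager.get_model_from_feature(feature, checkpoint)
model_type, config_ctor = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = config_ctor(model.config)    # an OnnxSeq2SeqConfigWithPast subclass for Marian

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("marian.onnx")
)
```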
  21. 22 Dec, 2021 1 commit
    • Onnx enable tasks for supported models (part 2) (#14700) · 13504dcb
      Michael Benayoun authored
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Implemented suggestions
      
      * Fixed __init__ to resolve conflict with master
      
      * Remove commented import
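
      The "GPT2 with past fix" above concerns the with-past export path for decoder-only models. A
      tiny sketch of how such a config is built; the class and feature names follow the
      transformers.onnx API, but treat the exact output names in the comments as assumptions.

```python
# Sketch: build a GPT-2 ONNX config that exports past_key_values alongside the logits.
from transformers import GPT2Config
from transformers.models.gpt2 import GPT2OnnxConfig

config = GPT2Config()
onnx_config = GPT2OnnxConfig.with_past(config, task="causal-lm")

print(onnx_config.use_past)       # True: past key/values become extra ONNX inputs/outputs
print(list(onnx_config.outputs))  # expected to include "present.*" entries next to "logits"
```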
  22. 08 Dec, 2021 2 commits
  23. 21 Sep, 2021 1 commit
    • Layoutlm onnx support (Issue #13300) (#13562) · ddd4d02f
      Nishant Prabhu authored
      * Add support for exporting PyTorch LayoutLM to ONNX
      
      * Added tests for converting LayoutLM to ONNX
      
      * Add support for exporting PyTorch LayoutLM to ONNX
      
      * Added tests for converting LayoutLM to ONNX
      
      * cleanup
      
      * Removed regression/ folder
      
      * Add support for exporting PyTorch LayoutLM to ONNX
      
      * Added tests for converting LayoutLM to ONNX
      
      * cleanup
      
      * Fixed import error
      
      * Remove unnecessary import statements
      
      * Changed max_2d_positions from class variable to instance variable of the config class
      
      * Add support for exporting PyTorch LayoutLM to ONNX
      
      * Added tests for converting LayoutLM to ONNX
      
      * cleanup
      
      * Add support for exporting PyTorch LayoutLM to ONNX
      
      * cleanup
      
      * Fixed import error
      
      * Changed max_2d_positions from class variable to instance variable of the config class
      
      * Use super class generate_dummy_inputs method
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
      
      * Add support for Masked LM, sequence classification and token classification
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
      
      * Removed unnecessary import and method
      
      * Fixed code styling
      
      * Raise error if PyTorch is not installed
      
      * Remove unnecessary import statement
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
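
      The interesting part of the LayoutLM commit is dummy-input generation, since the model needs a
      bbox tensor on top of the usual text inputs (hence the "Raise error if PyTorch is not installed"
      bullet). A small sketch of calling it; the checkpoint name and the exact dummy-input keys are
      assumptions.

```python
# Sketch under assumptions: checkpoint and expected keys are illustrative.
from transformers import AutoTokenizer, LayoutLMConfig, TensorType
from transformers.models.layoutlm import LayoutLMOnnxConfig

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")  # assumed checkpoint
onnx_config = LayoutLMOnnxConfig(LayoutLMConfig(), task="sequence-classification")

# Requires PyTorch: the bbox tensor is built with torch, which is why the config raises otherwise.
dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
print(sorted(dummy_inputs))  # expected to include "bbox" alongside input_ids / attention_mask
```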
  24. 09 Aug, 2021 1 commit
  25. 06 Aug, 2021 2 commits
  26. 05 Aug, 2021 1 commit
  27. 29 Jul, 2021 1 commit
    • ONNX v2 raises an Exception when using PyTorch < 1.8.0 (#12933) · 640421c0
      Funtowicz Morgan authored
      * Raise an issue if the pytorch version is < 1.8.0
      
      * Attempt to add a test to ensure it correctly raises.
      
      * Missing docstring.
      
      * Second attempt, patch with string absolute import.
      
      * Let's do the call before checking it was called ...
      
      * use the correct function ... 🤦
      
      * Raise ImportError and AssertionError respectively when unable to find torch and torch version is not sufficient.
      
      * Correct path mock patching
      
      * relax constraint for torch_onnx_dict_inputs to ge instead of eq.
      
      * Style.
      
      * Split each version requirements for torch.
      
      * Let's compare version directly.
      
      * Import torch_version after checking pytorch is installed.
      
      * @require_torch
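
      A minimal sketch of the kind of guard described above, not the library's actual helper: fail
      with ImportError when torch is missing and with AssertionError when its version is too old, as
      the commit message spells out. The function name and messages here are invented.

```python
# Sketch only: illustrates the ImportError / AssertionError behaviour the commit describes.
from packaging import version

MINIMUM_TORCH_VERSION = version.parse("1.8.0")

def ensure_torch_onnx_support():
    try:
        import torch
    except ImportError:
        raise ImportError("ONNX export requires PyTorch to be installed.")
    torch_version = version.parse(torch.__version__.split("+")[0])  # drop local tags like "+cu111"
    assert torch_version >= MINIMUM_TORCH_VERSION, (
        f"ONNX export of dict inputs needs torch >= {MINIMUM_TORCH_VERSION}, found {torch_version}"
    )
```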
  28. 16 Jul, 2021 1 commit
  29. 08 Jul, 2021 1 commit
    • [RFC] Laying down building stone for more flexible ONNX export capabilities (#11786) · 2aa3cd93
      Funtowicz Morgan authored
      * Laying down building stone for more flexible ONNX export capabilities
      
      * Ability to provide a map of config key to override before exporting.
      
      * Makes it possible to export BART with/without past keys.
      
      * Supports simple mathematical syntax for OnnxVariable.repeated
      
      * Effectively apply value override from onnx config for model
      
      * Supports export with additional features such as with-past for seq2seq
      
      * Store the output path directly in the args for uniform usage across.
      
      * Make BART_ONNX_CONFIG_* constants and fix imports.
      
      * Support BERT model.
      
      * Use tokenizer for more flexibility in defining the inputs of a model.
      
      * Add TODO as reminder to provide the batch/sequence_length as CLI args
      
      * Enable optimizations to be done on the model.
      
      * Enable GPT2 + past
      
      * Improve model validation with outputs containing nested structures
      
      * Enable Roberta
      
      * Enable Albert
      
      * Albert requires opset >= 12
      
      * BERT-like models requires opset >= 12
      
      * Remove double printing.
      
      * Enable XLM-Roberta
      
      * Enable DistilBERT
      
      * Disable optimization by default
      
      * Fix missing setattr when applying optimizer_features
      
      * Add value field to OnnxVariable to define constant input (not from tokenizers)
      
      * Add T5 support.
      
      * Simplify model type retrieval
      
      * Example exporting token_classification pipeline for DistilBERT.
      
      * Refactoring to package `transformers.onnx`
      
      * Solve circular dependency & __main__
      
      * Remove unnecessary imports in `__init__`
      
      * Licences
      
      * Use @Narsil's suggestion to forward the model's configuration to the ONNXConfig to avoid interpolation.
      
      * Onnx export v2 fixes (#12388)
      
      * Tiny fixes
      Remove `convert_pytorch` from onnxruntime-less runtimes
      Correct reference to model
      
      * Style
      
      * Fix Copied from
      
      * LongFormer ONNX config.
      
      * Removed optimizations
      
      * Remove bad merge relics.
      
      * Remove unused constants.
      
      * Remove some deleted constants from imports.
      
      * Fix unittest to remove usage of PyTorch model for onnx.utils.
      
      * Fix distilbert export
      
      * Enable ONNX export test for supported model.
      
      * Style.
      
      * Fix lint.
      
      * Enable all supported default models.
      
      * GPT2 only has one output
      
      * Fix bad property name when overriding config.
      
      * Added unittests and docstrings.
      
      * Disable with_past tests for now.
      
      * Enable outputs validation for default export.
      
      * Remove graph opt lvls.
      
      * Last commit with on-going past commented.
      
      * Style.
      
      * Disabled `with_past` for now
      
      * Remove unused imports.
      
      * Remove framework argument
      
      * Remove TFPreTrainedModel reference
      
      * Add documentation
      
      * Add onnxruntime tests to CircleCI
      
      * Add test
      
      * Rename `convert_pytorch` to `export`
      
      * Use OrderedDict for dummy inputs
      
      * WIP Wav2Vec2
      
      * Revert "WIP Wav2Vec2"
      
      This reverts commit f665efb04c92525c3530e589029f0ae7afdf603e.
      
      * Style
      
      * Use OrderedDict for I/O
      
      * Style.
      
      * Specify OrderedDict documentation.
      
      * Style :)
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
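
      The core abstraction laid down by this RFC is the per-model OnnxConfig, which declares the
      export's inputs and their dynamic axes as an OrderedDict, as the final bullets note. A minimal
      sketch of the pattern; the class name here is hypothetical, not one of the library's configs.

```python
# Sketch: the OnnxConfig pattern this PR introduces. "MyEncoderOnnxConfig" is hypothetical.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyEncoderOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # keys are ONNX input names; values map tensor axes to dynamic-axis labels
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```

      The outputs side is derived from the chosen task by the base class, which is what the later
      commits in this log build on when they add the with-past and seq2seq variants.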