"docs/source/vscode:/vscode.git/clone" did not exist on "4c21da5e347bfc53ee4ec5b71a23721fefe6822c"
  1. 29 Sep, 2021 1 commit
  2. 27 Sep, 2021 2 commits
  3. 22 Sep, 2021 3 commits
  4. 21 Sep, 2021 2 commits
    • beit-flax (#13515) · a2dec768
      Kamal Raj authored
      * beit-flax
      
      * updated FLAX_BEIT_MLM_DOCSTRING
      
      * removed bool_masked_pos from classification
      
      * updated Copyright
      
      * code refactoring: x -> embeddings
      
      * updated test: rm from_pt
      
      * Update docs/source/model_doc/beit.rst
      
      * model code dtype updates and
      other changes according to review
      
      * relative_position_bias
      revert back to pytorch design
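      A minimal usage sketch for the Flax BEiT classes this commit adds. The class names follow
      the existing PyTorch BEiT API and the checkpoint name is an assumption, so treat both as
      illustrative rather than exact:

      ```python
      # Hedged sketch: image classification with the Flax BEiT port (#13515).
      # "microsoft/beit-base-patch16-224" is an assumed checkpoint name.
      import jax.numpy as jnp
      import requests
      from PIL import Image
      from transformers import BeitFeatureExtractor, FlaxBeitForImageClassification

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)

      feature_extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224")
      model = FlaxBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

      # return_tensors="np" yields numpy arrays, which Flax models accept directly.
      inputs = feature_extractor(images=image, return_tensors="np")
      logits = model(**inputs).logits
      predicted_class = jnp.argmax(logits, axis=-1)
      print(model.config.id2label[int(predicted_class[0])])
      ```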
    • Add Speech AutoModels (#13655) · 48fa42e5
      Patrick von Platen authored
      * upload
      
      * correct
      
      * correct
      
      * correct
      
      * finish
      
      * up
      
      * up
      
      * up again
  5. 20 Sep, 2021 4 commits
    • Fix typo distilbert doc (#13643) · ea921365
      flozi00 authored
    • Change https:/ to https:// (#13644) · aeb2dac0
      flozi00 authored
    • Fix mT5 documentation (#13639) · 04976a32
      Ayaka Mikazuki authored
      * Fix MT5 documentation
      
      The abstract is incomplete
      
      * MT5 -> mT5
    • Add FNet (#13045) · d8049331
      Gunjan Chhablani authored
      * Init FNet
      
      * Update config
      
      * Fix config
      
      * Update model classes
      
      * Update tokenizers to use sentencepiece
      
      * Fix errors in model
      
      * Fix defaults in config
      
      * Remove position embedding type completely
      
      * Fix typo and take only real numbers
      
      * Fix type vocab size in configuration
      
      * Add projection layer to embeddings
      
      * Fix position ids bug in embeddings
      
      * Add minor changes
      
      * Add conversion script and remove CausalLM vestiges
      
      * Fix conversion script
      
      * Fix conversion script
      
      * Remove CausalLM Test
      
      * Update checkpoint names to dummy checkpoints
      
      * Add tokenizer mapping
      
      * Fix modeling file and corresponding tests
      
      * Add tokenization test file
      
      * Add PreTraining model test
      
      * Make style and quality
      
      * Make tokenization base tests work
      
      * Update docs
      
      * Add FastTokenizer tests
      
      * Fix fast tokenizer special tokens
      
      * Fix style and quality
      
      * Remove load_tf_weights vestiges
      
      * Add FNet to main README
      
      * Fix configuration example indentation
      
      * Comment tokenization slow test
      
      * Fix style
      
      * Add changes from review
      
      * Fix style
      
      * Remove bos and eos tokens from tokenizers
      
      * Add tokenizer slow test, TPU transforms, NSP
      
      * Add scipy check
      
      * Add scipy availabilty check to test
      
      * Fix tokenizer and use correct inputs
      
      * Remove remaining TODOs
      
      * Fix tests
      
      * Fix tests
      
      * Comment Fourier Test
      
      * Uncomment Fourier Test
      
      * Change to google checkpoint
      
      * Add changes from review
      
      * Fix activation function
      
      * Fix model integration test
      
      * Add more integration tests
      
      * Add comparison steps to MLM integration test
      
      * Fix style
      
      * Add masked tokenization fix
      
      * Improve mask tokenization fix
      
      * Fix index docs
      
      * Add changes from review
      
      * Fix issue
      
      * Fix failing import in test
      
      * some more fixes
      
      * correct fast tokenizer
      
      * finalize
      
      * make style
      
      * Remove additional tokenization logic
      
      * Set do_lower_case to False
      
      * Allow keeping accents
      
      * Fix tokenization test
      
      * Fix FNet Tokenizer Fast
      
      * fix tests
      
      * make style
      
      * Add tips to FNet docs
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
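      A short masked-LM sketch for the FNet classes added here, assuming the "google/fnet-base"
      checkpoint implied by the "Change to google checkpoint" entry above; names are illustrative:

      ```python
      # Hedged sketch: masked language modeling with FNet (#13045).
      import torch
      from transformers import FNetForMaskedLM, FNetTokenizer

      tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
      model = FNetForMaskedLM.from_pretrained("google/fnet-base")

      inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits

      # Take the highest-scoring token at the masked position.
      mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
      predicted_id = logits[0, mask_positions].argmax(dim=-1)
      print(tokenizer.decode(predicted_id))
      ```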
  6. 14 Sep, 2021 3 commits
  7. 13 Sep, 2021 1 commit
  8. 10 Sep, 2021 2 commits
    • Docs for v4.10.1 · 72ec2f3e
      patrickvonplaten authored
    • [Large PR] Entire rework of pipelines. (#13308) · c63fcabf
      Nicolas Patry authored
      * Enabling dataset iteration on pipelines.
      
      Enabling dataset iteration on pipelines.
      
      Unifying parameters under `set_parameters` function.
      
      Small fix.
      
      Last fixes after rebase
      
      Remove print.
      
      Fixing text2text `generate_kwargs`
      
      No more `self.max_length`.
      
      Fixing tf only conversational.
      
      Consistency in start/stop index over TF/PT.
      
      Speeding up drastically on TF (nasty bug where max_length would increase
      a ton.)
      
      Adding test for support for non fast tokenizers.
      
      Fixing GPU usage on zero-shot.
      
      Fix working on Tf.
      
      Update src/transformers/pipelines/base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      Update src/transformers/pipelines/base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      Small cleanup.
      
      Remove all asserts + simple format.
      
      * Fixing audio-classification for large PR.
      
      * Overly explicit null checking.
      
      * Encapsulating GPU/CPU pytorch manipulation directly within `base.py`.
      
      * Removed internal state for parameters of the pipeline.
      
      Instead of implicitly overriding internal state, we moved
      to real named arguments on every `preprocess`, `_forward`,
      and `postprocess` function.
      
      `_sanitize_parameters` is now used to split all kwargs of both
      __init__ and __call__ into the three kinds of named parameters
      (see the sketch after this commit entry).
      
      * Move import warnings.
      
      * Small fixes.
      
      * Quality.
      
      * Another small fix, using the CI to debug faster.
      
      * Last fixes.
      
      * Last fix.
      
      * Small cleanup of tensor moving.
      
      * is not None.
      
      * Adding a bunch of docs + an iteration test.
      
      * Fixing doc style.
      
      * KeyDataset = None guard.
      
      * Removing the Cuda test for pipelines (was testing).
      
      * Even more simple iteration test.
      
      * Correct import.
      
      * Long day.
      
      * Fixes in docs.
      
      * [WIP] migrating object detection.
      
      * Fixed the target_size bug.
      
      * Fixup.
      
      * Bad variable name.
      
      * Fixing `ensure_on_device` respects original ModelOutput.
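      The sketch referenced above: under the reworked API a pipeline keeps no implicit parameter
      state; `_sanitize_parameters` routes every kwarg from both __init__ and __call__ to
      `preprocess`, `_forward`, or `postprocess`. The class name and the `top_k` kwarg below are
      illustrative, and the postprocessing assumes a PyTorch classification model:

      ```python
      # Hedged sketch of the three-way parameter split introduced in #13308.
      from transformers import Pipeline


      class MyClassificationPipeline(Pipeline):
          def _sanitize_parameters(self, top_k=None, **kwargs):
              # Split kwargs into the three kinds of named parameters.
              preprocess_params, forward_params, postprocess_params = {}, {}, {}
              if top_k is not None:
                  postprocess_params["top_k"] = top_k
              return preprocess_params, forward_params, postprocess_params

          def preprocess(self, inputs):
              return self.tokenizer(inputs, return_tensors=self.framework)

          def _forward(self, model_inputs):
              return self.model(**model_inputs)

          def postprocess(self, model_outputs, top_k=1):
              scores = model_outputs.logits.softmax(-1)[0]
              best = scores.topk(top_k)
              return [
                  {"label": self.model.config.id2label[i.item()], "score": s.item()}
                  for s, i in zip(best.values, best.indices)
              ]
      ```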
  9. 08 Sep, 2021 4 commits
  10. 07 Sep, 2021 1 commit
  11. 06 Sep, 2021 1 commit
    • Update model configs - Allow setters for common properties (#13026) · c8be8a9a
      Nils Reimers authored
      * refactor GPT Config to allow dyn. properties
      
      * make attribute_map a class attribute
      
      * remove old code
      
      * update unit test to test config: Add test for common properties setter
      
      * update unit test to test config: Add test for common properties passed as parameters to __init__
      
      * update to black code format
      
      * Allow that setters are not defined for certain config classes
      
      * update config classes to implement attribute_map
      
      * bugfix lxmert config - id2label was not defined when num_labels was set
      
      * update broken configs - add attribute_maps
      
      * update bart config
      
      * update black codestyle
      
      * update documentation on common config attributes
      
      * update GPTJ config to new attribute map
      
      * update docs on common attributes
      
      * gptj config: add max_position_embeddings
      
      * gptj config: format with black
      
      * update speech to text 2 config
      
      * format doc file to max_len 119
      
      * update config template
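      A toy sketch of the attribute_map mechanism described in this commit: a class attribute maps
      common property names (e.g. hidden_size) onto model-specific config attributes (e.g. n_embd),
      so both getters and setters for the common names resolve through the map. The config class
      below is illustrative, not one shipped by the library:

      ```python
      # Hedged sketch of attribute_map on a PretrainedConfig subclass (#13026).
      from transformers import PretrainedConfig


      class ToyGPTConfig(PretrainedConfig):
          # Common property name -> model-specific attribute name.
          attribute_map = {
              "hidden_size": "n_embd",
              "num_hidden_layers": "n_layer",
          }

          def __init__(self, n_embd=768, n_layer=12, **kwargs):
              self.n_embd = n_embd
              self.n_layer = n_layer
              super().__init__(**kwargs)


      config = ToyGPTConfig()
      print(config.hidden_size)   # 768, read through the map
      config.hidden_size = 1024   # setter also resolves through the map
      print(config.n_embd)        # 1024
      ```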
  12. 02 Sep, 2021 3 commits
  13. 01 Sep, 2021 3 commits
  14. 31 Aug, 2021 10 commits