1. 03 May, 2021 1 commit
  2. 30 Apr, 2021 1 commit
• [DeepSpeed] fp32 support (#11499) · 4e7bf94e
      Stas Bekman authored
      * prep for deepspeed==0.3.16
      
      * new version
      
      * too soon
      
      * support and test fp32 mode
      
      * troubleshooting doc start
      
      * workaround no longer needed
      
      * add fp32 doc
      
      * style
      
      * cleanup, add tf32 note
      
      * clarify
      
      * release was made
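The fp32 mode added here amounts to running DeepSpeed with the fp16 section of its config disabled. A minimal sketch, assuming a plain dict config; `training_dtype` is a hypothetical helper for illustration, not a transformers or DeepSpeed API:

```python
# Sketch only: fp32 mode = mixed precision switched off in the ds_config.
ds_config_fp32 = {
    "fp16": {"enabled": False},   # fp32 mode: no mixed precision
    "zero_optimization": {"stage": 2},
    "train_batch_size": 8,
}

def training_dtype(ds_config):
    """Return 'fp16' or 'fp32' depending on the DeepSpeed config dict."""
    if ds_config.get("fp16", {}).get("enabled", False):
        return "fp16"
    return "fp32"
```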
  3. 26 Apr, 2021 2 commits
  4. 23 Apr, 2021 2 commits
• Trainer push to hub (#11328) · bf2e0cf7
      Sylvain Gugger authored
      
      
      * Initial support for upload to hub
      
      * push -> upload
      
      * Fixes + examples
      
      * Fix torchhub test
      
      * Torchhub test I hate you
      
      * push_model_to_hub -> push_to_hub
      
      * Apply mixin to other pretrained models
      
      * Remove ABC inheritance
      
      * Add tests
      
      * Typo
      
      * Run tests
      
      * Install git-lfs
      
      * Change approach
      
      * Add push_to_hub to all
      
      * Staging test suite
      
      * Typo
      
      * Maybe like this?
      
      * More deps
      
      * Cache
      
      * Adapt name
      
      * Quality
      
      * MOAR tests
      
      * Put it in testing_utils
      
      * Docs + torchhub last hope
      
      * Styling
      
      * Wrong method
      
      * Typos
      
      * Update src/transformers/file_utils.py
Co-authored-by: Julien Chaumond <julien@huggingface.co>
      
      * Address review comments
      
      * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
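The "Apply mixin to other pretrained models" / "Remove ABC inheritance" steps describe a single shared `push_to_hub` implementation. A toy sketch of that mixin pattern, with the upload stubbed out; class and method bodies here are illustrative, not the real transformers API surface:

```python
# Sketch: one push_to_hub shared by config/model/tokenizer classes.
class PushToHubMixin:
    def push_to_hub(self, repo_name):
        files = self.save_pretrained()          # each class knows its own files
        return f"uploaded {sorted(files)} to {repo_name}"

class FakeConfig(PushToHubMixin):
    def save_pretrained(self):
        return ["config.json"]

class FakeModel(PushToHubMixin):
    def save_pretrained(self):
        return ["config.json", "pytorch_model.bin"]
```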
• [Flax] Big FlaxBert Refactor (#11364) · 8c9b5fcb
      Patrick von Platen authored
      * improve flax
      
      * refactor
      
      * typos
      
      * Update src/transformers/modeling_flax_utils.py
      
      * Apply suggestions from code review
      
      * Update src/transformers/modeling_flax_utils.py
      
      * fix typo
      
      * improve error tolerance
      
      * typo
      
      * correct nasty saving bug
      
      * fix from pretrained
      
      * correct tree map
      
      * add note
      
      * correct weight tying
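The "correct tree map" item refers to mapping a function over the nested parameter pytree. A pure-Python stand-in for `jax.tree_util.tree_map` (used here only to illustrate the operation, not the actual fix):

```python
def tree_map(fn, tree):
    """Apply fn to every leaf of a nested-dict pytree (pure-Python stand-in
    for jax.tree_util.tree_map)."""
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    return fn(tree)

params = {"encoder": {"kernel": 1.0, "bias": 0.5}, "head": {"kernel": 2.0}}
scaled = tree_map(lambda x: x * 2, params)
```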
  5. 14 Apr, 2021 1 commit
  6. 08 Apr, 2021 1 commit
• [DeepSpeed] ZeRO Stage 3 (#10753) · c6d66484
      Stas Bekman authored
      
      
      * synced gpus
      
      * fix
      
      * fix
      
      * need to use t5-small for quality tests
      
      * notes
      
      * complete merge
      
      * fix a disappearing std stream problem
      
      * start zero3 tests
      
      * wip
      
      * tune params
      
      * sorting out the pre-trained model loading
      
      * reworking generate loop wip
      
      * wip
      
      * style
      
      * fix tests
      
      * split the tests
      
      * refactor tests
      
      * wip
      
      * parameterized
      
      * fix
      
      * workout the resume from non-ds checkpoint pass + test
      
      * cleanup
      
      * remove no longer needed code
      
      * split getter/setter functions
      
      * complete the docs
      
      * suggestions
      
      * gpus and their compute capabilities link
      
      * Apply suggestions from code review
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * style
      
      * remove invalid paramgd
      
      * automatically configure zero3 params that rely on hidden size
      
      * make _get_resized_embeddings zero3-aware
      
      * add test exercising resize_token_embeddings()
      
      * add docstring
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
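The "automatically configure zero3 params that rely on hidden size" step reflects that several ZeRO-3 bucket/threshold values scale with the model's hidden size. A sketch of that derivation, using the formulas the DeepSpeed integration docs suggest; the exact coefficients in transformers may differ:

```python
def zero3_auto_config(hidden_size):
    """Derive hidden_size-dependent ZeRO-3 values from the model config
    (sketch of the auto-configuration idea, not the transformers code)."""
    return {
        "zero_optimization": {
            "stage": 3,
            "reduce_bucket_size": hidden_size * hidden_size,
            "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
            "stage3_param_persistence_threshold": 10 * hidden_size,
        }
    }
```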
  7. 31 Mar, 2021 2 commits
  8. 06 Mar, 2021 1 commit
  9. 04 Mar, 2021 1 commit
  10. 17 Feb, 2021 1 commit
  11. 05 Jan, 2021 1 commit
  12. 16 Dec, 2020 1 commit
  13. 02 Dec, 2020 1 commit
• [PyTorch] Refactor Resize Token Embeddings (#8880) · 443f67e8
      Patrick von Platen authored
      * fix resize tokens
      
      * correct mobile_bert
      
      * move embedding fix into modeling_utils.py
      
      * refactor
      
      * fix lm head resize
      
      * refactor
      
      * break lines to make sylvain happy
      
      * add news tests
      
      * fix typo
      
      * improve test
      
      * skip bart-like for now
      
      * check if base_model = get(...) is necessary
      
      * clean files
      
      * improve test
      
      * fix tests
      
      * revert style templates
      
      * Update templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_{{cookiecutter.lowercase_modelname}}.py
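The core of a token-embedding resize is: keep the rows both sizes share, freshly initialize any extra rows. A minimal pure-Python sketch of that logic (lists of lists stand in for torch tensors; not the actual `resize_token_embeddings` implementation):

```python
import random

def resize_embeddings(weights, new_num_tokens):
    """Copy the overlapping rows, randomly init any new ones."""
    old_num, dim = len(weights), len(weights[0])
    n = min(old_num, new_num_tokens)
    resized = [row[:] for row in weights[:n]]
    for _ in range(new_num_tokens - n):
        resized.append([random.gauss(0.0, 0.02) for _ in range(dim)])
    return resized
```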
  14. 27 Nov, 2020 1 commit
• Extend typing to path-like objects in `PretrainedConfig` and `PreTrainedModel` (#8770) · f9a2a9e3
      Giovanni Compagnoni authored
      * update configuration_utils.py typing to allow pathlike objects when sensible
      
      * update modeling_utils.py typing to allow pathlike objects when sensible
      
      * black
      
      * update tokenization_utils_base.py typing to allow pathlike objects when sensible
      
      * update tokenization_utils_fast.py typing to allow pathlike objects when sensible
      
      * update configuration_auto.py typing to allow pathlike objects when sensible
      
      * update configuration_auto.py docstring to allow pathlike objects when sensible
      
      * update tokenization_auto.py docstring to allow pathlike objects when sensible
      
      * black
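The typing pattern applied across these modules is `Union[str, os.PathLike]` with `os.fspath` to normalize either form. A sketch with a hypothetical helper (the real signatures live in the `*_utils` modules touched above):

```python
import os
from pathlib import Path
from typing import Union

def load_config(pretrained_model_name_or_path: Union[str, os.PathLike]) -> str:
    """Accept a str or any path-like object; os.fspath normalizes both."""
    return os.fspath(pretrained_model_name_or_path)
```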
  15. 23 Nov, 2020 1 commit
• consistent ignore keys + make private (#8737) · e84786aa
      Stas Bekman authored
      * consistent ignore keys + make private
      
      * style
      
      * - authorized_missing_keys    => _keys_to_ignore_on_load_missing
        - authorized_unexpected_keys => _keys_to_ignore_on_load_unexpected
      
      * move public doc of private attributes to private comment
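The renamed attributes hold regex patterns consumed at load time to filter the "missing keys" warning list. A sketch of how such patterns are applied; `filter_missing_keys` is a hypothetical standalone helper, not the transformers loading code:

```python
import re

# Patterns a model class might declare (illustrative values).
_keys_to_ignore_on_load_missing = [r"position_ids$", r"lm_head\.decoder\.weight"]

def filter_missing_keys(missing_keys, patterns=_keys_to_ignore_on_load_missing):
    """Drop keys matching any ignore pattern before warning the user."""
    return [k for k in missing_keys if not any(re.search(p, k) for p in patterns)]
```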
  16. 17 Nov, 2020 1 commit
  17. 16 Nov, 2020 1 commit
  18. 10 Nov, 2020 3 commits
  19. 09 Nov, 2020 1 commit
  20. 02 Nov, 2020 1 commit
  21. 29 Oct, 2020 1 commit
  22. 26 Oct, 2020 1 commit
• Doc styling (#8067) · 08f534d2
      Sylvain Gugger authored
      * Important files
      
      * Styling them all
      
      * Revert "Styling them all"
      
      This reverts commit 7d029395fdae8513b8281cbc2a6c239f8093503e.
      
* Styling them for realsies
      
      * Fix syntax error
      
      * Fix benchmark_utils
      
      * More fixes
      
      * Fix modeling auto and script
      
      * Remove new line
      
      * Fixes
      
      * More fixes
      
      * Fix more files
      
      * Style
      
      * Add FSMT
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * Fixes
      
      * More fixes
      
      * More fixes
      
      * Last fixes
      
      * Make sphinx happy
  23. 23 Oct, 2020 1 commit
  24. 19 Oct, 2020 1 commit
• ProphetNet (#7157) · 2422cda0
      Weizhen authored
      
      
      * add new model prophetnet
      
      prophetnet modified
      
      modify codes as suggested v1
      
      add prophetnet test files
      
      * still bugs, because of changed output formats of encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update src/transformers/modeling_prophetnet.py
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update utils/check_repo.py
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply sams and sylvains comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/configuration_prophetnet.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement lysandres comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
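ProphetNet's distinguishing feature, referenced in "correct n-gram stream attention mask", is future n-gram prediction: at each position the model is trained to predict the next n tokens, not just the next one. A toy illustration of the n-gram target construction only (the real masking and predict streams are far more involved):

```python
def ngram_targets(tokens, ngram=2):
    """For each position, the targets are the next `ngram` tokens,
    with None where the sequence runs out. Toy sketch of the idea."""
    out = []
    for i in range(len(tokens)):
        out.append([tokens[i + k] if i + k < len(tokens) else None
                    for k in range(1, ngram + 1)])
    return out
```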
  25. 18 Oct, 2020 1 commit
• [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
* splitting all the tokenizer files - sorting sentencepiece-based ones
      
      * update tokenizer version up to 0.9.0
      
* remove hard dependency on sentencepiece 🎉
      
* and removed hard dependency on tokenizers 🎉
      
      
      
      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
* update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
* fix slow => fast conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up tests lighten up when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style quality split hebert and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix hebert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/testing_utils.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
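The optional-dependency pattern this PR introduces (availability check + dummy objects that raise a helpful error only on use) can be sketched as follows; `require_sentencepiece` is a hypothetical helper modeled on that pattern, not the exact transformers code:

```python
import importlib.util

def is_sentencepiece_available():
    """True if the sentencepiece package can be imported."""
    return importlib.util.find_spec("sentencepiece") is not None

def require_sentencepiece(obj_name):
    """Raise a helpful error only when a sentencepiece-backed object is used."""
    if not is_sentencepiece_available():
        raise ImportError(
            f"{obj_name} requires the SentencePiece library; "
            "install it with `pip install sentencepiece`."
        )
```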
  26. 13 Oct, 2020 1 commit
  27. 12 Oct, 2020 1 commit
• check for tpu availability in save_pretrained (#7699) · 2f34bcf3
      fteufel authored
Added is_torch_tpu_available() to the condition
for saving a model as an XLA model. The "xla_device"
property of the config can also be True on a non-XLA
device, when loading a checkpoint that was trained
on XLA before.
      
      Resolves #7695
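The fix boils down to requiring both conditions before taking the XLA save path: the config flag and an actual TPU at runtime. A minimal sketch (`is_torch_tpu_available` in transformers probes torch_xla; here the result is passed in as a plain bool):

```python
from types import SimpleNamespace

def should_save_as_xla(config, tpu_available):
    """Save via xm.save only if the config says xla_device AND we are
    really on a TPU now - the flag alone survives reloading on CPU/GPU
    a checkpoint that was trained on XLA."""
    return bool(getattr(config, "xla_device", False)) and tpu_available
```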
  28. 01 Oct, 2020 2 commits
  29. 25 Sep, 2020 1 commit
  30. 23 Sep, 2020 1 commit
  31. 21 Sep, 2020 1 commit
  32. 17 Sep, 2020 1 commit
  33. 14 Sep, 2020 1 commit
  34. 08 Sep, 2020 1 commit