1. 10 Aug, 2023 1 commit
  2. 09 Aug, 2023 3 commits
  3. 08 Aug, 2023 4 commits
  4. 07 Aug, 2023 3 commits
      Add mask2former fp16 support (#25093) · 080a9711
      Pedro Lira authored
      * Add mask2former fp16 support
      
      * Clear consistency/quality issues
      
      * Fix consistency/quality (2)
      
      * Add integration test for mask2former (fp16 case)
      
      * Fix code quality
      
      * Add integration test for maskformer (fp16 case)
      
      * Add integration test for oneformer (fp16 case)
      
      * Remove slow decorator from fp16 tests
      
      * Fix lint
      
      * Remove usage of full inference and value checks for fp16
      
      * Temporarily comment slow for {mask, mask2, one}former
      
      * Add fp16 support to oneformer
      
      * Revert "Temporarily comment slow for {mask, mask2, one}former"
      
      This reverts commit e5371edabd301cf56079def0421a0a87df307cb0.
      
      * Remove dtype conversion noop
      080a9711
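
      A minimal sketch of the fp16 path this commit enables, using the public
      from_pretrained/torch_dtype API; the checkpoint name and image URL are
      illustrative assumptions, not taken from the commit:

        import requests
        import torch
        from PIL import Image
        from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

        # Illustrative checkpoint; any Mask2Former checkpoint should behave the same.
        ckpt = "facebook/mask2former-swin-tiny-coco-instance"
        processor = AutoImageProcessor.from_pretrained(ckpt)
        # torch_dtype=torch.float16 loads the weights in half precision.
        model = Mask2FormerForUniversalSegmentation.from_pretrained(
            ckpt, torch_dtype=torch.float16
        ).to("cuda")

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)
        inputs = processor(images=image, return_tensors="pt")
        # Cast floating-point inputs (pixel_values) to fp16 to match the model.
        inputs = {
            k: v.to("cuda", torch.float16) if v.is_floating_point() else v.to("cuda")
            for k, v in inputs.items()
        }

        with torch.no_grad():
            outputs = model(**inputs)
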
      Migrate Trainer from `Repository` to `upload_folder` (#25095) · baf1daa5
      Sylvain Gugger authored
      
      
      * First draft
      
      * Deal with progress bars
      
      * Update src/transformers/utils/hub.py
Co-authored-by: Lucain <lucainp@gmail.com>
      
      * Address review comments
      
      * Forgot one
      
      * Pin hf_hub
      
      * Add argument for push all and fix tests
      
      * Fix tests
      
      * Address review comments
      
      ---------
Co-authored-by: Lucain <lucainp@gmail.com>
      baf1daa5
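
      For context, a rough sketch of the huggingface_hub call the Trainer now
      relies on instead of a local Repository clone (repo_id and folder_path
      are placeholder values):

        from huggingface_hub import upload_folder

        # Uploads the contents of a local directory as a single commit,
        # without keeping a full git clone around as Repository did.
        upload_folder(
            repo_id="user/my-model",           # placeholder repo
            folder_path="outputs/checkpoint",  # placeholder local dir
            commit_message="Upload checkpoint",
        )
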
      Fix more offload edge cases (#25342) · c177606f
      Yih-Dar authored
      
      
      * fix
      
      * fix
      
      * fix
      
      ---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      c177606f
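
      For reference, offloading in transformers is driven through accelerate's
      device_map machinery; a minimal sketch of the kind of setup these edge
      cases concern (checkpoint name is a placeholder):

        import torch
        from transformers import AutoModelForCausalLM

        # device_map="auto" lets accelerate place layers on GPU, CPU, or disk;
        # offload_folder is where weights that do not fit in memory are spilled.
        model = AutoModelForCausalLM.from_pretrained(
            "gpt2",  # placeholder checkpoint
            device_map="auto",
            offload_folder="offload",
            torch_dtype=torch.float16,
        )
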
  5. 06 Aug, 2023 1 commit
  6. 04 Aug, 2023 4 commits
  7. 03 Aug, 2023 3 commits
  8. 02 Aug, 2023 5 commits
  9. 01 Aug, 2023 1 commit
  10. 31 Jul, 2023 2 commits
  11. 28 Jul, 2023 5 commits
  12. 27 Jul, 2023 3 commits
  13. 26 Jul, 2023 3 commits
  14. 25 Jul, 2023 2 commits
      [`T5`, `MT5`, `UMT5`] Add [T5, MT5, UMT5]ForSequenceClassification (#24726) · 8f36ab3e
      Sebastian Husch Lee authored
* Initial addition of T5ForSequenceClassification
      
      * Adding imports and adding tests
      
      * Formatting
      
      * Running make fix-copies
      
      * Adding mt5forseq
      
      * Formatting
      
      * run make fix-copies
      
      * Adding to docs
      
      * Add model_parallel
      
      * Fix bug
      
      * Fix
      
      * Remove TODO
      
      * Fixing tests for T5ForSequenceClassification
      
      * Undo changes to dependency_versions_table.py
      
      * Change classification head to work with T5Config directly
      
      * Change seq length to let tests pass
      
      * PR comments for formatting
      
      * Formatting
      
      * Initial addition of UMT5ForSequenceClassification
      
      * Adding to inits and formatting
      
      * run make fix-copies
      
      * Add doc for UMT5ForSeqClass
      
      * Update UMT5 config
      
      * Fix docs
      
      * Skip torch fx test for SequenceClassification
      
      * Formatting
      
      * Add skip to UMT5 tests as well
      
      * Fix umt5 tests
      
      * Running make fix-copies
      
      * PR comments
      
      * Fix for change to sentence_representation
      
      * Rename seq_len to hidden_size since that's what it is
      
      * Use base_model to follow format of the rest of the library
      
      * Update docs
      
      * Extract the decoder_input_ids changes and make one liner
      
      * Make one-liner
      8f36ab3e
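
      A short usage sketch of the classes this commit adds (checkpoint and
      num_labels are illustrative):

        from transformers import AutoTokenizer, T5ForSequenceClassification

        tokenizer = AutoTokenizer.from_pretrained("t5-small")
        # num_labels configures the new classification head on top of T5.
        model = T5ForSequenceClassification.from_pretrained("t5-small", num_labels=2)

        inputs = tokenizer("transformers is great", return_tensors="pt")
        logits = model(**inputs).logits
        predicted_class = logits.argmax(dim=-1)
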
[`PreTrainedTokenizerFast`] Keep properties from fast tokenizer (#25053) · f9cc3338
      Arthur authored
      * draft solution
      
      * use `setdefault`
      
      * nits
      
      * add tests and fix truncation issue
      
      * fix test
      
      * test passes locally
      
      * quality
      
      * updates
      
* update tests
      f9cc3338
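
      Roughly, the behavior this commit targets: truncation and padding already
      configured on a backend tokenizers.Tokenizer should survive wrapping. A
      hedged sketch (checkpoint and max_length are illustrative):

        from tokenizers import Tokenizer
        from transformers import PreTrainedTokenizerFast

        # Backend tokenizer with truncation configured up front.
        backend = Tokenizer.from_pretrained("bert-base-uncased")
        backend.enable_truncation(max_length=128)

        # The wrapper now picks up truncation/padding from the backend
        # (via setdefault) instead of silently dropping them.
        fast = PreTrainedTokenizerFast(tokenizer_object=backend)
        print(fast.model_max_length)  # expected: 128 after this change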