1. 26 Feb, 2024 1 commit
  2. 23 Feb, 2024 1 commit
      Allow remote code repo names to contain "." (#29175) · 371b572e
      Matt authored
      * stash commit
      
      * stash commit
      
      * It works!
      
      * Remove unnecessary change
      
      * We don't actually need the cache_dir!
      
      * Update docstring
      
      * Add test
      
      * Add test with custom cache dir too
      
      * Update model repo path
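The change above loosens repo-name validation so remote code repos like "org/repo.v2" resolve. A minimal illustrative sketch of that kind of check — the regex and helper name here are hypothetical, not the actual implementation in #29175:

```python
import re

# Hypothetical validator: before the fix, a name such as "user/my.repo"
# would be rejected because "." was missing from the allowed characters.
_REPO_NAME_RE = re.compile(r"^[\w.-]+(/[\w.-]+)?$")

def is_valid_repo_name(name: str) -> bool:
    """Return True if `name` looks like a valid remote code repo name."""
    return _REPO_NAME_RE.match(name) is not None
```

Note the character class `[\w.-]` now admits dots alongside word characters and hyphens, while slashes are still only allowed as the single org/repo separator.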
  3. 22 Feb, 2024 1 commit
  4. 21 Feb, 2024 3 commits
  5. 20 Feb, 2024 3 commits
• Joao Gante
      Revert low cpu mem tie weights (#29135) · 0996a100
      amyeroberts authored
      * Revert "Add tie_weights() to LM heads and set bias in set_output_embeddings() (#28948)"
      
      This reverts commit 725f4ad1.
      
      * Revert "Patch to skip failing `test_save_load_low_cpu_mem_usage` tests (#29043)"
      
      This reverts commit 4156f517.
      [`Core tokenization`] `add_dummy_prefix_space` option to help with latest issues (#28010) · 15cfe389
      Arthur authored
      * add add_dummy_prefix_space option to slow
      
* checking kwargs might be better. Should be there for all spm tokenizers IMO
      
      * nits
      
      * fix copies
      
      * more copied
      
      * nits
      
      * add prefix space
      
      * nit
      
      * nits
      
      * Update src/transformers/convert_slow_tokenizer.py
      
* fix init
      
      * revert wrong styling
      
      * fix
      
      * nits
      
      * style
      
      * updates
      
      * make sure we use slow tokenizer for conversion instead of looking for the decoder
      
* support llama as well
      
      * update llama tokenizer fast
      
      * nits
      
      * nits nits nits
      
      * update the doc
      
      * update
      
      * update to fix tests
      
      * skip unrelated tailing test
      
      * Update src/transformers/convert_slow_tokenizer.py
      
      * add proper testing
      
      * test decode as well
      
      * more testing
      
      * format
      
      * fix llama test
      
      * Apply suggestions from code review
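The `add_dummy_prefix_space` option above mirrors SentencePiece's `add_dummy_prefix` normalizer: with it on, "Hello" and " Hello" encode identically because a space is prepended before tokenization. A pure-Python sketch of the behavior the flag toggles — illustrative only, not the transformers implementation:

```python
def spm_pretokenize(text: str, add_dummy_prefix_space: bool = True) -> list:
    """Illustrative: SentencePiece marks word boundaries with the "▁"
    meta symbol. A dummy prefix space gives the first word a boundary
    marker too; without it, a leading word stays unmarked."""
    if add_dummy_prefix_space and not text.startswith(" "):
        text = " " + text
    words = text.split(" ")
    # words[0] is "" when the text began with a space; every later word
    # had a space before it and therefore carries the "▁" marker.
    tokens = [words[0]] if words[0] else []
    tokens += ["▁" + w for w in words[1:] if w]
    return tokens
```

With the flag off, the first token of a prompt no longer gains a spurious leading boundary, which is what the latest tokenization issues were about.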
  6. 19 Feb, 2024 1 commit
  7. 16 Feb, 2024 2 commits
  8. 15 Feb, 2024 2 commits
  9. 14 Feb, 2024 4 commits
      Add tie_weights() to LM heads and set bias in set_output_embeddings() (#28948) · 725f4ad1
      JB (Don) authored
      * Add tie_weights() to LM heads and set bias in set_output_embeddings()
      
The biases were not tied correctly in some LM heads, and this change should fix that.
      
      * Moving test_save_and_load_low_cpu_mem_usage to ModelTesterMixin
      
      * Adding _tie_weights() to MPNet and Vilt
      
      * Skip test for low cpu mem usage for Deta/DeformableDetr since they cannot init on meta device
      
* Rename the test to save_load to match the convention
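A minimal sketch of the pattern #28948 describes — every name here is illustrative, not the actual transformers code: when new output embeddings are set, the head must re-tie its bias, otherwise the bias can end up stale or left uninitialized under low-CPU-memory loading.

```python
class TinyLMHead:
    """Hypothetical LM head: the decoder weight and its bias must stay
    in sync whenever the output embeddings are swapped out."""

    def __init__(self, vocab_size: int, hidden: int = 4):
        self.decoder_weight = [[0.0] * hidden for _ in range(vocab_size)]
        self.bias = [0.0] * vocab_size

    def set_output_embeddings(self, new_weight):
        self.decoder_weight = new_weight
        self._tie_weights()  # the step the PR adds, per this sketch

    def _tie_weights(self):
        # Keep the bias length in sync with the decoder's vocab dimension.
        if len(self.bias) != len(self.decoder_weight):
            self.bias = [0.0] * len(self.decoder_weight)
```

Without the `_tie_weights()` call, resizing the vocabulary via `set_output_embeddings` would leave a bias of the old length attached to the new decoder.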
• Raushan Turganbay
      Add SiglipForImageClassification and CLIPForImageClassification (#28952) · 63ffd56d
      NielsRogge authored
      * First draft
      
      * Add CLIPForImageClassification
      
      * Remove scripts
      
      * Fix doctests
      Add `StableLM` (#28810) · de6029a0
      Jonathan Tow authored
      * Add `StableLM`
      
      * fix(model): re-create from `huggingface-cli add-new-model-like persimmon`
      
      * fix: re-add changes to address comments
      
      * fix(readme): add links to paper
      
      * fix(tokenization_auto): remove `GPTNeoXTokenizerFastFast` ref
      
      * fix(tests): re-add `@slow` decorator to integration tests
      
      * fix(tests): import slow...
      
      * fix(readme_hd): remove whitespace edit
      
      * fix(tokenizer): auto tokenizer tuple
      
      * skip doctests for `modeling_stablelm`
  10. 13 Feb, 2024 3 commits
  11. 08 Feb, 2024 1 commit
  12. 06 Feb, 2024 3 commits
  13. 05 Feb, 2024 1 commit
  14. 02 Feb, 2024 3 commits
  15. 01 Feb, 2024 1 commit
      Adding [T5/MT5/UMT5]ForTokenClassification (#28443) · 0d26abdd
      JB (Don) authored
      * Adding [T5/MT5/UMT5]ForTokenClassification
      
      * Add auto mappings for T5ForTokenClassification and variants
      
      * Adding ForTokenClassification to the list of models
      
      * Adding attention_mask param to the T5ForTokenClassification test
      
      * Remove outdated comment in test
      
      * Adding EncoderOnly and Token Classification tests for MT5 and UMT5
      
      * Fix typo in umt5 string
      
      * Add tests for all the existing MT5 models
      
      * Fix wrong comment in dependency_versions_table
      
      * Reverting change to common test for _keys_to_ignore_on_load_missing
      
      The test is correctly picking up redundant keys in _keys_to_ignore_on_load_missing.
      
      * Removing _keys_to_ignore_on_missing from MT5 since the key is not used in the model
      
      * Add fix-copies to MT5ModelTest
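Conceptually, a `ForTokenClassification` head is just a per-token linear projection from the encoder's hidden size to `num_labels`. A dependency-free sketch of that projection, assuming nothing about the real T5 classes:

```python
def token_classification_logits(hidden_states, weight, bias):
    """Illustrative per-token linear head.
    hidden_states: [seq_len][hidden] encoder outputs
    weight:        [hidden][num_labels]
    bias:          [num_labels]
    Returns logits of shape [seq_len][num_labels]."""
    return [
        [sum(h * w for h, w in zip(token, col)) + b
         for col, b in zip(zip(*weight), bias)]
        for token in hidden_states
    ]
```

The real implementation would apply dropout and use the encoder-only forward pass, but the shape contract (one logit vector per input token) is the same.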
  16. 31 Jan, 2024 2 commits
      Flax mistral (#26943) · f7076cd3
      Kian Sierra McGettigan authored
      * direct copy from llama work
      
      * mistral modules forward pass working
      
      * flax mistral forward pass with sliding window
      
      * added tests
      
      * added layer collection approach
      
      * Revert "added layer collection approach"
      
      This reverts commit 0e2905bf2236ec323163fc1a9f0c016b21aa8b8f.
      
      * Revert "Revert "added layer collection approach""
      
      This reverts commit fb17b6187ac5d16da7c461e1130514dc3d137a43.
      
      * fixed attention outputs
      
      * added mistral to init and auto
      
      * fixed import name
      
      * fixed layernorm weight dtype
      
      * freeze initialized weights
      
* make sure conversion considers bfloat16
      
      * added backend
      
      * added docstrings
      
      * added cache
      
      * fixed sliding window causal mask
      
      * passes cache tests
      
      * passed all tests
      
      * applied make style
      
      * removed commented out code
      
* applied fix-copies, ignored other model changes
      
      * applied make fix-copies
      
      * removed unused functions
      
      * passed generation integration test
      
      * slow tests pass
      
      * fixed slow tests
      
      * changed default dtype from jax.numpy.float32 to float32 for docstring check
      
* skip cache test for FlaxMistralForSequenceClassification since if pad_token_id is in input_ids it doesn't score previous input_ids
      
      * updated checkpoint since from_pt not included
      
      * applied black style
      
      * removed unused args
      
      * Applied styling and fixup
      
      * changed checkpoint for doc back
      
      * fixed rf after adding it to hf hub
      
      * Add dummy ckpt
      
      * applied styling
      
      * added tokenizer to new ckpt
      
      * fixed slice format
      
      * fix init and slice
      
      * changed ref for placeholder TODO
      
      * added copies from Llama
      
      * applied styling
      
      * applied fix-copies
      
      * fixed docs
      
      * update weight dtype reconversion for sharded weights
      
      * removed Nullable input ids
      
      * Removed unnecessary output attentions in Module
      
* added embedding weight initialization
      
      * removed unused past_key_values
      
      * fixed deterministic
      
      * Fixed RMS Norm and added copied from
      
      * removed input_embeds
      
      * applied make style
      
      * removed nullable input ids from sequence classification model
      
      * added copied from GPTJ
      
      * added copied from Llama on FlaxMistralDecoderLayer
      
      * added copied from to FlaxMistralPreTrainedModel methods
      
      * fix test deprecation warning
      
      * freeze gpt neox random_params and fix copies
      
      * applied make style
      
      * fixed doc issue
      
* skipped docstring test to align # copied from
      
      * applied make style
      
      * removed FlaxMistralForSequenceClassification
      
      * removed unused padding_idx
      
      * removed more sequence classification
      
      * removed sequence classification
      
      * applied styling and consistency
      
      * added copied from in tests
      
      * removed sequence classification test logic
      
      * applied styling
      
      * applied make style
      
      * removed freeze and fixed copies
      
      * undo test change
      
      * changed repeat_kv to tile
      
      * fixed to key value groups
      
      * updated copyright year
      
* split causal_mask
      
      * empty to rerun failed pt_flax_equivalence test FlaxWav2Vec2ModelTest
      
      * went back to 2023 for tests_pr_documentation_tests
      
      * went back to 2024
      
      * changed tile to repeat
      
      * applied make style
      
      * empty for retry on Wav2Vec2
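Several bullets above concern the sliding-window causal mask. Its rule is simple: position i may attend to position j only if j is not in the future (causal) and lies within the window. A pure-Python sketch with booleans standing in for the JAX mask arrays:

```python
def sliding_window_causal_mask(seq_len: int, window: int):
    """Illustrative mask: entry [i][j] is True iff token i may attend
    to token j, i.e. j <= i (causal) and i - j < window (sliding)."""
    return [
        [j <= i and i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]
```

In the actual model this mask is combined with the attention mask and broadcast over batch and heads; the sketch only shows the positional rule.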
      [Whisper] Refactor forced_decoder_ids & prompt ids (#28687) · 65a926e8
      Patrick von Platen authored
      * up
      
      * Fix more
      
      * Correct more
      
      * Fix more tests
      
      * fix fast tests
      
      * Fix more
      
      * fix more
      
      * push all files
      
      * finish all
      
      * make style
      
      * Fix timestamp wrap
      
      * make style
      
      * make style
      
      * up
      
      * up
      
      * up
      
      * Fix lang detection behavior
      
      * Fix lang detection behavior
      
      * Add lang detection test
      
      * Fix lang detection behavior
      
      * make style
      
      * Update src/transformers/models/whisper/generation_whisper.py
      Co-authored-by: default avatarSanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * better error message
      
      * make style tests
      
      * add warning
      
      ---------
      Co-authored-by: default avatarSanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
  17. 30 Jan, 2024 2 commits
      Add tf_keras imports to prepare for Keras 3 (#28588) · 415e9a09
      Matt authored
      * Port core files + ESM (because ESM code is odd)
      
      * Search-replace in modelling code
      
      * Fix up transfo_xl as well
      
      * Fix other core files + tests (still need to add correct import to tests)
      
      * Fix cookiecutter
      
      * make fixup, fix imports in some more core files
      
      * Auto-add imports to tests
      
      * Cleanup, add imports to sagemaker tests
      
      * Use correct exception for importing tf_keras
      
      * Fixes in modeling_tf_utils
      
      * make fixup
      
      * Correct version parsing code
      
      * Ensure the pipeline tests correctly revert to float32 after each test
      
      * Ensure the pipeline tests correctly revert to float32 after each test
      
      * More tf.keras -> keras
      
      * Add dtype cast
      
      * Better imports of tf_keras
      
      * Add a cast for tf.assign, just in case
      
      * Fix callback imports
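The gist of the PR is an import shim: prefer the `tf_keras` backwards-compatibility package, otherwise fall back to the Keras bundled with TensorFlow. A hedged sketch of that pattern — the real code also parses the Keras version and raises a dedicated error, which this sketch omits:

```python
# Illustrative import shim in the spirit of this PR, not the exact code.
try:
    import tf_keras as keras  # Keras 2 compatibility package
except ImportError:
    try:
        from tensorflow import keras  # only safe before Keras 3
    except ImportError:
        keras = None  # neither backend available in this environment
```

Downstream modeling code then uses `keras.layers` etc. uniformly, regardless of which package provided it.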
      [`Backbone`] Use `load_backbone` instead of `AutoBackbone.from_config` (#28661) · 2fa1c808
      amyeroberts authored
      * Enable instantiating model with pretrained backbone weights
      
      * Remove doc updates until changes made in modeling code
      
      * Use load_backbone instead
      
      * Add use_timm_backbone to the model configs
      
      * Add missing imports and arguments
      
      * Update docstrings
      
      * Make sure test is properly configured
      
      * Include recent DPT updates
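The point of `load_backbone` is to centralize the dispatch that callers of `AutoBackbone.from_config` used to do by hand: pretrained checkpoint, timm backbone, or fresh config-initialized backbone. A hypothetical sketch of that dispatch, using the `use_pretrained_backbone`/`use_timm_backbone` config flags the commits mention (return values here are placeholders, not real models):

```python
def load_backbone(config):
    """Illustrative dispatch: pick the backbone source from config flags
    instead of calling AutoBackbone.from_config at every call site."""
    if getattr(config, "use_pretrained_backbone", False):
        return ("pretrained", config.backbone)
    if getattr(config, "use_timm_backbone", False):
        return ("timm", config.backbone)
    return ("from_config", config.backbone_config)
```

One helper means one place to add new flags (as the DPT updates did) rather than N copies of the same if/else in modeling files.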
  18. 29 Jan, 2024 1 commit
  19. 25 Jan, 2024 1 commit
      Add Depth Anything (#28654) · 963db81a
      NielsRogge authored
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Add docs
      
      * Remove file
      
      * Add copied from
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Fix style
      
      * Update docs
      
      * Convert all checkpoints, add integration test
      
      * Rename checkpoints
      
      * Add pretrained backbone attributes
      
      * Fix default config
      
      * Address comment
      
      * Add figure to docs
      
      * Fix bug thanks to @xenova
      
      * Update conversion script
      
      * Fix integration test
  20. 24 Jan, 2024 1 commit
      Exclude the load balancing loss of padding tokens in Mixtral-8x7B (#28517) · c5c69096
      Khai Mai authored
      * fix the function load_balancing_loss_func in Mixtral_Moe to include attention_mask
      
      * format code using black and ruff
      
      * skip computing mask if attention_mask=None
      
      * add tests for load balancing loss Mixtral-Moe
      
      * fix assert loss is different in mixtral_test
      
      * fix pad_leng
      
      * use assertNotAlmostEqual and print to debug
      
      * remove print for debug
      
      * minor updates
      
      * reduce rtol and atol
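The fix above makes the MoE auxiliary loss ignore padding: routing statistics are accumulated only over positions where `attention_mask` is 1, so padded tokens no longer skew the per-expert fractions. A simplified pure-Python sketch — the real `load_balancing_loss_func` also weighs in the mean router probabilities, which this sketch leaves out:

```python
def masked_routing_fractions(expert_indices, attention_mask, num_experts):
    """Illustrative: fraction of non-padding tokens routed to each expert.
    expert_indices: top-1 expert id per token; attention_mask: 1 = real
    token, 0 = padding."""
    counts = [0] * num_experts
    total = 0
    for expert, keep in zip(expert_indices, attention_mask):
        if keep:
            counts[expert] += 1
            total += 1
    return [c / total for c in counts]
```

Without the mask, a heavily padded batch would report artificially uniform expert usage and under-penalize genuine routing imbalance.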
  21. 23 Jan, 2024 1 commit
  22. 21 Jan, 2024 1 commit
  23. 19 Jan, 2024 1 commit