1. 27 Mar, 2024 2 commits
    • Mamba `slow_forward` gradient fix (#29563) · cefb819f
      Anton Vlasjuk authored
      * FIX: Cached slow forward in mamba
      - additionally added mamba cached test
      - added unused test (mamba causal lm forward and backward)
      - fixed typo: "causl" --> "causal"
      
      * formatting
      
      * fix: use real `slow_forward` call instead of torch module's
      
      * add shape assertion for mixer block test
      
      * adjust shape assertion
      cefb819f
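      For context, a minimal sketch of the cached forward/backward pattern the new tests exercise — not code from the PR; the config sizes are arbitrary and no checkpoint is needed:

      ```python
      # Hypothetical repro sketch: Mamba's pure-PyTorch slow_forward path (used when
      # the fused CUDA kernels are not installed) with caching enabled, then backward.
      import torch
      from transformers import MambaConfig, MambaForCausalLM

      config = MambaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2)
      model = MambaForCausalLM(config)

      input_ids = torch.randint(0, 100, (1, 8))
      outputs = model(input_ids, labels=input_ids, use_cache=True)
      outputs.loss.backward()  # the gradient fix targets this cached slow path
      ```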
    • Add Qwen2MoE (#29377) · 1c39974a
      Bo Zheng authored
      
      
      * add support for qwen2 MoE models

      * update docs

      * update model name & test

      * update readme

      * update class names & readme & model_doc of Qwen2MoE.

      * update architecture name

      * fix qwen2_moe tests

      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer

      * update modeling_qwen2_moe.py

      * fix model architecture

      * fix style

      * fix test when there are sparse and non-sparse layers

      * fixup

      * Update README.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * add archive back

      * fix integration test
      
      ---------
      Co-authored-by: bozheng-hit <dsoul0621@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      1c39974a
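      A short usage sketch of the outcome described above — Qwen2MoE reuses `Qwen2Tokenizer` rather than adding a `Qwen2MoeTokenizer`. The checkpoint name is an assumption:

      ```python
      # Sketch: Qwen2MoE ships Qwen2MoeForCausalLM but reuses Qwen2Tokenizer,
      # so AutoTokenizer resolves to the existing tokenizer class.
      from transformers import AutoTokenizer, Qwen2MoeForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
      model = Qwen2MoeForCausalLM.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

      inputs = tokenizer("Give me a short introduction to MoE models.", return_tensors="pt")
      generated = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(generated[0], skip_special_tokens=True))
      ```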
  2. 25 Mar, 2024 1 commit
  3. 22 Mar, 2024 1 commit
  4. 21 Mar, 2024 1 commit
  5. 20 Mar, 2024 5 commits
  6. 19 Mar, 2024 3 commits
    • Clean-up generation tests after moving methods to private (#29582) · 425ba56c
      Raushan Turganbay authored
      * clean-up tests
      
      * refine comments
      
      * fix musicgen tests
      
      * make style
      
      * remove slow decorator from a test
      
      * more clean-up
      
      * fix other failing tests
      425ba56c
    • Implementation of SuperPoint and AutoModelForKeypointDetection (#28966) · 56baa033
      StevenBucaille authored
      
      
      * Added SuperPoint docs
      
      * Added tests
      
      * Removed commented part
      
      * Commit to create and fix add_superpoint branch with a new branch
      
      * Fixed dummy_pt_objects
      
      * Committed missing files
      
      * Fixed README.md
      
      * Apply suggestions from code review
      
      Fixed small changes
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Moved ImagePointDescriptionOutput from modeling_outputs.py to modeling_superpoint.py
      
      * Removed AutoModelForKeypointDetection and related stuff
      
      * Fixed inconsistencies in image_processing_superpoint.py
      
      * Moved infer_on_model logic directly into test_inference
      
      * Fixed bugs; added labels to the forward method with checks that the value is properly None, and added tests for this logic in test_modeling_superpoint.py
      
      * Added tests to SuperPointImageProcessor to ensure that images are properly converted to grayscale
      
      * Removed remaining mentions of MODEL_FOR_KEYPOINT_DETECTION_MAPPING
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed from (w, h) to (h, w) as input for tests
      
      * Removed unnecessary condition
      
      * Moved last_hidden_state to be the first returned
      
      * Moved last_hidden_state to be the first returned (bis)
      
      * Moved last_hidden_state to be the first returned (ter)
      
      * Switched image_width and image_height in tests to match recent changes
      
      * Added config as first SuperPointConvBlock init argument
      
      * Reordered README's after merge
      
      * Added missing first config argument to SuperPointConvBlock instantiations
      
      * Removed formatting error
      
      * Added SuperPoint to README's de, pt-br, ru, te and vi
      
      * Checked out README_fr.md
      
      * Fixed README_fr.md
      
      * Test fix README_fr.md
      
      * Test fix README_fr.md
      
      * Last make fix-copies !
      
      * Updated checkpoint path
      
      * Removed unused SuperPoint doc
      
      * Added missing image
      
      * Update src/transformers/models/superpoint/modeling_superpoint.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Removed unnecessary import
      
      * Update src/transformers/models/superpoint/modeling_superpoint.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Added SuperPoint to _toctree.yml
      
      ---------
      Co-authored-by: steven <steven.bucaillle@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Steven Bucaille <steven.bucaille@buawei.com>
      56baa033
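      A rough inference sketch for the new model; the class name (`SuperPointForKeypointDetection`) and checkpoint are best-effort assumptions and may differ between versions:

      ```python
      # Inference sketch, not copied from the PR. The image processor converts
      # inputs to grayscale internally, as the tests above check.
      import torch
      from PIL import Image
      from transformers import AutoImageProcessor, SuperPointForKeypointDetection

      processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
      model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

      image = Image.new("RGB", (640, 480))  # stand-in image
      inputs = processor(image, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)  # keypoints, scores, descriptors; last_hidden_state first
      ```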
    • [`GemmaConverter`] use user_defined_symbols (#29473) · 2f9a3edb
      Arthur authored
      * use user_defined_symbols
      
      * fixup
      
      * nit
      
      * add a very robust test
      
      * make sure all models are tested with the `pretrained_tokenizer_to_test`
      
      * should we make sure we test all of them?
      
      * merge
      
      * remove the id
      
      * fix test
      
      * update
      
      * ousies
      
      * oups
      
      * fixup
      
      * fix copies check
      
      * remove `pretrained_tokenizer_to_test`
      2f9a3edb
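      Conceptually, the converter now reads the pieces flagged as user-defined in the sentencepiece proto and registers them as added tokens so they are never split. A standalone sketch of that idea — not the converter's actual code, and the model path is hypothetical:

      ```python
      # Inspect which pieces a sentencepiece model marks as USER_DEFINED; these
      # must survive conversion as whole added tokens on the fast tokenizer.
      from sentencepiece import sentencepiece_model_pb2 as model_pb2

      proto = model_pb2.ModelProto()
      proto.ParseFromString(open("tokenizer.model", "rb").read())  # hypothetical path

      user_defined = [
          p.piece for p in proto.pieces
          if p.type == model_pb2.ModelProto.SentencePiece.USER_DEFINED
      ]
      print(user_defined)  # e.g. control/markup tokens that must stay whole
      ```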
  7. 18 Mar, 2024 1 commit
    • Add MusicGen Melody (#28819) · c43b380e
      Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
      * correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to notdoctested
      
      * Add suggestion from Sanchit
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct get_uncoditional_inputs docstrings
      
      * modify README according to SANCHIT feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logic
      
      * modify license at the beginning
      
      * refactor all_self_attns->all_attentions
      
      * remove ignore copy from causallm generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback-> better naming
      
      * refactor input_values->input_features
      
      * refactor input_values->input_features and fix import fe
      
      * add input_features to docstrings

      * correct inputs_embeds logic
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation ->_prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve is-in logic and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update converting script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace musicgen melody checkpoints with official orga
      
      * rename ylacombe->facebook in checkpoints
      
      * fix copies
      
      * remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      c43b380e
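      A minimal end-user sketch of the generation flow the commits describe (text conditioning, soundfile for saving); parameter values are illustrative:

      ```python
      # Text-conditioned generation with the new model, saving audio via soundfile
      # as the commits above settle on.
      import soundfile as sf
      from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

      processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
      model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

      inputs = processor(text=["80s pop track with synth"], padding=True, return_tensors="pt")
      audio = model.generate(**inputs, max_new_tokens=256)

      sampling_rate = model.config.audio_encoder.sampling_rate
      sf.write("out.wav", audio[0, 0].numpy(), sampling_rate)
      ```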
  8. 15 Mar, 2024 2 commits
  9. 14 Mar, 2024 2 commits
  10. 13 Mar, 2024 5 commits
    • Add PvT-v2 Model (#26812) · 1fc505b8
      Nate Cibik authored
      
      
      * Added pytests for pvt-v2, all passed

      * Added pvt_v2 to docs/source/en/model_doc

      * Ran fix-copies and fixup. All checks passed

      * Added additional ReLU for linear attention mode

      * pvt_v2_b2_linear converted and working

      * copied models/pvt to adapt to pvt_v2

      * First commit of pvt_v2

      * PvT-v2 now works in AutoModel

      * Reverted batch eval changes for PR

      * Expanded type support for Pvt-v2 config

      * Fixed config docstring. Added channels property

      * Fixed model names in tests

      * Fixed config backbone compat. Added additional type support for image size in config

      * Fixed config backbone compat

      * Allowed for batching of eval metrics

      * Set key and value layers to use separate linear modules. Fixed pruning function

      * Set AvgPool to 7

      * Fixed issue in init

      * Successful conversion of pretrained weights for PVT-v2

      * Successful conversion of pretrained weights for PVT-v2 models

      * Updated index.md

      * Ran fix-copies

      * Fixed PvtV2Backbone tests

      * Added TFRegNet to OBJECTS_TO_IGNORE in check_docstrings.py

      * Fixed backbone stuff and fixed tests: all passing

      * Ran make fixup

      * Made modifications for code checks

      * Remove ONNX config from configuration_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Use explicit image size dict in test_modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Make image_size optional in test_modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Remove _ntuple use in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Remove reference to fp16_enabled

      * Model modules now take config as first argument even when not used

      * Replaced abbreviations for "SR" and "AP" with explicit "spatialreduction" and "averagepooling"

      * All LayerNorm now instantiates with config.layer_norm_eps

      * Added docstring for depth-wise conv layer

      * PvtV2Config now only takes Union[int, Tuple[int, int]] for image size

      * Refactored PVTv2 in prep for gradient checkpointing

      * Gradient checkpointing ready to test

      * Removed override of _set_gradient_checkpointing

      * Cleaned out old code

      * Applied code fixup

      * Began debug of pvt_v2 tests

      * Leave handling of num_labels to base pretrained config class

      * Deactivated gradient checkpointing tests until it is fixed

      * Removed PvtV2ImageProcessor which duped PvtImageProcessor
      
      * Fixed issue from rebase

      * Set tests for gradient checkpointing to skip those using reentrant since it isn't supported
      
      * Changed model name in docs
      
      * Removed duplicate PvtV2Backbone
      
      * Work around type switching issue in tests
      
      * Fix model name in config comments
      
      * Update docs/source/en/model_doc/pvt_v2.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Changed name of variable from 'attn_reduce' to 'sr_type'
      
      * Changed from using 'sr_type' to 'linear_attention' for clarity
      
      * Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
      
      Removed old code
      
      * Fixed Class names to be more descriptive
      
      * Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
      
      Removed outdated code
      
      * Moved paper abstract to single line in pvt_v2.md
      
      * Added usage tips to pvt_v2.md
      
      * Simplified module inits by passing layer_idx
      
      * Fixed typing for hidden_act in PvtV2Config
      
      * Removed unused import
      
      * Add pvt_v2 to docs/source/en/_toctree.yml
      
      * Updated documentation in docs/source/en/model_doc/pvt_v2.md to be more comprehensive.
      
      * Update src/transformers/models/pvt_v2/modeling_pvt_v2.py

      Move function parameters to single line
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/pvt_v2/modeling_pvt_v2.py

      Update year of copyright to 2024
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/pvt_v2/modeling_pvt_v2.py

      Make code more explicit
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Updated sr_ratio to be more explicit spatial_reduction_ratio

      * Removed excess type hints in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Move params to single line in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Removed needless comment in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update copyright date in pvt_v2.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Moved params to single line in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Updated copyright date in configuration_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Cleaned comments in modeling_pvt_v2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Renamed spatial_reduction Conv2D operation
      
      * Revert "Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
      "
      
      This reverts commit c4a04416dde8f3475ab405d1feb368600e0f8538.
      
      * Updated conversion script to reflect module name change
      
      * Deprecated reshape_last_stage option in config
      
      * Removed unused imports
      
      * Code formatting
      
      * Fixed outdated decorators on test_inference_fp16
      
      * Added "Copied from" comments in test_modeling_pvt_v2.py
      
      * Fixed import listing
      
      * Updated model name
      
      * Force empty commit for PR refresh
      
      * Fixed linting issue
      
      * Removed # Copied from comments
      
      * Added PVTv2 to README_fr.md
      
      * Ran make fix-copies
      
      * Replace all FoamoftheSea hub references with OpenGVLab
      
      * Fixed out_indices and out_features logic in configuration_pvt_v2.py
      
      * Made ImageNet weight conversion verification optional in convert_pvt_v2_to_pytorch.py
      
      * Ran code fixup
      
      * Fixed order of parent classes in PvtV2Config to fix the to_dict method override
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      1fc505b8
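      A usage sketch of the classification entry point added here; the checkpoint follows the OpenGVLab organization mentioned in the commits:

      ```python
      # Classification with PvT-v2; the PR also adds PvtV2Backbone for detection
      # and segmentation stacks. Input shape is illustrative.
      import torch
      from transformers import AutoImageProcessor, PvtV2ForImageClassification

      processor = AutoImageProcessor.from_pretrained("OpenGVLab/pvt_v2_b0")
      model = PvtV2ForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")

      pixel_values = torch.rand(1, 3, 224, 224)
      logits = model(pixel_values=pixel_values).logits
      print(logits.shape)  # (1, num_labels)
      ```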
    • Fix batching tests for new models (Mamba and SegGPT) (#29633) · 5ac264d8
      Raushan Turganbay authored
      
      
      * fix batching tests for new models
      
      * Update tests/models/seggpt/test_modeling_seggpt.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      5ac264d8
    • Adds pretrained IDs directly in the tests (#29534) · 11bbb505
      Lysandre Debut authored
      * Adds pretrained IDs directly in the tests
      
      * Fix tests
      
      * Fix tests
      
      * Review!
      11bbb505
    • [Flash Attention 2] Add flash attention 2 for GPT-J (#28295) · be3fd8a2
      bytebarde authored
      
      
      * initial implementation of flash attention for gptj
      
      * modify flash attention and overwrite test_flash_attn_2_generate_padding_right
      
      * update flash attention support list
      
      * remove the copy line in the `CodeGenBlock`
      
      * address copy mechanism
      
      * Update src/transformers/models/gptj/modeling_gptj.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Add GPTJ attention classes
      
      * add expected outputs in the gptj test
      
      * Ensure repo consistency with 'make fix-copies'
      
      ---------
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      be3fd8a2
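      Opting in follows the library's usual `attn_implementation` switch; a sketch, assuming flash-attn 2 and a supported GPU are installed:

      ```python
      # Load GPT-J with the new Flash Attention 2 backend (half precision required).
      import torch
      from transformers import AutoModelForCausalLM

      model = AutoModelForCausalLM.from_pretrained(
          "EleutherAI/gpt-j-6b",
          torch_dtype=torch.float16,
          attn_implementation="flash_attention_2",
      )
      ```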
  11. 12 Mar, 2024 2 commits
  12. 11 Mar, 2024 1 commit
  13. 08 Mar, 2024 2 commits
  14. 07 Mar, 2024 3 commits
  15. 05 Mar, 2024 5 commits
    • [`Add Mamba`] Adds support for the `Mamba` models (#28094) · fb1c62e9
      Arthur authored
      
      
      * initial-commit
      
      * start cleaning
      
      * small nits
      
      * small nits
      
      * current updates
      
      * add kernels
      
      * small refactoring little step
      
      * add comments
      
      * styling
      
      * nit
      
      * nits
      
      * Style
      
      * Small changes
      
      * Push dummy mamba simple slow
      
      * nit
      
      * Use original names
      
      * Use original names and remove norm
      
      * Updates for inference params
      
      * Style and updates
      
      * nits
      
      * Match logits
      
      * Add a test
      
      * Add expected generated text
      
      * nits doc, imports and styling
      
      * style
      
      * oups
      
      * don't install kernels, invite users to install the required kernels

      * let users use the original packages
      
      * styling
      
      * nits
      
      * fix some copies
      
      * update doc
      
      * fix-copies
      
      * styling done
      
      * nits
      
      * fix import check
      
      * runs, but wrong CUDA results
      
      * mamba CUDA works :)
      
      * fix the fast path
      
      * config naming nits
      
      * conversion script is not required at this stage
      
      * finish fixing the fast path: generation make sense now!
      
      * nit
      
      * Let's start working on the CIs
      
      * style
      
      * better style
      
      * more nits
      
      * test nit
      
      * quick fix for now
      
      * nits
      
      * nit
      
      * nit
      
      * nit
      
      * nits
      
      * update test rest
      
      * fixup
      
      * update test
      
      * nit
      
      * some fixes
      
      * nits
      
      * update test values
      
      * fix styling
      
      * nit
      
      * support peft
      
      * integration tests require torch
      
      * also add slow markers
      
      * styling
      
      * chose forward wisely
      
      * nits
      
      * update tests
      
      * fix gradient checkpointing
      
      * fixup
      
      * nit
      
      * fix doc
      
      * check copies
      
      * fix the docstring
      
      * fix some more tests
      
      * style
      
      * fix beam search
      
      * add init scheme
      
      * update
      
      * nit
      
      * fix
      
      * fixup the doc
      
      * fix the doc
      
      * fixup
      
      * tentative update but slow is no longer good
      
      * nit
      
      * should we always use float32?
      
      * nits
      
      * revert wrong changes
      
      * res in float32
      
      * cleanup
      
      * skip fmt for now
      
      * update generation values
      
      * update test values running original model
      
      * fixup
      
      * update tests + rename inference_params to cache_params + make sure training does not use cache_params
      
      * small nits
      
      * more nits
      
      * fix final CIs
      
      * style
      
      * nit doc
      
      * I hope final doc nits
      
      * nit
      
      * 🫠
      
      * final touch!
      
      * fix torch import
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Apply suggestions from code review
      
      * fix fix and fix
      
      * fix base model prefix!
      
      * nit
      
      * Update src/transformers/models/mamba/__init__.py
      
      * Update docs/source/en/model_doc/mamba.md
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * nit
      
      ---------
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      fb1c62e9
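      A minimal sketch of the merged API — the fused CUDA kernels are optional (without them the model falls back to `slow_forward`), and the state cache is exposed as `cache_params` rather than `inference_params`. Checkpoint name assumed:

      ```python
      # Basic generation with the new Mamba architecture.
      from transformers import AutoTokenizer, MambaForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
      model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

      inputs = tokenizer("Hey how are you doing?", return_tensors="pt")
      out = model.generate(**inputs, max_new_tokens=10)
      print(tokenizer.decode(out[0]))
      ```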
    • [`Udop imports`] Processor tests were not run. (#29456) · 4d892b72
      Arthur authored
      * fix udop imports
      
      * sort imports
      4d892b72
    • Revert-commit 0d52f9f5 (#29455) · 57d007b9
      Arthur authored
      * style
      
      * revert with RP
      
      * nit
      
      * exact revert
      57d007b9
    • more fix · 0d52f9f5
      Arthur Zucker authored
      0d52f9f5
    • [`UdopTokenizer`] Fix post merge imports (#29451) · 13285220
      Arthur authored
      * update
      
      * ...
      
      * nits
      
      * arf
      
      * 🧼
      
      * beat the last guy
      
      * style everyone
      13285220
  16. 04 Mar, 2024 3 commits
    • Add UDOP (#22940) · 836921fd
      NielsRogge authored
      
      
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More fixes
      
      * Fix copies
      
      * More improvements
      
      * More fixes
      
      * More improvements
      
      * Convert checkpoint
      
      * More improvements, set up tests
      
      * Fix more tests
      
      * Add UdopModel
      
      * More improvements
      
      * Fix equivalence test
      
      * More fixes
      
      * Redesign model
      
      * Extend conversion script
      
      * Use real inputs for conversion script
      
      * Add image processor
      
      * Improve conversion script
      
      * Add UdopTokenizer
      
      * Add fast tokenizer
      
      * Add converter
      
      * Update README's
      
      * Add processor
      
      * Add fully fledged tokenizer
      
      * Add fast tokenizer
      
      * Use processor in conversion script
      
      * Add tokenizer tests
      
      * Fix one more test
      
      * Fix more tests
      
      * Fix tokenizer tests
      
      * Enable fast tokenizer tests
      
      * Fix more tests
      
      * Fix additional_special_tokens of fast tokenizer
      
      * Fix tokenizer tests
      
      * Fix more tests
      
      * Fix equivalence test
      
      * Rename image to pixel_values
      
      * Rename seg_data to bbox
      
      * More renamings
      
      * Remove vis_special_token
      
      * More improvements
      
      * Add docs
      
      * Fix copied from
      
      * Update slow tokenizer
      
      * Update fast tokenizer design
      
      * Make text input optional
      
      * Add first draft of processor tests
      
      * Fix more processor tests
      
      * Fix decoder_start_token_id
      
      * Fix test_initialization
      
      * Add integration test
      
      * More improvements
      
      * Improve processor, add test
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Remove print statement
      
      * Update README and auto mapping
      
      * Delete files
      
      * Delete another file
      
      * Remove code
      
      * Fix test
      
      * Fix docs
      
      * Remove asserts
      
      * Add doc tests
      
      * Include UDOP in exotic model tests
      
      * Add expected tesseract decodings
      
      * Add sentencepiece
      
      * Use same design as T5
      
      * Add UdopEncoderModel
      
      * Add UdopEncoderModel to tests
      
      * More fixes
      
      * Fix fast tokenizer
      
      * Fix one more test
      
      * Remove parallelisable attribute
      
      * Fix copies
      
      * Remove legacy file
      
      * Copy from T5Tokenizer
      
      * Fix rebase
      
      * More fixes, copy from T5
      
      * More fixes
      
      * Fix init
      
      * Use ArthurZ/udop for tests
      
      * Make all model tests pass
      
      * Remove UdopForConditionalGeneration from auto mapping
      
      * Fix more tests
      
      * fixups
      
      * more fixups
      
      * fix the tokenizers
      
      * remove unnecessary changes
      
      * nits
      
      * nits
      
      * replace truncate_sequences_boxes with truncate_sequences for fix-copies
      
      * nit current path
      
      * add a test for input ids
      
      * ids that we should get taken from c9f7a32f57440d90ff79890270d376a1cc0acb68
      
      * nits converting
      
      * nits
      
      * apply ruff
      
      * nits
      
      * nits
      
      * style
      
      * fix slow order of addition
      
      * fix udop fast range as well
      
      * fixup
      
      * nits
      
      * Add docstrings
      
      * Fix gradient checkpointing
      
      * Update code examples
      
      * Skip tests
      
      * Update integration test
      
      * Address comment
      
      * Make fixup
      
      * Remove extra ids from tokenizer
      
      * Skip test
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update year
      
      * Address comment
      
      * Address more comments
      
      * Address comments
      
      * Add copied from
      
      * Update CI
      
      * Rename script
      
      * Update model id
      
      * Add AddedToken, skip tests
      
      * Update CI
      
      * Fix doc tests
      
      * Do not use Tesseract for the doc tests
      
      * Remove kwargs
      
      * Add original inputs
      
      * Update casting
      
      * Fix doc test
      
      * Update question
      
      * Update question
      
      * Use LayoutLMv3ImageProcessor
      
      * Update organization
      
      * Improve docs
      
      * Update forward signature
      
      * Make images optional
      
      * Remove deprecated device argument
      
      * Add comment, add add_prefix_space
      
      * More improvements
      
      * Remove kwargs
      
      ---------
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      836921fd
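      A document-QA sketch of the processor + model flow described above; the checkpoint and file path are assumptions, and OCR through the image processor requires Tesseract:

      ```python
      # UdopProcessor wraps LayoutLMv3ImageProcessor (OCR) and the UDOP tokenizer,
      # feeding pixel_values, input_ids, and bbox into the seq2seq model.
      from PIL import Image
      from transformers import UdopProcessor, UdopForConditionalGeneration

      processor = UdopProcessor.from_pretrained("microsoft/udop-large")
      model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")

      image = Image.open("document.png").convert("RGB")  # hypothetical scan
      inputs = processor(images=image, text="Question answering. What is the date?", return_tensors="pt")
      ids = model.generate(**inputs, max_new_tokens=20)
      print(processor.batch_decode(ids, skip_special_tokens=True)[0])
      ```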
    • DeformableDETR support bfloat16 (#29232) · ed74d978
      Donggeun Yu authored
      
      
      * Update ms_deform_attn_cuda.cu
      
      * Update ms_deform_attn_cuda.cuh
      
      * Update modeling_deformable_detr.py
      
      * Update src/transformers/models/deformable_detr/modeling_deformable_detr.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update modeling_deformable_detr.py
      
      * python utils/check_copies.py --fix_and_overwrite
      
      * Fix dtype mismatch error
      
      * Update test_modeling_deformable_detr.py
      
      * Update test_modeling_deformable_detr.py
      
      * Update modeling_deformable_detr.py
      
      * Update modeling_deformable_detr.py
      
      * Support DeformableDETR with bfloat16
      
      * Add test code
      
      * Use AT_DISPATCH_FLOATING_TYPES_AND2
      
      * Update tests/models/deformable_detr/test_modeling_deformable_detr.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/deformable_detr/test_modeling_deformable_detr.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fix not found require_torch_bf16 function
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      ed74d978
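      A sketch of what the change enables — running the custom deformable-attention CUDA kernel under bfloat16. It needs a bf16-capable GPU, and the checkpoint name is assumed:

      ```python
      # Load Deformable DETR in bfloat16 and run a bf16 forward pass on GPU.
      import torch
      from transformers import DeformableDetrForObjectDetection

      model = DeformableDetrForObjectDetection.from_pretrained(
          "SenseTime/deformable-detr", torch_dtype=torch.bfloat16
      ).to("cuda")

      pixel_values = torch.rand(1, 3, 800, 800, dtype=torch.bfloat16, device="cuda")
      with torch.no_grad():
          outputs = model(pixel_values=pixel_values)
      ```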
    • Fix OneFormer `post_process_instance_segmentation` for panoptic tasks (#29304) · 8ef98628
      Nick DeGroot authored
      * 🐛 Fix oneformer instance post processing when using panoptic task type
      
      * Add unit test for oneformer instance post processing panoptic bug
      
      ---------
      Co-authored-by: Nick DeGroot <1966472+nickthegroot@users.noreply.github.com>
      8ef98628
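      A sketch of the call path the fix targets — instance post-processing on outputs from a panoptic-task checkpoint; checkpoint and image path are assumptions:

      ```python
      # Run OneFormer with the panoptic task token, then apply instance
      # post-processing, the combination this fix addresses.
      from PIL import Image
      from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

      processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_coco_swin_large")
      model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_swin_large")

      image = Image.open("scene.jpg")  # hypothetical image
      inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
      outputs = model(**inputs)

      result = processor.post_process_instance_segmentation(
          outputs, target_sizes=[image.size[::-1]]  # (height, width)
      )[0]
      ```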
  17. 01 Mar, 2024 1 commit