1. 11 Apr, 2024 3 commits
    • Update output of SuperPointForKeypointDetection (#29809) · 5569552c
      NielsRogge authored
      * Remove auto class
      
      * Update ImagePointDescriptionOutput
      
      * Update model outputs
      
      * Rename output class
      
      * Revert "Remove auto class"
      
      This reverts commit ed4a8f549d79cdb0cdf7aa74205a185c41471519.
      
      * Address comments
    • Fix Llava chat template examples (#30130) · fbdb978e
      lewtun authored
    • Adding grounding dino (#26087) · b752ad30
      Eduardo Pacheco authored
      
      
      * Fixed typo when converting weights to GroundingDINO vision backbone
      
      * Final modifications on modeling
      
      * Removed unnecessary class
      
      * Fixed convert structure
      
      * Added image processing
      
      * make fixup partially completed
      
      * Now text_backbone_config has its own class
      
      * Modified convert script
      
      * Removed unnecessary config attribute
      
      * Added new function to generate sub sentence mask
      
      * Renamed parameters with gamma in the name as it's currently not allowed
      
      * Removed tokenization and image_processing scripts since we'll map from existing models
      
      * Fixed some issues with configuration
      
      * Just some modifications on conversion script
      
      * Other modifications
      
      * Copied deformable detr
      
      * First commit
      
      * Added bert to model
      
      * Bert validated
      
      * Created Text and Fusion layers for Encoder
      
      * Adapted Encoder layer
      
      * Fixed typos
      
      * Adjusted Encoder
      
      * Converted encoder to hf
      
      * Modified Decoder Layer
      
      * Modified main decoder class
      
      * Removed copy comments
      
      * Fixed forward from GroundingDINOModel and GroundingDINODecoder
      
      * Added all necessary layers, configurations and forward logic up to GroundingDINOModel
      
      * Added all layers to conversion
      
      * Fixed outputs for GroundingDINOModel and GroundingDINOForObjectDetection
      
      * Fixed mask input to encoders and fixed nn.MultiheadAttention batch first and attn output
      
      * Fixed forward from GroundingDINOTextEnhancerLayer
      
      * Fixed output bug with GroundingDINODeformableLayer
      
      * Fixed bugs that prevent GroundingDINOForObjectDetection to run forward method
      
      * Fixed attentions to be passed correctly
      
      * Passing temperature arg when creating Sine position embedding
      
      * Removed copy comments
      
      * Added temperature argument for position embedding
      
      * Fix style
      
      * Improve fixup
      
      * Improve conversion script
      
      * Improve conversion script
      
      * Add GroundingDINOProcessor
      
      * More improvements
      
      * Return token type ids
      
      * something
      
      * Fix more tests
      
      * More improvements
      
      * More cleanup
      
      * More improvements
      
      * Fixed tests, improved modeling and config
      
      * More improvements and fixing tests
      
      * Improved tests and modeling
      
      * Improved tests and added image processor
      
      * Improved tests inference
      
      * More improvements
      
      * More test improvements
      
      * Fixed last test
      
      * Improved docstrings and comments
      
      * Fix style
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Better naming
      
      * Better naming
      
      * Added Copied statement
      
      * Added Copied statement
      
      * Moved param init from GroundingDINOBiMultiHeadAttention
      
      * Better naming
      
      * Fixing clamp style
      
      * Better naming
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Improving conversion script
      
      * Improved config
      
      * Improved naming
      
      * Improved naming again
      
      * Improved grounding-dino.md
      
      * Moved grounding dino to multimodal
      
      * Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Fixed docstrings and style
      
      * Fix docstrings
      
      * Remove timm attributes
      
      * Reorder imports
      
      * More improvements
      
      * Add Grounding DINO to pipeline
      
      * Remove model from check_repo
      
      * Added grounded post_process to GroundingDINOProcessor
      
      * Fixed style
      
      * Fixed GroundingDINOTextPrenetConfig docstrings
      
      * Aligned inputs.keys() when both image and text are passed with model_input_names
      
      * Added tests for GroundingDINOImageProcessor and GroundingDINOProcessor
      
      * Testing post_process_grounded_object_detection from GroundingDINOProcessor at test_inference_object_detection_head
      
      * Fixed order
      
      * Marked test with require_torch
      
      * Temporarily changed repo_id
      
      * More improvements
      
      * Fix style
      
      * Final improvements
      
      * Improve annotators
      
      * Fix style
      
      * Add is_torch_available
      
      * Remove type hints
      
      * vocab_tokens as one liner
      
      * Removed print statements
      
      * Renamed GroundingDINOTextPrenetConfig to GroundingDINOTextConfig
      
      * remove unnecessary comments
      
      * Removed unnecessary tests on conversion script
      
      * Renamed GroundingDINO to camel case GroundingDino
      
      * Fixed GroundingDinoProcessor docstrings
      
      * loading MSDA kernels in the modeling file
      
      * Fix copies
      
      * Replace nn.multiheadattention
      
      * Replace nn.multiheadattention
      
      * Fixed inputs for GroundingDinoMultiheadAttention & order of modules
      
      * Fixed processing to avoid messing with inputs
      
      * Added more tips for GroundingDino
      
      * Make style
      
      * Changing name to align with SAM
      
      * Replace final nn.multiheadattention
      
      * Fix model tests
      
      * Update year, remove GenerationTesterMixin
      
      * Address comments
      
      * Address more comments
      
      * Rename TextPrenet to TextModel
      
      * Rename hidden_states
      
      * Address more comments
      
      * Address more comments
      
      * Address comment
      
      * Address more comments
      
      * Address merge
      
      * Address comment
      
      * Address comment
      
      * Address comment
      
      * Make style
      
      * Added layer norm eps to layer norms
      
      * Address more comments
      
      * More fixes
      
      * Fixed equivalence
      
      * Make fixup
      
      * Remove print statements
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Add comment
      
      * Address comment
      
      * Remove overwriting of test
      
      * Fix bbox_embed
      
      * Improve decoder_bbox_embed_share
      
      * Simplify outputs
      
      * Updated post_process_grounded_object_detection
      
      * Renamed sources to feature_maps
      
      * Improved tests for Grounding Dino ImageProcessor and Processor
      
      * Fixed test requirements and imports
      
      * Fixed image_processing
      
      * Fixed processor tests
      
      * Fixed imports for image processing tests
      
      * Fix copies
      
      * Updated modeling
      
      * Fix style
      
      * Moved functions to correct position
      
      * Fixed copy issues
      
      * Update src/transformers/models/deformable_detr/modeling_deformable_detr.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      
      * Keeping consistency custom cuda kernels for MSDA
      
      * Make GroundingDinoProcessor logic clearer
      
      * Updated Grounding DINO checkpoints
      
      * Changed tests to correct structure
      
      * Updated gpu-cpu equivalence test
      
      * fix copies
      
      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed errors and style
      
      * Fix copies
      
      * Removed inheritance from PreTrainedModel from GroundingDinoTextModel
      
      * Fixed GroundingDinoTextModel
      
      * Fixed type of default backbone config
      
      * Fixed missing methods for GroundingDinoTextModel and Added timm support for GroundingDinoConvEncoder
      
      * Addressed comments
      
      * Addressed batched image processing tests
      
      * Addressed zero shot test comment
      
      * Addressed tip comment
      
      * Removed GroundingDinoTextModel from check_repo
      
      * Removed inplace masking
      
      * Addressed comments
      
      * Addressed comments
      
      * Addressed comments
      
      * Fix copies
      
      * Fixing timm test
      
      * Fixed batching equivalence test
      
      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
      
      * Addressed more comments
      
      * Added a new comment
      
      * Reduced image size
      
      * Addressed more comments
      
      * Nits
      
      * Nits
      
      * Changed the way text_config is initialized
      
      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: Niels <niels.rogge1@gmail.com>
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Eduardo Pacheco <eduardo.pacheco@limehome.com>
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
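One bullet in the Grounding DINO PR above mentions a new function to generate a "sub sentence mask", which keeps tokens from different text phrases from attending to each other. The helper below is a hypothetical, simplified sketch of that idea in plain Python; the function name and the singleton treatment of separator tokens are assumptions, not the PR's actual code.

```python
# Hypothetical sketch of a "sub-sentence" attention mask: tokens in the same
# phrase may attend to each other, while tokens from different phrases
# (delimited by a separator token such as ".") are masked out.

def sub_sentence_mask(token_ids, separator_ids):
    """Return an n x n boolean mask where mask[i][j] is True iff tokens i and
    j belong to the same phrase (separators attend only to themselves)."""
    phrase = []          # phrase index per token
    current = 0          # running phrase index
    sep_counter = -1     # unique negative indices make separators singletons
    for tok in token_ids:
        if tok in separator_ids:
            phrase.append(sep_counter)
            sep_counter -= 1
            current += 1  # next real token starts a new phrase
        else:
            phrase.append(current)
    n = len(token_ids)
    return [[phrase[i] == phrase[j] for j in range(n)] for i in range(n)]
```

For example, with `[1, 2, 0, 3]` and separator id `0`, tokens 1 and 2 share a phrase while token 3 sits in a new one.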
  2. 10 Apr, 2024 3 commits
    • Add recurrent gemma (#30143) · 0fe44059
      Arthur authored
      
      
      * Fork.
      
      * RecurrentGemma initial commit.
      
      * Updating __init__.py.
      
      * Minor modification to how we initialize the cache.
      Changing how the config specifies the architecture.
      
      * Reformat code to 4 spaces.
      Fixed a few typos.
      
      * Fixed the forward pass.
      Still unclear on the cache?
      
      * Fixed the RecurrentGemmaForCausalLM
      
      * Minor comment that we might not need attention_mask and output_attention arguments.
      
      * Now cache should work as well.
      
      * Adding a temporary example to check whether the model generation works.
      
      * Adding the tests and updating imports.
      
      * Adding the example file missing in the previous commit.
      
      * First working example.
      
      * Removing .gitignore and reverting parts of __init__.
      
      * Re-add .gitignore.
      
      * Addressing comments for configuration.
      
      * Move mask creation to `_prepare_inputs_for_generation`.
      
      * First try at integration tests:
      1. AttributeError: 'GriffinCausalLMOutput' object has no attribute 'attentions'.
      2. `cache_position` not passed
      
      * Transferring between machines.
      
      * Running normal tests.
      
      * Minor fix.
      
      * More fixes.
      
      * Addressing more comments.
      
      * Minor fixes.
      
      * first stab at cleanup
      
      * more refactoring
      
      * fix copies and else
      
      * renaming and get init to work
      
      * fix causal mask creation
      
      * update
      
      * nit
      
      * fix a hell lot of things
      
      * updates
      
      * update conversion script
      
      * make all keys importable
      
      * nits
      
      * add auto mappings
      
      * properly convert ffw_up and down
      
      * add scaling
      
      * fix generations
      
      * for recurrent dtype
      
      * update
      
      * fix going beyond window
      
      * fixup
      
      * add missing files
      
      * current updates to remove last einops
      
      * finish modeling refactor
      
      * TADA
      
      * fix compile
      
      * fix most failing tests
      
      * update tests
      
      * refactor and update
      
      * update
      
      * nits, fixup and update tests
      
      * more fixup
      
      * nits
      
      * fix imports
      
      * test format
      
      * fixups
      
      * nits
      
      * tuple typing
      
      * fix code quality
      
      * add model card
      
      * fix doc
      
      * skip most generation tests
      
      * nits
      
      * style
      
      * doc fixes
      
      * fix pr and check_copies?
      
      * last nit
      
      * oupsy
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * update
      
      * Update src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update based on review
      
      * doc nit
      
      * fix quality
      
      * quality
      
      * fix slow test model path
      
      * update default dtype
      
      * ignore attributes that can be safely ignored in check config attributes
      
      * 0lallalala come on
      
      * save nit
      
      * style
      
      * remove to dict update
      
      * make sure we can also run in float16
      
      * style
      
      ---------
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: Aleksandar Botev <botev@google.com>
      Co-authored-by: Leonard Berrada <lberrada@users.noreply.github.com>
      Co-authored-by: anushanf <anushanf@google.com>
      Co-authored-by: botev <botevmg@gmail.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
    • [UDOP] Fix tests (#29573) · 50c1c19f
      NielsRogge authored
      * Fix tests
      
      * Fix tests
      
      * Remove no_split_modules
    • [tests] make 2 tests device-agnostic (#30008) · 18546378
      Fanli Lin authored
      add torch device
  3. 09 Apr, 2024 1 commit
  4. 08 Apr, 2024 4 commits
  5. 05 Apr, 2024 2 commits
  6. 04 Apr, 2024 1 commit
    • [`ProcessingIdefics`] Attention mask bug with padding (#29449) · 75b76a5e
      byi8220 authored
      * Defaulted IdeficsProcessor padding to 'longest', removed manual padding
      
      * make fixup
      
      * Defaulted processor call to padding=False
      
      * Add padding to processor call in IdeficsModelIntegrationTest as well
      
      * redefaulted padding=longest again
      
      * fixup/doc
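The Idefics fix above boils down to batching sequences with `padding='longest'` so the attention mask marks real tokens rather than relying on manual padding. A generic stdlib sketch of that behavior (illustrative only; the real work happens inside `IdeficsProcessor` and its tokenizer):

```python
# Generic sketch of "longest" padding with attention masks: each sequence is
# right-padded to the batch's longest length, and the mask is 1 for real
# tokens and 0 for padding.

def pad_longest(batch, pad_id=0):
    """Pad token-id lists to the longest length; return (input_ids, mask)."""
    max_len = max(len(seq) for seq in batch)
    input_ids, attention_mask = [], []
    for seq in batch:
        n_pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * n_pad)
        attention_mask.append([1] * len(seq) + [0] * n_pad)
    return input_ids, attention_mask
```

With the processor defaulting to this behavior, downstream attention layers see padding positions masked out instead of attending to garbage tokens.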
  7. 03 Apr, 2024 4 commits
  8. 02 Apr, 2024 4 commits
    • Fix `skip_special_tokens` for `Wav2Vec2CTCTokenizer._decode` (#29311) · 15cd6871
      Minsub Lee (Matt) authored
      * Fix skip_special_tokens process for Wav2Vec2CTCTokenizer._decode
      
      * Fix skip_special_tokens for Wav2Vec2CTCTokenizer._decode
      
      * Exclude pad_token filtering since it is used as CTC-blank token
      
      * Add small test for skip_special_tokens
      
      * Update decoding test for added new token
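The fix above hinges on the pad token doubling as the CTC blank: it must be excluded from generic special-token filtering because the repeat-collapse step consumes it first. A simplified decode sketch (not the actual `Wav2Vec2CTCTokenizer` code):

```python
from itertools import groupby

# Simplified CTC-style decode: collapse repeats first, then drop the blank.
# The pad token serves as the CTC blank, so it is handled by the collapse
# step rather than by generic special-token filtering.

def ctc_decode(ids, blank_id=0, other_special_ids=(), skip_special_tokens=True):
    # 1) CTC collapse: merge consecutive duplicate ids.
    collapsed = [key for key, _ in groupby(ids)]
    # 2) Remove the blank (pad) token.
    out = [i for i in collapsed if i != blank_id]
    # 3) Optionally filter the remaining special tokens (e.g. <s>, </s>).
    if skip_special_tokens:
        out = [i for i in out if i not in other_special_ids]
    return out
```

If the pad token were filtered out up front along with the other specials, repeated characters separated by a blank (e.g. "ll") would wrongly collapse into one.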
    • Add Flash Attention 2 support to Musicgen and Musicgen Melody (#29939) · 0d04b1e2
      Yoach Lacombe authored
      * add FA2 to o.g Musicgen
      
      * make style
      
      * add FA2 support to Musicgen Melody
      
      * add generation FA2 tests to o.g Musicgen
      
      * make style and fix copies
      
      * add Musicgen to FA2 docs + deprecate list
      
      * add sdpa supports to Musicgen's
      
      * make style and fix copies
      
      * refactor attention implementation arguments
      
      * add Copied from to sdpa tests
      
      * add copied form in sdpa tests melody
      
      * add copied for FA2 generation tests
      
      * add FA2 inference copied from
      
      * make style
    • Fix 29807 sinusoidal positional encodings in Flaubert, Informer and XLM (#29904) · 416711c3
      Hovnatan Karapetyan authored
      * Fix sinusoidal_embeddings in FlaubertModel
      
      * Fix for Informer
      
      * Fix for XLM
      
      * Move sinusoidal emb for XLM
      
      * Move sinusoidal emb for Flaubert
      
      * Small cleanup
      
      * Add comments on tests code copied from
      
      * Add with Distilbert->
    • [`generate`] fix breaking change for patch (#29976) · 83b26dd7
      Arthur authored
      * fix bug and add tests
      
      * nit
      
      * other way to get the cur len instead of attention mask
      
      * more places where this might have been broken
      
      * nit
      
      * oups
      
      * inputs_embeds vs input_embeds
      
      * test generated outputs
      
      * style
      
      * nit
      
      * fix
      
      * skip failing biogpt
  9. 01 Apr, 2024 2 commits
  10. 29 Mar, 2024 1 commit
  11. 28 Mar, 2024 5 commits
  12. 27 Mar, 2024 4 commits
    • MixtralSparseMoeBlock: add gate jitter (#29865) · a25037be
      Lorenzo Verardo authored
      This commit adds gate jitter to MixtralSparseMoeBlock's input data
      before passing it through the MoE layer, if turned on.
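As described, the gate jitter scales the block's input by uniform multiplicative noise during training only, which regularizes the router's expert assignments. A minimal pure-Python sketch, assuming noise drawn from [1 - eps, 1 + eps] (illustrative; the real block operates on tensors):

```python
import random

# Sketch of router gate jitter: during training, each input value is scaled
# by uniform noise in [1 - jitter_noise, 1 + jitter_noise] before the MoE
# router sees it. Illustrative only, not the actual MixtralSparseMoeBlock.

def apply_gate_jitter(hidden_states, jitter_noise, training=True):
    if not training or jitter_noise == 0.0:
        return hidden_states  # no-op at inference or when turned off
    return [h * random.uniform(1.0 - jitter_noise, 1.0 + jitter_noise)
            for h in hidden_states]
```

At inference time the jitter is skipped, so routing stays deterministic for a given input.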
    • Fix 29807, sinusoidal positional encodings overwritten by post_init() (#29813) · a81cf9ee
      Hovnatan Karapetyan authored
      * Check for requires_grad when initing weights
      
      * Add unit test
      
      * Move sinusoidal positional encoding generation after post_init()
      
      * Add modules to skip init list
      
      * Move create_sinusoidal_embeddings to _init_weights
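For reference, the sinusoidal table these fixes protect from re-initialization follows the standard formula: even channels take sin(pos / 10000^(2i/d)) and odd channels the matching cos. A plain-Python sketch of that construction (the library builds the same values into a weight tensor):

```python
import math

# Standard sinusoidal position-embedding table: row = position, column =
# channel; even channels get sin, odd channels get the paired cos.

def sinusoidal_embeddings(n_pos, dim):
    table = []
    for pos in range(n_pos):
        row = []
        for j in range(dim):
            angle = pos / (10000 ** (2 * (j // 2) / dim))
            row.append(math.sin(angle) if j % 2 == 0 else math.cos(angle))
        table.append(row)
    return table
```

Because these values are deterministic rather than learned, a generic `post_init()` weight initializer must skip (or re-run) them, which is what the fix ensures.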
    • Mamba `slow_forward` gradient fix (#29563) · cefb819f
      Anton Vlasjuk authored
      * FIX: Cached slow forward in mamba
      - additionally added mamba cached test
      - added unused test (mamba causal lm forward and backward)
      - fixed typo: "causl" --> "causal"
      
      * formatting
      
      * fix: use real `slow_forward` call instead of torch module's
      
      * add shape assertion for mixer block test
      
      * adjust shape assertion
    • Add Qwen2MoE (#29377) · 1c39974a
      Bo Zheng authored
      
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * update model name & test
      
      * update readme
      
      * update class names & readme & model_doc of Qwen2MoE.
      
      * update architecture name
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix style
      
      * fix test when there are sparse and non sparse layers
      
      * fixup
      
      * Update README.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixup
      
      * fixup
      
      * add archive back
      
      * fix integration test
      
      * fixup
      
      ---------
      Co-authored-by: bozheng-hit <dsoul0621@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
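The "fix test when there are sparse and non sparse layers" bullet reflects that not every Qwen2MoE decoder layer uses the MoE block; some keep a plain MLP. A hedged sketch of a layer-selection predicate: the names `decoder_sparse_step` and `mlp_only_layers` mirror fields of the released Qwen2MoeConfig, but the exact logic below is an assumption, not the library's code.

```python
# Hedged sketch: decide whether decoder layer `layer_idx` should be a sparse
# (MoE) layer. Layers listed in `mlp_only_layers` always use a dense MLP;
# otherwise every `decoder_sparse_step`-th layer is sparse.

def is_sparse_layer(layer_idx, decoder_sparse_step=2, mlp_only_layers=()):
    if layer_idx in mlp_only_layers:
        return False
    return decoder_sparse_step > 0 and (layer_idx + 1) % decoder_sparse_step == 0
```

A test over such a model has to exercise both kinds of layers, which is presumably why a mixed sparse/dense configuration needed its own fix.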
  13. 25 Mar, 2024 1 commit
  14. 22 Mar, 2024 1 commit
  15. 21 Mar, 2024 1 commit
  16. 20 Mar, 2024 3 commits