  1. 13 May, 2024 1 commit
    • CI: update to ROCm 6.0.2 and test MI300 (#30266) · 37bba2a3
      fxmarty authored
      
      
      * update to ROCm 6.0.2 and test MI300
      
      * add callers for mi300
      
      * update dockerfile
      
      * fix trainer tests
      
      * remove apex
      
      * style
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * update to torch 2.3
      
      * add workflow dispatch target
      
      * we may need branches: mi300-ci after all
      
      * nit
      
      * fix docker build
      
      * nit
      
      * add check runner
      
      * remove docker-gpu
      
      * fix issues
      
      * fix
      
      ---------
      Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
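      The PR above moves the AMD CI to ROCm 6.0.2 / torch 2.3 on MI300 runners. A quick, generic sanity check (not taken from the PR) that a job actually landed on the ROCm stack:

      ```python
      import torch

      # ROCm builds of PyTorch expose a HIP version string; CUDA builds report None.
      print("HIP:", torch.version.hip, "| accelerator visible:", torch.cuda.is_available())
      if torch.cuda.is_available():
          print(torch.cuda.get_device_name(0))  # expect an MI300-class device on the new runners
      ```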
  2. 26 Apr, 2024 2 commits
    • Fix GroundingDINO, DPR after BERT SDPA update (#30506) · e7d52a10
      amyeroberts authored
      Fix GroundingDINO, DPR after BERT SDPA update
    • [`BERT`] Add support for sdpa (#28802) · dfa7b580
      JB (Don) authored
      * Adding SDPA support for BERT
      
      * Using the proper input name for testing model input in inference()
      
      * Adding documentation for SDPA in BERT model page
      
      * Use the stable link for the documentation
      
      * Adding a gate to only call .contiguous() for torch < 2.2.0
      
      * Additions and fixes to the documentation
      
      * Minor updates to documentation
      
      * Adding extra requirements needed for the contiguous() bug
      
      * Adding "Adapted from" in plcae of the "Copied from"
      
      * Add benchmark speedup tables to the documentation
      
      * Minor fixes to the documentation
      
      * Use ClapText as a replacement for Bert in the Copied-From
      
      * Some more fixes for the fix-copies references
      
      * Overriding the test_eager_matches_sdpa_generate in bert tests to not load with low_cpu_mem_usage
      
      [test all]
      
      * Undo changes to separate test
      
      * Refactored SDPA self attention code for KV projections
      
      * Change use_sdpa to attn_implementation
      
      * Fix test_sdpa_can_dispatch_on_flash by preparing input (required for MultipleChoice models)
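      A minimal sketch of opting BERT into the new SDPA path (checkpoint and prompt are illustrative; `attn_implementation="sdpa"` is the switch this PR wires up):

      ```python
      import torch
      from transformers import AutoTokenizer, BertModel

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      # "sdpa" routes attention through torch.nn.functional.scaled_dot_product_attention
      model = BertModel.from_pretrained(
          "bert-base-uncased", attn_implementation="sdpa", torch_dtype=torch.float16
      ).to("cuda")

      inputs = tokenizer("SDPA leaves the public API unchanged.", return_tensors="pt").to("cuda")
      with torch.no_grad():
          print(model(**inputs).last_hidden_state.shape)
      ```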
  3. 24 Apr, 2024 1 commit
    • Phi-3 (#30423) · c9693db2
      Gustavo de Rosa authored
      * chore(root): Initial commit of Phi-3 files.
      
      * fix(root): Fixes Phi-3 missing on readme.
      
      * fix(root): Ensures files are consistent.
      
      * fix(phi3): Fixes unit tests.
      
      * fix(tests): Fixes style of phi-3 test file.
      
      * chore(tests): Adds integration tests for Phi-3.
      
      * fix(phi3): Removes additional flash-attention usage, e.g., swiglu and rmsnorm.
      
      * fix(phi3): Fixes incorrect docstrings.
      
      * fix(phi3): Fixes docstring typos.
      
      * fix(phi3): Adds support for Su and Yarn embeddings.
      
      * fix(phi3): Improves according to the first batch of reviews.
      
      * fix(phi3): Uses up_states instead of y in Phi3MLP.
      
      * fix(phi3): Uses gemma rotary embedding to support torch.compile.
      
      * fix(phi3): Improves how rotary embedding classes are defined.
      
      * fix(phi3): Fixes inv_freq not being re-computed for extended RoPE.
      
      * fix(phi3): Adds last suggestions to modeling file.
      
      * fix(phi3): Splits inv_freq calculation in two lines.
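      Once merged, Phi-3 loads through the auto classes like any other causal LM; a usage sketch (checkpoint name, dtype, and prompt are assumptions):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("Rotary position embeddings work by", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
      ```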
  4. 18 Apr, 2024 3 commits
    • Add DBRX Model (#29921) · 005b957f
      Abhi Venigalla authored
      
      
      * wip
      
      * fix __init__.py
      
      * add docs
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * address comments 1
      
      * work on make fixup
      
      * pass configs down
      
      * add sdpa attention
      
      * remove DbrxBlock
      
      * add to configuration_auto
      
      * docstring now passes formatting test
      
      * fix style
      
      * update READMEs
      
      * add dbrx to modeling_auto
      
      * make fix-copies generated this
      
      * add DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * config docstring passes formatting test
      
      * rename moe_loss_weight to router_aux_loss_coef
      
      * add to flash-attn documentation
      
      * fix model-path in tests
      
      * Explicitly make `"silu"` the default `ffn_act_fn`
      Co-authored-by: Wing Lian <wing.lian@gmail.com>
      
      * default to using router_aux_loss_coef over ffn_config[moe_loss_weight]
      
      * fix _flash_attn_uses_top_left_mask and is_causal
      
      * fix tests path
      
      * don't use token type IDs
      
      * follow Llama and remove token_type_ids from test
      
      * init ConfigTester differently so tests pass
      
      * remove multiple choice test
      
      * remove question + answer test
      
      * remove sequence classification test
      
      * remove token classification test
      
      * copy Llama tests and remove token_type_ids from test inputs
      
      * do not test pruning or headmasking; style code
      
      * add _tied_weights_keys parameter to pass test
      
      * add type hints
      
      * fix type check
      
      * update config tester
      
      * remove masked_lm test
      
      * remove encoder tests
      
      * initialize DbrxModelTester with correct params
      
      * style
      
      * torch_dtype does not rely on torch
      
      * run make fixup, fix-copies
      
      * use https://huggingface.co/v2ray/dbrx-base-fixed/blob/main/modeling_dbrx.py
      
      
      
      * add copyright info
      
      * fix imports and DbrxRotaryEmbedding
      
      * update DbrxModel docstring
      
      * use copies
      
      * change model path in docstring
      
      * use config in DbrxFFN
      
      * fix flashattention2, sdpaattention
      
      * input config to DbrxAttention, DbrxNormAttentionNorm
      
      * more fixes
      
      * fix
      
      * fix again!
      
      * add informative comment
      
      * fix ruff?
      
      * remove print statement + style
      
      * change doc-test
      
      * fix doc-test
      
      * fix docstring
      
      * delete commented out text
      
      * make defaults match dbrx-instruct
      
      * replace `router_aux_loss_coef` with `moe_loss_weight`
      
      * is_decoder=True
      
      * remove is_decoder from configtester
      
      * implement sdpa properly
      
      * make is_decoder pass tests
      
      * start on the GenerationTesterMixin tests
      
      * add dbrx to sdpa documentation
      
      * skip weight typing test
      
      * style
      
      * initialize smaller model
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Add DBRX to toctree
      
      * skip test_new_cache_format
      
      * make config defaults smaller again
      
      * add pad_token_id
      
      * remove pad_token_id from config
      
      * Remove all references to DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * Update src/transformers/models/dbrx/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update docs/source/en/model_doc/dbrx.md
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

      * Update src/transformers/models/dbrx/configuration_dbrx.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update docs/source/en/model_doc/dbrx.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * fix typo

      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update docs, fix configuration_auto.py
      
      * address pr comments
      
      * remove is_decoder flag
      
      * slice
      
      * fix requires grad
      
      * remove grad
      
      * disconnect differently
      
      * remove grad
      
      * enable grads
      
      * patch
      
      * detach expert
      
      * nissan al ghaib
      
      * Update modeling_dbrx.py
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * replace "Gemma" with "Dbrx"
      
      * remove # type: ignore
      
      * don't hardcode vocab_size
      
      * remove ToDo
      
      * Re-add removed idefics2 line
      
      * Update test to use tiny-random!
      
      * Remove TODO
      
      * Remove one more case of loading the entire dbrx-instruct in the tests
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * address some comments
      
      * small model
      
      * add dbrx to tokenization_auto
      
      * More docstrings with add_start_docstrings
      
      * Dbrx for now
      
      * add PipelineTesterMixin
      
      * Update src/transformers/models/dbrx/configuration_dbrx.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * remove flash-attn2 import error

      * fix docstring
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * add usage example

      * put on one line
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * fix ffn_act_fn
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * change "dbrx" to "DBRX" for display purposes.
      
      * fix __init__.py?
      
      * fix __init__.py
      
      * fix README
      
      * return the aux_loss
      
      * remove extra spaces
      
      * fix configuration_auto.py
      
      * fix format in tokenization_auto
      
      * remove new line
      
      * add more usage examples
      
      ---------
      Co-authored-by: Abhi Venigalla <abhi.venigalla@databricks.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Eitan Turok <eitan.turok@databricks.com>
      Co-authored-by: Eitan Turok <150733043+eitanturok@users.noreply.github.com>
      Co-authored-by: Wing Lian <wing.lian@gmail.com>
      Co-authored-by: Eitan Turok <eitanturok@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: Your Name <you@example.com>
      Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
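      The MoE knobs discussed above (e.g. `moe_loss_weight` living inside `ffn_config`) can be inspected without downloading the 132B-parameter weights; a sketch, assuming access to the gated repo:

      ```python
      from transformers import AutoConfig

      # Inspect the MoE configuration without materializing the model.
      config = AutoConfig.from_pretrained("databricks/dbrx-instruct")
      print(config.ffn_config.moe_loss_weight)  # router aux-loss coefficient
      print(config.ffn_config.ffn_act_fn)       # silu by default, per the commit above

      # Loading the weights themselves needs multiple large GPUs:
      # model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="auto")
      ```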
    • Add jamba (#29943) · 3f20877d
      tomeras91 authored
      * Add jamba arch
      
      * apply "make fix-copies" changes
      
      * fix link to model in JambaConfig docstring
      
      * Add n_ctx in modeling file because repo-consistency wants that
      
      * Add jamba to flash attention and sdpa documentation
      
      * mamba dt_proj quant fix now works for LoRA as well
      
      * override test_left_padding_compatibility and use a more permissive tolerance. left padding numerical differences are accentuated by mamba layers
      
      * add jamba to tokenization auto
      
      * fix comments of shape (PR #24 in the model page: https://huggingface.co/ai21labs/Jamba-v0.1/discussions/24)
      
      * simple PR fixes
      
      * remove unnecessary kwargs from JambaAttentionDecoderLayer and JambaMambaDecoderLayer
      
      * remove the LoRA hack for the mamba dt_proj bias. It was solved in huggingface/peft#1530 (https://github.com/huggingface/peft/pull/1530)
      
      * Add copied comment on JambaMLP (it's the same as MixtralMLP)
      
      * remove padding_mask warnings. It's not supported anymore
      
      * fix docstring. Float instead of int
      
      * A few more minor PR fixes
      
      * (1) lowercase names for mamba layernorms (2) remove _apply_inner_layernorms and do it directly in the forward pass
      
      * Return None attention weights from mamba layers. Append to all attentions only if not None.
      
      * remove some leftover jamba archive lists
      
      * Better separation between expert vs non-expert layers. non-expert layers return None as router_logits, and it is not concatenated to all_router_logits returned from JambaModel
      
      * no need to take router_logits at config.expert_layer_offset anymore. result.router_logits now holds results only for expert layers
      
      * Add Jamba paper on READMEs
      
      * (1) rename n_ctx -> max_position_embeddings (2) don't use it in the modeling file since it's not needed (set it as an exception to check_config_attributes)
      
      * Add copied from comment
      
      * remove the code path for apply_inner_layernorms=False. Jamba always has the inner mamba layernorms
      
      * clearer docstring for _convert_to_standard_cache
      
      * style fixes
      
      * Change calc_logits_for_entire_prompt (bool) to num_logits_to_keep (int). Adapt assisted decoding code to use it. Also small change in low memory beam search decoding path to support this new int value in model_inputs
      
      * rename test so it still overrides what its meant to override
      
      * draft
      
      * oups
      
      * nit
      
      * remove more complex logic
      
      * fix names used in config
      
      * fix fix fix
      
      * style
      
      * fix some more failing tests
      
      * generate did not init the cache 🙃
      
      
      
      * more small nits
      
      * typo
      
      * config.mamba_expand * config.hidden_size for the intermediate size of the mamba shapes
      
      * fix init of pkv with torch.tensor()
      
      * empty tensor
      
      * fix some init issues
      
      * stupid changes required by generate because it does not even support its own DynamicCache class
      
      * more fixes
      
      * fix general assisted gen cache_position bug
      
      * tests passing
      
      * Add offsets and periods as SPECIAL_CASES_TO_ALLOW in check_config_attributes.py
      
      * fix reorder_cache to reorder mamba states and override some more functions in HybridMambaAttentionDynamicCache
      
      * no need to override test_past_key_values_format() and _check_past_key_values_for_generate() in tests anymore
      
      * fix docstrings and typehints for past_key_values
      
      * style fixes
      
      * fix docs
      
      * change typehint due to copy from Mixtral
      
      * forgot import
      
      * import order
      
      * Add configuration_jamba and modeling_jamba to not_doctested because the model is too big to download (in docstring of JambaForCausalLM.forward)
      
      * Add integration test with tiny tandom Jamba model on hub
      
      * fix flash attention cache shapes
      
      * bring back forgotten hidden states
      
      * rename HybridMambaAttentionDynamicCache.seqlen_offset to has_previous_state (and make bool) and bugfix - it should be set to True after a finished forward pass of the entire model
      
      * align integration test after modeling fixes
      
      * bugfix - mamba can use precomputed states only if forward pass is on a single token
      
      * bugfix - mamba can use precomputed states only if they match the batch size
      
      * typo
      
      * remove making _prepare_4d_causal_attention_mask a leaf function
      
      * stop using past_seq_len.get_seq_length(). Use cache positions instead. Adjust test (test_decoder_model_past_with_large_inputs) accordingly
      
      ---------
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
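      The hybrid attention/Mamba cache (`HybridMambaAttentionDynamicCache`) is constructed internally by `generate()`, so plain usage looks like any causal LM; a sketch (dtype and prompt are assumptions; the optional mamba kernels only affect speed):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "ai21labs/Jamba-v0.1"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("In the loveliest town of all,", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
      ```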
    • Add Flash Attention 2 to M2M100 model (#30256) · b65df514
      Alexander Visheratin authored
      
      
      * Added flash attention 2.
      
      * Fixes.
      
      * Fix inheritance.
      
      * Fixed init.
      
      * Remove stuff.
      
      * Added documentation.
      
      * Add FA2 to M2M100 documentation.
      
      * Add test.
      
      * Fixed documentation.
      
      * Update src/transformers/models/m2m_100/modeling_m2m_100.py
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/nllb.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed variable name.
      
      ---------
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
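      A sketch of enabling the new FA2 path on M2M100 (requires a flash-attn install and half precision; the translation pair is illustrative):

      ```python
      import torch
      from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

      tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="fr", tgt_lang="en")
      model = M2M100ForConditionalGeneration.from_pretrained(
          "facebook/m2m100_418M", attn_implementation="flash_attention_2", torch_dtype=torch.float16
      ).to("cuda")

      encoded = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt").to("cuda")
      generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
      print(tokenizer.batch_decode(generated, skip_special_tokens=True))
      ```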
  5. 17 Apr, 2024 1 commit
    • Add OLMo model family (#29890) · e4ea19b9
      Shane A authored
      * Add OLMo using add-new-model-like with Llama
      
      * Fix incorrect tokenizer for OLMo
      
      * Copy-paste relevant OLMo methods and their imports
      
      * Add OLMo config
      
      * Modify OLMo config to follow HF conventions
      
      * Remove unneeded Llama code from OLMo model
      
      * Add ability for OLMo model to output attentions
      
      * Add OLMoPreTrainedModel and OLMoModel
      
      * Add OLMoForCausalLM
      
      * Minor fixes to OLMo model for style and missing functions
      
      * Implement OLMo tokenizer
      
      * Implement OLMo to HF conversion script
      
      * Add tests for OLMo model
      
      * Add tests for OLMo fast tokenizer
      
      * Add auto-generated dummy objects
      
      * Remove unimplemented OLMo classes from auto and init classes and re-format
      
      * Add README and associated auto-generated files
      
      * Use OLMo names for common properties
      
      * Run make fixup
      
      * Remove `|` from OLMo typing
      
      * Remove unneeded tokenization_olmo.py
      
      * Revert model, config and converter to add-new-model-like Llama
      
      * Move logic for adding bos/eos token into GPTNeoxTokenizerFast
      
      * Change OLMoConfig defaults to match OLMo-7B
      
      * Use GPTNeoXTokenizerFast in OLMo tokenizer tests
      
      * Modify auto-generated OLMoModelTests to work for OLMo
      
      * Add non-parametric layer norm OLMoLayerNorm
      
      * Update weight conversion script for OLMo
      
      * Fix __init__ and auto structure for OLMo
      
      * Fix errors from make fixup
      
      * Remove OLMoTokenizerFast from documentation
      
      * Add missing 'Copied from' for OLMoModel._update_causal_mask
      
      * Run make fix-copies
      
      * Rearrange string replacements in OLMoForCausalLM Copied from
      
      * Move OLMo and Llama CausalLM.forward example into global constants
      
      * Fix OLMO_GENERATION_EXAMPLE doc string typo
      
      * Add option for qkv clipping to OLMo
      
      * Rearrange OLMoConfig kwargs in convert_olmo_weights_to_hf
      
      * Add clip_qkv to OLMoConfig in convert_olmo_weights_to_hf
      
      * Fix OLMo tokenization bug using conversion script
      
      * Keep model in full precision after conversion
      
      * Do not add eos token automatically
      
      * Update references to OLMo model in HF Hub
      
      * Do not add eos token during encoding by default
      
      * Fix Llama generation example
      
      * Run make fixup
      
      * OLMo 7B integration test fix
      
      * Remove unneeded special case for OLMoConfig
      
      * OLMo 7B Twin 2T integration test fix
      
      * Fix test_model_7b_greedy_generation
      
      * Remove test_compile_static_cache
      
      * Fix OLMo and Llama generation example
      
      * Run make fixup
      
      * Revert "OLMo 7B integration test fix"
      
      This reverts commit 4df56a4b150681bfa559846f40e9b7b7f97d7908.
      
      * Revert "OLMo 7B Twin 2T integration test fix"
      
      This reverts commit 9ff65a4a294ace89ab047b793ca55e623a9ceefc.
      
      * Ungate 7B integration tests and fix greedy generation test
      
      * Add retries for flaky test_eager_matches_sdpa_generate
      
      * Fix output of doc example for OLMoForCausalLM.forward
      
      * Downsize OLMo doc test for OLMoForCausalLM.forward to 1B model
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doc test
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doc test using end quotes
      
      * Remove pretraining_tp from OLMo config and model
      
      * Add missing 'Copied from' instances
      
      * Remove unneeded causal_mask from OLMoModel
      
      * Revert Llama changes
      
      * Ignore copy for OLMoForCausalLM.forward
      
      * Change 'OLMo' to 'Olmo' in classes
      
      * Move minimal OLMo tokenization tests to model tests
      
      * Add missed 'Copied from' for repeat_kv
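      As the log notes, OLMo reuses `GPTNeoXTokenizerFast` rather than shipping its own tokenizer class; a loading sketch (the converted-checkpoint name is an assumption):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "allenai/OLMo-7B-hf"  # assumed name for the converted checkpoint
      tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to GPTNeoXTokenizerFast
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("Language modeling is", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
      ```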
  6. 15 Apr, 2024 1 commit
    • Add Idefics2 (#30253) · 6b78360e
      amyeroberts authored
      
      
      * Initial add model additions
      
      * Test
      
      * All weights loading
      
      * Can perform full forward pass
      
      * Local and remote the same
      
      * Matching local and remote
      
      * Fixup
      
      * Idefics2Model importable; fixup docstrings
      
      * Don't skip by default
      
      * Remove deprecated use_resampler arg
      
      * Remove self.config
      
      * DecoupledLinear takes config
      
      * Tidy up
      
      * Enable eager attention and tidy up
      
      * Most tests passing
      
      * Update for batch of processed images
      
      * Add image processor
      
      * Update doc pages
      
      * Update conversion script
      
      * Remove erroneous breakpoint
      
      * Remove accidental spelling change
      
      * Update to reflect changes on hub - make generate work
      
      * Fix up
      
      * Image processor tests
      
      * Update tests
      
      * Add a processor
      
      * Add a processor
      
      * Update convert script
      
      * Update modeling file - remove fixmes
      
      * Bug fix
      
      * Add processing test
      
      * Use processor
      
      * Fix up
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Fix test
      
      * Update config - PR comments and defaults align with checkpoint
      
      * Reviewer comments
      
      * Add copied froms for flash attention
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove qk_layer_norm and freeze_layers functionality
      
      * Fix
      
      * Remove freeze_layer options from config
      
      * Sync with upstream main
      
      * Fix attention shapes siglip
      
      * Remove Llava-next refs - TO REBASE
      
      * Use AutoModel for text model
      
      * Add comment to explain vision embeddings
      
      * Fix issue with tie_word_embeddings
      
      * Address review comments
      
      * Fix and fix up
      
      * Chat templates for idefics
      
      * Fix copies
      
      * Fix
      
      * Add layer norms to FA2
      
      * Fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Fix
      
      * Review comments
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Update inputs merger
      
      * Merge weights in correct order
      
      * Update convert script
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Update template
      
      * Model code examples (fix idefics too)
      
      * More review comments
      
      * Tidy up
      
      * Update processing
      
      * Fix attention mask preparation
      
      * Update inputs_merger inputs
      
      * Vectorize inputs_merger
      
      * Update src/transformers/models/idefics2/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      
      * Review comments
      
      * saying bye to the `qk_layer_norms`
      
      * Simplify
      
      * Update latents
      
      * Remove erroneous readme changes
      
      * Return images when applying chat template
      
      * Fix bug - prompt images are for a single sample
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      
      * image splitting
      
      * fix test
      
      * some more comment
      
      * some comment
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/idefics2/image_processing_idefics2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update processor
      
      * Update model tests
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Don't add BOS in template

      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Remove index in examples

      * Update tests to reflect #13

      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * PR comment - consistent typing
      
      * Update readme and model doc
      
      * Update docs
      
      * Update checkpoint references
      
      * Update examples
      
      * Fix and update tests
      
      * Small addition
      
      * Update tests - remove copied from as no ignore placement copy could be found
      
      * Update example
      
      * small fixes
      
      * Update docs/source/en/model_doc/idefics2.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update docs/source/en/model_doc/idefics2.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update README.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Connector model as bridge
      
      * Fix up
      
      * Fix up
      
      * Don't pass model inputs for generation kwargs update
      
      * IDEFICS-2 -> Idefics2
      
      * Remove config archive name
      
      * IDEFICS-2 -> Idefics2
      
      * Add back llava-next
      
      * Update readmes
      
      * Add requirements for processor tester
      
      * Use custom convert_to_rgb to avoid possible BC
      
      * Fix doc example
      
      * Fix doc example
      
      * Skip model doc tests - as model too large
      
      * More doc example - account for image splitting
      
      * Update src/transformers/image_transforms.py
      
      * Fix config doctest
      
      ---------
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
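      A sketch of the processor + chat-template flow the PR converges on (image URL and prompt are illustrative):

      ```python
      import torch
      from transformers import AutoModelForVision2Seq, AutoProcessor
      from transformers.image_utils import load_image

      processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
      model = AutoModelForVision2Seq.from_pretrained(
          "HuggingFaceM4/idefics2-8b", torch_dtype=torch.float16, device_map="auto"
      )

      image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
      messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe the image."}]}]
      prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
      inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
      print(processor.batch_decode(model.generate(**inputs, max_new_tokens=40), skip_special_tokens=True)[0])
      ```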
  7. 02 Apr, 2024 1 commit
    • Add Flash Attention 2 support to Musicgen and Musicgen Melody (#29939) · 0d04b1e2
      Yoach Lacombe authored
      * add FA2 to o.g Musicgen
      
      * make style
      
      * add FA2 support to Musicgen Melody
      
      * add generation FA2 tests to o.g Musicgen
      
      * make style and fix copies
      
      * add Musicgen to FA2 docs + deprecate list
      
      * add sdpa support to Musicgen models
      
      * make style and fix copies
      
      * refactor attention implementation arguments
      
      * add Copied from to sdpa tests
      
      * add copied from in sdpa tests melody
      
      * add copied for FA2 generation tests
      
      * add FA2 inference copied from
      
      * make style
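      A sketch of text-to-audio generation with the new FA2 backend (a flash-attn install and half precision are assumed):

      ```python
      import torch
      from transformers import AutoProcessor, MusicgenForConditionalGeneration

      processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
      model = MusicgenForConditionalGeneration.from_pretrained(
          "facebook/musicgen-small", attn_implementation="flash_attention_2", torch_dtype=torch.float16
      ).to("cuda")

      inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt").to("cuda")
      audio = model.generate(**inputs, max_new_tokens=256)  # (batch, channels, samples)
      print(audio.shape, model.config.audio_encoder.sampling_rate)
      ```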
  8. 27 Mar, 2024 1 commit
    • Add Qwen2MoE (#29377) · 1c39974a
      Bo Zheng authored
      
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * update model name & test
      
      * update readme
      
      * update class names & readme & model_doc of Qwen2MoE.
      
      * update architecture name
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix style
      
      * fix test when there are sparse and non sparse layers
      
      * fixup
      
      * Update README.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixup
      
      * fixup
      
      * add archive back
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * update model name & test
      
      * update readme
      
      * update class names & readme & model_doc of Qwen2MoE.
      
      * update architecture name
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fixup
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * fix style
      
      * fix test when there are sparse and non sparse layers
      
      * fixup
      
      * add archive back
      
      * fix integration test
      
      * fixup
      
      ---------
      Co-authored-by: bozheng-hit <dsoul0621@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
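      As noted above, the MoE variant reuses `Qwen2Tokenizer` rather than introducing a `Qwen2MoeTokenizer`; a loading sketch (checkpoint name is an assumption):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen1.5-MoE-A2.7B"  # assumed checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id)  # plain Qwen2Tokenizer
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("Sparse and non-sparse layers can be mixed by", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
      ```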
  9. 20 Mar, 2024 1 commit
    • Add LLaVa-1.6, bis (#29586) · d91fd7f9
      NielsRogge authored
      
      
      * First draft
      
      * Fix tests, add docs
      
      * Improve docstrings
      
      * Fix test
      
      * Address comments
      
      * Address comments
      
      * Remove vocab_size attribute
      
      * Remove batch_size
      
      * Address comment
      
      * Add image processor tests
      
      * Support fx
      
      * Update docstring
      
      * Add support for 34b
      
      * Convert 34b model
      
      * Add integration tests
      
      * Update checkpoints
      
      * Convert vicuna-13b, remove doc tests
      
      * Remove script
      
      * Remove file
      
      * Address comments
      
      * Improve docstrings
      
      * Deprecate vocab_size
      
      * Remove aspect_ratio_setting
      
      * Address comments
      
      * Update READMEs
      
      * Add tips about chat templates
      
      * Fix tests
      
      * Deprecate vocab_size safely
      
      * Update tests
      
      ---------
      Co-authored-by: Amy Roberts <22614925+amyeroberts@users.noreply.github.com>
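      A sketch of the LLaVa-1.6 (LLaVa-NeXT) classes this PR adds; per the chat-template tips mentioned above, each checkpoint expects its own prompt format (URL and prompt are illustrative):

      ```python
      import requests
      import torch
      from PIL import Image
      from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

      model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
      processor = LlavaNextProcessor.from_pretrained(model_id)
      model = LlavaNextForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

      image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
      prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"  # Mistral-style template
      inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
      print(processor.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
      ```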
  10. 15 Mar, 2024 1 commit
    • Cohere Model Release (#29622) · 0e4a1c34
      Saurabh Dash authored
      
      
      * Cohere Model Release (#1)
      
      Cohere Model Release
      
      * Remove unnecessary files and code (#2)
      
      Some cleanup
      
      * Delete cohere-model directory (#3)
      
      * Make Fix (#5)
      
      * Pr fixes (#6)
      
      * fixes for pr
      
      * pr fixes for the format
      
      * pr fixes for the format
      
      * src/transformers/models/auto/tokenization_auto.py
      
      * Tokenizer test (#8)
      
      * tokenizer test
      
      * format fix
      
      * Adding Docs and other minor changes (#7)
      
      * Add modeling tests (#9)
      
      * Smol Fix (#11)
      
      * tokenization tests are fixed
      
      * format fixes
      
      * fix pr doc tests
      
      * fix pr doc tests
      
      * fix pr doc tests
      
      * fix pr style check
      
      * small changes in cohere.md
      
      * FIX: Address final comments for transformers integration (#13)
      
      * fix modeling final nits and add proper test file
      
      * for now leave empty tests
      
      * add integration test
      
      * push new test
      
      * fix modeling cohere (#14)
      
      * Update chat templates to use the new API (#15)
      
      ---------
      Co-authored-by: ahmetustun <ahmetustun89@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
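      The release ends by updating the chat templates to the new API (PR #15 above); a usage sketch built on that (the checkpoint is ~35B parameters, so multi-GPU or quantization in practice):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "CohereForAI/c4ai-command-r-v01"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

      messages = [{"role": "user", "content": "Hello, how are you?"}]
      input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(input_ids, max_new_tokens=40)[0]))
      ```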
  11. 14 Feb, 2024 1 commit
    • Add `StableLM` (#28810) · de6029a0
      Jonathan Tow authored
      * Add `StableLM`
      
      * fix(model): re-create from `huggingface-cli add-new-model-like persimmon`
      
      * fix: re-add changes to address comments
      
      * fix(readme): add links to paper
      
      * fix(tokenization_auto): remove `GPTNeoXTokenizerFastFast` ref
      
      * fix(tests): re-add `@slow` decorator to integration tests
      
      * fix(tests): import slow...
      
      * fix(readme_hd): remove whitespace edit
      
      * fix(tokenizer): auto tokenizer tuple
      
      * skip doctests for `modeling_stablelm`
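      A loading sketch for the new architecture (checkpoint name is an assumption; per the commit, the auto tokenizer resolves to `GPTNeoXTokenizerFast`):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "stabilityai/stablelm-3b-4e1t"  # assumed checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
      ```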
  12. 17 Jan, 2024 1 commit
    • Add qwen2 (#28436) · d6ffe74d
      Junyang Lin authored
      
      
      * add config, modeling, and tokenization
      
      * add auto and init
      
      * update readme
      
      * update readme
      
      * update team name
      
      * fixup
      
      * fixup
      
      * update config
      
      * update code style
      
      * update for fixup
      
      * update for fixup
      
      * update for fixup
      
      * update for testing
      
      * update for testing
      
      * fix bug for config and tokenization
      
      * fix bug for bos token
      
      * not doctest
      
      * debug tokenizer
      
      * not doctest
      
      * debug tokenization
      
      * debug init for tokenizer
      
      * fix style
      
      * update init
      
      * delete if in token auto
      
      * add tokenizer doc
      
      * add tokenizer in init
      
      * Update dummy_tokenizers_objects.py
      
      * update
      
      * update
      
      * debug
      
      * Update tokenization_qwen2.py
      
      * debug
      
      * Update convert_slow_tokenizer.py
      
      * add copies
      
      * add copied from and make style
      
      * update files map
      
      * update test
      
      * fix style
      
      * fix merge reading and update tests
      
      * fix tests
      
      * fix tests
      
      * fix style
      
      * debug a variable in readme
      
      * Update src/transformers/models/qwen2/configuration_qwen2.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update test and copied from
      
      * fix style
      
      * update qwen2 tokenization and tests
      
      * Update tokenization_qwen2.py
      
      * delete the copied from after property
      
      * fix style
      
      * update tests
      
      * update tests
      
      * add copied from
      
      * fix bugs
      
      * update doc
      
      * add warning for sliding window attention
      
      * update qwen2 tokenization
      
      * fix style
      
      * Update src/transformers/models/qwen2/modeling_qwen2.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix tokenizer fast
      
      ---------
      Co-authored-by: Ren Xuancheng <jklj077@users.noreply.github.com>
      Co-authored-by: renxuancheng.rxc <renxuancheng.rxc@alibaba-inc.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
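      A chat-style usage sketch (checkpoint name is an assumption; the sliding-window-attention warning added above fires when `sliding_window` is enabled without a compatible attention backend):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
      input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(input_ids, max_new_tokens=40)[0], skip_special_tokens=True))
      ```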
  13. 13 Dec, 2023 1 commit
    • Adds VIP-llava to transformers (#27932) · c7f076a0
      Younes Belkada authored
      * v1
      
      * add-new-model-like
      
      * revert
      
      * fix forward and conversion script
      
      * revert
      
      * fix copies
      
      * fixup
      
      * fix
      
      * Update docs/source/en/index.md
      
      * Apply suggestions from code review
      
      * push
      
      * fix
      
      * fixes here and there
      
      * up
      
      * fixup and fix tests
      
      * Apply suggestions from code review
      
      * add docs
      
      * fixup
      
      * fixes
      
      * docstring
      
      * add docstring
      
      * fixup
      
      * docstring
      
      * fixup
      
      * nit
      
      * docs
      
      * more copies
      
      * fix copies
      
      * nit
      
      * update test
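      A usage sketch; VIP-LLaVA follows the LLaVA-style prompt with an `<image>` placeholder (checkpoint name, URL, and prompt are illustrative):

      ```python
      import requests
      import torch
      from PIL import Image
      from transformers import AutoProcessor, VipLlavaForConditionalGeneration

      model_id = "llava-hf/vip-llava-7b"  # assumed checkpoint name
      processor = AutoProcessor.from_pretrained(model_id)
      model = VipLlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

      image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
      prompt = "###Human: <image>\nWhat is in the image?###Assistant:"
      inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
      print(processor.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
      ```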
  14. 11 Dec, 2023 1 commit
    • [`Add Mixtral`] Adds support for the Mixtral MoE (#27942) · accccdd0
      Arthur authored
      
      
      * up
      
      * up
      
      * test
      
      * logits ok
      
      * up
      
      * up
      
      * few fixes
      
      * conversion script
      
      * up
      
      * nits
      
      * nits
      
      * update
      
      * nuke
      
      * more updates
      
      * nits
      
      * fix many issues
      
      * nit
      
      * scatter
      
      * nit
      
      * nuke megablocks
      
      * nits
      
      * fix conversion script
      
      * nit
      
      * remove
      
      * nits
      
      * nit
      
      * update
      
      * oupsssss
      
      * change
      
      * nits device
      
      * nits
      
      * fixup
      
      * update
      
      * merge
      
      * add copied from
      
      * fix the copy mentions
      
      * update tests
      
      * more fixes
      
      * nits
      
      * conversion script
      
      * add parts of the readme
      
      * Update tests/models/mixtral/test_modeling_mixtral.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * new test + conversion script

      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Apply suggestions from code review
      
      * fix
      
      * fix copies
      
      * fix copies
      
      * ooops
      
      * fix config
      
      * Apply suggestions from code review
      
      * fix nits
      
      * nit
      
      * add copies
      
      * add batched tests
      
      * docs
      
      * fix flash attention
      
      * let's add more verbose
      
      * add correct outputs
      
      * support router outputs
      
      * ignore copies where needed
      
      * fix
      
      * cat list if list is given for now
      
      * nits
      
      * Update docs/source/en/model_doc/mixtral.md
      
      * finish router refactoring
      
      * fix forward
      
      * fix expected values
      
      * nits
      
      * fixup
      
      * fix
      
      * fix bug
      
      * fix
      
      * fix dtype mismatch
      
      * fix
      
      * grrr grrr I support item assignment
      
      * fix CI
      
      * docs
      
      * fixup
      
      * remove some copied form
      
      * fix weird diff
      
      * skip doctest fast on the config and modeling
      
      * mark that is supports flash attention in the doc
      
      * update
      
      * Update src/transformers/models/mixtral/modeling_mixtral.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>

      * Update docs/source/en/model_doc/mixtral.md
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * revert router logits config issue
      
      * update doc accordingly
      
      * Update src/transformers/models/mixtral/convert_mixtral_weights_to_hf.py
      
      * nits
      
      * use torch testing assert close
      
      * fixup
      
      * doc nits
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
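      The router refactoring above surfaces per-layer router scores through `output_router_logits`; a sketch (a large checkpoint, so `device_map="auto"` across several GPUs in practice):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mixtral-8x7B-v0.1"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

      inputs = tokenizer("My favourite condiment is", return_tensors="pt").to(model.device)
      out = model(**inputs, output_router_logits=True)  # one tensor of router logits per MoE layer
      print(len(out.router_logits), out.router_logits[0].shape)
      ```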
  15. 08 Dec, 2023 1 commit
    • F.scaled_dot_product_attention support (#26572) · 80377eb0
      fxmarty authored
      
      
      * add sdpa
      
      * wip
      
      * cleaning
      
      * add ref
      
      * yet more cleaning
      
      * and more :)
      
      * wip llama
      
      * working llama
      
      * add output_attentions=True support
      
      * bigcode sdpa support
      
      * fixes
      
      * gpt-bigcode support, require torch>=2.1.1
      
      * add falcon support
      
      * fix conflicts falcon
      
      * style
      
      * fix attention_mask definition
      
      * remove output_attentions from attnmaskconverter
      
      * support whisper without removing any Copied from statement
      
      * fix mbart default to eager renaming
      
      * fix typo in falcon
      
      * fix is_causal in SDPA
      
      * check is_flash_attn_2_available in the models init as well in case the model is not initialized through from_pretrained
      
      * add warnings when falling back on the manual implementation
      
      * precise doc
      
      * wip replace _flash_attn_enabled by config.attn_implementation
      
      * fix typo
      
      * add tests
      
      * style
      
      * add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it inplace
      
      * obey to config.attn_implementation if a config is passed in from_pretrained
      
      * fix is_torch_sdpa_available when torch is not installed
      
      * remove dead code
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/bart/modeling_bart.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove duplicate pretraining_tp code
      
      * add dropout in llama
      
      * precise comment on attn_mask
      
      * add fmt: off for _unmask_unattended docstring
      
      * precise num_masks comment
      
      * nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
      
      * cleanup modeling_utils
      
      * backward compatibility
      
      * fix style as requested
      
      * style
      
      * improve documentation
      
      * test pass
      
      * style
      
      * add _unmask_unattended tests
      
      * skip meaningless tests for idefics
      
      * hard_check SDPA requirements when specifically requested
      
      * standardize the use of XXX_ATTENTION_CLASSES
      
      * fix SDPA bug with mem-efficient backend on CUDA when using fp32
      
      * fix test
      
      * rely on SDPA is_causal parameter to handle the causal mask in some cases
      
      * fix FALCON_ATTENTION_CLASSES
      
      * remove _flash_attn_2_enabled occurrences
      
      * fix test
      
      * add OPT to the list of supported flash models
      
      * improve test
      
      * properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
      
      * remove remaining _flash_attn_2_enabled occurrence
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update docs/source/en/perf_infer_gpu_one.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove use_attn_implementation
      
      * fix docstring & slight bug
      
      * make attn_implementation internal (_attn_implementation)
      
      * typos
      
      * fix tests
      
      * deprecate use_flash_attention_2=True
      
      * fix test
      
      * add back llama that was removed by mistake
      
      * fix tests
      
      * remove _flash_attn_2_enabled occurrences bis
      
      * add check & test that passed attn_implementation is valid
      
      * fix falcon torchscript export
      
      * fix device of mask in tests
      
      * add tip about torch.jit.trace and move bt doc below sdpa
      
      * fix parameterized.expand order
      
      * move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
      
      * update sdpaattention class with the new cache
      
      * Update src/transformers/configuration_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bark/modeling_bark.py
      
      * address review comments
      
      * WIP torch.jit.trace fix. left: test both eager & sdpa
      
      * add test for torch.jit.trace for both eager/sdpa
      
      * fix falcon with torch==2.0 that needs to use sdpa
      
      * fix doc
      
      * hopefully last fix
      
      * fix key_value_length that has no default now in mask converter
      
      * is it flaky?
      
      * fix speculative decoding bug
      
      * tests do pass
      
      * fix following #27907
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
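      After this PR the attention backend is selected with `attn_implementation`: `"eager"` for the manual path, `"sdpa"` for `F.scaled_dot_product_attention` (the default where supported, torch >= 2.1.1), or `"flash_attention_2"`. A sketch on one of the architectures the PR covers (checkpoint is illustrative):

      ```python
      import torch
      from transformers import AutoModelForCausalLM

      model = AutoModelForCausalLM.from_pretrained(
          "tiiuae/falcon-7b", attn_implementation="sdpa", torch_dtype=torch.float16, device_map="auto"
      )
      # The resolved backend is stored on the config (internal attribute, per the commit above).
      print(model.config._attn_implementation)  # "sdpa"
      ```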