1. 08 May, 2024 1 commit
  2. 07 May, 2024 1 commit
  3. 06 May, 2024 1 commit
• [`CI update`] Try to use dockers and no cache (#29202) · 307f632b
  Arthur authored
      
      
      * change cis
      
      * nits
      
      * update
      
      * minor updates
      
      * [push-ci-image]
      
      * nit [push-ci-image]
      
      * nitsssss
      
      * [build-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * both
      
      * [push-ci-image]
      
      * this?
      
      * [push-ci-image]
      
      * pypi-kenlm needs g++
      
      * [push-ci-image]
      
      * nit
      
      * more nits [push-ci-image]
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add vision
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add new dummy file but will need to update them [push-ci-image]
      
      * [push-ci-image]
      
      * show package size as well
      
      * [push-ci-image]
      
      * potentially ignore failures
      
      * workflow updates
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * fix consistency
      
* clean nvidia triton
      
      * also show big packages [push-ci-image]
      
      * nit
      
      * update
      
      * another one
      
      * line escape?
      
      * add accelerate [push-ci-image]
      
      * updates [push-ci-image]
      
      * nits to run tests, no push-ci
      
* try to parse skip reason to make sure nothing is skipped that should not be skipped
      
      * nit?
      
      * always show skipped reasons
      
      * nits
      
      * better parsing of the test outputs
      
      * action="store_true",
      
      * failure on failed
      
      * show matched
      
      * debug
      
      * update short summary with skipped, failed and errors
      
      * nits
      
      * nits
      
* cool updates
      
      * remove docbuilder
      
      * fix
      
      * always run checks
      
      * oups
      
      * nits
      
      * don't error out on library printing
      
* non-zero exit codes
      
      * no warning
      
      * nit
      
      * WAT?
      
      * format nit
      
      * [push-ci-image]
      
      * fail if fail is needed
      
      * [push-ci-image]
      
      * sound file for torch light?
      
      * [push-ci-image]
      
      * order is important [push-ci-image]
      
      * [push-ci-image] reduce even further
      
      * [push-ci-image]
      
      * use pytest rich !
      
      * yes [push-ci-image]
      
      * oupsy
      
      * bring back the full traceback, but pytest rich should help
      
      * nit
      
      * [push-ci-image]
      
      * re run
      
      * nit
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * empty push to trigger
      
      * [push-ci-image]
      
      * nit? [push-ci-image]
      
      * empty
      
      * try to install timm with no deps
      
      * [push-ci-image]
      
      * oups [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image] ?
      
      * [push-ci-image] open ssh client for git checkout fast
      
      * empty for torch light
      
      * updates [push-ci-image]
      
      * nit
      
      * @v4 for checkout
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix fetch tests with parallelism
      
      * [push-ci-image]
      
      * more parallelism
      
      * nit
      
      * more nits
      
      * empty to re-trigger
      
      * empty to re-trigger
      
      * split by timing
      
      * did not work with previous commit
      
      * junit.xml
      
      * no path?
      
      * mmm this?
      
      * junitxml format
      
      * split by timing
      
      * nit
      
      * fix junit family
      
      * now we can test if the xunit1 is compatible!
      
      * this?
      
      * fully list tests
      
      * update
      
      * update
      
      * oups
      
      * finally
      
      * use classname
      
      * remove working directory to make sure the path does not interfere
      
* okay now junit should have the correct path
      
      * name split?
      
* sort by classname is what makes most sense
      
      * some testing
      
* name
      
      * oups
      
      * test something fun
      
      * autodetect
      
      * 18?
      
      * nit
      
      * file size?
      
      * uip
      
      * 4 is best
      
      * update to see versions
      
      * better print
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * please install the correct keras version
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * uv is fucking me up
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * [push-ci-image]
      
* install issues and pins
      
      * tapas as well
      
      * nits
      
* more parallelism
      
      * short tb
      
      * soundfile
      
      * soundfile
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * oups
      
      * [push-ci-image]
      
      * fix some things
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * use torch-light for hub
      
      * small git lfs for hub job
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix tf tapas
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * don't update the test
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * no use them
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * update tf proba
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * woops
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * test with built dockers
      
      * [push-ci-image]
      
      * skip annoying tests
      
      * revert fix copy
      
      * update test values
      
      * update
      
      * last skip and fixup
      
      * nit
      
      * ALL GOOOD
      
      * quality
      
      * Update tests/models/layoutlmv2/test_image_processing_layoutlmv2.py
      
      * Update docker/quality.dockerfile
Co-authored-by: Lysandre Debut <hi@lysand.re>

* Update src/transformers/models/tapas/modeling_tf_tapas.py
Co-authored-by: Lysandre Debut <hi@lysand.re>

* Apply suggestions from code review
Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * use torch-speed
      
      * updates
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fuck ken-lm [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      ---------
Co-authored-by: Lysandre Debut <hi@lysand.re>
      307f632b
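
For context on the junit-based reporting these commits set up, here is a minimal sketch of parsing the `junit.xml` report — assuming pytest was run with `--junitxml=junit.xml -o junit_family=xunit1` (both real pytest options referenced in the commits):

```python
# Sketch only: summarize failures, errors and skip reasons from a pytest
# junit.xml report, the way the CI scripts above surface "skipped reasons".
import xml.etree.ElementTree as ET

def summarize(path="junit.xml"):
    root = ET.parse(path).getroot()
    counts = {"failure": 0, "error": 0, "skipped": 0}
    # xunit1 nests <testcase> elements under one or more <testsuite> nodes
    for case in root.iter("testcase"):
        name = f"{case.get('classname')}::{case.get('name')}"
        for status in counts:
            node = case.find(status)
            if node is not None:
                counts[status] += 1
                print(f"{status.upper():8} {name}: {node.get('message', '')}")
    print(counts)

if __name__ == "__main__":
    summarize()
```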
  4. 02 May, 2024 2 commits
  5. 01 May, 2024 3 commits
  6. 30 Apr, 2024 3 commits
  7. 26 Apr, 2024 3 commits
• [SegGPT] Fix seggpt image processor (#29550) · 6d4cabda
  Eduardo Pacheco authored
      * Fixed SegGptImageProcessor to handle 2D and 3D prompt mask inputs
      
      * Added new test to check prompt mask equivalence
      
      * New proposal
      
      * Better proposal
      
      * Removed unnecessary method
      
      * Updated seggpt docs
      
      * Introduced do_convert_rgb
      
      * nits
      6d4cabda
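
A hedged sketch of the 2D/3D prompt-mask handling the first bullet describes; the helper below is illustrative, not the shipped `SegGptImageProcessor` code:

```python
# Accept a prompt mask either as a 2D (H, W) segmentation map or a 3D
# (H, W, 3) RGB image, and promote the 2D case to RGB so both inputs are
# processed equivalently (illustrative stand-in for the actual fix).
import numpy as np

def to_rgb_prompt_mask(mask: np.ndarray) -> np.ndarray:
    if mask.ndim == 2:
        return np.stack([mask] * 3, axis=-1)  # replicate channel into R, G, B
    if mask.ndim == 3 and mask.shape[-1] == 3:
        return mask
    raise ValueError(f"Expected (H, W) or (H, W, 3) mask, got {mask.shape}")
```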
• [`DETR`] Remove timm hardcoded logic in modeling files (#29038) · aafa7ce7
  amyeroberts authored
      
      
      * Enable instantiating model with pretrained backbone weights
      
      * Clarify pretrained import
      
      * Use load_backbone instead
      
      * Add backbone_kwargs to config
      
      * Fix up
      
      * Add tests
      
      * Tidy up
      
      * Enable instantiating model with pretrained backbone weights
      
      * Update tests so backbone checkpoint isn't passed in
      
      * Clarify pretrained import
      
      * Update configs - docs and validation check
      
      * Update src/transformers/utils/backbone_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Clarify exception message
      
      * Update config init in tests
      
      * Add test for when use_timm_backbone=True
      
      * Use load_backbone instead
      
      * Add use_timm_backbone to the model configs
      
      * Add backbone_kwargs to config
      
      * Pass kwargs to constructors
      
      * Draft
      
      * Fix tests
      
      * Add back timm - weight naming
      
      * More tidying up
      
      * Whoops
      
      * Tidy up
      
      * Handle when kwargs are none
      
      * Update tests
      
      * Revert test changes
      
      * Deformable detr test - don't use default
      
      * Don't mutate; correct model attributes
      
      * Add some clarifying comments
      
      * nit - grammar is hard
      
      ---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      aafa7ce7
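
A sketch of the config-driven pattern the PR moves DETR to, using the `load_backbone` helper and the new `backbone_kwargs` field mentioned in the commits (values are illustrative; requires `timm` when `use_timm_backbone=True`):

```python
from transformers import DetrConfig
from transformers.utils.backbone_utils import load_backbone

config = DetrConfig(
    use_timm_backbone=True,          # resolve the backbone through timm
    backbone="resnet50",
    use_pretrained_backbone=False,   # random init instead of timm weights
    backbone_kwargs={"out_indices": (1, 2, 3, 4)},
)
backbone = load_backbone(config)     # no timm logic needed in modeling code
```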
• [`BERT`] Add support for sdpa (#28802) · dfa7b580
  JB (Don) authored
      * Adding SDPA support for BERT
      
      * Using the proper input name for testing model input in inference()
      
      * Adding documentation for SDPA in BERT model page
      
      * Use the stable link for the documentation
      
      * Adding a gate to only call .contiguous() for torch < 2.2.0
      
      * Additions and fixes to the documentation
      
      * Minor updates to documentation
      
      * Adding extra requirements needed for the contiguous() bug
      
      * Adding "Adapted from" in plcae of the "Copied from"
      
      * Add benchmark speedup tables to the documentation
      
      * Minor fixes to the documentation
      
* Use ClapText as a replacement for Bert in the Copied-From
      
      * Some more fixes for the fix-copies references
      
      * Overriding the test_eager_matches_sdpa_generate in bert tests to not load with low_cpu_mem_usage
      
      [test all]
      
      * Undo changes to separate test
      
      * Refactored SDPA self attention code for KV projections
      
      * Change use_sdpa to attn_implementation
      
      * Fix test_sdpa_can_dispatch_on_flash by preparing input (required for MultipleChoice models)
      dfa7b580
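
A minimal usage sketch: once this lands, BERT can be loaded with the SDPA backend through the standard `attn_implementation` argument (the docs note torch >= 2.1.1 is required):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", attn_implementation="sdpa")

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # attention runs through torch SDPA
```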
  8. 25 Apr, 2024 3 commits
• Fix Llava for 0-embeddings (#30473) · e60491ad
  Raushan Turganbay authored
      e60491ad
• 🚨 Add training compatibility for Musicgen-like models (#29802) · 90cb55bf
  Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
* correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to notdoctested
      
      * Add suggestion from Sanchit
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct get_uncoditional_inputs docstrings
      
      * modify README according to SANCHIT feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logics
      
      * modify license at the beginning
      
      * refactor all_self_attns->all_attentions
      
      * remove ignore copy from causallm generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback-> better naming
      
      * refactor input_values->input_features
      
      * refactor input_values->input_features and fix import fe
      
* add input_features to docstrings
      
      * correct inputs_embeds logics
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation ->_prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * add draft training
      
      * fix cross entropy
      
      * clean loss computation
      
      * fix labels
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve is in logics and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update converting script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace musicgen melody checkpoints with official orga
      
      * rename ylacombe->facebook in checkpoints
      
      * fix copies
      
* remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      * update training code
      
      * add training tests for melody
      
      * add training for o.g musicgen
      
      * fix copied from
      
      * remove final todos
      
      * make style
      
      * fix style
      
      * add suggestions from review
      
      * add ref to the original loss computation code
      
      * rename method + fix labels in tests
      
      * make style
      
      ---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      90cb55bf
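
A sketch of the soundfile-based saving the commits switch to ("change way to save wav, using soundfile"); the prompt and generation length are illustrative:

```python
import soundfile as sf
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

inputs = processor(text=["80s pop track with synth"], padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)  # (batch, channels, samples)

sampling_rate = model.config.audio_encoder.sampling_rate
sf.write("musicgen_out.wav", audio[0, 0].numpy(), samplerate=sampling_rate)
```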
• amyeroberts · aca4a103
  9. 24 Apr, 2024 4 commits
  10. 23 Apr, 2024 3 commits
  11. 22 Apr, 2024 3 commits
  12. 19 Apr, 2024 5 commits
• Add TF swiftformer (#23342) · d2cec09b
  João David authored
      
      
      * Duplicate swiftformer
      
      * Convert SwiftFormerPatchEmbedding
      
      * Convert SwiftFormerEmbeddings
      
      * Convert TFSwiftFormerMlp
      
      * Convert TFSwiftFormerConvEncoder
      
      * Convert TFSwiftFormerLocalRepresentation
      
      * convert TFSwiftFormerEncoderBlock
      
      * Convert SwiftFormerStage
      
      * Convert SwiftFormerEncoder
      
      * Add TFSWiftFormerPreTrainedModel
      
      * Convert SwiftFormerForImageClassification
      
      * Add kwargs and start drop path
      
      * Fix syntax
      
      * Change Model class name
      
      * Add TFSwiftFormer to __init__
      
      * Duplicate test_modeling_swiftformer
      
      * First test conversions
      
      * Change require_torch to require_tf
      
      * Add exports to swiftformer __init__
      
      * Add TFSwiftFormerModel wrapper
      
      * Fix __init__ and run black
      
      * Remove docstring from MainLayer, fix padding
      
      * Use keras.layers.Activation on keras.Sequential
      
      * Fix swiftformer exports
      
      * Fix activation layer from config
      
      * Remove post_inits
      
      * Use tf.keras.layers.ZeroPadding2D
      
      * Convert torch normalize
      
      * Change tf test input shape
      
      * Fix softmax and reduce_sum
      
      * Convert expand_dims and repeat
      
* Add missing reshape and transpose
      
      * Simplify TFSwiftFormerEncoderBlock.call
      
      * Fix mismatch in patch embeddings
      
      * Fix expected output shape to match channels last
      
      * Fix swiftformer typo
      
      * Disable test_onnx
      
      * Fix TFSwiftFormerForImageClassification call
      
      * Add unpack inputs
      
      * Convert flatten(2).mean(-1)
      
      * Change vision dummy inputs (to be reviewed)
      
      * Change test_forward_signature to use .call
      
      * Fix @unpack_inputs
      
      * Set return_tensors="tf" and rename class
      
      * Rename wrongly named patch_embeddings layer
      
      * Add serving_output and change dummy_input shape
      
      * Make dimensions BCHW and transpose inside embedding layer
      
      * Change SwiftFormerEncoderBlock
      
      * Fix ruff problems
      
      * Add image size to swiftformer config
      
* Change transpose to MainLayer and use -1 for reshape
      
      * Remove serving_outputs and dummy_inputs
      
      * Remove test_initialization test from tf model
      
      * Make Sequential component a separate layer
      
      * Fix layers' names
      
* Transpose encoder outputs
      
      * Fix tests and check if hidden states is not None
      
      * Fix TFSwiftFormerForImageClassification
      
      * Run make fixup
      
      * Run make fix-copies
      
      * Update modeling_tf_auto
      
      * Update docs
      
      * Fix modeling auto mapping
      
* Update modeling_tf_swiftformer docs
      
      * Fill image_size doc and type
      
      * Add reduction=None to loss computation
      
      * Update docs
      
      * make style
      
      * Debug: Delete the tip to see if that changes anything
      
      * Re-add tip
      
      * Remove add_code_sample_docstrings
      
      * Remove unused import
      
      * Get the debug to actually tell us the problem it has with the docs
      
      * Try a substitution to match the PyTorch file?
      
      * Add swiftformer to ignore list
      
      * Add build() methods
      
      * Update copyright year
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Remove FIXME comment
      
      * Remove from_pt
      
      * Update copyright year
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Rename one-letter variables
      
      * Remove FIXMEs related to momentum
      
      * Remove old TODO comment
      
      * Remove outstanding FIXME comments
      
      * Get dropout rate from config
      
      * Add specific dropout config for MLP
      
      * Add convencoder dropout to config
      
      * Pass config to SwiftFormerDropPath layer
      
      * Fix drop_path variable name and add Adapted from comment
      
      * Run ruff
      
      * Removed copied from comment
      
      * Run fix copies
      
      * Change drop_path to identity to match pt
      
      * Cleanup build() methods and move to new keras imports
      
      * Update docs/source/en/model_doc/swiftformer.md
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Raise error if drop_path_rate > 0.0
      
      * Apply suggestions from code review
      
      Replace (self.dim), with self.dim,
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove drop_path function
      
      * Add training to TFSwiftFormerEncoder
      
      * Set self.built = True last
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Should have been added to previous commit
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Change default_feature_extractor to default_image_processor
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Import Keras from modeling_tf_utils
      
      * Remove relative import
      
      * Run ruff --fix
      
      * Move import keras to tf_available
      
      * Add copied from comment to test_forward_signature
      
      * Reduce batch size and num_labels
      
      * Extract loss logic to hf_compute_loss
      
      * Run ruff format
      
      ---------
Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      d2cec09b
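
A minimal sketch of the layout conversion several bullets deal with ("Make dimensions BCHW and transpose inside embedding layer"): Keras convolutions expect channels-last NHWC, while the PyTorch reference model is channels-first NCHW:

```python
import tensorflow as tf

def nchw_to_nhwc(pixel_values: tf.Tensor) -> tf.Tensor:
    # (batch, channels, height, width) -> (batch, height, width, channels)
    return tf.transpose(pixel_values, perm=[0, 2, 3, 1])

x = tf.random.normal((1, 3, 224, 224))
print(nchw_to_nhwc(x).shape)  # (1, 224, 224, 3)
```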
• Do not remove half seq length in generation tests (#30016) · b1cd4874
  Raushan Turganbay authored
      
      
      * remove seq length from generation tests
      
      * style and quality
      
      * [test_all] & PR suggestion
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update tests/generation/test_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * [test all] remove unused variables
      
      ---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      b1cd4874
• [Whisper] Fix slow tests (#30152) · 4ed0e51c
  Sanchit Gandhi authored
      
      
      * fix tests
      
      * style
      
      * more fixes
      
      * move model to device
      
      * move logits to cpu
      
      * update expected values
      
      * use ungated dataset
      
      * fix
      
      * fix
      
      * update
      
      ---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      4ed0e51c
• Sanchit Gandhi · cd09a8df
• [UDOP] Add special tokens to tokenizer (#29594) · ecfe9be7
  NielsRogge authored
      * Add special tokens
      
      * Add special tokens
      
      * Use fmt
      
      * Uncomment code
      
      * Add test
      
      * Remove scripts
      
      * Address comments
      
      * Improve tests
      
      * Address comment
      
      * Remove flag
      ecfe9be7
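
A sketch of the tokenizer API this PR exercises; the token strings below are placeholders, not the exact ones added for UDOP:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/udop-large")
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<extra_token_0>", "<extra_token_1>"]}  # placeholders
)
print(num_added, len(tokenizer))  # new tokens extend the vocabulary
```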
  13. 18 Apr, 2024 4 commits
• Add DBRX Model (#29921) · 005b957f
  Abhi Venigalla authored
      
      
      * wip
      
      * fix __init__.py
      
      * add docs
      
      * Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * address comments 1
      
      * work on make fixup
      
      * pass configs down
      
      * add sdpa attention
      
      * remove DbrxBlock
      
      * add to configuration_auto
      
      * docstring now passes formatting test
      
      * fix style
      
      * update READMEs
      
      * add dbrx to modeling_auto
      
      * make fix-copies generated this
      
      * add DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * config docstring passes formatting test
      
      * rename moe_loss_weight to router_aux_loss_coef
      
      * add to flash-attn documentation
      
      * fix model-path in tests
      
* Explicitly make `"silu"` the default `ffn_act_fn`
Co-authored-by: Wing Lian <wing.lian@gmail.com>
      
      * default to using router_aux_loss_coef over ffn_config[moe_loss_weight]
      
      * fix _flash_attn_uses_top_left_mask and is_causal
      
      * fix tests path
      
      * don't use token type IDs
      
      * follow Llama and remove token_type_ids from test
      
      * init ConfigTester differently so tests pass
      
      * remove multiple choice test
      
      * remove question + answer test
      
      * remove sequence classification test
      
      * remove token classification test
      
      * copy Llama tests and remove token_type_ids from test inputs
      
      * do not test pruning or headmasking; style code
      
      * add _tied_weights_keys parameter to pass test
      
      * add type hints
      
      * fix type check
      
      * update config tester
      
      * remove masked_lm test
      
      * remove encoder tests
      
      * initialize DbrxModelTester with correct params
      
      * style
      
      * torch_dtype does not rely on torch
      
      * run make fixup, fix-copies
      
      * use https://huggingface.co/v2ray/dbrx-base-fixed/blob/main/modeling_dbrx.py
      
      
      
      * add copyright info
      
      * fix imports and DbrxRotaryEmbedding
      
      * update DbrxModel docstring
      
      * use copies
      
      * change model path in docstring
      
      * use config in DbrxFFN
      
      * fix flashattention2, sdpaattention
      
      * input config to DbrXAttention, DbrxNormAttentionNorm
      
      * more fixes
      
      * fix
      
      * fix again!
      
      * add informative comment
      
      * fix ruff?
      
      * remove print statement + style
      
      * change doc-test
      
      * fix doc-test
      
      * fix docstring
      
      * delete commented out text
      
      * make defaults match dbrx-instruct
      
      * replace `router_aux_loss_coef` with `moe_loss_weight`
      
      * is_decoder=True
      
      * remove is_decoder from configtester
      
      * implement sdpa properly
      
      * make is_decoder pass tests
      
      * start on the GenerationTesterMixin tests
      
      * add dbrx to sdpa documentation
      
      * skip weight typing test
      
      * style
      
      * initialize smaller model
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Add DBRX to toctree
      
      * skip test_new_cache_format
      
      * make config defaults smaller again
      
      * add pad_token_id
      
      * remove pad_token_id from config
      
      * Remove all references to DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * Update src/transformers/models/dbrx/__init__.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/models/dbrx/modeling_dbrx.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update docs/source/en/model_doc/dbrx.md
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

* Update src/transformers/models/dbrx/configuration_dbrx.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update docs/source/en/model_doc/dbrx.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix typo

* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update docs, fix configuration_auto.py
      
      * address pr comments
      
      * remove is_decoder flag
      
      * slice
      
      * fix requires grad
      
      * remove grad
      
      * disconnect differently
      
      * remove grad
      
      * enable grads
      
      * patch
      
      * detach expert
      
      * nissan al ghaib
      
      * Update modeling_dbrx.py
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * replace "Gemma" with "Dbrx"
      
      * remove # type: ignore
      
      * don't hardcode vocab_size
      
      * remove ToDo
      
      * Re-add removed idefics2 line
      
      * Update test to use tiny-random!
      
      * Remove TODO
      
      * Remove one more case of loading the entire dbrx-instruct in the tests
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * address some comments
      
      * small model
      
      * add dbrx to tokenization_auto
      
      * More docstrings with add_start_docstrings
      
      * Dbrx for now
      
      * add PipelineTesterMixin
      
      * Update src/transformers/models/dbrx/configuration_dbrx.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * remove flash-attn2 import error
      
      * fix docstring
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* add usage example

* put on one line
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* fix ffn_act_fn
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * change "dbrx" to "DBRX" for display purposes.
      
      * fix __init__.py?
      
      * fix __init__.py
      
      * fix README
      
      * return the aux_loss
      
      * remove extra spaces
      
      * fix configuration_auto.py
      
      * fix format in tokenization_auto
      
      * remove new line
      
* add more usage examples
      
      ---------
Co-authored-by: Abhi Venigalla <abhi.venigalla@databricks.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Eitan Turok <eitan.turok@databricks.com>
Co-authored-by: Eitan Turok <150733043+eitanturok@users.noreply.github.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
Co-authored-by: Eitan Turok <eitanturok@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      005b957f
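
A usage sketch for the new architecture; `databricks/dbrx-instruct` is large and gated on the Hub, so treat this as illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tokenizer("What does DBRX stand for?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```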
• Add jamba (#29943) · 3f20877d
  tomeras91 authored
      * Add jamba arch
      
      * apply "make fix-copies" changes
      
      * fix link to model in JambaConfig docstring
      
      * Add n_ctx in modeling file because repo-consistency wants that
      
      * Add jamba to flash attention and sdpa documentation
      
      * mamba dt_proj quant fix now works for LoRA as well
      
* override test_left_padding_compatibility and use a more permissive tolerance. left padding numerical differences are accentuated by mamba layers
      
      * add jamba to tokenization auto
      
      * fix comments of shape (PR #24 in the model page: https://huggingface.co/ai21labs/Jamba-v0.1/discussions/24)
      
      * simple PR fixes
      
      * remove unnecessary kwargs from JambaAttentionDecoderLayer and JambaMambaDecoderLayer
      
      * remove the LoRA hack for the mamba dt_proj bias. It was solved in huggingface/peft#1530 (https://github.com/huggingface/peft/pull/1530)
      
      * Add copied comment on JambaMLP (it's the same as MixtralMLP)
      
      * remove padding_mask warnings. It's not supported anymore
      
      * fix docstring. Float instead of int
      
      * A few more minor PR fixes
      
      * (1) lowercase names for mamba layernorms (2) remove _apply_inner_layernorms and do it directly in the forward pass
      
      * Return None attention weights from mamba layers. Append to all attentions only if not None.
      
      * remove some leftover jamba archive lists
      
      * Better separation between expert vs non-expert layers. non-expert layers return None as router_logits, and it is not concatenated to all_router_logits returned from JambaModel
      
      * no need to take router_logits at config.expert_layer_offset anymore. result.router_logits now holds results only for expert layers
      
      * Add Jamba paper on READMEs
      
      * (1) rename n_ctx -> max_position_embeddings (2) don't use it in the modeling file since it's not needed (set it as an exception to check_config_attributes)
      
      * Add copied from comment
      
      * remove the code path for apply_inner_layernorms=False. Jamba always has the inner mamba layernorms
      
      * clearer docstring for _convert_to_standard_cache
      
      * style fixes
      
* Change calc_logits_for_entire_prompt (bool) to num_logits_to_keep (int). Adapt assisted decoding code to use it. Also small change in low memory beam search decoding path to support this new int value in model_inputs
      
      * rename test so it still overrides what its meant to override
      
      * draft
      
      * oups
      
      * nit
      
* remove more complex logic
      
      * fix names used in config
      
      * fix fix fix
      
      * style
      
      * fix some more failing tests
      
* generate did not init the cache 🙃
      
      
      
      * more small nits
      
      * typo
      
      * config.mamba_expand * config.hidden_size for the intermediate size of the mamba shapes
      
      * fix init of pkv with torch.tensor()
      
      * empty tensor
      
      * fix some init issues
      
* stupid changes required by generate because it does not even support its own DynamicCache class
      
      * more fixes
      
      * fix general assisted gen cache_position bug
      
      * tests passing
      
      * Add offsets and periods as SPECIAL_CASES_TO_ALLOW in check_config_attributes.py
      
      * fix reorder_cache to reorder mamba states and override some more functions in HybridMambaAttentionDynamicCache
      
      * no need to override test_past_key_values_format() and _check_past_key_values_for_generate() in tests anymore
      
      * fix docstrings and typehints for past_key_values
      
      * style fixes
      
      * fix docs
      
      * change typehint due to copy from Mixtral
      
      * forgot import
      
      * import order
      
      * Add configuration_jamba and modeling_jamba to not_doctested because the model is too big to download (in docstring of JambaForCausalLM.forward)
      
      * Add integration test with tiny tandom Jamba model on hub
      
      * fix flash attention cache shapes
      
      * bring back forgotten hidden states
      
      * rename HybridMambaAttentionDynamicCache.seqlen_offset to has_previous_state (and make bool) and bugfix - it should be set to True after a finished forward pass of the entire model
      
      * align integration test after modeling fixes
      
* bugfix - mamba can use precomputed states only if the forward pass is on a single token
      
      * bugfix - mamba can use precomputed states only if they match the batch size
      
      * typo
      
      * remove making _prepare_4d_causal_attention_mask a leaf function
      
      * stop using past_seq_len.get_seq_length(). Use cache positions instead. Adjust test (test_decoder_model_past_with_large_inputs) accordingly
      
      ---------
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
      3f20877d
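
A sketch of the `num_logits_to_keep` change described above: during prefill only the final position's logits are needed for sampling, so the LM head can skip the rest of the prompt. The checkpoint is very large, so this is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # illustrative; a very large checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Jamba interleaves attention and mamba layers", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, num_logits_to_keep=1)
print(out.logits.shape)  # (1, 1, vocab_size): only the last position kept
```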
• Fix donut token2json multiline (#30300) · 7915a259
  Pavel Iakubovskii authored
      * Fix multiline processing
      
      * Update test for token2json
      7915a259
• Add Flash Attention 2 to M2M100 model (#30256) · b65df514
  Alexander Visheratin authored
      
      
      * Added flash attention 2.
      
      * Fixes.
      
      * Fix inheritance.
      
      * Fixed init.
      
      * Remove stuff.
      
      * Added documentation.
      
      * Add FA2 to M2M100 documentation.
      
      * Add test.
      
      * Fixed documentation.
      
      * Update src/transformers/models/m2m_100/modeling_m2m_100.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/nllb.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed variable name.
      
      ---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      b65df514
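
A usage sketch for the new backend (requires the `flash-attn` package and a supported GPU):

```python
import torch
from transformers import M2M100ForConditionalGeneration

model = M2M100ForConditionalGeneration.from_pretrained(
    "facebook/m2m100_418M",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.float16,
).to("cuda")
```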
  14. 17 Apr, 2024 3 commits
• Add OLMo model family (#29890) · e4ea19b9
  Shane A authored
      * Add OLMo using add-new-model-like with Llama
      
      * Fix incorrect tokenizer for OLMo
      
      * Copy-paste relevant OLMo methods and their imports
      
      * Add OLMo config
      
      * Modify OLMo config to follow HF conventions
      
      * Remove unneeded Llama code from OLMo model
      
      * Add ability for OLMo model to output attentions
      
      * Add OLMoPreTrainedModel and OLMoModel
      
      * Add OLMoForCausalLM
      
      * Minor fixes to OLMo model for style and missing functions
      
      * Implement OLMo tokenizer
      
      * Implement OLMo to HF conversion script
      
      * Add tests for OLMo model
      
      * Add tests for OLMo fast tokenizer
      
      * Add auto-generated dummy objects
      
      * Remove unimplemented OLMo classes from auto and init classes and re-format
      
      * Add README and associated auto-generated files
      
      * Use OLMo names for common properties
      
      * Run make fixup
      
      * Remove `|` from OLMo typing
      
      * Remove unneeded tokenization_olmo.py
      
      * Revert model, config and converter to add-new-model-like Llama
      
      * Move logic for adding bos/eos token into GPTNeoxTokenizerFast
      
      * Change OLMoConfig defaults to match OLMo-7B
      
* Use GPTNeoXTokenizerFast in OLMo tokenizer tests
      
      * Modify auto-generated OLMoModelTests to work for OLMo
      
      * Add non-parametric layer norm OLMoLayerNorm
      
      * Update weight conversion script for OLMo
      
      * Fix __init__ and auto structure for OLMo
      
      * Fix errors from make fixup
      
      * Remove OLMoTokenizerFast from documentation
      
      * Add missing 'Copied from' for OLMoModel._update_causal_mask
      
      * Run make fix-copies
      
      * Rearrange string replacements in OLMoForCausalLM Copied from
      
      * Move OLMo and Llama CausalLM.forward example into global constants
      
      * Fix OLMO_GENERATION_EXAMPLE doc string typo
      
      * Add option for qkv clipping to OLMo
      
      * Rearrange OLMoConfig kwargs in convert_olmo_weights_to_hf
      
      * Add clip_qkv to OLMoConfig in convert_olmo_weights_to_hf
      
      * Fix OLMo tokenization bug using conversion script
      
      * Keep model in full precision after conversion
      
      * Do not add eos token automatically
      
      * Update references to OLMo model in HF Hub
      
      * Do not add eos token during encoding by default
      
      * Fix Llama generation example
      
      * Run make fixup
      
      * OLMo 7B integration test fix
      
      * Remove unneeded special case for OLMoConfig
      
      * OLMo 7B Twin 2T integration test fix
      
      * Fix test_model_7b_greedy_generation
      
      * Remove test_compile_static_cache
      
      * Fix OLMo and Llama generation example
      
      * Run make fixup
      
      * Revert "OLMo 7B integration test fix"
      
      This reverts commit 4df56a4b150681bfa559846f40e9b7b7f97d7908.
      
      * Revert "OLMo 7B Twin 2T integration test fix"
      
      This reverts commit 9ff65a4a294ace89ab047b793ca55e623a9ceefc.
      
      * Ungate 7B integration tests and fix greedy generation test
      
      * Add retries for flaky test_eager_matches_sdpa_generate
      
      * Fix output of doc example for OLMoForCausalLM.forward
      
      * Downsize OLMo doc test for OLMoForCausalLM.forward to 1B model
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doct test
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doc test using end quotes
      
      * Remove pretraining_tp from OLMo config and model
      
      * Add missing 'Copied from' instances
      
      * Remove unneeded causal_mask from OLMoModel
      
      * Revert Llama changes
      
      * Ignore copy for OLMoForCausalLM.forward
      
      * Change 'OLMo' to 'Olmo' in classes
      
      * Move minimal OLMo tokenization tests to model tests
      
      * Add missed 'Copied from' for repeat_kv
      e4ea19b9
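
A usage sketch; the checkpoint id assumes the HF-format OLMo repos referenced in the conversion work:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-1B-hf"  # assumed HF-format repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language modeling is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0]))
```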
• Add token type ids to CodeGenTokenizer (#29265) · 8d6b5096
  st81 authored
      * Add create token type ids to CodeGenTokenizer
      
      * Fix inconsistent length of token type ids
      
      * Format source codes
      
      * Fix inconsistent order of methods
      
      * Update docstring
      
      * add test_tokenizer_integration test
      
      * Format source codes
      
      * Add `copied from` comment to CodeGenTokenizerFast
      
      * Add doc of create_token_type_ids_from_sequences
      
      * Make return_token_type_ids False by default
      
      * Make test_tokenizer_integration as slow test
      
      * Add return_token_type_ids to tokenizer init arg
      
      * Add test for tokenizer's init return_token_type_ids
      
      * Format source codes
      8d6b5096
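
A sketch of the opt-in behaviour: token type ids stay off by default and can be requested per call (or via the new tokenizer init argument):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
enc = tok("def foo():", "return 1", return_token_type_ids=True)
print(enc["token_type_ids"])  # segment ids for the first vs. second sequence
```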
• Enable fx tracing for Mistral (#30209) · 304c6a1e
  Raushan Turganbay authored
      * tracing for mistral
      
      * typo
      
      * fix copies
      304c6a1e
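
A sketch of what fx tracing enables, via the `transformers.utils.fx.symbolic_trace` helper and a tiny random config so the example stays cheap:

```python
from transformers import MistralConfig, MistralForCausalLM
from transformers.utils.fx import symbolic_trace

config = MistralConfig(
    num_hidden_layers=2, hidden_size=128, intermediate_size=256,
    num_attention_heads=4, num_key_value_heads=2,
)
model = MistralForCausalLM(config)
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(type(traced))  # a torch.fx GraphModule
```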
  15. 16 Apr, 2024 1 commit