1. 01 May, 2024 1 commit
  2. 30 Apr, 2024 5 commits
  3. 26 Apr, 2024 6 commits
    • [SegGPT] Fix seggpt image processor (#29550) · 6d4cabda
      Eduardo Pacheco authored
      * Fixed SegGptImageProcessor to handle 2D and 3D prompt mask inputs
      
      * Added new test to check prompt mask equivalence
      
      * New proposal
      
      * Better proposal
      
      * Removed unnecessary method
      
      * Updated seggpt docs
      
      * Introduced do_convert_rgb
      
      * nits
    • load_image - decode b64encode and encodebytes strings (#30192) · c793b26f
      amyeroberts authored
      * Decode b64encode and encodebytes strings
      
      * Remove conditional encode -- image is always a string
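For context, a minimal sketch of the behaviour this change covers, assuming `load_image` from `transformers.image_utils` and an arbitrary local image file (the path is illustrative); both `base64.b64encode` and `base64.encodebytes` payloads should now decode:

```python
import base64

from transformers.image_utils import load_image

# Any local image; the path is illustrative.
with open("cat.png", "rb") as f:
    raw = f.read()

# b64encode yields a single-line payload, while encodebytes inserts newlines;
# load_image should accept either form of base64-encoded string.
image_a = load_image(base64.b64encode(raw).decode("utf-8"))
image_b = load_image(base64.encodebytes(raw).decode("utf-8"))

print(image_a.size, image_b.size)
```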
    • [`DETR`] Remove timm hardcoded logic in modeling files (#29038) · aafa7ce7
      amyeroberts authored
      
      
      * Enable instantiating model with pretrained backbone weights
      
      * Clarify pretrained import
      
      * Use load_backbone instead
      
      * Add backbone_kwargs to config
      
      * Fix up
      
      * Add tests
      
      * Tidy up
      
      * Enable instantiating model with pretrained backbone weights
      
      * Update tests so backbone checkpoint isn't passed in
      
      * Clarify pretrained import
      
      * Update configs - docs and validation check
      
      * Update src/transformers/utils/backbone_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Clarify exception message
      
      * Update config init in tests
      
      * Add test for when use_timm_backbone=True
      
      * Use load_backbone instead
      
      * Add use_timm_backbone to the model configs
      
      * Add backbone_kwargs to config
      
      * Pass kwargs to constructors
      
      * Draft
      
      * Fix tests
      
      * Add back timm - weight naming
      
      * More tidying up
      
      * Whoops
      
      * Tidy up
      
      * Handle when kwargs are none
      
      * Update tests
      
      * Revert test changes
      
      * Deformable detr test - don't use default
      
      * Don't mutate; correct model attributes
      
      * Add some clarifying comments
      
      * nit - grammar is hard
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
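For context, a hedged sketch of configuring the backbone through the config rather than timm-specific modeling code; the field names (`use_timm_backbone`, `use_pretrained_backbone`, `backbone_kwargs`) follow the commit messages above, and the values shown are illustrative:

```python
from transformers import DetrConfig, DetrForObjectDetection

# Backbone choice and its options flow through the config and the shared
# load_backbone helper instead of timm-specific code in the modeling file.
config = DetrConfig(
    use_timm_backbone=True,
    backbone="resnet50",                            # timm model name
    use_pretrained_backbone=False,                  # skip downloading backbone weights here
    backbone_kwargs={"out_indices": [1, 2, 3, 4]},  # forwarded to the backbone constructor
)
model = DetrForObjectDetection(config)
print(model.config.backbone, model.config.backbone_kwargs)
```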
    • [`BERT`] Add support for sdpa (#28802) · dfa7b580
      JB (Don) authored
      * Adding SDPA support for BERT
      
      * Using the proper input name for testing model input in inference()
      
      * Adding documentation for SDPA in BERT model page
      
      * Use the stable link for the documentation
      
      * Adding a gate to only call .contiguous() for torch < 2.2.0
      
      * Additions and fixes to the documentation
      
      * Minor updates to documentation
      
      * Adding extra requirements needed for the contiguous() bug
      
      * Adding "Adapted from" in plcae of the "Copied from"
      
      * Add benchmark speedup tables to the documentation
      
      * Minor fixes to the documentation
      
      * Use ClapText as a replacement for Bert in the Copied-From
      
      * Some more fixes for the fix-copies references
      
      * Overriding the test_eager_matches_sdpa_generate in bert tests to not load with low_cpu_mem_usage
      
      [test all]
      
      * Undo changes to separate test
      
      * Refactored SDPA self attention code for KV projections
      
      * Change use_sdpa to attn_implementation
      
      * Fix test_sdpa_can_dispatch_on_flash by preparing input (required for MultipleChoice models)
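For context, a minimal sketch of opting into SDPA attention for BERT via `attn_implementation`, as enabled by the commit above; the checkpoint name is illustrative:

```python
import torch
from transformers import AutoTokenizer, BertModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained(
    "google-bert/bert-base-uncased",
    attn_implementation="sdpa",  # dispatch to torch.nn.functional.scaled_dot_product_attention
)

inputs = tokenizer("SDPA-backed BERT forward pass", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```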
    • Use the Keras set_random_seed in tests (#30504) · 2de5cb12
      Matt authored
      Use the Keras set_random_seed to ensure reproducible weight initialization
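For context, `keras.utils.set_random_seed` seeds the Python, NumPy and TensorFlow RNGs in one call, so two freshly built layers receive identical initial weights; a small sketch:

```python
import tensorflow as tf

def build_layer():
    layer = tf.keras.layers.Dense(4)
    layer.build((None, 8))
    return layer

tf.keras.utils.set_random_seed(42)  # seeds Python, NumPy and TensorFlow RNGs
first = build_layer()

tf.keras.utils.set_random_seed(42)
second = build_layer()

# Identical seeds give identical initializations.
assert all(
    bool(tf.reduce_all(a == b)) for a, b in zip(first.weights, second.weights)
)
```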
    • Update `dtype_byte_size` to handle torch.float8_e4m3fn/float8_e5m2 types (#30488) · 20081c74
      Michael Goin authored
      * Update modeling_utils/dtype_byte_size to handle float8 types
      
      * Add a test for dtype_byte_size
      
      * Format
      
      * Fix bool
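For context, a rough re-implementation sketch (not the library code) of the name parsing `dtype_byte_size` performs, and why the float8 dtypes need care: the `_e4m3fn` / `_e5m2` suffixes mean the bit-width is no longer at the end of the dtype name:

```python
import re

import torch  # float8 dtypes require a recent torch release

def approx_dtype_byte_size(dtype: torch.dtype) -> float:
    """Bytes per element for a torch dtype (illustrative re-implementation)."""
    if dtype == torch.bool:
        return 1 / 8
    # Match the bit-width even when a suffix follows it, as in float8_e4m3fn.
    match = re.search(r"[^\d](\d+)(_.*)?$", str(dtype))
    if match is None:
        raise ValueError(f"`dtype` is not a valid dtype: {dtype}.")
    return int(match.group(1)) / 8

print(approx_dtype_byte_size(torch.float16))        # 2.0
print(approx_dtype_byte_size(torch.float8_e4m3fn))  # 1.0
print(approx_dtype_byte_size(torch.float8_e5m2))    # 1.0
```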
  4. 25 Apr, 2024 5 commits
    • Fix Llava for 0-embeddings (#30473) · e60491ad
      Raushan Turganbay authored
    • Introduce Stateful Callbacks (#29666) · ad697f18
      Zach Mueller authored
      
      
      * Introduce saveable callbacks
      
      * Add note
      
      * Test for non-present and flag
      
      * Support early stopping and refusing to train further
      
      * Update docstring
      
      * More saving
      
      * Import oopsie
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Make it go through TrainingArguments
      
      * Document
      
      * Fix test
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Rework to allow for duplicates
      
      * Clean
      
      * Fix failing tests
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
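For context, a hedged sketch of opting in from the training arguments; the flag name `restore_callback_states_from_checkpoint` follows the direction of this PR and should be treated as an assumption:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Assumption: flag introduced by this PR. When resuming from a checkpoint, stateful
# callbacks (e.g. EarlyStoppingCallback's patience counter) restore their saved state
# instead of starting fresh.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    restore_callback_states_from_checkpoint=True,
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
# trainer.train(resume_from_checkpoint=True)
```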
    • Add WSD scheduler (#30231) · 7b1170b0
      Alexander Visheratin authored
      * Added WSD scheduler.
      
      * Added tests.
      
      * Fixed errors.
      
      * Fix formatting.
      
      * CI fixes.
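For context, a hedged sketch of the warmup-stable-decay schedule; the exported name `get_wsd_schedule` and its argument names are assumptions based on this PR:

```python
import torch

from transformers import get_wsd_schedule  # assumption: helper exported under this name

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Warmup-Stable-Decay: linearly warm up, hold the peak learning rate, then decay.
scheduler = get_wsd_schedule(
    optimizer,
    num_warmup_steps=100,
    num_stable_steps=800,
    num_decay_steps=100,
)

for _ in range(1000):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())
```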
    • 🚨 Add training compatibility for Musicgen-like models (#29802) · 90cb55bf
      Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
      * correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to notdoctested
      
      * Add suggestion from Sanchit
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct get_unconditional_inputs docstrings
      
      * modify README according to Sanchit's feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logic
      
      * modify license at the beginning
      
      * refactor all_self_attns->all_attentions
      
      * remove ignore copy from causallm generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback-> better naming
      
      * refactor input_values->input_features
      
      * refactor input_values->input_features and fix import fe
      
      * add input_features to docstrings
      
      * correct inputs_embeds logic
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation ->_prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * add draft training
      
      * fix cross entropy
      
      * clean loss computation
      
      * fix labels
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve is-in logic and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update converting script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace musicgen melody checkpoints with official orga
      
      * rename ylacombe->facebook in checkpoints
      
      * fix copies
      
      * remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      * update training code
      
      * add training tests for melody
      
      * add training for o.g musicgen
      
      * fix copied from
      
      * remove final todos
      
      * make style
      
      * fix style
      
      * add suggestions from review
      
      * add ref to the original loss computation code
      
      * rename method + fix labels in tests
      
      * make style
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
    • amyeroberts · aca4a103
  5. 24 Apr, 2024 5 commits
  6. 23 Apr, 2024 4 commits
  7. 22 Apr, 2024 6 commits
  8. 19 Apr, 2024 8 commits
    • Add TF swiftformer (#23342) · d2cec09b
      João David authored
      
      
      * Duplicate swiftformer
      
      * Convert SwiftFormerPatchEmbedding
      
      * Convert SwiftFormerEmbeddings
      
      * Convert TFSwiftFormerMlp
      
      * Convert TFSwiftFormerConvEncoder
      
      * Convert TFSwiftFormerLocalRepresentation
      
      * convert TFSwiftFormerEncoderBlock
      
      * Convert SwiftFormerStage
      
      * Convert SwiftFormerEncoder
      
      * Add TFSwiftFormerPreTrainedModel
      
      * Convert SwiftFormerForImageClassification
      
      * Add kwargs and start drop path
      
      * Fix syntax
      
      * Change Model class name
      
      * Add TFSwiftFormer to __init__
      
      * Duplicate test_modeling_swiftformer
      
      * First test conversions
      
      * Change require_torch to require_tf
      
      * Add exports to swiftformer __init__
      
      * Add TFSwiftFormerModel wrapper
      
      * Fix __init__ and run black
      
      * Remove docstring from MainLayer, fix padding
      
      * Use keras.layers.Activation on keras.Sequential
      
      * Fix swiftformer exports
      
      * Fix activation layer from config
      
      * Remove post_inits
      
      * Use tf.keras.layers.ZeroPadding2D
      
      * Convert torch normalize
      
      * Change tf test input shape
      
      * Fix softmax and reduce_sum
      
      * Convert expand_dims and repeat
      
      * Add missing reshape and transpose
      
      * Simplify TFSwiftFormerEncoderBlock.call
      
      * Fix mismatch in patch embeddings
      
      * Fix expected output shape to match channels last
      
      * Fix swiftformer typo
      
      * Disable test_onnx
      
      * Fix TFSwiftFormerForImageClassification call
      
      * Add unpack inputs
      
      * Convert flatten(2).mean(-1)
      
      * Change vision dummy inputs (to be reviewed)
      
      * Change test_forward_signature to use .call
      
      * Fix @unpack_inputs
      
      * Set return_tensors="tf" and rename class
      
      * Rename wrongly named patch_embeddings layer
      
      * Add serving_output and change dummy_input shape
      
      * Make dimensions BCHW and transpose inside embedding layer
      
      * Change SwiftFormerEncoderBlock
      
      * Fix ruff problems
      
      * Add image size to swiftformer config
      
      * Change transpose to MainLayer and use -1 for reshape
      
      * Remove serving_outputs and dummy_inputs
      
      * Remove test_initialization test from tf model
      
      * Make Sequential component a separate layer
      
      * Fix layers' names
      
      * Transpose encoder outputs
      
      * Fix tests and check if hidden states is not None
      
      * Fix TFSwiftFormerForImageClassification
      
      * Run make fixup
      
      * Run make fix-copies
      
      * Update modeling_tf_auto
      
      * Update docs
      
      * Fix modeling auto mapping
      
      * Update modeling_tf_swiftformer docs
      
      * Fill image_size doc and type
      
      * Add reduction=None to loss computation
      
      * Update docs
      
      * make style
      
      * Debug: Delete the tip to see if that changes anything
      
      * Re-add tip
      
      * Remove add_code_sample_docstrings
      
      * Remove unused import
      
      * Get the debug to actually tell us the problem it has with the docs
      
      * Try a substitution to match the PyTorch file?
      
      * Add swiftformer to ignore list
      
      * Add build() methods
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Remove FIXME comment
      
      * Remove from_pt
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Rename one-letter variables
      
      * Remove FIXMEs related to momentum
      
      * Remove old TODO comment
      
      * Remove outstanding FIXME comments
      
      * Get dropout rate from config
      
      * Add specific dropout config for MLP
      
      * Add convencoder dropout to config
      
      * Pass config to SwiftFormerDropPath layer
      
      * Fix drop_path variable name and add Adapted from comment
      
      * Run ruff
      
      * Removed copied from comment
      
      * Run fix copies
      
      * Change drop_path to identity to match pt
      
      * Cleanup build() methods and move to new keras imports
      
      * Update docs/source/en/model_doc/swiftformer.md
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Raise error if drop_path_rate > 0.0
      
      * Apply suggestions from code review
      
      Replace (self.dim), with self.dim,
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove drop_path function
      
      * Add training to TFSwiftFormerEncoder
      
      * Set self.built = True last
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Should have been added to previous commit
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Change default_feature_extractor to default_image_processor
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Import Keras from modeling_tf_utils
      
      * Remove relative import
      
      * Run ruff --fix
      
      * Move import keras to tf_available
      
      * Add copied from comment to test_forward_signature
      
      * Reduce batch size and num_labels
      
      * Extract loss logic to hf_compute_loss
      
      * Run ruff format
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
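For context, a hedged sketch of running the new TensorFlow port for image classification; the checkpoint name is an assumption and is expected to provide (or auto-convert to) TF weights:

```python
import requests
from PIL import Image

from transformers import AutoImageProcessor, TFSwiftFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")  # assumed checkpoint
model = TFSwiftFormerForImageClassification.from_pretrained("MBZUAI/swiftformer-xs")

inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(logits.numpy().argmax(-1)[0])
print(model.config.id2label[predicted])
```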
    • Fix config + attn_implementation in AutoModelForCausalLM.from_pretrained (#30299) · 21c912e7
      hoshi-hiyouga authored
      * Update modeling_utils.py
      
      * Update test_modeling_utils.py
      
      * Update test_modeling_utils.py
      
      * Update test_modeling_utils.py
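For context, the fix concerns passing an explicit `config` object together with `attn_implementation` to `from_pretrained`; a small sketch with an illustrative checkpoint:

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("gpt2")

# Previously, attn_implementation could be ignored when an explicit config was also
# passed; both should now be honored together.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    config=config,
    attn_implementation="eager",
)
print(model.config._attn_implementation)  # "eager"
```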
    • Do not remove half seq length in generation tests (#30016) · b1cd4874
      Raushan Turganbay authored
      
      
      * remove seq length from generation tests
      
      * style and quality
      
      * [test_all] & PR suggestion
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update tests/generation/test_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * [test all] remove unused variables
      
      ---------
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
    • Update unwrap from accelerate (#29933) · b4fd49b6
      Marc Sun authored
      
      
      * Use unwrap with the one in accelerate
      
      * oups
      
      * update unwrap
      
      * fix
      
      * wording
      
      * raise error instead
      
      * comment
      
      * doc
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Zach Mueller <muellerzr@gmail.com>
      
      * style
      
      * put else
      
      ---------
      Co-authored-by: Zach Mueller <muellerzr@gmail.com>
    • [Whisper] Fix slow tests (#30152) · 4ed0e51c
      Sanchit Gandhi authored
      
      
      * fix tests
      
      * style
      
      * more fixes
      
      * move model to device
      
      * move logits to cpu
      
      * update expected values
      
      * use ungated dataset
      
      * fix
      
      * fix
      
      * update
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Sanchit Gandhi · cd09a8df
    • Enable multi-device for some models (#30207) · 30b45320
      Jacky Lee authored
      
      
      * feat: multidevice for resnet
      
      * feat: yes! resnet
      
      * fix: compare all elements in tuple
      
      * feat: support for regnet
      
      * feat: support for convnextv2
      
      * feat: support for bit
      
      * feat: support for cvt
      
      * feat: add support for focalnet
      
      * feat: support for yolos
      
      * feat: support for glpn
      
      * feat: support for imagegpt
      
      * feat: support for levit
      
      * feat: support for mgp_str
      
      * feat: support for mobilenet_v1
      
      * feat: support for mobilenet_v2
      
      * feat: support for mobilevit
      
      * feat: support for mobilevitv2
      
      * feat: support for poolformer
      
      * fix: copies
      
      * fix: code quality check
      
      * update: upstream changes from main
      
      * fix: consistency check
      
      * feat: support for sam
      
      * feat: support for switchformer
      
      * feat: support for swin
      
      * feat: support for swinv2
      
      * feat: support for timesformer
      
      * feat: support for trocr
      
      * feat: support for upernet
      
      * fix: check copies
      
      * update: rerun CI
      
      * update: rerun again, maybe
      
      * update: one more rerun
      
      ---------
      Co-authored-by: Jacky Lee <jackylee328@gmail.com>
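For context, "multi-device" here means these vision models can now be sharded with `device_map="auto"` (via Accelerate); a hedged sketch with an illustrative checkpoint:

```python
from transformers import AutoModelForImageClassification

# Requires `accelerate`; with the models' no-split modules declared, layers can be
# dispatched automatically across the available GPUs (and CPU if needed).
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/resnet-50",
    device_map="auto",
)
print(model.hf_device_map)
```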
    • [UDOP] Add special tokens to tokenizer (#29594) · ecfe9be7
      NielsRogge authored
      * Add special tokens
      
      * Add special tokens
      
      * Use fmt
      
      * Uncomment code
      
      * Add test
      
      * Remove scripts
      
      * Address comments
      
      * Improve tests
      
      * Address comment
      
      * Remove flag