1. 14 May, 2024 1 commit
  2. 13 May, 2024 4 commits
    • Port IDEFICS to tensorflow (#26870) · 94306352
      Alazar authored
      
      
      * Initial commit
      
      * Just a copy of modeling_idefics.py that will be ported to TF
      
      * - Prepend TF to the name of all classes
      - Convert pytorch ops to TF (not all operations are converted yet)
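
      (For context, a few representative PyTorch-to-TF op translations of the kind this step performs; these are standard equivalents, not the PR's actual diff.)

      ```python
      import tensorflow as tf

      x = tf.random.normal((2, 3, 4))
      y = tf.concat([x, x], axis=1)        # torch.cat([x, x], dim=1)
      z = tf.transpose(x, perm=[0, 2, 1])  # x.permute(0, 2, 1)
      w = tf.reshape(x, (2, -1))           # x.view(2, -1)
      m = tf.where(x > 0, 0.0, x)          # x.masked_fill(x > 0, 0.0)
      ```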
      
      * Add TF imports
      
      * Add autotranslated files
      
      * Add TF classes to model_tf_auto.py
      
      * Add the TF classes in model_doc
      
      * include auto-translated code
      
      * Adopted from auto-translated version
      
      * Add a forgotten super().build
      
      * Add test code for TF version.
      
      * Fix indentation and load pytorch weights for now
      
      * Some fixes. Many tests are still failing but some are passing now.
      
      - I have added TODOs for some of the hacks I made to unblock myself,
        and I will address them soon
      - processing_idefics.py is temporarily hacked in my working copy to support TF
      
      * Add ALL_LAYERNORM_LAYERS to match pytorch
      
      * Revert "Add ALL_LAYERNORM_LAYERS to match pytorch"
      
      This reverts commit 7e0a35119b4d7a6284d04d8c543fba1b29e573c9 as it
      is not needed in the tf implementation.
      
      * Fix freeze_relevant_params()
      
      * Some more fixes
      
      * Fix test_attention_outputs
      
      * Add tf stuff to processing_idefics.py
      
      processing_idefics.py supports both pytorch and tf now.
      
      test_processor_idefics.py for PyTorch is passing, so I didn't break anything,
      but there are still some issues with TF. I also need to add TF tests in
      test_processor_idefics.py.
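
      (A minimal sketch of the dual-framework dispatch described above; the function and its structure are illustrative, not the actual processing_idefics.py code.)

      ```python
      def stack_images(images, return_tensors="pt"):
          # Dispatch on return_tensors so one processor serves both frameworks.
          if return_tensors == "pt":
              import torch
              return torch.stack([torch.as_tensor(im) for im in images])
          if return_tensors == "tf":
              import tensorflow as tf
              return tf.stack([tf.convert_to_tensor(im) for im in images])
          raise ValueError(f"Unsupported return_tensors: {return_tensors!r}")
      ```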
      
      * Pass return_tensors to image processing code and fix test
      
      * Pass return_tensors to the image processor __init__
      
      * Fix several test cases
      
      - Make inputs to some of the forward passes of type `TFModelInputType`
      - Decorate main layer forward pass with `@unpack_inputs`
      - Decorate main layer with `@keras_serializable`
      - Pass `inputs` to TFIdeficsModel
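
      (Sketch of the decorator pattern listed above, using the real transformers TF utilities; the config and layer names are placeholders, not the IDEFICS code.)

      ```python
      import tensorflow as tf
      from transformers import PretrainedConfig
      from transformers.modeling_tf_utils import keras_serializable, unpack_inputs

      class ExampleConfig(PretrainedConfig):
          model_type = "example"

      @keras_serializable
      class TFExampleMainLayer(tf.keras.layers.Layer):
          config_class = ExampleConfig  # required by @keras_serializable

          def __init__(self, config, **kwargs):
              super().__init__(**kwargs)
              self.config = config

          @unpack_inputs
          def call(self, input_ids=None, attention_mask=None, training=False):
              # @unpack_inputs normalizes dict/tuple/keyword calling conventions.
              return input_ids
      ```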
      
      * Some more fixes forgotten in last commit
      
      * Fix processing code and vision_tf.py
      
      * Fix perceiver bug
      
      * Import from
      
      * Auto-add build() methods + style pass
      
      * Fix build() errors due to `None` being passed as shape to some layers
      
      * Change name in TFIdeficsForVisionText2Text to attribute in IdeficsForVisionText2Text
      
      * Fix pytorch weights load for tf2
      
      There were a lot of `name=` missing in weight initialization code.
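
      (Why the missing name= mattered: PyTorch-to-TF cross-loading matches weights by variable name paths, so every sublayer needs an explicit name=. The layer below is a made-up illustration.)

      ```python
      import tensorflow as tf

      class ExampleAttention(tf.keras.layers.Layer):
          def __init__(self, hidden_size, **kwargs):
              super().__init__(**kwargs)
              # Without name="q_proj", Keras auto-names this "dense", "dense_1", ...,
              # and the PyTorch weight "...q_proj.weight" can no longer be matched.
              self.q_proj = tf.keras.layers.Dense(hidden_size, use_bias=False, name="q_proj")
      ```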
      
      * Attempt to fix CI
      
      * Add back accidentally removed line
      
      * Remove torch-specific stuff from the TF test file
      
      * make fix-copies, make style, remove autotranslated files
      
      * Fixes to imports/docstrings
      
      * Let's try the `from __future__` import in desperation
      
      * Fix the core random_attention_mask fn to match the torch/flax behaviour
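
      (The torch/flax behaviour being matched, rendered here in NumPy for illustration rather than as the exact library code: a random 0/1 mask whose last position is forced to 1 so every row attends to at least one token.)

      ```python
      import numpy as np

      def random_attention_mask(shape, rng=None):
          rng = rng or np.random.default_rng(0)
          mask = rng.integers(0, 2, size=shape)
          mask[:, -1] = 1  # guarantee at least one attended token per sequence
          return mask
      ```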
      
      * Clean random_attention_mask up correctly
      
      * Remove torch-only test
      
      * Fix loss shape, couple of nits
      
      * make style
      
      * Don't test for OOB embeddings because IDEFICS uses those deliberately
      
      * Fix loss computation to handle masking
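
      (A minimal sketch of masking-aware loss computation in TF, assuming the usual convention that label -100 marks positions to ignore; not the exact IDEFICS code.)

      ```python
      import tensorflow as tf

      def masked_ce_loss(labels, logits):
          loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
              from_logits=True, reduction="none"
          )
          active = labels != -100
          per_token = loss_fn(tf.where(active, labels, 0), logits)
          weights = tf.cast(active, per_token.dtype)
          return tf.reduce_sum(per_token * weights) / tf.maximum(tf.reduce_sum(weights), 1.0)
      ```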
      
      * Fix test failures when flattening
      
      * Fix some test failures
      
      - Add cross-attention gate, which was missing and wasn't being passed around
      - Fix overwriting of image_attention_mask due to a hack I had for dummy inputs
      
      * Add a proper stateless scaled_dot_product_attention
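
      (A textbook sketch of what a stateless scaled dot-product attention looks like in TF; the actual function in this PR may differ in details such as mask handling.)

      ```python
      import tensorflow as tf

      def scaled_dot_product_attention(query, key, value, attn_mask=None):
          # Stateless: a pure function of its inputs, no layer weights or cached state.
          d_k = tf.cast(tf.shape(key)[-1], query.dtype)
          scores = tf.matmul(query, key, transpose_b=True) / tf.sqrt(d_k)
          if attn_mask is not None:
              scores += attn_mask  # additive mask: large negative where attention is blocked
          weights = tf.nn.softmax(scores, axis=-1)
          return tf.matmul(weights, value)
      ```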
      
      * make style
      
      * Adding missing attribute from the PyTorch version
      
      * Small cleanups to the DecoupledLinear layer in case that helps
      
      * Pass epsilon to LayerNormalization
      
      * Attempt to fix pytorch weight cross-loading for TFIdeficsEmbedding
      
      * Fix a bug in TFIdeficsGatedCrossAttentionLayer
      
      * Patching up build() methods
      
      * Constant self.inv_freq
      
      * Constant self.inv_freq
      
      * First working version
      
      The TF implementation works now; there was a bug in TFIdeficsDecoupledLinear
      where the weights were mis-initialized as (in_features, out_features)
      when they should be (out_features, in_features).

      I have tested this so far with tiny-random and idefics-9b-instruct
      and it gives correct output.
      
      I also dumped the final outputs for both pytorch and TF
      and they are identical.
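
      (The shape convention at issue, shown with PyTorch itself: nn.Linear stores its weight as (out_features, in_features), so a TF port that cross-loads those weights must build its variable with the same layout or transpose at load time.)

      ```python
      import torch

      linear = torch.nn.Linear(in_features=4, out_features=8)
      print(linear.weight.shape)  # torch.Size([8, 4]), i.e. (out_features, in_features)
      ```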
      
      * Fix some test failures
      
      * remove print statement
      
      * Fix return_tensors
      
      * Fix CI test failure check_code_quality
      
      * Attempt to fix CI failures by running `make fixup`
      
      The hardcoded IDs in test_modeling_tf_idefics.py are for the integration
      test; they make that file unreadable and should probably be moved to a separate file.
      
      * Attempt to fix tests_pr_documentation_tests
      
      * Fix a test failure in test_image_processing_idefics.py
      
      * Fix test test_pt_tf_model_equivalence
      
      * Fix a few failures
      
      * Tiny fix
      
      * Some minor fixes
      
      * Remove a duplicate test
      
      * Override a few test failures for IDEFICS
      
      - `test_keras_save_load` is passing now
      - `test_compile_tf_model` is still failing
      
      * Fix processing_idefics.py after rebase
      
      * Guard import keras with is_tf_available
      
      * fix check code quality
      
      * fix check code quality
      
      * Minor fixes
      
      * Skip test_save_load temporarily
      
      This test passed on my local box but fails on the CI, skipping
      for now to see if there are other remaining failures on the CI.
      
      * Run `ruff format tests src utils`
      
      * Fix last failing test, `test_compile_tf_model`
      
      * Add fixes for vision_tf.py
      
      I forgot to add this file in last commit.
      
      * Minor fixes
      
      * Replace "<<<" with "<<" for doc tests
      
      IDEFICS-9B is too big for the doctest runner, so don't run it there
      
      * Make code more readable
      
      * Fix bug after code review
      
      I added a layer_norm_eps to IdeficsConfig but I don't even need it
      since the vision config has a layer_norm_eps.
      
      * Fix after code review
      
      Use the original tokenizer.convert_tokens_to_ids code
      
      * Keep PyTorch as the default return_tensors
      
      * Fixes to modeling_tf after code review
      
      * Fixes from code review
      
      - Remove all references of `TF_IDEFICS_PRETRAINED_MODEL_ARCHIVE_LIST`
      - Pass 1e-5 to LayerNormalization in perceiver
      
      * Run ruff
      
      * Undo a change
      
      * Refactor processing code after Matt's suggestion
      
      * Remove TODO's that aren't needed anymore
      
      * For PyTorch, use the original processing code from main
      
      Since this PR is a TF port, it shouldn't make any modifications
      to the PyTorch IDEFICS code. This change undoes the PyTorch processing
      modifications I made and uses the original code from main.
      
      * Update tests/models/idefics/test_modeling_idefics.py
      
      * Update tests/models/idefics/test_modeling_tf_idefics.py
      
      * Add missing imports for is_pt_tf_cross_test
      
      * [DO NOT MERGE]: This is a commit for debugging and will be reverted
      
      The cross test `test_pt_tf_model_equivalence` passes locally but
      fails when running on the CI. This commit is to help debug that
      and will be reverted.
      
      * Revert "[DO NOT MERGE]: This is a commit for debugging and will be reverted"
      
      This reverts commit 8f0d709ec5bd46685fb0b4259d914ffee794875b.
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 998cc38b8c3d313bf5e5eb55a7f5b7b881897b89.
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 1c695ac4219c4ae4d39b330b01744dc27deb7dd4.
      
      * Don't skip test_save_load
      
      IIRC test_save_load was also failing on the CI but not on my local
      box; it might be easier to debug that on the CI first than the cross tests.
      
      * Debugging commit, will be reverted
      
      * Revert "Debugging commit, will be reverted"
      
      This reverts commit 8eafc8e41e20c4e95a3a90834f06a6e9f445e2d5.
      
      * Override `test_save_load` and push model to save
      
      Maybe this will help me repro this weird bug
      
      * pass my repo_id
      
      * add endpoint
      
      * Pass a temp (write) token just for this CI
      
      * Undo last few commits, still pushing to hub for model debugging
      
      The issue seems to be with save_pretrained(): when I looked at the model saved
      from the CI test failure, it was basically empty and had no weights.
      `self.save_weights(..)` seems to be failing in save_pretrained but needs
      more debugging.
      
      * Add logging to modeling tf utils, will be reverted just for debugging
      
      * Debugging, will revert
      
      * Revert "Debugging, will revert"
      
      This reverts commit 9d0d3075fb7c82d8cde3a5c76bc8f3876c5c55d3.
      
      * Revert "Add logging to modeling tf utils, will be reverted just for debugging"
      
      This reverts commit 774b6b7b1c17b3ce5d7634ade768f2f686cee617.
      
      * Remove `test_save_load`
      
      The CI failures are gone after my latest rebase, though I have no idea why.
      I was still saving the model to my hub on HF, and the tf_model.h5
      file now has everything.
      
      * Run make fix-copies
      
      * Run ruff format tests src utils
      
      * Debugging commit, will be reverted
      
      * Run ruff, also trigger CI run
      
      * Run ruff again
      
      * Undo debugging commit
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
    • Llama: fix custom 4D masks, v2 (#30348) · a0779b9e
      Poedator authored
      
      
      * 4d mask fixes
      
      * Update custom 4D mask logic
      
      * test moved to mixin
      
      * extra tests 4d mask
      
      * update 4D mask and StaticCache handling
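
      (Sketch of what a "custom 4D mask" means here: instead of a (batch, seq_len) padding mask, the caller builds a full (batch, 1, q_len, kv_len) mask and passes it as attention_mask. The additive float convention below, 0 where attention is allowed and a large negative value where blocked, is an assumption; the tests added in this PR define the exact contract.)

      ```python
      import torch

      q_len = kv_len = 4
      mask_4d = torch.zeros(1, 1, q_len, kv_len)
      blocked = torch.triu(torch.ones(q_len, kv_len, dtype=torch.bool), diagonal=1)
      mask_4d = mask_4d.masked_fill(blocked, torch.finfo(torch.float32).min)
      # model(input_ids, attention_mask=mask_4d)  # used in place of the usual 2D mask
      ```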
      
      * added Mask4DTestHard to mistral tests
      
      * post-rebase fixes
      
      * test fixes for StaticCache
      
      * make fix-copies
      
      * update 1 after #30476
      
      * fix common tests
      
      * rm elif attention_mask.dim() == 4:
      
      * tests combined, fixed, mixtral supported
      
      * reverted BigBird style change
      
      * rm if attention_mask.dim() == 2
      
      * modeling_llama formatting change
      
      ---------
      Co-authored-by: Joao Gante <joao@huggingface.co>
    • Support for Falcon2-11B (#30771) · e52741f6
      Nilabhra Roy Chowdhury authored
      
      
      * remove unrelated changes
      
      * remove unrelated changes on phi and stable LM
      
      * add: Test for Falcon 10B
      
      * fix: formatting
      
      * fix: loading the Falcon 10B in 8-bit precision using bitsandbytes.
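
      (The standard bitsandbytes 8-bit loading path being fixed here; the checkpoint name is illustrative.)

      ```python
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig

      model = AutoModelForCausalLM.from_pretrained(
          "tiiuae/falcon-11B",  # illustrative checkpoint
          quantization_config=BitsAndBytesConfig(load_in_8bit=True),
          device_map="auto",
      )
      ```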
      
      * fix: device placement
      
      * fix: broken tests.
      
      * fix: backwards compatibility for falcon 1B architecture.
      
      * chore: updated test.
      
      * chore: test_modeling_falcon.py to use the 11B model.
      
      * chore: minor edit
      
      * chore: formatting.
      
      ---------
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
    • Blip dynamic input resolution (#30722) · f63d8222
      Zafir Stojanovski authored
      * blip with interpolated pos encoding
      
      * feat: Add interpolate_pos_encoding option to other models from `BLIP` family.
      
      * include a check for the generated text content in tests
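
      (A rough usage sketch of the new option; the exact argument plumbing is an assumption based on the PR description.)

      ```python
      import requests
      from PIL import Image
      from transformers import BlipProcessor, BlipForConditionalGeneration

      processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
      model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)
      # Preprocess at a non-default resolution, then let the model interpolate its
      # position embeddings instead of erroring on the size mismatch.
      pixel_values = processor.image_processor(
          image, return_tensors="pt", size={"height": 512, "width": 512}
      ).pixel_values
      text_inputs = processor.tokenizer("a photo of", return_tensors="pt")
      outputs = model(
          input_ids=text_inputs.input_ids,
          pixel_values=pixel_values,
          interpolate_pos_encoding=True,
      )
      ```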
  3. 09 May, 2024 5 commits
  4. 08 May, 2024 1 commit
  5. 07 May, 2024 1 commit
  6. 06 May, 2024 1 commit
    • [`CI update`] Try to use dockers and no cache (#29202) · 307f632b
      Arthur authored
      
      
      * change CIs
      
      * nits
      
      * update
      
      * minor updates
      
      * [push-ci-image]
      
      * nit [push-ci-image]
      
      * nitsssss
      
      * [build-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * both
      
      * [push-ci-image]
      
      * this?
      
      * [push-ci-image]
      
      * pypi-kenlm needs g++
      
      * [push-ci-image]
      
      * nit
      
      * more nits [push-ci-image]
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add vision
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add new dummy file but will need to update them [push-ci-image]
      
      * [push-ci-image]
      
      * show package size as well
      
      * [push-ci-image]
      
      * potentially ignore failures
      
      * workflow updates
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * fix consistency
      
      * clean nvidia triton
      
      * also show big packages [push-ci-image]
      
      * nit
      
      * update
      
      * another one
      
      * line escape?
      
      * add accelerate [push-ci-image]
      
      * updates [push-ci-image]
      
      * nits to run tests, no push-ci
      
      * try to parse the skip reason to make sure nothing is skipped that should not be skipped
      
      * nit?
      
      * always show skipped reasons
      
      * nits
      
      * better parsing of the test outputs
      
      * action="store_true",
      
      * failure on failed
      
      * show matched
      
      * debug
      
      * update short summary with skipped, failed and errors
      
      * nits
      
      * nits
      
      * cool updates
      
      * remove docbuilder
      
      * fix
      
      * always run checks
      
      * oups
      
      * nits
      
      * don't error out on library printing
      
      * non-zero exit codes
      
      * no warning
      
      * nit
      
      * WAT?
      
      * format nit
      
      * [push-ci-image]
      
      * fail if fail is needed
      
      * [push-ci-image]
      
      * sound file for torch light?
      
      * [push-ci-image]
      
      * order is important [push-ci-image]
      
      * [push-ci-image] reduce even further
      
      * [push-ci-image]
      
      * use pytest rich !
      
      * yes [push-ci-image]
      
      * oupsy
      
      * bring back the full traceback, but pytest rich should help
      
      * nit
      
      * [push-ci-image]
      
      * re run
      
      * nit
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * empty push to trigger
      
      * [push-ci-image]
      
      * nit? [push-ci-image]
      
      * empty
      
      * try to install timm with no deps
      
      * [push-ci-image]
      
      * oups [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image] ?
      
      * [push-ci-image] open ssh client for git checkout fast
      
      * empty for torch light
      
      * updates [push-ci-image]
      
      * nit
      
      * @v4 for checkout
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix fetch tests with parallelism
      
      * [push-ci-image]
      
      * more parallelism
      
      * nit
      
      * more nits
      
      * empty to re-trigger
      
      * empty to re-trigger
      
      * split by timing
      
      * did not work with previous commit
      
      * junit.xml
      
      * no path?
      
      * mmm this?
      
      * junitxml format
      
      * split by timing
      
      * nit
      
      * fix junit family
      
      * now we can test if the xunit1 is compatible!
      
      * this?
      
      * fully list tests
      
      * update
      
      * update
      
      * oups
      
      * finally
      
      * use classname
      
      * remove working directory to make sure the path does not interfere
      
      * okay, now junit should have the correct path
      
      * name split?
      
      * sorting by classname is what makes most sense
      
      * some testing
      
      * name
      
      * oups
      
      * test something fun
      
      * autodetect
      
      * 18?
      
      * nit
      
      * file size?
      
      * uip
      
      * 4 is best
      
      * update to see versions
      
      * better print
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * please install the correct keras version
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * uv is fucking me up
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * install issues and pins
      
      * tapas as well
      
      * nits
      
      * more parallelism
      
      * short tb
      
      * soundfile
      
      * soundfile
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * oups
      
      * [push-ci-image]
      
      * fix some things
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * use torch-light for hub
      
      * small git lfs for hub job
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix tf tapas
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * don't update the test
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * no use them
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * update tf proba
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * woops
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * test with built dockers
      
      * [push-ci-image]
      
      * skip annoying tests
      
      * revert fix copy
      
      * update test values
      
      * update
      
      * last skip and fixup
      
      * nit
      
      * ALL GOOOD
      
      * quality
      
      * Update tests/models/layoutlmv2/test_image_processing_layoutlmv2.py
      
      * Update docker/quality.dockerfile
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Update src/transformers/models/tapas/modeling_tf_tapas.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * use torch-speed
      
      * updates
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fuck ken-lm [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      ---------
      Co-authored-by: Lysandre Debut <hi@lysand.re>
  7. 02 May, 2024 2 commits
  8. 01 May, 2024 3 commits
  9. 30 Apr, 2024 3 commits
  10. 26 Apr, 2024 3 commits
    • [SegGPT] Fix seggpt image processor (#29550) · 6d4cabda
      Eduardo Pacheco authored
      * Fixed SegGptImageProcessor to handle 2D and 3D prompt mask inputs
      
      * Added new test to check prompt mask equivalence
      
      * New proposal
      
      * Better proposal
      
      * Removed unnecessary method
      
      * Updated seggpt docs
      
      * Introduced do_convert_rgb
      
      * nits
    • [`DETR`] Remove timm hardcoded logic in modeling files (#29038) · aafa7ce7
      amyeroberts authored
      
      
      * Enable instantiating model with pretrained backbone weights
      
      * Clarify pretrained import
      
      * Use load_backbone instead
      
      * Add backbone_kwargs to config
      
      * Fix up
      
      * Add tests
      
      * Tidy up
      
      * Enable instantiating model with pretrained backbone weights
      
      * Update tests so backbone checkpoint isn't passed in
      
      * Clarify pretrained import
      
      * Update configs - docs and validation check
      
      * Update src/transformers/utils/backbone_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Clarify exception message
      
      * Update config init in tests
      
      * Add test for when use_timm_backbone=True
      
      * Use load_backbone instead
      
      * Add use_timm_backbone to the model configs
      
      * Add backbone_kwargs to config
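
      (What the new config surface looks like in use; the backbone name and kwargs below are typical choices shown as an assumption, not taken from the PR.)

      ```python
      from transformers import DetrConfig, DetrForObjectDetection

      config = DetrConfig(
          use_timm_backbone=True,
          backbone="resnet50",
          use_pretrained_backbone=False,  # skip downloading timm weights for this sketch
          backbone_kwargs={"out_indices": (1, 2, 3, 4)},
      )
      model = DetrForObjectDetection(config)
      ```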
      
      * Pass kwargs to constructors
      
      * Draft
      
      * Fix tests
      
      * Add back timm - weight naming
      
      * More tidying up
      
      * Whoops
      
      * Tidy up
      
      * Handle when kwargs are none
      
      * Update tests
      
      * Revert test changes
      
      * Deformable detr test - don't use default
      
      * Don't mutate; correct model attributes
      
      * Add some clarifying comments
      
      * nit - grammar is hard
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
    • [`BERT`] Add support for sdpa (#28802) · dfa7b580
      JB (Don) authored
      * Adding SDPA support for BERT
      
      * Using the proper input name for testing model input in inference()
      
      * Adding documentation for SDPA in BERT model page
      
      * Use the stable link for the documentation
      
      * Adding a gate to only call .contiguous() for torch < 2.2.0
      
      * Additions and fixes to the documentation
      
      * Minor updates to documentation
      
      * Adding extra requirements needed for the contiguous() bug
      
      * Adding "Adapted from" in plcae of the "Copied from"
      
      * Add benchmark speedup tables to the documentation
      
      * Minor fixes to the documentation
      
      * Use ClapText as a replacement for Bert in the Copied-From
      
      * Some more fixes for the fix-copies references
      
      * Overriding the test_eager_matches_sdpa_generate in bert tests to not load with low_cpu_mem_usage
      
      [test all]
      
      * Undo changes to separate test
      
      * Refactored SDPA self attention code for KV projections
      
      * Change use_sdpa to attn_implementation
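
      (The end-user effect of that rename: SDPA is now selected through the standard attn_implementation argument.)

      ```python
      import torch
      from transformers import AutoModel

      model = AutoModel.from_pretrained(
          "bert-base-uncased",
          attn_implementation="sdpa",
          torch_dtype=torch.float16,
      )
      ```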
      
      * Fix test_sdpa_can_dispatch_on_flash by preparing input (required for MultipleChoice models)
  11. 25 Apr, 2024 3 commits
    • Fix Llava for 0-embeddings (#30473) · e60491ad
      Raushan Turganbay authored
    • 🚨 Add training compatibility for Musicgen-like models (#29802) · 90cb55bf
      Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
      * correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to notdoctested
      
      * Add suggestion from Sanchit
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct get_unconditional_inputs docstrings
      
      * modify README according to Sanchit's feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logic
      
      * modify license at the beginning
      
      * refactor all_self_attns->all_attentions
      
      * remove ignore copy from causallm generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback-> better naming
      
      * refactor input_values->input_features
      
      * refactor input_values->input_features and fix import fe
      
      * add input_features to docstrings
      
      * correct inputs_embeds logic
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation ->_prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
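
      (The basic soundfile pattern adopted here, with a synthetic waveform so the snippet is self-contained; the 32 kHz rate is an assumption matching MusicGen's audio codec.)

      ```python
      import numpy as np
      import soundfile as sf

      sampling_rate = 32000
      t = np.linspace(0.0, 1.0, sampling_rate, endpoint=False)
      audio = 0.1 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
      sf.write("musicgen_out.wav", audio, sampling_rate)
      ```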
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * add draft training
      
      * fix cross entropy
      
      * clean loss computation
      
      * fix labels
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve `is in` logic and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update converting script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace musicgen melody checkpoints with official orga
      
      * rename ylacombe->facebook in checkpoints
      
      * fix copies
      
      * remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      * update training code
      
      * add training tests for melody
      
      * add training for o.g musicgen
      
      * fix copied from
      
      * remove final todos
      
      * make style
      
      * fix style
      
      * add suggestions from review
      
      * add ref to the original loss computation code
      
      * rename method + fix labels in tests
      
      * make style
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
    • amyeroberts · aca4a103
  12. 24 Apr, 2024 4 commits
  13. 23 Apr, 2024 3 commits
  14. 22 Apr, 2024 3 commits
  15. 19 Apr, 2024 3 commits
    • Add TF swiftformer (#23342) · d2cec09b
      João David authored
      
      
      * Duplicate swiftformer
      
      * Convert SwiftFormerPatchEmbedding
      
      * Convert SwiftFormerEmbeddings
      
      * Convert TFSwiftFormerMlp
      
      * Convert TFSwiftFormerConvEncoder
      
      * Convert TFSwiftFormerLocalRepresentation
      
      * convert TFSwiftFormerEncoderBlock
      
      * Convert SwiftFormerStage
      
      * Convert SwiftFormerEncoder
      
      * Add TFSwiftFormerPreTrainedModel
      
      * Convert SwiftFormerForImageClassification
      
      * Add kwargs and start drop path
      
      * Fix syntax
      
      * Change Model class name
      
      * Add TFSwiftFormer to __init__
      
      * Duplicate test_modeling_swiftformer
      
      * First test conversions
      
      * Change require_torch to require_tf
      
      * Add exports to swiftformer __init__
      
      * Add TFSwiftFormerModel wrapper
      
      * Fix __init__ and run black
      
      * Remove docstring from MainLayer, fix padding
      
      * Use keras.layers.Activation on keras.Sequential
      
      * Fix swiftformer exports
      
      * Fix activation layer from config
      
      * Remove post_inits
      
      * Use tf.keras.layers.ZeroPadding2D
      
      * Convert torch normalize
      
      * Change tf test input shape
      
      * Fix softmax and reduce_sum
      
      * Convert expand_dims and repeat
      
      * Add missing reshape and transpose
      
      * Simplify TFSwiftFormerEncoderBlock.call
      
      * Fix mismatch in patch embeddings
      
      * Fix expected output shape to match channels last
      
      * Fix swiftformer typo
      
      * Disable test_onnx
      
      * Fix TFSwiftFormerForImageClassification call
      
      * Add unpack inputs
      
      * Convert flatten(2).mean(-1)
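
      (The translation in question, assuming a channels-first (B, C, H, W) tensor: PyTorch's x.flatten(2).mean(-1) collapses the spatial dims and averages over them.)

      ```python
      import tensorflow as tf

      def flatten2_mean(x):
          # TF equivalent of torch's x.flatten(2).mean(-1) for a (B, C, H, W) tensor.
          b, c = tf.shape(x)[0], tf.shape(x)[1]
          return tf.reduce_mean(tf.reshape(x, [b, c, -1]), axis=-1)
      ```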
      
      * Change vision dummy inputs (to be reviewed)
      
      * Change test_forward_signature to use .call
      
      * Fix @unpack_inputs
      
      * Set return_tensors="tf" and rename class
      
      * Rename wrongly named patch_embeddings layer
      
      * Add serving_output and change dummy_input shape
      
      * Make dimensions BCHW and transpose inside embedding layer
      
      * Change SwiftFormerEncoderBlock
      
      * Fix ruff problems
      
      * Add image size to swiftformer config
      
      * Change transpose to MainLayer and use -1 for reshape
      
      * Remove serving_outputs and dummy_inputs
      
      * Remove test_initialization test from tf model
      
      * Make Sequential component a separate layer
      
      * Fix layers' names
      
      * Transpose encoder outputs
      
      * Fix tests and check if hidden states is not None
      
      * Fix TFSwiftFormerForImageClassification
      
      * Run make fixup
      
      * Run make fix-copies
      
      * Update modeling_tf_auto
      
      * Update docs
      
      * Fix modeling auto mapping
      
      * Update modeling_tf_swiftformer docs
      
      * Fill image_size doc and type
      
      * Add reduction=None to loss computation
      
      * Update docs
      
      * make style
      
      * Debug: Delete the tip to see if that changes anything
      
      * Re-add tip
      
      * Remove add_code_sample_docstrings
      
      * Remove unused import
      
      * Get the debug to actually tell us the problem it has with the docs
      
      * Try a substitution to match the PyTorch file?
      
      * Add swiftformer to ignore list
      
      * Add build() methods
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Remove FIXME comment
      
      * Remove from_pt
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Rename one-letter variables
      
      * Remove FIXMEs related to momentum
      
      * Remove old TODO comment
      
      * Remove outstanding FIXME comments
      
      * Get dropout rate from config
      
      * Add specific dropout config for MLP
      
      * Add convencoder dropout to config
      
      * Pass config to SwiftFormerDropPath layer
      
      * Fix drop_path variable name and add Adapted from comment
      
      * Run ruff
      
      * Removed copied from comment
      
      * Run fix copies
      
      * Change drop_path to identity to match pt
      
      * Cleanup build() methods and move to new keras imports
      
      * Update docs/source/en/model_doc/swiftformer.md
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Raise error if drop_path_rate > 0.0
      
      * Apply suggestions from code review
      
      Replace (self.dim), with self.dim,
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove drop_path function
      
      * Add training to TFSwiftFormerEncoder
      
      * Set self.built = True last
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Should have been added to previous commit
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Change default_feature_extractor to default_image_processor
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Import Keras from modeling_tf_utils
      
      * Remove relative import
      
      * Run ruff --fix
      
      * Move import keras to tf_available
      
      * Add copied from comment to test_forward_signature
      
      * Reduce batch size and num_labels
      
      * Extract loss logic to hf_compute_loss
      
      * Run ruff format
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
    • Do not remove half seq length in generation tests (#30016) · b1cd4874
      Raushan Turganbay authored
      
      
      * remove seq length from generation tests
      
      * style and quality
      
      * [test_all] & PR suggestion
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update tests/generation/test_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * [test all] remove unused variables
      
      ---------
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
    • [Whisper] Fix slow tests (#30152) · 4ed0e51c
      Sanchit Gandhi authored
      
      
      * fix tests
      
      * style
      
      * more fixes
      
      * move model to device
      
      * move logits to cpu
      
      * update expected values
      
      * use ungated dataset
      
      * fix
      
      * fix
      
      * update
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>