"git@developer.sourcefind.cn:chenpangpang/parler-tts.git" did not exist on "ee12a8122643112ecce17498d4a8dc39236b9ceb"
  1. 19 Jun, 2024 1 commit
    • [`GPT2`] Add SDPA support (#31172) · b275a410
      Anton Vlasjuk authored
      * `gpt2` sdpa support
      
      * fix (at least) one test, style, repo consistency
      
      * fix sdpa mask in forward --> fixes generation
      
      * test
      
      * test2
      
      * test3
      
      * test4
      
      * simplify shapes for attn mask creation and small comments
      
      * hub fail test
      
      * benchmarks
      
      * flash attn 2 mask should not be inverted on enc-dec setup
      
      * fix comment
      
      * apply some suggestions from code review
      
      - only save _attn_implementation once
      - remove unnecessary comment
      
      * change elif logic
      
      * [run-slow] gpt2
      
      * modify `test_gpt2_sample_max_time` to follow previous assertion patterns
      b275a410
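      Usage sketch for the SDPA path added above — a minimal example, assuming a PyTorch
      recent enough to provide torch.nn.functional.scaled_dot_product_attention:
      ```python
      # Opt in to the SDPA attention backend for GPT-2; "eager" restores the old path.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2", attn_implementation="sdpa")

      inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
      out = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```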
  2. 11 Jun, 2024 1 commit
    • Fast image processor (#28847) · f53fe35b
      amyeroberts authored
      
      
      * Draft fast image processors
      
      * Draft working fast version
      
      * py3.8 compatible cache
      
      * Enable loading fast image processors through auto
      
      * Tidy up; rescale behaviour based on input type
      
      * Enable tests for fast image processors
      
      * Smarter rescaling
      
      * Don't default to Fast
      
      * Safer imports
      
      * Add necessary Pillow requirement
      
      * Woops
      
      * Add AutoImageProcessor test
      
      * Fix up
      
      * Fix test for imagegpt
      
      * Fix test
      
      * Review comments
      
      * Add warning for TF and JAX input types
      
      * Rearrange
      
      * Return transforms
      
      * NumpyToTensor transformation
      
      * Rebase - include changes from upstream in ImageProcessingMixin
      
      * Safe typing
      
      * Fix up
      
      * convert mean/std to tensor to rescale
      
      * Don't store transforms in state
      
      * Fix up
      
      * Update src/transformers/image_processing_utils_fast.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Warn if fast image processor available
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      
      * Transpose incoming numpy images to be in CHW format
      
      * Update mapping names based on packages, auto set fast to None
      
      * Fix up
      
      * Fix
      
      * Add AutoImageProcessor.from_pretrained(checkpoint, use_fast=True) test
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
      
      * Add equivalence and speed tests
      
      * Fix up
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
      f53fe35b
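      Usage sketch for the fast image processors introduced above; use_fast=True opts in
      (ViT was the first model covered, so a ViT checkpoint is a safe example):
      ```python
      # Load the fast (torchvision-backed) variant of the image processor.
      import requests
      from PIL import Image
      from transformers import AutoImageProcessor

      processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)
      inputs = processor(images=image, return_tensors="pt")  # pixel_values for a ViT model
      print(inputs["pixel_values"].shape)
      ```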
  3. 10 Jun, 2024 1 commit
    • Decorators for deprecation and named arguments validation (#30799) · 517df566
      Pavel Iakubovskii authored
      
      
      * Fix do_reduce_labels for maskformer image processor
      
      * Deprecate reduce_labels in favor of do_reduce_labels
      
      * Deprecate reduce_labels in favor of do_reduce_labels (segformer)
      
      * Deprecate reduce_labels in favor of do_reduce_labels (oneformer)
      
      * Deprecate reduce_labels in favor of do_reduce_labels (maskformer)
      
      * Deprecate reduce_labels in favor of do_reduce_labels (mask2former)
      
      * Fix typo
      
      * Update mask2former test
      
      * fixup
      
      * Update segmentation examples
      
      * Update docs
      
      * Fixup
      
      * Imports fixup
      
      * Add deprecation decorator draft
      
      * Add deprecation decorator
      
      * Fixup
      
      * Add deprecate_kwarg decorator
      
      * Validate kwargs decorator
      
      * Kwargs validation (beit)
      
      * fixup
      
      * Kwargs validation (mask2former)
      
      * Kwargs validation (maskformer)
      
      * Kwargs validation (oneformer)
      
      * Kwargs validation (segformer)
      
      * Better message
      
      * Fix oneformer processor save-load test
      
      * Update src/transformers/utils/deprecation.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/utils/deprecation.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/utils/deprecation.py
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      
      * Update src/transformers/utils/deprecation.py
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      
      * Better handle classmethod warning
      
      * Fix typo, remove warn
      
      * Add header
      
      * Docs and `additional_message`
      
      * Move the filter decorator to generic
      
      * Proper deprecation for semantic segm scripts
      
      * Add to __init__ and update import
      
      * Basic tests for filter decorator
      
      * Fix doc
      
      * Override `to_dict()` to pop deprecated `_max_size`
      
      * Pop unused parameters
      
      * Fix trailing whitespace
      
      * Add test for deprecation
      
      * Add deprecation warning control parameter
      
      * Update generic test
      
      * Fixup deprecation tests
      
      * Introduce init service kwargs
      
      * Revert popping unused params
      
      * Revert oneformer test
      
      * Allow "metadata" to pass
      
      * Better docs
      
      * Fix test
      
      * Add note in docstring
      
      * Fix notification for both names
      
      * Add func name to warning message
      
      * Fixup
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      517df566
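      The decorator pattern above can be sketched in a few lines. This is a self-contained
      illustration of the idea, not the exact implementation in
      src/transformers/utils/deprecation.py (parameter names are illustrative):
      ```python
      import functools
      import warnings

      def deprecate_kwarg(old_name, new_name, version):
          """Warn on a deprecated kwarg and forward its value to the new name."""
          def decorator(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  if old_name in kwargs:
                      warnings.warn(
                          f"`{old_name}` is deprecated and will be removed in v{version}; "
                          f"use `{new_name}` instead.",
                          FutureWarning,
                      )
                      kwargs.setdefault(new_name, kwargs.pop(old_name))
                  return func(*args, **kwargs)
              return wrapper
          return decorator

      @deprecate_kwarg("reduce_labels", "do_reduce_labels", "5.0")
      def preprocess(image, do_reduce_labels=False):
          return image, do_reduce_labels

      print(preprocess("img", reduce_labels=True))  # warns, then -> ('img', True)
      ```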
  4. 05 Jun, 2024 1 commit
  5. 04 Jun, 2024 1 commit
  6. 31 May, 2024 2 commits
  7. 28 May, 2024 2 commits
    • Deprecate low use models (#30781) · a564d10a
      amyeroberts authored
      * Deprecate models
      - graphormer
      - time_series_transformer
      - xlm_prophetnet
      - qdqbert
      - nat
      - ernie_m
      - tvlt
      - nezha
      - mega
      - jukebox
      - vit_hybrid
      - x_clip
      - deta
      - speech_to_text_2
      - efficientformer
      - realm
      - gptsan_japanese
      
      * Fix up
      
      * Fix speech2text2 imports
      
      * Make sure message isn't indented
      
      * Fix docstrings
      
      * Correctly map for deprecated models from model_type
      
      * Uncomment out
      
      * Add back time series transformer and x-clip
      
      * Import fix and fix-up
      
      * Fix up with updated ruff
      a564d10a
    • [SuperPoint, PaliGemma] Update docs (#31025) · 90da0b1c
      NielsRogge authored
      * Update docs
      
      * Add PaliGemma resources
      
      * Address comment
      
      * Update docs
      90da0b1c
  8. 27 May, 2024 1 commit
  9. 23 May, 2024 1 commit
    • [Port] TensorFlow implementation of Mistral (#29708) · 965e98dc
      Aritra Roy Gosthipaty authored
      
      
      * chore: initial commit
      
      * chore: adding imports and inits
      
      * chore: adding the causal and classification code
      
      * chore: adding names to the layers
      
      * chore: using single self attn layer
      
      * chore: built the model and layers
      
      * chore: start with testing
      
      * chore: docstring change, transpose fix
      
      * fix: rotary embedding
      
      * chore: adding cache implementation
      
      * remove unused torch
      
      * chore: fixing the indexing issue
      
      * make fix-copies
      
      * Use modeling_tf_utils.keras
      
      * make fixup
      
      * chore: fixing tests
      
      * chore: adding past key value logic
      
      * chore: adding multi label classification test
      
      * fix: switching on the built parameters in the layers
      
      * fixing repo consistency
      
      * ruff formats
      
      * style changes
      
      * fix: tf and pt equivalence
      
      * removing returns from docstrings
      
      * fix docstrings
      
      * fix docstrings
      
      * removing todos
      
      * fix copies
      
      * fix docstring
      
      * fix docstring
      
      * chore: using easier rotate_half
      
      * adding integration tests
      
      * chore: addressing review related to rotary embedding layer
      
      * review changes
      
      * [run-slow] mistral
      
      * skip: test save load after resize token embedding
      
      * style
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      965e98dc
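      Usage sketch for the TF port above. Assumptions: the 7B checkpoint fits in memory,
      and from_pt=True is needed where a repo ships only PyTorch weights:
      ```python
      from transformers import AutoTokenizer, TFMistralForCausalLM

      model_id = "mistralai/Mistral-7B-v0.1"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = TFMistralForCausalLM.from_pretrained(model_id, from_pt=True)

      inputs = tokenizer("My favourite condiment is", return_tensors="tf")
      out = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```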
  10. 22 May, 2024 2 commits
  11. 21 May, 2024 1 commit
  12. 20 May, 2024 2 commits
  13. 16 May, 2024 3 commits
    • Video-LLaVa: Fix docs (#30855) · 95b3c381
      Raushan Turganbay authored
      fix model id in docs
      95b3c381
    • [Idefics2] Improve docs, add resources (#30717) · 17cc71e1
      NielsRogge authored
      
      
      * Add resources
      
      * Address comment
      
      * Address comments
      
      * Update docs/source/en/model_doc/idefics2.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update figure
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      17cc71e1
    • add sdpa to ViT [follow up of #29325] (#30555) · 1c21f48a
      hyenal authored
      
      
      remove blank line (+1 squashed commit)
      Squashed commits:
      [24ccd2061] [run-slow]vit_msn,vision_encoder_decoder (+24 squashed commits)
      Squashed commits:
      [08bd27e7a] [run-slow]vit_msn,vision_encoder_decoder
      [ec96a8db3] [run-slow]vit_msn
      [ead817eca] fix vit msn multi gpu
      [d12cdc8fd] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
      [3fdbfa88f] doc
      [a3ff33e4a] finish implementation
      [e20b7b7fb] Update test_modeling_common.py
      [e290c5810] Update test_modeling_flax_common.py
      [d3af86f46] comment
      [ff7dd32d8] more comments
      [59b137889] suggestion
      [7e2ba6d67] attn_implementation as attribute of the class
      [fe66ab71f] minor
      [38642b568] Apply suggestions from code review
      
      Accept comments
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [22cde7d52] Update tests/test_modeling_common.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [48e137cc6] Update tests/test_modeling_common.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [99f4c679f] Update tests/test_modeling_common.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [96cf20a6d] Update src/transformers/models/vit_msn/modeling_vit_msn.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [c59377d23] Update src/transformers/models/vit_mae/modeling_vit_mae.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [b70a47259] Update tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      [00c84d216] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
      [61f00ebb0] all tests are passing locally
      [e9e0b82b7] vision encoder/decoder
      [4d5076b56] test-vision (+20 squashed commits)
      Squashed commits:
      [d1add8db9] yolo
      [9fde65716] fix flax
      [986566c28] minor
      [ca2f21d1f] vit
      [3333efd7a] easy models change
      [ebfc21402] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
      [b8b8603ed] [run-slow]vision_encoder_decoder,vision_text_dual_encoder,yolos
      [48ecc7e26] all tests are passing locally
      [bff7fc366] minor
      [62f88306f] fix yolo and text_encoder tests
      [121507555] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
      [1064cae0a] [run-slow]vision_encoder_decoder,vision_text_dual_encoder,yolos
      [b7f52ff3a] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
      [cffaa10dd] fix-copies
      [ef6c511c4] test vit hybrid
      [7d4ba8644] vit hybrid
      [66f919033] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
      [1fcc0a031] fixes
      [cfde6eb21] fixup
      [e77df1ed3] all except yolo end encoder decoder (+17 squashed commits)
      Squashed commits:
      [602913e22] vit + vit_mae are working
      [547f6c4cc] RUN_SLOW=1 pytest tests/models/audio_spectrogram_transformer/ tests/models/deit/ tests/models/videomae/  passes
      [61a97dfa9] it s the complete opposite...
      [aefab37d4] fix more tests
      [71802a1b9] fix all torch tests
      [40b12eb58] encoder - decoder tests
      [941552b69] slow decorator where appropriate
      [14d055d80] has_attentions to yolo and msn
      [3381fa19f] add correct name
      [e261316a7] repo consistency
      [31c6d0c08] fixup
      [9d214276c] minor fix
      [11ed2e1b7] chore
      [eca6644c4] add sdpa to vit-based models
      [cffbf390b] make fix-copies result
      [6468319b0] fix style
      [d324cd02a] add sdpa for vit
      Co-authored-by: Liubov Yaronskaya <luba.yaronskaya@gmail.com>
      1c21f48a
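      Usage sketch for SDPA on ViT-family models, mirroring the GPT-2 example above:
      ```python
      # Explicitly request the SDPA backend added for ViT-based models.
      import torch
      from transformers import ViTModel

      model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k", attn_implementation="sdpa")
      pixel_values = torch.randn(1, 3, 224, 224)
      with torch.no_grad():
          outputs = model(pixel_values=pixel_values)
      print(outputs.last_hidden_state.shape)  # (1, 197, 768): 196 patches + [CLS]
      ```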
  14. 15 May, 2024 1 commit
  15. 14 May, 2024 3 commits
  16. 13 May, 2024 1 commit
    • Port IDEFICS to tensorflow (#26870) · 94306352
      Alazar authored
      
      
      * Initial commit
      
      * Just a copy of modeling_idefics.py that will be ported to TF
      
      * - Prepend TF to the name of all classes
      - Convert pytorch ops to TF (not all operations are converted yet)
      
      * Add TF imports
      
      * Add autotranslated files
      
      * Add TF classes to model_tf_auto.py
      
      * Add the TF classes in model_doc
      
      * include auto-translated code
      
      * Adopted from auto-translated version
      
      * Add a forgotten super().build
      
      * Add test code for TF version.
      
      * Fix indentation and load pytorch weights for now
      
      * Some fixes. Many tests are still failing but some are passing now.
      
      - I have added TODOs for some of the hacks I made to unblock me
        and I will address them soon
      - I have the processing_idefics.py hacked in my view to support TF temporarily
      
      * Add ALL_LAYERNORM_LAYERS to match pytorch
      
      * Revert "Add ALL_LAYERNORM_LAYERS to match pytorch"
      
      This reverts commit 7e0a35119b4d7a6284d04d8c543fba1b29e573c9 as it
      is not needed in the tf implementation.
      
      * Fix freeze_relevant_params()
      
      * Some more fixes
      
      * Fix test_attention_outputs
      
      * Add tf stuff to processing_idefics.py
      
      processing_idefics.py supports both pytorch and tf now.
      
      test_processor_idefics.py for pytorch is passing, so I didn't break anything,
      but there are still some issues with tf. I also need to add tf tests in
      test_processor_idefics.py.
      
      * Pass return_tensors to image processing code and fix test
      
      * Pass return_tensors to the image processor __init__
      
      * Fix several test cases
      
      - Make input to some of the forward pass of type `TFModelInputType`
      - Decorate main layer forward pass with `@unpack_inputs`
      - Decorate main layer with `@keras_serializable`
      - Pass `inputs` to TFIdeficsModel
      
      * Some more fixes forgotten in last commit
      
      * Fix processing code and vision_tf.py
      
      * Fix perceiver bug
      
      * Import from
      
      * Auto-add build() methods + style pass
      
      * Fix build() errors due to `None` being passed as shape to some layers
      
      * Change name in TFIdeficsForVisionText2Text to attribute in IdeficsForVisionText2Text
      
      * Fix pytorch weights load for tf2
      
      There were a lot of `name=` arguments missing in the weight initialization code.
      
      * Attempt to fix CI
      
      * Add back accidentally removed line
      
      * Remove torch-specific stuff from the TF test file
      
      * make fix-copies, make style, remove autotranslated files
      
      * Fixes to imports/docstrings
      
      * Let's try the from future import in desperation
      
      * Fix the core random_attention_mask fn to match the torch/flax behaviour
      
      * Clean random_attention_mask up correctly
      
      * Remove torch-only test
      
      * Fix loss shape, couple of nits
      
      * make style
      
      * Don't test for OOB embeddings because IDEFICS uses those deliberately
      
      * Fix loss computation to handle masking
      
      * Fix test failures when flattening
      
      * Fix some test failures
      
      - Add cross attention gate which was missing and wasn't being passed around
      - Fix overwriting of image_attention_mask due to hack I had for dummy inputs
      
      * Add a proper stateless scaled_dot_product_attention
      
      * make style
      
      * Adding missing attribute from the PyTorch version
      
      * Small cleanups to decoupledlinearlayer in case that helps
      
      * Pass epsilon to LayerNormalization
      
      * Attempt to fix pytorch weight cross-loading for TFIdeficsEmbedding
      
      * Fix a bug in TFIdeficsGatedCrossAttentionLayer
      
      * Patching up build() methods
      
      * Constant self.inv_freq
      
      * Constant self.inv_freq
      
      * First working version
      
      The TF implementation works now; there was a bug in TFIdeficsDecoupledLinear
      where the weights were mis-initialized as (in_features, out_features)
      when they should be (out_features, in_features).
      
      I have tested this so far with tiny-random and idefics-9b-instruct
      and gives correct output.
      
      I also dumped the final outputs for both pytorch and TF
      and they are identical.
      
      * Fix some test failures
      
      * remove print statement
      
      * Fix return_tensors
      
      * Fix CI test failure check_code_quality
      
      * Attempt to fix CI failures by running `make fixup`
      
      The hardcoded IDs in test_modeling_tf_idefics.py are for the integration
      test; they make that file unreadable and should probably be moved to a separate file.
      
      * Attempt to fix tests_pr_documentation_tests
      
      * Fix a test failure in test_image_processing_idefics.py
      
      * Fix test test_pt_tf_model_equivalence
      
      * Fix a few failures
      
      * Tiny fix
      
      * Some minor fixes
      
      * Remove a duplicate test
      
      * Override a few test failures for IDEFICS
      
      - `test_keras_save_load` is passing now
      - `test_compile_tf_model` is still failing
      
      * Fix processing_idefics.py after rebase
      
      * Guard import keras with is_tf_available
      
      * fix check code quality
      
      * fix check code quality
      
      * Minor fixes
      
      * Skip test_save_load temporarily
      
      This test passed on my local box but fails on the CI, skipping
      for now to see if there are other remaining failures on the CI.
      
      * Run `ruff format tests src utils`
      
      * Fix last failing test, `test_compile_tf_model`
      
      * Add fixes for vision_tf.py
      
      I forgot to add this file in last commit.
      
      * Minor fixes
      
      * Replace "<<<" with "<<" for doc tests
      
      IDEFICS-9B is too big for doctest runner, so don't run it there
      
      * Make code more readable
      
      * Fix bug after code review
      
      I added a layer_norm_eps to IdeficsConfig but I don't even need it
      since the vision config has a layer_norm_eps.
      
      * Fix after code review
      
      Use original code tokenizer.convert_tokens_to_ids
      
      * Keep PyTorch as the default return_tensors
      
      * Fixes to modeling_tf after code review
      
      * Fixes from code review
      
      - Remove all references of `TF_IDEFICS_PRETRAINED_MODEL_ARCHIVE_LIST`
      - Pass 1e-5 to LayerNormalization in perceiver
      
      * Run ruff
      
      * Undo a change
      
      * Refactor processing code after Matt's suggestion
      
      * Remove TODOs that aren't needed anymore
      
      * For pytorch, Use original pytorch processing code from main
      
      Since this PR is a TF port, it shouldn't make any modifications
      to the PyTorch IDEFICS code. This change undoes the PyTorch processing
      modifications I made and uses the original code from main.
      
      * Update tests/models/idefics/test_modeling_idefics.py
      
      * Update tests/models/idefics/test_modeling_tf_idefics.py
      
      * Add missing imports for is_pt_tf_cross_test
      
      * [DO NOT MERGE]: This is a commit for debugging and will be reverted
      
      The cross test `test_pt_tf_model_equivalence` passes locally but
      fails when running on the CI. This commit is to help debug that
      and will be reverted.
      
      * Revert "[DO NOT MERGE]: This is a commit for debugging and will be reverted"
      
      This reverts commit 8f0d709ec5bd46685fb0b4259d914ffee794875b.
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 998cc38b8c3d313bf5e5eb55a7f5b7b881897b89.
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 1c695ac4219c4ae4d39b330b01744dc27deb7dd4.
      
      * Don't skip test_save_load
      
      IIRC test_save_load was also failing on the CI but not on my local
      box, it might be easier to debug that on the CI first than the cross tests
      
      * Debugging commit, will be reverted
      
      * Revert "Debugging commit, will be reverted"
      
      This reverts commit 8eafc8e41e20c4e95a3a90834f06a6e9f445e2d5.
      
      * Override `test_save_load` and push model to save
      
      Maybe this will help me repro this weird bug
      
      * pass my repo_id
      
      * add endpoint
      
      * Pass a temp (write) token just for this CI
      
      * Undo last few commits, still pushing to hub for model debugging
      
      The issue seems to be with save_pretrained(): when I looked at the model saved
      from the CI test failure, it was basically empty and had no weights.
      `self.save_weights(..)` seems to be failing in save_pretrained but needs
      more debugging
      
      * Add logging to modeling tf utils, will be reverted just for debugging
      
      * Debugging, will revert
      
      * Revert "Debugging, will revert"
      
      This reverts commit 9d0d3075fb7c82d8cde3a5c76bc8f3876c5c55d3.
      
      * Revert "Add logging to modeling tf utils, will be reverted just for debugging"
      
      This reverts commit 774b6b7b1c17b3ce5d7634ade768f2f686cee617.
      
      * Remove `test_save_load`
      
      The CI failures are gone after my latest rebase, no idea why
      but I was still saving the model to my hub on HF and the tf_model.h5
      file now has everything.
      
      * Run make fix-copies
      
      * Run ruff format tests src utils
      
      * Debugging commit, will be reverted
      
      * Run ruff, also trigger CI run
      
      * Run ruff again
      
      * Undo debugging commit
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      94306352
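      Usage sketch for the TF IDEFICS port. The processor call follows the IDEFICS docs of
      that era (prompts as a list of interleaved text/image items — treat the exact call
      signature as an assumption); the 9B checkpoint is large, and the commit itself points
      at tiny-random variants for smoke tests:
      ```python
      from transformers import AutoProcessor, TFIdeficsForVisionText2Text

      checkpoint = "HuggingFaceM4/idefics-9b-instruct"
      processor = AutoProcessor.from_pretrained(checkpoint)
      model = TFIdeficsForVisionText2Text.from_pretrained(checkpoint)

      prompts = [["User: What is in this image?",
                  "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
                  "<end_of_utterance>", "\nAssistant:"]]
      inputs = processor(prompts, return_tensors="tf")
      generated = model.generate(**inputs, max_new_tokens=30)
      print(processor.batch_decode(generated, skip_special_tokens=True))
      ```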
  17. 10 May, 2024 1 commit
  18. 09 May, 2024 1 commit
  19. 08 May, 2024 1 commit
    • Add examples for detection models finetuning (#30422) · 998dbe06
      Pavel Iakubovskii authored
      
      
      * Training script for object detection
      
      * Evaluation script for object detection
      
      * Training script for object detection with eval loop outside trainer
      
      * Trainer DETR finetuning
      
      * No trainer DETR finetuning
      
      * Eval script
      
      * Refine object detection example with trainer
      
      * Remove commented code and enable telemetry
      
      * No trainer example
      
      * Add requirements for object detection examples
      
      * Add test for trainer example
      
      * Readme draft
      
      * Fix uploading to HUB
      
      * Readme improvements
      
      * Update eval script
      
      * Adding tests for object-detection examples
      
      * Add object-detection example
      
      * Add object-detection resources to docs
      
      * Update README with custom dataset instructions
      
      * Update year
      
      * Replace valid with validation
      
      * Update instructions for custom dataset
      
      * Remove eval script
      
      * Remove use_auth_token
      
      * Add copied from and telemetry
      
      * Fixup
      
      * Update readme
      
      * Fix id2label
      
      * Fix links in docs
      
      * Update examples/pytorch/object-detection/run_object_detection.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update examples/pytorch/object-detection/run_object_detection.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Move description to the top
      
      * Fix Trainer example
      
      * Update no trainer example
      
      * Update albumentations version
      
      ---------
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      998dbe06
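      The core setup step of the new example scripts — swapping DETR's classification head
      for the dataset's label set — looks roughly like this (the cppe-5 dataset is an
      assumption, borrowed from the HF object-detection task guide):
      ```python
      from datasets import load_dataset
      from transformers import AutoImageProcessor, AutoModelForObjectDetection

      dataset = load_dataset("cppe-5")
      categories = dataset["train"].features["objects"].feature["category"].names
      id2label = dict(enumerate(categories))

      checkpoint = "facebook/detr-resnet-50"
      image_processor = AutoImageProcessor.from_pretrained(checkpoint)
      model = AutoModelForObjectDetection.from_pretrained(
          checkpoint,
          id2label=id2label,
          label2id={v: k for k, v in id2label.items()},
          ignore_mismatched_sizes=True,  # re-initialize the head for the new label set
      )
      print(model.config.num_labels)
      ```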
  20. 06 May, 2024 1 commit
  21. 02 May, 2024 1 commit
  22. 26 Apr, 2024 3 commits
    • Fix link in dbrx.md (#30509) · 73014b56
      Eitan Turok authored
      73014b56
    • [SegGPT] Fix seggpt image processor (#29550) · 6d4cabda
      Eduardo Pacheco authored
      * Fixed SegGptImageProcessor to handle 2D and 3D prompt mask inputs
      
      * Added new test to check prompt mask equivalence
      
      * New proposal
      
      * Better proposal
      
      * Removed unnecessary method
      
      * Updated seggpt docs
      
      * Introduced do_convert_rgb
      
      * nits
      6d4cabda
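      Sketch of the processor call this fix affects, per the SegGPT model docs (treat the
      argument names, in particular num_labels, as assumptions if your version differs):
      ```python
      import numpy as np
      from transformers import SegGptImageProcessor

      processor = SegGptImageProcessor.from_pretrained("BAAI/seggpt-vit-large")

      target_image = np.random.randint(0, 256, (448, 448, 3), dtype=np.uint8)
      prompt_image = np.random.randint(0, 256, (448, 448, 3), dtype=np.uint8)
      prompt_mask = np.random.randint(0, 2, (448, 448), dtype=np.uint8)  # 2D label map

      inputs = processor(
          images=target_image,
          prompt_images=prompt_image,
          prompt_masks=prompt_mask,  # after the fix, a 3D RGB mask works here too
          num_labels=1,
          return_tensors="pt",
      )
      print(inputs["pixel_values"].shape)
      ```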
    • [`BERT`] Add support for sdpa (#28802) · dfa7b580
      JB (Don) authored
      * Adding SDPA support for BERT
      
      * Using the proper input name for testing model input in inference()
      
      * Adding documentation for SDPA in BERT model page
      
      * Use the stable link for the documentation
      
      * Adding a gate to only call .contiguous() for torch < 2.2.0
      
      * Additions and fixes to the documentation
      
      * Minor updates to documentation
      
      * Adding extra requirements needed for the contiguous() bug
      
      * Adding "Adapted from" in plcae of the "Copied from"
      
      * Add benchmark speedup tables to the documentation
      
      * Minor fixes to the documentation
      
      * Use ClapText as a replacement for Bert in the Copied-From
      
      * Some more fixes for the fix-copies references
      
      * Overriding the test_eager_matches_sdpa_generate in bert tests to not load with low_cpu_mem_usage
      
      [test all]
      
      * Undo changes to separate test
      
      * Refactored SDPA self attention code for KV projections
      
      * Change use_sdpa to attn_implementation
      
      * Fix test_sdpa_can_dispatch_on_flash by preparing input (required for MultipleChoice models)
      dfa7b580
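      Usage sketch for BERT's new SDPA path, analogous to the GPT-2 and ViT entries above:
      ```python
      import torch
      from transformers import AutoTokenizer, BertModel

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased", attn_implementation="sdpa")

      inputs = tokenizer("SDPA support for BERT.", return_tensors="pt")
      with torch.no_grad():
          out = model(**inputs)
      print(out.last_hidden_state.shape)
      ```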
  23. 24 Apr, 2024 2 commits
    • Phi-3 (#30423) · c9693db2
      Gustavo de Rosa authored
      * chore(root): Initial commit of Phi-3 files.
      
      * fix(root): Fixes Phi-3 missing on readme.
      
      * fix(root): Ensures files are consistent.
      
      * fix(phi3): Fixes unit tests.
      
      * fix(tests): Fixes style of phi-3 test file.
      
      * chore(tests): Adds integration tests for Phi-3.
      
      * fix(phi3): Removes additional flash-attention usage, e.g., swiglu and rmsnorm.
      
      * fix(phi3): Fixes incorrect docstrings.
      
      * fix(phi3): Fixes docstring typos.
      
      * fix(phi3): Adds support for Su and Yarn embeddings.
      
      * fix(phi3): Improves according to the first batch of reviews.
      
      * fix(phi3): Uses up_states instead of y in Phi3MLP.
      
      * fix(phi3): Uses gemma rotary embedding to support torch.compile.
      
      * fix(phi3): Improves how rotary embedding classes are defined.
      
      * fix(phi3): Fixes inv_freq not being re-computed for extended RoPE.
      
      * fix(phi3): Adds last suggestions to modeling file.
      
      * fix(phi3): Splits inv_freq calculation in two lines.
      c9693db2
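      Loading sketch for Phi-3 as integrated above. The mini-4k instruct checkpoint is the
      public Microsoft release; the su/yarn rope scaling mentioned in the commits is driven
      by the checkpoint config, not by user code:
      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "microsoft/Phi-3-mini-4k-instruct"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

      messages = [{"role": "user", "content": "Explain RoPE in one sentence."}]
      input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
      out = model.generate(input_ids, max_new_tokens=40)
      print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
      ```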
    • Add llama3 (#30334) · 89c510d8
      Arthur authored
      
      
      * nuke
      
      * add co-author
      
      * add co-author
      
      * update card
      
      * fixup and fix copies to please our ci
      
      * nit fixup
      
      * super small nits
      
      * remove tokenizer_path from call to `write_model`
      
      * always safe serialize by default
      
      ---------
      Co-authored-by: pcuenca <pcuenca@users.noreply.github.com>
      Co-authored-by: xenova <xenova@users.noreply.github.com>
      89c510d8
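      Loading sketch for the Llama 3 checkpoints the conversion script above produces
      (the meta-llama repos are gated, so hub access is assumed):
      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "meta-llama/Meta-Llama-3-8B"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      inputs = tokenizer("The key idea of attention is", return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=30)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```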
  24. 22 Apr, 2024 2 commits
  25. 19 Apr, 2024 2 commits
    • [Grounding DINO] Add resources (#30232) · 8c12690c
      NielsRogge authored
      
      
      * Add resources
      
      * Address comments
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      8c12690c
    • Add TF swiftformer (#23342) · d2cec09b
      João David authored
      
      
      * Duplicate swiftformer
      
      * Convert SwiftFormerPatchEmbedding
      
      * Convert SwiftFormerEmbeddings
      
      * Convert TFSwiftFormerMlp
      
      * Convert TFSwiftFormerConvEncoder
      
      * Convert TFSwiftFormerLocalRepresentation
      
      * convert TFSwiftFormerEncoderBlock
      
      * Convert SwiftFormerStage
      
      * Convert SwiftFormerEncoder
      
      * Add TFSWiftFormerPreTrainedModel
      
      * Convert SwiftFormerForImageClassification
      
      * Add kwargs and start drop path
      
      * Fix syntax
      
      * Change Model class name
      
      * Add TFSwiftFormer to __init__
      
      * Duplicate test_modeling_swiftformer
      
      * First test conversions
      
      * Change require_torch to require_tf
      
      * Add exports to swiftformer __init__
      
      * Add TFSwiftFormerModel wrapper
      
      * Fix __init__ and run black
      
      * Remove docstring from MainLayer, fix padding
      
      * Use keras.layers.Activation on keras.Sequential
      
      * Fix swiftformer exports
      
      * Fix activation layer from config
      
      * Remove post_inits
      
      * Use tf.keras.layers.ZeroPadding2D
      
      * Convert torch normalize
      
      * Change tf test input shape
      
      * Fix softmax and reduce_sum
      
      * Convert expand_dims and repeat
      
      * Add missing reshape and transpose
      
      * Simplify TFSwiftFormerEncoderBlock.call
      
      * Fix mismatch in patch embeddings
      
      * Fix expected output shape to match channels last
      
      * Fix swiftformer typo
      
      * Disable test_onnx
      
      * Fix TFSwiftFormerForImageClassification call
      
      * Add unpack inputs
      
      * Convert flatten(2).mean(-1)
      
      * Change vision dummy inputs (to be reviewed)
      
      * Change test_forward_signature to use .call
      
      * Fix @unpack_inputs
      
      * Set return_tensors="tf" and rename class
      
      * Rename wrongly named patch_embeddings layer
      
      * Add serving_output and change dummy_input shape
      
      * Make dimensions BCHW and transpose inside embedding layer
      
      * Change SwiftFormerEncoderBlock
      
      * Fix ruff problems
      
      * Add image size to swiftformer config
      
      * Change transpose to MainLayer and use -1 for reshape
      
      * Remove serving_outputs and dummy_inputs
      
      * Remove test_initialization test from tf model
      
      * Make Sequential component a separate layer
      
      * Fix layers' names
      
      * Transpose encoder outputs
      
      * Fix tests and check if hidden states is not None
      
      * Fix TFSwiftFormerForImageClassification
      
      * Run make fixup
      
      * Run make fix-copies
      
      * Update modeling_tf_auto
      
      * Update docs
      
      * Fix modeling auto mapping
      
      * Update modeling_tf_swiftformer docs
      
      * Fill image_size doc and type
      
      * Add reduction=None to loss computation
      
      * Update docs
      
      * make style
      
      * Debug: Delete the tip to see if that changes anything
      
      * Re-add tip
      
      * Remove add_code_sample_docstrings
      
      * Remove unused import
      
      * Get the debug to actually tell us the problem it has with the docs
      
      * Try a substitution to match the PyTorch file?
      
      * Add swiftformer to ignore list
      
      * Add build() methods
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Remove FIXME comment
      
      * Remove from_pt
      
      * Update copyright year
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Rename one-letter variables
      
      * Remove FIXMEs related to momentum
      
      * Remove old TODO comment
      
      * Remove outstanding FIXME comments
      
      * Get dropout rate from config
      
      * Add specific dropout config for MLP
      
      * Add convencoder dropout to config
      
      * Pass config to SwiftFormerDropPath layer
      
      * Fix drop_path variable name and add Adapted from comment
      
      * Run ruff
      
      * Removed copied from comment
      
      * Run fix copies
      
      * Change drop_path to identity to match pt
      
      * Cleanup build() methods and move to new keras imports
      
      * Update docs/source/en/model_doc/swiftformer.md
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Raise error if drop_path_rate > 0.0
      
      * Apply suggestions from code review
      
      Replace (self.dim), with self.dim,
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove drop_path function
      
      * Add training to TFSwiftFormerEncoder
      
      * Set self.built = True last
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Should have been added to previous commit
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Change default_feature_extractor to default_image_processor
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Import Keras from modeling_tf_utils
      
      * Remove relative import
      
      * Run ruff --fix
      
      * Move import keras to tf_available
      
      * Add copied from comment to test_forward_signature
      
      * Reduce batch size and num_labels
      
      * Extract loss logic to hf_compute_loss
      
      * Run ruff format
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      d2cec09b
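      Usage sketch for the TF SwiftFormer classes added above (from_pt=True is an
      assumption in case the checkpoint ships only PyTorch weights):
      ```python
      import numpy as np
      from transformers import AutoImageProcessor, TFSwiftFormerForImageClassification

      checkpoint = "MBZUAI/swiftformer-xs"
      processor = AutoImageProcessor.from_pretrained(checkpoint)
      model = TFSwiftFormerForImageClassification.from_pretrained(checkpoint, from_pt=True)

      image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
      inputs = processor(images=image, return_tensors="tf")
      logits = model(**inputs).logits
      print(model.config.id2label[int(np.argmax(logits[0]))])
      ```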
  26. 18 Apr, 2024 2 commits
    • Add DBRX Model (#29921) · 005b957f
      Abhi Venigalla authored
      
      
      * wip
      
      * fix __init__.py
      
      * add docs
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * address comments 1
      
      * work on make fixup
      
      * pass configs down
      
      * add sdpa attention
      
      * remove DbrxBlock
      
      * add to configuration_auto
      
      * docstring now passes formatting test
      
      * fix style
      
      * update READMEs
      
      * add dbrx to modeling_auto
      
      * make fix-copies generated this
      
      * add DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * config docstring passes formatting test
      
      * rename moe_loss_weight to router_aux_loss_coef
      
      * add to flash-attn documentation
      
      * fix model-path in tests
      
      * Explicitly make `"suli"` the default `ffn_act_fn`
      Co-authored-by: default avatarWing Lian <wing.lian@gmail.com>
      
      * default to using router_aux_loss_coef over ffn_config[moe_loss_weight]
      
      * fix _flash_attn_uses_top_left_mask and is_causal
      
      * fix tests path
      
      * don't use token type IDs
      
      * follow Llama and remove token_type_ids from test
      
      * init ConfigTester differently so tests pass
      
      * remove multiple choice test
      
      * remove question + answer test
      
      * remove sequence classification test
      
      * remove token classification test
      
      * copy Llama tests and remove token_type_ids from test inputs
      
      * do not test pruning or headmasking; style code
      
      * add _tied_weights_keys parameter to pass test
      
      * add type hints
      
      * fix type check
      
      * update config tester
      
      * remove masked_lm test
      
      * remove encoder tests
      
      * initialize DbrxModelTester with correct params
      
      * style
      
      * torch_dtype does not rely on torch
      
      * run make fixup, fix-copies
      
      * use https://huggingface.co/v2ray/dbrx-base-fixed/blob/main/modeling_dbrx.py
      
      
      
      * add copyright info
      
      * fix imports and DbrxRotaryEmbedding
      
      * update DbrxModel docstring
      
      * use copies
      
      * change model path in docstring
      
      * use config in DbrxFFN
      
      * fix flashattention2, sdpaattention
      
      * input config to DbrXAttention, DbrxNormAttentionNorm
      
      * more fixes
      
      * fix
      
      * fix again!
      
      * add informative comment
      
      * fix ruff?
      
      * remove print statement + style
      
      * change doc-test
      
      * fix doc-test
      
      * fix docstring
      
      * delete commented out text
      
      * make defaults match dbrx-instruct
      
      * replace `router_aux_loss_coef` with `moe_loss_weight`
      
      * is_decoder=True
      
      * remove is_decoder from configtester
      
      * implement sdpa properly
      
      * make is_decoder pass tests
      
      * start on the GenerationTesterMixin tests
      
      * add dbrx to sdpa documentation
      
      * skip weight typing test
      
      * style
      
      * initialize smaller model
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Add DBRX to toctree
      
      * skip test_new_cache_format
      
      * make config defaults smaller again
      
      * add pad_token_id
      
      * remove pad_token_id from config
      
      * Remove all references to DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * Update src/transformers/models/dbrx/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/dbrx.md
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Update src/transformers/models/dbrx/configuration_dbrx.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/dbrx.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix typo
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update docs, fix configuration_auto.py
      
      * address pr comments
      
      * remove is_decoder flag
      
      * slice
      
      * fix requires grad
      
      * remove grad
      
      * disconnect differently
      
      * remove grad
      
      * enable grads
      
      * patch
      
      * detach expert
      
      * nissan al ghaib
      
      * Update modeling_dbrx.py
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * replace "Gemma" with "Dbrx"
      
      * remove # type: ignore
      
      * don't hardcode vocab_size
      
      * remove ToDo
      
      * Re-add removed idefics2 line
      
      * Update test to use tiny-random!
      
      * Remove TODO
      
      * Remove one more case of loading the entire dbrx-instruct in the tests
      
      * Update src/transformers/models/dbrx/modeling_dbrx.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * address some comments
      
      * small model
      
      * add dbrx to tokenization_auto
      
      * More docstrings with add_start_docstrings
      
      * Dbrx for now
      
      * add PipelineTesterMixin
      
      * Update src/transformers/models/dbrx/configuration_dbrx.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * remove flash-attn2 import error
      
      * fix docstring
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add usage example
      
      * put on one line
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix ffn_act_fn
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * change "dbrx" to "DBRX" for display purposes.
      
      * fix __init__.py?
      
      * fix __init__.py
      
      * fix README
      
      * return the aux_loss
      
      * remove extra spaces
      
      * fix configuration_auto.py
      
      * fix format in tokenization_auto
      
      * remove new line
      
      * add more usage examples
      
      ---------
      Co-authored-by: Abhi Venigalla <abhi.venigalla@databricks.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Eitan Turok <eitan.turok@databricks.com>
      Co-authored-by: Eitan Turok <150733043+eitanturok@users.noreply.github.com>
      Co-authored-by: Wing Lian <wing.lian@gmail.com>
      Co-authored-by: Eitan Turok <eitanturok@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: Your Name <you@example.com>
      Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      005b957f
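      Loading sketch for DBRX as wired up above. dbrx-instruct is a very large gated MoE
      checkpoint, so this is illustrative rather than something to run casually:
      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "databricks/dbrx-instruct"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

      messages = [{"role": "user", "content": "What is a mixture-of-experts model?"}]
      input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
      out = model.generate(input_ids, max_new_tokens=50)
      print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
      ```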
    • Add jamba (#29943) · 3f20877d
      tomeras91 authored
      * Add jamba arch
      
      * apply "make fix-copies" changes
      
      * fix link to model in JambaConfig docstring
      
      * Add n_ctx in modeling file because repo-consistency wants that
      
      * Add jamba to flash attention and sdpa documentation
      
      * mamba dt_proj quant fix now works for LoRA as well
      
      * override test_left_padding_compatibility and use a more permissive tolerance; left padding numerical differences are accentuated by mamba layers
      
      * add jamba to tokenization auto
      
      * fix comments of shape (PR #24 in the model page: https://huggingface.co/ai21labs/Jamba-v0.1/discussions/24)
      
      * simple PR fixes
      
      * remove unnecessary kwargs from JambaAttentionDecoderLayer and JambaMambaDecoderLayer
      
      * remove the LoRA hack for the mamba dt_proj bias. It was solved in huggingface/peft#1530 (https://github.com/huggingface/peft/pull/1530)
      
      * Add copied comment on JambaMLP (it's the same as MixtralMLP)
      
      * remove padding_mask warnings. It's not supported anymore
      
      * fix docstring. Float instead of int
      
      * A few more minor PR fixes
      
      * (1) lowercase names for mamba layernorms (2) remove _apply_inner_layernorms and do it directly in the forward pass
      
      * Return None attention weights from mamba layers. Append to all attentions only if not None.
      
      * remove some leftover jamba archive lists
      
      * Better separation between expert vs non-expert layers. non-expert layers return None as router_logits, and it is not concatenated to all_router_logits returned from JambaModel
      
      * no need to take router_logits at config.expert_layer_offset anymore. result.router_logits now holds results only for expert layers
      
      * Add Jamba paper on READMEs
      
      * (1) rename n_ctx -> max_position_embeddings (2) don't use it in the modeling file since it's not needed (set it as an exception to check_config_attributes)
      
      * Add copied from comment
      
      * remove the code path for apply_inner_layernorms=False. Jamba always has the inner mamba layernorms
      
      * clearer docstring for _convert_to_standard_cache
      
      * style fixes
      
      * Change calc_logits_for_entire_prompt (bool) to num_logits_to_keep (int). Adapt assisted decoding code to use it. Also small change in low memory beam search decoding path to support this new int value in model_inputs
      
      * rename test so it still overrides what its meant to override
      
      * draft
      
      * oups
      
      * nit
      
      * remove more complexe logic
      
      * fix names used in config
      
      * fix fix fix
      
      * style
      
      * fix some more failing tests
      
      * generate did not init the cache 🙃
      
      * more small nits
      
      * typo
      
      * config.mamba_expand * config.hidden_size for the intermediate size of the mamba shapes
      
      * fix init of pkv with torch.tensor()
      
      * empty tensor
      
      * fix some init issues
      
      * stupid changes required by generate because it does not even support its own DynamicCache class
      
      * more fixes
      
      * fix general assisted gen cache_position bug
      
      * tests passing
      
      * Add offsets and periods as SPECIAL_CASES_TO_ALLOW in check_config_attributes.py
      
      * fix reorder_cache to reorder mamba states and override some more functions in HybridMambaAttentionDynamicCache
      
      * no need to override test_past_key_values_format() and _check_past_key_values_for_generate() in tests anymore
      
      * fix docstrings and typehints for past_key_values
      
      * style fixes
      
      * fix docs
      
      * change typehint due to copy from Mixtral
      
      * forgot import
      
      * import order
      
      * Add configuration_jamba and modeling_jamba to not_doctested because the model is too big to download (in docstring of JambaForCausalLM.forward)
      
      * Add integration test with tiny random Jamba model on hub
      
      * fix flash attention cache shapes
      
      * bring back forgotten hidden states
      
      * rename HybridMambaAttentionDynamicCache.seqlen_offset to has_previous_state (and make bool) and bugfix - it should be set to True after a finished forward pass of the entire model
      
      * align integration test after modeling fixes
      
      * bugfix - mamba can use precomputed states only if the forward pass is on a single token
      
      * bugfix - mamba can use precomputed states only if they match the batch size
      
      * typo
      
      * remove making _prepare_4d_causal_attention_mask a leaf function
      
      * stop using past_seq_len.get_seq_length(). Use cache positions instead. Adjust test (test_decoder_model_past_with_large_inputs) accordingly
      
      ---------
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
      3f20877d
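      Loading sketch for Jamba; ai21labs/Jamba-v0.1 is the checkpoint named in the commit.
      4-bit loading via bitsandbytes is an assumption to make the ~52B model fit on one GPU:
      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "ai21labs/Jamba-v0.1"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True
      )

      inputs = tokenizer("In the recent Super Bowl LVIII,", return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=30)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```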