1. 11 Jun, 2024 1 commit
    • Fast image processor (#28847) · f53fe35b
      amyeroberts authored
      
      
      * Draft fast image processors
      
      * Draft working fast version
      
      * py3.8 compatible cache
      
      * Enable loading fast image processors through auto
      
      * Tidy up; rescale behaviour based on input type
      
      * Enable tests for fast image processors
      
      * Smarter rescaling
      
      * Don't default to Fast
      
      * Safer imports
      
      * Add necessary Pillow requirement
      
      * Woops
      
      * Add AutoImageProcessor test
      
      * Fix up
      
      * Fix test for imagegpt
      
      * Fix test
      
      * Review comments
      
      * Add warning for TF and JAX input types
      
      * Rearrange
      
      * Return transforms
      
      * NumpyToTensor transformation
      
      * Rebase - include changes from upstream in ImageProcessingMixin
      
      * Safe typing
      
      * Fix up
      
      * convert mean/std to tensor to rescale
      
      * Don't store transforms in state
      
      * Fix up
      
      * Update src/transformers/image_processing_utils_fast.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Warn if fast image processor available
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      
      * Transpose incoming numpy images to be in CHW format
      
      * Update mapping names based on packages, auto set fast to None
      
      * Fix up
      
      * Fix
      
      * Add AutoImageProcessor.from_pretrained(checkpoint, use_fast=True) test
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
      
      * Add equivalence and speed tests
      
      * Fix up
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
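
A minimal sketch of the opt-in loading path this PR adds (the `use_fast=True` flag on `AutoImageProcessor.from_pretrained`, as tested in the commits above); the ViT checkpoint name is just an example, and the fast processor assumes the extra vision dependencies (e.g. torchvision) are installed:

```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor

# use_fast=True opts into the fast image processor; the slow, Pillow-based
# processor remains the default.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)

image = Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 224, 224])
```
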
      f53fe35b
  2. 22 May, 2024 1 commit
  3. 13 May, 2024 1 commit
    • Port IDEFICS to tensorflow (#26870) · 94306352
      Alazar authored
      
      
      * Initial commit
      
      * Just a copy of modeling_idefics.py that will be ported to TF
      
      * - Prepend TF to the name of all classes
      - Convert pytorch ops to TF (not all operations are converted yet)
      
      * Add TF imports
      
      * Add autotranslated files
      
      * Add TF classes to model_tf_auto.py
      
      * Add the TF classes in model_doc
      
      * include auto-translated code
      
      * Adopted from auto-translated version
      
      * Add a forgotten super().build
      
      * Add test code for TF version.
      
      * Fix indentation and load pytorch weights for now
      
      * Some fixes. Many tests are still failing but some are passing now.
      
      - I have added TODOs for some of the hacks I made to unblock myself
        and I will address them soon
      - I have hacked processing_idefics.py locally to support TF temporarily
      
      * Add ALL_LAYERNORM_LAYERS to match pytorch
      
      * Revert "Add ALL_LAYERNORM_LAYERS to match pytorch"
      
      This reverts commit 7e0a35119b4d7a6284d04d8c543fba1b29e573c9 as it
      is not needed in the tf implementation.
      
      * Fix freeze_relevant_params()
      
      * Some more fixes
      
      * Fix test_attention_outputs
      
      * Add tf stuff to processing_idefics.py
      
      processing_idefics.py supports both pytorch and tf now.
      
      test_processor_idefics.py for pytorch is passing, so I didn't break anything,
      but there are still some issues with tf. I also need to add tf tests in
      test_processor_idefics.py.
      
      * Pass return_tensors to image processing code and fix test
      
      * Pass return_tensors to the image processor __init__
      
      * Fix several test cases
      
      - Make input to some of the forward pass of type `TFModelInputType`
      - Decorate main layer forward pass with `@unpack_inputs`
      - Decorate main layer with `@keras_serializable`
      - Pass `inputs` to TFIdeficsModel
      
      * Some more fixes forgotten in last commit
      
      * Fix processing code and vision_tf.py
      
      * Fix perceiver bug
      
      * Import from
      
      * Auto-add build() methods + style pass
      
      * Fix build() errors due to `None` being passed as shape to some layers
      
      * Change name in TFIdeficsForVisionText2Text to attribute in IdeficsForVisionText2Text
      
      * Fix pytorch weights load for tf2
      
      There were a lot of `name=` missing in weight initialization code.
      
      * Attempt to fix CI
      
      * Add back accidentally removed line
      
      * Remove torch-specific stuff from the TF test file
      
      * make fix-copies, make style, remove autotranslated files
      
      * Fixes to imports/docstrings
      
      * Let's try the from future import in desperation
      
      * Fix the core random_attention_mask fn to match the torch/flax behaviour
      
      * Clean random_attention_mask up correctly
      
      * Remove torch-only test
      
      * Fix loss shape, couple of nits
      
      * make style
      
      * Don't test for OOB embeddings because IDEFICS uses those deliberately
      
      * Fix loss computation to handle masking
      
      * Fix test failures when flattening
      
      * Fix some test failures
      
      - Add cross attention gate which was missing and wasn't being passed around
      - Fix overwriting of image_attention_mask due to hack I had for dummy inputs
      
      * Add a proper stateless scaled_dot_product_attention
      
      * make style
      
      * Adding missing attribute from the PyTorch version
      
      * Small cleanups to decoupledlinearlayer in case that helps
      
      * Pass epsilon to LayerNormalization
      
      * Attempt to fix pytorch weight cross-loading for TFIdeficsEmbedding
      
      * Fix a bug in TFIdeficsGatedCrossAttentionLayer
      
      * Patching up build() methods
      
      * Constant self.inv_freq
      
      * Constant self.inv_freq
      
      * First working version
      
      The TF implementation works now; there was a bug in TFIdeficsDecoupledLinear
      where the weights were mis-initialized as (in_features, out_features)
      when they should be (out_features, in_features).
      
      I have tested this so far with tiny-random and idefics-9b-instruct
      and gives correct output.
      
      I also dumped the final outputs for both pytorch and TF
      and they are identical.
      
      * Fix some test failures
      
      * remove print statement
      
      * Fix return_tensors
      
      * Fix CI test failure check_code_quality
      
      * Attempt to fix CI failures by running `make fixup`
      
      The hardcoded IDs in test_modeling_tf_idefics.py are for the integration
      test; they make that file unreadable and should probably be moved to a separate file.
      
      * Attempt to fix tests_pr_documentation_tests
      
      * Fix a test failure in test_image_processing_idefics.py
      
      * Fix test test_pt_tf_model_equivalence
      
      * Fix a few failures
      
      * Tiny fix
      
      * Some minor fixes
      
      * Remove a duplicate test
      
      * Override a few test failures for IDEFICS
      
      - `test_keras_save_load` is passing now
      - `test_compile_tf_model` is still failing
      
      * Fix processing_idefics.py after rebase
      
      * Guard import keras with is_tf_available
      
      * fix check code quality
      
      * fix check code quality
      
      * Minor fixes
      
      * Skip test_save_load temporarily
      
      This test passed on my local box but fails on the CI, skipping
      for now to see if there are other remaining failures on the CI.
      
      * Run `ruff format tests src utils`
      
      * Fix last failing test, `test_compile_tf_model`
      
      * Add fixes for vision_tf.py
      
      I forgot to add this file in last commit.
      
      * Minor fixes
      
      * Replace "<<<" with "<<" for doc tests
      
      IDEFICS-9B is too big for doctest runner, so don't run it there
      
      * Make code more readable
      
      * Fix bug after code review
      
      I added a layer_norm_eps to IdeficsConfig but I don't even need it
      since the vision config has a layer_norm_eps.
      
      * Fix after code review
      
      Use original code tokenizer.convert_tokens_to_ids
      
      * Keep PyTorch as the default return_tensors
      
      * Fixes to modeling_tf after code review
      
      * Fixes from code review
      
      - Remove all references of `TF_IDEFICS_PRETRAINED_MODEL_ARCHIVE_LIST`
      - Pass 1e-5 to LayerNormalization in perceiver
      
      * Run ruff
      
      * Undo a change
      
      * Refactor processing code after Matt's suggestion
      
      * Remove TODO's that aren't needed anymore
      
      * For pytorch, Use original pytorch processing code from main
      
      Since this PR is a TF port it shouldn't make any modifications
      to the pytorch IDEFICS code. This change undoes the pytorch processing
      modifications I made and uses the original code from main.
      
      * Update tests/models/idefics/test_modeling_idefics.py
      
      * Update tests/models/idefics/test_modeling_tf_idefics.py
      
      * Add missing imports for is_pt_tf_cross_test
      
      * [DO NOT MERGE]: This is a commit for debugging and will be reverted
      
      The cross test `test_pt_tf_model_equivalence` passes locally but
      fails when running on the CI. This commit is to help debug that
      and will be reverted.
      
      * Revert "[DO NOT MERGE]: This is a commit for debugging and will be reverted"
      
      This reverts commit 8f0d709ec5bd46685fb0b4259d914ffee794875b.
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 998cc38b8c3d313bf5e5eb55a7f5b7b881897b89.
      
      * Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted"
      
      This reverts commit 1c695ac4219c4ae4d39b330b01744dc27deb7dd4.
      
      * Don't skip test_save_load
      
      IIRC test_save_load was also failing on the CI but not on my local
      box; it might be easier to debug that on the CI first than the cross tests
      
      * Debugging commit, will be reverted
      
      * Revert "Debugging commit, will be reverted"
      
      This reverts commit 8eafc8e41e20c4e95a3a90834f06a6e9f445e2d5.
      
      * Override `test_save_load` and push model to save
      
      Maybe this will help me repro this weird bug
      
      * pass my repo_id
      
      * add endpoint
      
      * Pass a temp (write) token just for this CI
      
      * Undo last few commits, still pushing to hub for model debugging
      
      The issue seems to be with save_pretrained(): when I looked at the model saved
      from the CI test failure it is basically empty and has no weights.
      `self.save_weights(..)` seems to be failing in save_pretrained but needs
      more debugging.
      
      * Add logging to modeling tf utils, will be reverted just for debugging
      
      * Debugging, will revert
      
      * Revert "Debugging, will revert"
      
      This reverts commit 9d0d3075fb7c82d8cde3a5c76bc8f3876c5c55d3.
      
      * Revert "Add logging to modeling tf utils, will be reverted just for debugging"
      
      This reverts commit 774b6b7b1c17b3ce5d7634ade768f2f686cee617.
      
      * Remove `test_save_load`
      
      The CI failures are gone after my latest rebase, no idea why
      but I was still saving the model to my hub on HF and the tf_model.h5
      file now has everything.
      
      * Run make fix-copies
      
      * Run ruff format tests src utils
      
      * Debugging commit, will be reverted
      
      * Run ruff, also trigger CI run
      
      * Run ruff again
      
      * Undo debugging commit
      
      ---------
      Co-authored-by: Matt <rocketknight1@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
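
The "First working version" note above comes down to PyTorch's weight layout: torch.nn.Linear stores its kernel as (out_features, in_features), so a TF-side variable that is cross-loaded from those weights one-to-one has to be declared with the same shape. A tiny illustration of that convention (not the actual TFIdeficsDecoupledLinear code):

```python
import torch

in_features, out_features = 16, 32
linear = torch.nn.Linear(in_features, out_features)

# PyTorch stores the Linear kernel as (out_features, in_features); declaring the
# TF-side variable as (in_features, out_features) is the mis-initialization the
# commit message above describes.
assert linear.weight.shape == (out_features, in_features)
print(linear.weight.shape)  # torch.Size([32, 16])
```
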
      94306352
  4. 04 Apr, 2024 1 commit
    • [`ProcessingIdefics`] Attention mask bug with padding (#29449) · 75b76a5e
      byi8220 authored
      * Defaulted IdeficsProcessor padding to 'longest', removed manual padding
      
      * make fixup
      
      * Defaulted processor call to padding=False
      
      * Add padding to processor call in IdeficsModelIntegrationTest as well
      
      * Defaulted IdeficsProcessor padding to 'longest', removed manual padding
      
      * make fixup
      
      * Defaulted processor call to padding=False
      
      * Add padding to processor call in IdeficsModelIntegrationTest as well
      
      * redefaulted padding=longest again
      
      * fixup/doc
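
A hedged sketch of the behaviour this PR settles on: with a batch of prompts of different lengths, the processor pads to the longest prompt ("longest") so input_ids and attention_mask stay aligned without manual padding. The checkpoint name and prompts are illustrative, and the call follows the processor signature as of this PR (prompts passed as the first argument):

```python
import numpy as np
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b")

image = Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
prompts = [
    ["User: Describe this image.", image, "\nAssistant:"],
    ["User: Hi!\nAssistant:"],
]

# padding="longest" pads the shorter prompt up to the longest one in the batch,
# keeping input_ids and attention_mask shapes consistent.
inputs = processor(prompts, padding="longest", return_tensors="pt")
print(inputs["input_ids"].shape, inputs["attention_mask"].shape)
```
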
      75b76a5e
  5. 25 Mar, 2024 1 commit
  6. 08 Dec, 2023 1 commit
    • F.scaled_dot_product_attention support (#26572) · 80377eb0
      fxmarty authored
      
      
      * add sdpa
      
      * wip
      
      * cleaning
      
      * add ref
      
      * yet more cleaning
      
      * and more :)
      
      * wip llama
      
      * working llama
      
      * add output_attentions=True support
      
      * bigcode sdpa support
      
      * fixes
      
      * gpt-bigcode support, require torch>=2.1.1
      
      * add falcon support
      
      * fix conflicts falcon
      
      * style
      
      * fix attention_mask definition
      
      * remove output_attentions from attnmaskconverter
      
      * support whisper without removing any Copied from statement
      
      * fix mbart default to eager renaming
      
      * fix typo in falcon
      
      * fix is_causal in SDPA
      
      * check is_flash_attn_2_available in the models init as well in case the model is not initialized through from_pretrained
      
      * add warnings when falling back on the manual implementation
      
      * precise doc
      
      * wip replace _flash_attn_enabled by config.attn_implementation
      
      * fix typo
      
      * add tests
      
      * style
      
      * add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it inplace
      
      * obey to config.attn_implementation if a config is passed in from_pretrained
      
      * fix is_torch_sdpa_available when torch is not installed
      
      * remove dead code
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bart/modeling_bart.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove duplicate pretraining_tp code
      
      * add dropout in llama
      
      * precise comment on attn_mask
      
      * add fmt: off for _unmask_unattended docstring
      
      * precise num_masks comment
      
      * nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
      
      * cleanup modeling_utils
      
      * backward compatibility
      
      * fix style as requested
      
      * style
      
      * improve documentation
      
      * test pass
      
      * style
      
      * add _unmask_unattended tests
      
      * skip meaningless tests for idefics
      
      * hard_check SDPA requirements when specifically requested
      
      * standardize the use of XXX_ATTENTION_CLASSES
      
      * fix SDPA bug with mem-efficient backend on CUDA when using fp32
      
      * fix test
      
      * rely on SDPA is_causal parameter to handle the causal mask in some cases
      
      * fix FALCON_ATTENTION_CLASSES
      
      * remove _flash_attn_2_enabled occurrences
      
      * fix test
      
      * add OPT to the list of supported flash models
      
      * improve test
      
      * properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
      
      * remove remaining _flash_attn_2_enabled occurrence
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update docs/source/en/perf_infer_gpu_one.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove use_attn_implementation
      
      * fix docstring & slight bug
      
      * make attn_implementation internal (_attn_implementation)
      
      * typos
      
      * fix tests
      
      * deprecate use_flash_attention_2=True
      
      * fix test
      
      * add back llama that was removed by mistake
      
      * fix tests
      
      * remove _flash_attn_2_enabled occurrences bis
      
      * add check & test that passed attn_implementation is valid
      
      * fix falcon torchscript export
      
      * fix device of mask in tests
      
      * add tip about torch.jit.trace and move bt doc below sdpa
      
      * fix parameterized.expand order
      
      * move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
      
      * update sdpaattention class with the new cache
      
      * Update src/transformers/configuration_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bark/modeling_bark.py
      
      * address review comments
      
      * WIP torch.jit.trace fix. left: test both eager & sdpa
      
      * add test for torch.jit.trace for both eager/sdpa
      
      * fix falcon with torch==2.0 that needs to use sdpa
      
      * fix doc
      
      * hopefully last fix
      
      * fix key_value_length that has no default now in mask converter
      
      * is it flaky?
      
      * fix speculative decoding bug
      
      * tests do pass
      
      * fix following #27907
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
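
The end state of this PR is that attention can be routed through torch.nn.functional.scaled_dot_product_attention via the attn_implementation argument (stored internally as config._attn_implementation, per the commits above). A minimal sketch, with the model name purely illustrative and a recent torch assumed:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # illustrative; any SDPA-supporting model works
    torch_dtype=torch.float16,
    attn_implementation="sdpa",   # or "eager" to force the manual implementation
)
print(model.config._attn_implementation)  # "sdpa"
```
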
      80377eb0
  7. 21 Nov, 2023 1 commit
    • Idefics: Fix information leak with cross attention gate in modeling (#26839) · 851a4f70
      Leo Tronchon authored
      
      
      * fix image_attention gate in idefics modeling
      
      * update comment
      
      * cleaner gating
      
      * fix gate condition
      
      * create attention gate once
      
      * update comment
      
      * update doc of cross-attention forward
      
      * improve comment
      
      * bring back no_images
      
      * pass cross_attention_gate similarly to no_images gate
      
      * add information on gate shape
      
      * fix no_images placement
      
      * make tests for gate
      
      * take off no_images logic
      
      * update test based on comments
      
      * raise value error if cross_attention_gate is None
      
      * send cross_attention_gate to device
      
      * Revert "send cross_attention_gate to device"
      
      This reverts commit 054f84228405bfa2e75fecc502f6a96dc83cdc0b.
      
      * send cross_attention_gate to device
      
      * fix device in test + nit
      
      * fill hidden_states with zeros instead of multiplying with the gate
      
      * style
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
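
A standalone sketch of the gating idea the commits above converge on (zero out cross-attention hidden states for tokens that attend to no image, rather than multiplying by the gate); this is an illustration, not the actual modeling_idefics.py code:

```python
import torch

batch, seq_len, hidden = 1, 4, 8
hidden_states = torch.randn(batch, seq_len, hidden)

# 1.0 where the token attends to at least one image, 0.0 otherwise.
cross_attention_gate = torch.tensor([[1.0, 1.0, 0.0, 1.0]])

# Fill the gated-off positions with zeros so no information from a dummy image
# can leak through the cross-attention block.
masked = hidden_states.masked_fill((cross_attention_gate == 0)[:, :, None], 0.0)
print(masked[0, 2].abs().sum())  # tensor(0.)
```
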
      851a4f70
  8. 30 Oct, 2023 1 commit
  9. 13 Oct, 2023 1 commit
  10. 02 Oct, 2023 1 commit
    • Fix model integration ci (#26322) · 63864e05
      Arthur authored
      * fix wav2vec2
      
      * nit
      
      * stash
      
      * one more file to update
      
      * fix byt5
      
      * vocab size is 256, don't change that!
      
      * use other revision
      
      * test persimmon in smaller size
      
      * style
      
      * tests
      
      * nits
      
      * update add tokens from pretrained
      
      * test tokenization
      
      * nits
      
      * potential fnet fix?
      
      * more nits
      
      * nits
      
      * correct test
      
      * assert close
      
      * update
      
      * ouch
      
      * fix it
      
      * some more nits
      
      * FINALLY
      
      * use `adept` checkpoints
      
      * more adept checkpoints
      
      * that was involved!
      63864e05
  11. 27 Sep, 2023 1 commit
  12. 25 Sep, 2023 1 commit
  13. 14 Sep, 2023 1 commit
  14. 24 Aug, 2023 1 commit
  15. 18 Aug, 2023 1 commit
    • new model: IDEFICS via HuggingFaceM4 (#24796) · 6c811a32
      Stas Bekman authored
      
      
      * rename
      
      * restore
      
      * mappings
      
      * unedited tests+docs
      
      * docs
      
      * fixes
      
      * fix auto-sync breakage
      
      * cleanup
      
      * wip
      
      * wip
      
      * add fetch_images
      
      * remove einops dependency
      
      * update
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * re-add
      
      * add batching
      
      * rework
      
      * fix
      
      * improve
      
      * add Leo as I am extending his work
      
      * cleanup
      
      * fix
      
      * cleanup
      
      * slow-test
      
      * fix
      
      * fix
      
      * fixes
      
      * deal with warning
      
      * rename modified llama classes
      
      * rework fetch_images
      
      * alternative implementation
      
      * cleanup
      
      * strict version
      
      * cleanup
      
      * [`IDEFICS`] Fix idefics ci (#25056)
      
      * Fix IDEFICS CI
      
      * fix test file
      
      * fixup
      
      * some changes to make tests pass
      
      * fix
      
      * fixup
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * remove compat checks
      
      * style
      
      * explain that Idefics is not for training from scratch
      
      * require pt>=2.0
      
      * fix idefics vision config (#25092)
      
      * fix idefics vision config
      
      * fixup
      
      * clean
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      
      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * cleanup
      
      * style
      
      * cleanup
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * upcase
      
      * sequence of images
      
      * handle the case with no images
      
      * Update src/transformers/image_processing_utils.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * support pure lm take 2
      
      * support tokenizer options
      
      * parameterize num_channels
      
      * fix upcase
      
      * s|IdeficsForCausalLM|IdeficsForVisionText2Text|g
      
      * manual to one line
      
      * addressing review
      
      * unbreak
      
      * remove clip dependency
      
      * fix test
      
      * consistency
      
      * PIL import
      
      * Idefics prefix
      
      * Idefics prefix
      
      * hack to make tests work
      
      * style
      
      * fix
      
      * fix
      
      * revert
      
      * try/finally
      
      * cleanup
      
      * clean up
      
      * move
      
      * [`IDEFICS`] Fix idefics config refactor (#25149)
      
      * refactor config
      
      * nuke init weights
      
      * more refactor
      
      * oops
      
      * remove visual question answering pipeline support
      
      * Update src/transformers/models/idefics/clip.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      
      * cleanup
      
      * mv clip.py vision.py
      
      * tidyup
      
      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      Co-authored-by: Stas Bekman <stas@stason.org>
      
      * fix
      
      * license
      
      * condition on pt
      
      * fix
      
      * style
      
      * fix
      
      * rm torchvision dependency, allow custom transforms
      
      * address review
      
      * rework device arg
      
      * add_eos_token
      
      * s/transforms/transform/
      
      * fix top level imports
      
      * fix return value
      
      * cleanup
      
      * cleanup
      
      * fix
      
      * style
      
      * license
      
      * license
      
      * Update src/transformers/models/idefics/image_processing_idefics.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add a wrapper to freeze vision layers
      
      * tidyup
      
      * use the correct std/mean settings
      
      * parameterize values from config
      
      * add tests/models/idefics/test_image_processing_idefics.py
      
      * add test_processor_idefics.py
      
      * cleanup
      
      * cleanups
      
      * fix
      
      * fix
      
      * move to the right group
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add perceiver config
      
      * reset
      
      * missing arg docs
      
      * Apply suggestions from code review
      Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * address review comments
      
      * inject automatic end of utterance tokens (#25218)
      
      * inject automatic end of utterance tokens
      
      * fix
      
      * fix
      
      * fix
      
      * rework to not use the config
      
      * not end_of_utterance_token at the end
      
      * Update src/transformers/models/idefics/processing_idefics.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * address review
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/image_processing_utils.py
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * [`Idefics`] add image_embeddings option in generate-related methods (#25442)
      
      * add image_embeddings option in generate-related methods
      
      * style
      
      * rename image_embeddings and allow perceiver embeddings precomputation
      
      * compute embeddings within generate
      
      * make is_encoder_decoder= True the default in config
      
      * nested if else fix
      
      * better triple check
      
      * switch if elif order for pixel values / img embeds
      
      * update model_kwargs perceiver only at the end
      
      * use _prepare_model_inputs instead of encoder_decoder logic
      
      * fix comment typo
      
      * fix config default for is_encoder_decoder
      
      * style
      
      * add typehints
      
      * precompute in forward
      
      * doc builder
      
      * style
      
      * pop instead of get image hidden states
      
      * Trigger CI
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix * + indentation + style
      
      * simplify a bit the use_resampler logic using comments
      
      * update docstrings
      
      * Trigger CI
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix rebase changes
      
      * unbreak #25237 - to be fixed in follow up PRs
      
      * is_composition = False
      
      * no longer needed
      
      ---------
      Co-authored-by: leot13 <leo.tronchon@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
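
For context, the classes this PR lands on (IdeficsForVisionText2Text plus its processor) are used roughly as follows; a hedged sketch in the style of the model card example, where the checkpoint, prompt, and image URL are illustrative and the 9b checkpoint needs substantial GPU memory:

```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)

# Prompts interleave text and images (PIL images or URLs the processor fetches).
prompts = [
    [
        "User: What is in this image?",
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
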
      6c811a32