1. 06 Aug, 2024 1 commit
      Update kwargs validation for `preprocess` with decorator (#32024) · fb66ef81
      Pavel Iakubovskii authored
      * BLIP preprocess
      
      * BIT preprocess
      
      * BRIDGETOWER preprocess
      
      * CHAMELEON preprocess
      
      * CHINESE_CLIP preprocess
      
      * CONVNEXT preprocess
      
      * DEIT preprocess
      
      * DONUT preprocess
      
      * DPT preprocess
      
      * FLAVA preprocess
      
      * EFFICIENTNET preprocess
      
      * FUYU preprocess
      
      * GLPN preprocess
      
      * IMAGEGPT preprocess
      
      * INSTRUCTBLIPVIDEO preprocess
      
      * VIVIT preprocess
      
      * ZOEDEPTH preprocess
      
      * VITMATTE preprocess
      
      * VIT preprocess
      
      * VILT preprocess
      
      * VIDEOMAE preprocess
      
      * VIDEOLLAVA preprocess
      
      * TVP processing
      
      * TVP fixup
      
      * SWIN2SR preprocess
      
      * SIGLIP preprocess
      
      * SAM preprocess
      
      * RT-DETR preprocess
      
      * PVT preprocess
      
      * POOLFORMER preprocess
      
      * PERCEIVER preprocess
      
      * OWLVIT preprocess
      
      * OWLV2 preprocess
      
      * NOUGAT preprocess
      
      * MOBILEVIT preprocess
      
      * MOBILENETV2 preprocess
      
      * MOBILENETV1 preprocess
      
      * LEVIT preprocess
      
      * LAYOUTLMV2 preprocess
      
      * LAYOUTLMV3 preprocess
      
      * Add test
      
      * Update tests
      fb66ef81
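      A minimal sketch of the decorator-based kwargs validation this commit describes is shown below; the `validate_preprocess_kwargs` name and the `DummyImageProcessor` class are hypothetical illustrations, not the actual helper added in #32024.

      ```python
      # Hypothetical sketch: warn about keyword arguments passed to `preprocess`
      # that are not in the method signature, and drop them before the call.
      import inspect
      import warnings
      from functools import wraps


      def validate_preprocess_kwargs(func):
          valid_names = set(inspect.signature(func).parameters)

          @wraps(func)
          def wrapper(self, *args, **kwargs):
              unexpected = set(kwargs) - valid_names
              if unexpected:
                  warnings.warn(f"Ignoring unexpected kwargs: {sorted(unexpected)}")
                  kwargs = {k: v for k, v in kwargs.items() if k in valid_names}
              return func(self, *args, **kwargs)

          return wrapper


      class DummyImageProcessor:
          @validate_preprocess_kwargs
          def preprocess(self, images, do_resize=True, size=None):
              return {"pixel_values": images, "do_resize": do_resize, "size": size}
      ```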
  2. 08 Jul, 2024 1 commit
      Add FA2 and `sdpa` support for SigLIP (#31499) · a177821b
      Pavel Iakubovskii authored
      * Rebase to main
      
      * Fix attention implementation autoset for text and vision configs
      
      * Fixup
      
      * Minor fixes
      
      * Fix copies
      
      * Fix attention_mask for FA2
      
      * Add equivalence tests for siglip
      
      * Remove right padding test
      
      * Uncomment flaky
      
      * Fix import
      
      * Add to docs
      
      * Fix test message
      
      * Add sdpa
      
      * Add sdpa equivalence test
      
      * Add siglip sdpa to docs
      
      * Fix typing for attention output
      
      * Add sdpa tests
      
      * Fix signature of FA2
      
      * Autoset attn_implementation in config
      
      * Rename bsz -> batch_size
      
      * Move back autoset attn method
      
      * Mark as flaky
      
      * Correct attention mask padding
      
      * [run-slow] siglip
      
      * Add FA2 and sdpa docs
      
      * Style fix
      
      * Remove flaky for FA2 test
      
      * Change attention implementation set
      
      * Change attn_implementation propagation
      
      * Fix typos
      
      * Add modality to assert message
      
      * Add more sdpa backends in test
      
      * [run slow] siglip
      
      * Add math sdpa backend for all options
      
      * [run slow] siglip
      a177821b
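      A usage sketch for the attention backends added here, assuming a transformers release that includes this PR and, for FlashAttention-2, a supported GPU with the flash-attn package installed:

      ```python
      import torch
      from transformers import AutoModel

      # Load SigLIP with the SDPA backend; "flash_attention_2" can be used
      # instead when flash-attn and a compatible GPU are available.
      model = AutoModel.from_pretrained(
          "google/siglip-base-patch16-224",
          attn_implementation="sdpa",
          torch_dtype=torch.float16,
      )
      ```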
  3. 05 Jul, 2024 1 commit
      Add training support for SigLIP (#31495) · 1d3eaa6f
      Billy Cao authored
      * Add siglip loss function
      
      * Update docs
      
      * Enable training tests
      [experimental] enable GC training tests, as they have worked on my own data
      
      * Remove test_training* overrides to enable training tests
      [run_slow] siglip
      
      * Skip training tests for Siglip text model and ImageClassificationModel
      [run_slow] siglip
      
      * Skip GC training tests for SiglipForImageClassification
      
      * Explicitly skip training tests for SiglipVisionModel
      Add skip reason for training tests for SiglipTextModel
      
      * Remove copied from to fix CI
      1d3eaa6f
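      For context, a rough sketch of the sigmoid contrastive loss from the SigLIP paper follows; the exact loss function added in #31495 may differ in details.

      ```python
      import torch
      import torch.nn.functional as F


      def siglip_loss(logits_per_text: torch.Tensor) -> torch.Tensor:
          # logits_per_text: (batch, batch) text-image similarities, already
          # scaled by the learned temperature and shifted by the learned bias.
          eye = torch.eye(logits_per_text.size(0), device=logits_per_text.device)
          labels = 2 * eye - 1  # +1 for matching pairs, -1 for every other pair
          return -F.logsigmoid(labels * logits_per_text).sum() / logits_per_text.size(0)
      ```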
  4. 26 Jun, 2024 1 commit
  5. 25 Jun, 2024 1 commit
  6. 17 Jun, 2024 1 commit
  7. 11 Jun, 2024 1 commit
      Fast image processor (#28847) · f53fe35b
      amyeroberts authored
      
      
      * Draft fast image processors
      
      * Draft working fast version
      
      * py3.8 compatible cache
      
      * Enable loading fast image processors through auto
      
      * Tidy up; rescale behaviour based on input type
      
      * Enable tests for fast image processors
      
      * Smarter rescaling
      
      * Don't default to Fast
      
      * Safer imports
      
      * Add necessary Pillow requirement
      
      * Woops
      
      * Add AutoImageProcessor test
      
      * Fix up
      
      * Fix test for imagegpt
      
      * Fix test
      
      * Review comments
      
      * Add warning for TF and JAX input types
      
      * Rearrange
      
      * Return transforms
      
      * NumpyToTensor transformation
      
      * Rebase - include changes from upstream in ImageProcessingMixin
      
      * Safe typing
      
      * Fix up
      
      * convert mean/std to tensor to rescale
      
      * Don't store transforms in state
      
      * Fix up
      
      * Update src/transformers/image_processing_utils_fast.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/auto/image_processing_auto.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Warn if fast image processor available
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      
      * Transpose incoming numpy images to be in CHW format
      
      * Update mapping names based on packages, auto set fast to None
      
      * Fix up
      
      * Fix
      
      * Add AutoImageProcessor.from_pretrained(checkpoint, use_fast=True) test
      
      * Update src/transformers/models/vit/image_processing_vit_fast.py
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
      
      * Add equivalence and speed tests
      
      * Fix up
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
      f53fe35b
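      The opt-in path exercised by the new test can be sketched as follows; the checkpoint name is only an example, and the fast image processor requires torchvision to be installed.

      ```python
      from transformers import AutoImageProcessor

      # use_fast=True selects the fast, torchvision-backed image processor when
      # one is available for the checkpoint's architecture.
      processor = AutoImageProcessor.from_pretrained(
          "google/vit-base-patch16-224", use_fast=True
      )
      ```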
  8. 07 Jun, 2024 1 commit
  9. 23 May, 2024 1 commit
      Fix accelerate failing tests (#30836) · 8366b572
      Marc Sun authored
      * Fix accelerate tests
      
      * fix clip
      
      * skip dbrx tests
      
      * fix GPTSan
      
      * fix M2M100Model
      
      * same fix as jamba
      
      * fix mt5
      
      * Fix T5Model
      
      * Fix umt5 model
      
      * fix switch_transformers
      
      * fix whisper
      
      * fix gptsan again
      
      * fix siglip recent test
      
      * skip siglip tests
      
      * wrong place fixed
      8366b572
  10. 22 May, 2024 1 commit
  11. 09 May, 2024 1 commit
  12. 25 Mar, 2024 1 commit
  13. 13 Mar, 2024 1 commit
  14. 12 Mar, 2024 1 commit
  15. 14 Feb, 2024 1 commit
  16. 08 Jan, 2024 1 commit
      Add SigLIP (#26522) · 3b742ea8
      NielsRogge authored
      
      
      * Add first draft
      
      * Use appropriate gelu function
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Convert checkpoint
      
      * More improvements
      
      * Improve docs, remove print statements
      
      * More improvements
      
      * Add link
      
      * remove unused masking function
      
      * begin tokenizer
      
      * do_lower_case
      
      * debug
      
      * set split_special_tokens=True
      
      * Remove script
      
      * Fix style
      
      * Fix rebase
      
      * Use same design as CLIP
      
      * Add fast tokenizer
      
      * Add SiglipTokenizer to init, remove extra_ids
      
      * Improve conversion script
      
      * Use smaller inputs in conversion script
      
      * Update conversion script
      
      * More improvements
      
      * Add processor to conversion script
      
      * Add tests
      
      * Remove print statements
      
      * Add tokenizer tests
      
      * Fix more tests
      
      * More improvements related to weight initialization
      
      * More improvements
      
      * Make more tests pass
      
      * More improvements
      
      * More improvements
      
      * Add copied from
      
      * Add canonicalize_text
      
      * Enable fast tokenizer tests
      
      * More improvements
      
      * Fix most slow tokenizer tests
      
      * Address comments
      
      * Fix style
      
      * Remove script
      
      * Address some comments
      
      * Add copied from to tests
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Remove is_flax_available
      
      * More updates
      
      * Address comment
      
      * Remove SiglipTokenizerFast for now
      
      * Add caching
      
      * Remove umt5 test
      
      * Add canonicalize_text inside _tokenize, thanks Arthur
      
      * Fix image processor tests
      
      * Skip tests which are not applicable
      
      * Skip test_initialization
      
      * More improvements
      
      * Compare pixel values
      
      * Fix doc tests, add integration test
      
      * Add do_normalize
      
      * Remove causal mask and leverage ignore copy
      
      * Fix attention_mask
      
      * Fix remaining tests
      
      * Fix dummies
      
      * Rename temperature and bias
      
      * Address comments
      
      * Add copied from to tokenizer tests
      
      * Add SiglipVisionModel to auto mapping
      
      * Add copied from to image processor tests
      
      * Improve doc
      
      * Remove SiglipVisionModel from index
      
      * Address comments
      
      * Improve docs
      
      * Simplify config
      
      * Add first draft
      
      * Make it like mistral
      
      * More improvements
      
      * Fix attention_mask
      
      * Fix output_attentions
      
      * Add note in docs
      
      * Convert multilingual model
      
      * Convert large checkpoint
      
      * Convert more checkpoints
      
      * Add pipeline support, correct image_mean and image_std
      
      * Use padding=max_length by default
      
      * Make processor like llava
      
      * Add code snippet
      
      * Convert more checkpoints
      
      * Set keep_punctuation_string=None as in OpenCLIP
      
      * Set normalized=False for special tokens
      
      * Fix doc test
      
      * Update integration test
      
      * Add figure
      
      * Update organization
      
      * Happy new year
      
      * Use AutoModel everywhere
      
      ---------
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      3b742ea8
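      A hedged zero-shot classification sketch for the model added here, following the padding=max_length convention noted in the commit; the checkpoint and image URL are examples only.

      ```python
      import requests
      import torch
      from PIL import Image
      from transformers import AutoModel, AutoProcessor

      model = AutoModel.from_pretrained("google/siglip-base-patch16-224")
      processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)
      texts = ["a photo of 2 cats", "a photo of a dog"]

      # SigLIP was trained with padding="max_length", so use it at inference too.
      inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)

      # Sigmoid (not softmax) gives independent per-pair match probabilities.
      probs = torch.sigmoid(outputs.logits_per_image)
      ```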