1. 20 Oct, 2023 1 commit
• Fix Fuyu image scaling bug (#26918) · c030fc89
      Pedro Cuenca authored
      * Fix Fuyu image scaling bug
      
It could produce negative padding and hence inference errors for certain
image sizes (see the sketch after this entry).
      
      * Fix aspect ratio scaling test
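For context, the failure mode and the fix can be sketched as follows. This is a minimal illustration, not the actual Fuyu processor code; the helper name and target size are hypothetical:

```python
import math

def scale_and_pad(height, width, target_height=1080, target_width=1920):
    # Hypothetical helper: scale to fit the target canvas, preserving
    # the aspect ratio, then pad up to the full target size.
    scale = min(target_height / height, target_width / width)
    new_h = math.ceil(height * scale)  # rounding up here is what could
    new_w = math.ceil(width * scale)   # overshoot the target dimensions
    # The fix: clamp so padding can never go negative for any input size.
    pad_h = max(0, target_height - new_h)
    pad_w = max(0, target_width - new_w)
    return (new_h, new_w), (pad_h, pad_w)
```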
  2. 19 Oct, 2023 1 commit
  3. 18 Oct, 2023 4 commits
• Add fuyu model (#26911) · caa0ff0b
      Pablo Montalvo authored
      
      
      * initial commit
      
      * add processor, add fuyu naming
      
      * add draft processor
      
      * fix processor
      
      * remove dropout to fix loading of weights
      
      * add image processing fixes from Pedro
      
      * fix
      
      * fix processor
      
      * add basic processing fuyu test
      
      * add documentation and TODO
      
      * address comments, add tests, add doc
      
      * replace assert with torch asserts
      
      * add Mixins and fix tests
      
      * clean imports
      
      * add model tester, clean imports
      
      * fix embedding test
      
      * add updated tests from pre-release model
      
      * Processor: return input_ids used for inference
      
      * separate processing and model tests
      
      * relax test tolerance for embeddings
      
      * add test for logit comparison
      
      * make sure fuyu image processor is imported in the init
      
* fix formatting
      
      * more formatting issues
      
      * and more
      
      * fixups
      
      * remove some stuff
      
      * nits
      
      * update init
      
      * remove the fuyu file
      
      * Update integration test with release model
      
      * Update conversion script.
      
      The projection is not used, as confirmed by the authors.
      
* improve generation
      
      * Remove duplicate function
      
      * Trickle down patches to model call
      
      * processing fuyu updates
      
      * remove things
      
      * fix prepare_inputs_for_generation to fix generate()
      
      * remove model_input
      
      * update
      
      * add generation tests
      
      * nits
      
      * draft leverage automodel and autoconfig
      
      * nits
      
      * fix dtype patch
      
      * address comments, update READMEs and doc, include tests
      
      * add working processing test, remove refs to subsequences
      
      * add tests, remove Sequence classification
      
      * processing
      
      * update
      
      * update the conversion script
      
      * more processing cleanup
      
      * safe import
      
      * take out ModelTesterMixin for early release
      
* more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * and more
      
      * register a buffer
      
      * nits
      
      * add postprocessing of generate output
      
      * nits
      
      * updates
      
      * add one working test
      
      * fix test
      
      * make fixup works
      
      * fixup
      
      * Arthur's updates
      
      * nits
      
      * update
      
      * update
      
      * fix processor
      
      * update tests
      
* pass more fixups
      
      * fix
      
      * nits
      
      * don't import torch
      
      * skip fuyu config for now
      
      * fixup done
      
      * fixup
      
      * update
      
* oops
      
      * nits
      
      * Use input embeddings
      
      * no buffer
      
      * update
      
      * styling processing fuyu
      
      * fix test
      
      * update licence
      
* protect torch import (see the sketch after this entry)
      
      * fixup and update not doctested
      
      * kwargs should be passed
      
* updates
      
* update the imports in the test
      
      * protect import
      
      * protecting imports
      
      * protect imports in type checking
      
      * add testing decorators
      
      * protect top level import structure
      
      * fix typo
      
      * fix check init
      
      * move requires_backend to functions
      
      * Imports
      
      * Protect types
      
      ---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Lysandre <lysandre@huggingface.co>
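Several of the bullets above ("safe import", "protect torch import", "protect imports in type checking") refer to the repository's optional-dependency pattern. A minimal sketch of that pattern, with a hypothetical function as the example:

```python
from typing import TYPE_CHECKING

from transformers.utils import is_torch_available

if TYPE_CHECKING:
    # Type-checker-only import; never executed at runtime.
    import torch


def to_tensor(pixel_values) -> "torch.Tensor":
    # Import lazily so the module still loads when torch is absent.
    if not is_torch_available():
        raise ImportError("to_tensor requires PyTorch to be installed.")
    import torch

    return torch.as_tensor(pixel_values)
```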
• [`FA-2`] Final fix for FA2 dtype (#26846) · 5a73316b
      Younes Belkada authored
      
      
* final fix for FA2 dtype (see the sketch after this entry)
      
      * try
      
      * oops
      
      * Update src/transformers/models/falcon/modeling_falcon.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * apply fix everywhere
      
      ---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
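For context: Flash Attention 2 kernels only run in float16/bfloat16, while upstream layer norms can silently promote activations to float32. A minimal sketch of the cast-back pattern the fix applies everywhere; in the real models the target dtype is derived from the weights or autocast state, not hard-coded:

```python
import torch

def cast_for_flash_attn(query, key, value, target_dtype=torch.float16):
    # Hypothetical helper: ensure q/k/v are in a dtype FA2 accepts.
    if query.dtype == torch.float32:
        query = query.to(target_dtype)
        key = key.to(target_dtype)
        value = value.to(target_dtype)
    return query, key, value
```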
• de55ead1
  Matt authored
• [`Tokenizer`] Fix slow and fast serialization (#26570) · ef7e9369
      Arthur authored
      * fix
      
      * last attempt
      
      * current work
      
      * fix forward compatibility
      
      * save all special tokens
      
      * current state
      
      * revert additional changes
      
      * updates
      
      * remove tokenizer.model
      
      * add a test and the fix
      
      * nit
      
      * revert one more break
      
      * fix typefield issue
      
      * quality
      
      * more tests
      
      * fix fields for FC
      
      * more nits?
      
      * new additional changes
      
      * how
      
      * some updates
      
      * simplify all
      
      * more nits
      
      * revert some things to original
      
      * nice
      
      * nits
      
      * a small hack
      
      * more nits
      
      * ahhaha
      
      * fixup
      
      * update
      
      * make test run on ci
      
      * use subtesting
      
      * update
      
      * Update .circleci/create_circleci_config.py
      
      * updates
      
      * fixup
      
      * nits
      
      * replace typo
      
      * fix the test
      
      * nits
      
      * update
      
* None max diff pls
      
      * a partial fix
      
      * had to revert one thing
      
      * test the fast
      
      * updates
      
      * fixup
      
      * and more nits
      
      * more fixes
      
      * update
      
* Oupsy 👀
      
      * nits
      
      * fix marian
      
      * on our way to heaven
      
      * Update src/transformers/models/t5/tokenization_t5.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * fixup
      
      * Update src/transformers/tokenization_utils_fast.py
Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * Update src/transformers/tokenization_utils_base.py
Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * fix phobert
      
      * skip some things, test more
      
      * nits
      
      * fixup
      
      * fix deberta
      
      * update
      
      * update
      
      * more updates
      
      * skip one test
      
      * more updates
      
      * fix camembert
      
      * can't test this one
      
      * more good fixes
      
* kind of a major update

- separate what is only done in fast into the fast init and refactor
- add_token(AddedToken(..., special=True)) ignores it in fast (see the sketch after this entry)
- better loading
      
      * fixup
      
      * more fixups
      
      * fix pegasus and mpnet
      
      * remove skipped tests
      
      * fix phoneme tokenizer if self.verbose
      
      * fix individual models
      
      * update common tests
      
      * update testing files
      
      * all over again
      
      * nits
      
      * skip test for markup lm
      
      * fixups
      
      * fix order of addition in fast by sorting the added tokens decoder
      
      * proper defaults for deberta
      
      * correct default for fnet
      
      * nits on add tokens, string initialized to special if special
      
      * skip irrelevant herbert tests
      
      * main fixes
      
      * update test added_tokens_serialization
      
* the fix for bart-like models and class instantiation
      
      * update bart
      
      * nit!
      
      * update idefix test
      
      * fix whisper!
      
      * some fixup
      
      * fixups
      
* revert some of the wrong changes
      
      * fixup
      
      * fixup
      
      * skip marian
      
      * skip the correct tests
      
      * skip for tf and flax as well
      
      ---------
Co-authored-by: Lysandre Debut <hi@lysand.re>
Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
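A usage sketch of the serialization behavior this PR targets, assuming a transformers/tokenizers version that ships `AddedToken(special=...)` and the `added_tokens_decoder` property:

```python
from transformers import AddedToken, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")

# The `special` flag set at add time should now survive save/load round
# trips for both slow and fast tokenizers.
tok.add_tokens(AddedToken("<my_token>", special=True))

# Fast tokenizers re-add tokens in index order, hence the sorted
# added-tokens decoder mentioned above.
for index, token in sorted(tok.added_tokens_decoder.items()):
    print(index, token.content, token.special)
```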
  4. 17 Oct, 2023 2 commits
  5. 16 Oct, 2023 3 commits
  6. 13 Oct, 2023 4 commits
• Add OWLv2, bis (#26668) · 762af3e3
      NielsRogge authored
      * First draft
      
      * Update conversion script
      
      * Update copied from statements
      
      * Fix style
      
      * Add copied from to config
      
      * Add copied from to processor
      
      * Run make fixup
      
      * Add docstring
      
      * Update docstrings
      
      * Add method
      
      * Improve docstrings
      
      * Fix docstrings
      
      * Improve docstrings
      
      * Remove onnx
      
      * Add flag
      
      * Address comments
      
      * Add copied from to model tests
      
      * Add flag to conversion script
      
* Add code snippet (a usage sketch follows this entry)
      
      * Address more comments
      
      * Address comment
      
      * Improve conversion script
      
      * More improvements
      
      * Add expected objectness logits
      
      * Skip test
      
      * Improve conversion script
      
      * Extend conversion script
      
      * Convert large checkpoint
      
      * Fix doc tests
      
      * Convert all checkpoints, update integration tests
      
      * Add checkpoint_path arg
      
      * Fix repo_id
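A short usage sketch of the new model; the checkpoint name and threshold are illustrative, and `post_process_object_detection` follows the OWL-ViT processor API:

```python
import requests
import torch
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat"]], images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert logits and normalized boxes to absolute (x0, y0, x1, y1) boxes.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.3, target_sizes=target_sizes
)
print(results[0]["scores"], results[0]["labels"], results[0]["boxes"])
```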
• Fix Falcon generation test (#26770) · bdb391e9
      Matt authored
• Disable default system prompt for LLaMA (#26765) · c9785d95
      Matt authored
* Disable default system prompt for LLaMA (see the sketch after this entry)
      
      * Update test to not expect default prompt
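For context, the default LLaMA system prompt becomes opt-in; a minimal sketch, assuming the tokenizer's `use_default_system_prompt` flag and an illustrative checkpoint name:

```python
from transformers import AutoTokenizer

# After this change no system prompt is injected unless explicitly enabled.
tok = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", use_default_system_prompt=True
)
```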
  7. 12 Oct, 2023 6 commits
  8. 11 Oct, 2023 4 commits
• [Assistant Generation] Improve Encoder Decoder (#26701) · da69de17
      Patrick von Platen authored
* [Assistant Generation] Improve enc dec (see the usage sketch after this entry)
      
      * save more
      
      * Fix logit processor checks
      
      * Clean
      
      * make style
      
      * fix deprecation
      
      * fix generation test
      
      * Apply suggestions from code review
      
      * fix biogpt
      
      * make style
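A usage sketch of assisted generation with an encoder-decoder pair; the checkpoints are illustrative, and the assistant must share the main model's tokenizer/vocabulary:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
# A smaller model drafts candidate tokens; the main model verifies them.
assistant = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```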
• `Copied from` for test files (#26713) · 5334796d
      Yih-Dar authored
      
      
* copied statement for test files (see the example after this entry)
      
      ---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
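For context, `Copied from` markers tell `make repo-consistency` to keep a function identical (modulo renames) to its referenced source; this PR extends the check to test files. An illustrative, hypothetical example of the marker:

```python
# The target path below is hypothetical; real markers point at an existing
# function in the repository.
# Copied from tests.models.bert.test_modeling_bert.BertModelTest.test_model with Bert->MyNewModel
```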
• In assisted decoding, pass model_kwargs to model's forward call (fix prepare_inputs_for_generation in all models) (#25242) · dcc49d8a
      Billy Bradley authored
In assisted decoding, pass model_kwargs to model's forward call (fix prepare_inputs_for_generation in all models) (#25242)
      
      * In assisted decoding, pass model_kwargs to model's forward call
      
      Previously, assisted decoding would ignore any additional kwargs
      that it doesn't explicitly handle. This was inconsistent with other
      generation methods, which pass the model_kwargs through
      prepare_inputs_for_generation and forward the returned dict to the
      model's forward call.
      
The prepare_inputs_for_generation method needs to be amended in all
models, as previously it kept only the last input ID when past_key_values
was passed (see the sketch after this entry).
      
      * Improve variable names in _extend_attention_mask
      
      * Refactor extending token_type_ids into a function
      
      * Replace deepcopy with copy to optimize performance
      
      * Update new persimmon model with llama changes for assisted generation
      
      * Update new mistral model for assisted generation with prepare_inputs_for_generation
      
      * Update position_ids creation in falcon prepare_inputs_for_generation to support assisted generation
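A minimal sketch of the amended pattern, written as a hypothetical standalone function; the real method lives on each model class and handles more kwargs:

```python
def prepare_inputs_for_generation(input_ids, past_key_values=None, **model_kwargs):
    if past_key_values is not None:
        # Cache length = tokens already processed. Assisted decoding can
        # append several candidate tokens at once, so slice by cache length
        # instead of keeping only the single last input ID.
        past_length = past_key_values[0][0].shape[2]
        input_ids = input_ids[:, past_length:]
    # Forward any extra model_kwargs instead of silently dropping them.
    return {"input_ids": input_ids, "past_key_values": past_key_values, **model_kwargs}
```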
• Make Whisper Encoder's sinusoidal PE non-trainable by default (#26032) · 1e3c9dda
      Thien Tran authored
      
      
* set encoder's PE as non-trainable (see the sketch after this entry)
      
      * freeze flax
      
      * init sinusoids
      
      * add test for non-trainable embed positions
      
      * simplify TF encoder embed_pos
      
      * revert tf
      
      * clean up
      
      * add sinusoidal init for jax
      
      * make consistent sinusoidal function
      
      * fix dtype
      
      * add default dtype
      
      * use numpy for sinusoids. fix jax
      
      * add sinusoid init for TF
      
      * fix
      
      * use custom embedding
      
      * use specialized init for each impl
      
      * fix sinusoids init. add test for pytorch
      
      * fix TF dtype
      
      * simplify sinusoid init for flax and tf
      
      * add tests for TF
      
      * change default dtype to float32
      
      * add sinusoid test for flax
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * move sinusoidal init to _init_weights
      
      ---------
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
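For context, a sketch of the standard sinusoidal table such an init produces, mirroring the usual formulation rather than the exact code the PR moves into `_init_weights`:

```python
import numpy as np

def sinusoids(length, channels, max_timescale=10000.0):
    # Standard sinusoidal position-embedding table of shape
    # (length, channels); channels is assumed to be even.
    half = channels // 2
    log_increment = np.log(max_timescale) / max(half - 1, 1)
    inv_timescales = np.exp(-log_increment * np.arange(half))
    scaled_time = np.arange(length)[:, None] * inv_timescales[None, :]
    return np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)
```

Because the table is deterministic, the embedding can be created with `requires_grad=False` and stay frozen by default.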
  9. 09 Oct, 2023 1 commit
  10. 06 Oct, 2023 6 commits
  11. 05 Oct, 2023 3 commits
  12. 04 Oct, 2023 3 commits
  13. 03 Oct, 2023 2 commits