"docs/vscode:/vscode.git/clone" did not exist on "1749ca317eea9840dde044eb497a0b75fe0db0d5"
  1. 19 Oct, 2023 4 commits
  2. 18 Oct, 2023 7 commits
    • Add fuyu model (#26911) · caa0ff0b
      Pablo Montalvo authored
      
      
      * initial commit
      
      * add processor, add fuyu naming
      
      * add draft processor
      
      * fix processor
      
      * remove dropout to fix loading of weights
      
      * add image processing fixes from Pedro
      
      * fix
      
      * fix processor
      
      * add basic processing fuyu test
      
      * add documentation and TODO
      
      * address comments, add tests, add doc
      
      * replace assert with torch asserts
      
      * add Mixins and fix tests
      
      * clean imports
      
      * add model tester, clean imports
      
      * fix embedding test
      
      * add updated tests from pre-release model
      
      * Processor: return input_ids used for inference
      
      * separate processing and model tests
      
      * relax test tolerance for embeddings
      
      * add test for logit comparison
      
      * make sure fuyu image processor is imported in the init
      
      * fix formatting
      
      * more formatting issues
      
      * and more
      
      * fixups
      
      * remove some stuff
      
      * nits
      
      * update init
      
      * remove the fuyu file
      
      * Update integration test with release model
      
      * Update conversion script.
      
      The projection is not used, as confirmed by the authors.
      
      * improve generation
      
      * Remove duplicate function
      
      * Trickle down patches to model call
      
      * processing fuyu updates
      
      * remove things
      
      * fix prepare_inputs_for_generation to fix generate()
      
      * remove model_input
      
      * update
      
      * add generation tests
      
      * nits
      
      * draft leverage automodel and autoconfig
      
      * nits
      
      * fix dtype patch
      
      * address comments, update READMEs and doc, include tests
      
      * add working processing test, remove refs to subsequences
      
      * add tests, remove Sequence classification
      
      * processing
      
      * update
      
      * update the conversion script
      
      * more processing cleanup
      
      * safe import
      
      * take out ModelTesterMixin for early release
      
      * more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * and more
      
      * register a buffer
      
      * nits
      
      * add postprocessing of generate output
      
      * nits
      
      * updates
      
      * add one working test
      
      * fix test
      
      * make fixup works
      
      * fixup
      
      * Arthur's updates
      
      * nits
      
      * update
      
      * update
      
      * fix processor
      
      * update tests
      
      * pass more fixups
      
      * fix
      
      * nits
      
      * don't import torch
      
      * skip fuyu config for now
      
      * fixup done
      
      * fixup
      
      * update
      
      * oops
      
      * nits
      
      * Use input embeddings
      
      * no buffer
      
      * update
      
      * styling processing fuyu
      
      * fix test
      
      * update licence
      
      * protect torch import
      
      * fixup and update not doctested
      
      * kwargs should be passed
      
      * updates
      
      * update the imports in the test
      
      * protect import
      
      * protecting imports
      
      * protect imports in type checking
      
      * add testing decorators
      
      * protect top level import structure
      
      * fix typo
      
      * fix check init
      
      * move requires_backend to functions
      
      * Imports
      
      * Protect types
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Lysandre <lysandre@huggingface.co>
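      For context, a minimal sketch of the inference workflow this PR enables. It assumes the released "adept/fuyu-8b" checkpoint; the prompt and image path are illustrative, and exact post-processing of generate() output may differ between releases.

      ```python
      # Hedged sketch of Fuyu inference as added by this PR (not verbatim docs).
      from PIL import Image
      from transformers import FuyuForCausalLM, FuyuProcessor

      processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
      model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b", device_map="auto")

      image = Image.open("bus.png")  # hypothetical local image
      prompt = "Generate a coco-style caption.\n"

      # The processor interleaves image patches with the tokenized text prompt.
      inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
      generated = model.generate(**inputs, max_new_tokens=16)

      # Decode only the newly generated tokens, not the prompt.
      new_tokens = generated[:, inputs["input_ids"].shape[1]:]
      print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
      ```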
    • [`FA-2`] Final fix for FA2 dtype (#26846) · 5a73316b
      Younes Belkada authored
      
      
      * final fix for FA2 dtype
      
      * try
      
      * oops
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * apply fix everywhere
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
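      The gist of the fix, sketched below with illustrative names (not the verbatim diff): Flash Attention 2 kernels only accept fp16/bf16, so hidden states that were silently upcast to float32 (e.g. by layer norms kept in fp32 for training stability) must be cast back to the target dtype before the kernel call.

      ```python
      import torch

      def cast_qkv_for_flash_attn(query, key, value, target_dtype):
          # flash-attn only supports fp16/bf16; fp32 layernorms (or PEFT-style
          # wrappers) can leave activations in float32, so cast back first.
          if query.dtype == torch.float32:
              query = query.to(target_dtype)
              key = key.to(target_dtype)
              value = value.to(target_dtype)
          return query, key, value
      ```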
    • Add default template warning (#26637) · d933818d
      Matt authored
      * Add default template warnings
      
      * make fixup
      
      * Move warnings to FutureWarning
      
      * Move warnings to FutureWarning
      
      * fix make fixup
      
      * Remove futurewarning
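      A sketch of the warning pattern this PR adds, with illustrative names — the real code lives in the chat-template path of the tokenizer base class, and the PR iterated on the warning class before settling:

      ```python
      import warnings

      def resolve_chat_template(tokenizer):
          # Illustrative: warn when falling back to the class default template.
          if tokenizer.chat_template is not None:
              return tokenizer.chat_template
          warnings.warn(
              "No chat template is set for this tokenizer; falling back to the "
              "class default. This fallback is deprecated — set "
              "`tokenizer.chat_template` explicitly to silence this warning.",
              FutureWarning,
          )
          return tokenizer.default_chat_template
      ```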
    • [`Tokenizer`] Fix slow and fast serialization (#26570) · ef7e9369
      Arthur authored
      * fix
      
      * last attempt
      
      * current work
      
      * fix forward compatibility
      
      * save all special tokens
      
      * current state
      
      * revert additional changes
      
      * updates
      
      * remove tokenizer.model
      
      * add a test and the fix
      
      * nit
      
      * revert one more break
      
      * fix typefield issue
      
      * quality
      
      * more tests
      
      * fix fields for FC
      
      * more nits?
      
      * new additional changes
      
      * how
      
      * some updates
      
      * simplify all
      
      * more nits
      
      * revert some things to original
      
      * nice
      
      * nits
      
      * a small hack
      
      * more nits
      
      * ahhaha
      
      * fixup
      
      * update
      
      * make test run on ci
      
      * use subtesting
      
      * update
      
      * Update .circleci/create_circleci_config.py
      
      * updates
      
      * fixup
      
      * nits
      
      * replace typo
      
      * fix the test
      
      * nits
      
      * update
      
      * None max diff pls
      
      * a partial fix
      
      * had to revert one thing
      
      * test the fast
      
      * updates
      
      * fixup
      
      * and more nits
      
      * more fixes
      
      * update
      
      * Oupsy
      
      * nits
      
      * fix marian
      
      * on our way to heaven
      
      * Update src/transformers/models/t5/tokenization_t5.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * fixup
      
      * Update src/transformers/tokenization_utils_fast.py
      Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * fix phobert
      
      * skip some things, test more
      
      * nits
      
      * fixup
      
      * fix deberta
      
      * update
      
      * update
      
      * more updates
      
      * skip one test
      
      * more updates
      
      * fix camembert
      
      * can't test this one
      
      * more good fixes
      
      * kind of a major update
      
      - separate what is only done in fast in fast init and refactor
      - add_token(AddedToken(..., special=True)) ignores it in fast
      - better loading
      
      * fixup
      
      * more fixups
      
      * fix pegasus and mpnet
      
      * remove skipped tests
      
      * fix phoneme tokenizer if self.verbose
      
      * fix individual models
      
      * update common tests
      
      * update testing files
      
      * all over again
      
      * nits
      
      * skip test for markup lm
      
      * fixups
      
      * fix order of addition in fast by sorting the added tokens decoder
      
      * proper defaults for deberta
      
      * correct default for fnet
      
      * nits on add tokens, string initialized to special if special
      
      * skip irrelevant herbert tests
      
      * main fixes
      
      * update test added_tokens_serialization
      
      * the fix for bart-like models and class instantiation
      
      * update bart
      
      * nit!
      
      * update idefics test
      
      * fix whisper!
      
      * some fixup
      
      * fixups
      
      * revert some of the wrong changes
      
      * fixup
      
      * fixup
      
      * skip marian
      
      * skip the correct tests
      
      * skip for tf and flax as well
      
      ---------
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
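      The behaviour under test here, sketched as a save/load round trip — the checkpoint name and token string are illustrative, and the exact assertions in the PR's tests differ:

      ```python
      # Hedged sketch: added tokens (including lstrip/rstrip/special flags) must
      # survive save_pretrained -> from_pretrained for slow and fast tokenizers.
      import tempfile
      from transformers import AddedToken, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("t5-small")
      tok.add_tokens(AddedToken("<new_tok>", lstrip=True), special_tokens=True)

      with tempfile.TemporaryDirectory() as tmp:
          tok.save_pretrained(tmp)
          reloaded = AutoTokenizer.from_pretrained(tmp)

      # The token and its flags should round-trip via added_tokens_decoder.
      added = {t.content: t for t in reloaded.added_tokens_decoder.values()}
      assert "<new_tok>" in added and added["<new_tok>"].lstrip
      ```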
    • Fix Seq2seqTrainer decoder attention mask (#26841) · 34678db4
      Matt authored
      Don't drop decoder_input_ids without also dropping decoder_attention_mask
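      In other words (an illustrative sketch of the logic, not the verbatim diff): when the prediction step strips decoder_input_ids before calling generate(), the matching decoder_attention_mask has to go too, otherwise generate() receives a mask for inputs that no longer exist.

      ```python
      def strip_decoder_inputs(inputs: dict) -> dict:
          # Illustrative: drop decoder_input_ids and decoder_attention_mask
          # together, so generate() never sees a mask without its matching ids.
          if "labels" in inputs and "decoder_input_ids" in inputs:
              inputs = {
                  k: v for k, v in inputs.items()
                  if k not in ("decoder_input_ids", "decoder_attention_mask")
              }
          return inputs
      ```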
    • Generate: improve docstrings for custom stopping criteria (#26863) · e893b1ef
      Joao Gante authored
      improve docstrings
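      The docstrings in question cover user-defined stopping criteria; a minimal example of the kind of usage they document (class name and token id are illustrative):

      ```python
      import torch
      from transformers import StoppingCriteria, StoppingCriteriaList

      class StopOnTokenId(StoppingCriteria):
          """Stop generation once every sequence has just emitted `token_id`."""

          def __init__(self, token_id: int):
              self.token_id = token_id

          def __call__(self, input_ids: torch.LongTensor,
                       scores: torch.FloatTensor, **kwargs) -> bool:
              # Check only the most recently generated position in each row.
              return bool((input_ids[:, -1] == self.token_id).all())

      # Usage (illustrative):
      # model.generate(**inputs,
      #                stopping_criteria=StoppingCriteriaList([StopOnTokenId(eos_id)]))
      ```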
  3. 17 Oct, 2023 7 commits
  4. 16 Oct, 2023 8 commits
  5. 13 Oct, 2023 8 commits
  6. 12 Oct, 2023 6 commits