1. 03 Apr, 2024 2 commits
  2. 01 Apr, 2024 1 commit
  3. 12 Mar, 2024 1 commit
  4. 08 Mar, 2024 1 commit
  5. 27 Feb, 2024 1 commit
  6. 31 Jan, 2024 1 commit
  7. 19 Jan, 2024 1 commit
  8. 18 Jan, 2024 1 commit
  9. 10 Jan, 2024 1 commit
  10. 22 Dec, 2023 1 commit
  11. 08 Dec, 2023 1 commit
• F.scaled_dot_product_attention support (#26572) · 80377eb0
  fxmarty authored
      
      
      * add sdpa
      
      * wip
      
      * cleaning
      
      * add ref
      
      * yet more cleaning
      
      * and more :)
      
      * wip llama
      
      * working llama
      
      * add output_attentions=True support
      
      * bigcode sdpa support
      
      * fixes
      
      * gpt-bigcode support, require torch>=2.1.1
      
      * add falcon support
      
      * fix conflicts falcon
      
      * style
      
      * fix attention_mask definition
      
      * remove output_attentions from attnmaskconverter
      
      * support whisper without removing any Copied from statement
      
      * fix mbart default to eager renaming
      
      * fix typo in falcon
      
      * fix is_causal in SDPA
      
* check is_flash_attn_2_available in the model's init as well, in case the model is not initialized through from_pretrained
      
      * add warnings when falling back on the manual implementation
      
      * precise doc
      
      * wip replace _flash_attn_enabled by config.attn_implementation
      
      * fix typo
      
      * add tests
      
      * style
      
      * add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it inplace
      
* obey config.attn_implementation if a config is passed in from_pretrained
      
      * fix is_torch_sdpa_available when torch is not installed
      
      * remove dead code
      
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/models/bart/modeling_bart.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove duplicate pretraining_tp code
      
      * add dropout in llama
      
      * precise comment on attn_mask
      
      * add fmt: off for _unmask_unattended docstring
      
      * precise num_masks comment
      
      * nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
      
      * cleanup modeling_utils
      
      * backward compatibility
      
      * fix style as requested
      
      * style
      
      * improve documentation
      
      * test pass
      
      * style
      
      * add _unmask_unattended tests
      
      * skip meaningless tests for idefics
      
      * hard_check SDPA requirements when specifically requested
      
* standardize the use of XXX_ATTENTION_CLASSES
      
      * fix SDPA bug with mem-efficient backend on CUDA when using fp32
      
      * fix test
      
      * rely on SDPA is_causal parameter to handle the causal mask in some cases
      
      * fix FALCON_ATTENTION_CLASSES
      
* remove _flash_attn_2_enabled occurrences
      
      * fix test
      
      * add OPT to the list of supported flash models
      
      * improve test
      
      * properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
      
* remove remaining _flash_attn_2_enabled occurrence
      
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update docs/source/en/perf_infer_gpu_one.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove use_attn_implementation
      
      * fix docstring & slight bug
      
      * make attn_implementation internal (_attn_implementation)
      
      * typos
      
      * fix tests
      
      * deprecate use_flash_attention_2=True
      
      * fix test
      
      * add back llama that was removed by mistake
      
      * fix tests
      
* remove _flash_attn_2_enabled occurrences bis
      
      * add check & test that passed attn_implementation is valid
      
      * fix falcon torchscript export
      
      * fix device of mask in tests
      
      * add tip about torch.jit.trace and move bt doc below sdpa
      
      * fix parameterized.expand order
      
      * move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
      
      * update sdpaattention class with the new cache
      
      * Update src/transformers/configuration_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bark/modeling_bark.py
      
      * address review comments
      
      * WIP torch.jit.trace fix. left: test both eager & sdpa
      
      * add test for torch.jit.trace for both eager/sdpa
      
      * fix falcon with torch==2.0 that needs to use sdpa
      
      * fix doc
      
      * hopefully last fix
      
      * fix key_value_length that has no default now in mask converter
      
* is it flaky?
      
      * fix speculative decoding bug
      
      * tests do pass
      
      * fix following #27907
      
      ---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
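The user-facing switch this PR introduces is the `attn_implementation` argument of `from_pretrained`. A minimal sketch of the resulting API (checkpoint name illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# "sdpa" routes attention through torch.nn.functional.scaled_dot_product_attention;
# "eager" keeps the manual implementation, "flash_attention_2" needs flash-attn.
# Some models (e.g. gpt-bigcode) need torch>=2.1.1 for the SDPA path.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative checkpoint
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
)
```

An invalid value raises an error, and the model falls back on the manual implementation (with a warning) in cases SDPA cannot serve, such as `output_attentions=True`.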
  12. 23 Nov, 2023 1 commit
  13. 22 Nov, 2023 1 commit
• [Whisper] Add sequential longform decoding (#27492) · 4151fbb4
  Patrick von Platen authored
      * [Whisper] Add seq gen
      
      * [Whisper] Add seq gen
      
      * more debug
      
      * Fix whisper logit processor
      
      * Improve whisper code further
      
      * Fix more
      
      * more debug
      
      * more debug
      
      * Improve further
      
      * Add tests
      
      * Prep for batch size > 1
      
      * Get batch_size>1 working
      
      * Correct more
      
      * Add extensive tests
      
      * more debug
      
      * more debug
      
      * more debug
      
      * add more tests
      
      * more debug
      
      * Apply suggestions from code review
      
      * more debug
      
      * add comments to explain the code better
      
      * add comments to explain the code better
      
      * add comments to explain the code better
      
      * Add more examples
      
      * add comments to explain the code better
      
      * fix more
      
      * add comments to explain the code better
      
      * add comments to explain the code better
      
      * correct
      
      * correct
      
      * finalize
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
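In use, the sequential path activates when the inputs exceed Whisper's 30-second window. A sketch with placeholder audio; the generation settings below are illustrative, not mandated by the PR:

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

raw_audio = np.zeros(16_000 * 60, dtype=np.float32)  # placeholder: 60s mono @ 16 kHz

# Keep the whole recording instead of truncating to the 30s window.
inputs = processor(
    raw_audio,
    sampling_rate=16_000,
    truncation=False,
    padding="longest",
    return_attention_mask=True,
    return_tensors="pt",
)

# Consecutive windows are decoded one after another, each optionally
# conditioned on the previous text, with temperature fallback on failures.
generated_ids = model.generate(
    **inputs,
    condition_on_prev_tokens=True,
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
    return_timestamps=True,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```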
  14. 16 Nov, 2023 2 commits
• [`Styling`] stylify using ruff (#27144) · 651408a0
  Arthur authored
      
      
      * try to stylify using ruff
      
      * might need to remove these changes?
      
* use ruff format and ruff check
      
* use isinstance instead of type comparison
      
      * use # fmt: skip
      
      * use # fmt: skip
      
      * nits
      
* some styling changes
      
      * update ci job
      
      * nits isinstance
      
      * more files update
      
      * nits
      
      * more nits
      
      * small nits
      
      * check and format
      
      * revert wrong changes
      
      * actually use formatter instead of checker
      
      * nits
      
      * well docbuilder is overwriting this commit
      
      * revert notebook changes
      
      * try to nuke docbuilder
      
      * style
      
* fix feature extraction test
      
* remove `indent-width = 4`
      
      * fixup
      
      * more nits
      
      * update the ruff version that we use
      
      * style
      
      * nuke docbuilder styling
      
* leave the print for detected changes
      
      * nits
      
      * Remove file I/O
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
      
      * style
      
      * nits
      
      * revert notebook changes
      
      * Add # fmt skip when possible
      
      * Add # fmt skip when possible
      
      * Fix
      
      * More `  # fmt: skip` usage
      
      * More `  # fmt: skip` usage
      
      * More `  # fmt: skip` usage
      
* Nits
      
      * more fixes
      
      * fix tapas
      
      * Another way to skip
      
      * Recommended way
      
* Fix two more files
      
* Remove asynch
      
      ---------
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
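Many commits above replace ad-hoc exceptions with formatter pragmas. An illustrative snippet of the `# fmt:` directives the ruff formatter honours (same semantics as black):

```python
# A single statement can opt out of formatting with a trailing pragma.
IDENTITY = [
    [1, 0],
    [0, 1],
]  # fmt: skip

# Whole regions can be fenced off; the formatter leaves them as written.
# fmt: off
ALIASES = {
    "sdpa": "scaled dot-product attention",
    "fa2":  "flash attention 2",
}
# fmt: on
```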
• Revert "add attention_mask and position_ids in assisted model" (#27523) · 5603fad2
  Patrick von Platen authored
      * Revert "add attention_mask and position_ids in assisted model (#26892)"
      
      This reverts commit 184f60dc.
      
      * more debug
  15. 09 Nov, 2023 1 commit
  16. 01 Nov, 2023 3 commits
  17. 31 Oct, 2023 1 commit
• device agnostic models testing (#27146) · 50378cbf
  Hz, Ji authored
      * device agnostic models testing
      
      * add decorator `require_torch_fp16`
      
      * make style
      
      * apply review suggestion
      
      * Oops, the fp16 decorator was misused
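A sketch of the device-agnostic pattern with a hypothetical test: `torch_device` resolves to whichever accelerator the environment provides, and the `require_torch_fp16` decorator added here skips the test on backends without float16 support:

```python
import unittest

import torch
from transformers.testing_utils import require_torch_fp16, torch_device

class DeviceAgnosticExampleTest(unittest.TestCase):  # hypothetical test class
    @require_torch_fp16
    def test_linear_fp16(self):
        # No hard-coded "cuda": the same test runs on any supported device.
        layer = torch.nn.Linear(4, 4).to(torch_device, dtype=torch.float16)
        out = layer(torch.ones(1, 4, device=torch_device, dtype=torch.float16))
        self.assertEqual(out.dtype, torch.float16)
```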
  18. 30 Oct, 2023 1 commit
  19. 11 Oct, 2023 1 commit
  20. 15 Sep, 2023 1 commit
  21. 29 Jun, 2023 1 commit
  22. 28 Jun, 2023 1 commit
  23. 27 Jun, 2023 1 commit
  24. 26 Jun, 2023 1 commit
  25. 21 Jun, 2023 1 commit
• add word-level timestamps to Whisper (#23205) · cd927a47
  Matthijs Hollemans authored
      * let's go!
      
      * initial implementation of token-level timestamps
      
      * only return a single timestamp per token
      
      * remove token probabilities
      
      * fix return type
      
      * fix doc comment
      
      * strip special tokens
      
      * rename
      
      * revert to not stripping special tokens
      
      * only support models that have alignment_heads
      
      * add integration test
      
      * consistently name it token-level timestamps
      
      * small DTW tweak
      
      * initial support for ASR pipeline
      
      * fix pipeline doc comments
      
      * resolve token timestamps in pipeline with chunking
      
      * change warning when no final timestamp is found
      
      * return word-level timestamps
      
      * fixup
      
      * fix bug that skipped final word in each chunk
      
      * fix failing unit tests
      
      * merge punctuations into the words
      
      * also return word tokens
      
      * also return token indices
      
      * add (failing) unit test for combine_tokens_into_words
      
      * make combine_tokens_into_words private
      
      * restore OpenAI's punctuation rules
      
      * add pipeline tests
      
      * make requested changes
      
      * PR review changes
      
      * fix failing pipeline test
      
      * small stuff from PR
      
      * only return words and their timestamps, not segments
      
      * move alignment_heads into generation config
      
      * forgot to set alignment_heads in pipeline tests
      
      * tiny comment fix
      
      * grr
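End to end, the feature surfaces in the ASR pipeline through `return_timestamps="word"` (the checkpoint must define `alignment_heads` in its generation config, as the commits above note). A sketch assuming a local audio file:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
result = asr("speech.wav", return_timestamps="word")  # "speech.wav" is assumed

# Each chunk is a word (punctuation merged in) with (start, end) seconds.
for chunk in result["chunks"]:
    print(chunk["text"], chunk["timestamp"])
```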
  26. 20 Jun, 2023 1 commit
  27. 30 May, 2023 1 commit
  28. 24 May, 2023 1 commit
  29. 19 May, 2023 1 commit
• feat: Whisper prompting (#22496) · 2acedf47
  Connor Henderson authored
      * initial working additions
      
      * clean and rename, add cond stripping initial prompt to decode
      
      * cleanup, edit create_initial_prompt_ids, add tests
      
      * repo consistency, flip order of conditional
      
      * fix error, move the processor fn to the tokenizer
      
      * repo consistency, update test ids to corresponding tokenizer
      
      * use convert_tokens_to_ids not get_vocab...
      
      * use actual conditional in generate
      
* make style
      
      * initial address comments
      
      * initial working add new params to pipeline
      
      * first draft of sequential generation for condition_on_previous_text
      
      * add/update tests, make compatible with timestamps
      
      * make compatible with diff. input kwargs and max length
      
      * add None check
      
      * add temperature check
      
      * flip temp check operand
      
* refocusing to prev PR scope
      
      * remove the params too
      
      * make style
      
      * edits, move max length incorporating prompt to whisper
      
      * address comments
      
      * remove asr pipeline prompt decoding, fix indexing
      
      * address comments (more tests, validate prompt)
      
      * un-comment out tests (from debug)
      
      * remove old comment
      
      * address comments
      
      * fix typo
      
      * remove timestamp token from test
      
      * make style
      
      * cleanup
      
      * copy method to fast tokenizer, set max_new_tokens for test
      
      * prompt_ids type just pt
      
      * address Amy's comments
      
      * make style
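The entry point added here is `get_prompt_ids`, whose output feeds `generate` via `prompt_ids`. A sketch where the prompt text is illustrative and `input_features` is assumed to come from the processor:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# input_features: log-mel features prepared by `processor(audio, ...)` (assumed).
prompt_ids = processor.get_prompt_ids("Mauna Loa, caldera", return_tensors="pt")
output_ids = model.generate(input_features, prompt_ids=prompt_ids)
text = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
```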
  30. 11 May, 2023 2 commits
  31. 05 May, 2023 2 commits
  32. 18 Apr, 2023 1 commit
• Generate: Add assisted generation (#22211) · 78cda46f
  Joao Gante authored
      * working mvp
      
      * remove breakpoint
      
      * fix commit
      
      * standardize outputs
      
      * tmp commit
      
      * tests almost ready
      
      * tmp commit
      
      * skip a few models
      
      * Add streaming; Docs and examples
      
      * document limitations
      
      * PR commits
      
      * Amy PR comments
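In use, assisted generation only asks for a smaller `assistant_model` that shares the main model's tokenizer; the checkpoints below are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b-deduped")
# The assistant drafts candidate tokens cheaply; the main model verifies
# them in one forward pass, so outputs match non-assisted greedy decoding.
assistant = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")

inputs = tokenizer("Assisted generation works by", return_tensors="pt")
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```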
  33. 06 Apr, 2023 2 commits