1. 08 Feb, 2023 2 commits
  2. 07 Feb, 2023 3 commits
  3. 06 Feb, 2023 2 commits
  4. 03 Feb, 2023 1 commit
  5. 02 Feb, 2023 1 commit
    • Fixes bug in the creation of ExponentialDecayLengthPenalty (#21423) · 6a3d1a98
      Jorge C. Gomes authored
      `input_ids_seq_length` doesn't exist in the GenerationConfig; it only exists as a local variable in the function.
      
      Setting exponential_decay_length_penalty therefore results in an error:
      `AttributeError: 'GenerationConfig' object has no attribute 'input_ids_seq_length'`
      
      This simple change fixes this issue, and the exponential_decay_length_penalty works as expected.
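For context, a minimal usage sketch of `exponential_decay_length_penalty` (the checkpoint and values below are illustrative, not taken from the commit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")

# The penalty is a (start_index, decay_factor) tuple: after `start_index`
# generated tokens, the eos score is boosted exponentially by `decay_factor`,
# nudging generation towards stopping. Before the fix above, passing it
# raised the AttributeError quoted in the commit message.
output = model.generate(
    **inputs,
    max_new_tokens=64,
    exponential_decay_length_penalty=(16, 1.05),
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```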
  6. 01 Feb, 2023 1 commit
  7. 30 Jan, 2023 1 commit
  8. 27 Jan, 2023 2 commits
  9. 26 Jan, 2023 1 commit
  10. 25 Jan, 2023 3 commits
    • [WHISPER] Small patch (#21307) · 6f3faf38
      Arthur authored
      * add small patch
      
      * update tests; forced decoder ids do not take priority over the generation config
      
      * fix two new tests
    • Small fix to ExponentialDecayLengthPenalty docstring (#21308) · 140c6ede
      Nick Hill authored
      Currently, it incorrectly states that the exponential_decay_length_penalty tuple parameter is optional.
      
      Also changed the corresponding type hint to be more specific.
    • [Whisper] Refactor whisper (#21252) · 255257f3
      Arthur authored
      * update whisper logit processor
      
      * add generate for whisper
      
      * remove part of the whisper specific code from pipeline
      
      * update logit processes
      
      * major update
      
      * enforce first timestamp
      
      * update generate
      
      * add more tests
      
      * update new decoding strategy
      
      * Apply suggestions from code review
      
      * update docstring
      
      * fixup
      
      * default config will not have multilingual ar
      
      * update expected tokenizer size, see pull on the hub for whisper-tiny
  11. 24 Jan, 2023 1 commit
  12. 23 Jan, 2023 1 commit
  13. 20 Jan, 2023 1 commit
  14. 19 Jan, 2023 1 commit
    • Add hallucination filter (#18675) · b9403e95
      Karim Foda authored
      
      
      * Add hallucination penalty
      
      * Make quality changes
      
      * Inverse penalty
      
      * Fix imports & quality
      
      * Fix name spelling issue
      
      * set encoder_repetition_penalty and fix quality
      
      * Fix failing test
      
      * Add to config_common_kwargs
      
      * Fix modelling_rag error
      
      * Update src/transformers/generation_logits_process.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Remove breakpoint
      
      * Make style fixes
      
      * Update encoder_repetition_penalty default value
      
      * Merge latest main changes
      
      * Make fixup changes
      
      * Add EncoderRepetitionPenaltyLogitsProcessor to generation/__init__.py
      
      * Fix repo-inconsistency
      
      * Remove venv
      
      * Remove tensorflow-macos & add tests
      
      * Add documentation
      
      * Fix quality issues
      
      * move encoder_repetition_penalty to config
      
      * Update src/transformers/configuration_utils.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/generation/configuration_utils.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Remove encoder_repetition_penalty from tests
      
      * Fix type error
      
      * Fix format error
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
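A hedged usage sketch of the `encoder_repetition_penalty` parameter added here (checkpoint, input text, and penalty value are illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

text = "The tower is 324 metres tall, about the same height as an 81-storey building."
inputs = tokenizer(text, return_tensors="pt")

# Values above 1.0 boost tokens that appear in the encoder input, discouraging
# the decoder from producing content absent from the source ("hallucinations").
output = model.generate(**inputs, max_new_tokens=40, encoder_repetition_penalty=1.5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```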
  15. 17 Jan, 2023 6 commits
    • Add Epsilon- and Eta-Sampling (#21121) · 865da84a
      Sherman Siu authored
      * Add epsilon- and eta-sampling.
      
      Add epsilon- and eta-sampling, following the official code from https://github.com/john-hewitt/truncation-sampling and adapting it to be more configurable, as required by Hugging Face transformers.
      
      * Add unit tests for epsilon- and eta-sampling.
      
      * Black: fix code formatting.
      
      * Fix docstring spacing.
      
      * Clean up newlines.
      
      * Fix implementation bugs and their associated tests.
      
      * Remove epsilon- and eta-sampling parameters from PretrainedConfig.
      
      * Clarify and clean up the documentation.
      
      * Remove parameters for PretrainedConfig test.
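A short sketch of how the two new truncation strategies are selected at generation time; the parameter names `epsilon_cutoff` and `eta_cutoff` and the cutoff values are assumptions here, not quoted from the commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

# Epsilon sampling: drop tokens whose probability falls below a fixed cutoff.
out_eps = model.generate(**inputs, do_sample=True, epsilon_cutoff=3e-4, max_new_tokens=30)

# Eta sampling: similar, but the cutoff also adapts to the entropy of the
# distribution at each decoding step.
out_eta = model.generate(**inputs, do_sample=True, eta_cutoff=3e-4, max_new_tokens=30)
print(tokenizer.decode(out_eps[0], skip_special_tokens=True))
print(tokenizer.decode(out_eta[0], skip_special_tokens=True))
```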
    • Refactoring of the text generate API docs (#21112) · 02488103
      Maria Khalusova authored
      * initial commit, refactoring the text generation api reference
      
      * removed repetitive code examples
      
      * Refactoring the text generation docs to reduce repetition
      
      * make style
    • Whisper Timestamp processor and prediction (#20620) · bb300ac6
      Arthur authored
      
      
      * add draft logit processor
      
      * add template functions
      
      * update timestamp processor parameters
      
      * draft script
      
      * simplify code
      
      * cleanup
      
      * fixup and clean
      
      * update pipeline
      
      * style
      
      * clean up previous idea
      
      * add tokenization utils
      
      * update tokenizer and asr output
      
      * fit whisper type
      
      * style and update test
      
      * clean test
      
      * style test
      
      * update tests
      
      * update error test
      
      * update code (not based on review yet)
      
      * update tokenization
      
      * update asr pipeline
      
      * update code
      
      * cleanup and update test
      
      * fmt
      
      * remove text verification
      
      * cleanup
      
      * cleanup
      
      * add model test
      
      * update tests
      
      * update code add docstring
      
      * update code and add docstring
      
      * fix pipeline tests
      
      * Small update.
      
      * Fixup.
      
      * Tmp.
      
      * More support.
      
      * Making `forced_decoder_ids` non mandatory for users to set.
      
      * update and fix first bug
      
      * properly process sequence right after merge if last
      
      * todo
      
      * allow list inputs + compute begin index better
      
      * start adding tests
      
      * add the 3 edge cases
      
      * style
      
      * format sequences
      
      * fixup
      
      * update
      
      * update
      
      * style
      
      * test passes, edge cases should be good
      
      * update last value
      
      * remove Trie
      
      * update tests and expected values
      
      * handle bigger chunk_length
      
      * clean tests a bit
      
      * refactor chunk iter and clean pipeline
      
      * update tests
      
      * style
      
      * refactor chunk iter and clean pipeline
      
      * update
      
      * resolve comments
      
      * Apply suggestions from code review
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * take stride right into account
      
      * update test expected values
      
      * Update code based on review
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
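Roughly, the feature surfaces in the speech-recognition pipeline as a `return_timestamps` flag; a sketch, assuming a local audio file `sample.flac` (the file name and printed keys are illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# return_timestamps=True routes generation through the Whisper timestamp logits
# processor and post-processes the token stream into timed text chunks.
result = asr("sample.flac", return_timestamps=True, chunk_length_s=30)
print(result["text"])
print(result["chunks"])  # e.g. [{"text": "...", "timestamp": (0.0, 5.2)}, ...]
```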
    • Clarify and add missing typical_p argument docstring. (#21095) · 8896ebb9
      Sherman Siu authored
      
      
      * Clarify and add missing typical_p docstring.
      
      * Make the docstring easier to understand.
      
      * Clarify typical_p docstring
      
      Accept the suggestion by @stevhliu for paraphrasing the docstring.
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Use the same docstring as in GenerationConfig
      
      Follow the suggestion made by @stevhliu in the pull request conversation.
      
      * Fix docstring spacing.
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
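For reference, a minimal sketch of how the documented `typical_p` argument is used (checkpoint, prompt, and value are illustrative); it only takes effect when sampling is enabled:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The weather today is", return_tensors="pt")

# typical_p keeps the locally typical tokens whose cumulative mass reaches 0.95.
output = model.generate(**inputs, do_sample=True, typical_p=0.95, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```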
    • Small simplification to TopKLogitsWarper (#21130) · 3bbc2451
      Nick Hill authored
      The max of top_k and min_tokens_to_keep, currently computed on every call, can be done once up-front.
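A minimal sketch of the idea, not the library code verbatim: hoist the `max(top_k, min_tokens_to_keep)` computation out of `__call__` and into `__init__`:

```python
import torch


class SimplifiedTopKLogitsWarper:
    def __init__(self, top_k: int, filter_value: float = -float("inf"), min_tokens_to_keep: int = 1):
        # Done once, up-front, instead of on every call.
        self.top_k = max(top_k, min_tokens_to_keep)
        self.filter_value = filter_value

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        top_k = min(self.top_k, scores.size(-1))  # never ask for more tokens than the vocab has
        # Mask every token scoring below the k-th largest logit.
        indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
        return scores.masked_fill(indices_to_remove, self.filter_value)
```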
  16. 16 Jan, 2023 1 commit
  17. 08 Jan, 2023 1 commit
    • Replace `past` with `past_key_values` (#20944) · f0577df6
      Arthur authored
      * start cleanup
      
      * more updates
      
      * more models are affected
      
      * more updates
      
      * update generation utils
      
      * style
      
      * revert change that removed reorder cache
      
      * update generation utils
      
      * style
      
      * style
      
      * remove reorder cache
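Illustrative only, not the exact library code: the shape of the rename as it typically shows up in a model's `prepare_inputs_for_generation`:

```python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
    # The cache argument previously named `past` is now consistently `past_key_values`.
    if past_key_values is not None:
        # With a cache present, only the last token needs to be passed to the model.
        input_ids = input_ids[:, -1:]
    return {"input_ids": input_ids, "past_key_values": past_key_values}
```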
  18. 05 Jan, 2023 3 commits
  19. 04 Jan, 2023 1 commit
  20. 03 Jan, 2023 4 commits
    • Add custom stop token ids for generation (#20727) · 45da7cec
      Motoki Wu authored
      * Add StopIdStoppingCriteria
      
      * add a working test for stop id criteria
      
      * add to global scope
      
      * add stop_ids to generate
      
      * add pipeline test
      
      * use tokenizer encode in test
      
      * add test to generation utils
      
      * reformat
      
      * fixup
      
      * make-fix-copies
      
      * rename to stop_token_id
      
      * use stop_tokens instead
      
      * add to text to text generation
      
      * make fixup
      
      * make repo-consistency
      
      * Add support for list of ints for eos_token_id inside generation/utils.py
      
      * Instead of having if elses, cast the eos_token_id into a List[int]
      
      * Add List[int] support for logits_process.py
      
      * add List[int] for beam_search.py
      
      * add List[int] for forced_eos_token_id
      
      * revert stop token id stopping criteria changes
      
      * make fixup
      
      * fix tests
      
      * add eos_token_id to generation/utils.py and added tests test_utils.py
      
      * add eos_token_id type hints and fix for pad tokens
      
      * add comments
      
      * remove some prints and remove forced false test
      
      * fix
      
      * put back test_stop_sequence_stopping_criteria
      
      * remove unused import and make fixup
      
      * add a none check
      
      * update docstring
      
      * add more docstring for list ints
      
      * make fixup
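After this change `eos_token_id` may be a list of ids, and generation stops at whichever appears first; a hedged sketch (the extra newline stop token is an illustrative choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Q: What is the capital of France?\nA:", return_tensors="pt")

# Stop on either the model's own eos token or a newline token.
stop_ids = [tokenizer.eos_token_id, tokenizer.encode("\n")[0]]
output = model.generate(**inputs, max_new_tokens=30, eos_token_id=stop_ids)
print(tokenizer.decode(output[0]))
```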
    • Enable `decoder_attention_mask` in `generate` function (#20726) · 15c68c67
      samuelpullely authored
      * Enable `decoder_attention_mask` in `generate` function
      
      * Make style corrections
      
      * Run `make repo-consistency`
      
      * Add integration test
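A rough sketch of the newly supported keyword, assuming a hand-built decoder prompt (the token id 644 is an arbitrary placeholder): the decoder prompt can now be accompanied by its own attention mask, which matters e.g. when it is padded:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")

# A user-supplied decoder prompt and a matching attention mask.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id, 644]])
decoder_attention_mask = torch.ones_like(decoder_input_ids)

output = model.generate(
    **inputs,
    decoder_input_ids=decoder_input_ids,
    decoder_attention_mask=decoder_attention_mask,
    max_new_tokens=20,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```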
    • `MinNewTokensLengthLogitsProcessor` for `.generate` method #20814 (#20892) · 367fdf33
      Konstantin Kotik authored
      
      
      * feat: add min new length logit processor
      
      * test: add min new length logit processor
      
      * docs: add MinNewTokensLengthLogitsProcessor
      
      * feat: import MinNewTokensLengthLogitsProcessor
      
      * fix: update pytorch dummy objects
      
      * refactor & fix: rename attributes and var and get rid of dynamic attribute
      
      * tests: align test with new interface
      
      * docs: fix typo
      
      * docs: minor clarification
      
      * Empty-Commit
      
      * empty commit
      
      * run automated quality edits
      Co-authored-by: Joao Gante <joao@huggingface.co>
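The processor is driven by the `min_new_tokens` generation argument; a sketch with illustrative values. Unlike `min_length`, it counts only newly generated tokens, not the prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("In summary,", return_tensors="pt")

# Forbid the eos token until at least 20 new tokens have been generated.
output = model.generate(**inputs, min_new_tokens=20, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```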
    • Joao Gante · 4fd89e49
  21. 02 Jan, 2023 1 commit
  22. 28 Dec, 2022 1 commit
  23. 21 Dec, 2022 1 commit