1. 22 Apr, 2024 1 commit
    • Terminator strings for generate() (#28932) · 0d84901c
      Matt authored
      
      
      * stash commit (will discard all of this)
      
      * stash commit
      
      * First commit - needs a lot of testing!
      
      * Add a test
      
      * Fix imports and make the tests actually test something
      
      * Tests pass!
      
      * Rearrange test
      
      * Add comments (but it's still a bit confusing)
      
      * Stop storing the tokenizer
      
      * Comment fixup
      
      * Fix for input_ids with a single sequence
      
      * Update tests to test single sequences
      
      * make fixup
      
      * Fix incorrect use of isin()
      
      * Expand tests to catch more cases
      
      * Expand tests to catch more cases
      
      * make fixup
      
      * Fix length calculation and update tests
      
      * Handle ▁ as a space replacement too
      
      * Update src/transformers/generation/stopping_criteria.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Add optimizations from Joao's suggestion
      
      * Remove TODO
      
      * Update src/transformers/generation/stopping_criteria.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update tests/generation/test_stopping_criteria.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * make fixup
      
      * Rename some variables and remove some debugging clauses for clarity
      
      * Add tests for the sub-methods
      
      * Clarify one test slightly
      
      * Add stop_strings to GenerationConfig
      
      * generate() supports stop_string arg, asks for tokenizer if not provided
      
      * make fixup
      
      * Cleanup code and rename variables for clarity
      
      * Update tokenizer error
      
      * Update tokenizer passing, handle generation on GPU
      
      * Slightly more explanation cleanup
      
      * More comment cleanup
      
      * Factor out the token cleanup so it's more obvious what we're doing, and we can change it later
      
      * Careful with that cleanup!
      
      * Cleanup + optimizations to _get_matching_positions
      
      * More minor performance tweaks
      
      * Implement caching and eliminate some expensive ops (startup time: 200ms -> 9ms)
      
      * Remove the pin_memory call
      
      * Parallelize across all stop strings!
      
      * Quick fix for tensor devices
      
      * Update embeddings test for the new format
      
      * Fix test imports
      
      * Manual patching for BERT-like tokenizers
      
      * Return a bool vector instead of a single True/False
      
      * Better comment
      
      * Better comment
      
      * Add tests from @zucchini-nlp
      
      * Amy's list creation nit
      
      * tok_list -> token_list
      
      * Push a big expanded docstring (should we put it somewhere else?)
      
      * Expand docstrings
      
      * Docstring fixups
      
      * Rebase
      
      * make fixup
      
      * Make a properly general method for figuring out token strings
      
      * Fix naming throughout the functions
      
      * Move cache, refactor, fix tests
      
      * Add comment
      
      * Remove finished TODO
      
      * Remove finished TODO
      
      * make fixup
      
      * Update src/transformers/generation/stopping_criteria.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update and shorten docstring
      
      * Update tests to be shorter/clearer and test specific cases
      
      ---------
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
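The stop-string check this PR describes can be sketched in plain Python. This is an illustrative simplification, not the transformers implementation (which matches against token ids directly rather than decoding at each step); the function names and the `decode` callable here are hypothetical stand-ins:

```python
# Sketch of the stop-string idea: after each decoding step, decode the
# generated tokens and stop as soon as the text ends with any stop string.
# `decode` stands in for a real tokenizer's decode method.

def ends_with_stop_string(text: str, stop_strings: list[str]) -> bool:
    """Return True if `text` ends with any of the given stop strings."""
    return any(text.endswith(s) for s in stop_strings)

def generate_until_stop(tokens, decode, stop_strings, max_new_tokens=50):
    """Consume tokens from an iterator until a stop string appears."""
    generated = []
    for _, token in zip(range(max_new_tokens), tokens):
        generated.append(token)
        if ends_with_stop_string(decode(generated), stop_strings):
            break
    return generated
```

Per the commit messages above, the real entry point is the `stop_strings` argument (also on `GenerationConfig`), with `generate()` asking for a tokenizer when one is needed.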
  2. 27 Mar, 2024 1 commit
  3. 26 Feb, 2024 1 commit
  4. 09 Nov, 2022 1 commit
  5. 23 Feb, 2022 1 commit
  6. 27 May, 2021 1 commit
    • Adding new argument `max_new_tokens` for generate. (#11476) · 80d712fa
      Nicolas Patry authored
      * Adding new argument `max_new_tokens` for generate.
      
      This is a proposal to add a new argument `max_new_tokens` to `generate`.
      It includes a `MaxNewTokensCriteria` that enables callers who don't
      know the token length ahead of time (like pipeline callers) to manage
      the length of their generated output more easily.
      
      * Adding a test for the user warning when both `max_length` and
      `max_new_tokens` are used together.
      
      * Removed redundant `no_grad`.
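The relationship between the prompt length and the `max_new_tokens` budget described above can be illustrated with a minimal sketch (the class name and interface here are hypothetical, not the actual `MaxNewTokensCriteria`):

```python
# Sketch of the max_new_tokens idea: stop once the number of tokens
# generated *beyond the prompt* reaches the budget, so callers don't
# need to know the prompt length up front.

class MaxNewTokensCriterion:
    def __init__(self, prompt_length: int, max_new_tokens: int):
        # Equivalent to an effective max_length of prompt + budget.
        self.max_length = prompt_length + max_new_tokens

    def __call__(self, current_length: int) -> bool:
        return current_length >= self.max_length
```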
  7. 21 Apr, 2021 1 commit
    • Removed `max_length` from being mandatory within `generate`. (#11314) · aad95c7c
      Nicolas Patry authored
      * Removed `max_length` from being mandatory within `generate`.
      
      - Moving on to fully using `StoppingCriteria` for `greedy` and `sample`
      modes.
      - `max_length` still used for `beam_search` and `group_beam_search`
      (Follow up PR)
      - Fixes a bug in MaxLengthStoppingCriteria: we should stop as soon as
      we hit `max_length`, so the comparison needs to be greater-or-equal,
      which affects the tests.
      - Added options to use `logits_processor` and `stopping_criteria`
      directly within `generate` function (so some users can define their own
      `logits_processor` and `stopping_criteria`).
      - Modified the backward compat tests to make sure we issue a warning.
      
      * Fix `max_length` argument in `generate`.
      
      * Moving validate to being functional.
      
      - Renamed `smax_length` to `stoppping_max_length`.
      
      * Removing `logits_processor` and `stopping_criteria` from `generate`
      arguments.
      
      * Deepcopy.
      
      * Fix global variable name.
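The `StoppingCriteria` pattern this PR moves toward can be sketched roughly as follows. This is a simplification over plain lists of token ids; the class names and signatures are illustrative, not the transformers API, which operates on tensors and also receives the scores:

```python
from abc import ABC, abstractmethod

class StoppingCriterion(ABC):
    """One condition under which generation should stop."""
    @abstractmethod
    def __call__(self, input_ids: list[int]) -> bool:
        ...

class MaxLengthCriterion(StoppingCriterion):
    def __init__(self, max_length: int):
        self.max_length = max_length

    def __call__(self, input_ids: list[int]) -> bool:
        # Per the bug fix noted above: stop as soon as we *reach*
        # max_length, so the comparison must be >=, not >.
        return len(input_ids) >= self.max_length

class StoppingCriteriaList(list):
    """Stop when any contained criterion fires."""
    def __call__(self, input_ids: list[int]) -> bool:
        return any(criterion(input_ids) for criterion in self)
```

Composing criteria in a list is what lets `greedy` and `sample` drop the mandatory `max_length`: any mix of length, time, or custom conditions can drive the stop decision.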
  8. 12 Mar, 2021 1 commit
    • Adding new parameter to `generate`: `max_time`. (#9846) · 543d0549
      Nicolas Patry authored
      * [WIP] Adding new parameter to `generate`:  `max_time`.
      
      Generating by token count is sometimes a bit clunky, because we don't
      know how many tokens are good enough, or even how many tokens are in
      the payload (for pipeline users, for instance). This leads to
      hard-to-understand behavior.
      
      This PR proposes a new argument `max_time`, a float number of seconds
      that `generate` is allowed to run for.
      Ideally, combinations like `max_tokens=None`, `max_time=2` could be
      used to generate as many tokens as possible within the time budget.
      
      NB: Another possible approach consists of passing a callback to `generate`,
        putting the caller in charge of the actual decision of when to stop
        generating tokens. However, it opens the door to the question of which
        args should be passed to this callback, and it's hard to imagine
        early-stopping use cases other than time that aren't already covered
        by the parameters of `generate`.
      
      * Revamp with StoppingCriteria
      
      * Removing deprecated mentions.
      
      * Forgot arguments to stopping criteria.
      
      * Re-adding `max_length`; it's not just used as a stopping criterion.
      
      * Default value for `stopping_criteria`.
      
      * Address @patrickvonplaten comments.
      
      - More docstrings
      - Actual doc
      - Include in global namespace
      - Remove TF work.
      
      * Put back `max_length` (deprecation different PR).
      
      * Doc quality.
      
      * Fixing old behavior without `stopping_criteria` but with `max_length`.
      
      Making sure we don't break that in the future.
      
      * Adding more tests for possible inconsistencies between `max_length`
      and `stopping_criteria`.
      
      * Fixing the torch imports.
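The `max_time` behavior can be sketched as a stopping criterion that tracks wall-clock time rather than token count (an illustration, not the actual `MaxTimeCriteria`):

```python
import time

class MaxTimeCriterion:
    """Report 'stop' once the allowed number of seconds has elapsed,
    regardless of how many tokens have been generated."""

    def __init__(self, max_time: float):
        self.max_time = max_time
        # Monotonic clock: immune to system clock adjustments.
        self.start = time.monotonic()

    def __call__(self) -> bool:
        return time.monotonic() - self.start >= self.max_time
```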