- 03 Jan, 2023 2 commits
-
-
Motoki Wu authored
* Add StopIdStoppingCriteria * add a working test for stop id criteria * add to global scope * add stop_ids to generate * add pipeline test * use tokenizer encode in test * add test to generation utils * reformat * fixup * make-fix-copies * rename to stop_token_id * use stop_tokens instead * add to text to text generation * make fixup * make repo-consistency * Add support for list of ints for eos_token_id inside generation/utils.py * Instead of having if elses, cast the eos_token_id into a List[int] * Add List[int] support for logits_process.py * add List[int] for beam_search.py * add List[int] for forced_eos_token_id * revert stop token id stopping criteria changes * make fixup * fix tests * add eos_token_id to generation/utils.py and added tests test_utils.py * add eos_token_id type hints and fix for pad tokens * add comments * remove some prints and remove forced false test * fix * put back test_stop_sequence_stopping_criteria * remove unused import and make fixup * add a none check * update docstring * add more docstring for list ints * make fixup
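For context, a minimal sketch of the list-valued `eos_token_id` this change enables; the gpt2 checkpoint, prompt, and the extra "." stop token below are illustrative assumptions, not taken from the commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")

# Generation now stops on *any* of the listed token ids, not just a single EOS id.
stop_ids = [tokenizer.eos_token_id, tokenizer.encode(".")[0]]
outputs = model.generate(**inputs, max_new_tokens=40, eos_token_id=stop_ids,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```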
-
Konstantin Kotik authored
* feat: add min new length logit processor * test: add min new length logit processor * docs: add MinNewTokensLengthLogitsProcessor * feat: import MinNewTokensLengthLogitsProcessor * fix: update pytorch dummy objects * refactor & fix: rename attributes and var and get rid of dynamic attribute * tests: align test with new interface * docs: fix typo * docs: minor clarification * Empty-Commit * empty commit * run automated quality edits Co-authored-by:Joao Gante <joao@huggingface.co>
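A rough usage sketch of the new processor, assuming it is re-exported from the top-level package as the message indicates; the checkpoint, prompt, and the value of 10 are illustrative:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessorList, MinNewTokensLengthLogitsProcessor)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hugging Face is", return_tensors="pt")

# Suppress EOS until at least 10 *new* tokens (beyond the prompt) are generated.
processors = LogitsProcessorList([
    MinNewTokensLengthLogitsProcessor(
        prompt_length_to_skip=inputs["input_ids"].shape[-1],
        min_new_tokens=10,
        eos_token_id=tokenizer.eos_token_id,
    )
])
out = model.generate(**inputs, logits_processor=processors, max_new_tokens=30)
```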
-
- 20 Dec, 2022 1 commit
-
-
fzyzcjy authored
* Fix typo * Update README.md * Update run_mlm_flax_stream.py * Update README.md
-
- 15 Dec, 2022 1 commit
-
-
Joao Gante authored
* generate from config mvp * fix failing tests * max_time test * Load default gen config at model load time; Update docs * further documentation; add tests * adapt rag to the new structure * handle models not instantiated with from_pretrained (like in tests) * better default generation config * add can_generate fn * handle legacy use case of ad hoc model config changes * initialize gen config from config in individual methods, if gen config is none * fix _get_decoder_start_token_id when called outside GenerationMixin * correct model config load order (set attr > model config > decoder config) * update rag to match latest changes * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * load gen config from model config in model.from_pretrained * fix can_generate fn * handle generate calls without a previous from_pretrained (e.g. tests) * add legacy behavior (and a warning) * lower logger severity Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
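A short sketch of the resulting workflow; the gpt2 checkpoint and the specific sampling values are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The default generation config is now loaded together with the model...
print(model.generation_config)

# ...and an explicit config can be passed per call instead of ad hoc model-config edits.
gen_config = GenerationConfig(max_new_tokens=20, do_sample=True, top_p=0.9,
                              pad_token_id=tokenizer.eos_token_id)
inputs = tokenizer("Generation configs", return_tensors="pt")
out = model.generate(**inputs, generation_config=gen_config)
```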
-
- 21 Nov, 2022 2 commits
-
-
Joao Gante authored
-
Joao Gante authored
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
- 14 Nov, 2022 1 commit
-
-
Joao Gante authored
-
- 09 Nov, 2022 1 commit
-
-
Joao Gante authored
* move generation_*.py src files into generation/*.py * populate generation.__init__ with lazy loading * move imports and references from generation.xxx.object to generation.object
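Roughly, the import paths after the move look like this (a sketch assuming the lazy `generation.__init__` re-exports the public classes, as described):

```python
# New layout: generation utilities live in the transformers.generation package.
from transformers.generation import LogitsProcessorList, StoppingCriteriaList

# The same objects remain importable from the top-level namespace as before.
from transformers import LogitsProcessorList  # noqa: F811
```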
-
- 01 Nov, 2022 1 commit
-
-
Joao Gante authored
* Use beam search functionality; Add extra outputs and test * Add full tests for contrastive search * Add error message on unconventional cache format
-
- 21 Oct, 2022 1 commit
-
-
Joao Gante authored
* contrastive search test updates * make fixup
-
- 19 Oct, 2022 1 commit
-
-
GMFTBY authored
Adding the state-of-the-art contrastive search decoding method to the codebase of generation_utils.py (#19477) * add: the contrastive search for generation_utils * add: testing scripts for contrastive search under examples/text-generation * update the quality of codes * revise the docstring; make the generation_contrastive_search.py scripts; * revise the examples/pytorch/text-generation/run_generation_contrastive_search.py to the auto-APIs format * revise the necessary documents * fix: revise the docstring of generation_contrastive_search.py * Fix the code indentation * fix: revise the nits and examples in contrastive_search docstring. * fix the copyright * delete generation_contrastive_search.py * revise the logic in contrastive_search * update the integration test and the docstring * run the tests over * add the slow decorator to the contrastive_search integration test * add more tests * do the style, quality, consistency checks
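A minimal sketch of triggering contrastive search through `generate()`; the checkpoint, prompt, and hyperparameter values are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("DeepMind Company is", return_tensors="pt")

# penalty_alpha > 0 combined with top_k > 1 selects the contrastive search path.
out = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```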
-
- 10 Oct, 2022 1 commit
-
-
amyeroberts authored
* simplify loop * add featur extractor * add model * start conversion * add dropout * initial commit of test files * copnversion for all models * update processor for correct padding * update feature extraction * update integration test logits match * fmnt: off for the logits * on the fly mel bank * small nit * update test * update tokenizer * nit feature extraction * update * update tokenizer test * adds logit processor and update tokenizer to get supress tokens * style * clean convert * revert to original modeling tf utils * Update * update * nit * clean convert file * update tests and nits * quality * slow generation test * ffn_dim to allow customization * update readme * add to toctreee * start fixing integration tests * update tests and code * fix feature extractor * fix config tests common * update code to fix tests * fix feature exctractor * nit feature extraction * update test for new feature extractor * style * add absrtact * large logits wioth custom decoder input ids * wraap around is otrch available * fix feature extractor * correct logits for whisper small.en * nit * fix encoder_attentino_mask * some fixes * remove unnecessary inputs * nits * add normalizer file * update etst tokenization * fix attention mask not defined * fix generate * remove uncoder attention mask useless * update test modeling whisper * update condfig to add second non supress tokens * nits on feature exrtactor * nit for test tokenizers * update etsts * update tests * update tokenization test * fixup * invalidated hf token. Clean convert openai to whisper * fix logit tests * fixup * Add model to README * Fix doc tests * clean merge * revert toc_tree changes * remove useless LogitProcessor * Update whisper .mdx * update config file doc * update configuration docstring * update test tokenization * update test tokenization * update tokenization whisper Added copied from where needed * update feature extraction * nit test name * style * quality * remove get suppress tokens and update non_speech tokens global variables * Update src/transformers/models/whisper/feature_extraction_whisper.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * clean modeling whisper and test Removed the attention mask arguments that are deprecated * fix large test * Add multilingual audio test, and translate test * style * fix larg multilingual test * nits * add copied from for attention layer * remove attention masks in doc * add english normalizer * Update docs/source/en/model_doc/whisper.mdx Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * update tokenization test * remove copied from in whisper attention : no bias in k_proj only * wrap around dependencies in english normalizer * style * correct import generation logits * for now, wrap feature extractor with torch * remove torch depencies for feature extraction and style * Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * Update docs/source/en/model_doc/whisper.mdx Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * fixup * nit * update logitds * style * nit * nits and fix final tests * add `is_more_itertools_available` to utils * quality * add begin supress tokens, supress tokens to generate args and config * clean supressTokensLogitProcessor in generation logits * Nit naming * add supressTokensAtBegin * udpate tests, supress tokens to None or correct values * nit and style * update RAG to fit test and generate_logit * add copy pasted statment on english normalizer * add arguments to config_common_kwargs * Update src/transformers/generation_utils.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * Update src/transformers/generation_logits_process.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * revert changes based on reviews * update doc and nits * Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * Apply suggestions from code review Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * more nits * last nits * update test configuration common * add BART name in decoder attention mask documentation * Update src/transformers/models/whisper/modeling_whisper.py Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> * style * nit * nit * add english.json file to git * nits on documentation * nit * nits * last styling * add main toctree file * remove sentence piece dependency * clean init file * fix tokenizer that has no dependencies on sentencepiece * update whisper init file, nit * remove english.json file * add get decoder prompt id * All weights loading * Remove hanging pdb * Fixup and tidy up * Use same copied from as PT model * Remove whitespace changes * Remove torch references * Tie embeddings * Remove logits processor input to generate * Update logit values * revert changes and add forced logit processor * nit * clean normalizer * remove protected * Add logit processors and update generation code & tests * Some tidy up * Update docstring * update * update based on review * Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update to reflect changes on the PT model branch * Tidy up * Remove extra whitespace * Fix test - make input ids small enough we can append * Include upstream changes on main * PR comments - add batch tests, remove comments & defaults * Fix model output imports * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/generation_tf_logits_process.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update tests/models/whisper/test_modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update docstring example * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Matt <Rocketknight1@users.noreply.github.com> * Remove changes to adjust_logits_during_generation function * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Tidy up imports that don't require TF * Update tests - skip and no more skip * Update tests/generation/test_generation_tf_logits_process.py Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/whisper/modeling_tf_whisper.py * Update src/transformers/models/whisper/modeling_tf_whisper.py Co-authored-by:
Matt <Rocketknight1@users.noreply.github.com> * Add training flags * Add (skipped) XLA generation tests * Add embedding correctness test * Add constant ids for generation tests * Make logits finding a bit tidier * Remove unused args * xla generation enabled * Don't skip XLA tests anymore * Fix tests - add position ids to expected signature and update rag generation * Undo method reorder * Remove added whitespace * Remove copy-paste gradient checkopint ref * Remove * Trigger CI - (issue with refs when pulling) Co-authored-by:
Arthur Zucker <arthur.zucker@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
NielsRogge <niels.rogge1@gmail.com> Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> Co-authored-by:
NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by:
Joao Gante <joaofranciscocardosogante@gmail.com> Co-authored-by:
Matt <Rocketknight1@users.noreply.github.com> Co-authored-by:
Joao Gante <joao@huggingface.co>
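A short transcription sketch with the new Whisper classes; the tiny English checkpoint and the dummy LibriSpeech split are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"],
                           return_tensors="pt").input_features

predicted_ids = model.generate(input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```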
-
- 30 Sep, 2022 1 commit
-
-
Karim Foda authored
-
- 15 Sep, 2022 1 commit
-
-
Ekagra Ranjan authored
* init PR * optimize top p and add edge case * styling * style * revert tf and flax test * add edge case test for FLAX and TF * update doc with smallest set sampling for top p * make style
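For reference, "smallest set" (nucleus / top-p) sampling draws only from the smallest set of tokens whose cumulative probability exceeds `top_p`; a sketch with illustrative values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Nucleus sampling keeps", return_tensors="pt")

# top_k=0 disables top-k filtering so only the top-p (smallest-set) warper applies.
out = model.generate(**inputs, do_sample=True, top_p=0.9, top_k=0,
                     max_new_tokens=30, pad_token_id=tokenizer.eos_token_id)
```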
-
- 05 Sep, 2022 1 commit
-
-
Joao Gante authored
-
- 02 Sep, 2022 1 commit
-
-
Joao Gante authored
-
- 19 Aug, 2022 1 commit
-
-
Joao Gante authored
-
- 18 Aug, 2022 1 commit
-
-
Joao Gante authored
-
- 12 Aug, 2022 1 commit
-
-
Joao Gante authored
* validate generate model_kwargs * generate tests -- not all models have an attn mask
-
- 23 Jul, 2022 1 commit
-
-
Joao Gante authored
-
- 28 Jun, 2022 1 commit
-
-
Yih-Dar authored
* fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 21 Jun, 2022 1 commit
-
-
unifyh authored
- Fix `top_k_top_p_filtering` not passing `filter_value` to `TopPLogitsWarper`, causing any top-p filtered logits to be -inf instead of the specified value
- Add corresponding test
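A sketch of the fixed behaviour, assuming the helper is still importable from the top-level namespace as it was at the time; the vocabulary size and filter value below are illustrative:

```python
import torch
from transformers import top_k_top_p_filtering

logits = torch.randn(1, 50257)  # illustrative GPT-2-sized logits
# With the fix, tokens removed by the top-p filter receive `filter_value`
# instead of always being set to -inf.
filtered = top_k_top_p_filtering(logits.clone(), top_p=0.9, filter_value=-1e4)
```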
-
- 10 Jun, 2022 1 commit
-
-
Patrick von Platen authored
-
- 19 May, 2022 1 commit
-
-
Patrick von Platen authored
* [Draft] fix transition probs * up * up * up * make it work * fix * finish * update
-
- 12 May, 2022 1 commit
-
-
Sylvain Gugger authored
* Black preview * Fixup too! * Fix check copies * Use the same version as the CI * Bump black
-
- 29 Apr, 2022 1 commit
-
-
Joao Gante authored
-
- 25 Apr, 2022 2 commits
-
-
Joao Gante authored
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
-
Joao Gante authored
* XLA min len, forced eos, and forced bos Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
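A rough sketch of XLA-compiled TF generation relying on these processors; the t5-small checkpoint, prompt, and lengths are illustrative assumptions:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: I love generation.", return_tensors="tf")

# Compiling generate with XLA depends on XLA-compatible min-length and
# forced-BOS/EOS logits processors.
xla_generate = tf.function(model.generate, jit_compile=True)
out = xla_generate(**inputs, max_length=32, forced_eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```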
-
- 22 Apr, 2022 1 commit
-
-
Joao Gante authored
-
- 13 Apr, 2022 1 commit
-
-
Santiago Castro authored
* Normalize using a logits warper * Add a flag in `generate` to support the logit renormalization * Add in RAG
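A sketch of the new flag; the checkpoint and sampling values are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Renormalization", return_tensors="pt")

# renormalize_logits=True re-applies log-softmax after all processors/warpers,
# so sampling draws from a proper distribution even after top-k filtering.
out = model.generate(**inputs, do_sample=True, top_k=50, renormalize_logits=True,
                     max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
```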
-
- 12 Apr, 2022 1 commit
-
-
Joao Gante authored
-
- 11 Apr, 2022 1 commit
-
-
Joao Gante authored
* min length must be smaller than max length * Update min_length in tests
-
- 06 Apr, 2022 1 commit
-
-
Joao Gante authored
* refactor TF beam search * refactored generate can now properly use attention masks * add force bos/eos logit processors
-
- 16 Mar, 2022 2 commits
-
-
Patrick von Platen authored
* up * fix min lengths
-
Sanchit Gandhi authored
* Replace all deprecated `jax.ops` operations with jnp's `at` * np to jnp scores * suggested changes
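The substitution, in rough form (the array shape and indices are illustrative):

```python
import jax.numpy as jnp

scores = jnp.zeros((2, 5))
# Before: jax.ops.index_update(scores, jax.ops.index[:, 0], 1.0)  (deprecated API)
# After: functional in-place update via jnp's `at` syntax.
scores = scores.at[:, 0].set(1.0)
```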
-
- 11 Mar, 2022 1 commit
-
-
Kevin Bondzio authored
* add possibility to softly regulate length when using sampling method in model.generate() function * fix test config, fix formatting * fix rag integration, fix docstyling * fix wrong docstring * change param to tuple, add test * fix old param in rag_model, remove unused import * change test according to new param * fix formatting * fix test case * fix doc style * move start_length calculation to Logitprocessor * add possibility to softly regulate length when using sampling method in model.generate() function * fix rag integration, fix docstyling * fix test config, fix formatting * change param to tuple, add test * fix old param in rag_model, remove unused import * add possibility to softly regulate length when using sampling method in model.generate() function * change param to tuple, add test * fix old param in rag_model, remove unused import * remove unused import * fix small errors * fix test * add possibility to softly regulate length when using sampling method in model.generate() function * fix test config, fix formatting * fix rag integration, fix docstyling * change param to tuple, add test * fix old param in rag_model, remove unused import * change test according to new param * fix test case * move start_length calculation to Logitprocessor * add possibility to softly regulate length when using sampling method in model.generate() function * fix rag integration, fix docstyling * fix test config, fix formatting * change param to tuple, add test * fix old param in rag_model, remove unused import * add possibility to softly regulate length when using sampling method in model.generate() function * fix test config, fix formatting * fix rag integration, fix docstyling * add possibility to softly regulate length when using sampling method in model.generate() function * fix rag integration, fix docstyling * change param to tuple, add test * fix old param in rag_model, remove unused import * fix small errors * Update src/transformers/generation_utils.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/generation_utils.py * Update src/transformers/generation_utils.py * fix docstring, add type ind model rag * fix docstrings * introduce seq_length variable for cleaner code * fix black formatting * add input_ids_seq_length to modeling_rag * add input_ids_seq_length to test * retrigger checks * retrigger checks Co-authored-by:
Kevin Bondzio <kev@AIM-LAP-02.local> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by:
Kevin Bondzio <kev@AIM-LAP-02.fritz.box>
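A sketch of the resulting `generate()` argument; the (15, 1.05) tuple and the checkpoint are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Soft length control:", return_tensors="pt")

# (start_index, decay_factor): after `start_index` generated tokens the EOS
# probability is boosted exponentially, softly nudging sampling to finish.
out = model.generate(**inputs, do_sample=True, max_new_tokens=60,
                     exponential_decay_length_penalty=(15, 1.05),
                     eos_token_id=tokenizer.eos_token_id,
                     pad_token_id=tokenizer.eos_token_id)
```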
-
- 07 Mar, 2022 1 commit
-
-
Chan Woo Kim authored
[Bug Fix] Beam search example in docs fails & a fix (integrating `max_length` in `BeamScorer.finalize()`) (#15555) * added the test and fix * had left out a comment
-
- 04 Mar, 2022 1 commit
-
-
Chan Woo Kim authored
* added classes to get started with constrained beam search * in progress, think i can directly force tokens now but not yet with the round robin * think now i have total control, now need to code the bank selection * technically works as desired, need to optimize and fix design choices leading to undersirable outputs * complete PR #1 without disjunctive decoding * removed incorrect tests * Delete k.txt * Delete test.py * Delete test.sh * revert changes to test scripts * genutils * full implementation with testing, no disjunctive yet * shifted docs * passing all tests realistically ran locally * removing accidentally included print statements * fixed source of error in initial PR test * fixing the get_device() vs device trap * fixed documentation docstrings about constrained_beam_search * fixed tests having failing for Speech2TextModel's floating point inputs * fix cuda long tensor * added examples and testing for them and founx & fixed a bug in beam_search and constrained_beam_search * deleted accidentally added test halting code with assert False * code reformat * Update tests/test_generation_utils.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/test_generation_utils.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/test_generation_utils.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/test_generation_utils.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/test_generation_utils.py * fixing based on comments on PR * took out the testing code that should but work fails without the beam search moditification ; style changes * fixing comments issues * docstrings for ConstraintListState * typo in PhrsalConstraint docstring * docstrings improvements * finished adding what is sort of an opinionated implementation of disjunctive generation, but it revealed errors in inner beam search logic during testing. * fixed bug found in constrained beam search that used beam_idx that were not global across all the batches * disjunctive constraint working 100% correctly * passing all tests * Accidentally included mlruns * Update src/transformers/generation_beam_constraints.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/generation_beam_constraints.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * complete overhaul of type complexities and other nits * strict type checks in generate() * fixing second round of feedback by narsil * fixed failing generation test because of type check overhaul * generation test fail fix * fixing test fails Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
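A minimal sketch using the constraint classes added here; the t5-small checkpoint and the forced word are illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")

# Every finished beam must contain the constrained phrase; requires beam search.
constraint = PhrasalConstraint(tokenizer("Sie", add_special_tokens=False).input_ids)
out = model.generate(**inputs, constraints=[constraint], num_beams=5, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```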
-
- 02 Mar, 2022 1 commit
-
-
Joao Gante authored
* Add TF logits wrappers * Add sample method * add tests for TF logit wrappers * TF generate sample tests now run on CPU Co-authored-by:Matt <Rocketknight1@users.noreply.github.com>
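A sketch of TF sampling backed by the new warpers; the checkpoint and values are illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("TensorFlow sampling", return_tensors="tf")

# do_sample=True routes through the TF temperature / top-k / top-p warpers.
out = model.generate(**inputs, do_sample=True, top_k=50, temperature=0.7, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```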
-
- 23 Feb, 2022 1 commit
-
-
Lysandre Debut authored
* Per-folder tests reorganization Co-authored-by: sgugger <sylvain.gugger@gmail.com> Co-authored-by: Stas Bekman <stas@stason.org>
-