1. 31 Oct, 2023 2 commits
  2. 19 Oct, 2023 1 commit
  3. 12 Oct, 2023 1 commit
  4. 03 Oct, 2023 1 commit
  5. 25 Sep, 2023 1 commit
  6. 22 Sep, 2023 1 commit
  7. 18 Sep, 2023 2 commits
    • 🚨🚨 🚨🚨 [`Tokenizer`] attempt to fix add_token issues 🚨🚨 🚨🚨 (#23909) · 2da88537
      Arthur authored

      * fix test for bart. Order is correct now, let's skip BPEs
      
      * phew
      
      * styling
      
      * fix bert....
      
      * slow refactoring
      
      * current updates
      
      * massive refactoring
      
      * update
      
      * NICE!
      
      * update to see where I am at
      
      * updates
      
      * update
      
      * update
      
      * revert
      
      * updates
      
      * updates
      
      * start supporting legacy_save
      
      * styling
      
      * big update
      
      * revert some changes
      
      * nits
      
      * nniiiiiice
      
      * small fixes
      
      * kinda fix t5 with new behaviour
      
      * major update
      
      * fixup
      
      * fix copies
      
      * today's updates
      
      * fix byt5
      
      * update
      
      * update
      
      * update
      
      * updates
      
      * update vocab size test
      
      * Barthez does not need the fairseq offset ids
      
      * super call must be after
      
      * call super
      
      * move all super init
      
      * move other super init
      
      * fixup
      
      * nits
      
      * more fixes
      
      * nits
      
      * more fixes
      
      * nits
      
      * more fix
      
      * remove useless files
      
      * ouch all of them are affected
      
      * and more!
      
      * small improvements
      
      * no more sanitize token
      
      * more changes around unique no split tokens
      
      * partially fix more things
      
      * keep legacy save but add warning
      
      * so... more fixes
      
      * updates
      
      * guess deberta tokenizer could be nuked
      
      * fixup
      
      * fixup did some bad things
      
      * nuke it if it breaks
      
      * remove prints and pretrain fast from slow with new format.
      
      * fixups
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * phew
      
      * nit
      
      * by default specials should not be normalized?
      
      * update
      
      * remove breakpoint
      
      * updates
      
      * a lot of updates
      
      * fixup
      
      * fixes: revert some changes to match fast
      
      * small nits
      
      * that makes it cleaner
      
      * fix camembert accordingly
      
      * update
      
      * some less breaking changes
      
      * update
      
      * fixup
      
      * fix byt5 and whisper mostly
      
      * some more fixes, canine's byte vocab
      
      * fix gpt2
      
      * fix most of the perceiver tests (4 left)
      
      * fix layout lmv3
      
      * fixup
      
      * fix copies for gpt2 style
      
      * make sure to only warn once
      
      * fix perceiver and gpt2 tests
      
      * some more backward compatibility: also read special tokens map because some people still use it
      
      * fixup
      
      * add else when reading
      
      * nits
      
      * fresh updates
      
      * fix copies
      
      * will this make everything faster?
      
      * fixes
      
      * more fixes
      
      * update
      
      * more fixes
      
      * fixup
      
      * is the source of truth right?
      
      * sorry camembert for the troubles
      
      * current updates
      
      * fixup
      
      * update led
      
      * update
      
      * fix regression
      
      * fix single word
      
      * more model specific fixes
      
      * fix t5 tests
      
      * fixup
      
      * more comments
      
      * update
      
      * fix nllb
      
      * rstrip removed
      
      * small fixes
      
      * better handle additional_special_tokens and vocab sizes
      
      * fixing
      
      * styling
      
      * fix 4 / 21
      
      * fixup
      
      * fix nllb's tests
      
      * some fixes
      
      * fix t5
      
      * fixes
      
      * style
      
      * fix canine tests
      
      * damn this is nice
      
      * nits
      
      * m2m100 nit
      
      * fixups
      
      * fixes!
      
      * fixup
      
      * stash
      
      * fix merge
      
      * revert bad change
      
      * fixup
      
      * correct order for code Llama
      
      * fix speecht5 post merge
      
      * styling
      
      * revert source of 11 fails
      
      * small nits
      
      * all changes in one go
      
      * fnet hack
      
      * fix 2 more tests
      
      * update based on main branch of tokenizers
      
      * fixup
      
      * fix VITS issues
      
      * more fixes
      
      * fix mgp test
      
      * fix camembert issues
      
      * oops, camembert still has 2 failing tests
      
      * mluke fixes
      
      * decode fixes
      
      * small nits
      
      * nits
      
      * fix llama and vits
      
      * fix camembert
      
      * small nits
      
      * more fixes when initialising a fast from a slow, etc.
      
      * fix one of the last test
      
      * fix CPM tokenizer test
      
      * fixups
      
      * fix pop2piano
      
      * fixup
      
      * Change tokenizers required version

      * Change tokenizers required version
      
      * "tokenizers>=0.14,<0.15", don't forget smaller than
      
      * fix musicgen tests and PreTrainedTokenizerFast
      
      * fix owlvit and all
      
      * update t5
      
      * fix 800 red
      
      * fix tests
      
      * fix the fix of the fix of t5
      
      * styling
      
      * documentation nits
      
      * cache _added_tokens_encoder
      
      * fixups
      
      * Nit
      
      * fix red tests
      
      * one last nit!
      
      * make everything a lot simpler
      
      * Now it's over 😉

      * few small nits
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * updates that work for now
      
      * tests that should not be skipped / changed and fixed next
      
      * fixup
      
      * i am ashamed
      
      * push the fix
      
      * update
      
      * fixups
      
      * nits
      
      * fix added_tokens_encoder
      
      * fix canine test
      
      * fix pegasus vocab
      
      * fix transfoXL
      
      * fixup
      
      * whisper needs to be fixed for train new
      
      * pegasus nits
      
      * more pegasus fixes
      
      * minor update
      
      * better error message in failed test
      
      * fix whisper failing test
      
      * fix whisper failing test
      
      * fix pegasus
      
      * fixup
      
      * fix **** pegasus
      
      * reset things
      
      * remove another file
      
      * attempts to fix the strange custom encoder and offset
      
      * nits here and there
      
      * update
      
      * fixup
      
      * nit
      
      * fix the whisper test
      
      * nits nits
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * updates based on review
      
      * some small update to potentially remove
      
      * nits
      
      * import lru cache
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * move warning to `from_pretrained`
      
      * update tests results now that the special tokens are always added
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
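      A rough sketch of the `add_tokens` surface this PR reworks — the checkpoint and token strings below are illustrative, not taken from the PR:

      ```python
      from transformers import AutoTokenizer, AddedToken

      # a slow (Python) tokenizer; the PR refactors how these track added tokens
      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

      # AddedToken controls how an added token is matched; one theme of the PR
      # is that added special tokens should no longer be normalized by default
      tokenizer.add_tokens([AddedToken("<custom>", normalized=False, lstrip=False)])
      tokenizer.add_special_tokens({"additional_special_tokens": ["<sep2>"]})

      print(tokenizer.get_added_vocab())  # added tokens mapped to their ids
      print(len(tokenizer))               # length includes added tokens
      ```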
    • Fix ConversationalPipeline tests (#26217) · f0a6057f
      Matt authored
      Add BlenderbotSmall templates and correct handling for conversation.past_user_inputs
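      A minimal sketch of the attribute this fixes (the checkpoint name is an example, not taken from the PR):

      ```python
      from transformers import Conversation, pipeline

      chatbot = pipeline("conversational", model="facebook/blenderbot_small-90M")

      conversation = Conversation("What's your favourite book?")
      conversation = chatbot(conversation)

      # past_user_inputs holds user turns the pipeline has already processed
      print(conversation.past_user_inputs)
      print(conversation.generated_responses)
      ```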
  8. 14 Sep, 2023 3 commits
    • [Whisper] Fix word-level timestamps for audio < 30 seconds (#25607) · 95fe0f5d
      Joshua Lochner authored

      * Fix word-level timestamps for audio < 30 seconds
      
      * Fix code quality
      
      * fix unit tests
      
      * Fix unit tests
      
      * Fix unit test
      
      * temp: print out result
      
      * temp: set max diff to None
      
      * fix unit tests
      
      * fix typo
      
      * Fix typo
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Use generation config for `num_frames`
      
      * fix docs
      
      * Move `num_frames` to kwargs
      
      * compute stride/attn_mask once
      
      * mark test as slow
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
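      The word-timestamp path being fixed is reached through the ASR pipeline; a hedged sketch (model and file names are examples):

      ```python
      from transformers import pipeline

      asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

      # return_timestamps="word" exercises the code path fixed here; before
      # this PR, word timings drifted on clips shorter than Whisper's 30 s window
      result = asr("short_clip.wav", return_timestamps="word")
      for chunk in result["chunks"]:
          print(chunk["text"], chunk["timestamp"])
      ```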
    • [MusicGen] Add sampling rate to config (#26136) · 44a0490d
      Sanchit Gandhi authored

      * [MusicGen] Add sampling rate to config
      
      * remove tiny
      
      * make property
      
      * Update tests/pipelines/test_pipelines_text_to_audio.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * style
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
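      A sketch of what the change appears to expose, assuming the small MusicGen checkpoint:

      ```python
      from transformers import MusicgenForConditionalGeneration

      model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

      # after this PR the config exposes the audio encoder's sampling rate as a
      # property, so e.g. the text-to-audio pipeline can report it directly
      print(model.config.sampling_rate)
      ```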
    • Overhaul Conversation class and prompt templating (#25323) · 866df66f
      Matt authored

      * First commit while I figure this out
      
      * make fixup
      
      * Remove unused method
      
      * Store prompt attrib
      
      * Fix prompt argument for tests
      
      * Make same changes in fast tokenizer
      
      * Remove global prompts from fast tokenizer too
      
      * stash commit
      
      * stash commit
      
      * Migrate PromptConfig to its True Final Location
      
      * Replace Conversation entirely with the new class
      
      * Import/dependency fixes
      
      * Import/dependency fixes
      
      * Change format for lots of default prompts
      
      * More default prompt fixups
      
      * Revert llama old methods so we can compare
      
      * Fix some default configs
      
      * Fix some default configs
      
      * Fix misspelled kwarg
      
      * Fixes for Blenderbot
      
      * make fixup
      
      * little rebase cleanup
      
      * Add basic documentation
      
      * Quick doc fix
      
      * Truncate docstring for now
      
      * Add handling for the case when messages is a single string
      
      * Quick llama merges
      
      * Update conversational pipeline and tests
      
      * Add a couple of legacy properties for backward compatibility
      
      * More legacy handling
      
      * Add docstring for build_conversation_input_ids
      
      * Restructure PromptConfig
      
      * Let's start T E M P L A T I N G
      
      * Refactor all default configs to use templates instead
      
      * Revert changes to the special token properties since we don't need them anymore
      
      * More class templates
      
      * Make the sandbox even sandier
      
      * Everything replaced with pure templating
      
      * Remove docs for PromptConfig
      
      * Add testing and optional requirement boilerplate
      
      * Fix imports and make fixup
      
      * Fix LLaMA tests and add Conversation docstring
      
      * Finally get LLaMA working with the template system
      
      * Finally get LLaMA working with the template system
      
      * make fixup
      
      * make fixup
      
      * fmt-off for the long lists of test tokens
      
      * Rename method to apply_chat_template for now
      
      * Start on documentation
      
      * Make chat_template a property that reads through to the default if it's not set
      
      * Expand docs
      
      * Expand chat templating doc some more
      
      * trim/lstrip blocks by default and update doc
      
      * Few doc tweaks
      
      * rebase cleanup
      
      * Clarify docstring
      
      * rebase cleanup
      
      * rebase cleanup
      
      * make fixup
      
      * Quick doc edit
      
      * Reformat the standard template to match ChatML
      
      * Re-add PEFT check
      
      * Update docs/source/en/chat_templating.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add apply_chat_template to the tokenizer doc
      
      * make fixup
      
      * Add doc links
      
      * Fix chat links
      
      * Fix chat links
      
      * Explain system messages in the doc
      
      * Add chat template test
      
      * Proper save-loading for chat template attribute
      
      * Add test skips for layout models
      
      * Remove _build_conversation_input_ids, add default_chat_template to code_llama
      
      * Make sure all LLaMA models are using the latest template
      
      * Remove default_system_prompt block in code_llama because it has no default prompt
      
      * Update ConversationPipeline preprocess
      
      * Add correct #Copied from links to the default_chat_templates
      
      * Remove unneeded type checking line
      
      * Add a dummy mark_processed method
      
      * Reorganize Conversation to have **deprecated_kwargs
      
      * Update chat_templating.md
      
      * Quick fix to LLAMA tests
      
      * Small doc tweaks
      
      * Add proper docstrings and "copied from" statements to all default chat templates
      
      * Merge use_default_system_prompt support for code_llama too
      
      * Improve clarity around self.chat_template
      
      * Docstring fix
      
      * Fix blenderbot default template
      
      * More doctest fixes
      
      * Break out some tokenizer kwargs
      
      * Update doc to explain default templates
      
      * Quick tweaks to tokenizer args
      
      * Cleanups for tokenizer args
      
      * Add note about caching
      
      * Quick tweak to the chat-templating doc
      
      * Update the LLaMA template with error checking and correct system message embedding
      
      * make fixup
      
      * make fixup
      
      * add requires_jinja
      
      * Cleanup to expected output formatting
      
      * Add caching
      
      * Fix typo in llama default template
      
      * Update LLaMA tests
      
      * Update documentation
      
      * Improved legacy handling in the Conversation class
      
      * Update Jinja template with proper error handling
      
      * Quick bugfix
      
      * Proper exception raising
      
      * Change caching behaviour so it doesn't try to pickle an entire Jinja env
      
      * make fixup
      
      * rebase cleanup
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
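      The user-facing result of this PR is `apply_chat_template`; a minimal sketch (the checkpoint is illustrative):

      ```python
      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

      messages = [
          {"role": "system", "content": "You are a terse assistant."},
          {"role": "user", "content": "What is the capital of France?"},
      ]

      # renders the conversation through the tokenizer's Jinja chat_template,
      # falling back to the class-level default template when none is set
      prompt = tokenizer.apply_chat_template(messages, tokenize=False)
      print(prompt)

      # with tokenize=True (the default) it returns ids ready for generate()
      input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
      ```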
  9. 05 Sep, 2023 2 commits
  10. 01 Sep, 2023 1 commit
  11. 31 Aug, 2023 1 commit
  12. 30 Aug, 2023 1 commit
    • Add Blip2 model in VQA pipeline (#25532) · 09dc9951
      Juan Pizarro authored
      * Add Blip2 model in VQA pipeline
      
      * use require_torch_gpu for test_large_model_pt_blip2
      
      * use can_generate in vqa pipeline
      
      * test Blip2ForConditionalGeneration using float16
      
      * remove custom can_generate from Blip2ForConditionalGeneration
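      A sketch of the resulting usage, mirroring the float16 GPU test mentioned above (the image URL is a placeholder):

      ```python
      import torch
      from transformers import pipeline

      # BLIP-2 is generative, so the VQA pipeline now checks can_generate()
      # and routes through generate() instead of a classification head
      vqa = pipeline(
          "visual-question-answering",
          model="Salesforce/blip2-opt-2.7b",
          torch_dtype=torch.float16,
          device=0,
      )

      print(vqa(image="https://example.com/photo.jpg", question="What is in the picture?"))
      ```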
  13. 24 Aug, 2023 1 commit
  14. 18 Aug, 2023 1 commit
  15. 17 Aug, 2023 1 commit
    • Add Text-To-Speech pipeline (#24952) · b8f69d0d
      Yoach Lacombe authored

      * add AutoModelForTextToSpeech class
      
      * add TTS pipeline and testing
      
      * add docstrings to text_to_speech pipeline
      
      * fix torch dependency
      
      * correct 'processor is None' case in Pipeline
      
      * correct repo id
      
      * modify text-to-speech -> text-to-audio
      
      * remove processor
      
      * rename text_to_speech pipelines files to text_audio
      
      * add textToWaveform and textToSpectrogram instead of textToAudio classes
      
      * update TTS pipeline to the bare minimum
      
      * update tests TTS pipeline
      
      * make style and erase useless import torch in TTS pipeline tests
      
      * modify how to check if generate or forward in TTS pipeline
      
      * remove unnecessary extra new lines
      
      * Apply suggestions from code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * refactor input_texts -> text_inputs
      
      * correct docstrings of TTS.__call__
      
      * correct the shape of generated waveform
      
      * take care of Bark tokenizer special case
      
      * correct run_pipeline_test TTS
      
      * make style
      
      * update TTS docstrings
      
      * address Sylvain's nit refactors
      
      * make style
      
      * refactor into one liners
      
      * correct squeeze
      
      * correct way to test if forward or generate
      
      * Update output audio waveform shape
      
      * make style
      
      * correct import
      
      * modify how the TTS pipeline test if a model can generate
      
      * align shape output of TTS pipeline with consistent shape
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
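      A minimal sketch of the finished pipeline (the checkpoint is an example):

      ```python
      from transformers import pipeline

      # "text-to-speech" is also accepted as an alias of the "text-to-audio" task
      synthesiser = pipeline("text-to-audio", model="suno/bark-small")

      out = synthesiser("Hello, my dog is cooler than you!")
      # the pipeline returns the waveform plus its sampling rate
      audio, sampling_rate = out["audio"], out["sampling_rate"]
      print(audio.shape, sampling_rate)
      ```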
  16. 16 Aug, 2023 1 commit
  17. 08 Aug, 2023 1 commit
  18. 28 Jul, 2023 1 commit
  19. 18 Jul, 2023 2 commits
  20. 17 Jul, 2023 1 commit
  21. 13 Jul, 2023 1 commit
  22. 26 Jun, 2023 1 commit
  23. 23 Jun, 2023 1 commit
  24. 22 Jun, 2023 1 commit
  25. 21 Jun, 2023 1 commit
    • add word-level timestamps to Whisper (#23205) · cd927a47
      Matthijs Hollemans authored
      * let's go!
      
      * initial implementation of token-level timestamps
      
      * only return a single timestamp per token
      
      * remove token probabilities
      
      * fix return type
      
      * fix doc comment
      
      * strip special tokens
      
      * rename
      
      * revert to not stripping special tokens
      
      * only support models that have alignment_heads
      
      * add integration test
      
      * consistently name it token-level timestamps
      
      * small DTW tweak
      
      * initial support for ASR pipeline
      
      * fix pipeline doc comments
      
      * resolve token timestamps in pipeline with chunking
      
      * change warning when no final timestamp is found
      
      * return word-level timestamps
      
      * fixup
      
      * fix bug that skipped final word in each chunk
      
      * fix failing unit tests
      
      * merge punctuations into the words
      
      * also return word tokens
      
      * also return token indices
      
      * add (failing) unit test for combine_tokens_into_words
      
      * make combine_tokens_into_words private
      
      * restore OpenAI's punctuation rules
      
      * add pipeline tests
      
      * make requested changes
      
      * PR review changes
      
      * fix failing pipeline test
      
      * small stuff from PR
      
      * only return words and their timestamps, not segments
      
      * move alignment_heads into generation config
      
      * forgot to set alignment_heads in pipeline tests
      
      * tiny comment fix
      
      * grr
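      At the model level the feature runs DTW over the cross-attention weights of the `alignment_heads` stored in the generation config; a hedged sketch of the generate-level entry point (dummy audio stands in for a real clip):

      ```python
      import numpy as np
      from transformers import WhisperProcessor, WhisperForConditionalGeneration

      processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
      model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

      # 1 second of silence stands in for real 16 kHz audio
      audio = np.zeros(16_000, dtype=np.float32)
      inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

      # return_token_timestamps triggers the DTW alignment over the
      # cross-attention heads listed in generation_config.alignment_heads
      out = model.generate(inputs.input_features, return_token_timestamps=True)
      print(out["token_timestamps"])  # one timestamp per decoded token
      ```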
  26. 20 Jun, 2023 1 commit
  27. 14 Jun, 2023 1 commit
  28. 09 Jun, 2023 1 commit
  29. 22 May, 2023 1 commit
  30. 18 May, 2023 1 commit
  31. 04 May, 2023 1 commit
  32. 24 Apr, 2023 1 commit
  33. 21 Apr, 2023 1 commit
  34. 20 Apr, 2023 1 commit