"tests/models/mobilebert/test_tokenization_mobilebert.py" did not exist on "48a8f3daa1ff882f3ece8c2748411cb448538ca2"
  1. 03 Oct, 2023 2 commits
  2. 02 Oct, 2023 3 commits
    • Code-llama-nit (#26300) · bab33319
      Arthur authored
      * fix encoding when the fill token is None
      
      * add tests and edge cases
      
      * fixup
      
      * Update tests/models/code_llama/test_tokenization_code_llama.py
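      For context, a minimal sketch of the behaviour this fix covers (the checkpoint name and prompts are illustrative, not taken from the PR):

      ```python
      from transformers import CodeLlamaTokenizer

      tok = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

      # With the default fill_token ("<FILL_ME>"), infilling prompts are split
      # into a prefix/suffix pair around the <PRE>/<SUF>/<MID> special tokens.
      ids = tok('def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result')["input_ids"]

      # The fix makes encoding well-defined when infilling is disabled:
      tok_no_fill = CodeLlamaTokenizer.from_pretrained(
          "codellama/CodeLlama-7b-hf", fill_token=None
      )
      ids = tok_no_fill("def add(a, b):\n    return a + b")["input_ids"]
      ```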
    • Fix model integration ci (#26322) · 63864e05
      Arthur authored
      * fix wav2vec2
      
      * nit
      
      * stash
      
      * one more file to update
      
      * fix byt5
      
      * vocab size is 256, don't change that!
      
      * use other revision
      
      * test persimmon in smaller size
      
      * style
      
      * tests
      
      * nits
      
      * update add tokens from pretrained
      
      * test tokenization
      
      * nits
      
      * potential fnet fix?
      
      * more nits
      
      * nits
      
      * correct test
      
      * assert close
      
      * update
      
      * ouch
      
      * fix it
      
      * some more nits
      
      * FINALLY
      
      * use `adept` checkpoints
      
      * more adept checkpoints
      
      * that was involved!
    • Revert falcon exception (#26472) · 67239f73
      Lysandre Debut authored
      * Revert "Falcon: fix revision propagation (#26006)"
      
      This reverts commit 118c676ef3124423e5d062b665f05cde55bc9a90.
      
      * Revert "Put Falcon back (#25960)"
      
      This reverts commit 22a69f1d.
  3. 29 Sep, 2023 1 commit
  4. 28 Sep, 2023 1 commit
  5. 27 Sep, 2023 2 commits
  6. 26 Sep, 2023 2 commits
    • Add Nougat (#25942) · ace74d16
      NielsRogge authored
      
      
      * Add conversion script
      
      * Add NougatImageProcessor
      
      * Add crop margin
      
      * More improvements
      
      * Add docs, READMEs
      
      * Remove print statements
      
      * Include model_max_length
      
      * Add NougatTokenizerFast
      
      * Fix imports
      
      * Improve postprocessing
      
      * Improve image processor
      
      * Fix image processor
      
      * Improve normalize method
      
      * More improvements
      
      * More improvements
      
      * Add processor, improve docs
      
      * Simplify fast tokenizer
      
      * Remove test file
      
      * Fix docstrings
      
      * Use NougatProcessor in conversion script
      
      * Add is_levenshtein_available
      
      * Add tokenizer tests
      
      * More improvements
      
      * Use numpy instead of opencv
      
      * Add is_cv2_available
      
      * Fix cv2_available
      
      * Add is_nltk_available
      
      * Add image processor tests, improve crop_margin
      
      * Add integration tests
      
      * Improve integration test
      
      * Use do_rescale instead of hacks, thanks Amy
      
      * Remove random_padding
      
      * Address comments
      
      * Address more comments
      
      * Add import
      
      * Address more comments
      
      * Address more comments
      
      * Address comment
      
      * Address comment
      
      * Set max_model_input_sizes
      
      * Add tests
      
      * Add requires_backends
      
      * Add Nougat to exotic tests
      
      * Use to_pil_image
      
      * Address comment regarding nltk
      
      * Add NLTK
      
      * Improve variable names, integration test
      
      * Add test
      
      * refactor, document, and test regexes
      
      * remove named capture groups, add comments
      
      * format
      
      * add non-markdown fixed tokenization
      
      * format
      
      * correct flakiness of args parse
      
      * add regex comments
      
      * test functionalities for crop_image, align long axis and expected output
      
      * add regex tests
      
      * remove cv2 dependency
      
      * test crop_margin equality between cv2 and python
      
      * refactor table regexes to markdown
      
      add newline
      
      * change print to log, improve doc
      
      * fix high count tables correction
      
      * address PR comments: naming, linting, asserts
      
      * Address comments
      
      * Add copied from
      
      * Update conversion script
      
      * Update conversion script to convert both small and base versions
      
      * Add inference example
      
      * Add more info
      
      * Fix style
      
      * Add require annotators to test
      
      * Define all keyword arguments explicitly
      
      * Move cv2 annotator
      
      * Add tokenizer init method
      
      * Transfer checkpoints
      
      * Add reference to Donut
      
      * Address comments
      
      * Skip test
      
      * Remove cv2 method
      
      * Add copied from statements
      
      * Use cached_property
      
      * Fix docstring
      
      * Add file to not doctested
      
      ---------
      Co-authored-by: Pablo Montalvo <pablo.montalvo.leroux@gmail.com>
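      The inference example added in this PR follows roughly this pattern (a sketch assuming the facebook/nougat-base checkpoint; the file path is hypothetical):

      ```python
      from PIL import Image
      from transformers import NougatProcessor, VisionEncoderDecoderModel

      processor = NougatProcessor.from_pretrained("facebook/nougat-base")
      model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

      image = Image.open("page.png")  # a scanned document page
      pixel_values = processor(image, return_tensors="pt").pixel_values

      outputs = model.generate(
          pixel_values,
          min_length=1,
          max_new_tokens=512,
          bad_words_ids=[[processor.tokenizer.unk_token_id]],
      )

      sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
      # post_process_generation applies the markdown/table fix-ups from this PR
      sequence = processor.post_process_generation(sequence, fix_markdown=False)
      print(sequence)
      ```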
  7. 25 Sep, 2023 1 commit
  8. 22 Sep, 2023 1 commit
  9. 21 Sep, 2023 2 commits
  10. 20 Sep, 2023 2 commits
  11. 19 Sep, 2023 1 commit
    • Add ViTMatte (#25843) · 7d6354e0
      NielsRogge authored
      * First draft
      
      * Simplify image processor
      
      * Fix rebase
      
      * Address comments
      
      * Address more comments
      
      * Address more comments
      
      * Address more comments
      
      * Address more comments
      
      * Improve pad_image
      
      * Add tests
      
      * Update integration test
      
      * Fix image processor tests
      
      * Fix model tests
      
      * Convert checkpoints
      
      * Fix doc tests
      
      * Remove file
      
      * Apply suggestions
      
      * Address comments
      
      * Fix typing hint
      
      * Add batch_norm_eps
      
      * Address comments
      
      * Fix style
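      Basic usage looks roughly like this (a sketch; the checkpoint is one of those converted in this PR, and the file paths are hypothetical):

      ```python
      import torch
      from PIL import Image
      from transformers import VitMatteImageProcessor, VitMatteForImageMatting

      processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")
      model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

      image = Image.open("image.png").convert("RGB")
      trimap = Image.open("trimap.png").convert("L")  # known fg/bg/unknown regions

      # The image processor pads image and trimap (see pad_image above) and
      # stacks them into the model's expected input.
      inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
      with torch.no_grad():
          alphas = model(**inputs).alphas  # predicted alpha matte
      ```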
  12. 18 Sep, 2023 4 commits
    • [AutoBackbone] Add test (#26094) · de8bec6d
      NielsRogge authored
      * Add test
      
      * Add config_class
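      The API under test, roughly (microsoft/resnet-50 is an illustrative checkpoint):

      ```python
      import torch
      from transformers import AutoBackbone

      # out_features selects which stages are returned as feature maps
      backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_features=["stage2", "stage4"])

      pixel_values = torch.randn(1, 3, 224, 224)
      outputs = backbone(pixel_values)
      feature_maps = outputs.feature_maps  # one tensor per requested stage
      ```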
    • 🚨🚨 🚨🚨 [`Tokenizer`] attempt to fix add_token issues 🚨🚨 🚨🚨 (#23909) · 2da88537
      Arthur authored
      
      
      * fix test for bart. Order is correct now let's skip BPEs
      
      * phew
      
      * styling
      
      * fix bert....
      
      * slow refactoring
      
      * current updates
      
      * massive refactoring
      
      * update
      
      * NICE!
      
      * update to see where I am at
      
      * updates
      
      * update
      
      * update
      
      * revert
      
      * updates
      
      * updates
      
      * start supporting legacy_save
      
      * styling
      
      * big update
      
      * revert some changes
      
      * nits
      
      * nniiiiiice
      
      * small fixes
      
      * kinda fix t5 with new behaviour
      
      * major update
      
      * fixup
      
      * fix copies
      
      * today's updates
      
      * fix byt5
      
      * update
      
      * update
      
      * update
      
      * updates
      
      * update vocab size test
      
      * Barthez does not need the fairseq offset ids
      
      * super call must be after
      
      * call super
      
      * move all super init
      
      * move other super init
      
      * fixup
      
      * nits
      
      * more fixes
      
      * nits
      
      * more fixes
      
      * nits
      
      * more fix
      
      * remove useless files
      
      * ouch all of them are affected
      
      * and more!
      
      * small improvements
      
      * no more sanitize token
      
      * more changes around unique no split tokens
      
      * partially fix more things
      
      * keep legacy save but add warning
      
      * so... more fixes
      
      * updates
      
      * guess deberta tokenizer could be nuked
      
      * fixup
      
      * fixup did some bad things
      
      * nuke it if it breaks
      
      * remove prints and pretrain fast from slow with new format.
      
      * fixups
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * phew
      
      * nit
      
      * by default specials should not be normalized?
      
      * update
      
      * remove breakpoint
      
      * updates
      
      * a lot of updates
      
      * fixup
      
      * fixes revert some changes to match fast
      
      * small nits
      
      * that makes it cleaner
      
      * fix camembert accordingly
      
      * update
      
      * some less breaking changes
      
      * update
      
      * fixup
      
      * fix byt5 and whisper mostly
      
      * some more fixes, canine's byte vocab
      
      * fix gpt2
      
      * fix most of the perceiver tests (4 left)
      
      * fix layout lmv3
      
      * fixup
      
      * fix copies for gpt2 style
      
      * make sure to only warn once
      
      * fix perceiver and gpt2 tests
      
      * some more backward compatibility: also read special tokens map because some people use it
      
      * fixup
      
      * add else when reading
      
      * nits
      
      * fresh updates
      
      * fix copies
      
      * will this make everything faster?
      
      * fixes
      
      * more fixes
      
      * update
      
      * more fixes
      
      * fixup
      
      * is the source of truth right?
      
      * sorry camembert for the troubles
      
      * current updates
      
      * fixup
      
      * update led
      
      * update
      
      * fix regression
      
      * fix single word
      
      * more model specific fixes
      
      * fix t5 tests
      
      * fixup
      
      * more comments
      
      * update
      
      * fix nllb
      
      * rstrip removed
      
      * small fixes
      
      * better handle additional_special_tokens and vocab sizes
      
      * fixing
      
      * styling
      
      * fix 4 / 21
      
      * fixup
      
      * fix nllb's tests
      
      * some fixes
      
      * fix t5
      
      * fixes
      
      * style
      
      * fix canine tests
      
      * damn this is nice
      
      * nits
      
      * m2m100 nit
      
      * fixups
      
      * fixes!
      
      * fixup
      
      * stash
      
      * fix merge
      
      * revert bad change
      
      * fixup
      
      * correct order for code Llama
      
      * fix speecht5 post merge
      
      * styling
      
      * revert source of 11 fails
      
      * small nits
      
      * all changes in one go
      
      * fnet hack
      
      * fix 2 more tests
      
      * update based on main branch of tokenizers
      
      * fixup
      
      * fix VITS issues
      
      * more fixes
      
      * fix mgp test
      
      * fix camembert issues
      
      * oops camembert still has 2 failing tests
      
      * mluke fixes
      
      * decode fixes
      
      * small nits
      
      * nits
      
      * fix llama and vits
      
      * fix camembert
      
      * small nits
      
      * more fixes when initialising a fast from a slow, etc.
      
      * fix one of the last tests
      
      * fix CPM tokenizer test
      
      * fixups
      
      * fix pop2piano
      
      * fixup
      
      * ⚠️ Change tokenizers required version ⚠️
      
      * ⚠️ Change tokenizers required version ⚠️
      
      * "tokenizers>=0.14,<0.15", don't forget smaller than
      
      * fix musicgen tests and PreTrainedTokenizerFast
      
      * fix owlvit and all
      
      * update t5
      
      * fix 800 red
      
      * fix tests
      
      * fix the fix of the fix of t5
      
      * styling
      
      * documentation nits
      
      * cache _added_tokens_encoder
      
      * fixups
      
      * Nit
      
      * fix red tests
      
      * one last nit!
      
      * make everything a lot simpler
      
      * Now it's over 😉
      
      
      
      * few small nits
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * updates that work for now
      
      * tests that should not be skipped / changed and fixed next
      
      * fixup
      
      * I am ashamed
      
      * push the fix
      
      * update
      
      * fixups
      
      * nits
      
      * fix added_tokens_encoder
      
      * fix canine test
      
      * fix pegasus vocab
      
      * fix transfoXL
      
      * fixup
      
      * whisper needs to be fixed for train new
      
      * pegasus nits
      
      * more pegasus fixes
      
      * minor update
      
      * better error message in failed test
      
      * fix whisper failing test
      
      * fix whisper failing test
      
      * fix pegasus
      
      * fixup
      
      * fix **** pegasus
      
      * reset things
      
      * remove another file
      
      * attempts to fix the strange custom encoder and offset
      
      * nits here and there
      
      * update
      
      * fixup
      
      * nit
      
      * fix the whisper test
      
      * nits nits
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * updates based on review
      
      * some small update to potentially remove
      
      * nits
      
      * import lru cache
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * move warning to `from_pretrained`
      
      * update tests results now that the special tokens are always added
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
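      In practice the refactor shows up through the AddedToken flags and the new added-token bookkeeping; a minimal sketch (bert-base-uncased is illustrative):

      ```python
      from transformers import AddedToken, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("bert-base-uncased")

      # lstrip/rstrip/normalized/single_word are now honoured consistently
      # between slow and fast tokenizers (specials default to non-normalized).
      tok.add_tokens([AddedToken("<new_tok>", lstrip=False, rstrip=False, normalized=False)])

      # The added-token state is inspectable, and the encoder side is cached
      # (see "cache _added_tokens_encoder" above).
      print(tok.added_tokens_decoder)            # {id: AddedToken(...), ...}
      print(tok.encode("hello <new_tok> world"))
      ```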
    • [FSMT] Fix non-shared weights (#26187) · 77ed9fa1
      Lysandre Debut authored
      * Fix non-shared weights
      
      * Add tests
      
      * Edit tied weights keys
    • moved `ctrl` to `Salesforce/ctrl` (#26183) · bc7ce180
      Julien Chaumond authored
      
      
      * moved `ctrl` to `Salesforce/ctrl`
      
      redirects should theoretically work, but still updating those repo references for clarity
      
      * Fixup
      
      * Slow doc tests
      
      * Add modeling file
      
      ---------
      Co-authored-by: Lysandre <lysandre@huggingface.co>
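      Downstream, the change only affects checkpoint references, e.g.:

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # The bare "ctrl" name should still redirect, but the canonical
      # reference is now under the Salesforce org:
      tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
      model = AutoModelForCausalLM.from_pretrained("Salesforce/ctrl")
      ```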
  13. 15 Sep, 2023 2 commits
  14. 14 Sep, 2023 4 commits
    • IDEFICS: allow interpolation of vision's pos embeddings (#26029) · 869733ab
      Leo Tronchon authored
      
      
      * add pos embed interpolation for vision encoder
      
      * style
      
      * update config with interpolate_pos_encoding arg
      
      * fix imports formatting
      
      * take off copied from on vision embeddings
      
      * add test for image embeddings interpolation
      
      * add credit for interpolation code
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/vision.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix condition to check nbr image patches match shape of pos embeddings
      
      * use kwargs in the forward methods for interpolation
      
      * fix tests
      
      * have interpolate_pos_encoding default to False instead of None
      
      * Update tests/models/idefics/test_modeling_idefics.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/idefics/test_modeling_idefics.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/idefics/test_modeling_idefics.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * take off for loop meant to print k,v
      
      * add interpolate_pos_encoding arg in prepare_inputs_for_generation
      
      * add test for interpolated generation
      
      * fix edge case num_patches == num_positions and height == width
      
      * add test for edge case
      
      * fix pos_embed in interpolate
      
      * allow interpolation in bf16 with upcasting
      
      * Update src/transformers/models/idefics/vision.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/idefics/vision.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * add multiple images tests for interpolation and generation
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
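      A sketch of the new kwarg (the checkpoint, image URL, and prompt handling are assumptions; IDEFICS prompts mix text strings and images):

      ```python
      import torch
      from transformers import AutoProcessor, IdeficsForVisionText2Text

      checkpoint = "HuggingFaceM4/idefics-9b"
      processor = AutoProcessor.from_pretrained(checkpoint)
      model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

      prompts = [["User: what is in this image?", "https://images.cocodataset.org/val2017/000000039769.jpg"]]
      inputs = processor(prompts, return_tensors="pt")

      # interpolate_pos_encoding=True resizes the vision position embeddings
      # so images larger than the training resolution can be used.
      generated = model.generate(**inputs, interpolate_pos_encoding=True, max_new_tokens=20)
      print(processor.batch_decode(generated, skip_special_tokens=True))
      ```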
    • Add BROS (#23190) · 17fdd354
      Jinho Park authored
      
      
      * add Bros boilerplate
      
      * copy and pasted modeling_bros.py from official Bros repo
      
      * update copyright of bros files
      
      * copy tokenization_bros.py from official repo and update import path
      
      * copy tokenization_bros_fast.py from official repo and update import path
      
      * copy configuration_bros.py from official repo and update import path
      
      * remove trailing period in copyright line
      
      * copy and paste bros/__init__.py from official repo
      
      * save formatting
      
      * remove unused and unnecessary pe_type argument - using only crel type
      
      * resolve import issue
      
      * remove unused model classes
      
      * remove unnecessary tests
      
      * remove unused classes
      
      * fix original code's bug - layer_module's argument order
      
      * clean up modeling auto
      
      * add bbox to prepare_config_and_inputs
      
      * set temporary value to hidden_size (32 is too low because of the Bros'
      positional embedding)
      
      * remove decoder test, update create_and_check* input arguments
      
      * add missing variable to model tests
      
      * do make fixup
      
      * update bros.mdx
      
      * add boilerplate for no_head inference test
      
      * update BROS_PRETRAINED_MODEL_ARCHIVE_LIST (add naver-clova-ocr prefix)
      
      * add prepare_bros_batch_inputs function
      
      * update modeling_common to add bbox inputs in Bros Model Test
      
      * remove unnecessary model inference
      
      * add test case
      
      * add model_doc
      
      * add test case for token_classification
      
      * apply fixup
      
      * update modeling code
      
      * update BrosForTokenClassification loss calculation logic
      
      * revert logits preprocessing logic to make sure logits have original shape
      
      * - update class name
      
      * - add BrosSpadeOutput
      - update BrosConfig arguments
      
      * add boilerplate for no_head inference test
      
      * add prepare_bros_batch_inputs function
      
      * add test case
      
      * add test case for token_classification
      
      * update modeling code
      
      * update BrosForTokenClassification loss calculation logic
      
      * revert logits preprocessing logic to make sure logits have original shape
      
      * apply masking on the fly
      
      * add BrosSpadeForTokenLinking
      
      * update class name
      put docstring to the beginning of the file
      
      * separate the logits calculation logic and loss calculation logic
      
      * update logic for loss calculation so that logits shape doesn't change
      when returned
      
      * update typo
      
      * update prepare_config_and_inputs
      
      * update dummy node initialization
      
      * update last_hidden_states getting logic to consider when return_dict is False
      
      * update box first token mask param
      
      * bugfix: remove random attention mask generation
      
      * update keys to ignore on load missing
      
      * run make style and quality
      
      * apply make style and quality of other codes
      
      * update box_first_token_mask to bool type
      
      * update index.md
      
      * apply make style and quality
      
      * apply make fix-copies
      
      * pass check_repo
      
      * update bros model doc
      
      * docstring bugfix
      
      * add checkpoint for doc, tokenizer for doc
      
      * Update README.md
      
      * Update docs/source/en/model_doc/bros.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update bros.md
      
      * Update src/transformers/__init__.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/bros.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * apply suggestions from code review
      
      * apply suggestions from code review
      
      * revert test_processor_markuplm.py
      
      * Update test_processor_markuplm.py
      
      * apply suggestions from code review
      
      * apply suggestions from code review
      
      * apply suggestions from code review
      
      * update BrosSpadeELForTokenClassification head name to entity linker
      
      * add doc string for config params
      
      * update class, var names to more explicit and apply suggestions from code review
      
      * remove unnecessary keys to ignore
      
      * update relation extractor to be initialized with config
      
      * add bros processor
      
      * apply make style and quality
      
      * update bros.md
      
      * remove bros tokenizer, add bros processor that wraps bert tokenizer
      
      * revert change
      
      * apply make fix-copies
      
      * update processor code, update itc -> initial token, stc -> subsequent token
      
      * add type hint
      
      * remove unnecessary condition branches in embedding forward
      
      * fix auto tokenizer fail
      
      * update docstring for each classes
      
      * update bbox input dimension as standard 2 points and convert them to 4
      points in forward pass
      
      * update bros docs
      
      * apply suggestions from code review : update Bros -> BROS in bros.md
      
      * 1. box prefix var -> bbox
      2. update variable names to be more explicit
      
      * replace einsum with torch matmul
      
      * apply style and quality
      
      * remove unused argument
      
      * remove unused arguments
      
      * update docstrings
      
      * apply suggestions from code review: add BrosBboxEmbeddings, replace
      einsum with classical matrix operations
      
      * revert einsum update
      
      * update bros processor
      
      * apply suggestions from code review
      
      * add conversion script for bros
      
      * Apply suggestions from code review
      
      * fix readme
      
      * apply fix-copies
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
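      A minimal sketch of the resulting input format (the checkpoint name is an assumption; per this PR, BROS takes token ids plus one normalized (x0, y0, x1, y1) box per token, expanded to 4 points inside the forward pass):

      ```python
      import torch
      from transformers import BrosModel, BrosProcessor

      processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
      model = BrosModel.from_pretrained("jinho8345/bros-base-uncased")

      encoding = processor("His name is Rocco.", return_tensors="pt")
      # Random boxes keep the sketch self-contained; in practice these come
      # from an OCR engine.
      encoding["bbox"] = torch.rand(1, encoding["input_ids"].shape[-1], 4)

      outputs = model(**encoding)
      last_hidden_state = outputs.last_hidden_state
      ```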
    • Overhaul Conversation class and prompt templating (#25323) · 866df66f
      Matt authored
      
      
      * First commit while I figure this out
      
      * make fixup
      
      * Remove unused method
      
      * Store prompt attrib
      
      * Fix prompt argument for tests
      
      * Make same changes in fast tokenizer
      
      * Remove global prompts from fast tokenizer too
      
      * stash commit
      
      * stash commit
      
      * Migrate PromptConfig to its True Final Location
      
      * Replace Conversation entirely with the new class
      
      * Import/dependency fixes
      
      * Import/dependency fixes
      
      * Change format for lots of default prompts
      
      * More default prompt fixups
      
      * Revert llama old methods so we can compare
      
      * Fix some default configs
      
      * Fix some default configs
      
      * Fix misspelled kwarg
      
      * Fixes for Blenderbot
      
      * make fixup
      
      * little rebase cleanup
      
      * Add basic documentation
      
      * Quick doc fix
      
      * Truncate docstring for now
      
      * Add handling for the case when messages is a single string
      
      * Quick llama merges
      
      * Update conversational pipeline and tests
      
      * Add a couple of legacy properties for backward compatibility
      
      * More legacy handling
      
      * Add docstring for build_conversation_input_ids
      
      * Restructure PromptConfig
      
      * Let's start T E M P L A T I N G
      
      * Refactor all default configs to use templates instead
      
      * Revert changes to the special token properties since we don't need them anymore
      
      * More class templates
      
      * Make the sandbox even sandier
      
      * Everything replaced with pure templating
      
      * Remove docs for PromptConfig
      
      * Add testing and optional requirement boilerplate
      
      * Fix imports and make fixup
      
      * Fix LLaMA tests and add Conversation docstring
      
      * Finally get LLaMA working with the template system
      
      * Finally get LLaMA working with the template system
      
      * make fixup
      
      * make fixup
      
      * fmt-off for the long lists of test tokens
      
      * Rename method to apply_chat_template for now
      
      * Start on documentation
      
      * Make chat_template a property that reads through to the default if it's not set
      
      * Expand docs
      
      * Expand chat templating doc some more
      
      * trim/lstrip blocks by default and update doc
      
      * Few doc tweaks
      
      * rebase cleanup
      
      * Clarify docstring
      
      * rebase cleanup
      
      * rebase cleanup
      
      * make fixup
      
      * Quick doc edit
      
      * Reformat the standard template to match ChatML
      
      * Re-add PEFT check
      
      * Update docs/source/en/chat_templating.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add apply_chat_template to the tokenizer doc
      
      * make fixup
      
      * Add doc links
      
      * Fix chat links
      
      * Fix chat links
      
      * Explain system messages in the doc
      
      * Add chat template test
      
      * Proper save-loading for chat template attribute
      
      * Add test skips for layout models
      
      * Remove _build_conversation_input_ids, add default_chat_template to code_llama
      
      * Make sure all LLaMA models are using the latest template
      
      * Remove default_system_prompt block in code_llama because it has no default prompt
      
      * Update ConversationPipeline preprocess
      
      * Add correct #Copied from links to the default_chat_templates
      
      * Remove unneeded type checking line
      
      * Add a dummy mark_processed method
      
      * Reorganize Conversation to have **deprecated_kwargs
      
      * Update chat_templating.md
      
      * Quick fix to LLAMA tests
      
      * Small doc tweaks
      
      * Add proper docstrings and "copied from" statements to all default chat templates
      
      * Merge use_default_system_prompt support for code_llama too
      
      * Improve clarity around self.chat_template
      
      * Docstring fix
      
      * Fix blenderbot default template
      
      * More doctest fix
      
      * Break out some tokenizer kwargs
      
      * Update doc to explain default templates
      
      * Quick tweaks to tokenizer args
      
      * Cleanups for tokenizer args
      
      * Add note about caching
      
      * Quick tweak to the chat-templating doc
      
      * Update the LLaMA template with error checking and correct system message embedding
      
      * make fixup
      
      * make fixup
      
      * add requires_jinja
      
      * Cleanup to expected output formatting
      
      * Add caching
      
      * Fix typo in llama default template
      
      * Update LLaMA tests
      
      * Update documentation
      
      * Improved legacy handling in the Conversation class
      
      * Update Jinja template with proper error handling
      
      * Quick bugfix
      
      * Proper exception raising
      
      * Change caching behaviour so it doesn't try to pickle an entire Jinja env
      
      * make fixup
      
      * rebase cleanup
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
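      The headline API from this PR is apply_chat_template; a minimal sketch (the checkpoint is illustrative, any chat model with a chat_template or a default one works):

      ```python
      from transformers import AutoTokenizer

      tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

      chat = [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "How do chat templates work?"},
      ]

      # Renders the conversation through the tokenizer's Jinja chat_template,
      # falling back to the model class's default_chat_template if unset.
      prompt = tok.apply_chat_template(chat, tokenize=False)
      input_ids = tok.apply_chat_template(chat, return_tensors="pt")
      ```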
    • [Whisper Tokenizer] Encode timestamps (#26054) · ac957f69
      Sanchit Gandhi authored
      * [Whisper Tokenizer] Fix tests after adding timestamps
      
      * fix s2t tokenizer tests
      
      * fix vocab test
      
      * backwards comp
      
      * fix tests
      
      * comment
      
      * style
      
      * fix last test
      
      * fix fast
      
      * make faster
      
      * move logic to decode
      
      * remove skip test
      
      * fix decode with offsets
      
      * fix special tokens
      
      * empty commit to re-trigger ci
      
      * use lru cache
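      The user-facing effect, sketched (openai/whisper-tiny is illustrative):

      ```python
      from transformers import WhisperTokenizer

      tok = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

      # Timestamp tokens such as <|0.00|> now encode to single ids and
      # round-trip through decode.
      ids = tok("<|0.00|> Hello world! <|1.24|>", add_special_tokens=False).input_ids

      tok.decode(ids)                               # timestamps stripped by default
      tok.decode(ids, decode_with_timestamps=True)  # timestamps kept
      ```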
  15. 13 Sep, 2023 1 commit
  16. 12 Sep, 2023 2 commits
  17. 09 Sep, 2023 1 commit
  18. 07 Sep, 2023 1 commit
  19. 05 Sep, 2023 6 commits
  20. 04 Sep, 2023 1 commit