1. 25 Jul, 2023 2 commits
    • [`T5`, `MT5`, `UMT5`] Add [T5, MT5, UMT5]ForSequenceClassification (#24726) · 8f36ab3e
      Sebastian Husch Lee authored
      * Initial addition of t5forsequenceclassification
      
      * Adding imports and adding tests
      
      * Formatting
      
      * Running make fix-copies
      
      * Adding mt5forseq
      
      * Formatting
      
      * run make fix-copies
      
      * Adding to docs
      
      * Add model_parallel
      
      * Fix bug
      
      * Fix
      
      * Remove TODO
      
      * Fixing tests for T5ForSequenceClassification
      
      * Undo changes to dependency_versions_table.py
      
      * Change classification head to work with T5Config directly
      
      * Change seq length to let tests pass
      
      * PR comments for formatting
      
      * Formatting
      
      * Initial addition of UMT5ForSequenceClassification
      
      * Adding to inits and formatting
      
      * run make fix-copies
      
      * Add doc for UMT5ForSeqClass
      
      * Update UMT5 config
      
      * Fix docs
      
      * Skip torch fx test for SequenceClassification
      
      * Formatting
      
      * Add skip to UMT5 tests as well
      
      * Fix umt5 tests
      
      * Running make fix-copies
      
      * PR comments
      
      * Fix for change to sentence_representation
      
      * Rename seq_len to hidden_size since that's what it is
      
      * Use base_model to follow format of the rest of the library
      
      * Update docs
      
      * Extract the decoder_input_ids changes and make one liner
      
      * Make one-liner
      8f36ab3e
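      A minimal sketch of using the sequence-classification head added above, assuming the public t5-small checkpoint and a two-label task (both illustrative choices, not taken from the PR; the freshly initialized head still needs fine-tuning):

      ```python
      import torch
      from transformers import AutoTokenizer, T5ForSequenceClassification

      tokenizer = AutoTokenizer.from_pretrained("t5-small")
      # num_labels=2 is an assumed task size; the classification head is randomly initialized
      model = T5ForSequenceClassification.from_pretrained("t5-small", num_labels=2)

      inputs = tokenizer("Transformers are great!", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits
      print(logits.argmax(dim=-1).item())
      ```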
    • [`MPT`] Add MosaicML's `MPT` model to transformers (#24629) · dcb183f4
      Arthur authored
      
      
      * draft add new model like
      
      * some cleaning of the config
      
      * nits
      
      * add nested configs
      
      * nits
      
      * update
      
      * update
      
      * added layer norms + triton kernels
      
      * consider only LPLayerNorm for now.
      
      * update
      
      * all keys match.
      
      * Update
      
      * fixing nits here and there
      
      * working forward pass.
      
      * removed einops dependency
      
      * nits
      
      * format
      
      * add alibi
      
      * byebye head mask
      
      * refactor attention
      
      * nits.
      
      * format
      
      * fix nits.
      
      * nuke and updates
      
      * nuke tokenizer test
      
      * don't reshape query with kv heads
      
      * added a bit of documentation.
      
      * remove unneeded things
      
      * nuke more stuff
      
      * nit
      
      * logits match - same generations
      
      * rm unneeded methods
      
      * 1 remaining failing CI test
      
      * nit
      
      * fix nits
      
      * fix docs
      
      * fix docs
      
      * rm tokenizer
      
      * fixup
      
      * fixup
      
      * fixup and fix tests
      
      * fixed configuration object.
      
      * use correct activation
      
      * few minor fixes
      
      * clarify docs a bit
      
      * logits match at 1e-12
      
      * skip and unskip a test
      
      * added some slow tests.
      
      * fix readme
      
      * add more details
      
      * Update docs/source/en/model_doc/mpt.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix configuration issues
      
      * more fixes in config
      
      * added more models
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove unneeded position ids
      
      * fix some  comments
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * revert suggestion
      
      * mpt alibi + added batched generation
      
      * Update src/transformers/models/mpt/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove init config
      
      * Update src/transformers/models/mpt/configuration_mpt.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix nit
      
      * add another slow test
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fits in one line
      
      * some refactor because make fixup doesn't pass
      
      * add ft notebook
      
      * update md
      
      * correct doc path
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      dcb183f4
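      A hedged sketch of loading the ported model for generation; the mosaicml/mpt-7b checkpoint name is an assumption (any MPT checkpoint on the Hub should work) and the prompt is arbitrary:

      ```python
      from transformers import AutoTokenizer, MptForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
      model = MptForCausalLM.from_pretrained("mosaicml/mpt-7b")

      inputs = tokenizer("MosaicML's MPT model is", return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```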
  2. 24 Jul, 2023 1 commit
    • Pvt model (#24720) · a03d13c8
      Rinat authored
      * pull and push updates
      
      * add docs
      
      * fix modeling
      
      * Add and run test
      
      * make copies
      
      * add task
      
      * fix tests and fix small issues
      
      * Checks on a Pull Request
      
      * fix docs
      
      * add desc pvt.md
      a03d13c8
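      A hedged sketch of image classification with the new PVT model; the Zetatech/pvt-tiny-224 checkpoint name and the blank placeholder image are illustrative assumptions:

      ```python
      import torch
      from PIL import Image
      from transformers import AutoImageProcessor, PvtForImageClassification

      processor = AutoImageProcessor.from_pretrained("Zetatech/pvt-tiny-224")
      model = PvtForImageClassification.from_pretrained("Zetatech/pvt-tiny-224")

      image = Image.new("RGB", (224, 224))  # placeholder; use a real photo in practice
      inputs = processor(images=image, return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits
      print(model.config.id2label[logits.argmax(-1).item()])
      ```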
  3. 20 Jul, 2023 1 commit
    • Deprecate unused OpenLlama architecture (#24922) · 79444f37
      Tom Aarsen authored
      * Resolve typo in check_repo.py
      
      * Specify encoding when opening modeling files
      
      * Deprecate the OpenLlama architecture
      
      * Add disclaimer pointing to Llama
      
      I'm open to different wordings here
      
      * Match the capitalisation of LLaMA
      79444f37
  4. 18 Jul, 2023 1 commit
    • Add DINOv2 (#24016) · 3ec10e6c
      NielsRogge authored
      * First draft
      
      * More improvements
      
      * Convert patch embedding layer
      
      * Convert all weights
      
      * Make conversion work
      
      * Improve conversion script
      
      * Fix style
      
      * Make all tests pass
      
      * Add image processor to auto mapping
      
      * Add swiglu ffn
      
      * Add image processor to conversion script
      
      * Fix conversion of giant model
      
      * Fix documentation
      
      * Fix style
      
      * Fix tests
      
      * Address comments
      
      * Address more comments
      
      * Remove unused arguments
      
      * Remove more arguments
      
      * Rename parameters
      
      * Include mask token
      
      * Address comments
      
      * Add docstring
      
      * Transfer checkpoints
      
      * Empty commit
      3ec10e6c
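      A hedged sketch of extracting features with the transferred checkpoints; facebook/dinov2-base and the blank placeholder image are illustrative assumptions:

      ```python
      import torch
      from PIL import Image
      from transformers import AutoImageProcessor, AutoModel

      processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
      model = AutoModel.from_pretrained("facebook/dinov2-base")

      image = Image.new("RGB", (224, 224))  # placeholder; use a real image in practice
      inputs = processor(images=image, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)
      print(outputs.last_hidden_state.shape)  # (batch, CLS token + patch tokens, hidden_size)
      ```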
  5. 17 Jul, 2023 1 commit
    • Add bark (#24086) · f42a35e6
      Yoach Lacombe authored
      
      
      * first raw version of the bark integration
      
      * working code on small models with single run
      
      * add converting script from suno weights 2 hf
      
      * many changes
      
      * correct past_kv output
      
      * working implementation for inference
      
      * update the converting script according to the architecture changes
      
      * add a working end-to-end inference code
      
      * remove some comments and make small changes
      
      * remove unnecessary comment
      
      * add docstrings and ensure no unnecessary intermediary output during audio generation
      
      * remove done TODOs
      
      * make style + add config docstrings
      
      * modification for batch inference support on the whole model
      
      * add details to .generate_audio method
      
      * add copyright
      
      * convert EncodecModel from original library to transformers implementation
      
      * add two class in order to facilitate model and sub-models loading from the hub
      
      * add support of loading the whole model
      
      * add BarkProcessor
      
      * correct modeling according to processor output
      
      * Add proper __init__ and auto support
      
      * Add up-to-date copyright/license message
      
      * add relative import instead of absolute
      
      * cleaner head_dim computation
      
      * small comment removal or changes
      
      * more verbose LayerNorm init method
      
      * specify eps for clearer comprehension
      
      * more verbose variable naming in the MLP module
      
      * remove unnecessary BarkBlock parameter
      
      * clearer code in the forward pass of the BarkBlock
      
      * remove _initialize_modules method for cleaner code
      
      * Remove unnecessary methods from sub-models
      
      * move code to remove unnecessary function
      
      * rename a variable for clarity and change an assert
      
      * move code and change variable name for clarity
      
      * remove unnecessary asserts
      
      * correct small bug
      
      * correct a comment
      
      * change variable names for clarity
      
      * remove asserts
      
      * change import from absolute to relative
      
      * correct small error due to comma missing + correct import
      
      * Add attribute Bark config
      
      * add first version of tests
      
      * update attention_map
      
      * add tie_weights and resize_token_embeddings for fineModel
      
      * correct getting attention_mask in generate_text_semantic
      
      * remove Bark inference trick
      
      * leave more choices in BarkProcessor
      
      * remove _no_split_modules
      
      * fix error in forward of block and introduce clearer notations
      
      * correct converting script with last changes
      
      * make style + add draft bark.mdx
      
      * correct BarkModelTest::test_generate_text_semantic
      
      * add Bark in main README
      
      * add dummy_pt_objects for Bark
      
      * add missing models in the main init
      
      * correct test_decoder_model_past_with_large_inputs
      
      * disable torchscript test
      
      * change docstring of BarkProcessor
      
      * Add test_processor_bark
      
      * make style
      
      * correct copyrights
      
      * add bark.mdx + make style, quality and consistency
      
      * Apply suggestions from code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Remove unnecessary test method
      
      * simply logic of a test
      
      * Only check first ids for slow audio generation
      
      * split full end-to-end generation tests
      
      * remove unnecessary comment
      
      * change submodel names for clearer naming
      
      * remove ModuleDict from modeling_bark
      
      * combine two if statements
      
      * ensure that an edge misuse won't happen
      
      * modify variable name
      
      * move code snippet to the right place (coarse instead of semantic)
      
      * change BarkSemanticModule -> BarkSemanticModel
      
      * align BarkProcessor with transformers paradigm
      
      * correct BarkProcessor tests with last commit changes
      
      * change _validate_voice_preset to an instance method instead of a class method
      
      * tie_weights already called with post_init
      
      * add codec_model config to configuration
      
      * update bark modeling tests with recent BarkProcessor changes
      
      * remove SubModelPretrainedModel + change speakers embeddings prompt type in BarkModel
      
      * change absolute imports to relative
      
      * remove TODO
      
      * change docstrings
      
      * add examples to docs and docstrings
      
      * make style
      
      * use BatchFeature in BarkProcessor instead of dict
      
      * continue improving docstrings and docs + make style
      
      * correct docstrings examples
      
      * more comprehensible speaker_embeddings load/Save
      
      * rename speaker_embeddings_dict -> speaker_embeddings
      
      * correct bark.mdx + add bark to documentation_tests
      
      * correct docstrings configuration_bark
      
      * integrate last nit suggestions
      
      * integrate BarkGeneration configs
      
      * make style
      
      * remove bark tests from documentation_tests.txt because timeout - tested manually
      
      * add proper generation config initialization
      
      * small bark.mdx documentation changes
      
      * rename bark.mdx -> bark.md
      
      * add torch.no_grad behind BarkModel.generate_audio()
      
      * replace assert by ValueError in convert_suno_to_hf.py
      
      * integrate a series of short comments from reviewer
      
      * move SemanticLogitsProcessors and remove .detach() from Bark docs and docstrings
      
      * actually remove SemanticLogitsProcessor from modeling_bark.py
      
      * BarkProcessor returns a single output instead of tuple + correct docstrings
      
      * make style + correct bug
      
      * add initializer_range to BarkConfig + correct slow modeling tests
      
      * add .clone() to history_prompt.coarse_prompt to avoid modifying input array
      
      * Making sure no extra "`" are present
      
      * remove extra characters in modeling_bark.py
      
      * Correct output if history_prompt is None
      
      * remove TODOs
      
      * remove ravel comment
      
      * completing generation_configuration_bark.py docstrings
      
      * change docstrings - number of audio codebooks instead of Encodec codebooks
      
      * change 'bias' docstrings in configuration_bark.py
      
      * format code
      
      * rename BarkModel.generate_audio -> BarkModel.generate_speech
      
      * modify AutoConfig instead of EncodecConfig in BarkConfig
      
      * correct AutoConfig wrong init
      
      * refactor BarkModel and sub-models generate_coarse, generate_fine, generate_text_semantic
      
      * remove SemanticLogitsProcessor and replace it with SuppressTokensLogitsProcessor
      
      * move nb_codebook related config arguments to BarkFineConfig
      
      * rename bark.mdx -> bark.md
      
      * correcting BarkModelConfig from_pretrained + remove keys_to_ignore
      
      * correct bark.md with correct hub path
      
      * correct code bug in bark.md
      
      * correct list tokens_to_suppress
      
      * modify Processor to load nested speaker embeddings in a safer way
      
      * correct batch sampling in BarkFineModel.generate_fine
      
      * Apply suggestions from code review
      
      Small docstrings correction and code improvements
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * give more details about num_layers in docstrings
      
      * correct indentation mistake
      
      * correct submodelconfig order of docstring variables
      
      * put audio models in alphabetical order in utils/check_repo.py
      
      * remove useless line from test_modeling_bark.py
      
      * make BarkCoarseModelTest inherit from (ModelTesterMixin, GenerationTesterMixin, unittest.TestCase) instead of BarkSemanticModelTest
      
      * make a Tester class for each sub-model instead of inheriting
      
      * add test_resize_embeddings=True for Bark sub-models
      
      * add Copied from transformers.models.gpt_neo.modeling_gpt_neo.GPTNeoSelfAttention._split_heads
      
      * remove 'Copied from Bark' comment
      
      * remove unnecessary comment
      
      * change np.min -> min in modeling_bark.py
      
      * refactored all custom layers to have Bark prefix
      
      * add attention_mask as an argument of generate_text_semantic
      
      * refactor sub-models start docstrings to have more precise config class definition
      
      * move _tied_weights_keys overriding
      
      * add docstrings to generate_xxx in modeling_bark.py
      
      * add loading whole BarkModel to convert_suno_to_hf
      
      * refactor attribute and variable names
      
      * make style convert_suno
      
      * update bark checkpoints
      
      * remove never entered if statement
      
      * move bark_modeling docstrings after BarkPretrainedModel class definition
      
      * refactor modeling_bark.py: kv -> key_values
      
      * small nits - code refactoring and removing unnecessary lines from _init_weights
      
      * nits - replace inplace method by variable assigning
      
      * remove *optional* when necessary
      
      * remove some lines in generate_speech
      
      * add default value for optional parameter
      
      * Refactor preprocess_histories_before_coarse -> preprocess_histories
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * correct usage after refactoring
      
      * refactor Bark's generate_xxx -> generate and modify docstrings and tests accordingly
      
      * update docstrings python in configuration_bark.py
      
      * add bark files in utils/documentation_test.txt
      
      * correct docstrings python snippet
      
      * add the ability to use parameters in the form of e.g coarse_temperature
      
      * add semantic_max_new_tokens in python snippet in docstrings for quicker generation
      
      * Reformat sub-models kwargs in BarkModel.generate
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * correct kwargs in BarkModel.generate
      
      * correct attention_mask kwarg in BarkModel.generate
      
      * add tests for sub-models args in BarkModel.generate and correct BarkFineModel.test_generate_fp16
      
      * enrich BarkModel.generate docstrings with a description of how to use the kwargs
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      f42a35e6
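      A hedged sketch of text-to-speech with the final generate() API described above; the suno/bark-small checkpoint and the v2/en_speaker_6 voice preset are assumed names:

      ```python
      from transformers import AutoProcessor, BarkModel

      processor = AutoProcessor.from_pretrained("suno/bark-small")
      model = BarkModel.from_pretrained("suno/bark-small")

      inputs = processor("Hello, my dog is cute.", voice_preset="v2/en_speaker_6")
      audio = model.generate(**inputs)  # sub-model kwargs such as coarse_temperature can also be passed
      waveform = audio.cpu().numpy().squeeze()
      sample_rate = model.generation_config.sample_rate
      ```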
  6. 13 Jul, 2023 1 commit
  7. 11 Jul, 2023 2 commits
  8. 10 Jul, 2023 1 commit
  9. 03 Jul, 2023 1 commit
    • [`Umt5`] Add google's umt5 to `transformers` (#24477) · 799df10a
      Arthur authored
      
      
      * add tokenization template
      
      * update conversion script
      
      * update modeling code
      
      * update
      
      * update convert checkpoint
      
      * update modeling
      
      * revert changes on convert script
      
      * new conversion script for new format
      
      * correct position bias
      
      * cleaning a bit
      
      * Credit co authors
      Co-authored-by: agemagician <ahmed.elnaggar@tum.de>
      
      Co-authored-by: stefan-it <>
      
      * styling
      
      * Add doc
      
      * fix copies
      
      * add co author
      
      * Other Author
      
      * Merge branch 'main' of https://github.com/huggingface/transformers into add-umt5
      
      * add testing
      
      * nit
      
      * Update docs/source/en/model_doc/umt5.mdx
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * fix t5
      
      * actual fix?
      
      * revert wrong changes
      
      * remove
      
      * update test
      
      * more fixes
      
      * revert some changes
      
      * add SPIECE_UNDERLINE
      
      * add a common example
      
      * update
      
      * fix copies
      
      * revert changes on t5 conversion script
      
      * revert bytefallback changes since there was no addition yet
      
      * fixup
      
      * fixup
      
      * ignore umt5 custom testing folder
      
      * fix readmes
      
      * revert T5 changes
      
      * same outputs
      
      * fixup
      
      * update example
      
      * Apply suggestions from code review
      
      * style
      
      * draft addition of all new files
      
      * current update
      
      * fix attention and stuff
      
      * finish refactoring
      
      * auto config
      
      * fixup
      
      * more nits
      
      * add umt5 to init
      
      * use md format
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * revert changes on mt5
      
      * revert mt5 changes
      
      * update test
      
      * more fixes
      
      * add to mapping
      
      * fix-copies
      
      * fix copies
      
      * fix retain grad
      
      * fix some tests
      
      * nits
      
      * done
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/umt5.md
      
      * Update src/transformers/models/umt5/__init__.py
      
      * Update docs/source/en/model_doc/umt5.md
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      
      * update conversion script + use google checkpoints
      
      * nits
      
      * update test and modelling
      
      * stash slow convert
      
      * update fixup
      
      * don't change slow
      
      ---------
      
      Co-authored-by: stefan-it <>
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      799df10a
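      A hedged sketch of running one of the converted Google checkpoints; google/umt5-small is an assumed checkpoint name and the sentinel-token prompt is illustrative:

      ```python
      from transformers import AutoTokenizer, UMT5ForConditionalGeneration

      tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
      model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

      inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```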
  10. 29 Jun, 2023 1 commit
    • Add Musicgen (#24109) · 1c1c9075
      Sanchit Gandhi authored
      
      
      * Add Audiocraft
      
      * add cross attention
      
      * style
      
      * add for lm
      
      * convert and verify
      
      * introduce t5
      
      * split configs
      
      * load t5 + lm
      
      * clean conversion
      
      * copy from t5
      
      * style
      
      * start pattern provider
      
      * make generation work
      
      * style
      
      * fix pos embs
      
      * propagate shape changes
      
      * propagate shape changes
      
      * style
      
      * delay pattern: pad tokens at end
      
      * audiocraft -> musicgen
      
      * fix inits
      
      * add mdx
      
      * style
      
      * fix pad token in processor
      
      * override generate and add todos
      
      * add init to test
      
      * undo pattern delay mask after gen
      
      * remove cfg logits processor
      
      * remove cfg logits processor
      
      * remove logits processor in favour of mask
      
      * clean pos embs
      
      * make fix copies
      
      * update readmes
      
      * clean pos emb
      
      * refactor encoder/decoder
      
      * make fix copies
      
      * update conversion
      
      * fix config imports
      
      * update config docs
      
      * make style
      
      * send pattern mask to device
      
      * pattern mask with delay
      
      * recover prompted audio tokens
      
      * fix docstrings
      
      * laydown test file
      
      * pattern edge case
      
      * remove t5 ref
      
      * add processing class
      
      * config refactor
      
      * better pattern comment
      
      * check if mask is not present
      
      * check if mask is not present
      
      * refactor to auto class
      
      * remove encoder configs
      
      * fix processor
      
      * processor import
      
      * start updating conversion
      
      * start updating tests
      
      * make style
      
      * convert t5, encodec, lm
      
      * convert as composite
      
      * also convert processor
      
      * run generate
      
      * classifier free gen
      
      * comments and clean up
      
      * make style
      
      * docs for logit proc
      
      * docstring for uncond gen
      
      * start lm tests
      
      * work tests
      
      * let the lm generate
      
      * refactor: reshape inside forward
      
      * undo greedy loop changes
      
      * from_enc_dec -> from_sub_model
      
      * fix input id shapes in docstrings
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * undo generate changes
      
      * from sub model config
      
      * Update src/transformers/models/musicgen/modeling_musicgen.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make generate work again
      
      * generate uncond -> get uncond inputs
      
      * remove prefix allowed tokens fn
      
      * better error message
      
      * logit proc checks
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * make decoder only tests work
      
      * composite fast tests
      
      * make style
      
      * uncond generation
      
      * feat extr padding
      
      * make audio prompt work
      
      * fix inputs docstrings
      
      * unconditional inputs: dict -> model output
      
      * clean up tests
      
      * more clean up tests
      
      * make style
      
      * t5 encoder -> auto text encoder
      
      * remove comments
      
      * deal with frames
      
      * fix auto text
      
      * slow tests
      
      * nice mdx
      
      * remove can generate
      
      * todo - hub id
      
      * convert m/l
      
      * make fix copies
      
      * only import generation with torch
      
      * ignore decoder from tests
      
      * don't wrap uncond inputs
      
      * make style
      
      * cleaner uncond inputs
      
      * add example to musicgen forward
      
      * fix docs
      
      * ignore MusicGen Model/ForConditionalGeneration in auto mapping
      
      * add doc section to toctree
      
      * add to doc tests
      
      * add processor tests
      
      * fix push to hub in conversion
      
      * tips for decoder only loading
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix conversion for s / m / l checkpoints
      
      * import stopping criteria from module
      
      * remove from pipeline tests
      
      * fix uncond docstring
      
      * decode audio method
      
      * fix docs
      
      * org: sanchit-gandhi -> facebook
      
      * fix max pos embeddings
      
      * remove auto doc (not compatible with shapes)
      
      * bump max pos emb
      
      * make style
      
      * fix doc
      
      * fix config doc
      
      * fix config doc
      
      * ignore musicgen config from docstring
      
      * make style
      
      * fix config
      
      * fix config for doctest
      
      * consistent from_sub_models
      
      * don't automap decoder
      
      * fix mdx save audio file
      
      * fix mdx save audio file
      
      * processor batch decode for audio
      
      * remove keys to ignore
      
      * update doc md
      
      * update generation config
      
      * allow changes for default generation config
      
      * update tests
      
      * make style
      
      * fix docstring for uncond
      
      * fix processor test
      
      * fix processor test
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      1c1c9075
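      A hedged sketch of text-conditional generation with the composite model; facebook/musicgen-small and the prompt are illustrative assumptions:

      ```python
      from transformers import AutoProcessor, MusicgenForConditionalGeneration

      processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
      model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

      inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
      audio_values = model.generate(**inputs, max_new_tokens=256)  # roughly five seconds of audio
      sampling_rate = model.config.audio_encoder.sampling_rate
      ```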
  11. 27 Jun, 2023 1 commit
  12. 26 Jun, 2023 1 commit
  13. 23 Jun, 2023 1 commit
  14. 21 Jun, 2023 2 commits
  15. 14 Jun, 2023 1 commit
    • [WIP] add EnCodec model (#23655) · 0c3fdccf
      Matthijs Hollemans authored
      
      
      * boilerplate stuff
      
      * messing around with the feature extractor
      
      * fix feature extractor
      
      * unit tests for feature extractor
      
      * rename speech to audio
      
      * quick-and-dirty import of Meta's code
      
      * import weights (sort of)
      
      * cleaning up
      
      * more cleaning up
      
      * move encoder/decoder args into config
      
      * cleanup model
      
      * rename EnCodec -> Encodec
      
      * RVQ parameters in config
      
      * add slow test
      
      * add lstm init and test_init
      
      * Add save & load
      
      * finish EncodecModel
      
      * remove decoder_input_values as they are not used anywhere (not removed from doc yet)
      
      * fix test feature extraction model name
      
      * Add better slow test
      
      * Fix tests
      
      * some fixup and cleaning
      
      * Improve further
      
      * cleaning up quantizer
      
      * fix up conversion script
      
      * tests don't pass, _encode_frame does not work
      
      * update tests with output per encode and decode
      
      * more cleanup
      
      * rename _codebook
      
      * remove old config cruft
      
      * ratios & hop_length
      
      * use ModuleList instead of Sequential
      
      * clean up resnet block
      
      * update types
      
      * update tests
      
      * fixup
      
      * quick cleanup
      
      * fix padding
      
      * more styling
      
      * add patrick feedback
      
      * fix copies
      
      * fixup
      
      * fix lstm
      
      * fix shape issues
      
      * fixup
      
      * rename conv layers
      
      * fixup
      
      * fix decoding
      
      * small conv refactoring
      
      * remove norm_params
      
      * simplify conv layers
      
      * rename conv layers
      
      * stuff
      
      * Clean up
      
      * Add padding logic
      
      use padding mask
      
      small conv refactoring
      
      remove norm_params
      
      simplify conv layers
      
      rename conv layers
      
      stuff
      
      add batched test
      
      update
      
      Clean up
      
      merge and update for padding
      
      fix padding
      
      fixup
      
      * clean up more
      
      * clean up more
      
      * More clean ups
      
      * cleanup convolutions
      
      * typo
      
      * fix typos
      
      * fixup
      
      * build PR doc?
      
      * start refactoring docstring
      
      * fix: don't pad when no stride and chunk
      
      * update docstring
      
      * update docstring
      
      * nits
      
      * update going to lunch
      
      * update config and model
      
      * fix broken tests (because of the config changes)
      
      * fix scale computation
      
      * fixup
      
      * only return dict if specified or if config returns it
      
      * remove todos
      
      * update defaults in config
      
      * update conversion script
      
      * fix doctest
      
      * more docstring + fixup
      
      * nits on batched_tests
      
      * more nits
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update based on review
      
      * fix update
      
      * update tests
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fixup
      
      * add overlap and chunk_length_s
      
      * cleanup feature extraction
      
      * test edge cases truncation and padding
      
      * correct processor values
      
      * update config encodec, nits
      
      * fix tests
      
      * fixup
      
      * fix 24Hz test
      
      * all tests are green
      
      * fix fixup
      
      * Apply suggestions from code review
      
      * revert readme changes
      
      * fixup
      
      * add example
      
      * use facebook checkpoints
      
      * fix typo
      
      * no pipeline tests
      
      * use self.pad everywhere we can
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update based on review
      
      * update
      
      * update mdx
      
      * fix bug and tests
      
      * fixup
      
      * fix doctest
      
      * remove comment
      
      * more nits
      
      * add more coverage for `test_truncation_and_padding`
      
      * fixup
      
      * add last test
      
      * fix text
      
      * nits
      
      * Update tests/models/encodec/test_modeling_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * take care of the last comments
      
      * typo
      
      * fix test
      
      * nits
      
      * fixup
      
      * Update src/transformers/models/encodec/feature_extraction_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: arthur.zucker@gmail.com <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      0c3fdccf
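      A hedged sketch of round-tripping audio through the codec; facebook/encodec_24khz is the 24 kHz checkpoint and the silent placeholder waveform is illustrative:

      ```python
      import numpy as np
      import torch
      from transformers import AutoProcessor, EncodecModel

      processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
      model = EncodecModel.from_pretrained("facebook/encodec_24khz")

      audio = np.zeros(24_000, dtype=np.float32)  # one second of silence as a placeholder waveform
      inputs = processor(raw_audio=audio, sampling_rate=24_000, return_tensors="pt")
      with torch.no_grad():
          outputs = model(inputs["input_values"], inputs["padding_mask"])
      print(outputs.audio_codes.shape, outputs.audio_values.shape)
      ```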
  16. 06 Jun, 2023 1 commit
    • Add TimmBackbone model (#22619) · a717e031
      amyeroberts authored
      
      
      * Add test_backbone for convnext
      
      * Add TimmBackbone model
      
      * Add check for backbone type
      
      * Tidying up - config checks
      
      * Update convnextv2
      
      * Tidy up
      
      * Fix indices & clearer comment
      
      * Exceptions for config checks
      
      * Correctly update config for tests
      
      * Safer imports
      
      * Safer safer imports
      
      * Fix where decorators go
      
      * Update import logic and backbone tests
      
      * More import fixes
      
      * Fixup
      
      * Only import all_models if torch available
      
      * Fix kwarg updates in from_pretrained & main rebase
      
      * Tidy up
      
      * Add tests for AutoBackbone
      
      * Tidy up
      
      * Fix import error
      
      * Fix up
      
      * Install natten in doc_test_job
      
      * Revert back to setting self._out_xxx directly
      
      * Bug fix - out_indices mapping from out_features
      
      * Fix tests
      
      * Dont accept output_loading_info for Timm models
      
      * Set out_xxx and don't remap
      
      * Use smaller checkpoint for test
      
      * Don't remap timm indices - check out_indices based on stage names
      
      * Skip test as it's n/a
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Cleaner imports / spelling is hard
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      a717e031
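      A hedged sketch of wrapping a timm architecture as a backbone; the resnet18 architecture name, the out_indices values, and the use_pretrained_backbone flag are illustrative assumptions:

      ```python
      import torch
      from transformers import TimmBackbone

      backbone = TimmBackbone.from_pretrained(
          "resnet18", use_pretrained_backbone=False, out_indices=(1, 2, 3, 4)
      )

      pixel_values = torch.randn(1, 3, 224, 224)  # dummy batch
      outputs = backbone(pixel_values)
      print([feature_map.shape for feature_map in outputs.feature_maps])  # one map per requested stage
      ```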
  17. 02 Jun, 2023 1 commit
  18. 30 May, 2023 1 commit
  19. 12 May, 2023 1 commit
  20. 09 May, 2023 1 commit
    • Add RWKV-4 (#22797) · b4d4d6fe
      Sylvain Gugger authored
      
      
      * First draft of RWKV-4
      
      * Add support for generate
      
      * Style post-rebase
      
      * Properly use state
      
      * Write doc
      
      * Fix doc
      
      * More math
      
      * Add model to README, dummies and clean config
      
      * Fix init
      
      * multiple fixes:
      
      - fix common tests
      - fix configuration default values
      - add CI test for checking state computation
      - fix some CI tests
      
      * correct tokenizer
      
      * some tweaks
      
      - fix config docstring
      - fix failing tests
      
      * fix CI tests
      
      - add output_attention / output_hidden_states
      - override test_initialization
      - fix failing CIs
      
      * fix conversion script
      
      - fix sharded case
      - add new arguments
      
      * add slow tests + more fixes on conversion script
      
      * add another test
      
      * final fixes
      
      * change single name variable
      
      * add mock attention mask for pipeline to work
      
      * correct eos token id
      
      * fix nits
      
      * add checkpoints
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add `tie_word_embeddings` in docstring
      
      * change tensor name
      
      * fix final nits
      
      * Trigger CI
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      b4d4d6fe
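      A hedged sketch of generation with the ported model; RWKV/rwkv-4-169m-pile is assumed to be one of the converted checkpoints mentioned above:

      ```python
      from transformers import AutoTokenizer, RwkvForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
      model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")

      inputs = tokenizer("In a shocking finding,", return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```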
  21. 04 May, 2023 1 commit
  22. 03 May, 2023 2 commits
  23. 02 May, 2023 1 commit
  24. 01 May, 2023 1 commit
  25. 28 Apr, 2023 1 commit
  26. 27 Apr, 2023 2 commits
  27. 23 Apr, 2023 1 commit
  28. 21 Apr, 2023 1 commit
  29. 20 Apr, 2023 1 commit
  30. 19 Apr, 2023 1 commit
    • Add Segment Anything Model (SAM) (#22654) · 474bf508
      Arthur authored
      
      
      * initial commit
      
      * keys match
      
      * update, fix conversion
      
      * fixes, inference working
      
      * fix
      
      * more fixes
      
      * more fixes
      
      * clean up
      
      * more clean up
      
      * fix copies and add convnext copied layer norm
      
      * stash
      
      * pretty big update
      
      * cleaning
      
      * more cleaning
      
      * fixup stuffs
      
      * fix copies
      
      * fix init
      
      * update test removing tokenizer
      
      * nits
      
      * add pretrained
      
      * more nits
      
      * remove tracking of pipeline
      
      * few fixes
      
      * update sam and conversion script
      
      * fix mask decoder and prompt encoder conversion
      
      * fixes
      
      * small update
      
      * fix order
      
      * fix
      
      * fix image embeddings
      
      * nits
      
      * few fixes
      
      * fix logits
      
      * clean up
      
      * fixes boxes inference
      
      * v1 AMG
      
      * clean up
      
      * some clean up
      
      * multi points support
      
      * amg working
      
      * fixup
      
      * clean up
      
      * readme
      
      * update toctree
      
      * fix type hint
      
      * multiple fixes
      
      * fixup
      
      * fixes
      
      * updates
      
      * updates
      
      * more tests
      
      * few fixes
      
      * change to `SamForMaskGeneration`
      
      * doc
      
      * fixup
      
      * fix more tests
      
      * multiple fixes
      
      * fix CI tests
      
      * refactor processor
      
      * renamings
      
      * draft the pipeline
      
      * refactor
      
      * fix tests
      
      * fix test
      
      * few cleanings
      
      * fix test
      
      * edit pipeline to support chunking
      
      * update
      
      * add slow tests
      
      * fix nit
      
      * fixup
      
      * fix nit
      
      * current chunk pipeline
      
      * cast boxes in fp32
      
      * nit
      
      * current updates
      
      * pipeline works
      
      * fixup
      
      * clean up config
      
      * fix slow tests
      
      * fix slow tests
      
      * clean up
      
      * update doc and pipeline
      
      * adds more slow tests
      
      * fix slow tests
      
      * cleaning
      
      * tests pass
      
      * add docstring
      
      * fix copies
      
      * clean up
      
      * support batch of images
      
      * style
      
      * dummy is needed, add tests
      
      * fix slow tests
      
      * fix CI
      
      * update
      
      * adds more tests
      
      * fixes
      
      * fixes
      
      * fixup
      
      * fixes
      
      * few fixes
      
      * filter
      
      * few fixes
      
      * some refactor
      
      * final touches
      
      * fix
      
      * style
      
      * remove pipeline files
      
      * fixes nits
      
      * revert pipeline changes
      
      * fix test
      
      * fixup
      
      * remove automodel for automatic mask generation
      
      * fix failing torch tests
      
      * update mdx
      
      * revert removal of `MODEL_FOR_AUTOMATIC_MASK_GENERATION_MAPPING`
      
      * update sam config based on review
      Co-authored-by: amyeroberts <aeroberts4444@gmail.com>
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      
      * update low_resolution_masks -> pred_masks
      init ln with layer_norm_eps
      add_decomposed_rel_pos doc
      forward doc of SamForMaskGeneration
      
      * update processor docstring
      
      * remove image processor import empty
      
      * update for testing
      
      * output vision hidden states + clean recomm
      also test all iou values
      
      * fixup
      
      * fixup
      
      * remove unused
      
      * Update src/transformers/models/sam/modeling_sam.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/sam/image_processing_sam.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * nits
      
      * fix
      
      * fix CI tests and slow tests
      
      * replace with Amy's processor
      
      * clearer docstring
      
      * add `SamVisionNeck`
      
      * refactor - all CI tests should pass
      
      * fix broken import on Gcolab
      
      * few fixes here and there
      
      * fix another bug
      
      * fix more bugs
      
      * update and merge
      
      * correct ckpt
      
      * address comments
      
      * add tips
      
      * revert
      
      * fix docstring
      
      * replace with `SamModel`
      
      * make fixup
      
      * add support for batched images and batched points
      
      * make fixup this time, really
      
      * make fixup again and again
      
      * few fixes here and there, this should be the finishing touch
      
      * Update docs/source/en/model_doc/sam.mdx
      
      * fixup
      
      * correct checkpoints
      
      * correct name
      
      * rm unneeded file
      
      * add notebook
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: amyeroberts <aeroberts4444@gmail.com>
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      474bf508
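      A hedged sketch of prompting SAM with a single 2D point through the processor and post-processing flow described above; facebook/sam-vit-base, the blank placeholder image, and the point coordinates are illustrative assumptions:

      ```python
      import torch
      from PIL import Image
      from transformers import SamModel, SamProcessor

      processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
      model = SamModel.from_pretrained("facebook/sam-vit-base")

      image = Image.new("RGB", (1024, 768))  # placeholder; use a real image in practice
      input_points = [[[450, 600]]]          # one point prompt for the single image
      inputs = processor(image, input_points=input_points, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)
      masks = processor.image_processor.post_process_masks(
          outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
      )
      scores = outputs.iou_scores
      ```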
  31. 12 Apr, 2023 1 commit
    • add model resources for CPMAnt (new) (#20906) · 523ca4e0
      pioliverse authored
      
      
      * resolve conflicts
      
      * rebase and make style
      
      * test
      
      * test
      
      * test
      
      * rebase and make style
      
      * rebase and make style
      
      * tests
      
      * tests
      
      * rewrite some functions
      
      * rebase and make style
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * fix some bugs & docstring
      
      * add models and tests
      
      * solve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * tests
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * fix some bugs & docstring
      
      * save resolution
      
      * make style
      
      * delete redefinition code
      
      * reformat function
      
      * reformat
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * tests
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * resolve conflicts
      
      * make style
      
      * fix bugs and refactor
      
      * modify docstrings and make style
      
      * unify import format in __init__.py
      
      * fix import-altclip bug
      
      * fix copies to update index.md
      
      * fix unused config parameters
      
      * fix unused config parameters
      
      * fix unused config parameters
      
      * update README_ja.md
      
      * dummy commit for unit test
      
      * fix attention mask
      
      * add CPMAntTokenizer&-Fast to auto-mapping
      
      * drop redundant changes in README_ko
      
      * fix  defaults in docstring
      
      * fix use_cache and some docstring
      
      * add missing args in tokenizer
      
      * modify tester inheritance
      
      * add is_jieba_available
      
      * fix some bugs
      
      * make style and fix-copies
      
      * add doctests
      
      * skip integration tests
      
      * add is_jieba_available
      
      * fix bugs in common tests
      
      * adjust docstrings and make style
      
      * add argument docstring
      
      * adjust code to some specifications
      
      * make style and fix-copies
      
      * add fast tokenization test
      
      * dummy commit for unit test
      
      * dummy commit for unit test
      
      * dummy commit for unit test
      
      * normalize some comments and names
      
      * Bert->CPMAnt
      
      * camel names and drop redundant codes
      
      * make style and fix-copies
      
      * add CpmTokenizerFast _import_structure
      
      * drop cpmanttokenizerfast in model_doc
      
      * fix some problems
      
      * fix CPMAnt tokenization for common test
      
      * make style and fixup
      
      * fix copies and fixup
      
      * fix bugs in tokenization test
      
      * dummy commit for connection failure in unittest
      
      * fix copies
      
      * drop trailing comma
      
      * fix decorator in tests
      
      * dummy commit for connection failure in unittest
      
      ---------
      Co-authored-by: Gong Baitao <gongbaitao11@gmail.com>
      523ca4e0
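      A hedged sketch of generation with CPM-Ant; openbmb/cpm-ant-10b is an assumed checkpoint name, and the tokenizer additionally needs the jieba package mentioned above:

      ```python
      from transformers import CpmAntForCausalLM, CpmAntTokenizer

      tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
      model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")

      inputs = tokenizer("今天天气真好，", return_tensors="pt")  # a Chinese prompt, since CPM-Ant is a Chinese LM
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0]))
      ```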
  32. 10 Apr, 2023 2 commits
    • add GPTNeoXForSequenceClassification (#22671) · 6daa9cb5
      Sugawara authored
      * add GPTNeoXForSequenceClassification
      
      * move the labels to logits.device (ref: #22561)
      
      * fix
      6daa9cb5
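      A hedged sketch of the new classification head on a GPT-NeoX backbone; EleutherAI/pythia-70m and num_labels=2 are illustrative choices, and the head starts randomly initialized:

      ```python
      import torch
      from transformers import AutoTokenizer, GPTNeoXForSequenceClassification

      tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
      model = GPTNeoXForSequenceClassification.from_pretrained("EleutherAI/pythia-70m", num_labels=2)
      model.config.pad_token_id = tokenizer.eos_token_id  # so the model can locate the last token when padding

      inputs = tokenizer("This movie was great!", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits
      print(logits.argmax(dim=-1).item())
      ```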
    • Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode) (#22575) · e0921c6b
      Joel Lamy-Poirier authored
      
      
      * Add model with cli tool
      
      * Remove unwanted stuff
      
      * Add new code
      
      * Remove inference runner
      
      * Style
      
      * Fix checks
      
      * Test updates
      
      * make fixup
      
      * fix docs
      
      * fix doc
      
      * fix test
      
      * hopefully fix pipeline tests
      
      * refactor
      
      * fix CIs
      
      * add comment
      
      * rename to `GPTBigCodeForCausalLM`
      
      * correct readme
      
      * make fixup + docs
      
      * make fixup
      
      * fixes
      
      * fixes
      
      * Remove pruning
      
      * Remove import
      
      * Doc updates
      
      * More pruning removal
      
      * Combine copies
      
      * Single MQA implementation, remove kv cache pre-allocation and padding
      
      * Update doc
      
      * Revert refactor to match gpt2 style
      
      * Merge back key and value caches, fix some type hints
      
      * Update doc
      
      * Fix position ids with padding (PR 21080)
      
      * Add conversion script temporarily
      
      * Update conversion script
      
      * Remove checkpoint conversion
      
      * New model
      
      * Fix MQA test
      
      * Fix copies
      
      * try fix tests
      
      * FIX TEST!!
      
      * remove  `DoubleHeadsModel`
      
      * add MQA tests
      
      * add slow tests
      
      * clean up
      
      * add CPU checker
      
      * final fixes
      
      * fixes
      
      - fix GPU issue
      - fixed slow tests
      - skip disk offload
      
      * fix final issue
      
      * Simplify and comment baddbmm fix
      
      * Remove unnecessary code
      
      * Transpose tweaks
      
      * Use beta=1 on cpu, improve tests
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      e0921c6b
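      A hedged sketch of code completion with the MQA architecture; bigcode/gpt_bigcode-santacoder is assumed to be the converted Santacoder checkpoint:

      ```python
      from transformers import AutoTokenizer, GPTBigCodeForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
      model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder")

      inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(output_ids[0]))
      ```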
  33. 03 Apr, 2023 1 commit
  34. 27 Mar, 2023 1 commit
    • [WIP]`NLLB-MoE` Adds the moe model (#22024) · 19ade242
      Arthur authored
      * Initial commit
      
      * update modeling code
      
      * update doc
      
      * add functions necessary
      
      * fix imports
      
      * revert changes
      
      * fixup
      
      * more styling to get going
      
      * remove standalone encoder
      
      * update code
      
      * styling
      
      * fix config and model
      
      * update code and some refactoring
      
      * make more tests pass
      
      * Adding NLLB-200 - MoE - 54.5B for no language left behind
      Fixes #21300
      
      * fix more common tests
      
      * style
      
      * update testing file
      
      * update
      
      * update
      
      * Router2 doc
      
      * update check config with sparse layer
      
      * add dummy router
      
      * update current conversion script
      
      * create on the fly conversion script
      
      * Fixup
      
      * style
      
      * style 2
      
      * fix empty return
      
      * fix return
      
      * Update default config sparse layers
      
      * easier to create sparse layers
      
      * update
      
      * update conversion script
      
      * update modeling
      
      * add to toctree
      
      * styling
      
      * make ruff happy
      
      * update docstring
      
      * update conversion script
      
      * update, will break tests but implementing top2
      
      * update
      
      * local groups are supported here
      
      * Support for local groups is now removed
      
      This is because it would have to work with model parallelism, which we do not support
      
      * finish simplification
      
      * Fix forward
      
      * style
      
      * fixup
      
      * Update modelling and test, refactoring
      
      * update tests
      
      * remove final layer norm as it is done in the FF
      
      * routing works! Logits test added
      
      * nit in test
      
      * remove top1router
      
      * style
      
      * make sure sparse layers are tested. Had to change route_tokens a little bit
      
      * add support for unslip models when converting
      
      * fixup
      
      * style
      
      * update tests
      
      * update test
      
      * REFACTOR
      
      * encoder outputs match!
      
      * style
      
      * update testing
      
      * 🎉encoder and decoder logits match 🎉
      
      
      
      * styling
      
      * update tests
      
      * cleanup tests
      
      * fix router test and CIs
      
      * cleanup
      
      * cleanup test styling
      
      * fix tests
      
      * Finally the generation tests match!
      
      * cleanup
      
      * update test
      
      * style testing file
      
      * remove script
      
      * cleanup
      
      * more cleanup
      
      * nits
      
      * update
      
      * NLLB tokenizer is wrong and will be fixed soon
      
      * use LongTensors
      
      * update tests
      
      * revert some small changes
      
      * fix second expert sampling and batch prioritized routing
      
      * update tests
      
      * finish last tests
      
      * make ruff happy
      
      * update
      
      * ruff again
      
      * style
      
      * Update docs/source/en/model_doc/nllb-moe.mdx
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updates based on review
      
      * style and fix import issue
      
      * nit
      
      * more nits
      
      * cleanup
      
      * styling
      
      * update test_seconde_expert_policy
      
      * fix name
      
      * last nit on the markdown examples
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      19ade242
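      A hedged sketch of translation with the MoE checkpoint; the 54B model is far too large for most single machines, so this is illustrative only, and the checkpoint name and language codes are assumptions:

      ```python
      from transformers import AutoTokenizer, NllbMoeForConditionalGeneration

      tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="eng_Latn")
      model = NllbMoeForConditionalGeneration.from_pretrained("facebook/nllb-moe-54b")

      inputs = tokenizer("No language left behind.", return_tensors="pt")
      output_ids = model.generate(
          **inputs,
          forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language: French
          max_new_tokens=30,
      )
      print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
      ```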