"examples/summarization/vscode:/vscode.git/clone" did not exist on "c203509d5b0f002a7833382a03ffe7802aa14e91"
  1. 27 Sep, 2023 1 commit
  2. 12 Sep, 2023 1 commit
  3. 25 Aug, 2023 1 commit
    • [`CodeLlama`] Add support for `CodeLlama` (#25740) · 015f8e11
      Arthur authored
      
      
      * add all
      
      * Revert "Delete .github directory"
      
      This reverts commit 9b0ff7b052e2b20b629a26fb13606b78a42944d1.
      
      * make conversion script backward compatible
      
      * fixup
      
      * more styling
      
      * copy to llama changes
      
      * fix repo consistency
      
      * nits
      
      * document correct classes
      
      * updates
      
      * more fixes
      
      * nits
      
      * update auto mappings
      
      * add readmes
      
      * small updates
      
      * replace llama-code with llama_code
      
      * make fixup
      
      * updates to the testing suite
      
      * fix fast nits
      
      * more small fixes
      
      * fix decode
      
      * fix template processing
      
      * properly reset the normalizer
      
      * nits processor
      
      * tokenization tests pass
      
      * styling
      
      * last tests
      
      * additional nits
      
      * one test is left
      
      * nits
      
      Co-authored-by: faabian <faabian@users.noreply.github.com>
      
      * update failing test
      
      * fixup
      
      * remove decode infilling; users should handle it on their own after generation, since padding can be a problem
      
      * update
      
      * make test slow and more meaningful
      
      * fixup
      
      * doc update
      
      * fixup
      
      * Apply suggestions from code review
      
      * add kwargs doc
      
      * tokenizer requires `requires_backend`
      
      * type requires_backends
      
      * CodeLlama instead of LlamaCode
      
      * more name changes
      
      * nits
      
      * make doctests happy
      
      * small pipeline nits
      
      * last nit
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update
      
      * add codellama to toctree
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
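      A minimal usage sketch for the `CodeLlama` support added in this commit; the `codellama/CodeLlama-7b-hf` checkpoint name and the prompt are illustrative assumptions, not part of the commit:

      ```python
      # Sketch only: checkpoint name and prompt are assumptions for illustration.
      from transformers import CodeLlamaTokenizer, LlamaForCausalLM

      tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
      model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

      # The tokenizer supports infilling: <FILL_ME> marks the span to complete.
      prompt = "def remove_non_ascii(s: str) -> str:\n    <FILL_ME>\n    return result"
      inputs = tokenizer(prompt, return_tensors="pt")
      output = model.generate(inputs["input_ids"], max_new_tokens=64)
      # Per this commit, infilling output is decoded by the user after generation.
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```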
  4. 25 Jul, 2023 2 commits
    • [`T5`, `MT5`, `UMT5`] Add [T5, MT5, UMT5]ForSequenceClassification (#24726) · 8f36ab3e
      Sebastian Husch Lee authored
      * Initial addition of t5forsequenceclassification
      
      * Adding imports and adding tests
      
      * Formatting
      
      * Running make fix-copies
      
      * Adding mt5forseq
      
      * Formatting
      
      * run make fix-copies
      
      * Adding to docs
      
      * Add model_parallel
      
      * Fix bug
      
      * Fix
      
      * Remove TODO
      
      * Fixing tests for T5ForSequenceClassification
      
      * Undo changes to dependency_versions_table.py
      
      * Change classification head to work with T5Config directly
      
      * Change seq length to let tests pass
      
      * PR comments for formatting
      
      * Formatting
      
      * Initial addition of UMT5ForSequenceClassification
      
      * Adding to inits and formatting
      
      * run make fix-copies
      
      * Add doc for UMT5ForSeqClass
      
      * Update UMT5 config
      
      * Fix docs
      
      * Skip torch fx test for SequenceClassification
      
      * Formatting
      
      * Add skip to UMT5 tests as well
      
      * Fix umt5 tests
      
      * Running make fix-copies
      
      * PR comments
      
      * Fix for change to sentence_representation
      
      * Rename seq_len to hidden_size since that's what it is
      
      * Use base_model to follow format of the rest of the library
      
      * Update docs
      
      * Extract the decoder_input_ids changes and make one liner
      
      * Make one-liner
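      A minimal sketch of the new `T5ForSequenceClassification` head (the `MT5` and `UMT5` variants expose the same API); `t5-small` is an assumed checkpoint here, and the head is freshly initialized, so it needs fine-tuning before the logits mean anything:

      ```python
      import torch
      from transformers import AutoTokenizer, T5ForSequenceClassification

      tokenizer = AutoTokenizer.from_pretrained("t5-small")
      # num_labels configures the new classification head (randomly initialized here).
      model = T5ForSequenceClassification.from_pretrained("t5-small", num_labels=2)

      inputs = tokenizer("This movie was great!", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits  # decoder_input_ids are derived internally
      print(logits.argmax(dim=-1))
      ```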
    • [`MPT`] Add MosaicML's `MPT` model to transformers (#24629) · dcb183f4
      Arthur authored
      
      
      * draft add new model like
      
      * some cleaning of the config
      
      * nits
      
      * add nested configs
      
      * nits
      
      * update
      
      * update
      
      * added layer norms + triton kernels
      
      * consider only LPLayerNorm for now.
      
      * update
      
      * all keys match.
      
      * Update
      
      * fixing nits here and there
      
      * working forward pass.
      
      * removed einops dependency
      
      * nits
      
      * format
      
      * add alibi
      
      * byebye head mask
      
      * refactor attention
      
      * nits.
      
      * format
      
      * fix nits.
      
      * nuke and updates
      
      * nuke tokenizer test
      
      * don't reshape query with kv heads
      
      * added a bit of documentation.
      
      * remove unneeded things
      
      * nuke more stuff
      
      * nit
      
      * logits match - same generations
      
      * rm unneeded methods
      
      * 1 remaining failing CI test
      
      * nit
      
      * fix nits
      
      * fix docs
      
      * fix docs
      
      * rm tokenizer
      
      * fixup
      
      * fixup
      
      * fixup and fix tests
      
      * fixed configuration object.
      
      * use correct activation
      
      * few minor fixes
      
      * clarify docs a bit
      
      * logits match at 1e-12
      
      * skip and unskip a test
      
      * added some slow tests.
      
      * fix readme
      
      * add more details
      
      * Update docs/source/en/model_doc/mpt.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix configuration issues
      
      * more fixes in config
      
      * added more models
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove unneeded position ids
      
      * fix some comments
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * revert suggestion
      
      * mpt alibi + added batched generation
      
      * Update src/transformers/models/mpt/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove init config
      
      * Update src/transformers/models/mpt/configuration_mpt.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix nit
      
      * add another slow test
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fits in one line
      
      * some refactor because make fixup doesn't pass
      
      * add ft notebook
      
      * update md
      
      * correct doc path
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
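      A minimal sketch of loading MPT natively after this commit; `mosaicml/mpt-7b` is an assumed public checkpoint name, not something this commit pins:

      ```python
      from transformers import AutoTokenizer, MptForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
      # With native support, no trust_remote_code is needed to load the weights.
      model = MptForCausalLM.from_pretrained("mosaicml/mpt-7b")

      inputs = tokenizer("MosaicML's MPT architecture uses ALiBi, so", return_tensors="pt")
      output = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```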
  5. 11 Jul, 2023 1 commit
    • Falcon port (#24523) · b3ab3fac
      Matt authored
      
      
      * Initial commit
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Cleanup config docstring
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Convert to relative imports
      
      * Remove torch < 1.8 warning
      
      * Restructure cos_sin header
      
      * qkv -> query, key, value
      
      * Refactor attention calculation
      
      * Add a couple of config variables to account for the different checkpoints
      
      * Successful merging of the code paths!
      
      * Fix misplaced line in the non-parallel attention path
      
      * Update config and tests
      
      * Add a pad_token_id when testing
      
      * Support output_attentions when alibi is None
      
      * make fixup
      
      * Skip KV cache shape test
      
      * No more _keys_to_ignore_on_load_missing
      
      * Simplify self attention a bit
      
      * Simplify self attention a bit
      
      * make fixup
      
      * stash commit
      
      * Some more attention mask updates
      
      * Should pass all tests except assisted generation!
      
      * Add big model generation test
      
      * make fixup
      
      * Add temporary workaround for test
      
      * Test overrides for assisted generation
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update tests/models/falcon/test_modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Test overrides for assisted generation
      
      * Add generation demo
      
      * Update copyright
      
      * Make the docstring model actually small
      
      * Add module-level docstring
      
      * Remove all assertions
      
      * Add copied from bloom
      
      * Reformat the QKV layer
      
      * Add copied from bloom
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove unused line and reformat
      
      * No single letter variables
      
      * Cleanup return names
      
      * Add copied from line
      
      * Remove the deprecated arguments blocks
      
      * Change the embeddings test to an alibi on/off test
      
      * Remove position_ids from FalconForQA
      
      * Remove old check for token type IDs
      
      * Fix the alibi path when multi_query is False
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/falcon/test_modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update config naming
      
      * Fix typo for new_decoder_architecture
      
      * Add some comments
      
      * Fix docstring
      
      * Fix docstring
      
      * Create range in the right dtype from the start
      
      * Review comment cleanup
      
      * n_head_kv -> num_kv_heads
      
      * self.alibi -> self.use_alibi
      
      * self.num_kv -> self.num_kv_heads
      
      * Reorder config args
      
      * Made alibi arguments Optional
      
      * Add all model docstrings
      
      * Add extra checkpoints
      
      * Add author info for Falcon
      
      * Stop removing token_type_ids because our checkpoints shouldn't return it anymore
      
      * Add one hopeful comment for the future
      
      * Fix typo
      
      * Update tests, fix cache issue for generation
      
      * Use -1e9 instead of -inf to avoid float overflow
      
      * Recompute the rotary embeddings much less often
      
      * Re-enable disabled tests
      
      * One final fix to attention mask calculation, and update tests
      
      * Cleanup targeting falcon-40b equivalency
      
      * Post-rebase docs update
      
      * Update docstrings, especially in the config
      
      * More descriptive variable names, and comments where we can't rename them
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
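      A minimal sketch of the ported Falcon model; `tiiuae/falcon-7b` is an assumed checkpoint name, and the renamed config fields (`num_kv_heads`, `new_decoder_architecture`) come from the review items above:

      ```python
      from transformers import AutoTokenizer, FalconForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
      model = FalconForCausalLM.from_pretrained("tiiuae/falcon-7b")

      inputs = tokenizer("The Falcon models were trained on", return_tensors="pt")
      output = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```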
  6. 10 Jul, 2023 1 commit
  7. 20 Jun, 2023 1 commit
  8. 02 Jun, 2023 1 commit
  9. 01 May, 2023 1 commit
  10. 28 Apr, 2023 1 commit
  11. 10 Apr, 2023 2 commits
    • add GPTNeoXForSequenceClassification (#22671) · 6daa9cb5
      Sugawara authored
      * add GPTNeoXForSequenceClassification
      
      * move the labels to logits.device (ref: #22561)
      
      * fix
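      A minimal sketch of the new head; `EleutherAI/pythia-70m` is an assumed GPT-NeoX-family checkpoint, and the classification head is freshly initialized, so it needs fine-tuning first:

      ```python
      import torch
      from transformers import AutoTokenizer, GPTNeoXForSequenceClassification

      tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
      model = GPTNeoXForSequenceClassification.from_pretrained(
          "EleutherAI/pythia-70m", num_labels=2
      )
      model.config.pad_token_id = tokenizer.eos_token_id  # no pad token by default

      inputs = tokenizer("A promising result.", return_tensors="pt")
      with torch.no_grad():
          print(model(**inputs).logits.argmax(dim=-1))
      ```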
    • Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode) (#22575) · e0921c6b
      Joel Lamy-Poirier authored
      
      
      * Add model with cli tool
      
      * Remove unwanted stuff
      
      * Add new code
      
      * Remove inference runner
      
      * Style
      
      * Fix checks
      
      * Test updates
      
      * make fixup
      
      * fix docs
      
      * fix doc
      
      * fix test
      
      * hopefully fix pipeline tests
      
      * refactor
      
      * fix CIs
      
      * add comment
      
      * rename to `GPTBigCodeForCausalLM`
      
      * correct readme
      
      * make fixup + docs
      
      * make fixup
      
      * fixes
      
      * fixes
      
      * Remove pruning
      
      * Remove import
      
      * Doc updates
      
      * More pruning removal
      
      * Combine copies
      
      * Single MQA implementation, remove kv cache pre-allocation and padding
      
      * Update doc
      
      * Revert refactor to match gpt2 style
      
      * Merge back key and value caches, fix some type hints
      
      * Update doc
      
      * Fix position ids with padding (PR 21080)
      
      * Add conversion script temporarily
      
      * Update conversion script
      
      * Remove checkpoint conversion
      
      * New model
      
      * Fix MQA test
      
      * Fix copies
      
      * try fix tests
      
      * FIX TEST!!
      
      * remove `DoubleHeadsModel`
      
      * add MQA tests
      
      * add slow tests
      
      * clean up
      
      * add CPU checker
      
      * final fixes
      
      * fixes
      
      - fix GPU issue
      - fixed slow tests
      - skip disk offload
      
      * fix final issue
      
      * Simplify and comment baddbmm fix
      
      * Remove unnecessary code
      
      * Transpose tweaks
      
      * Use beta=1 on cpu, improve tests
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
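      A minimal sketch of the renamed `GPTBigCodeForCausalLM`; `bigcode/gpt_bigcode-santacoder` is an assumed checkpoint name for the converted Santacoder weights, and multi-query attention is toggled by the config's `multi_query` flag:

      ```python
      from transformers import AutoTokenizer, GPTBigCodeForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
      model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder")
      print(model.config.multi_query)  # MQA is the default for these checkpoints

      inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
      output = model.generate(**inputs, max_new_tokens=48)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```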
  12. 24 Mar, 2023 1 commit
    • Add Mega: Moving Average Equipped Gated Attention (#21766) · 57f25f4b
      Mitch Naylor authored
      
      
      * add mega file structure and plain pytorch version of mega source code
      
      * added config class with old naming conventions
      
      * filled in mega documentation
      
      * added config class and embeddings with optional token types
      
      * updated notes
      
      * starting the conversion process, deleted intermediate and added use_cache back to config
      
      * renamed config attributes in modeling_mega.py
      
      * checkpointing before refactoring incremental decoding functions
      
      * removed stateful incremental key/values for EMA and self-attention
      
      * refactored MovingAverageGatedAttention to remove stateful k/v history and use unified attention mask
      
      * MovingAverageGatedAttention works with incremental decoding + past values, added sequence length enforcement
      
      * more comments in MovingAverageGatedAttention + checkpointing before GatedCrossAttention
      
      * bug fix in attention mask handling in MovingAverageGatedAttention
      
      * removed incremental state from GatedCrossAttention and removed IncrementalState class
      
      * finished gated cross attention and got MegaLayer working
      
      * fixed causal masking in mega decoder
      
      * fixed how padding and causal masks are passed through MegaLayer with and without k/v caching
      
      * finished MegaModel; tested with encoder, decoder-only, and cross-attention type inputs; started work on downstream classes; removed mentions of position_ids
      
      * added optional dense hidden layer for masked and causal LM classes
      
      * docstring updates in MultiHeadEMA and GatedCrossAttention, removed unnecessary inputs in cross-attention
      
      * removed before_attn_fn in Mega class and updated docstrings and comments up to there
      
      * bug fix in MovingAverageGatedAttention masking
      
      * working conversion of MLM checkpoint in scratchpad script -- perfect matches
      
      * moved arg for hidden dense layer in LM head to config; discovered issue where from_pretrained is renaming gamma and beta parameters
      
      * renamed gamma and beta parameters to avoid HF renaming when loading from checkpoint
      
      * finished checkpoint conversion script
      
      * cleanup old class in mega config script
      
      * removed 'copied from' statements and passing integration tests
      
      * added num_attention_heads=1 to config for integration compatibility, decoder tests working, generation tests failing
      
      * fixed tuple output of megamodel
      
      * all common tests passing after fixing issues in decoder, gradient retention, and initialization
      
      * added mega-specific tests, ready for more documentation and style checks
      
      * updated docstrings; checkpoint before style fixes
      
      * style and quality checks, fixed initialization problem in float_tensor, ready for PR
      
      * added mega to toctree
      
      * removed unnecessary arg in megaconfig
      
      * removed unused arg and fixed code samples with leftover roberta models
      
      * Apply suggestions from code review
      
      Applied all suggestions except the one renaming a class, as I'll need to update that throughout
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixed issue where .view breaks batch dimension, conversion script fixed with absolute imports, updated readme with Mega->MEGA
      
      * removed asserts in Mega code, renamed sequencenorm, gatedcrossattention, and NFFN, replaced get_activation_fn with ACT2FN, and added sequencenorm to layer norms
      
      * reformatted .forward() docstrings to match style and removed unused mask input in cross-attention
      
      * removed all reset_parameters() methods and rolled into MegaPreTrainedModel._init_weights()
      
      * renamed all single-letter variables and improved readability in tensor size comments, Mega->MEGA in 2 documentation files
      
      * variable names in NFFN
      
      * manual Mega->MEGA changes in docs
      
      * Mega->MEGA in config auto
      
      * style and quality fixes
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * renamed parameters and variables with confusing names, added copied from statements, moved fft conv to its own method, other cleanup from PR comments
      
      * commit before dealing with merge conflicts
      
      * made new attention activation functions available in ACT2FN and added generation test from OPT
      
      * style and quality in activations and tests
      
      * documentation fixes, renaming variables in dropout and rotary positions, used built-in causal masking, encoders->layers in MegaModel, moved comments into docstrings
      
      * style and quality fixes after latest updates, before rotary position ids
      
      * causal mask in MegaBlock docstring + added missing device passing
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * added Mega prefixes where missing, reverted MegaSequenceNorm to if-else, other module renaming requested in PR
      
      * style and quality fixes + readme updates pointing to main
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
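      A minimal sketch of the merged MEGA model; `mnaylor/mega-base-wikitext` is assumed to be the author's converted MLM checkpoint mentioned above:

      ```python
      import torch
      from transformers import AutoTokenizer, MegaForMaskedLM

      tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
      model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

      text = f"The capital of France is {tokenizer.mask_token}."
      inputs = tokenizer(text, return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits
      # Locate the mask position and decode the top prediction for it.
      mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
      print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
      ```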
  13. 20 Mar, 2023 1 commit
  14. 17 Mar, 2023 1 commit
  15. 27 Feb, 2023 1 commit
  16. 15 Feb, 2023 1 commit
    • Add Ernie-M Model to huggingface (#21349) · 0c9c8472
      Susnato Dhar authored
      * config and tokenization(fast too) changed and ErnieEncoder added
      
      * Slow Tokenization Added
      
      * Tokenizer(slow) is now working and Fast Tokenizer removed
      
      * Added Config code
      
      * Added Base Model and utils
      
      * ErnieMModel is now working
      
      * All added except tests
      
      * All tests passed except ErnieUIEM
      
      * All tests passed
      
      * all fixes done
      
      * all fixes done
      
      * fixed MAP
      
      * fixed check_code_quality
      
      * fixed Build PR Documentation issue
      
      * Added changes(comments) and also updated to the latest upstream/main
      
      * Added fixup
      
      * Added # Copied comments
      
      * Added fixup
      
      * Added more comments and some nits
      
      * Added fixup
      
      * Fixed README_hd.md
      
      * Added more fixes
      
      * ErnieMTokenizer (being sentencepiece) protected and other docs edited
      
      * Added code_quality fix
      
      * Fixed for
      
      * Added more fix
      
      * modified AZ
      
      * ernie-m tokenization test added!
      
      * attention mask part fixed (with 0 -> self.config.pad_token_id)
      
      * applied make fixup
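      A minimal sketch of the added model; `susnato/ernie-m-base_pytorch` is an assumed checkpoint name, and the slow tokenizer requires `sentencepiece` as noted above:

      ```python
      import torch
      from transformers import ErnieMModel, ErnieMTokenizer

      tokenizer = ErnieMTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
      model = ErnieMModel.from_pretrained("susnato/ernie-m-base_pytorch")

      inputs = tokenizer("Ernie-M is a multilingual encoder.", return_tensors="pt")
      with torch.no_grad():
          hidden = model(**inputs).last_hidden_state
      print(hidden.shape)  # (batch, seq_len, hidden_size)
      ```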
  17. 10 Feb, 2023 1 commit
    • Add X-MOD (#20939) · b0d539cc
      Jannis Vamvas authored
      
      
      * Add X-MOD to Readme
      
      * Add documentation for X-MOD
      
      * Implement X-MOD
      
      * Fix formatting of X-MOD docs
      
      * Change signature of X-MOD forward methods to use lang_ids
      
      * Minor changes
      
      * Rebase with main and run make fix-copies
      
      * Make suggested changes to docstrings
      
      * Improve code readability
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * Fix code style
      
      * Conversion script: Remove asserts and type annotations
      
      * Remove _TOKENIZER_FOR_DOC
      
      * XMOD -> Xmod
      
      * Update copyright note
      
      * Fix doctests
      
      * Fix docstring
      
      * Add integration test for FillMaskPipeline
      
      * Revert "Add integration test for FillMaskPipeline"
      
      This reverts commit 4381eb3b1d0f5d85785f89caba83928e6efa6d1f.
      
      * Add end-to-end integration test for mask fill
      
      * make style
      
      * Rebase with main and make fix-copies
      
      ---------
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
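      A minimal sketch of the added model; `facebook/xmod-base` is an assumed checkpoint, and the language adapter is selected with `lang_ids` or a default language per the signature change above:

      ```python
      import torch
      from transformers import AutoTokenizer, XmodModel

      tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
      model = XmodModel.from_pretrained("facebook/xmod-base")
      model.set_default_language("en_XX")  # route inputs through the English adapter

      inputs = tokenizer("X-MOD shares most weights across languages.", return_tensors="pt")
      with torch.no_grad():
          print(model(**inputs).last_hidden_state.shape)
      ```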
  18. 02 Feb, 2023 1 commit
  19. 27 Jan, 2023 1 commit
    • Automated compatible models list for task guides (#21338) · 73a2ff69
      Maria Khalusova authored
      * initial commit. added tip placeholders and a script
      
      * removed unused imports, fixed paths
      
      * fixed generated links
      
      * make style
      
      * split language modeling doc into two: causal language modeling and masked language modeling
      
      * added check_task_guides.py to make fix-copies
      
      * review feedback addressed
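      The generated lists come from the library's auto mappings; below is a simplified, illustrative stand-in for the idea behind `check_task_guides.py` (the real script lives in the repo's utils and differs in detail):

      ```python
      # Illustrative sketch only; the actual check_task_guides.py is more involved.
      from transformers.models.auto import modeling_auto

      def compatible_models(mapping_name: str) -> str:
          """Render the model types registered for one task as a doc-ready line."""
          mapping = getattr(modeling_auto, mapping_name)  # OrderedDict of model types
          return ", ".join(
              f"[{model_type}](../model_doc/{model_type})" for model_type in sorted(mapping)
          )

      print(compatible_models("MODEL_FOR_CAUSAL_LM_MAPPING_NAMES"))
      ```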
  20. 21 Nov, 2022 1 commit
    • Add inference section to task guides (#18781) · d896029e
      Steven Liu authored
      * 📝 start adding inference section to task guides
      
      * make style
      
      * 📝 add multiple choice
      
      * add rest of inference sections
      
      * make style
      
      * add compute_metric, push_to_hub, pipeline
      
      * make style
      
      * add updated sequence and token classification
      
      * make style
      
      * make edits in token classification
      
      * add audio classification
      
      * make style
      
      * add asr
      
      * make style
      
      * add image classification
      
      * make style
      
      * add summarization
      
      * make style
      
      * add translation
      
      * make style
      
      * add multiple choice
      
      * add language modeling
      
      * add qa
      
      * make style
      
      * review and edits
      
      * apply reviews
      
      * make style
      
      * fix call to processor
      
      * apply audio reviews
      
      * update to better asr model
      
      * make style
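      A minimal sketch of the `pipeline` inference pattern these sections adopted, shown for summarization to match this directory; `t5-small` is an assumed example checkpoint:

      ```python
      from transformers import pipeline

      summarizer = pipeline("summarization", model="t5-small")
      text = (
          "The Transformers library provides thousands of pretrained models for "
          "text, vision, and audio tasks behind one unified inference API."
      )
      print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
      ```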
  21. 07 Sep, 2022 1 commit
  22. 06 Jul, 2022 1 commit
  23. 04 Apr, 2022 1 commit
  24. 25 Mar, 2022 1 commit
  25. 22 Mar, 2022 1 commit
  26. 18 Mar, 2022 1 commit
  27. 15 Mar, 2022 1 commit
  28. 23 Feb, 2022 1 commit