1. 03 Apr, 2023 5 commits
  2. 31 Mar, 2023 2 commits
  3. 30 Mar, 2023 4 commits
  4. 29 Mar, 2023 4 commits
  5. 27 Mar, 2023 3 commits
• [WIP]`NLLB-MoE` Adds the moe model (#22024) · 19ade242
      Arthur authored
      * Initial commit
      
      * update modeling code
      
      * update doc
      
      * add functions necessary
      
* fix imports
      
      * revert changes
      
      * fixup
      
      * more styling to get going
      
      * remove standalone encoder
      
      * update code
      
      * styling
      
      * fix config and model
      
      * update code and some refactoring
      
      * make more tests pass
      
      * Adding NLLB-200 - MoE - 54.5B for no language left behind
      Fixes #21300
      
* fix more common tests
      
* style
      
      * update testing file
      
      * update
      
      * update
      
      * Router2 doc
      
      * update check config with sparse layer
      
      * add dummy router
      
      * update current conversion script
      
      * create on the fly conversion script
      
      * Fixup
      
      * style
      
      * style 2
      
      * fix empty return
      
      * fix return
      
      * Update default config sparse layers
      
      * easier to create sparse layers
      
      * update
      
      * update conversion script
      
      * update modeling
      
      * add to toctree
      
      * styling
      
      * make ruff happy
      
      * update docstring
      
      * update conversion script
      
* update, will break tests but implementing top2
      
      * update
      
      * local groups are supported here
      
* ⚠️ Support for local groups is now removed ⚠️
      
      This is because it has to work with model parallelism that we do not support
      
* finish simplification
      
      * Fix forward
      
      * style
      
      * fixup
      
      * Update modelling and test, refactoring
      
      * update tests
      
* remove final layer norm as it is done in the FF
      
      * routing works! Logits test added
      
      * nit in test
      
      * remove top1router
      
      * style
      
* make sure sparse layers are tested. Had to change route_tokens a little bit
      
      * add support for unslip models when converting
      
      * fixup
      
      * style
      
* update tests
      
      * update test
      
      * REFACTOR
      
      * encoder outputs match!
      
      * style
      
      * update testing
      
* 🎉 encoder and decoder logits match 🎉
      
* styling
      
      * update tests
      
      * cleanup tests
      
      * fix router test and CIs
      
      * cleanup
      
      * cleanup test styling
      
      * fix tests
      
      * Finally the generation tests match!
      
      * cleanup
      
      * update test
      
      * style testing file
      
      * remove script
      
      * cleanup
      
      * more cleanup
      
      * nits
      
      * update
      
      * NLLB tokenizer is wrong and will be fixed soon
      
      * use LongTensors
      
      * update tests
      
      * revert some small changes
      
      * fix second expert sampling and batch prioritized routing
      
      * update tests
      
      * finish last tests
      
      * make ruff happy
      
      * update
      
      * ruff again
      
      * style
      
      * Update docs/source/en/model_doc/nllb-moe.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updates based on review
      
      * style and fix import issue
      
      * nit
      
      * more nits
      
      * cleanup
      
      * styling
      
      * update test_seconde_expert_policy
      
      * fix name
      
      * last nit on the markdown examples
      
      ---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      19ade242
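A minimal usage sketch for the NLLB-MoE model added above. The checkpoint id `facebook/nllb-moe-54b` and the language codes follow NLLB conventions but are assumptions here, not guaranteed by the commit itself; the 54.5B MoE model also needs substantial memory, so treat this purely as an illustration.

```python
# Hedged sketch: translation with NLLB-MoE via the seq2seq auto classes.
# The checkpoint id "facebook/nllb-moe-54b" is an assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")

inputs = tokenizer("No language left behind.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # force the first generated token to the target-language code
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=40,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```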
• [Pix2Struct] Add support to resize embeddings (#22394) · 0e708178
      NielsRogge authored
      * First draft
      
      * Fix integration test
      
      * Remove script
      
      * Fix test and typos
      
      * Fix one more test
      
      * Skip tied embeddings test
      
      * Remove line
      
      * Address comments
      0e708178
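A minimal sketch of what this enables: `resize_token_embeddings` is the standard Transformers API that this change makes work for Pix2Struct. The checkpoint id and the added token are assumptions for illustration.

```python
# Hedged sketch: resizing Pix2Struct text embeddings after adding tokens.
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

# add a domain-specific token, then grow the embedding matrix to match
processor.tokenizer.add_tokens(["<new_token>"])
model.resize_token_embeddings(len(processor.tokenizer))
```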
  6. 24 Mar, 2023 3 commits
• Resnet flax (#21472) · a0cbbba3
Shubhamai authored
      * [WIP] flax resnet
      
      * added pretrained flax models, results reproducible
      
      * Added pretrained flax models, results reproducible
      
      * working on tests
      
      * no real code change, just some comments
      
      * [flax] adding support for batch norm layers
      
      * fixing bugs related to pt+flax integration
      
      * removing loss from modeling flax output class
      
      * fixing classifier tests
      
      * fixing comments, model output
      
      * cleaning comments
      
      * review changes
      
      * review changes
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * renaming Flax to PyTorch
      
      ---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      a0cbbba3
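A minimal sketch using the new Flax ResNet classes. The commit notes pretrained Flax weights were added; the `microsoft/resnet-50` id and the local image path are assumptions.

```python
# Hedged sketch: image classification with FlaxResNetForImageClassification.
import jax.numpy as jnp
from PIL import Image
from transformers import AutoImageProcessor, FlaxResNetForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = FlaxResNetForImageClassification.from_pretrained("microsoft/resnet-50")

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="np")
logits = model(**inputs).logits
print(model.config.id2label[int(jnp.argmax(logits, axis=-1)[0])])
```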
• Add Mega: Moving Average Equipped Gated Attention (#21766) · 57f25f4b
Mitch Naylor authored
      * add mega file structure and plain pytorch version of mega source code
      
      * added config class with old naming conventions
      
      * filled in mega documentation
      
      * added config class and embeddings with optional token types
      
      * updated notes
      
      * starting the conversion process, deleted intermediate and added use_cache back to config
      
      * renamed config attributes in modeling_mega.py
      
      * checkpointing before refactoring incremental decoding functions
      
      * removed stateful incremental key/values for EMA and self-attention
      
      * refactored MovingAverageGatedAttention to remove stateful k/v history and use unified attention mask
      
      * MovingAverageGatedAttention works with incremental decoding + past values, added sequence length enforcement
      
      * more comments in MovingAverageGatedAttention + checkpointing before GatedCrossAttention
      
      * bug fix in attention mask handling in MovingAverageGatedAttention
      
      * removed incremental state from GatedCrossAttention and removed IncrementalState class
      
      * finished gated cross attention and got MegaLayer working
      
      * fixed causal masking in mega decoder
      
      * fixed how padding and causal masks are passed through MegaLayer with and without k/v caching
      
      * finished MegaModel; tested with encoder, decoder-only, and cross-attention type inputs; started work on downstream classes; removed mentions of position_ids
      
      * added optional dense hidden layer for masked and causal LM classes
      
      * docstring updates in MultiHeadEMA and GatedCrossAttention, removed unnecessary inputs in cross-attention
      
      * removed before_attn_fn in Mega class and updated docstrings and comments up to there
      
      * bug fix in MovingAverageGatedAttention masking
      
      * working conversion of MLM checkpoint in scratchpad script -- perfect matches
      
      * moved arg for hidden dense layer in LM head to config; discovered issue where from_pretrained is renaming gamma and beta parameters
      
      * renamed gamma and beta parameters to avoid HF renaming when loading from checkpoint
      
      * finished checkpoint conversion script
      
      * cleanup old class in mega config script
      
      * removed 'copied from' statements and passing integration tests
      
      * added num_attention_heads=1 to config for integration compatibility, decoder tests working, generation tests failing
      
      * fixed tuple output of megamodel
      
      * all common tests passing after fixing issues in decoder, gradient retention, and initialization
      
      * added mega-specific tests, ready for more documentation and style checks
      
      * updated docstrings; checkpoint before style fixes
      
      * style and quality checks, fixed initialization problem in float_tensor, ready for PR
      
      * added mega to toctree
      
      * removed unnecessary arg in megaconfig
      
      * removed unused arg and fixed code samples with leftover roberta models
      
      * Apply suggestions from code review
      
Applied all suggestions except the one renaming a class, as I'll need to update that throughout
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixed issue where .view breaks batch dimension, conversion script fixed with absolute imports, updated readme with Mega->MEGA
      
      * removed asserts in Mega code, renamed sequencenorm, gatedcrossattention, and NFFN, replaced get_activation_fn with ACTFN, and added sequencenorm to layer norms
      
      * reformatted .forward() docstrings to match style and removed unused mask input in cross-attention
      
      * removed all reset_parameters() methods and rolled into MegaPreTrainedModel._init_weights()
      
      * renamed all single-letter variables and improved readability in tensor size comments, Mega->MEGA in 2 documentation files
      
      * variable names in NFFN
      
      * manual Mega->MEGA changes in docs
      
      * Mega->MEGA in config auto
      
      * style and quality fixes
      
      * Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * renamed parameters and variables with confusing names, added copied from statements, moved fft conv to its own method, other cleanup from PR comments
      
      * commit before dealing with merge conflicts
      
      * made new attention activation functions available in ACT2FN and added generation test from OPT
      
      * style and quality in activations and tests
      
      * documentation fixes, renaming variables in dropout and rotary positions, used built-in causal masking, encoders->layers in MegaModel, moved comments into docstrings
      
      * style and quality fixes after latest updates, before rotary position ids
      
      * causal mask in MegaBlock docstring + added missing device passing
      
      * Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * added Mega prefixes where missing, reverted MegaSequenceNorm to if-else, other module renaming requested in PR
      
      * style and quality fixes + readme updates pointing to main
      
      ---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      57f25f4b
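A hedged masked-LM sketch for the new MEGA classes. The checkpoint id `mnaylor/mega-base-wikitext` and the `<mask>` token are assumptions about the converted checkpoint, not guaranteed by the commit.

```python
# Hedged sketch: masked-LM inference with MEGA (assumed checkpoint id).
import torch
from transformers import AutoTokenizer, MegaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the mask position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_pos].argmax()))
```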
• Joao Gante · 0fa46524
  7. 23 Mar, 2023 4 commits
  8. 22 Mar, 2023 7 commits
• Fix PipelineTests skip conditions (#22320) · 8b05ace0
Yih-Dar authored
      * check what tests fail
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * clean up
      
      * clean up
      
      ---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      8b05ace0
• Chunkable token classification pipeline (#21771) · d62e7d88
Luc CAILLIAU authored
      * Chunkable classification pipeline 
      
The TokenClassificationPipeline is now able to process sequences longer than 512 tokens, no matter the framework, model, or tokenizer. You just have to pass process_all=True and, optionally, a stride value. The behavior remains the same if you don't pass these optional parameters. For the overlapping parts when stride is above 0, we keep only the max score for each overlapped token across all chunks that contain it (see the usage sketch after this entry).
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * update with latest black format
      
      * update black format
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * format correction
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update comments
      
      * Update src/transformers/pipelines/token_classification.py
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * Update token_classification.py
      
      Correct spaces, remove process_all and keep only stride. If stride is provided, the pipeline is applied to the whole text.
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update chunk aggregation
      
      Update the chunk aggregation strategy based on entities aggregation.
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      Remove unnecessary pop from outputs dict
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add chunking tests
      
* correct formatting
      
      * correct formatting
      
      * correct model id for test chunking
      
      * update scores with nested simplify
      
      * Update test_pipelines_token_classification.py
      
      * Update test_pipelines_token_classification.py
      
      * update model to a tiny one
      
      * Update test_pipelines_token_classification.py
      
      * Adding smaller test for chunking.
      
      * Fixup
      
      * Update token_classification.py
      
      * Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      d62e7d88
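The usage sketch referenced above, reflecting the final API after review: `stride` (together with a fast tokenizer and an aggregation strategy) chunks long inputs into overlapping windows. The model id is illustrative, not part of the commit.

```python
# Hedged sketch: token classification on texts longer than the model's
# 512-token limit, using the new `stride` parameter.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",      # any token-classification checkpoint
    aggregation_strategy="simple",
    stride=128,                        # overlap between consecutive chunks
)

long_text = "Hugging Face is based in New York City. " * 200  # > 512 tokens
print(ner(long_text)[:3])
```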
• Add Pix2Struct (#21400) · 0f68a7f4
Younes Belkada authored
      * v1 all keys match
      
      * clean up
      
      * forward pass ok
      
      * add correct image transform
      
      * generate works, logits matching
      
      * clean up
      
      * more refactor
      
      * revert
      
      * revert
      
      * clean up
      
      * clean ups
      
      * clean up
      
      * refactor
      
      * refactor
      
      * fix doc
      
      * fix tokenizer test
      
      * fix toctree
      
      * revert toctree
      
      * oops
      
      * few fixes
      
      * replace to `pixel_embeds`
      
      * make fixup
      
      * test processing & feat extractor
      
      * fix some tests
      
      * more fixes
      
      * make fixup
      
      * clean up
      
      * more clean up
      
      * add a single slow test
      
      * fix test
      
      * make fixup
      
      * fix
      
      * fix authors
      
      * fix toctree
      
      * update docs
      
      * add docstring
      
      * revert change
      
      * Update src/transformers/models/pix2struct/__init__.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix tokenizer
      
      * fix processor test
      
      * fix test
      
      * make fixup
      
      * refactor
      
      * fix config
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * format
      
      * fix
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * make fixup
      
      * add docstring
      
      * fix issues
      
      * fix
      
      * fix
      
      * fix
      
      * add slow test
      
      * fix
      
      * fix
      
      * fix batched issue
      
      * fix training issues
      
      * fix ci test
      
      * fix slow test
      
      * fix conversion script
      
      * remove unneeded classes
      
      * fix slow test
      
      * fix require backends
      
      * fix masked fill
      
      * revert
      
      * fix softmax
      
      * add large models support
      
      * fix conditional generation
      
      * few fixes
      
      * add instructions
      
      * rm unneeded file
      
      * Update src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py
      
      * fix ci test
      
      * fix ci test really
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix nit
      
      * fix nits
      
      * fix image processors nits
      
      * docstring
      
      * clean up
      
      * fix nit
      
      * fix tests
      
      * docstring nit
      
      * fix reshape
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fix nit
      
      * fix repetition
      
      * refactor processor
      
      * make patch size consistent
      
      * refactor forward
      
      * fix docstring
      
      * fix max_patches issue
      
* update docstring
      
      * update docstring
      
* fix copied from
      
      * add skip reasons
      
      * few fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * format
      
      * fix doctests
      
      * refactor and fix
      
      * fix doc build issue
      
      * fix processor test
      
      * small fix conversion script
      
      * replace correct weights
      
      * make fixup
      
      * fix some issues
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * revert config and fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * more details
      
      * fixes
      
      * fix processor
      
      * fix processor test
      
      * fix
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * fix processor
      
      * Update src/transformers/models/pix2struct/modeling_pix2struct.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add copied
      
      * make fixup
      
      * fix copies
      
      * update docstring
      
      * refactor
      
      * fix docstring
      
      * fix conversion script
      
      * fix vqa issue
      
      * replace to `flattened_patches`
      
      * nit
      
      * fix numpy issue
      
      * fix image processors
      
      * add batched vqa support
      
      * fix vqa conversion
      
      * make fixup
      
      * fix conversion script
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add correct docstring
      
      * update docstring
      
      * fix module level + channel dim
      
      * use `make_list_of_images`
      
      * refactor
      
      * correct docstring
      
      * fix authors
      
      * remove `data_format`
      
      * add header text test
      
      * Apply suggestions from code review
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add checkpoints
      
      ---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      0f68a7f4
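A hedged usage sketch for the new model: the processor converts an image into `flattened_patches`, the input name settled on during this PR. The checkpoint id and image path are assumptions.

```python
# Hedged sketch: image captioning with Pix2Struct.
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

image = Image.open("screenshot.png")  # any image
inputs = processor(images=image, return_tensors="pt")  # contains flattened_patches
generated_ids = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```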
• Beef up Llama tests (#22314) · fd3eb3e3
      Joao Gante authored
      * tmp commit
      
      * beef up llama tests
      fd3eb3e3
• Generate: Export TF generate with a TF tokenizer (#22310) · 12febc20
      Joao Gante authored
      * Export TF generate with a TF tokenizer
      
      * remove unused lines
      12febc20
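A hedged sketch of the idea: wrapping `generate()` in a `tf.function` with a fixed input signature so the whole generation loop can be saved as a SavedModel. The PR pairs this with a TF-native tokenizer for text-in/text-out serving; this sketch stops at token ids, and the model id is illustrative.

```python
# Hedged sketch: exporting TF generate() as a SavedModel signature.
import tensorflow as tf
from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")

class ExportableGenerate(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec((None, None), tf.int32, name="input_ids")])
    def serving(self, input_ids):
        # the full generation loop is traced into the saved graph
        return self.model.generate(input_ids, max_new_tokens=8)

exportable = ExportableGenerate(model)
tf.saved_model.save(
    exportable, "gpt2_generate", signatures={"serving_default": exportable.serving}
)
```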
• Fixed bug to calculate correct xpath_sub_list in MarkupLMTokenizer (#22302) · 48bef3a7
silentghoul-spec authored
Fixed a bug so that MarkupLMTokenizer calculates the correct xpath_sub_list; previously, xpath_sub_list was the same as xpath_tags_list (see the sketch after this entry).
Co-authored-by: dusejat <dusejat@amazon.com>
      48bef3a7
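The sketch referenced above, showing the fixed behavior at the processor level: the xpath subscripts sequence should now differ from the tags sequence instead of duplicating it. The checkpoint id follows the MarkupLM docs.

```python
# Hedged sketch: xpath_subs_seq vs. xpath_tags_seq after the fix.
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html = "<html><body><h1>Title</h1><p>First</p><p>Second</p></body></html>"
encoding = processor(html, return_tensors="pt")

print(encoding["xpath_tags_seq"][0, :4])  # tag ids along each node's xpath
print(encoding["xpath_subs_seq"][0, :4])  # subscripts; no longer a copy of the tags
```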
• Add MaskedImageModelingOutput (#22212) · 0558914d
      Alara Dirik authored
      * Add MaskedImageModelingOutput
      0558914d
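A hedged sketch of the new output class: masked image modeling heads now return a dedicated MaskedImageModelingOutput with a `reconstruction` field. The checkpoint id and random mask are illustrative.

```python
# Hedged sketch: ViT masked image modeling with the new output class.
import torch
from transformers import ViTForMaskedImageModeling

model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")

pixel_values = torch.randn(1, 3, 224, 224)
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()  # random patch mask

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(type(outputs).__name__)        # MaskedImageModelingOutput
print(outputs.reconstruction.shape)  # reconstructed pixel values
```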
  9. 21 Mar, 2023 2 commits
  10. 17 Mar, 2023 1 commit
  11. 16 Mar, 2023 3 commits
• 🔥 py38 + torch 2 🔥🔥🔥🚀 (#22204) · 5110e574
Yih-Dar authored
      * py38 + torch 2
      
      * increment cache versions
      
      ---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      5110e574
• LLaMA Implementation (#21955) · 0041be5b
Jason Phang authored
      * LLaMA
      
      * sharding and docs
      
      * tweak
      
      * black
      
      * inits
      
      * ruff
      
      * LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * init
      
      * no checkpoint
      
      * docs
      
      * ruff
      
      * type_vocab_size
      
      * tokenizer fixes
      
      * tokenizer fixes
      
      * Update tokenization_llama.py
      
      * Update tokenization_llama.py
      
      * Update configuration_llama.py
      
      * Update modeling_llama.py
      
      * tokenizer add_bos by default
      
      * licenses
      
      * remove decoder
      
      * norms and mlp
      
      * rope overhaul
      
      * tweaks
      
      * black
      
      * mention OPT implementation
      
      * off-by-one naming
      
      * typo
      
      * fix
      
      * tokenization fix and slicing bug
      
      * padding config
      
      * cleanup
      
      * black
      
      * update tests
      
      * undo typo
      
      * fix vocab caching logic
      
      * ruff
      
      * docbuilder
      
      * attn fix from BlackSamorez
      
      * initial feedback
      
      * typo
      
      * docs
      
      * llama case
      
      * llama case
      
      * load checkpoint docs
      
      * comment about tokenizer
      
      * tokenizer defaults
      
      * clear past_key_values if use_cache=False
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      ---------
Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      0041be5b
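A hedged loading sketch for the new LLaMA classes. The weights are not distributed with transformers; the local path below is a placeholder for a directory produced by the conversion script added in this PR.

```python
# Hedged sketch: loading a locally converted LLaMA checkpoint.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/path/to/llama-7b-hf")  # placeholder path
model = LlamaForCausalLM.from_pretrained("/path/to/llama-7b-hf")

inputs = tokenizer("The capital of France is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```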
• Yih-Dar · 52a57f7c
  12. 15 Mar, 2023 2 commits