"vscode:/vscode.git/clone" did not exist on "c385de24414e4ec6125ee14c46c128bfe70ecb66"
  1. 27 Mar, 2023 4 commits
  2. 24 Mar, 2023 12 commits
    • [safetensors] don't use in `torch<1.10` (#22370) · cae78c46
      Stas Bekman authored
      * [safetensors] don't use in pt<1.10
      
      * better fix
      cae78c46
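
      In spirit, the fix simply gates safetensors usage on the installed torch version. A minimal sketch of such a guard (the helper name is hypothetical; the 1.10 bound comes from the commit title):

      ```python
      # Illustrative version gate; not the exact helper used in transformers.
      from packaging import version

      import torch

      def can_use_safetensors() -> bool:
          try:
              import safetensors  # noqa: F401
          except ImportError:
              return False
          # per the commit title, avoid safetensors entirely on torch < 1.10
          return version.parse(torch.__version__) >= version.parse("1.10")
      ```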
    • Fix TF pipeline job · cfab34e1
      Sylvain Gugger authored
      cfab34e1
    • Resnet flax (#21472) · a0cbbba3
      Shubhamai authored
      
      
      * [WIP] flax resnet
      
      * added pretrained flax models, results reproducible
      
      * Added pretrained flax models, results reproducible
      
      * working on tests
      
      * no real code change, just some comments
      
      * [flax] adding support for batch norm layers
      
      * fixing bugs related to pt+flax integration
      
      * removing loss from modeling flax output class
      
      * fixing classifier tests
      
      * fixing comments, model output
      
      * cleaning comments
      
      * review changes
      
      * review changes
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * renaming Flax to PyTorch
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      a0cbbba3
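
      A hedged usage sketch for the new Flax ResNet classes this PR adds; the checkpoint name and the random stand-in image are assumptions:

      ```python
      import numpy as np
      import jax.numpy as jnp
      from transformers import AutoImageProcessor, FlaxResNetForImageClassification

      # checkpoint name is an assumption; the PR notes pretrained Flax weights
      # that reproduce the PyTorch results
      processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
      model = FlaxResNetForImageClassification.from_pretrained("microsoft/resnet-50")

      image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
      inputs = processor(images=image, return_tensors="np")
      logits = model(**inputs).logits
      print(int(jnp.argmax(logits, axis=-1)[0]))
      ```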
    • Joao Gante · 88dae78f
    • Improve error message (#22361) · 3a7f5fa9
      Samuel Bubán authored
      * Improve error message
      
      * Fix consistency
      3a7f5fa9
    • Pin tensorflow-text to go with tensorflow (#22362) · 6587125c
      Sylvain Gugger authored
      * Pin tensorflow-text to go with tensorflow
      
      * Make it more convenient to pin TensorFlow
      
      * setup doesn't like f-strings
      6587125c
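
      A sketch of the pinning pattern described above; the version bounds are illustrative, not the ones actually committed:

      ```python
      # setup.py-style dependency table using %-formatting instead of f-strings,
      # since (per the last commit message) the setup tooling rejects f-strings.
      # Bounds are illustrative assumptions.
      _tf_min, _tf_max = "2.4", "2.13"

      install_requires = [
          "tensorflow>=%s,<%s" % (_tf_min, _tf_max),
          "tensorflow-text>=%s,<%s" % (_tf_min, _tf_max),  # pinned to move with tensorflow
      ]
      ```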
    • Update docker files to use official torch 2.0.0 (#22357) · 01203475
      Yih-Dar authored
      
      
      * update docker files to use official torch 2.0.0
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      01203475
    • Add Mega: Moving Average Equipped Gated Attention (#21766) · 57f25f4b
      Mitch Naylor authored
      
      
      * add mega file structure and plain pytorch version of mega source code
      
      * added config class with old naming conventions
      
      * filled in mega documentation
      
      * added config class and embeddings with optional token types
      
      * updated notes
      
      * starting the conversion process, deleted intermediate and added use_cache back to config
      
      * renamed config attributes in modeling_mega.py
      
      * checkpointing before refactoring incremental decoding functions
      
      * removed stateful incremental key/values for EMA and self-attention
      
      * refactored MovingAverageGatedAttention to remove stateful k/v history and use unified attention mask
      
      * MovingAverageGatedAttention works with incremental decoding + past values, added sequence length enforcement
      
      * more comments in MovingAverageGatedAttention + checkpointing before GatedCrossAttention
      
      * bug fix in attention mask handling in MovingAverageGatedAttention
      
      * removed incremental state from GatedCrossAttention and removed IncrementalState class
      
      * finished gated cross attention and got MegaLayer working
      
      * fixed causal masking in mega decoder
      
      * fixed how padding and causal masks are passed through MegaLayer with and without k/v caching
      
      * finished MegaModel; tested with encoder, decoder-only, and cross-attention type inputs; started work on downstream classes; removed mentions of position_ids
      
      * added optional dense hidden layer for masked and causal LM classes
      
      * docstring updates in MultiHeadEMA and GatedCrossAttention, removed unnecessary inputs in cross-attention
      
      * removed before_attn_fn in Mega class and updated docstrings and comments up to there
      
      * bug fix in MovingAverageGatedAttention masking
      
      * working conversion of MLM checkpoint in scratchpad script -- perfect matches
      
      * moved arg for hidden dense layer in LM head to config; discovered issue where from_pretrained is renaming gamma and beta parameters
      
      * renamed gamma and beta parameters to avoid HF renaming when loading from checkpoint
      
      * finished checkpoint conversion script
      
      * cleanup old class in mega config script
      
      * removed 'copied from' statements and passing integration tests
      
      * added num_attention_heads=1 to config for integration compatibility, decoder tests working, generation tests failing
      
      * fixed tuple output of megamodel
      
      * all common tests passing after fixing issues in decoder, gradient retention, and initialization
      
      * added mega-specific tests, ready for more documentation and style checks
      
      * updated docstrings; checkpoint before style fixes
      
      * style and quality checks, fixed initialization problem in float_tensor, ready for PR
      
      * added mega to toctree
      
      * removed unnecessary arg in megaconfig
      
      * removed unused arg and fixed code samples with leftover roberta models
      
      * Apply suggestions from code review
      
      Applied all suggestions except the one renaming a class, as I'll need to update that throughout
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixed issue where .view breaks batch dimension, conversion script fixed with absolute imports, updated readme with Mega->MEGA
      
      * removed asserts in Mega code, renamed sequencenorm, gatedcrossattention, and NFFN, replaced get_activation_fn with ACTFN, and added sequencenorm to layer norms
      
      * reformatted .forward() docstrings to match style and removed unused mask input in cross-attention
      
      * removed all reset_parameters() methods and rolled into MegaPreTrainedModel._init_weights()
      
      * renamed all single-letter variables and improved readability in tensor size comments, Mega->MEGA in 2 documentation files
      
      * variable names in NFFN
      
      * manual Mega->MEGA changes in docs
      
      * Mega->MEGA in config auto
      
      * style and quality fixes
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * renamed parameters and variables with confusing names, added copied from statements, moved fft conv to its own method, other cleanup from PR comments
      
      * commit before dealing with merge conflicts
      
      * made new attention activation functions available in ACT2FN and added generation test from OPT
      
      * style and quality in activations and tests
      
      * documentation fixes, renaming variables in dropout and rotary positions, used built-in causal masking, encoders->layers in MegaModel, moved comments into docstrings
      
      * style and quality fixes after latest updates, before rotary position ids
      
      * causal mask in MegaBlock docstring + added missing device passing
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * added Mega prefixes where missing, reverted MegaSequenceNorm to if-else, other module renaming requested in PR
      
      * style and quality fixes + readme updates pointing to main
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      57f25f4b
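
      A hedged usage sketch for the newly added MEGA classes; the checkpoint name is an assumption:

      ```python
      import torch
      from transformers import AutoTokenizer, MegaForMaskedLM

      # checkpoint name is an assumption; substitute the released MEGA checkpoint
      tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
      model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

      inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits

      # decode the highest-scoring token at the mask position
      mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
      print(tokenizer.decode(logits[0, mask_pos].argmax()))
      ```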
    • Joao Gante · 0fa46524
    • Fix typo in Greedy Search Description (#22345) · b7960765
      Ashwin Mathur authored
      Fix typo in greedy search docs
      b7960765
    • [HFTracer] Make embeddings ops take on the dtype of the weight (#22347) · c0fa2aa0
      James Reed authored
      * [HFTracer] Make embeddings ops take on the dtype of the weight
      
      * fix bug
      c0fa2aa0
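
      Conceptually, the fix makes the placeholder tensor that symbolic tracing produces for an embedding lookup inherit the embedding weight's dtype. A sketch with illustrative names, not the tracer's actual internals:

      ```python
      import torch

      # hypothetical helper illustrating the idea: the traced ("meta") output of
      # an embedding op follows the weight's dtype, so fp16/bf16 models trace
      # with consistent dtypes downstream
      def meta_embedding_output(input_ids: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
          batch, seq_len = input_ids.shape
          hidden = weight.shape[-1]
          return torch.empty(batch, seq_len, hidden, dtype=weight.dtype, device="meta")
      ```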
  3. 23 Mar, 2023 13 commits
  4. 22 Mar, 2023 11 commits
    • [deepspeed zero3] need `generate(synced_gpus=True, ...)` (#22242) · 73fdc8c5
      Stas Bekman authored
      
      
      * [deepspeed zero3] need generate(synced_gpus=True, ...)
      
      * fix
      
      * rework per Sylvain's suggestion
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      73fdc8c5
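
      A minimal sketch of the requirement this commit enforces, assuming the script is launched via `deepspeed` with a ZeRO-3 config (checkpoint name illustrative):

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tokenizer("Hello", return_tensors="pt")
      # under ZeRO-3 the weights are sharded across GPUs, so every rank must keep
      # stepping through generate() until all ranks finish; synced_gpus=True does that
      outputs = model.generate(**inputs, max_new_tokens=32, synced_gpus=True)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```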
    • Fix PipelineTests skip conditions (#22320) · 8b05ace0
      Yih-Dar authored
      
      
      * check what tests fail
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * clean up
      
      * clean up
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      8b05ace0
    • Chunkable token classification pipeline (#21771) · d62e7d88
      Luc CAILLIAU authored
      
      
      * Chunkable classification pipeline

      The TokenClassificationPipeline can now process sequences longer than 512 tokens, with any framework, model, or tokenizer. Just pass process_all=True and, optionally, a stride; without these parameters the behavior is unchanged. With a stride above 0, each token that appears in several overlapping chunks keeps only its maximum score. (A usage sketch of the final stride-only API follows this entry.)
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * update with latest black format
      
      * update black format
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * format correction
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update comments
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * Update token_classification.py
      
      Correct spaces, remove process_all and keep only stride. If stride is provided, the pipeline is applied to the whole text.
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update chunk aggregation
      
      Update the chunk aggregation strategy based on entities aggregation.
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      Remove unnecessary pop from outputs dict
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add chunking tests
      
      * correct formatting
      
      * correct formatting
      
      * correct model id for test chunking
      
      * update scores with nested simplify
      
      * Update test_pipelines_token_classification.py
      
      * Update test_pipelines_token_classification.py
      
      * update model to a tiny one
      
      * Update test_pipelines_token_classification.py
      
      * Adding smaller test for chunking.
      
      * Fixup
      
      * Update token_classification.py
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      d62e7d88
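
      A hedged sketch of the API this PR converges on (only `stride` survives, per the note above; model name and stride value are illustrative):

      ```python
      from transformers import pipeline

      ner = pipeline(
          "token-classification",
          model="dslim/bert-base-NER",   # illustrative NER model with a fast tokenizer
          aggregation_strategy="simple",
          stride=128,                    # overlap between consecutive chunks
      )

      long_text = "John works at Acme in Berlin. " * 200  # far beyond 512 tokens
      for entity in ner(long_text):
          print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
      ```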
    • docs: Resolve incorrect type typo in trainer methods (#22316) · f48d3314
      Tom Aarsen authored
      Resolve incorrect type typo in trainer methods
      f48d3314
    • Add Pix2Struct (#21400) · 0f68a7f4
      Younes Belkada authored
      
      
      * v1 all keys match
      
      * clean up
      
      * forward pass ok
      
      * add correct image transform
      
      * generate works, logits matching
      
      * clean up
      
      * more refactor
      
      * revert
      
      * revert
      
      * clean up
      
      * clean ups
      
      * clean up
      
      * refactor
      
      * refactor
      
      * fix doc
      
      * fix tokenizer test
      
      * fix toctree
      
      * revert toctree
      
      * oops
      
      * few fixes
      
      * replace to `pixel_embeds`
      
      * make fixup
      
      * test processing & feat extractor
      
      * fix some tests
      
      * more fixes
      
      * make fixup
      
      * clean up
      
      * more clean up
      
      * add a single slow test
      
      * fix test
      
      * make fixup
      
      * fix
      
      * fix authors
      
      * fix toctree
      
      * update docs
      
      * add docstring
      
      * revert change
      
      * Update src/transformers/models/pix2struct/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix tokenizer
      
      * fix processor test
      
      * fix test
      
      * make fixup
      
      * refactor
      
      * fix config
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * format
      
      * fix
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * make fixup
      
      * add docstring
      
      * fix issues
      
      * fix
      
      * fix
      
      * fix
      
      * add slow test
      
      * fix
      
      * fix
      
      * fix batched issue
      
      * fix training issues
      
      * fix ci test
      
      * fix slow test
      
      * fix conversion script
      
      * remove unneeded classes
      
      * fix slow test
      
      * fix require backends
      
      * fix masked fill
      
      * revert
      
      * fix softmax
      
      * add large models support
      
      * fix conditional generation
      
      * few fixes
      
      * add instructions
      
      * rm unneeded file
      
      * Update src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py
      
      * fix ci test
      
      * fix ci test really
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix nit
      
      * fix nits
      
      * fix image processors nits
      
      * docstring
      
      * clean up
      
      * fix nit
      
      * fix tests
      
      * docstring nit
      
      * fix reshape
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fix nit
      
      * fix repetition
      
      * refactor processor
      
      * make patch size consistent
      
      * refactor forward
      
      * fix docstring
      
      * fix max_patches issue
      
      * update docstring
      
      * update docstring
      
      * fix copied from
      
      * add skip reasons
      
      * few fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * format
      
      * fix doctests
      
      * refactor and fix
      
      * fix doc build issue
      
      * fix processor test
      
      * small fix conversion script
      
      * replace correct weights
      
      * make fixup
      
      * fix some issues
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * revert config and fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * more details
      
      * fixes
      
      * fix processor
      
      * fix processor test
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * fix processor
      
      * Update src/transformers/models/pix2struct/modeling_pix2struct.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add copied
      
      * make fixup
      
      * fix copies
      
      * update docstring
      
      * refactor
      
      * fix docstring
      
      * fix conversion script
      
      * fix vqa issue
      
      * replace to `flattened_patches`
      
      * nit
      
      * fix numpy issue
      
      * fix image processors
      
      * add batched vqa support
      
      * fix vqa conversion
      
      * make fixup
      
      * fix conversion script
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add correct docstring
      
      * update docstring
      
      * fix module level + channel dim
      
      * use `make_list_of_images`
      
      * refactor
      
      * correct docstring
      
      * fix authors
      
      * remove `data_format`
      
      * add header text test
      
      * Apply suggestions from code review
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add checkpoints
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      0f68a7f4
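
      A hedged usage sketch for the new Pix2Struct classes; the checkpoint name and image URL are assumptions. The processor emits the `flattened_patches` input the history above settles on:

      ```python
      import requests
      from PIL import Image
      from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

      ckpt = "google/pix2struct-textcaps-base"  # checkpoint name is an assumption
      processor = Pix2StructProcessor.from_pretrained(ckpt)
      model = Pix2StructForConditionalGeneration.from_pretrained(ckpt)

      url = "https://example.com/figure.png"  # placeholder URL
      image = Image.open(requests.get(url, stream=True).raw)

      inputs = processor(images=image, return_tensors="pt")  # includes `flattened_patches`
      generated = model.generate(**inputs, max_new_tokens=50)
      print(processor.decode(generated[0], skip_special_tokens=True))
      ```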
    • Beef up Llama tests (#22314) · fd3eb3e3
      Joao Gante authored
      * tmp commit
      
      * beef up llama tests
      fd3eb3e3
    • Generate: Export TF generate with a TF tokenizer (#22310) · 12febc20
      Joao Gante authored
      * Export TF generate with a TF tokenizer
      
      * remove unused lines
      12febc20
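
      A hedged sketch of the export pattern this enables; `tf_tokenizer` stands in for an in-graph tokenizer (e.g. built on tensorflow-text) and is an assumption, as is the model name:

      ```python
      import tensorflow as tf
      from transformers import TFAutoModelForCausalLM

      model = TFAutoModelForCausalLM.from_pretrained("gpt2")

      class CompleteGenerator(tf.Module):
          """Bundles in-graph tokenization and generate() into one exportable graph."""

          def __init__(self, model, tf_tokenizer):
              super().__init__()
              self.model = model
              self.tokenizer = tf_tokenizer  # assumed TF-native tokenizer

          @tf.function(input_signature=[tf.TensorSpec((None,), tf.string)])
          def serve(self, texts):
              input_ids = self.tokenizer(texts)             # tokenize inside the graph
              output_ids = self.model.generate(input_ids=input_ids, max_new_tokens=32)
              return self.tokenizer.detokenize(output_ids)  # decode inside the graph too

      # tf.saved_model.save(CompleteGenerator(model, tf_tokenizer), "exported_generate")
      ```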
    • Enforce `max_memory` for device_map strategies (#22311) · 5fd4e3c8
      Sylvain Gugger authored
      Enforce `max_memory` for device_map strategies
      5fd4e3c8
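
      A short example of the setting now being enforced (checkpoint name and limits illustrative):

      ```python
      from transformers import AutoModelForCausalLM

      model = AutoModelForCausalLM.from_pretrained(
          "gpt2",
          device_map="auto",                        # let accelerate place the weights
          max_memory={0: "10GiB", "cpu": "30GiB"},  # caps the strategy must now respect
      )
      ```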
    • Fixed bug to calculate correct xpath_sub_list in MarkupLMTokenizer (#22302) · 48bef3a7
      silentghoul-spec authored
      
      
      Fixed a bug to calculate the correct xpath_sub_list in MarkupLMTokenizer. Previously, xpath_sub_list was the same as xpath_tags_list.
      Co-authored-by: dusejat <dusejat@amazon.com>
      48bef3a7
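
      A worked illustration of the two parallel lists the fix keeps distinct (values illustrative; the exact encoding is tokenizer-internal):

      ```python
      xpath = "/html/body/div[2]/p"
      xpath_tags_list = ["html", "body", "div", "p"]  # node names along the path
      xpath_sub_list = [0, 0, 2, 0]                   # bracket subscripts (0 when absent)
      # the bug made xpath_sub_list a copy of xpath_tags_list instead of the subscripts
      ```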
    • Fix position embeddings for GPT-J and CodeGen (#22069) · 4e94c6c0
      Nick Hill authored
      * Revert "[GPT-J] add deprecation warning (#21869)"
      
      This reverts commit fb76994c.
      
      * Fix position embeddings for GPT-J and CodeGen
      
      * Address review comments from @gante
      
      * Fix "Copied from" comment referencing wrong function
      
      * Fix copy/paste mistake
      
      * Fix training path
      
      * Hopefully make torch.fx happy
      
      * Move position_ids long cast
      
      * Revert "Hopefully make torch.fx happy"
      
      This reverts commit e41a6f4cad3ff441124c7457b19cfb630d4ca025.
      
      * Changes to help with torch.fx tracing
      
      * Linter fix
      
      * Correct position_ids tensor type hint
      
      * Work-around torch.fx tracing issue
      
      * Get the changes to work with torch.fx
      
      * Address review comment from @michaelbenayoun
      
      * Another small adjustment
      
      * Add explanatory comment; small code tidyup
      4e94c6c0
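
      A hedged sketch of what the restored behavior supports: passing explicit `position_ids`, e.g. derived from the attention mask under padding. The checkpoint name is an assumption:

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      ckpt = "hf-internal-testing/tiny-random-GPTJForCausalLM"  # name is an assumption
      tokenizer = AutoTokenizer.from_pretrained(ckpt)
      model = AutoModelForCausalLM.from_pretrained(ckpt)

      inputs = tokenizer("Hello world", return_tensors="pt")
      # derive positions from the attention mask so padded slots don't advance them
      position_ids = inputs.attention_mask.long().cumsum(-1) - 1
      position_ids.clamp_(min=0)

      out = model(**inputs, position_ids=position_ids)
      print(out.logits.shape)
      ```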
    • fix: Allow only test_file in pytorch and flax summarization (#22293) · 8e6c34b3
      Connor Henderson authored
      allow only test_file in pytorch and flax summarization
      8e6c34b3