"tests/models/flaubert/test_modeling_flaubert.py" did not exist on "52040517b8abc55cdb4ba2f2549164a91acb44cc"
  1. 13 Dec, 2022 1 commit
  2. 07 Dec, 2022 1 commit
  3. 15 Nov, 2022 1 commit
    • Add Switch transformers (#19323) · 163ac3d3
      Younes Belkada authored
      
      
      * first commit
      
      * add more comments
      
      * add router v1
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      * v0 routers
      
      * added more router
      
      - Implemented `ExpertsChooseMaskedRouter`
      
      - added tests
      - 2 more routers to implement
      
      * last router
      
      * improved docstring
      
      - completed the docstring in `router.py`
      - added more args in the config
      
      * v0 sparse mlp
      
      * replace wrong naming
      
      * forward pass run
      
      * update MOE layer
      
      * small router update
      
      * fixup
      
      * consistency
      
      * remove scatter router
      
      * remove abstract layer
      
      * update test and model for integration testing
      
      * v1 conversion
      
      * update
      
      * hardcode hack
      
      * all keys match
      
      * add gin conversion, without additional libraries
      
      * update conversion script
      
      * delete router file
      
      * update tests wrt router deletion
      
      * fix router issues
      
      * update expert code
      
      * update, logits match, code needs refactoring
      
      * Refactor code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * add generate tests
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      
      * add support for router loss
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fix forward error
      
      * refactor a bit
      
      * remove `FlaxSwitchTransformers` modules
      
      * more tests pass
      
      * Update code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fixup
      
      * fix tests
      
      * fix doc
      
      * fix doc + tokenization
      
      * fix tokenizer test
      
      * fix test
      
      * fix loss output
      
      * update code for backward pass
      
      * add loss support
      
      * update documentation
      
      * fix documentation, clean tokenizer
      
      * more doc fix, cleanup example_switch
      
      * fix failing test
      
      * fix test
      
      * fix test
      
      * fix loss issue
      
      * move layer
      
      * update doc and fix router capacity usage
      
      * fixup
      
      * add sparse mlp index for documentation on hub
      
      * fixup
      
      * test sparse mix architecture
      
      * Apply suggestions from code review
      
      * Update docs/source/en/model_doc/switch_transformers.mdx
      
      * fixup on update
      
      * fix tests
      
      * fix another test
      
      * attempt fix
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * try
      
      * all tests pass
      
      * fix jitter noise
      
      * Apply suggestions from code review
      
      * doc tests pass
      
      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove assert
      
      * change config order
      
      * fix readme japanese
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * remove parallelizable tests + add one liners
      
      * remove ONNX config
      
      * fix nits
      
      - add `T5Tokenizer` in auto mapping
      - remove `Switch Transformers` from ONNX supported models
      
      * remove `_get_router`
      
      * remove asserts
      
      * add check in test for `router_dtype`
      
      * add `SwitchTransformersConfig` in `run_pipeline_test`
      
      * Update tests/pipelines/test_pipelines_summarization.py
      
      * add huge model conversion script
      
      * fix slow tests
      
      - add better casting for `Linear8bitLt`
      - remove `torchscript` tests
      
      * add make dir
      
      * style on new script
      
      * fix nits
      
      - doctest
      - remove `_keys_to_ignore_on_load_unexpected`
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      
      * add google as authors
      
      * fix year
      
      * remove last `assert` statements
      
      * standardize vertical spaces
      
      * fix failing import
      
      * fix another failing test
      
      * Remove strange `àuthorized_keys`
      
      * removing todo and padding that is never used
      Co-authored-by: default avatarArthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: default avatarybelkada <younes@huggingface.co>
      Co-authored-by: default avatarYounes Belkada <younesbelkada@users.noreply.github.com>
      Co-authored-by: default avatarArthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: default avatarArthur Zucker <arthur@huggingface.co>
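      Several of the commits above ("add router v1", "last router", "v0 sparse mlp") build up the top-1 routing at the heart of Switch Transformers: each token is sent to exactly one expert, chosen by a softmax over router logits, and that expert's output is scaled by the routing probability. A minimal NumPy sketch of the idea (an illustration only, not the library's actual router class):

      ```python
      import numpy as np

      def top1_route(router_logits):
          """Route each token to its single best expert (top-1 / 'switch' routing)."""
          # softmax over the expert dimension to get routing probabilities
          shifted = router_logits - router_logits.max(axis=-1, keepdims=True)
          probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
          expert_index = probs.argmax(axis=-1)                      # one expert per token
          router_prob = probs[np.arange(len(probs)), expert_index]  # its probability
          return expert_index, router_prob

      # three tokens, two experts (toy numbers)
      logits = np.array([[2.0, 0.5], [0.1, 1.0], [3.0, 2.9]])
      idx, p = top1_route(logits)
      # idx is [0, 1, 0]; in the sparse MLP each token's expert output is scaled by p
      ```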
  4. 07 Oct, 2022 1 commit
    • Rework pipeline tests (#19366) · 9ac586b3
      Sylvain Gugger authored
      * Rework pipeline tests
      
      * Try to fix Flax tests
      
      * Try to put it before
      
      * Use a new decorator instead
      
      * Remove ignore marker since it doesn't work
      
      * Filter pipeline tests
      
      * Woopsie
      
      * Use the filtered list
      
      * Clean up and fake modif
      
      * Remove init
      
      * Revert fake modif
  5. 13 Jun, 2022 1 commit
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
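      The local ("fixed block") attention that several commits above implement restricts each token to its own block and the adjacent ones. A rough sketch of that masking pattern (a hypothetical helper for illustration, not the model code):

      ```python
      import numpy as np

      def local_attention_mask(seq_len, block_len):
          # bucket positions into fixed-size blocks, then allow attention only
          # between tokens whose blocks are identical or adjacent
          block_ids = np.arange(seq_len) // block_len
          return np.abs(block_ids[:, None] - block_ids[None, :]) <= 1

      mask = local_attention_mask(seq_len=6, block_len=2)
      # token 0 (block 0) may attend to token 3 (block 1) but not to token 4 (block 2)
      ```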
  6. 12 May, 2022 1 commit
  7. 23 Feb, 2022 1 commit
  8. 29 Oct, 2021 1 commit
  9. 26 Aug, 2021 1 commit
  10. 14 Jun, 2021 1 commit
  11. 18 May, 2021 1 commit
  12. 05 Mar, 2021 1 commit
  13. 10 Feb, 2021 1 commit
    • remove adjust_logits_during_generation method (#10087) · c130e67d
      Suraj Patil authored
      * add forced logits processors
      
      * delete adjust_logits method
      
      * add forced_eos_token_id argument in config
      
      * add tests for forced logits processors
      
      * update gen utils tests
      
      * add forced option to tf generate
      
      * remove adjust_logits method from tf models
      
      * update adjust_logits for marian
      
      * delete _force_token_id_to_be_generated method
      
      * style
      
      * import warnings
      
      * pass max_length to _get_logits_processor
      
      * set forced_eos_token_id to None
      
      * set forced attributes in conf utils
      
      * typo
      
      * fix rag generate
      
      * add forced_eos_token_id in rag config
      
      * remove force_bos_token_to_be_generated from BartConfig
      
      * remove _force_token_ids_generation from FSMT
      
      * nit
      
      * fix negative constant
      
      * apply suggestions from code review
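      The forced-EOS logits processor these commits introduce replaces the old `adjust_logits_during_generation` hook: at the final generation step, every score except the EOS token's is driven to -inf. A toy sketch of that behaviour (plain Python, illustrative names only):

      ```python
      import math

      def force_eos(scores, cur_len, max_length, eos_token_id):
          # at the final position, mask every token except EOS so that
          # generation is guaranteed to end with eos_token_id
          if cur_len == max_length - 1:
              scores = [-math.inf] * len(scores)
              scores[eos_token_id] = 0.0
          return scores

      out = force_eos([1.2, 0.3, -0.5, 2.0], cur_len=9, max_length=10, eos_token_id=2)
      # out keeps only index 2 finite; on earlier steps scores pass through unchanged
      ```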
  14. 11 Jan, 2021 1 commit
    • Enable TruncationStrategy override for pipelines (#9432) · d20e9c72
      Nicolas Patry authored
      * Enable TruncationStrategy override for pipelines
      
      * Update isort.
      
      * Fixing test
      
      * Fixing text_generation pipeline.
      
      * Using same DummyTok as other PR for easier merge later.
      
      * Some more import guards.
      
      * Remove bogus file.
      
      * Do not pass `generate_kwargs` to `_parse_and_tokenize`.
      @patrickvonplaten
      
      * Removed DummyTok.
      
      * Doc quality.
  15. 07 Dec, 2020 1 commit
  16. 23 Oct, 2020 1 commit