"...internlm2-math-7b.git" did not exist on "9067e0a4add41726cc9926579929ef64faf7a8c5"
  1. 13 Jun, 2022 1 commit
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Always use side relative bias
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
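Several commits above touch LongT5's transient-global (TGlobal) attention and its `_make_global_fixed_block_ids` helper. As background, here is a minimal sketch (not the actual Hugging Face implementation) of the fixed-block idea: every token position is assigned to a fixed-size block, and each block contributes one global summary token that all tokens can attend to.

```python
# Hedged sketch of the fixed-block-id idea behind LongT5's TGlobal
# attention. The real _make_global_fixed_block_ids also handles masking
# and orphan tokens; this only shows the core position -> block mapping.

def make_global_fixed_block_ids(seq_len: int, block_size: int) -> list[int]:
    """Map each token position to the id of its fixed-size block."""
    return [pos // block_size for pos in range(seq_len)]

def num_global_tokens(seq_len: int, block_size: int) -> int:
    """One global (summary) token per block, rounding up for a partial block."""
    return -(-seq_len // block_size)  # ceiling division

block_ids = make_global_fixed_block_ids(seq_len=10, block_size=4)
# Tokens 0-3 fall in block 0, tokens 4-7 in block 1, tokens 8-9 in block 2,
# so a 10-token sequence with block_size=4 gets 3 global tokens.
```

The design point of TGlobal attention is that each token attends locally within a window plus to these few per-block global tokens, keeping cost linear in sequence length.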
  2. 11 Dec, 2020 1 commit
  3. 20 Jun, 2020 1 commit
    • Add BERT Loses Patience (Patience-based Early Exit) (#5078) · 2fd28d43
      Kevin Canwen Xu authored
      * Add BERT Loses Patience (Patience-based Early Exit)
      
      * update model archive
      
      * update format
      
      * sort import
      
      * flake8
      
      * Add results
      
      * full results
      
      * align the table
      
      * refactor to inherit
      
      * default per gpu eval = 1
      
      * Formatting
      
      * Formatting
      
      * isort
      
      * modify readme
      
      * Add check
      
      * Fix format
      
      * Fix format
      
      * Doc strings
      
      * ALBERT & BERT for sequence classification don't inherit from the original anymore
      
      * Remove incorrect comments
      
      * Remove incorrect comments
      
      * Remove incorrect comments
      
      * Sync up with new code
      
      * Sync up with new code
      
      * Add a test
      
      * Add a test
      
      * Add a test
      
      * Add a test
      
      * Add a test
      
      * Add a test
      
      * Finishing up!
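The commit above adds Patience-based Early Exit (PABEE) from "BERT Loses Patience": an internal classifier is attached to every layer, and inference stops once enough consecutive classifiers agree. A minimal sketch of just the stopping rule, assuming per-layer predictions are already available (the real implementation lives in the transformers example scripts):

```python
# Hedged sketch of the PABEE stopping rule: exit the layer-by-layer
# forward pass once `patience` consecutive internal classifiers produce
# the same prediction; otherwise fall back to the final layer's output.

def patience_early_exit(layer_predictions: list[int], patience: int) -> tuple[int, int]:
    """Return (final_prediction, layers_used) under the patience rule."""
    streak = 0
    prev = None
    for used, pred in enumerate(layer_predictions, start=1):
        if pred == prev:
            streak += 1          # same prediction as the previous layer
        else:
            streak = 1           # prediction changed: restart the streak
            prev = pred
        if streak >= patience:
            return pred, used    # enough consecutive agreement: exit early
    return layer_predictions[-1], len(layer_predictions)  # ran all layers
```

With `patience=3` and per-layer predictions `[2, 0, 1, 1, 1, 1]`, the model exits after 5 of 6 layers, which is the speedup the paper targets.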
  4. 03 Mar, 2020 1 commit
    • Summarization Examples: add Bart CNN Evaluation (#3082) · 5b396457
      Sam Shleifer authored
      * Rename and improve example
      
      * Add test
      
      * slightly faster test
      
      * style
      
      * This probably breaks remy
      
      * shorter test string
      
      * no slow
      
      * newdir structure
      
      * New tree
      
      * Style
      
      * shorter
      
      * docs
      
      * clean
      
      * Attempt future import
      
      * more import hax
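The commit above adds an evaluation script for BART summarization on CNN/DailyMail. The commit messages don't show the metric code itself; summarization quality is conventionally reported with ROUGE, so here is a hedged sketch of a ROUGE-1-style recall score, not the actual metric code from the example scripts:

```python
# Hedged sketch of a ROUGE-1-style recall score: the fraction of reference
# unigrams that also appear in the candidate summary. Real ROUGE toolkits
# add stemming, multiple references, and precision/F1 variants.
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Clipped unigram overlap with the reference, as a recall fraction."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clip each token's count by its count in the candidate.
    overlap = sum(min(n, cand_counts[tok]) for tok, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0
```

For example, `rouge1_recall("the cat sat", "the cat ran")` matches two of three reference unigrams, giving 2/3.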
  5. 06 Jan, 2020 2 commits
  6. 22 Dec, 2019 1 commit
  7. 26 Sep, 2019 1 commit
  8. 05 Jul, 2019 1 commit
  9. 02 Jul, 2019 1 commit