1. 05 Jul, 2022 2 commits
  2. 04 Jul, 2022 3 commits
  3. 01 Jul, 2022 3 commits
    • XLA train step fixes (#17973) · d6cec458
      Matt authored
      * Copy inputs in the train and test steps before modifying them, since modifying them in place breaks things
      
      * Add XLA tests, fix our loss functions to be XLA-compatible
      
      * make fixup
      
      * Update loss computation test to expect vector of per-sample losses
      
      * Patch loss for TFLED
      
      * Patch loss for TFAlbert
      
      * Add a tf_legacy_loss config flag that enables old loss functions
      
      * Stop using config.get() because it's not a dict
      
      * Skip loss computation test for RAG because its loss is very strange and I'm afraid to rewrite it
      
      * make fixup
      
      * Add XLA-compatible RAG loss
      
      * Fix dtype of loss mask for TFAlbert
      
      * Fix test for XLNet too because it overrides the default one
      
      * make fixup
      
      * Fix config test
      
      * Stop depending on GPU NaN behaviour
      
      * Add test, avoid potential zero division
      
      * Fix test item assignment
      
      * Fix loss computation masking test
      
      * make fixup
      
      * Fix dtype bugs
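      A minimal usage sketch (not part of the PR itself) of how these changes surface to users: the default TF loss functions now return per-sample losses and are XLA-compatible, while the new `tf_legacy_loss` config flag opts back into the old behaviour. The checkpoint name and the Keras `jit_compile` option are assumptions here, not taken from the commit.

      ```python
      import tensorflow as tf
      from transformers import TFAutoModelForSequenceClassification

      # Default behaviour after this PR: XLA-compatible loss returning a vector of
      # per-sample losses, so the train/test steps can be compiled with XLA.
      model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
      model.compile(optimizer="adam", jit_compile=True)  # XLA-compiled train/test steps

      # Opt back into the pre-existing loss computation via the new config flag.
      legacy_model = TFAutoModelForSequenceClassification.from_pretrained(
          "bert-base-uncased", tf_legacy_loss=True
      )
      ```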
    • 569b679a
    • skip some gpt_neox tests that require 80G RAM (#17923) · 14fb8a63
      Yih-Dar authored
      
      
      * skip some gpt_neox tests that require 80G RAM
      
      * remove tests
      
      * fix quality
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
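      The skipped tests themselves are not reproduced here; below is a hedged sketch of the usual pattern for skipping an oversized test in the suite (class name, test name, and reason are illustrative, not from the diff).

      ```python
      import unittest

      class GPTNeoXModelIntegrationTest(unittest.TestCase):
          # Skipping keeps the test in the suite but prevents it from running on CI
          # hardware that cannot satisfy its memory requirements.
          @unittest.skip("Requires roughly 80GB of RAM, which CI runners do not have.")
          def test_model_from_pretrained(self):
              ...
      ```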
  4. 30 Jun, 2022 1 commit
  5. 29 Jun, 2022 6 commits
  6. 28 Jun, 2022 1 commit
  7. 27 Jun, 2022 3 commits
    • fix (#17890) · 9a345384
      Yih-Dar authored
      
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Add a TF in-graph tokenizer for BERT (#17701) · ee0d001d
      Matt authored
      * Add a TF in-graph tokenizer for BERT
      
      * Add from_pretrained
      
      * Add proper truncation, option handling to match other tokenizers
      
      * Add proper imports and guards
      
      * Add test, fix all the bugs exposed by said test
      
      * Fix truncation of paired texts in graph mode, more test updates
      
      * Small fixes, add a (very careful) test for savedmodel
      
      * Add tensorflow-text dependency, make fixup
      
      * Update documentation
      
      * Update documentation
      
      * make fixup
      
      * Slight changes to tests
      
      * Add some docstring examples
      
      * Update tests
      
      * Update tests and add proper lowercasing/normalization
      
      * make fixup
      
      * Add docstring for padding!
      
      * Mark slow tests
      
      * make fixup
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * make fixup
      
      * Properly handle tensorflow-text dummies
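      A short usage sketch of the new in-graph tokenizer (assuming it is exposed as `TFBertTokenizer`, that tensorflow-text is installed, and using an illustrative checkpoint name):

      ```python
      import tensorflow as tf
      from transformers import TFAutoModel, TFBertTokenizer

      # Tokenization runs as TensorFlow ops, so it can live inside a tf.function or be
      # exported as part of a SavedModel instead of being a separate Python preprocessing step.
      tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
      model = TFAutoModel.from_pretrained("bert-base-uncased")

      tokenized = tokenizer(tf.constant(["Hello TensorFlow!"]))  # dict of integer tensors
      outputs = model(**tokenized)
      print(outputs.last_hidden_state.shape)
      ```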
    • 401fcca6
  8. 24 Jun, 2022 7 commits
  9. 23 Jun, 2022 2 commits
  10. 21 Jun, 2022 4 commits
  11. 20 Jun, 2022 3 commits
  12. 14 Jun, 2022 4 commits
  13. 13 Jun, 2022 1 commit
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix PT-Flax equivalence, though not convinced of correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Ensure the side relative bias is always used
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
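      A brief usage sketch for the newly added model (the checkpoint name is assumed to be one of the converted TGlobal checkpoints and is not taken from the commit; the base checkpoint is not fine-tuned for summarization, so the generation here is only a smoke test):

      ```python
      from transformers import AutoTokenizer, LongT5ForConditionalGeneration

      tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
      model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

      # LongT5's local / transient-global attention is designed for long inputs,
      # so feed it a document well beyond typical 512-token limits.
      long_text = "summarize: " + ("Long documents go here. " * 400)
      inputs = tokenizer(long_text, return_tensors="pt")
      summary_ids = model.generate(**inputs, max_length=64)
      print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
      ```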