1. 07 Jul, 2022 1 commit
  2. 06 Jul, 2022 1 commit
  3. 05 Jul, 2022 2 commits
  4. 04 Jul, 2022 3 commits
  5. 01 Jul, 2022 3 commits
    • XLA train step fixes (#17973) · d6cec458
      Matt authored
      * Copy inputs to train and test step before modifying them, as this breaks things
      
      * Add XLA tests, fix our loss functions to be XLA-compatible
      
      * make fixup
      
      * Update loss computation test to expect vector of per-sample losses
      
      * Patch loss for TFLED
      
      * Patch loss for TFAlbert
      
      * Add a tf_legacy_loss config flag that enables old loss functions
      
      * Stop using config.get() because it's not a dict
      
      * Skip loss computation test for RAG because its loss is very strange and I'm afraid to rewrite it
      
      * make fixup
      
      * Add XLA-compatible RAG loss
      
      * Fix dtype of loss mask for TFAlbert
      
      * Fix test for XLNet too because it overrides the default one
      
      * make fixup
      
      * Fix config test
      
      * No more depending on GPU NaN behaviour
      
      * Add test, avoid potential zero division
      
      * Fix test item assignment
      
      * Fix loss computation masking test
      
      * make fixup
      
      * Fix dtype bugs
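The XLA fixes in this squash list follow a recurring pattern: copy the input dict before popping labels (so the caller's data is not mutated), compute a vector of per-sample losses, and guard the masked average against division by zero. A minimal pure-Python sketch of that logic, assuming illustrative function names (the real code uses TensorFlow ops such as `tf.where` inside `train_step`):

```python
def split_train_inputs(data):
    """Shallow-copy the input dict so popping labels doesn't mutate the caller's data."""
    data = dict(data)
    labels = data.pop("labels", None)
    return data, labels


def masked_mean_loss(per_sample_losses, mask):
    """Average per-sample losses over unmasked positions.

    mask entries are 1.0 (keep) or 0.0 (ignore, e.g. padding tokens).
    Dividing by max(count, 1) avoids the zero-division case the commit
    series guards against when every position is masked.
    """
    total = sum(loss * m for loss, m in zip(per_sample_losses, mask))
    count = sum(mask)
    return total / max(count, 1.0)
```

Returning a vector of per-sample losses (rather than a pre-reduced scalar) is what the updated loss computation test expects; the masked mean is then taken explicitly as above.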
    • 569b679a
    • skip some gpt_neox tests that require 80G RAM (#17923) · 14fb8a63
      Yih-Dar authored
      * skip some gpt_neox tests that require 80G RAM
      
      * remove tests
      
      * fix quality
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
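Skipping hardware-hungry tests like the gpt_neox ones above is typically done with a skip decorator, so the rest of the suite still runs. A sketch using the stdlib `unittest.skip` (transformers actually uses its own `require_*`/`slow` decorators, and the test names here are hypothetical):

```python
import unittest


class GPTNeoXModelTest(unittest.TestCase):
    @unittest.skip("requires ~80GB of RAM")
    def test_full_model_forward(self):
        # Never executes: the decorator marks it as skipped before setUp runs.
        self.fail("should have been skipped")

    def test_config_only(self):
        # Lightweight tests keep running normally.
        self.assertTrue(True)
```

Running the suite reports the RAM-bound test as skipped rather than failed, which keeps CI green on machines that cannot load the model.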
  6. 30 Jun, 2022 1 commit
  7. 29 Jun, 2022 6 commits
  8. 28 Jun, 2022 1 commit
  9. 27 Jun, 2022 3 commits
    • fix (#17890) · 9a345384
      Yih-Dar authored
      
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Add a TF in-graph tokenizer for BERT (#17701) · ee0d001d
      Matt authored
      * Add a TF in-graph tokenizer for BERT
      
      * Add from_pretrained
      
      * Add proper truncation, option handling to match other tokenizers
      
      * Add proper imports and guards
      
      * Add test, fix all the bugs exposed by said test
      
      * Fix truncation of paired texts in graph mode, more test updates
      
      * Small fixes, add a (very careful) test for savedmodel
      
      * Add tensorflow-text dependency, make fixup
      
      * Update documentation
      
      * Update documentation
      
      * make fixup
      
      * Slight changes to tests
      
      * Add some docstring examples
      
      * Update tests
      
      * Update tests and add proper lowercasing/normalization
      
      * make fixup
      
      * Add docstring for padding!
      
      * Mark slow tests
      
      * make fixup
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * make fixup
      
      * Properly handle tensorflow-text dummies
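The two "Fall back to BertTokenizerFast if BertTokenizer is unavailable" commits implement a preference-order fallback: try the preferred backend first, and only use the alternative if the first cannot be imported. The PR does this inside the TF tokenizer's `from_pretrained`; here is a generic stdlib sketch of the same pattern (the helper name and module lists are illustrative only):

```python
import importlib


def load_first_available(module_candidates):
    """Try each module name in preference order; return the first that imports.

    Mirrors the fallback pattern in the commits above: prefer one backend,
    but silently fall back to the next candidate on ImportError.
    """
    for name in module_candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {module_candidates} could be imported")
```

The same shape also covers the "properly handle tensorflow-text dummies" commit: when an optional dependency such as tensorflow-text is absent, import of the real class fails and a placeholder (or alternative) is used instead.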
    • 401fcca6
  10. 24 Jun, 2022 7 commits
  11. 23 Jun, 2022 2 commits
  12. 21 Jun, 2022 4 commits
  13. 20 Jun, 2022 3 commits
  14. 14 Jun, 2022 3 commits