1. 01 Jun, 2020 4 commits
  2. 30 May, 2020 2 commits
  3. 29 May, 2020 11 commits
  4. 28 May, 2020 4 commits
    • Internal change · 980b27d5
      Abdullah Rashwan authored
      PiperOrigin-RevId: 313662797
    • Deprecate old customized training loop for run_classifier.py as compile/fit... · abf60128
      Hongkun Yu authored
      Deprecate the old customized training loop for run_classifier.py, as compile/fit fully satisfies the training needs and performance requirements. (A compile/fit sketch follows this entry.)
      
      PiperOrigin-RevId: 313660745
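      A minimal sketch of the compile/fit path that replaces a hand-written training loop, assuming a generic Keras classifier; the dataset objects, optimizer, loss, and hyperparameters below are illustrative assumptions, not the actual run_classifier.py code.
      
          import tensorflow as tf
      
          def train_with_compile_fit(model: tf.keras.Model,
                                     train_ds: tf.data.Dataset,
                                     eval_ds: tf.data.Dataset,
                                     epochs: int = 3):
            # compile() wires the optimizer, loss, and metrics into the model,
            # so no manual GradientTape loop is required.
            model.compile(
                optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
            # fit() runs the training loop, validation, and any callbacks.
            return model.fit(train_ds, validation_data=eval_ds, epochs=epochs)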
    • Use float32 activation in Transformer. · 94b1efc1
      Reed Wanderman-Milne authored
      Float32 is used if the model uses mixed precision with bfloat16. Float16 activations are unchanged.
      
      The motivation is that BERT with the LAMB optimizer and a gelu activation has an unstable loss when gelu is computed in bfloat16. Unfortunately, it is not easy to check whether the LAMB optimizer and gelu are used, and there may be other cases that work better with float32 activations than with bfloat16 activations, so we always compute the activation in float32 instead of bfloat16. (See the float32-activation sketch after this entry.)
      
      PiperOrigin-RevId: 313618322
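      A minimal sketch of computing an activation in float32 while the rest of the layer runs under bfloat16 mixed precision; the layer below is a hypothetical illustration, not the model-garden Transformer implementation.
      
          import tensorflow as tf
      
          class Float32GeluDense(tf.keras.layers.Layer):
            """Dense projection whose gelu activation is always computed in float32."""
      
            def __init__(self, units, **kwargs):
              super().__init__(**kwargs)
              self.dense = tf.keras.layers.Dense(units)
      
            def call(self, inputs):
              x = self.dense(inputs)                  # runs in the layer's compute dtype (e.g. bfloat16)
              x = tf.nn.gelu(tf.cast(x, tf.float32))  # up-cast so gelu stays numerically stable
              return tf.cast(x, self.compute_dtype)   # cast back for downstream layers
      
          # Usage under a bfloat16 mixed-precision policy (assumed setup):
          # tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')
          # layer = Float32GeluDense(1024)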
    • Internal change · fbec2dbe
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 313536026
  5. 27 May, 2020 4 commits
  6. 26 May, 2020 7 commits
  7. 25 May, 2020 2 commits
  8. 24 May, 2020 3 commits
  9. 23 May, 2020 2 commits
  10. 22 May, 2020 1 commit