1. 17 Nov, 2021 1 commit
  2. 12 Apr, 2021 2 commits
    • Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, since it cannot be passed when the non-experimental API is used. In every such caller, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no effect, as its behavior is nearly identical and all isinstance checks within the official models test for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
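The change above separates the dtype policy from loss scaling: the policy controls compute and variable dtypes, while the loss scale lives on the optimizer that is created explicitly. A minimal pure-Python sketch of the loss-scaling mechanics this optimizer provides (an illustration only, not the TensorFlow implementation; the class and method names here are made up):

```python
# Sketch of fixed loss scaling: the loss is multiplied by a scale before
# backprop so small float16 gradients do not underflow, and the resulting
# gradients are divided by the same scale before the weight update.
class SimpleLossScaleOptimizer:
    def __init__(self, loss_scale=1024.0):
        self.loss_scale = loss_scale

    def scale_loss(self, loss):
        # Applied before gradients are computed.
        return loss * self.loss_scale

    def unscale_grads(self, grads):
        # Applied to the computed gradients, before the update step.
        return [g / self.loss_scale for g in grads]


opt = SimpleLossScaleOptimizer(loss_scale=128.0)
scaled = opt.scale_loss(0.5)                 # loss scaled up by 128x
grads = opt.unscale_grads([128.0, 256.0])    # gradients scaled back down
```

In the real API the scale is typically dynamic, growing while gradients stay finite and shrinking on overflow; this sketch keeps it fixed for clarity.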
    • Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, since it cannot be passed when the non-experimental API is used. In every such caller, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no effect, as its behavior is nearly identical and all isinstance checks within the official models test for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
  3. 10 Mar, 2021 2 commits
  4. 14 Nov, 2020 2 commits
  5. 26 Oct, 2020 2 commits
  6. 30 Sep, 2020 2 commits
  7. 29 Aug, 2020 2 commits
  8. 28 Aug, 2020 2 commits
  9. 27 Aug, 2020 2 commits
  10. 12 Aug, 2020 2 commits
  11. 19 Jun, 2020 1 commit
  12. 26 May, 2020 1 commit
  13. 19 May, 2020 1 commit
  14. 04 May, 2020 1 commit
  15. 17 Apr, 2020 1 commit
  16. 10 Apr, 2020 1 commit
  17. 07 Apr, 2020 1 commit
  18. 27 Mar, 2020 2 commits
  19. 26 Mar, 2020 1 commit
  20. 24 Mar, 2020 1 commit
  21. 17 Mar, 2020 3 commits
  22. 06 Mar, 2020 1 commit
    • Temporarily disable explicit allreduce in BERT SQuAD · 11ccb99e
      Zongwei Zhou authored
      In BERT SQuAD, disable explicit allreduce for now to keep the original clip_by_global_norm math. With explicit allreduce, the gradients are still scaled before the allreduce step, so even if clip_by_global_norm were moved before allreduce (as in TF1 and pre-2.2 TF2), it would operate on scaled gradients and the math would change. With explicit allreduce, clip_by_global_norm is therefore better placed after allreduce.
      
      PiperOrigin-RevId: 299278082
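The ordering issue above can be seen numerically: global-norm clipping is a no-op when the global norm is below the threshold, so clipping scaled gradients and then unscaling is not equivalent to clipping the true (post-allreduce) gradients. A small pure-Python sketch of the effect (illustrative only, not TensorFlow's implementation):

```python
import math

def clip_by_global_norm(grads, clip_norm):
    # Scale all gradients down so their combined (global) L2 norm does
    # not exceed clip_norm; no-op when already within the bound.
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        return grads
    return [g * clip_norm / global_norm for g in grads]

clip_norm = 1.0
true_grads = [0.3, 0.4]   # global norm 0.5, already within clip_norm
scale = 4.0               # pre-allreduce gradients are scaled, e.g. 4x

# Correct order: allreduce (unscale) first, then clip.
# The norm is under the threshold, so the gradients pass through unchanged.
clipped_after = clip_by_global_norm(true_grads, clip_norm)

# Wrong order: clip the scaled gradients, then unscale.
# The scaled norm (2.0) exceeds clip_norm, so clipping fires and the
# final gradients come out roughly halved relative to the true ones.
clipped_before = [g / scale
                  for g in clip_by_global_norm([g * scale for g in true_grads],
                                               clip_norm)]
```

Because the clip threshold is fixed while the gradients are scaled, the two orderings disagree exactly in the regime where the true norm is under the threshold but the scaled norm is over it.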
  23. 05 Mar, 2020 1 commit
  24. 26 Feb, 2020 1 commit
  25. 25 Feb, 2020 1 commit
  26. 24 Feb, 2020 1 commit
  27. 23 Feb, 2020 1 commit
  28. 20 Feb, 2020 1 commit