1. 05 Aug, 2021 2 commits
  2. 24 Jul, 2021 1 commit
  3. 23 Jul, 2021 1 commit
  4. 21 Jul, 2021 1 commit
  5. 20 Jul, 2021 1 commit
  6. 15 Jul, 2021 1 commit
  7. 13 Jul, 2021 1 commit
  8. 09 Jul, 2021 1 commit
    • With float16, always use LossScaleOptimizer. · be3575f5
      Reed Wanderman-Milne authored
      Before, it was easy to forget to set runtime.loss_scale, which always had to be set when mixed precision was used; otherwise the model would converge to worse accuracy. Now, enabling mixed precision only requires setting runtime.mixed_precision_dtype=float16, and a LossScaleOptimizer is always used with float16 (a sketch of the resulting pattern follows this entry).
      
      PiperOrigin-RevId: 383767033
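      The following is a minimal sketch of the behavior described above, not the Model Garden code itself: once the compute dtype is float16, the optimizer is always wrapped in a tf.keras.mixed_precision.LossScaleOptimizer, with no separate loss-scale setting required. The helper name maybe_wrap_optimizer is illustrative.

        import tensorflow as tf

        def maybe_wrap_optimizer(optimizer, mixed_precision_dtype):
            # Hypothetical helper: with float16, always apply loss scaling.
            # Dynamic loss scaling is the default and needs no extra configuration.
            if mixed_precision_dtype == "float16":
                optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
            return optimizer

        optimizer = maybe_wrap_optimizer(tf.keras.optimizers.SGD(0.1), "float16")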
  9. 24 Jun, 2021 1 commit
  10. 23 Jun, 2021 1 commit
    • Improve error message when certain flags are not specified. · 8b47c484
      Reed Wanderman-Milne authored
      In nlp/train.py and vision/beta/train.py, certain flags are now marked as required. Additionally, certain functions raise improved error messages if a necessary flag is not specified, as a fallback in case a file calling define_flags() does not mark the necessary flags as required. Previously, if any of these flags were missing, training would crash with a cryptic error message, making it hard to tell what went wrong (see the sketch after this entry).
      
      In a subsequent change, I will mark flags as required in more files which call define_flags().
      
      PiperOrigin-RevId: 381066985
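      A hedged sketch of the pattern described above, using absl.flags; the flag names below (experiment, mode, model_dir) are illustrative assumptions rather than an exact copy of the Model Garden's define_flags().

        from absl import app, flags

        FLAGS = flags.FLAGS
        flags.DEFINE_string("experiment", None, "Name of the experiment to run.")
        flags.DEFINE_string("mode", None, "One of 'train', 'eval', 'train_and_eval'.")
        flags.DEFINE_string("model_dir", None, "Directory for checkpoints and logs.")

        # Marking the flags as required makes absl fail fast with a clear message
        # instead of crashing later with a cryptic error.
        flags.mark_flags_as_required(["experiment", "mode", "model_dir"])

        def main(_):
            print(FLAGS.experiment, FLAGS.mode, FLAGS.model_dir)

        if __name__ == "__main__":
            app.run(main)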
  11. 22 Jun, 2021 1 commit
  12. 20 Jun, 2021 1 commit
  13. 16 Jun, 2021 2 commits
  14. 11 Jun, 2021 2 commits
  15. 01 Jun, 2021 2 commits
  16. 28 May, 2021 2 commits
  17. 17 May, 2021 2 commits
  18. 14 May, 2021 2 commits
  19. 13 May, 2021 2 commits
  20. 06 May, 2021 2 commits
  21. 16 Apr, 2021 2 commits
  22. 13 Apr, 2021 4 commits
  23. 12 Apr, 2021 2 commits
    • Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the nonexperimental API is used. For all such callers, the loss_scale is later used to create a LossScaleOptimizer explicitly, so removing the argument has no impact (a sketch follows this date's entries).
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior, and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
    • Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the nonexperimental API is used. For all such callers, the loss_scale is later used to create a LossScaleOptimizer explicitly, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior, and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
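      A minimal sketch of the change described in the two entries above, written against the plain Keras API rather than the Model Garden's set_mixed_precision_policy() helper: the nonexperimental policy accepts no loss_scale argument, and loss scaling is configured separately by wrapping the optimizer explicitly.

        import tensorflow as tf

        # Nonexperimental API: the policy carries no loss_scale parameter.
        tf.keras.mixed_precision.set_global_policy("mixed_float16")

        optimizer = tf.keras.optimizers.SGD(0.1)
        # Loss scaling is applied explicitly instead, which is why dropping the
        # loss_scale argument from the policy call has no impact on behavior.
        optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)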
  24. 05 Apr, 2021 3 commits
    • Use nonexperimental LSO API in base_task.py. · 2ca3440e
      Reed Wanderman-Milne authored
      This shouldn't break any official models, since I changed all LossScaleOptimizer isinstance checks to use the nonexperimental version; the experimental LSO subclasses the nonexperimental LSO, so changing the isinstance checks in this way is always safe (see the sketch after this date's entries).
      
      PiperOrigin-RevId: 366891847
    • Use nonexperimental LSO API in base_task.py. · cc12499b
      Reed Wanderman-Milne authored
      This shouldn't break any official models, since I changed all LossScaleOptimizer isinstance checks to use the nonexperimental version (the experimental LSO subclasses the nonexperimental LSO, so changing isinstance checks in this way is always safe).
      
      PiperOrigin-RevId: 366891847
    • Internal change · cdbe340e
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 366879385
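      A hedged illustration of why the isinstance change in the base_task.py entries above is safe, assuming the TF 2.x Keras classes: the deprecated experimental LossScaleOptimizer subclasses the nonexperimental one, so a check against the nonexperimental class matches both. The helper name uses_loss_scaling is illustrative.

        import tensorflow as tf

        def uses_loss_scaling(optimizer):
            # Hypothetical helper: matches tf.keras.mixed_precision.LossScaleOptimizer
            # as well as the deprecated experimental subclass, since the latter
            # inherits from the former.
            return isinstance(optimizer, tf.keras.mixed_precision.LossScaleOptimizer)

        opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD(0.1))
        print(uses_loss_scaling(opt))  # True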