  1. 13 Jul, 2021 1 commit
  2. 09 Jul, 2021 1 commit
    • With float16, always use LossScaleOptimizer. · 21286f77
      Reed Wanderman-Milne authored
      Before, it was too easy to accidentally forget to set runtime.loss_scale, which always had to be set when mixed precision was used; otherwise the model would converge to worse accuracy. Now, all that needs to be done to use mixed precision is to set runtime.mixed_precision_dtype=float16; a LossScaleOptimizer is then always used, as sketched below.
      
      PiperOrigin-RevId: 383767033
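      A minimal sketch of the guarantee this change introduces; the build_optimizer helper below is hypothetical, not the Model Garden API:

        import tensorflow as tf

        def build_optimizer(mixed_precision_dtype):
            # Hypothetical helper: requesting float16 always wraps the optimizer
            # in a LossScaleOptimizer, so there is no separate runtime.loss_scale
            # setting left to forget.
            optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
            if mixed_precision_dtype == 'float16':
                # LossScaleOptimizer defaults to dynamic loss scaling.
                optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
            return optimizer

        optimizer = build_optimizer('float16')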
  3. 24 Jun, 2021 1 commit
  4. 23 Jun, 2021 1 commit
    • Improve error message when certain flags are not specified. · 0a9026e4
      Reed Wanderman-Milne authored
      In nlp/train.py and vision/beta/train.py, certain flags are marked as required. Additionally, in certain functions, error messages are improved when a necessary flag is not specified, as a fallback in case a file calling define_flags() does not mark the necessary flags as required. Previously, if any of these flags was not specified, the program would crash with a cryptic error message, making it hard to tell what went wrong (see the sketch after this entry).
      
      In a subsequent change, I will mark flags as required in more files which call define_flags().
      
      PiperOrigin-RevId: 381066985
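      A hedged sketch of the pattern described above, using absl.flags; the flag names here are assumptions for illustration:

        from absl import app, flags

        flags.DEFINE_string('experiment', None, 'Name of the experiment to run.')
        flags.DEFINE_string('model_dir', None, 'Directory for checkpoints.')
        FLAGS = flags.FLAGS

        def main(_):
            # Fallback with a clear message, in case a caller of define_flags()
            # did not mark the flag as required.
            if FLAGS.experiment is None:
                raise ValueError('The flag --experiment must be specified.')

        if __name__ == '__main__':
            # Marking flags as required makes absl fail fast with a readable
            # error instead of a cryptic crash later on.
            flags.mark_flags_as_required(['experiment', 'model_dir'])
            app.run(main)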
  5. 22 Jun, 2021 1 commit
  6. 20 Jun, 2021 1 commit
  7. 16 Jun, 2021 1 commit
  8. 11 Jun, 2021 1 commit
  9. 01 Jun, 2021 1 commit
  10. 28 May, 2021 1 commit
  11. 17 May, 2021 1 commit
  12. 14 May, 2021 1 commit
  13. 13 May, 2021 1 commit
  14. 06 May, 2021 1 commit
  15. 16 Apr, 2021 1 commit
  16. 13 Apr, 2021 2 commits
  17. 12 Apr, 2021 1 commit
    • Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer likewise has no effect, as it has near-identical behavior, and all isinstance checks within the official models already check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
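      A before/after sketch of the underlying Keras API migration; set_mixed_precision_policy() itself is a Model Garden utility around these calls:

        import tensorflow as tf

        # Old, experimental API: a loss scale could be attached to the policy.
        #   policy = tf.keras.mixed_precision.experimental.Policy(
        #       'mixed_float16', loss_scale='dynamic')
        #   tf.keras.mixed_precision.experimental.set_policy(policy)

        # Nonexperimental API: the policy carries no loss_scale, so loss scaling
        # is configured by explicitly wrapping the optimizer instead.
        tf.keras.mixed_precision.set_global_policy('mixed_float16')
        optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
            tf.keras.optimizers.Adam())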
  18. 05 Apr, 2021 3 commits
  19. 02 Apr, 2021 1 commit
  20. 01 Apr, 2021 1 commit
  21. 24 Mar, 2021 1 commit
  22. 19 Mar, 2021 1 commit
  23. 18 Mar, 2021 1 commit
  24. 17 Mar, 2021 1 commit
  25. 13 Mar, 2021 1 commit
  26. 03 Mar, 2021 2 commits
  27. 02 Mar, 2021 1 commit
  28. 01 Mar, 2021 3 commits
  29. 22 Feb, 2021 1 commit
  30. 19 Feb, 2021 1 commit
  31. 15 Feb, 2021 1 commit
  32. 12 Feb, 2021 1 commit
  33. 07 Feb, 2021 1 commit
    • Support EMA · 98839bd2
      Abdullah Rashwan authored
      - Create shadow weights at the beginning of training.
      - Swap the weights during evaluation.
      - Save the best checkpoint with the averaged weights.
      
      The following fields need to be set in order to activate the best checkpoint exporter:
      - best_checkpoint_eval_metric
      - best_checkpoint_export_subdir
      - best_checkpoint_metric_comp
      
      To serve the model, or to fine-tune the trained checkpoints on a target dataset, use the checkpoints under best_checkpoint_export_subdir. A sketch of the EMA mechanics follows below.
      
      PiperOrigin-RevId: 356093831
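      A minimal sketch of the EMA mechanics described above; this class is illustrative, not the Model Garden implementation:

        import tensorflow as tf

        class ExponentialMovingAverage:
            """Illustrative EMA helper with a fixed decay rate."""

            def __init__(self, model, decay=0.999):
                self.decay = decay
                self.model = model
                # Shadow weights, created at the beginning of training.
                self.shadow = [tf.Variable(w, trainable=False)
                               for w in model.weights]

            def update(self):
                # Once per step: shadow <- decay * shadow + (1 - decay) * weight.
                for s, w in zip(self.shadow, self.model.weights):
                    s.assign(self.decay * s + (1.0 - self.decay) * w)

            def swap(self):
                # Exchange model and shadow weights: call once before evaluation
                # or checkpointing to use the averaged weights, and again after.
                for s, w in zip(self.shadow, self.model.weights):
                    tmp = tf.identity(w)
                    w.assign(s)
                    s.assign(tmp)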
  34. 02 Feb, 2021 1 commit