1. 04 Aug, 2021 2 commits
  2. 23 Jun, 2021 2 commits
• Improve error message when certain flags are not specified. · 8b47c484
      Reed Wanderman-Milne authored
In nlp/train.py and vision/beta/train.py, certain flags are marked as required. Additionally, in certain functions, error messages are improved if a necessary flag is not specified, as a fallback in case a file calling define_flags() does not mark the necessary flags as required. Previously, if any of these flags was not specified, the program would crash with a cryptic error message, making it hard to tell what went wrong.
      
      In a subsequent change, I will mark flags as required in more files which call define_flags().
      
      PiperOrigin-RevId: 381066985
• Improve error message when certain flags are not specified. · 0a9026e4
      Reed Wanderman-Milne authored
In nlp/train.py and vision/beta/train.py, certain flags are marked as required. Additionally, in certain functions, error messages are improved if a necessary flag is not specified, as a fallback in case a file calling define_flags() does not mark the necessary flags as required. Previously, if any of these flags was not specified, the program would crash with a cryptic error message, making it hard to tell what went wrong.
      
      In a subsequent change, I will mark flags as required in more files which call define_flags().
      
      PiperOrigin-RevId: 381066985
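The fallback described in these commits can be sketched in plain Python. The `FLAGS` mapping and `require_flag` helper below are illustrative stand-ins, not the actual absl.flags API used by nlp/train.py:

```python
# Illustrative sketch of the fallback check described above: even when a
# flag was not marked required at definition time, the point of use fails
# fast with a message naming the missing flag, instead of crashing later
# with a cryptic error. FLAGS and require_flag are hypothetical stand-ins,
# not the real absl.flags API.

FLAGS = {"experiment": None, "model_dir": "/tmp/model"}

def require_flag(name):
    value = FLAGS.get(name)
    if value is None:
        raise ValueError(
            f"Flag --{name} must be specified. Pass --{name}=<value> "
            "on the command line.")
    return value
```

With this pattern, `require_flag("model_dir")` returns the stored value, while `require_flag("experiment")` raises a `ValueError` that names the missing flag.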
  3. 12 Apr, 2021 2 commits
• Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.

Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior, and all isinstance checks within the official models check for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
• Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.

Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior, and all isinstance checks within the official models check for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
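For reference, the non-experimental API shape these commits migrate to looks like the following. This is a generic usage sketch assuming TensorFlow 2.4 or later, not the actual official-models training code:

```python
import tensorflow as tf

# Non-experimental mixed precision API (TF 2.4+). Unlike the old
# experimental set_policy(..., loss_scale=...) form, the policy call takes
# no loss_scale argument; loss scaling is configured by wrapping the
# optimizer explicitly in a LossScaleOptimizer, which is what the official
# models already did.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
```

Because loss scaling lives entirely in the explicit LossScaleOptimizer wrapper, dropping loss_scale from the policy call changes nothing about training behavior.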
  4. 10 Mar, 2021 2 commits
  5. 03 Mar, 2021 2 commits
  6. 22 Jan, 2021 2 commits
  7. 13 Nov, 2020 2 commits
  8. 17 Sep, 2020 2 commits
  9. 13 Sep, 2020 2 commits
  10. 01 Sep, 2020 2 commits