- 04 Aug, 2021 2 commits
- 23 Jun, 2021 2 commits
Reed Wanderman-Milne authored
In nlp/train.py and vision/beta/train.py, certain flags are marked as required. Additionally, in certain functions, error messages are improved if a necessary flag is not specified; this serves as a fallback in case a file calling define_flags() does not mark the necessary flags as required. Previously, if any of these flags was not specified, training would crash with a cryptic error message, making it hard to tell what went wrong. In a subsequent change, I will mark flags as required in more files that call define_flags(). PiperOrigin-RevId: 381066985
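The fallback check described above can be sketched in plain Python. This is only an illustration of the pattern, not the actual Model Garden code: the helper name `require_flags` and the flag names are hypothetical, and the real code builds on absl flags (where `flags.mark_flags_as_required` handles the primary case).

```python
def require_flags(flag_values, names):
    """Fallback check: fail fast with a readable error if any required
    flag is unset. Hypothetical helper; illustrates the pattern of
    replacing a cryptic downstream crash with a clear message."""
    missing = [name for name in names if flag_values.get(name) is None]
    if missing:
        raise ValueError(
            "The following flags are required but were not specified: "
            + ", ".join(missing))


# Hypothetical usage: both flags were provided, so the check passes.
require_flags({"model_dir": "/tmp/model", "mode": "train"},
              ["model_dir", "mode"])
```

The point of the check is that an unset flag is reported by name at startup, rather than surfacing later as an unrelated error deep inside training.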
- 12 Apr, 2021 2 commits
Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact. Switching to the non-experimental LossScaleOptimizer also has no effect, as it has nearly identical behavior and all isinstance checks within the official models check for the non-experimental version. PiperOrigin-RevId: 368101975
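The behavior that LossScaleOptimizer provides (and that both the experimental and non-experimental Keras variants share) is dynamic loss scaling: skip the step and halve the scale on non-finite gradients, double the scale after a run of good steps. A minimal pure-Python sketch of that logic, with class name and constants chosen for illustration rather than taken from the TensorFlow implementation:

```python
import math


class DynamicLossScaler:
    """Illustrative sketch of dynamic loss scaling. The class name and
    constants are assumptions for illustration; in TensorFlow this logic
    lives inside the Keras LossScaleOptimizer."""

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale          # loss is multiplied by this
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, grads):
        """Inspect unscaled gradients; return whether to apply the step."""
        if any(not math.isfinite(g) for g in grads):
            # Overflow: halve the scale and skip this optimizer step.
            self.scale /= 2.0
            self._good_steps = 0
            return False
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            # A long run of finite steps: try a larger scale.
            self.scale *= 2.0
            self._good_steps = 0
        return True
```

Because the scale adjustment is driven entirely by the observed gradients, callers do not need to pass a loss_scale through the policy, which is why dropping the argument from set_mixed_precision_policy() is safe for these call sites.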
- 10 Mar, 2021 2 commits
Frederick Liu authored
PiperOrigin-RevId: 361957289
- 03 Mar, 2021 2 commits
Reed Wanderman-Milne authored
The default is True, but I plan on changing it to False soon. After that, I plan on removing the argument and never using the experimental API. PiperOrigin-RevId: 360724698
- 22 Jan, 2021 2 commits
- 13 Nov, 2020 2 commits
- 17 Sep, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 332314917
- 13 Sep, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 331359058
- 01 Sep, 2020 2 commits