- 08 Dec, 2021 1 commit
Smit Hinsu authored
TPUs don't support dynamic Squeeze because the dimensions to remove cannot be identified when the input shape is dynamic. For example, a [<=16, 1] input could produce a vector or a scalar result depending on the runtime size of the dynamic dimension.
PiperOrigin-RevId: 415083667

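To make the constraint above concrete, here is a small pure-Python sketch of static shape inference for an axis-less Squeeze. The function name `static_squeeze_shape` and the None-for-dynamic-dimension convention are assumptions for illustration, not actual TPU compiler code:

```python
# Sketch of static shape inference for Squeeze with no explicit axes.
# A dimension of size 1 is removed; None marks a dynamic ("<=N") dimension
# whose runtime size is unknown at compile time.

def static_squeeze_shape(shape):
    """Return the static output shape, or None if it cannot be determined."""
    out = []
    for dim in shape:
        if dim is None:
            # Dynamic dim: it might be 1 (squeezed away) or >1 (kept),
            # so even the output rank is ambiguous at compile time.
            return None
        if dim != 1:
            out.append(dim)
    return out

print(static_squeeze_shape([16, 1]))    # [16] -> static rank, fine for TPUs
print(static_squeeze_shape([None, 1]))  # None -> ambiguous, rejected on TPUs
```

With explicit squeeze axes the output rank would be static even for dynamic inputs, which is presumably why only the axis-less dynamic case is problematic.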
- 01 Dec, 2021 2 commits
- 17 Nov, 2021 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 410629444

- 27 Aug, 2021 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 393366505

- 06 Jun, 2021 4 commits
Hongkun Yu authored
PiperOrigin-RevId: 377803367

Hongkun Yu authored
PiperOrigin-RevId: 377801393

- 12 Apr, 2021 2 commits
Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. In all such callers, loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact. Switching to the non-experimental LossScaleOptimizer also has no effect: it has near-identical behavior, and all isinstance checks within the official models test for the non-experimental version.
PiperOrigin-RevId: 368101975

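For context on what the wrapper in the commit above does: a loss-scale optimizer wraps a base optimizer, multiplies the loss before backprop, and divides the gradients back before applying them, which keeps small float16 gradients from underflowing. A minimal pure-Python sketch of the pattern (ToyOptimizer and ToyLossScaleOptimizer are illustrative names, not the Keras API):

```python
# Minimal sketch of the loss-scale-optimizer wrapper pattern.
# Not the Keras API: ToyOptimizer / ToyLossScaleOptimizer are illustrative.

class ToyOptimizer:
    def apply_gradients(self, grads, params, lr=0.1):
        # Plain SGD update.
        return [p - lr * g for g, p in zip(grads, params)]

class ToyLossScaleOptimizer:
    """Wraps a base optimizer; scales the loss up, unscales gradients down."""
    def __init__(self, inner, loss_scale=1024.0):
        self.inner = inner
        self.loss_scale = loss_scale

    def scale_loss(self, loss):
        # Applied before backprop, so gradients come out scaled too.
        return loss * self.loss_scale

    def apply_gradients(self, scaled_grads, params, lr=0.1):
        grads = [g / self.loss_scale for g in scaled_grads]  # unscale
        return self.inner.apply_gradients(grads, params, lr=lr)

opt = ToyLossScaleOptimizer(ToyOptimizer(), loss_scale=1024.0)
# isinstance checks (as in the official models) see the wrapper type:
assert isinstance(opt, ToyLossScaleOptimizer)
# A scaled gradient of 2048.0 is unscaled to 2.0 before the update:
print(opt.apply_gradients([2048.0], [1.0], lr=0.5))  # [0.0]
```

Because the behavior lives entirely in the wrapper, swapping the experimental class for the non-experimental one only changes which type the isinstance checks must name.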
- 10 Apr, 2021 1 commit
Reed Wanderman-Milne authored
All models which support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
PiperOrigin-RevId: 367719521

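Dynamic loss scaling is what removes the need for a fixed loss_scale value: the scale is raised after a run of finite-gradient steps and cut back on overflow. A sketch of that update rule in pure Python (the constants and the `update_scale` name are assumptions for illustration, not the TensorFlow implementation):

```python
# Sketch of a dynamic loss-scale update rule (illustrative constants;
# not the actual tf.keras implementation).
GROWTH_INTERVAL = 2000   # consecutive finite steps before doubling
GROWTH_FACTOR = 2.0
BACKOFF_FACTOR = 0.5

def update_scale(scale, good_steps, grads_finite):
    """Return (new_scale, new_good_steps) after one training step."""
    if not grads_finite:
        # Overflow: halve the scale and restart the counter.
        return max(scale * BACKOFF_FACTOR, 1.0), 0
    good_steps += 1
    if good_steps >= GROWTH_INTERVAL:
        return scale * GROWTH_FACTOR, 0
    return scale, good_steps

scale, good = 32768.0, 0
scale, good = update_scale(scale, good, grads_finite=False)
print(scale)  # 16384.0 -> backed off after an overflow step
for _ in range(GROWTH_INTERVAL):
    scale, good = update_scale(scale, good, grads_finite=True)
print(scale)  # 32768.0 -> doubled again after a full growth interval
```

Since the scale converges on its own, a user-supplied fixed value adds nothing for models that already route the loss through a LossScaleOptimizer.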
- 09 Apr, 2021 1 commit
- 06 Apr, 2021 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 367101911

- 10 Mar, 2021 2 commits
Frederick Liu authored
PiperOrigin-RevId: 361957289

- 25 Feb, 2021 4 commits
A. Unique TensorFlower authored
PiperOrigin-RevId: 359545082

Chen Chen authored
1. Save the BertPretrainerV2 checkpoint using .checkpoint_items in the tf1->tf2 checkpoint converter.
2. In export_tfhub_lib, remove support for legacy tf2 checkpoints that were converted from tf1 before this commit. The current export_tfhub.py only worked with checkpoints converted after https://github.com/tensorflow/models/commit/78a367e150f625f1b138c847d49ea51498d5263a
3. Fix the albert tf1->tf2 checkpoint converter, which did not work after the above commit.
PiperOrigin-RevId: 359406945

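The `.checkpoint_items` idea in the commit above is a model-exposed mapping from canonical names to the sub-objects worth checkpointing; saving through that mapping keeps the converter's names aligned with what later consumers expect. A pure-Python sketch of the pattern (ToyPretrainer and its attribute names are hypothetical, not the real tf-models classes):

```python
# Sketch of the ".checkpoint_items" pattern: the model, not the converter,
# decides which sub-objects get checkpointed and under which names.
# ToyEncoder / ToyPretrainer are hypothetical stand-ins for illustration.

class ToyEncoder:
    pass

class ToyPretrainer:
    def __init__(self):
        self.encoder = ToyEncoder()
        self.masked_lm_head = object()

    @property
    def checkpoint_items(self):
        # The canonical name -> object mapping used when saving.
        return {"encoder": self.encoder, "masked_lm": self.masked_lm_head}

model = ToyPretrainer()
items = model.checkpoint_items
print(sorted(items))  # ['encoder', 'masked_lm']
# With TensorFlow, a converter would then save tf.train.Checkpoint(**items),
# so renaming an attribute never silently changes the checkpoint layout.
```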
- 05 Jan, 2021 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 350052731

- 22 Dec, 2020 2 commits
- 21 Dec, 2020 1 commit
Samuel Marks authored

- 19 Dec, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 348291651

- 15 Dec, 2020 2 commits
- 14 Dec, 2020 4 commits
Kyle Ziegler authored
Adjusted sequence length and predictions per sequence flags.

Kyle Ziegler authored

Kyle Ziegler authored

Kyle Ziegler authored

- 25 Nov, 2020 2 commits
- 17 Nov, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 342770296

- 16 Nov, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 342566888