- 09 Apr, 2021 1 commit
Reed Wanderman-Milne authored
All models that support loss scaling support dynamic loss scaling, so the argument serves no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling. PiperOrigin-RevId: 367719521
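For context, a minimal sketch of the pattern this change assumes, using the Keras mixed precision API: wrapping an optimizer in a LossScaleOptimizer gives dynamic loss scaling by default, so no separate argument is needed. The model, optimizer, and train step below are illustrative placeholders, not code from this repository.

```python
import tensorflow as tf

# Wrapping an optimizer in LossScaleOptimizer enables dynamic loss scaling
# by default; no extra argument is required.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.01))

def train_step(model, loss_fn, x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
        # Scale the loss so small fp16 gradients do not underflow.
        scaled_loss = optimizer.get_scaled_loss(loss)
    scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
    # Unscale before applying; the optimizer adjusts the scale dynamically,
    # growing it when training is stable and shrinking it on overflow.
    grads = optimizer.get_unscaled_gradients(scaled_grads)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```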

- 10 Mar, 2021 1 commit
Frederick Liu authored
PiperOrigin-RevId: 361957289

- 29 Aug, 2020 1 commit
Zongwei Zhou authored
PiperOrigin-RevId: 329042049

- 28 Aug, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 328888268

- 27 Aug, 2020 1 commit
Zongwei Zhou authored
PiperOrigin-RevId: 328842362

- 12 Aug, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 326286926
Hongkun Yu authored
PiperOrigin-RevId: 326286926

- 26 May, 2020 1 commit
André Susano Pinto authored
This allows one to fine-tune a BERT model on one task before using it for another task, e.g. fine-tuning on SQuAD before fine-tuning on another QA-style task. PiperOrigin-RevId: 313145768

- 19 May, 2020 1 commit
André Susano Pinto authored
A default of 1 at all times is bad for TPU users, who end up not using the device effectively, while a larger default at all times is bad for GPU users. As a compromise, make the default depend on the devices available. PiperOrigin-RevId: 312230371
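A hedged sketch of what such a device-dependent default could look like, assuming the setting in question is a steps-per-loop style value; the function name and the constant 100 are illustrative, not the repository's actual choices:

```python
import tensorflow as tf

def default_steps_per_loop():
    # Hypothetical helper: TPUs amortize host-device round trips over many
    # steps, so a larger inner loop uses the device more effectively.
    if tf.config.list_logical_devices("TPU"):
        return 100  # illustrative value only
    # GPUs gain little from long inner loops, which also delay logging.
    return 1
```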

- 04 May, 2020 1 commit
Hongkun Yu authored
Move gin flags to hyperparams_flags, as they are in the same category and we will use them more widely. PiperOrigin-RevId: 309779408

- 17 Apr, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 306994199

- 10 Apr, 2020 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 305948522

- 27 Mar, 2020 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 303356961

- 25 Mar, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 302977474

- 11 Mar, 2020 1 commit
Will Cromar authored
PiperOrigin-RevId: 300433601

- 07 Mar, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 299594839

- 05 Mar, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 299169021

- 02 Mar, 2020 1 commit
Will Cromar authored
PiperOrigin-RevId: 298466825

- 26 Feb, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 297383836

- 25 Feb, 2020 2 commits
Zongwei Zhou authored
PiperOrigin-RevId: 297222995
Hongkun Yu authored
PiperOrigin-RevId: 297002741

- 24 Feb, 2020 1 commit
Chen Chen authored
PiperOrigin-RevId: 296933982

- 18 Feb, 2020 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 295757618

- 30 Jan, 2020 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 292369407

- 28 Jan, 2020 2 commits
Zongwei Zhou authored
PiperOrigin-RevId: 292029030
Yanhui Liang authored
PiperOrigin-RevId: 291851815

- 10 Dec, 2019 1 commit
Chen Chen authored
PiperOrigin-RevId: 284792715

- 19 Nov, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 281337671

- 05 Nov, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 278672795

- 11 Oct, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 274090672

- 09 Oct, 2019 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 273653001

- 17 Sep, 2019 1 commit
Hongkun Yu authored
Refactor basic utils into modeling/. PiperOrigin-RevId: 269600561

- 04 Sep, 2019 1 commit
Hongkun Yu authored
Add a flag to control loss scaling. PiperOrigin-RevId: 267091566

- 03 Sep, 2019 1 commit
Vinh Nguyen authored

- 26 Aug, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 265510206

- 22 Aug, 2019 1 commit
Ayush Dubey authored
PiperOrigin-RevId: 264935345

- 07 Aug, 2019 1 commit
Hongkun Yu authored
262178259 by hongkuny<hongkuny@google.com>: We should call training=True in the CTL train step.
--
262081759 by akuegel<akuegel@google.com>: Internal change.
PiperOrigin-RevId: 262178259
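A minimal sketch of the first change, assuming a standard Keras custom training loop (CTL); the model, optimizer, and loss function are placeholders:

```python
import tensorflow as tf

@tf.function
def train_step(model, optimizer, loss_fn, x, y):
    with tf.GradientTape() as tape:
        # The forward pass must run with training=True so that layers such
        # as dropout and batch normalization behave in training mode.
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```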

- 01 Aug, 2019 1 commit
Hongkun Yu authored
261202754 by hongkuny<hongkuny@google.com>: Use the enable_xla flag for classifier and squad, so the XLA option is exposed to users.
--
PiperOrigin-RevId: 261202754
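A hedged sketch of how an enable_xla flag is typically wired up with absl and TensorFlow; the helper name maybe_enable_xla is hypothetical, and the repository's actual plumbing may differ:

```python
from absl import flags
import tensorflow as tf

flags.DEFINE_bool("enable_xla", False,
                  "Whether to enable XLA JIT compilation.")
FLAGS = flags.FLAGS

def maybe_enable_xla():
    # Hypothetical helper: turn on XLA JIT compilation globally when the
    # user passes --enable_xla.
    if FLAGS.enable_xla:
        tf.config.optimizer.set_jit(True)
```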

- 26 Jul, 2019 1 commit
Hongkun Yu authored
260060237 by zongweiz<zongweiz@google.com>: [BERT SQuAD] Enable mixed precision training. Add mixed precision training support for the BERT SQuAD model, using the experimental Keras mixed precision API. For numeric stability, use fp32 for layer normalization, dense layers with GELU activation, etc.
--
PiperOrigin-RevId: 260060237
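A minimal sketch of the numeric-stability pattern described here, written against the current Keras mixed precision API rather than the experimental one this commit predates; the layer shapes are illustrative:

```python
import tensorflow as tf

# Compute in fp16 with fp32 variables everywhere by default.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

inputs = tf.keras.Input(shape=(128, 768))
# Numerically sensitive layers are pinned to fp32, as the commit describes:
# dense layers with GELU activation and layer normalization.
x = tf.keras.layers.Dense(3072, activation="gelu", dtype="float32")(inputs)
x = tf.keras.layers.LayerNormalization(dtype="float32")(x)
# Remaining layers run under the global mixed_float16 policy.
outputs = tf.keras.layers.Dense(768)(x)
model = tf.keras.Model(inputs, outputs)
```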

- 02 Jul, 2019 1 commit
saberkun authored
256204636 by hongkuny<hongkuny@google.com>: Internal.
--
256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable the steps_per_loop option for all applications.
--
PiperOrigin-RevId: 256204636