- 01 Aug, 2019 1 commit
Hongkun Yu authored
261202754 by hongkuny<hongkuny@google.com>: Use the enable_xla flag for the classifier and SQuAD tasks so that the XLA option is exposed to users. -- PiperOrigin-RevId: 261202754
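For context, the enable_xla flag above exposes XLA JIT compilation as a user-facing opt-in for the classifier and SQuAD runners. Below is a minimal sketch of how such a flag is typically wired, assuming absl.flags and tf.config.optimizer.set_jit; the flag name comes from the commit, while the surrounding code is illustrative and not the repository's actual implementation.

```python
from absl import flags

import tensorflow as tf

# Hypothetical wiring for the --enable_xla flag described in the commit.
flags.DEFINE_bool('enable_xla', False, 'Whether to enable XLA JIT compilation.')
FLAGS = flags.FLAGS


def maybe_enable_xla():
  """Turns on XLA auto-clustering for TF ops when the user opts in."""
  if FLAGS.enable_xla:
    tf.config.optimizer.set_jit(True)
```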
- 26 Jul, 2019 1 commit
Hongkun Yu authored
260060237 by zongweiz<zongweiz@google.com>: [BERT SQuAD] Enable mixed precision training. Add mixed precision training support for the BERT SQuAD model using the experimental Keras mixed precision API. For numeric stability, use fp32 for layer normalization, dense layers with GELU activation, etc. -- PiperOrigin-RevId: 260060237
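As a rough illustration of the numeric-stability approach the commit describes: a sketch assuming the experimental Keras mixed precision API as it appeared around TF 2.1, not the model garden's actual code. The global policy casts compute to float16, while numerically sensitive layers are pinned to float32 via their dtype argument.

```python
import tensorflow as tf

# Global mixed precision policy: compute in float16, variables in float32
# (experimental Keras API, later superseded by tf.keras.mixed_precision).
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)

# Numerically sensitive ops stay in float32 by overriding the layer dtype;
# the commit applies the same idea to dense layers with GELU activation.
layer_norm = tf.keras.layers.LayerNormalization(dtype='float32')
```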
- 25 Jul, 2019 1 commit
Hongkun Yu authored
259889221 by hongkuny<hongkuny@google.com>: Add no-ds / XLA / eager PerfZero tests. -- PiperOrigin-RevId: 259889221
- 02 Jul, 2019 1 commit
saberkun authored
256204636 by hongkuny<hongkuny@google.com>: Internal -- 256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable the steps_per_loop option for all applications. -- PiperOrigin-RevId: 256204636
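In custom training loops of this kind, steps_per_loop usually means the number of training steps executed inside a single tf.function call before control returns to the Python host for logging and checkpointing. Below is a self-contained sketch of that pattern with a toy linear model; the toy setup and names are illustrative assumptions, not the repository's implementation.

```python
import tensorflow as tf

# Toy linear-regression setup purely to illustrate the steps_per_loop pattern.
w = tf.Variable(tf.zeros([4, 1]))
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).repeat().batch(8)
iterator = iter(dataset)


@tf.function
def train_loop(iterator, steps):
  # Run `steps` update steps inside one traced call, so control returns to
  # the Python host once per chunk instead of once per step.
  for _ in tf.range(steps):
    features, labels = next(iterator)
    with tf.GradientTape() as tape:
      loss = tf.reduce_mean(tf.square(tf.matmul(features, w) - labels))
    grad = tape.gradient(loss, w)
    w.assign_sub(0.1 * grad)  # plain SGD update


steps_per_loop = 10
for _ in range(3):  # outer host loop: 3 chunks of steps_per_loop steps each
  train_loop(iterator, tf.constant(steps_per_loop))
```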
- 28 Jun, 2019 1 commit
David M. Chen authored
255493073 by hongkuny<hongkuny@google.com>: Initial BERT OSS README update. -- 255470372 by dmchen<dmchen@google.com>: Slightly expand the expected range for the F1 score in the BERT SQuAD accuracy test. -- 255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes. -- 255010016 by hongkuny<hongkuny@google.com>: Internal -- PiperOrigin-RevId: 255493073
- 18 Jun, 2019 1 commit
David M. Chen authored
253636854 by dmchen<dmchen@google.com>: Run only training in the BERT SQuAD performance test. -- 253118910 by hongkuny<hongkuny@google.com>: Internal change. -- PiperOrigin-RevId: 253636854
- 12 Jun, 2019 1 commit
David M. Chen authored
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test. -- 25266352 by hongjunchoi<hongjunchoi@google.com>: Internal change. -- 252647871 by hongjunchoi<hongjunchoi@google.com>: Enable multi-worker TPU training for BERT pretraining.
- 07 Jun, 2019 1 commit
davidmochen authored