- 06 Aug, 2019 1 commit
Hongkun Yu authored
261786323 by yanhuasun<yanhuasun@google.com>: Replace set and dict with ObjectIdentityDict/ObjectIdentitySet to prepare for an __eq__ implementation. -- PiperOrigin-RevId: 261786323
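The motivation for ObjectIdentityDict/ObjectIdentitySet is that a plain dict or set keys on `__eq__`/`__hash__`, which breaks once those are overridden (for example, when a tensor-like object gains an elementwise `__eq__`). A minimal sketch of the idea — the class name follows the commit, but this implementation is illustrative, not TensorFlow's:

```python
class _IdRef:
    """Wraps an object so hashing and equality use identity, not __eq__."""
    __slots__ = ("obj",)

    def __init__(self, obj):
        self.obj = obj

    def __hash__(self):
        return id(self.obj)

    def __eq__(self, other):
        return isinstance(other, _IdRef) and self.obj is other.obj


class ObjectIdentityDict(dict):
    """Dict keyed by object identity rather than value equality."""

    def __setitem__(self, key, value):
        super().__setitem__(_IdRef(key), value)

    def __getitem__(self, key):
        return super().__getitem__(_IdRef(key))

    def __contains__(self, key):
        return super().__contains__(_IdRef(key))


class Weird:
    """An object whose __eq__ is unusable for dict keys (non-boolean result)."""

    def __eq__(self, other):
        return [True, False]

    __hash__ = object.__hash__


a, b = Weird(), Weird()
d = ObjectIdentityDict()
d[a] = 1
d[b] = 2
assert d[a] == 1 and d[b] == 2  # distinct objects stay distinct keys
```

The same identity-wrapping trick gives an ObjectIdentitySet; the point is that lookups never invoke the wrapped object's `__eq__` at all.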

- 03 Aug, 2019 1 commit
Hongkun Yu authored
261393597 by hongkuny<hongkuny@google.com>: add an encoder mode for BertModel which returns all layers. -- PiperOrigin-RevId: 261393597

- 02 Aug, 2019 1 commit
Haoyu Zhang authored
261339941 by haoyuzhang<haoyuzhang@google.com>: Own library functions in Keras ResNet models, and remove dependencies on the v1 Estimator version of ResNet models. Most dependencies the Keras version has are related to data input pipelines. Created dedicated files (cifar_preprocessing.py, imagenet_preprocessing.py) to collect all logic handling the Cifar and ImageNet data input functions.
261339166 by haoyuzhang<haoyuzhang@google.com>: Internal change
261317601 by akuegel<akuegel@google.com>: Internal change
261218818 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
PiperOrigin-RevId: 261339941

- 01 Aug, 2019 3 commits
Hongkun Yu authored
261202754 by hongkuny<hongkuny@google.com>: Use the enable_xla flag for classifier and squad, so the XLA option is exposed to users. -- PiperOrigin-RevId: 261202754
Haoyu Zhang authored
261171038 by gjn<gjn@google.com>: Remove the weight_decay_rate == 0 early-exit check. Removing this code path should be fine, since it was not doing what it was meant to do: weight_decay_rate is actually a tensor, so the equality check only compared the object's id to 0, which can never be true. Evaluating the tensor is also not what we want at this point in the code, so it is safe to simply remove it.
261169862 by haoyuzhang<haoyuzhang@google.com>: Internal change
261153520 by haoyuzhang<haoyuzhang@google.com>: Internal change
261140302 by hongkuny<hongkuny@google.com>: Clean up
PiperOrigin-RevId: 261171038
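The dead-branch pitfall described in that commit can be reproduced without TensorFlow: an object that does not define `__eq__` (as TF1-era tensors effectively behaved for this check) falls back to identity comparison, so comparing it to the integer 0 is never true, regardless of the value it wraps. A plain-Python stand-in (names are illustrative):

```python
class Tensor:
    """Stand-in for a tensor-like object with no value-based __eq__,
    so `==` falls back to object identity, like `is`."""

    def __init__(self, value):
        self.value = value


weight_decay_rate = Tensor(0.0)

# The buggy early-exit condition: compares object identity against the
# int 0, which is never true, so the guarded branch was dead code.
assert not (weight_decay_rate == 0)

# The wrapped *value* is 0.0, but the check never consults it.
assert weight_decay_rate.value == 0.0
```

Hence removing the check changes nothing observable, which is exactly the commit's rationale.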
Hongkun Yu authored
260862396 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix the BERT pretraining input pipeline to shuffle and shard the dataset properly for multi-worker training. -- PiperOrigin-RevId: 260862396
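For multi-worker input pipelines, the usual pattern is to shard the input files across workers first (so each worker reads a disjoint slice) and then shuffle within each shard with a per-worker seed. This is a plain-Python sketch of that sharding step, not the actual tf.data pipeline from the commit; the function name and seeding scheme are illustrative:

```python
import random


def shard_and_shuffle(files, num_workers, worker_id, seed=0):
    """Give each worker a disjoint, locally shuffled slice of the inputs."""
    shard = files[worker_id::num_workers]   # round-robin, disjoint shard
    rng = random.Random(seed + worker_id)   # per-worker shuffle seed
    rng.shuffle(shard)
    return shard


files = [f"part-{i:05d}" for i in range(10)]
shards = [shard_and_shuffle(files, 2, w) for w in range(2)]
# Together the two shards cover every file exactly once.
assert sorted(shards[0] + shards[1]) == files
```

Sharding before shuffling is what guarantees workers never train on overlapping data; shuffling first and sharding after can silently duplicate or drop examples when the shuffle order differs across workers.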

- 30 Jul, 2019 1 commit
Hongkun Yu authored
260601376 by hongkuny<hongkuny@google.com>: Reorder Q and K to make TPU execution faster. -- PiperOrigin-RevId: 260601376

- 29 Jul, 2019 1 commit
Hongkun Yu authored
260580119 by hongkuny<hongkuny@google.com>: Adds expect_partial() -- PiperOrigin-RevId: 260580119

- 26 Jul, 2019 2 commits
Hongkun Yu authored
260060237 by zongweiz<zongweiz@google.com>: [BERT SQuAD] Enable mixed precision training. Add mixed precision training support for the BERT SQuAD model, using the experimental Keras mixed precision API. For numeric stability, fp32 is used for layer normalization, dense layers with GELU activation, etc. -- PiperOrigin-RevId: 260060237
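The fp32 carve-outs matter because float16 overflows at 65504 and loses precision in reductions, so numerically sensitive pieces such as the variance term in layer normalization are kept in float32. A small numpy demonstration of the overflow (assumes numpy is available; purely illustrative, not the model code):

```python
import numpy as np

# float16's largest finite value is 65504; squaring an activation of a
# few hundred already overflows, which is why variance-style terms in
# layer norm are accumulated in float32 under mixed precision.
a = np.float16(300.0)
assert np.isinf(a * a)                              # 90000 > 65504 -> inf
assert np.isfinite(np.float32(a) * np.float32(a))   # fine in float32
assert np.finfo(np.float16).max == 65504.0
```

The experimental Keras mixed precision API handles most casting automatically via a policy; the commit's point is that some layers still need an explicit float32 override.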
Hongkun Yu authored
260052674 by hongkuny<hongkuny@google.com>: Add expect_partial() -- PiperOrigin-RevId: 260052674

- 25 Jul, 2019 2 commits
Hongkun Yu authored
259889221 by hongkuny<hongkuny@google.com>: Add no-distribution-strategy / XLA / eager PerfZero tests. -- PiperOrigin-RevId: 259889221
Hongkun Yu authored
259790197 by hongkuny<hongkuny@google.com>: Update the pretraining model to match TF1 variable names. -- PiperOrigin-RevId: 259790197

- 24 Jul, 2019 1 commit
Hongkun Yu authored
259649972 by hongkuny<hongkuny@google.com>: Update docs.
259470074 by hongkuny<hongkuny@google.com>: Adds a dedup phase for trainable variables.
PiperOrigin-RevId: 259649972
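Deduplicating trainable variables matters when layers share weights: the same variable object can be reachable through multiple attribute paths, and applying its gradient more than once would double-count the update. The dedup has to key on object identity rather than value. A hedged sketch of the idea (the helper name is illustrative, not the model-garden function):

```python
def dedup_by_identity(variables):
    """Drop repeated references to the same object, preserving order."""
    seen = set()
    unique = []
    for v in variables:
        if id(v) not in seen:       # identity, not value equality
            seen.add(id(v))
            unique.append(v)
    return unique


w = [1.0]                      # stand-in for a shared weight variable
trainable = [w, [2.0], w]      # the same object reachable twice
assert len(dedup_by_identity(trainable)) == 2
```

Note that `id()` also sidesteps unhashable objects, which is why identity keying (the same idea behind ObjectIdentitySet above) is the right tool here.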

- 23 Jul, 2019 1 commit
Hongkun Yu authored
259442882 by hongkuny<hongkuny@google.com>: Internal
259341546 by mrry<mrry@google.com>: Remove DEBUG-level logging from the BERT benchmark. This triggers graph serialization and other verbose logging in the TensorFlow runtime, which inflates the execution time.
259253185 by hongkuny<hongkuny@google.com>: Writes a separated checkpoint for the core model in pretraining. Clean up export utils to just take a model as argument.
258893811 by hongkuny<hongkuny@google.com>: Adds summaries for metrics, allowing metrics inside keras.model.
258881002 by hongkuny<hongkuny@google.com>: Fix lint.
258597234 by rxsang<rxsang@google.com>: Update all the TPUStrategy examples to use the new v2 APIs, i.e. make_dataset_iterator -> experimental_distribute_dataset, make_input_fn_iterator -> experimental_distribute_datasets_from_function, unwrap -> experimental_local_results, experimental_run -> experimental_run_v2
258581998 by taylorrobie<taylorrobie@google.com>: Update keras v2 optimizers to reuse coefficients which are shared across all updates, which reduces the total number of ops created by between 5% (for simple optimizers such as SGD and Adagrad) and 25% (for complicated optimizers such as Adam and NAdam). Separate copies are made for each device and dtype. The effect of this change on run time is fairly minimal since Grappler is expected to consolidate most of these ops; however it does improve graph construction time.
258208153 by hongkuny<hongkuny@google.com>: Adds run_eagerly option for bert.
257883986 by hongkuny<hongkuny@google.com>: Adds tf.summary for bert training
256204636 by hongkuny<hongkuny@google.com>: Internal
256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable steps_per_loop option for all applications.
255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test
255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
255010016 by hongkuny<hongkuny@google.com>: Internal
254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name
254866171 by taylorrobie<taylorrobie@google.com>: Internal change
254785517 by zongweiz<zongweiz@google.com>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs
254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
254134531 by yuefengz<yuefengz@google.com>: Fix a typo in bert_benchmark.py
254069984 by hongkuny<hongkuny@google.com>: Automated rollback of changelist 254060732.
254061429 by hongkuny<hongkuny@google.com>: Use host while loop for training steps.
254060732 by yifeif<yifeif@google.com>: Automated rollback of changelist 254027750.
254027750 by hongkuny<hongkuny@google.com>: Internal change
253850824 by hongkuny<hongkuny@google.com>: Improve bert training utils.
253818191 by hongkuny<hongkuny@google.com>: Update savedmodel export to use new model.save() api.
253636854 by dmchen<dmchen@google.com>: Run only training in BERT SQuAD performance test
253118910 by hongkuny<hongkuny@google.com>: Internal change
253113801 by zongweiz<zongweiz@google.com>: Internal change
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test
252663512 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
252647871 by A. Unique TensorFlower<gardener@tensorflow.org>: Enable multi worker TPU training for BERT pretraining.
252522861 by hongkuny<hongkuny@google.com>: Remove export using trained model due to implementation error
252156812 by yuefengz<yuefengz@google.com>: Fix the callback method name in BERT: replaced on_batch_start with on_batch_begin. Without the fix, it won't work with Keras callbacks.
251782065 by dmchen<dmchen@google.com>: Internal change
251681245 by hongkuny<hongkuny@google.com>: Update bert to use the new tf.distribute APIs
251575972 by A. Unique TensorFlower<gardener@tensorflow.org>: Remove `steps_per_run` when instantiating TPUStrategy.
251325964 by hongkuny<hongkuny@google.com>: Improve flags
250942274 by tobyboyd<tobyboyd@google.com>: Internal change
250779087 by A. Unique TensorFlower<gardener@tensorflow.org>: Reduce BERT Perfzero benchmark test training steps.
250713045 by hongkuny<hongkuny@google.com>: TPU util
250606180 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix BERT benchmark test errors.
250589623 by A. Unique TensorFlower<gardener@tensorflow.org>: Change BERT benchmark test pretrained checkpoint url.
250587892 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix error in BERT custom training loop checkpoint restoration.
250577163 by A. Unique TensorFlower<gardener@tensorflow.org>: Add logic to inject callback that measures performance in BERT custom training loop.
250529526 by hongkuny<hongkuny@google.com>: Internal clean up
250428976 by hongkuny<hongkuny@google.com>: Internal change
250415383 by A. Unique TensorFlower<gardener@tensorflow.org>: Add min/max value to BERT classifier benchmark test.
250376246 by A. Unique TensorFlower<gardener@tensorflow.org>: Add benchmark performance test to run BERT on multiple numbers of GPUs.
250347237 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix linting errors in BERT benchmark test.
250326131 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250315593 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250303528 by haoyuzhang<haoyuzhang@google.com>: Add method docstring to fix lint error.
250009207 by A. Unique TensorFlower<gardener@tensorflow.org>: Add feature in BERT to write training metrics to a summary file.
249896208 by hongkuny<hongkuny@google.com>: Adds __init__.py
249883771 by hongkuny<hongkuny@google.com>: Creates a benchmark dir
249580533 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
249566870 by A. Unique TensorFlower<gardener@tensorflow.org>: Set up BERT benchmark test.
249500988 by hongkuny<hongkuny@google.com>: Lints
249377254 by hongkuny<hongkuny@google.com>: Internal change
249373328 by hongkuny<hongkuny@google.com>: Clean up tf import
249333938 by hongkuny<hongkuny@google.com>: Fix tf1 import
249325089 by hongkuny<hongkuny@google.com>: BERT 2.0
249173564 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 259442882

- 19 Jul, 2019 2 commits
Jing Li authored
* Merged commit includes the following changes:
258867180 by jingli<jingli@google.com>: Add new folders for upcoming reorg in model garden.
258893811 by hongkuny<hongkuny@google.com>: Adds summaries for metrics, allowing metrics inside keras.model.
258893048 by isaprykin<isaprykin@google.com>: Remove the `cloning` argument to `compile()`. Keras models are distributed by cloning in graph mode and without cloning in eager mode as of change 258652546.
258881002 by hongkuny<hongkuny@google.com>: Fix lint.
258874998 by hongkuny<hongkuny@google.com>: Internal
258872662 by hongkuny<hongkuny@google.com>: Fix doc
PiperOrigin-RevId: 258867180
* Create __init__.py
* Update __init__.py
* Update __init__.py
* Update __init__.py
Hongkun Yu authored
258881002 by hongkuny<hongkuny@google.com>: Fix lint.
258874998 by hongkuny<hongkuny@google.com>: Internal
258872662 by hongkuny<hongkuny@google.com>: Fix doc
258871624 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 258881002

- 18 Jul, 2019 1 commit
Hongkun Yu authored
258597234 by rxsang<rxsang@google.com>: Update all the TPUStrategy examples to use the new v2 APIs, i.e.:
  make_dataset_iterator -> experimental_distribute_dataset
  make_input_fn_iterator -> experimental_distribute_datasets_from_function
  unwrap -> experimental_local_results
  experimental_run -> experimental_run_v2
258581998 by taylorrobie<taylorrobie@google.com>: Update keras v2 optimizers to reuse coefficients which are shared across all updates, which reduces the total number of ops created by between 5% (for simple optimizers such as SGD and Adagrad) and 25% (for complicated optimizers such as Adam and NAdam). Separate copies are made for each device and dtype. The effect of this change on run time is fairly minimal since Grappler is expected to consolidate most of these ops; however it does improve graph construction time.
PiperOrigin-RevId: 258597234

- 16 Jul, 2019 1 commit
Hongkun Yu authored
258208153 by hongkuny<hongkuny@google.com>: Adds run_eagerly option for bert. -- PiperOrigin-RevId: 258208153

- 15 Jul, 2019 1 commit
Hongkun Yu authored
257883986 by hongkuny<hongkuny@google.com>: Adds tf.summary for bert training -- PiperOrigin-RevId: 257883986

- 02 Jul, 2019 1 commit
saberkun authored
256204636 by hongkuny<hongkuny@google.com>: Internal
256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable steps_per_loop option for all applications.
PiperOrigin-RevId: 256204636

- 28 Jun, 2019 1 commit
David M. Chen authored
255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test
255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
255010016 by hongkuny<hongkuny@google.com>: Internal
PiperOrigin-RevId: 255493073

- 25 Jun, 2019 1 commit
saberkun authored
254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name
254866171 by taylorrobie<taylorrobie@google.com>: Internal change
PiperOrigin-RevId: 254874613

- 24 Jun, 2019 1 commit
saberkun authored
254785517 by A. Unique TensorFlower<gardener@tensorflow.org>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs
254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
PiperOrigin-RevId: 254785517

- 20 Jun, 2019 2 commits
saberkun authored
254134531 by yuefengz<yuefengz@google.com>: Fix a typo in bert_benchmark.py -- PiperOrigin-RevId: 254134531
saberkun authored
254069984 by hongkuny<hongkuny@google.com>: Automated rollback of changelist 254060732.
254061429 by hongkuny<hongkuny@google.com>: Use host while loop for training steps.
254060732 by yifeif<yifeif@google.com>: Automated rollback of changelist 254027750.
254027750 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 254069984

- 19 Jun, 2019 1 commit
Toby Boyd authored

- 18 Jun, 2019 2 commits
saberkun authored
253850824 by hongkuny<hongkuny@google.com>: Improve bert training utils.
253818191 by hongkuny<hongkuny@google.com>: Update savedmodel export to use new model.save() api.
PiperOrigin-RevId: 253850824
David M. Chen authored
253636854 by dmchen<dmchen@google.com>: Run only training in BERT SQuAD performance test
253118910 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 253636854

- 13 Jun, 2019 2 commits
Taylor Robie authored
* Move step and epoch counts after the super init call
* Move comment block
* Move super to the top
saberkun authored
253113801 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test
252663512 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
252647871 by A. Unique TensorFlower<gardener@tensorflow.org>: Enable multi worker TPU training for BERT pretraining.
PiperOrigin-RevId: 253113801

- 12 Jun, 2019 1 commit
David M. Chen authored
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test
25266352 by hongjunchoi<hongjunchoi@google.com>: Internal change
252647871 by hongjunchoi<hongjunchoi@google.com>: Enable multi worker TPU training for BERT pretraining.

- 11 Jun, 2019 1 commit
saberkun authored
252522861 by hongkuny<hongkuny@google.com>: Remove export using the trained model due to an implementation error.
252156812 by yuefengz<yuefengz@google.com>: Fix the callback method name in BERT: replaced on_batch_start with on_batch_begin. Without the fix, it won't work with Keras callbacks.
251782065 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
PiperOrigin-RevId: 252522861
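The callback rename above is subtle because Keras dispatches hooks by exact method name: a method called on_batch_start is simply never invoked, and no error is raised. A minimal dispatcher sketch showing why the wrong name failed silently (illustrative, not Keras internals):

```python
class Callback:
    def on_batch_begin(self, batch):   # the hook the framework actually calls
        pass


class Trainer:
    """Toy training loop that invokes callbacks by a fixed method name."""

    def __init__(self, callbacks):
        self.callbacks = callbacks

    def run(self, num_batches):
        for b in range(num_batches):
            for cb in self.callbacks:
                cb.on_batch_begin(b)   # only this exact name is dispatched


calls = []


class Broken(Callback):
    def on_batch_start(self, batch):   # wrong name: silently ignored
        calls.append(("start", batch))


class Fixed(Callback):
    def on_batch_begin(self, batch):
        calls.append(("begin", batch))


Trainer([Broken(), Fixed()]).run(2)
assert calls == [("begin", 0), ("begin", 1)]
```

Because the base class provides a no-op `on_batch_begin`, the misnamed override typechecks and runs without any warning, which is exactly why the bug went unnoticed until the fix.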

- 07 Jun, 2019 1 commit
davidmochen authored

- 05 Jun, 2019 1 commit
saberkun authored
251681245 by hongkuny<hongkuny@google.com>: Update bert to use the new tf.distribute APIs
251575972 by A. Unique TensorFlower<gardener@tensorflow.org>: Remove `steps_per_run` when instantiating TPUStrategy.
PiperOrigin-RevId: 251681245

- 04 Jun, 2019 1 commit
saberkun authored
251325964 by hongkuny<hongkuny@google.com>: Improve flags
250942274 by tobyboyd<tobyboyd@google.com>: Internal change
PiperOrigin-RevId: 251325964

- 31 May, 2019 1 commit
Hongjun Choi authored
250779087 by A. Unique TensorFlower<gardener@tensorflow.org>: Reduce BERT Perfzero benchmark test training steps. -- PiperOrigin-RevId: 250779087

- 30 May, 2019 2 commits
saberkun authored
250713045 by hongkuny<hongkuny@google.com>: TPU util -- PiperOrigin-RevId: 250713045
Hongjun Choi authored
250606180 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix BERT benchmark test errors.
250589623 by A. Unique TensorFlower<gardener@tensorflow.org>: Change BERT benchmark test pretrained checkpoint url.
250587892 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix error in BERT custom training loop checkpoint restoration.
250577163 by A. Unique TensorFlower<gardener@tensorflow.org>: Add logic to inject callback that measures performance in BERT custom training loop.
250529526 by hongkuny<hongkuny@google.com>: Internal clean up
250428976 by hongkuny<hongkuny@google.com>: Internal change
250415383 by A. Unique TensorFlower<gardener@tensorflow.org>: Add min/max value to BERT classifier benchmark test.
250376246 by A. Unique TensorFlower<gardener@tensorflow.org>: Add benchmark performance test to run BERT on multiple numbers of GPUs.
PiperOrigin-RevId: 250606180

- 28 May, 2019 1 commit
Hongjun Choi authored
250347237 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix linting errors in BERT benchmark test.
250326131 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250315593 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250303528 by haoyuzhang<haoyuzhang@google.com>: Add method docstring to fix lint error.
PiperOrigin-RevId: 250347237

- 26 May, 2019 1 commit
Hongjun Choi authored
250009207 by A. Unique TensorFlower<gardener@tensorflow.org>: Add feature in BERT to write training metrics to a summary file. -- PiperOrigin-RevId: 250009207