- 19 Jun, 2019 4 commits
-
-
anj-s authored
* first version of ctl
* fix indent
* remove monkey patching for core
* add dtype arg
* fix dtype arg
* add logging lib
* remove compat.v1.logging
* add datetime import
* fix FLAGS import
* add constant vals
* move to using as tf import
* remove steps per epoch = 1
* test train and test for one step
* test train and test for the entire dataset
* use an iterator for test
* pass tensors instead of an iterator
* add stats dict
* fix list declaration
* fix elapsed time calc
* print lr at epoch boundary alone
* use regular tf import instead of compat
* remove tensorboard chkpts
* add correct logging import
* add benchmark configs
* add tests and configs
* add keras flags import
* fix eval ds creation cond
* return numpy value of train_loss
* return numpy value of loss and acc values
* add option for full eager mode
* fix lint errors
* add ctl flags
* add ctl import
* add the xla flag
* enable v2 behavior in unit tests
* rename dataset var
* add synthetic dataset without monkey patching
* add ctl local constants
* change to using v2 imports
* change to using keras synthetic input fn
* remove enable_eager flag from benchmarks
* add option for no dist strat
* add lambda for flags
* remove no_func benchmarks due to OOM error
* remove README
* remove unused comments
* remove unchanged file
* remove unused drop_remainder_arg
* use keras.common lr function
* address PR comments
* remove reference to deleted file
* add flags info
-
anj-s authored
* first version of ctl
* fix indent
* remove monkey patching for core
* add dtype arg
* fix dtype arg
* add logging lib
* remove compat.v1.logging
* add datetime import
* fix FLAGS import
* add constant vals
* move to using as tf import
* remove steps per epoch = 1
* test train and test for one step
* test train and test for the entire dataset
* use an iterator for test
* pass tensors instead of an iterator
* add stats dict
* fix list declaration
* fix elapsed time calc
* print lr at epoch boundary alone
* use regular tf import instead of compat
* remove tensorboard chkpts
* add correct logging import
* add benchmark configs
* add tests and configs
* add keras flags import
* fix eval ds creation cond
* return numpy value of train_loss
* return numpy value of loss and acc values
* add option for full eager mode
* fix lint errors
* add ctl flags
* add ctl import
* add the xla flag
* enable v2 behavior in unit tests
* rename dataset var
* add synthetic dataset without monkey patching
* add ctl local constants
* change to using v2 imports
* change to using keras synthetic input fn
* remove enable_eager flag from benchmarks
* add option for no dist strat
* add lambda for flags
* remove no_func benchmarks due to OOM error
* remove README
* remove unused comments
* remove unchanged file
* remove unused drop_remainder_arg
* use keras.common lr function
* address PR comments
* remove reference to deleted file
-
Toby Boyd authored
-
Toby Boyd authored
-
- 18 Jun, 2019 4 commits
-
-
Toby Boyd authored
-
saberkun authored
253850824 by hongkuny<hongkuny@google.com>: Improve BERT training utils.
--
253818191 by hongkuny<hongkuny@google.com>: Update SavedModel export to use the new model.save() API.
--
PiperOrigin-RevId: 253850824
nnigania authored
* Add a new perf test for NCF and change some names.
* Make NCF use the data from the GCP bucket and remove the need to re-download data more than one day old. Reorganize the PerfZero tests.
-
David M. Chen authored
253636854 by dmchen<dmchen@google.com>: Run only training in the BERT SQuAD performance test.
--
253118910 by hongkuny<hongkuny@google.com>: Internal change.
--
PiperOrigin-RevId: 253636854
-
- 14 Jun, 2019 3 commits
-
-
Toby Boyd authored
* Use tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite.
* Remove num_parallel_batches, which is not used.
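The graph rewrite named above pairs automatic float16 casts with loss scaling because tiny gradient values underflow in float16. A small NumPy illustration of the underflow (the gradient magnitude and scale factor are hypothetical, chosen only to show the effect):

```python
import numpy as np

# float16's smallest positive subnormal is about 6e-8, so a gradient
# of 1e-8 rounds to zero. Multiplying the loss by a scale first keeps
# the gradient representable; the rewrite divides the scale back out
# before the optimizer applies the update.
tiny_grad = np.float16(1e-8)            # underflows to 0.0
scaled_grad = np.float16(1e-8 * 65536)  # survives as roughly 6.55e-4
```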
-
Toby Boyd authored
* Turn layout optimization off for some tests and use channels_last.
* Use channels_last for the 8-GPU tests.
* More tests with layout optimization off.
-
Toby Boyd authored
* Add a 1-GPU force_eager benchmark.
* Add accuracy tests for eager mode without distribution strategy.
* Remove a stray return.
-
- 13 Jun, 2019 11 commits
-
-
Toby Boyd authored
-
Taylor Robie authored
* Move step and epoch counts after the super init call.
* Move a comment block.
* Move super to the top.
-
saberkun authored
253113801 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change.
--
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test.
--
252663512 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change.
--
252647871 by A. Unique TensorFlower<gardener@tensorflow.org>: Enable multi-worker TPU training for BERT pretraining.
--
PiperOrigin-RevId: 253113801
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
- 12 Jun, 2019 1 commit
-
-
David M. Chen authored
252697519 by dmchen<dmchen@google.com>: BERT SQuAD accuracy test.
--
25266352 by hongjunchoi<hongjunchoi@google.com>: Internal change.
--
252647871 by hongjunchoi<hongjunchoi@google.com>: Enable multi-worker TPU training for BERT pretraining.
-
- 11 Jun, 2019 2 commits
-
-
saberkun authored
252534787 by hongkuny<hongkuny@google.com>: Transformer vocab fix to strip correctly in py2.
--
PiperOrigin-RevId: 252534787
saberkun authored
252522861 by hongkuny<hongkuny@google.com>: Remove export using the trained model due to an implementation error.
--
252156812 by yuefengz<yuefengz@google.com>: Fix the callback method name in BERT: replaced on_batch_start with on_batch_begin. Without the fix, it won't work with Keras callbacks.
--
251782065 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change.
--
PiperOrigin-RevId: 252522861
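The 252156812 fix works because the training loop dispatches callback hooks by exact method name, so a method called on_batch_start is simply never invoked. A minimal plain-Python stand-in for that contract (this mimics the hook-name convention, not Keras internals):

```python
# Base class defines the hook names the loop knows how to call.
class Callback:
    def on_batch_begin(self, batch, logs=None):
        pass

class BatchCounter(Callback):
    def __init__(self):
        self.batches_seen = 0

    # Must be named exactly on_batch_begin; an override named
    # on_batch_start would silently never run.
    def on_batch_begin(self, batch, logs=None):
        self.batches_seen += 1

def run_epoch(callbacks, num_batches):
    for batch in range(num_batches):
        for cb in callbacks:
            cb.on_batch_begin(batch)  # hook name hard-coded by the loop

counter = BatchCounter()
run_epoch([counter], num_batches=5)
```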
-
- 10 Jun, 2019 1 commit
-
-
rxsang authored
-
- 07 Jun, 2019 1 commit
-
-
davidmochen authored
-
- 06 Jun, 2019 6 commits
-
-
Reed authored
Previously there was a single global default loss scale shared by all models; each model can now define its own. Currently only ResNet uses loss scaling, but this will be useful once more models support it.
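Mechanically, a loss scale works the same regardless of whether it is global or per-model: the loss is multiplied by the scale so small float16 gradients do not underflow, and the gradients are divided by it again before the optimizer step. A sketch in plain floats (the scale value and all function names are hypothetical, not the actual model code):

```python
# Hypothetical per-model loss scale, rather than a global default.
LOSS_SCALE = 128.0

def scaled_gradient(grad_fn, w):
    # Gradient of (S * loss) is S times the gradient of the loss,
    # so scaling the loss scales every gradient by S.
    return LOSS_SCALE * grad_fn(w)

def apply_update(w, scaled_grad, lr=0.01):
    unscaled = scaled_grad / LOSS_SCALE  # undo the scale before the step
    return w - lr * unscaled
```

The update is numerically identical to unscaled SGD; only the intermediate gradient values are kept large enough to survive float16.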
-
Reed authored
-
Haoyu Zhang authored
-
guptapriya authored
-
saberkun authored
251762562 by hongkuny<hongkuny@google.com>: Fix BLEU score inconsistency.
--
PiperOrigin-RevId: 251762562
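For context on the metric this commit touches: BLEU is built from clipped (modified) n-gram precisions between a candidate translation and its references. A minimal sketch of the unigram case in plain Python; this is illustrative only and not the Transformer code's implementation:

```python
from collections import Counter

def modified_unigram_precision(candidate, reference):
    """Clipped unigram precision over token lists."""
    cand = Counter(candidate)
    ref = Counter(reference)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a reference word cannot inflate the score.
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / max(len(candidate), 1)
```

Full BLEU combines precisions for n-grams up to 4 with a brevity penalty, but the clipping shown here is the part most prone to inconsistencies between implementations.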
Haoyu Zhang authored
* Modify the tweaked tests for better performance in no-cloning mode.
* Tweak the trivial models.
-
- 05 Jun, 2019 7 commits
-
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
guptapriya authored
-
saberkun authored
251681245 by hongkuny<hongkuny@google.com>: Update BERT to use the new tf.distribute APIs.
--
251575972 by A. Unique TensorFlower<gardener@tensorflow.org>: Remove `steps_per_run` when instantiating TPUStrategy.
--
PiperOrigin-RevId: 251681245
guptapriya authored
-