- 19 Jun, 2019 2 commits
anj-s authored
  * first version of ctl
  * fix indent
  * remove monkey patching for core
  * add dtype arg
  * fix dtype arg
  * add logging lib
  * remove compat.v1.logging
  * add datetime import
  * fix FLAGS import
  * add constant vals
  * move to using as tf import
  * move to using as tf import
  * remove steps per epoch = 1
  * test train and test for one step
  * test train and test for one step
  * test train and test for one step
  * test train and test for the entire dataset
  * use an iterator for test
  * pass tensors instead of an iterator
  * add stats dict
  * fix list declaration
  * fix list declaration
  * fix elapsed time calc
  * print lr at epoch boundary alone
  * Use regular tf import instead of compat
  * remove tensorboard chkpts
  * add correct logging import
  * add correct logging import
  * add benchmark configs
  * add tests and configs
  * add tests and configs
  * add keras flags import
  * add keras flags import
  * fix eval ds creation cond
  * return numpy value of train_loss
  * return numpy value of loss and acc values
  * add option for full eager mode
  * fix lint errors
  * add ctl flags
  * add ctl import
  * add the xla flag
  * enable v2 behavior in unit tests
  * rename dataset var
  * add synthetic dataset without monkey patching
  * add ctl local constants
  * add ctl local constants
  * change to using v2 imports
  * change to using v2 imports
  * change to using v2 imports
  * change to using keras synthetic input fn
  * remove enable_eager flag from benchmarks
  * remove enable_eager flag from benchmarks
  * remove enable_eager flag from benchmarks
  * add option for no dist strat
  * add lambda for flags
  * remove no_func benchmarks due to OOM error
  * remove README
  * remove unused comments
  * remove unchanged file
  * remove unchanged file
  * remove unused drop_remainder_arg
  * use keras.common lr function
  * address PR comments
  * remove reference to deleted file
  * . (this placeholder message repeated over a long run of commits)
  * fix lint errors
  * .
Toby Boyd authored
- 14 Jun, 2019 3 commits
Toby Boyd authored
  * tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite
  * Remove num_parallel_batches which is not used.
Toby Boyd authored
  * layout off for some tests and channels last.
  * 8 gpu tests channels_last
  * more layout off tests.
Toby Boyd authored
  * Add 1 gpu force_eager benchmark
  * Add accuracy for no dist strat eager
  * remove return.
- 13 Jun, 2019 1 commit
Toby Boyd authored
- 10 Jun, 2019 1 commit
rxsang authored
- 06 Jun, 2019 3 commits
Reed authored
Before this change, there was a single global default loss scale shared by all models. Currently only ResNet uses loss scaling, but this will be useful once more models support it.
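As background for the loss-scale change above, a minimal pure-Python sketch of what a dynamic loss scale does (class and parameter names here are illustrative, not the TensorFlow API): the loss is multiplied by a scale so fp16 gradients stay representable, the scale is halved when gradients overflow, and grown again after a streak of finite steps.

```python
import math

class DynamicLossScale:
    """Illustrative dynamic loss scale (not the TF implementation)."""

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, grads):
        """Adjust the scale; return False if the step should be skipped."""
        if any(math.isinf(g) or math.isnan(g) for g in grads):
            self.scale /= 2.0          # overflow: halve and skip this step
            self._good_steps = 0
            return False
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0          # long finite streak: try a larger scale
            self._good_steps = 0
        return True

scaler = DynamicLossScale(initial_scale=4.0, growth_interval=2)
print(scaler.update([1.0, 2.0]))        # True  (finite grads, apply step)
print(scaler.update([float("inf")]))    # False (overflow, skip step)
print(scaler.scale)                     # 2.0   (scale was halved)
```

A per-model default, as this commit suggests, would mean each model constructs its own `DynamicLossScale`-like object rather than consulting one global value.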
Haoyu Zhang authored
Haoyu Zhang authored
* Modify tweaked tests for better performance in no cloning mode * Tweak trivial models
- 05 Jun, 2019 1 commit
rxsang authored
- 04 Jun, 2019 1 commit
Ayush Dubey authored
  * Add multi-worker benchmarks to official resnet estimator_benchmark.py.
  * fix super constructor calls
  * set datasets_num_private_threads to 32 in multi worker tweaked benchmarks
- 03 Jun, 2019 2 commits
Haoyu Zhang authored
Because we run warmup tests in all real data benchmarks, XLA bugs will cause non-XLA tests to fail as well.
Toby Boyd authored
  * Add MLPerf-like test.
  * Final comments.
  * docstring wording tweak.
  * non-tweaked version
- 31 May, 2019 3 commits
Haoyu Zhang authored
Goldie Gadde authored
Haoyu Zhang authored
* Support pure eager execution in ResNet50 * Use smaller batch size
- 29 May, 2019 1 commit
Haoyu Zhang authored
- 28 May, 2019 2 commits
Haoyu Zhang authored
Haoyu Zhang authored
* Run different numbers of steps on different platforms * Add new tests for delayed performance measurement
- 24 May, 2019 3 commits
rxsang authored
* Add a graph optional_next Reset benchmark. * Fix lint error.
Toby Boyd authored
Tian Lin authored
  * Merged commit includes the following changes:
    249776315 by tianlin<tianlin@google.com>: Internal change
    249763206 by tianlin<tianlin@google.com>: For TF 2.0 (related to Beam Search), expand cond dims in tf.where(cond, x, y) to make all parameters broadcastable.
    249392724 by hongkuny<hongkuny@google.com>: Internal change
    PiperOrigin-RevId: 249776315
  * Merged commit includes the following changes:
    249823043 by tianlin<tianlin@google.com>: Bring back v2 test for predict and eval.
    PiperOrigin-RevId: 249823043
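The beam-search change above expands the condition's dimensions so all `tf.where` arguments broadcast. A small numpy sketch of the same shape problem (with `np.where` standing in for `tf.where`; the tensor names are made up for illustration):

```python
import numpy as np

# A per-beam boolean of shape [batch] cannot select between score
# tensors of shape [batch, vocab]: shapes [2] and [2, 3] do not
# broadcast. Expanding the condition to [batch, 1] fixes this.
batch, vocab = 2, 3
finished = np.array([True, False])               # shape [batch]
finished_scores = np.full((batch, vocab), -1e9)  # scores for finished beams
running_scores = np.ones((batch, vocab))         # scores for active beams

cond = finished[:, np.newaxis]                   # shape [batch, 1], broadcastable
scores = np.where(cond, finished_scores, running_scores)
print(scores.shape)   # (2, 3)
```

Without the `np.newaxis` expansion, `np.where(finished, ...)` raises a broadcasting error, which mirrors the TF 2.0 issue the commit describes.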
- 23 May, 2019 3 commits
rxsang authored
  * Add a test enabling get_next_as_optional behavior.
  * Remove repeated flag.
  * Remove trailing space.
  * Make the name shorter.
  * Fix lint error.
  * Refine the benchmark name.
rxsang authored
rxsang authored
  * Add enable_get_next_as_optional flag.
  * Set enable_get_next_as_optional to strategy.
  * Add comments to explain the flag.
  * Remove trailing whitespace.
  * Remove trailing space.
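As a rough sketch of the pattern the `enable_get_next_as_optional` flag turns on (this is pure Python, not the tf.distribute API): instead of raising at the end of the dataset, each fetch returns a has-value/value pair, so a training loop can detect exhaustion without exception handling on every step.

```python
def get_next_as_optional(iterator):
    """Return (has_value, value) instead of raising StopIteration."""
    try:
        return True, next(iterator)
    except StopIteration:
        return False, None

it = iter([10, 20])
print(get_next_as_optional(it))   # (True, 10)
print(get_next_as_optional(it))   # (True, 20)
print(get_next_as_optional(it))   # (False, None)
```

This matters for distributed input pipelines with uneven shards, where some replicas run out of data before others.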
- 22 May, 2019 1 commit
Haoyu Zhang authored
- 21 May, 2019 1 commit
Haoyu Zhang authored
- 20 May, 2019 1 commit
Ayush Dubey authored
* Delete accuracy if exists in eval results. * get global_step only if it exists in eval results
- 18 May, 2019 2 commits
Reed authored
This makes it easy to reproduce a benchmark by re-running with the same flags.
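A hypothetical sketch of the idea above: record the effective flag values of a run so the benchmark can be replayed as a command line. The flag names here are illustrative, not the benchmark suite's actual flags.

```python
def flags_to_argv(flag_values):
    """Render a dict of flag values as a reproducible command-line string."""
    return " ".join(f"--{name}={value}"
                    for name, value in sorted(flag_values.items()))

argv = flags_to_argv({"batch_size": 128, "dtype": "fp16", "num_gpus": 8})
print(argv)   # --batch_size=128 --dtype=fp16 --num_gpus=8
```

Sorting the flags makes the rendered line deterministic, which helps when diffing the configurations of two benchmark runs.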
Ayush Dubey authored
- 15 May, 2019 3 commits
Rachel Lim authored
Igor authored
  * Set the --clone_model_in_keras_dist_strat to None.
  * Remove the separate no_cloning benchmarks and add a couple of cloning ones.
  * Fix the learning rate schedule to cache its ops per graph.
Rachel Lim authored
  * Added a 'tfdata_exp' version of all benchmarks, which sets FLAGS.tf_data_experimental_slack = True. Renamed `data_prefetch_with_slack` to `data_delay_prefetch` (haoyu's change) to make the names more distinct.
  * Add the flag to the resnet input pipeline and surface it through keras_imagenet_main.py
- 11 May, 2019 1 commit
Toby Boyd authored
  * Add FP16 and benchmarks.
  * add missing run and report.
  * Add loss_scale as option not included with dtype.
  * move loss_scale validation under dtype conditional.
  * add loss_scale to flags tested.
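The "move loss_scale validation under dtype conditional" bullet above can be sketched as follows; this is an illustrative stand-in (function and value names assumed, not the repo's actual flag-validation code): loss_scale is only checked, and only takes effect, when the dtype is fp16.

```python
def validate_flags(dtype, loss_scale):
    """Validate loss_scale only where it can take effect (dtype == fp16)."""
    if dtype == "fp16":
        if loss_scale is None:
            return "dynamic"   # assumed default when fp16 is requested
        if loss_scale != "dynamic" and float(loss_scale) <= 0:
            raise ValueError("loss_scale must be 'dynamic' or a positive number")
        return loss_scale
    # With fp32, loss scaling is a no-op, so the flag is ignored entirely.
    return None

print(validate_flags("fp16", None))    # dynamic
print(validate_flags("fp32", "128"))   # None
```

Gating the check this way avoids rejecting a (harmless) loss_scale value on fp32 runs while still catching invalid values where they matter.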
- 10 May, 2019 5 commits
Haoyu Zhang authored
* Fix trivial model to work properly with fp16 * Add comment on manual casting
Haoyu Zhang authored
Previously the trivial model had a single dense layer with a weight of shape [224*224*3, num_classes]. Using two dense layers, the weights are [224*224*3, 1] and [1, num_classes].
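The arithmetic behind this change (assuming num_classes = 1000, as for ImageNet): factoring the single dense layer into two shrinks the weight count by roughly three orders of magnitude, which keeps the "trivial" benchmark model trivial.

```python
# Weight counts for the two trivial-model variants described above.
num_features = 224 * 224 * 3    # flattened ImageNet-sized input
num_classes = 1000              # assumed class count (ImageNet)

one_layer = num_features * num_classes          # [224*224*3, num_classes]
two_layers = num_features * 1 + 1 * num_classes # [224*224*3, 1] + [1, num_classes]

print(one_layer)    # 150528000
print(two_layers)   # 151528
```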
Haoyu Zhang authored
* Do not report metrics in performance benchmarks * Rename flag
Haoyu Zhang authored
Haoyu Zhang authored
* Modified tweaked tests to use a tensor learning rate