"backend/apps/vscode:/vscode.git/clone" did not exist on "032d7c7440c1bd0117d5d8e2046352d54330c851"
- 28 May, 2019 8 commits
-
Bruce Fontaine authored
* Add a custom training loop for NCF model with TF2.0 (sketched below).
* Fix long line in ncf_keras_main.py.
* Remove dataset repeat when using custom training loop.
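A TF2 custom training loop typically follows the pattern below. This is a minimal, self-contained sketch with toy stand-ins for the real NCF model, optimizer, and data; it is not the code from ncf_keras_main.py. It also shows why `dataset.repeat()` becomes unnecessary: the loop iterates the dataset once per epoch itself.

```python
import tensorflow as tf

# Toy stand-ins for the real NCF objects (assumptions for illustration).
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(0.01)
loss_fn = tf.keras.losses.MeanSquaredError()
train_ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).batch(8)

@tf.function
def train_step(features, labels):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(features, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for epoch in range(2):
    # No dataset.repeat() needed: re-iterating the dataset each epoch
    # yields exactly one pass over the data.
    for features, labels in train_ds:
        train_step(features, labels)
```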
-
guptapriya authored
This is not going to help with current tf.data semantics, so removing it.
-
Igor authored
* Fixes that make transformer run.
* Remove debug print statements.
* Changed the permissions to 644.
* Fix the rest of the permissions.
* Enable static batch in all benchmarks.
* Restrict dist strat hack to training mode. For now we will do predict/eval without dist strat, so remove that hack in non-training cases.
* Use `inputs` instead of `x` as the arg name for call (see the sketch below). Keras has different behavior based on whether the inputs are called `inputs` or not; using `inputs` gives the expected behavior.
* Avoid extra map fn on input in dist strat case.
* Update how we handle custom metrics. This new approach works with and without dist strat; the previous one didn't work with dist strat. We need to fix that, but this is reasonable in the meantime (b/133724664).
* Update benchmarks.
* Fix typo in metrics code.
* Revert metrics change: didn't actually work in the distributed case.
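The `inputs` naming point is subtle enough to warrant a sketch. Keras inspects the signature of `call`, and naming the first argument `inputs` opts in to its standard input-handling behavior. The model below is a hypothetical illustration, not the transformer code itself:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(4)

    # Naming the first argument `inputs` (rather than `x`) gets Keras'
    # expected input-handling behavior.
    def call(self, inputs, training=False):
        return self.dense(inputs)

model = MyModel()
out = model(tf.zeros([2, 8]))
```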
-
Hongjun Choi authored
250347237 by A. Unique TensorFlower<gardener@tensorflow.org>: Fix linting errors in BERT benchmark test.
250326131 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250315593 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
250303528 by haoyuzhang<haoyuzhang@google.com>: Add method docstring to fix lint error.
PiperOrigin-RevId: 250347237
-
Haoyu Zhang authored
* Run different numbers of steps on different platforms.
* Add new tests for delayed performance measurement.
-
guptapriya authored
This shuffling should help ensure the data is reshuffled each epoch.
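In tf.data terms this is usually done with `shuffle`, whose `reshuffle_each_iteration` argument (True by default) draws a fresh order on every pass over the dataset. A minimal sketch:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
# reshuffle_each_iteration=True (the default) produces a new order on
# every pass, so each epoch sees a fresh shuffle.
dataset = dataset.shuffle(buffer_size=10, reshuffle_each_iteration=True)

for epoch in range(2):
    print([int(x) for x in dataset])  # different order each epoch
```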
-
guptapriya authored
-
guptapriya authored
-
- 26 May, 2019 1 commit
-
Hongjun Choi authored
250009207 by A. Unique TensorFlower<gardener@tensorflow.org>: Add feature in BERT to write training metrics to a summary file.
PiperOrigin-RevId: 250009207
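Writing training metrics to a summary file in TF2 generally follows the pattern below. This is a generic sketch, not the BERT code; the directory and metric values are placeholders:

```python
import tensorflow as tf

# Generic TF2 pattern for writing metrics to a summary file.
writer = tf.summary.create_file_writer("/tmp/bert_summaries")
with writer.as_default():
    for step in range(3):
        tf.summary.scalar("train_loss", 0.5 / (step + 1), step=step)
writer.flush()
```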
-
- 24 May, 2019 7 commits
-
saberkun authored
249896208 by hongkuny<hongkuny@google.com>: Adds __init__.py
PiperOrigin-RevId: 249896208
-
Priya Gupta authored
Add early stopping logic to NCF Keras when the desired threshold is met. Also change the default batch size to match the tuned hyperparams.
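One way to stop training as soon as a target metric is reached is a small Keras callback like the sketch below. The metric name `val_hr` and the threshold are illustrative assumptions; the NCF code may implement this differently:

```python
import tensorflow as tf

class ThresholdStopping(tf.keras.callbacks.Callback):
    """Stops training once a monitored metric reaches a target value.

    Illustrative sketch; metric name and threshold are assumptions.
    """

    def __init__(self, monitor="val_hr", threshold=0.635):
        super(ThresholdStopping, self).__init__()
        self.monitor = monitor
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        value = (logs or {}).get(self.monitor)
        if value is not None and value >= self.threshold:
            self.model.stop_training = True
```

Passed via `model.fit(..., callbacks=[ThresholdStopping()])`, this ends training at the first epoch whose validation metric clears the threshold.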
-
saberkun authored
249883771 by hongkuny<hongkuny@google.com>: Creates a benchmark dir
PiperOrigin-RevId: 249883771
-
Toby Boyd authored
* Moved common keras code to utils.
* Initial 1 GPU benchmark:
  - Aligned flags with resnet example.
  - Removed code/features that are not super useful.
  - Eval as part of train if bleu source/ref provided.
  - Add exp_per_second hook.
* Rename benchmark classes, pass batch-size and log_steps.
* Fix docstring.
* Predict done with checkpoints inline; perfzero baseclass.
* Steps not epochs, with a smoother training loop.
* Do not initialize history outside the loop.
* 5000 between evals, not 500.
* Estimator to Keras.
* Remove epochs var.
* Use range not xrange.
* 200K steps for 1 GPU.
* Fix global step.
-
rxsang authored
* Add a graph optional_next Reset benchmark.
* Fix lint error.
-
Toby Boyd authored
-
Tian Lin authored
* Merged commit includes the following changes:
  249776315 by tianlin<tianlin@google.com>: Internal change
  249763206 by tianlin<tianlin@google.com>: For TF 2.0 (related to Beam Search), expand cond dims in tf.where(cond, x, y) to make all parameters broadcastable (see the sketch below).
  249392724 by hongkuny<hongkuny@google.com>: Internal change
  PiperOrigin-RevId: 249776315
* Merged commit includes the following changes:
  249823043 by tianlin<tianlin@google.com>: Bring back v2 test for predict and eval.
  PiperOrigin-RevId: 249823043
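The `tf.where` change amounts to the broadcasting trick below (illustrative shapes, not the beam-search code): expanding the rank of `cond` so it broadcasts against `x` and `y`.

```python
import tensorflow as tf

x = tf.ones([2, 3])
y = tf.zeros([2, 3])
cond = tf.constant([True, False])  # shape [2]

# Expand cond from [2] to [2, 1] so it broadcasts against [2, 3];
# without this, tf.where rejects the mismatched shapes.
result = tf.where(tf.expand_dims(cond, -1), x, y)
```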
-
- 23 May, 2019 6 commits
-
rxsang authored
* Add a test enabling get_next_as_optional behavior.
* Remove repeated flag.
* Remove trailing space.
* Make the name shorter.
* Fix lint error.
* Refine the benchmark name.
-
rxsang authored
-
guptapriya authored
Adding validation every epoch allows us to view the progress during training instead of having to wait until the last eval. Mostly useful for manual runs.
-
guptapriya authored
The current batch size of 160000 does not converge to the desired HR, so we decrease it to 99k, which is known to converge. Tested locally and got to 63.5 at epoch 7. Also decreasing the number of epochs, as I don't see any improvement after epoch 7-8.
-
Hongjun Choi authored
249580533 by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change
249566870 by A. Unique TensorFlower<gardener@tensorflow.org>: Set up BERT benchmark test.
PiperOrigin-RevId: 249580533
-
rxsang authored
* Add enable_get_next_as_optional flag (see the sketch below).
* Set enable_get_next_as_optional on the strategy.
* Add comments to explain the flag.
* Remove trailing whitespace.
* Remove trailing space.
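At the time, this knob lived on the strategy's extended object. The sketch below assumes the attribute name `experimental_enable_get_next_as_optional` from TF releases of that era (it was later removed); the flag definition is a hypothetical stand-in for the repo's own:

```python
import tensorflow as tf
from absl import flags

flags.DEFINE_boolean(
    "enable_get_next_as_optional", False,
    "If True, iterate the distributed dataset via get_next_as_optional, "
    "which tolerates datasets whose size is not statically known.")

def configure_strategy(strategy):
    # Assumption: attribute name as in TF of this era; later TF
    # versions removed this setting.
    strategy.extended.experimental_enable_get_next_as_optional = (
        flags.FLAGS.enable_get_next_as_optional)
```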
-
- 22 May, 2019 6 commits
-
Toby Boyd authored
-
saberkun authored
249500988 by hongkuny<hongkuny@google.com>: Lints
PiperOrigin-RevId: 249500988
-
Toby Boyd authored
* Add big tests.
* Fix super.
* Add fp16, increase 8xGPU batch-sizes.
* Adding the rest of the fp16 tests.
* Big accuracy test batch_perf_gpu.
* Fix docstrings.
* Add _run_and_report.
* Edited docstrings.
-
Tian Lin authored
* Merged commit includes the following changes:
  249218656 by tianlin<tianlin@google.com>: Deal with imports, fix a typo and make unit tests fast.
  249198645 by tianlin<tianlin@google.com>: Trivial: Remove one empty line before "import tensorflow"
  249195490 by tianlin<tianlin@google.com>: Initialize Transformer TF V2 Model with Keras subclassing implementation. (Compatible with TF V1)
  249195008 by tianlin<tianlin@google.com>: Internal change
  249173564 by hongkuny<hongkuny@google.com>: Internal change
  249079258 by hongkuny<hongkuny@google.com>: Internal change
  247691534 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247533725 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247509295 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247311355 by wangtz<wangtz@google.com>: Internal change
  247303127 by wangtz<wangtz@google.com>: ...
-
Haoyu Zhang authored
-
saberkun authored
249377254 by hongkuny<hongkuny@google.com>: Internal change
249373328 by hongkuny<hongkuny@google.com>: Clean up tf import
249333938 by hongkuny<hongkuny@google.com>: Fix tf1 import
249325089 by hongkuny<hongkuny@google.com>: BERT 2.0
249173564 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 249377254
-
- 21 May, 2019 1 commit
-
Haoyu Zhang authored
-
- 20 May, 2019 1 commit
-
Ayush Dubey authored
* Delete accuracy if it exists in eval results.
* Get global_step only if it exists in eval results.
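The defensive pattern here is plain Python dict handling; a short sketch with hypothetical eval-result contents:

```python
# Hypothetical eval results; keys vary across runs and platforms.
eval_results = {"accuracy": 0.76, "loss": 0.91, "global_step": 1000}

# Delete accuracy if present rather than assuming it exists.
eval_results.pop("accuracy", None)

# Read global_step only if present; falls back to None otherwise.
global_step = eval_results.get("global_step")
```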
-
- 18 May, 2019 2 commits
-
Reed authored
This will allow one to easily reproduce a benchmark by running with the flags.
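With absl flags, one common way to make a run reproducible is to record the full flag state; `flags_into_string()` serializes every defined flag, and replaying the file with `--flagfile` restores the configuration. A sketch with an illustrative output path and flag:

```python
from absl import app, flags

flags.DEFINE_integer("batch_size", 32, "Per-replica batch size.")

def main(_):
    # Serialize all defined flags; re-running with
    # --flagfile=/tmp/benchmark_flags.txt reproduces this configuration.
    with open("/tmp/benchmark_flags.txt", "w") as f:
        f.write(flags.FLAGS.flags_into_string())

if __name__ == "__main__":
    app.run(main)
```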
-
Ayush Dubey authored
-
- 15 May, 2019 3 commits
-
Rachel Lim authored
-
Igor authored
* Set the --clone_model_in_keras_dist_strat flag to None.
* Remove the separate no_cloning benchmarks and add a couple of cloning ones.
* Fix the learning rate schedule to cache its ops per graph (see the sketch below).
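Caching ops per graph usually means memoizing on the current graph object, roughly as below. This is a hypothetical helper, not the actual schedule code:

```python
import tensorflow as tf

class PerGraphCachedSchedule(object):
    """Memoizes the learning-rate op once per tf.Graph.

    Hypothetical sketch: rebuilding the schedule's ops on every call
    pollutes the graph, so build them at most once per graph.
    """

    def __init__(self, schedule_fn):
        self._schedule_fn = schedule_fn
        self._cache = {}  # tf.Graph -> learning-rate tensor

    def __call__(self, global_step):
        graph = tf.compat.v1.get_default_graph()
        if graph not in self._cache:
            self._cache[graph] = self._schedule_fn(global_step)
        return self._cache[graph]
```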
-
Rachel Lim authored
* Added 'tfdata_exp' version of all benchmarks, which sets FLAGS.tf_data_experimental_slack = True (see the sketch below).
* Renamed `data_prefetch_with_slack` to `data_delay_prefetch` (haoyu's change) to make the names more distinct.
* Add flag to resnet input pipeline and surface it through keras_imagenet_main.py.
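The flag maps onto a `tf.data.Options` setting; a minimal sketch of what enabling it does to an input pipeline:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(1000).batch(32).prefetch(1)

# tf_data_experimental_slack=True corresponds to this options bit: it
# allows the terminal prefetch to introduce slack, reducing input
# pipeline stalls.
options = tf.data.Options()
options.experimental_slack = True
dataset = dataset.with_options(options)
```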
-
- 11 May, 2019 2 commits
-
Toby Boyd authored
Test passes locally with python3, and the test is already skipped for python2.
-
Toby Boyd authored
* Add FP16 and benchmarks (see the loss-scale sketch below).
* Add missing run and report.
* Add loss_scale as an option not included with dtype.
* Move loss_scale validation under the dtype conditional.
* Add loss_scale to flags tested.
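With fp16, a loss scale guards against gradient underflow. The sketch below assumes the experimental `LossScaleOptimizer` API of the TF 2.0 era; later TF versions moved and renamed it:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(0.01)

# Assumption: TF 2.0-era experimental API. 'dynamic' adjusts the scale
# automatically; a fixed int (e.g. 128) keeps it constant, which is
# what a separate loss_scale flag lets users choose independently of
# the dtype flag.
optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    optimizer, loss_scale="dynamic")
```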
-
- 10 May, 2019 3 commits
-
Haoyu Zhang authored
* Fix trivial model to work properly with fp16.
* Add comment on manual casting.
-
Haoyu Zhang authored
Previously the trivial model had a single dense layer whose weight was [224*224*3, num_classes]. Using two dense layers, the weights are [224*224*3, 1] and [1, num_classes].
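In spirit, the reshaped trivial model is the two-layer stack below (a sketch; num_classes of 1000 is an assumption for ImageNet):

```python
import tensorflow as tf

num_classes = 1000  # ImageNet; assumed for illustration.

# A width-1 bottleneck replaces one huge [224*224*3, num_classes]
# weight with [224*224*3, 1] and [1, num_classes], drastically
# shrinking the parameter count of the benchmark's trivial model.
trivial_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
```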
-
Haoyu Zhang authored
* Do not report metrics in performance benchmarks.
* Rename flag.
-