- 11 Jul, 2019 3 commits
-
Toby Boyd authored
-
saberkun authored
257314238 by hongkuny<hongkuny@google.com>: Creates transformer v2 README. Remove contents that are not implemented.
PiperOrigin-RevId: 257314238
Toby Boyd authored
* Move to global_step.
* Hook to use global_step.
* Fix comment: start at step 1, not step 0.
* Remove hack used for testing.
* Add docstring.
-
- 10 Jul, 2019 1 commit
-
Rahul Nimbal authored
* Update research/maskgan/README.md
Co-Authored-By: Andrew M Dai <andy.dai@gmail.com>
-
- 09 Jul, 2019 2 commits
-
Haoyu Zhang authored
* Improve performance for Cifar ResNet benchmarks
* Revert batch size changes to benchmarks
-
David Andersen authored
Update to tf 1.14 syntax and fix bug #7125 (needed an additional expand for conv2d). Suppress compat warnings by moving to the compat.v1 versions of some functions. Note that this code is not 2.0 compatible yet; that will be a future push. (#7177)
-
- 08 Jul, 2019 5 commits
-
Toby Boyd authored
-
Toby Boyd authored
* reduce iterations from 20 to 12.
* add fp16 dynamic batch accuracy check.
* fix existing lint issue.
-
Yang Liu authored
* Bug fix of cifar10_eval.py, line 120: changed eval_data = FLAGS.eval_data == 'test' to eval_data = FLAGS.eval_data. The original code assigns True to eval_data, so when the script is used to evaluate networks on the evaluation set, it DOES NOT load the evaluation set as intended but actually loads the training set (line 105 of cifar10_input.py).
* Remove one line that was an artifact of the issue.
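The flag-handling change described above can be sketched without TensorFlow. `_Flags` below is a hypothetical stand-in for the real flags object in cifar10_eval.py; the sketch only illustrates the before/after of line 120, under the assumption that downstream code wants the raw flag value rather than a boolean:

```python
# Minimal sketch of the cifar10_eval.py fix. _Flags is a hypothetical
# stand-in for the real TF flags object; no TensorFlow is required.
class _Flags:
    eval_data = 'test'  # the user asks to evaluate on the test split

FLAGS = _Flags()

# Before the fix, line 120 collapsed the flag into a boolean,
# losing the split name that downstream code needs:
before = FLAGS.eval_data == 'test'  # True

# After the fix, the raw flag value is passed through unchanged,
# so cifar10_input.py can select the evaluation files:
after = FLAGS.eval_data             # 'test'

print(before, after)
```
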
-
Yuhao Zhang authored
-
Devansh Singh authored
-
- 03 Jul, 2019 1 commit
-
Toby Boyd authored
* Fix unit test failures.
* 96% of TF 2.0 tests on GPU are passing.
* Currently all GPU and CPU TF 2.0 tests pass.
* Address code comments.
* Use TF 2.0 cast.
* Comment about working on TF 2.0 CPU.
* Use contrib turn-off for TF 2.0.
* Fix wide_deep and add keras_common_tests.
* Use context to get num_gpus.
* Switch to tf.keras.metrics.
-
- 02 Jul, 2019 3 commits
-
saberkun authored
256204636 by hongkuny<hongkuny@google.com>: Internal
256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable steps_per_loop option for all applications.
PiperOrigin-RevId: 256204636
Yuefeng Zhou authored
* Add StepCounterHook to hooks_helper.py
* Update symbol.
-
Yuefeng Zhou authored
when there are multiple workers.
-
- 28 Jun, 2019 4 commits
-
Toby Boyd authored
-
nnigania authored
* Borrow a TF 1.x optimization that converts gradients from sparse to dense for better performance.
* Clean up after code review.
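The sparse-to-dense trick mentioned above can be sketched in plain Python. In real TF 1.x code this pattern typically calls tf.convert_to_tensor on each tf.IndexedSlices gradient before apply_gradients; the IndexedSlices namedtuple and to_dense helper below are hypothetical stand-ins so the sketch runs without TensorFlow:

```python
from collections import namedtuple

# Hypothetical stand-in for tf.IndexedSlices: sparse row updates
# (values) at the given row indices of a tensor of dense_shape.
IndexedSlices = namedtuple("IndexedSlices", ["values", "indices", "dense_shape"])

def to_dense(slices):
    """Scatter sparse row updates into a dense gradient (lists stand in for tensors)."""
    rows, cols = slices.dense_shape
    dense = [[0.0] * cols for _ in range(rows)]
    for row, idx in zip(slices.values, slices.indices):
        for col, v in enumerate(row):
            dense[idx][col] += v
    return dense

# A gradient list with one sparse and one already-dense entry:
grads = [IndexedSlices(values=[[1.0, 2.0]], indices=[1], dense_shape=(3, 2)),
         [[0.5]]]

# Densify only the sparse entries, mirroring the TF1-era optimization:
densified = [to_dense(g) if isinstance(g, IndexedSlices) else g for g in grads]
print(densified[0])  # [[0.0, 0.0], [1.0, 2.0], [0.0, 0.0]]
```

Densifying embedding-style sparse gradients trades memory for faster all-reduce/apply paths, which is why it helped these benchmarks.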
-
saberkun authored
* Merged commit includes the following changes:
  255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
  255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test.
  255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
  255010016 by hongkuny<hongkuny@google.com>: Internal
  254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name.
  254866171 by taylorrobie<taylorrobie@google.com>: Internal change
  254785517 by zongweiz<zongweiz@google.com>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs.
  254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
  PiperOrigin-RevId: 255493073
* Update README.md
David M. Chen authored
255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test.
255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
255010016 by hongkuny<hongkuny@google.com>: Internal
PiperOrigin-RevId: 255493073
-
- 26 Jun, 2019 1 commit
-
Aysar authored
-
- 25 Jun, 2019 1 commit
-
saberkun authored
254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name.
254866171 by taylorrobie<taylorrobie@google.com>: Internal change
PiperOrigin-RevId: 254874613
-
- 24 Jun, 2019 2 commits
-
saberkun authored
254785517 by A. Unique TensorFlower<gardener@tensorflow.org>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs.
254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
PiperOrigin-RevId: 254785517
nnigania authored
-
- 22 Jun, 2019 2 commits
-
George K authored
* restored missing function
* missing import
* missing imports
* updated tutorial link
* recovered _print_download_progress func
* change default train with float16 instead of float32 accuracy
* test disable func call
* redundant function call, currently data is pulled automatically in
* optimized imports
* optimized imports
-
Toby Boyd authored
-
- 21 Jun, 2019 5 commits
-
guptapriya authored
* trying fake merge call
* make metrics optional
* Remove extra print
-
Neil authored
-
Toby Boyd authored
* cpu benchmark and accuracy tests.
* add docstrings to fix lint.
-
Toby Boyd authored
* XLA FP32 and first test.
* More XLA benchmarks FP32.
* Add eager to NCF and refactor resnet.
* fix v2_0 calls and more flag refactor.
* Remove extra flag args.
* 90 epoch default.
* add return.
* remove xla not used by estimator.
* Remove duplicate run_eagerly.
* fix flag defaults.
* Remove fp16_implementation flag option.
* Remove stop early on mlperf test.
* remove unneeded args.
* load flags from keras mains.
-
Reed authored
-
- 20 Jun, 2019 8 commits
-
Haoyu Zhang authored
-
Haoyu Zhang authored
-
saberkun authored
254134531 by yuefengz<yuefengz@google.com>: Fix a typo in bert_benchmark.py
PiperOrigin-RevId: 254134531
Igor authored
-
Haoyu Zhang authored
* Do not set learning phase when skipping eval
* Do not set learning phase in no dist strat case
* Added device placement, tweaked benchmarks
* Added tweaked benchmarks for Cifar
* Fix device scope
* Fix lint
* Add explicit GPU placement flag
* Also run accuracy test with explicit GPU placement
* Added doc string
-
saberkun authored
254069984 by hongkuny<hongkuny@google.com>: Automated rollback of changelist 254060732.
254061429 by hongkuny<hongkuny@google.com>: Use host while loop for training steps.
254060732 by yifeif<yifeif@google.com>: Automated rollback of changelist 254027750.
254027750 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 254069984
Toby Boyd authored
* Add XLA benchmark tests, FP32 only for now.
* Add FP16 XLA tests.
* FP16 only tests.
-
anj-s authored
* . * .
-
- 19 Jun, 2019 2 commits
-
Reed authored
-
Toby Boyd authored
* Set default steps to 300K.
* Log flags to perfzero.
* Add XLA support to transformer:
  - Moved config logic to keras_utils.
  - Added enable_xla flag to _performance flags.
  - Did not refactor the enable_xla flag from keras resnet due to reliance on calling FLAGS in estimator keras; that is a needed refactor for another time.
* Fix g3 lint complaint.
* Refactor set config into keras_utils.
* Move flags out of main.
* Pipe through enable_xla.
* Update official/transformer/v2/misc.py
Co-Authored-By: Reed <reedwm@google.com>
-