- 01 Mar, 2021 1 commit
  - A. Unique TensorFlower authored
    PiperOrigin-RevId: 360256877
- 28 Feb, 2021 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 359994674
  - Hongkun Yu authored
    PiperOrigin-RevId: 359990341
- 22 Jan, 2021 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 353315689
- 19 Oct, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 337927475
- 13 Oct, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 336960641
  - Hongkun Yu authored
    PiperOrigin-RevId: 336795303
- 12 Aug, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 326286926
  - Hongkun Yu authored
    PiperOrigin-RevId: 326286926
- 06 Aug, 2020 2 commits
  - Hongkun Yu authored
    Move mock_task to utils/testing/
    PiperOrigin-RevId: 325275356
  - Hongkun Yu authored
    Move mock_task to utils/testing/
    PiperOrigin-RevId: 325275356
- 29 Apr, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 309079916
- 14 Apr, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 306521269
- 09 Apr, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 305709689
- 17 Mar, 2020 1 commit
  - ayushmankumar7 authored
- 14 Mar, 2020 1 commit
  - Sai Ganesh Bandiatmakuri authored
    PiperOrigin-RevId: 300858086
- 29 Jan, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 292178802
- 11 Dec, 2019 1 commit
  - David Chen authored
    PiperOrigin-RevId: 284874717
- 21 Nov, 2019 1 commit
  - Sai Ganesh Bandiatmakuri authored
    PiperOrigin-RevId: 281846531
- 19 Nov, 2019 1 commit
  - Jose Baiocchi authored
    PiperOrigin-RevId: 281192912
- 10 Oct, 2019 1 commit
  - A. Unique TensorFlower authored
    Change the benchmark's log verbosity to logging.INFO. DEBUG appears to map to --v=1 internally, which is far too verbose for benchmarking.
    PiperOrigin-RevId: 274040907
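A minimal sketch of the verbosity change described above, using absl logging; the function name and the exact module the benchmark code configures are assumptions, not taken from the commit:

```python
# Hedged sketch: switch benchmark logging from DEBUG to INFO so benchmark runs
# are not flooded with --v=1 level detail.
from absl import logging


def configure_benchmark_logging():
  # INFO keeps per-run summaries without per-step DEBUG output.
  logging.set_verbosity(logging.INFO)


configure_benchmark_logging()
logging.info("benchmark logging configured")
```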
- 04 Sep, 2019 1 commit
  - Reed Wanderman-Milne authored
    --clean, --train_epochs, and --epochs_between_evals are no longer exposed in models that do not use them.
    PiperOrigin-RevId: 267065651
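A hedged sketch of how per-model flag exposure like this can be arranged with absl; the helper name define_base and its keyword switches are assumptions, not necessarily the repo's exact API:

```python
# Each model asks only for the flags it actually uses, so unused flags such as
# --clean or --train_epochs are never defined for it. Names are illustrative.
from absl import flags


def define_base(clean=True, train_epochs=True, epochs_between_evals=True):
  if clean:
    flags.DEFINE_bool('clean', False,
                      'Delete the model directory before training.')
  if train_epochs:
    flags.DEFINE_integer('train_epochs', 1, 'Number of epochs to train.')
  if epochs_between_evals:
    flags.DEFINE_integer('epochs_between_evals', 1,
                         'Number of training epochs between evaluations.')


# A model that never evaluates mid-training would call, for example:
# define_base(epochs_between_evals=False)
```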
- 21 Aug, 2019 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 264527204
- 19 Aug, 2019 1 commit
  - Reed Wanderman-Milne authored
    Only the V1 ResNet model uses --max_train_steps, so this removes the flag from the keras_application_models, MNIST, Keras ResNet, and CTL ResNet models. Before this change, those models allowed the flag to be specified but ignored it. The "max_train" argument was also removed from the run_synthetic function, since it only had meaning for the V1 ResNet model; instead, the V1 ResNet model now passes --max_train_steps=1 directly to run_synthetic.
    PiperOrigin-RevId: 264269836
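A rough sketch of the arrangement described above, assuming a hypothetical run_synthetic helper and flag names; only the V1 ResNet entry point forwards --max_train_steps itself:

```python
# Illustrative only: run_synthetic no longer takes a max_train argument; a model
# that needs a step cap passes the flag explicitly. Helper and flag names here
# are assumptions, and the flags must already be defined by the model being run.
from absl import app


def run_synthetic(main, extra_flags=None):
  """Runs a model's main() on synthetic data plus any model-specific flags."""
  argv = ['model', '--use_synthetic_data=true'] + (extra_flags or [])
  app.run(main, argv=argv)


# V1 ResNet would be the only caller that caps itself at a single training step:
# run_synthetic(resnet_v1_main, extra_flags=['--max_train_steps=1'])
```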
- 02 Aug, 2019 2 commits
  - Hongkun Yu authored
    * Update to use py3 lint
    * Update model_saving_utils.py. Testing, to be reverted.
    * Update model_saving_utils.py
  - Hongkun Yu authored
    The old lint is no longer used.
- 23 Jul, 2019 1 commit
  - Hongkun Yu authored
    Only report errors, and write them to an error file.
- 22 Jul, 2019 1 commit
  - Hongkun Yu authored
    * Update pylint.rcfile
    * Update pylint.rcfile
    * Update pylint.rcfile
    * Add a new sanity-check script for lint to replace the current lint script.
    * Revert "Update pylint.rcfile" (reverts commit f6036cd7e7c4b9e3eeb47bb56a63927a040a2761)
    * Revert "Update pylint.rcfile" (reverts commit e3af497342e26bbbbecfc8c8f79cb0e24a2ef960)
    * Revert "Update pylint.rcfile" (reverts commit 6136636eee6e90fd191ebbb4ccaa9fb89c0290f4)
    * Update scripts
    * Disable trailing-newlines
- 03 Jul, 2019 1 commit
  - Toby Boyd authored
    * Fix unit test failures.
    * 96% of TF 2.0 tests on GPU are passing.
    * Currently all GPU and CPU TF 2.0 tests passing.
    * Address code comments.
    * Use TF 2.0 cast.
    * Comment about working on TF 2.0 CPU.
    * Uses contrib turn off for TF 2.0.
    * Fix wide_deep and add keras_common_tests.
    * Use context to get num_gpus.
    * Switch to tf.keras.metrics.
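The TF 2.0 replacements mentioned above (tf.cast and tf.keras.metrics) look roughly like this; the tensors and metric choice are illustrative, not taken from the commit:

```python
import tensorflow as tf

# tf.cast is the TF 2.0 way to convert dtypes (instead of helpers like tf.to_float).
labels = tf.constant([1, 0, 1])
labels_float = tf.cast(labels, tf.float32)

# Metrics move from tf.metrics to stateful tf.keras.metrics objects.
predictions = tf.constant([[0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
accuracy.update_state(labels, predictions)
print(float(accuracy.result()))
```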
- 22 Jun, 2019 1 commit
  - Toby Boyd authored
- 24 May, 2019 1 commit
  - Toby Boyd authored
    * Moved common keras code to utils.
    * Initial 1 gpu benchmark:
      - Aligned flags with resnet example.
      - Removed code/features that are not super useful.
      - Eval as part of train if bleu source/ref provided.
      - Add exp_per_second hook.
    * Rename benchmark classes, pass batch-size and log_steps.
    * Fix docstring.
    * Predict done with checkpoints inline (perfzero baseclass).
    * Steps not epochs, with smoother training loop.
    * Do not initialize history outside loop.
    * 5000 between eval, not 500.
    * Estimator to keras.
    * Remove epochs var.
    * Use range, not xrange.
    * 200K steps for 1 gpu.
    * Fix global step.
- 11 May, 2019 1 commit
  - Toby Boyd authored
    Test passes locally with Python 3, and the test is already skipped for Python 2.
- 11 Feb, 2019 1 commit
  - Toby Boyd authored
    * Remove contrib thread pool.
    * Remove commented-out contrib import.
    * Fix lint issues.
    * Move tf.data.options higher. Tweak line breaks.
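For context, the replacement for a contrib thread pool is the options mechanism on tf.data; a hedged sketch using option names as they existed around TF 2.x, not necessarily the exact code in this commit:

```python
import tensorflow as tf


def apply_private_threadpool(dataset, num_threads):
  """Attaches a private threadpool to a dataset via tf.data.Options."""
  options = tf.data.Options()
  # experimental_threading was the TF 2.x attribute name; newer releases also
  # expose it as .threading.
  options.experimental_threading.private_threadpool_size = num_threads
  return dataset.with_options(options)


ds = apply_private_threadpool(tf.data.Dataset.range(10), num_threads=4)
```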
- 08 Feb, 2019 1 commit
  - Goldie Gadde authored
    This reverts commit 57e07520.
- 06 Feb, 2019 1 commit
  - Goldie Gadde authored
    This reverts commit d6b2b83c.
- 05 Feb, 2019 1 commit
  - Goldie Gadde authored
    * Add resnet56 short tests. (#6101)
    * Add resnet56 short tests.
      - Created base benchmark module.
      - Renamed accuracy test class to contain the word Accuracy, which will result in a need to update all the jobs and a loss of history, but is worth it.
      - Short tests are mostly copied from shining with OSS refactor.
    * Address feedback.
    * Move flag_methods to init - address setting default flags repeatedly.
    * Rename accuracy tests.
    * Lint errors resolved.
    * Fix model_dir set to flags.data_dir.
    * Fixed not fully pulling out flag_methods.
    * Use core mirrored strategy in official models. (#6126)
    * Imagenet short tests. (#6132)
    * Add short imagenet tests (taken from seemuch) - also rename to match go-forward naming.
    * Fix method name.
    * Update doc strings.
    * Fix GPU number.
    * Points default data_dir to child folder. (#6131) Failed test is Python 2 and was a Kokoro failure.
    * Imagenet short tests. (#6136)
    * Add short imagenet tests (taken from seemuch) - also rename to match go-forward naming.
    * Fix method name.
    * Update doc strings.
    * Fix GPU number.
    * Add fill_objects.
    * Fixed calling wrong class in super.
    * Fix lint issue.
    * Flag. (#6121)
    * Fix the turn_off_ds flag problem.
    * Add param names to all args.
    * Export benchmark stats using tf.test.Benchmark.report_benchmark(). (#6103)
    * Export benchmark stats using tf.test.Benchmark.report_benchmark().
    * Fix Python style using pyformat.
    * Typos. (#6120)
    * log verbosity=2 logs every epoch, no progress bars. (#6142)
    * tf_upgrade_v2 on resnet and utils folder.
    * tf_upgrade_v2 on resnet and utils folder.
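The report_benchmark() export mentioned above works roughly like this; the benchmark class, metric names, and values are illustrative, not taken from the commit:

```python
import tensorflow as tf


class Resnet56BenchmarkExample(tf.test.Benchmark):
  """Sketch of exporting stats through tf.test.Benchmark.report_benchmark()."""

  def benchmark_1_gpu(self):
    # Stand-in values for a real training/eval run.
    wall_time_sec, top_1_accuracy = 123.4, 0.93
    self.report_benchmark(
        iters=-1,
        wall_time=wall_time_sec,
        extras={'top_1_accuracy': top_1_accuracy})


if __name__ == '__main__':
  Resnet56BenchmarkExample().benchmark_1_gpu()
```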
- 07 Jan, 2019 1 commit
  - Taylor Robie authored
    Add a bisection-based producer for increased scalability, enable fully deterministic data production, and use the materialized and bisection producers to check each other (via expected output MD5s).
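A rough sketch of the bisection idea behind such a producer (my reading of the technique, not the repo's code): sample negatives for a user by drawing from the complement of the user's sorted positive items and mapping each draw back to an item id with a binary search.

```python
import bisect

import numpy as np


def sample_negatives(sorted_positives, num_items, num_samples, rng):
  """Samples item ids that are not in sorted_positives (assumed sorted)."""
  # keys[i] counts how many non-positive ids lie below sorted_positives[i];
  # it is non-decreasing, so bisection can map a draw back to an item id.
  keys = [item - i for i, item in enumerate(sorted_positives)]
  num_negatives = num_items - len(sorted_positives)
  draws = rng.randint(0, num_negatives, size=num_samples)
  return np.array([d + bisect.bisect_right(keys, d) for d in draws])


# Example: items 0..4 with positives {1, 3} can only yield 0, 2, or 4.
print(sample_negatives([1, 3], num_items=5, num_samples=4,
                       rng=np.random.RandomState(0)))
```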
- 30 Jul, 2018 1 commit
  - Taylor Robie authored
    * Intermediate commit.
    * ncf now working.
    * Reorder pipeline.
    * Allow batched decode for file backed dataset.
    * Fix bug.
    * More tweaks.
    * Parallelize false negative generation.
    * Shared pool hack.
    * Workers ignore sigint.
    * Intermediate commit.
    * Simplify buffer backed dataset creation to fixed length record approach only. (more cleanup needed)
    * More tweaks.
    * Simplify pipeline.
    * Fix misplaced cleanup() calls. (validation works!)
    * More tweaks.
    * Sixify memoryview usage.
    * More sixification.
    * Fix bug.
    * Add future imports.
    * Break up training input pipeline.
    * More pipeline tuning.
    * First pass at moving negative generation to async.
    * Refactor async pipeline to use files instead of ipc.
    * Refactor async pipeline.
    * Move expansion and concatenation from reduce worker to generation workers.
    * Abandon complete async due to interactions with the tensorflow threadpool.
    * Cleanup.
    * Remove performance_comparison.py.
    * Experiment with rough generator + interleave pipeline.
    * Yet more pipeline tuning.
    * Update on-the-fly pipeline.
    * Refactor preprocessing, and move train generation behind a GRPC server.
    * Fix leftover call.
    * Intermediate commit.
    * Intermediate commit.
    * Fix index error in data pipeline, and add logging to train data server.
    * Make sharding more robust to imbalance.
    * Correctly sample with replacement.
    * File buffers are no longer needed for this branch.
    * Tweak sampling methods.
    * Add README for data pipeline.
    * Fix eval sampling, and vectorize eval metrics.
    * Add spillover and static training batch sizes.
    * Clean up cruft from earlier iterations.
    * Rough delint.
    * Delint 2 / n.
    * Add type annotations.
    * Update run script.
    * Make run.sh a bit nicer.
    * Change embedding initializer to match reference.
    * Rough pass at pure estimator model_fn.
    * Impose static shape hack (revisit later).
    * Refinements.
    * Fix dir error in run.sh.
    * Add documentation.
    * Add more docs and fix an assert.
    * Old data test is no longer valid. Keeping it around as reference for the new one.
    * Rough draft of data pipeline validation script.
    * Don't rely on shuffle default.
    * Tweaks and documentation.
    * Add separate eval batch size for performance.
    * Initial commit.
    * Terrible hacking.
    * Mini hacks.
    * Missed a bug.
    * Messing about trying to get TPU running.
    * TFRecords based TPU attempt.
    * Bug fixes.
    * Don't log remotely.
    * More bug fixes.
    * TPU tweaks and bug fixes.
    * More tweaks.
    * More adjustments.
    * Rework model definition.
    * Tweak data pipeline.
    * Refactor async TFRecords generation.
    * Temp commit to run.sh.
    * Update log behavior.
    * Fix logging bug.
    * Add check for subprocess start to avoid cryptic hangs.
    * Unify deserialize and make it TPU compliant.
    * Delint.
    * Remove gRPC pipeline code.
    * Fix logging bug.
    * Delint and remove old test files.
    * Add unit tests for NCF pipeline.
    * Delint.
    * Clean up run.sh, and add run_tpu.sh.
    * Forgot the most important line.
    * Fix run.sh bugs.
    * Yet more bash debugging.
    * Small tweak to add keras summaries to model_fn.
    * Clean up sixification issues.
    * Address PR comments.
    * Delinting is never over.
- 25 May, 2018 1 commit
  - Karmel Allison authored
    * Using BenchmarkLogger
    * Using BenchmarkLogger
    * Fixing tests
    * Linting fixes.
    * Adding comments
    * Moving mock logger
    * Moving mock logger
    * Glinting
    * Responding to CR
    * Reverting assertEmpty
- 03 May, 2018 1 commit
  - Taylor Robie authored
    * Squash of modular absl usage commits.
    * Delint.
    * Address PR comments.
    * Change hooks to a comma-separated list, as absl behavior for space-separated lists is not as expected.
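Absl's list flags split on commas, which is what the hooks change above relies on; a hedged sketch where the flag default and hook names are illustrative:

```python
from absl import app, flags

# DEFINE_list parses a comma-separated value, so
# --hooks=LoggingTensorHook,ProfilerHook yields ['LoggingTensorHook', 'ProfilerHook'].
flags.DEFINE_list('hooks', 'LoggingTensorHook',
                  'Comma-separated list of training hook names.')

FLAGS = flags.FLAGS


def main(_):
  print(FLAGS.hooks)


if __name__ == '__main__':
  app.run(main)
```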