"eigen-master/bench/analyze-blocking-sizes.cpp" did not exist on "e7df86554156b36846008d8ddbcc4d8521a16554"
- 05 Sep, 2018 5 commits
-
-
Toby Boyd authored
Move tf.cast to tf.float16 in input pipeline
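A minimal sketch of what moving the cast into the tf.data pipeline looks like, assuming a hypothetical `parse_and_cast` map function and a placeholder dataset (the real pipeline and field names differ):

```python
import tensorflow as tf

def parse_and_cast(image, label):
    # Cast in the input pipeline (on the CPU, overlapped with training)
    # instead of inside the model body.
    return tf.cast(image, tf.float16), label

# Placeholder stand-in for the real input pipeline.
images = tf.zeros([8, 224, 224, 3], dtype=tf.uint8)
labels = tf.zeros([8], dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .map(parse_and_cast)
           .batch(4))
```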
-
Reed authored
* Fix spurious "did not start correctly" error. The error "Generation subprocess did not start correctly" would occur if the async process started up after the main process checked for the subproc_alive file.
* Add error message
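The fix amounts to retrying the liveness check rather than checking once; a rough sketch under assumed file names and timeouts:

```python
import os
import time

def wait_for_subprocess(alive_file, timeout_sec=60, poll_interval_sec=1):
    """Poll for the liveness file instead of checking it a single time."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if os.path.exists(alive_file):
            return
        time.sleep(poll_interval_sec)
    # Clearer error message for the failure case.
    raise RuntimeError(
        "Generation subprocess did not start correctly: {} never appeared "
        "within {} seconds".format(alive_file, timeout_sec))
```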
-
Toby Boyd authored
-
Toby Boyd authored
-
Reed authored
When constructing the evaluation records, data_async_generation.py would copy the records into the final directory. The main process would wait until the eval records existed. However, the main process would sometimes read the eval records before they were fully copied, causing a DataLossError.
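One common way to avoid that race is to copy to a temporary name and rename into place, so a waiting reader never sees a partially copied file; a sketch of the idea, not the actual data_async_generation.py code:

```python
import os
import shutil
import tempfile

def publish_records(src_path, final_dir):
    """Copy src_path into final_dir without exposing a half-written file."""
    fd, tmp_path = tempfile.mkstemp(dir=final_dir)
    os.close(fd)
    shutil.copyfile(src_path, tmp_path)
    final_path = os.path.join(final_dir, os.path.basename(src_path))
    os.rename(tmp_path, final_path)  # atomic within one filesystem on POSIX
    return final_path
```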
-
- 04 Sep, 2018 3 commits
-
-
Toby Boyd authored
Update resnet defaults v1
-
Toby Boyd authored
ResNet synthetic data performance enhancement.
-
Yanhui Liang authored
-
- 02 Sep, 2018 4 commits
-
-
Joel Shor authored
Added fused_batch_norm parameter
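Presumably the new parameter is threaded through to the fused-kernel option on batch normalization; a hedged sketch using the TF 1.x layers API (the actual plumbing in the GAN code differs, and `use_fused_batch_norm` is a made-up wrapper argument):

```python
import tensorflow as tf

def batch_norm(inputs, training, use_fused_batch_norm=True):
    # fused=True selects the faster fused batch-norm kernel where available;
    # `use_fused_batch_norm` mirrors the kind of parameter the commit adds.
    return tf.layers.batch_normalization(
        inputs, training=training, fused=use_fused_batch_norm)
```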
-
Mikael Souza authored
-
Toby Boyd authored
-
Toby Boyd authored
-
- 01 Sep, 2018 2 commits
- 30 Aug, 2018 6 commits
-
-
Aman Gupta authored
Bypass the export-model step when training on TPUs, since it requires inference to be supported on TPUs. Remove this check once inference is supported. (#5209)
-
Mark Daoust authored
minor bug fix
-
Mark Daoust authored
-
Mark Daoust authored
-
Mark Daoust authored
Fix list format and typos in save and restore tutorial.
-
Don Kirkby authored
-
- 29 Aug, 2018 1 commit
-
-
Yanhui Liang authored
* Add distribution strategy to keras benchmark
* Fix comments
* Fix lints
-
- 28 Aug, 2018 2 commits
-
-
Jaeman authored
* Fix bug in distributed MNIST training using the MirroredStrategy API
* Remove unnecessary code and change the distribution strategy source
  - Remove multi-gpu
  - Remove TowerOptimizer
  - Change from MirroredStrategy to distribution_utils.get_distribution_strategy
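A sketch of what the switch looks like, assuming the `distribution_utils.get_distribution_strategy` helper under official/utils/misc takes the number of GPUs (the exact signature varies between releases):

```python
import tensorflow as tf
from official.utils.misc import distribution_utils

def make_run_config(num_gpus):
    # Let the shared helper choose the strategy (one device vs. mirrored)
    # instead of building MirroredStrategy and TowerOptimizer by hand.
    strategy = distribution_utils.get_distribution_strategy(num_gpus)
    return tf.estimator.RunConfig(train_distribute=strategy)
```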
-
Josh Gordon authored
-
- 27 Aug, 2018 5 commits
-
-
Taylor Robie authored
* Make ResNet robust to the case that epochs_between_evals does not divide train_epochs, and add an --eval_only option
* Add some comments to make the control flow easier to follow
* Address PR comments
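The robustness change boils down to letting the last training chunk be shorter than epochs_between_evals; an illustrative helper (not the actual ResNet code):

```python
def training_schedule(train_epochs, epochs_between_evals):
    """Split train_epochs into evaluation-sized chunks.

    The final chunk absorbs the remainder, so train_epochs=10 with
    epochs_between_evals=4 yields [4, 4, 2] rather than silently
    training for 12 epochs.
    """
    n_full, remainder = divmod(train_epochs, epochs_between_evals)
    schedule = [epochs_between_evals] * n_full
    if remainder:
        schedule.append(remainder)
    return schedule

assert training_schedule(10, 4) == [4, 4, 2]
```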
-
Toby Boyd authored
* Add 5 epoch warmup
* get_lr with warm_up only for imagenet
* Add base_lr, remove fp16 unittest arg validation
* Remove validation check stopping v1 and FP16
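A rough sketch of a 5-epoch linear warmup layered on a step schedule, with made-up boundary values; the real get_lr works per batch and only applies the warmup for ImageNet:

```python
def get_learning_rate(epoch, base_lr, warmup_epochs=5,
                      boundaries=(30, 60, 80), decay=0.1):
    if epoch < warmup_epochs:
        # Ramp linearly up to base_lr over the warmup epochs.
        return base_lr * float(epoch + 1) / warmup_epochs
    lr = base_lr
    for boundary in boundaries:
        if epoch >= boundary:
            lr *= decay
    return lr
```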
-
Rutger Roffel authored
* Fixed TensorFlow version check in object_detection_tutorial.ipynb
* Changed the minimum version to 1.9.0 for the object detection notebook
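The version gate in the notebook is roughly of this shape (the exact error message may differ):

```python
import tensorflow as tf
from distutils.version import StrictVersion

if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
    raise ImportError(
        'Please upgrade your TensorFlow installation to v1.9.0 or later.')
```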
-
Mark Daoust authored
Stub moved notebooks.
-
Mark Daoust authored
-
- 25 Aug, 2018 1 commit
-
-
Toby Boyd authored
* Add top_5 to eval.
* Change labels shape to [?] from [?, 1] so it matches the unittest.
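A hedged sketch of both changes in an Estimator-style eval metrics dict; the real model code differs in detail:

```python
import tensorflow as tf

def eval_metric_ops(logits, labels):
    # Labels arrive as [batch, 1]; squeeze to [batch] so they match what the
    # metric ops (and the unittest) expect.
    labels = tf.cast(tf.squeeze(labels, axis=1), tf.int64)
    predictions = tf.argmax(logits, axis=1)
    in_top_5 = tf.cast(tf.nn.in_top_k(logits, labels, k=5), tf.float32)
    return {
        'accuracy': tf.metrics.accuracy(labels, predictions),
        'top_5_accuracy': tf.metrics.mean(in_top_5),
    }
```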
-
- 24 Aug, 2018 2 commits
-
-
Billy Lamberta authored
Typo fix
-
Steven Schmatz authored
-
- 23 Aug, 2018 3 commits
-
-
pkulzc authored
Update the runtime version in bash script
-
Wentao Xu authored
The bash script that submits the training job for pets detection uses a runtime-version of 1.8. This triggers `TypeError: non_max_suppression() got an unexpected keyword argument 'score_threshold'` on Google Cloud, since 1.8 and older do not support this keyword argument. Therefore, update the runtime version to 1.9, the most recent runtime version, published on June 27, 2018. See https://github.com/tensorflow/models/issues/5056 and https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list
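For reference, the keyword in question only exists from TF 1.9 on, which is why the job's runtime version has to be at least 1.9:

```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.1, 0.1, 0.9, 0.9]])
scores = tf.constant([0.9, 0.2])
# score_threshold was added to non_max_suppression in TensorFlow 1.9;
# on a 1.8 runtime this call raises the TypeError quoted above.
selected = tf.image.non_max_suppression(
    boxes, scores, max_output_size=10,
    iou_threshold=0.5, score_threshold=0.3)
```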
-
Cameron Rudnick authored
Updated model_lib to use min_score_threshold and max_num_boxes_to_visualize from the eval config.
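A sketch of wiring those two eval-config fields into the visualization call; the field names follow the commit message, while the keyword names on the visualization side are assumptions:

```python
def visualization_kwargs(eval_config):
    # Pull the thresholds from the config instead of hard-coding them.
    return {
        'min_score_thresh': eval_config.min_score_threshold,
        'max_boxes_to_draw': eval_config.max_num_boxes_to_visualize,
    }
```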
-
- 22 Aug, 2018 3 commits
-
-
bananabowl authored
Text classification tutorial clarification
-
bananabowl authored
Rename "num_examples" to "num_reviews" to be consistent with the "one-hot-encode" size description: "num_words * num_reviews".
-
Reed authored
* Fix convergence issues for MLPerf. Thank you to @robieta for helping me find these issues, and for providing an algorithm for the `get_hit_rate_and_ndcg_mlperf` function. This change causes every forked process to set a new seed, so that forked processes do not generate the same set of random numbers. This improves evaluation hit rates. Additionally, it adds a flag, --ml_perf, that makes further changes so that the evaluation hit rate can match the MLPerf reference implementation. I ran 4 times with --ml_perf and 4 times without. Without --ml_perf, the highest hit rates achieved by each run were 0.6278, 0.6287, 0.6289, and 0.6241. With --ml_perf, the highest hit rates were 0.6353, 0.6356, 0.6367, and 0.6353.
* Fix lint error
* Fix failing test
* Address @robieta's feedback
* Address more feedback
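The per-fork reseeding is the key mechanism; a self-contained sketch of the idea with multiprocessing (the NCF code organizes this differently):

```python
import multiprocessing
import os

import numpy as np

def _reseed():
    # Reseed from OS entropy in each forked worker; otherwise every fork
    # inherits the parent's RNG state and samples identical negatives,
    # which depresses the evaluation hit rate.
    np.random.seed(int.from_bytes(os.urandom(4), 'little'))

def sample_negatives(_):
    return np.random.randint(0, 1000, size=4).tolist()

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4, initializer=_reseed)
    print(pool.map(sample_negatives, range(4)))
    pool.close()
```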
-
- 21 Aug, 2018 3 commits
-
-
Mark Daoust authored
These have all moved to https://github.com/tensorflow/docs/tree/master/site/en
-
Billy Lamberta authored
Basic_Regression notebook: Python2 compatibility
-
Billy Lamberta authored
-