"tests/test_tools/__init__.py" did not exist on "f99149b8ddc24251dce6de33cfc4ec09e18821c2"
- 11 Oct, 2018 2 commits
Shawn Wang authored
Shawn Wang authored
- 09 Oct, 2018 2 commits
Shawn Wang authored
Shawn Wang authored
- 06 Oct, 2018 1 commit
Toby Boyd authored
- 05 Oct, 2018 2 commits
Toby Boyd authored
Taylor Robie authored
* improve default handling for eval_batch_size
* return eval_batch_size default to None
* fix syntax error
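The default handling described above might look like the following sketch; the function name and the fallback to the training batch size are assumptions for illustration, not the repo's actual flag parsing.

```python
# Illustrative only: leave eval_batch_size as None by default and fall back to
# the training batch size when the user did not set it explicitly.
def resolve_eval_batch_size(eval_batch_size, train_batch_size):
    return train_batch_size if eval_batch_size is None else eval_batch_size
```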
- 04 Oct, 2018 2 commits
Taylor Robie authored
* Update resnet README with new checkpoints and SavedModels
* add more detail on channels_first vs channels_last
* fix typo
* add disclaimer about checkpoints
Taylor Robie authored
* set strip_default_attrs=True for SavedModel exports
* specify dtype in resnet export
* another dtype fix
* fix another dtype issue, and set --image_bytes_as_serving_input to default to False
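A minimal sketch of a SavedModel export that passes strip_default_attrs=True and pins the input dtype, assuming a TF 1.x Estimator; the shape and names are illustrative, not the resnet export code itself.

```python
import tensorflow as tf  # TF 1.x style API

def serving_input_receiver_fn():
    # dtype is set explicitly, in the spirit of "specify dtype in resnet export".
    features = tf.placeholder(dtype=tf.float32, shape=[None, 224, 224, 3],
                              name="input_tensor")
    return tf.estimator.export.ServingInputReceiver(features, {"image": features})

# estimator = tf.estimator.Estimator(model_fn=..., model_dir=...)
# estimator.export_savedmodel("/tmp/resnet_export", serving_input_receiver_fn,
#                             strip_default_attrs=True)
```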
- 03 Oct, 2018 2 commits
Toby Boyd authored
Taylor Robie authored
* move evaluation from numpy to tensorflow
  - fix syntax error
  - don't use sigmoid to convert logits; there is too much precision loss
  - WIP: add logit metrics
  - continue refactor of NCF evaluation
  - fix syntax error
  - fix bugs in eval loss calculation
  - fix eval loss reweighting
  - remove numpy based metric calculations
  - fix logging hooks
  - fix sigmoid to softmax bug
  - fix comment
  - catch rare PIPE error and address some PR comments
* fix metric test and address PR comments
* delint and fix python2
* fix test and address PR comments
* extend eval to TPUs
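A rough sketch of computing a ranking metric directly in TensorFlow rather than numpy, assuming logits of shape [num_users, num_candidates] and a per-row column index of the positive item; this illustrates the idea, not NCF's actual evaluation code. Ranking on raw logits also sidesteps the sigmoid precision loss mentioned above.

```python
import tensorflow as tf  # TF 1.x graph-mode style

def hit_rate_at_k(logits, positive_index, k=10):
    """Fraction of rows whose positive candidate ranks in the top k.

    logits: float32 [num_users, num_candidates] raw scores.
    positive_index: int32 [num_users] column index of the true item.
    """
    hits = tf.nn.in_top_k(predictions=logits, targets=positive_index, k=k)
    return tf.reduce_mean(tf.cast(hits, tf.float32))
```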
- 02 Oct, 2018 1 commit
Reed authored
- 01 Oct, 2018 2 commits
Aman Gupta authored
Some changes specific to prediction: remove traces of expected results, as this is prediction only.
netfs authored
with a serving signature that accepts JPEG image bytes instead of a fixed-size [HxWxC] image tensor. Passing JPEG image bytes is easier for inference/serving use cases. The model internally resizes/crops the JPEG image to the required [HxWxC] tensor before passing it on for actual model inference. This change aligns with the Cloud TPU ResNet-50 model, which offers a similar interface (JPEG bytes) for inference here: https://github.com/tensorflow/tpu/tree/master/models/official/resnet
NOTE: This flag is set to `True` by default for ImageNet and is disallowed for CIFAR (as it does not apply to CIFAR).
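A minimal sketch of such a serving input function, assuming a 224x224x3 float input; it shows the JPEG-bytes-to-tensor conversion pattern, not the repo's exact implementation.

```python
import tensorflow as tf  # TF 1.x style API

def image_bytes_serving_input_fn():
    def _decode_and_resize(image_bytes):
        image = tf.image.decode_jpeg(image_bytes, channels=3)
        image = tf.image.resize_images(image, [224, 224])
        return tf.cast(image, tf.float32)

    image_bytes_list = tf.placeholder(dtype=tf.string, shape=[None],
                                      name="input_tensor")
    images = tf.map_fn(_decode_and_resize, image_bytes_list, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        images, {"image_bytes": image_bytes_list})
```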
- 28 Sep, 2018 1 commit
Toby Boyd authored
- 25 Sep, 2018 2 commits
Aman Gupta authored
Aman Gupta authored
Right now we don't have input data for prediction, so the top 10 entries of the test data are used as input.
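A small illustrative sketch of reusing the first ten test rows as stand-in prediction input; the file name and the label column are assumptions, not the project's actual code.

```python
import pandas as pd

def load_prediction_input(test_csv_path="test.csv", num_rows=10):
    test_df = pd.read_csv(test_csv_path)
    # Prediction input should not carry expected results, so drop a label
    # column if one is present.
    return test_df.drop(columns=["label"], errors="ignore").head(num_rows)
```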
- 20 Sep, 2018 1 commit
Taylor Robie authored
* bug fixes and add seed
* more random corrections
* make cleanup more robust
* return cleanup fn
* delint and address PR comments
* delint and fix tests
* delinting is never done
* add pipeline hashing
* delint
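Two of the items above (returning a cleanup fn, pipeline hashing) follow a common pattern; the sketch below is a hedged illustration with assumed names, not the NCF pipeline code.

```python
import hashlib
import shutil
import tempfile

def make_pipeline_dir(config_str):
    """Create a temp directory for generated data and return (path, cleanup_fn)."""
    # Hash the pipeline configuration so differently configured runs get
    # distinct directories and stale artifacts are easy to spot.
    pipeline_hash = hashlib.sha256(config_str.encode("utf-8")).hexdigest()[:16]
    path = tempfile.mkdtemp(prefix="pipeline_{}_".format(pipeline_hash))

    def cleanup():
        # Robust cleanup: tolerate a directory that was already removed.
        shutil.rmtree(path, ignore_errors=True)

    return path, cleanup
```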
- 19 Sep, 2018 1 commit
Naurril authored
- 17 Sep, 2018 1 commit
Tayo Oguntebi authored
- 14 Sep, 2018 1 commit
Reed authored
Sometimes it takes longer than 15 seconds, and even longer than 1 minute, for the subprocess to spawn and create the alive file.
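The usual fix is to poll for the alive file with a much longer deadline; the sketch below uses assumed names and timeouts, not the actual data_async_generation code.

```python
import os
import time

def wait_for_alive_file(alive_path, timeout_sec=300, poll_interval_sec=1.0):
    """Poll until alive_path exists, or fail once the deadline passes."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if os.path.exists(alive_path):
            return
        time.sleep(poll_interval_sec)
    raise RuntimeError("Generation subprocess did not start correctly "
                       "(no alive file at {} after {}s)".format(alive_path, timeout_sec))
```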
- 13 Sep, 2018 4 commits
- 11 Sep, 2018 2 commits
- 05 Sep, 2018 4 commits
Reed authored
* Fix spurious "did not start correctly" error. The error "Generation subprocess did not start correctly" would occur if the async process started up after the main process checked for the subproc_alive file.
* Add error message
Toby Boyd authored
Toby Boyd authored
Reed authored
When constructing the evaluation records, data_async_generation.py would copy the records into the final directory. The main process would wait until the eval records existed. However, the main process would sometimes read the eval records before they were fully copied, causing a DataLossError.
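A sketch of the standard way to avoid this race: copy to a temporary name inside the destination directory and atomically rename, so the reader never sees a partially written file. The paths and names here are assumptions.

```python
import os
import shutil

def publish_eval_records(src_path, final_path):
    tmp_path = final_path + ".incomplete"
    shutil.copyfile(src_path, tmp_path)
    # os.rename is atomic on POSIX when source and target share a filesystem,
    # so readers see either the old state or the complete file.
    os.rename(tmp_path, final_path)
```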
- 04 Sep, 2018 1 commit
Yanhui Liang authored
- 02 Sep, 2018 2 commits
- 01 Sep, 2018 2 commits
- 30 Aug, 2018 2 commits
Aman Gupta authored
Bypass the export-model step when training on TPUs, as this needs inference to be supported on TPUs. Remove this check once inference is supported. (#5209)
Aman Gupta authored
Bypass the export-model step when training on TPUs, as this needs inference to be supported on TPUs. Remove this check once inference is supported.
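A minimal illustration of the bypass, with assumed flag and function names; it is not the repo's actual control flow.

```python
import tensorflow as tf  # TF 1.x Estimator-style API

def maybe_export(estimator, serving_input_fn, use_tpu, export_dir):
    if use_tpu:
        # TODO: remove this bypass once inference is supported on TPUs.
        tf.logging.info("Skipping SavedModel export; inference is not yet "
                        "supported on TPUs.")
        return None
    return estimator.export_savedmodel(export_dir, serving_input_fn)
```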
- 29 Aug, 2018 1 commit
Yanhui Liang authored
* Add distribution strategy to keras benchmark
* Fix comments
* Fix lints
- 28 Aug, 2018 1 commit
Jaeman authored
* Fix bug in distributed training on MNIST using the MirroredStrategy API
* Remove unnecessary code and change the distribution strategy source
  - Remove multi-gpu
  - Remove TowerOptimizer
  - Change from MirroredStrategy to distribution_utils.get_distribution_strategy
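A hedged sketch of the get_distribution_strategy pattern for an Estimator-based MNIST model; the helper signature and num_gpus handling here are assumptions, not the repo's distribution_utils implementation.

```python
import tensorflow as tf  # TF 1.x, where MirroredStrategy lived under tf.contrib

def get_distribution_strategy(num_gpus):
    # Single-device runs need no strategy; multi-GPU runs mirror variables.
    if num_gpus <= 1:
        return None
    return tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)

# run_config = tf.estimator.RunConfig(
#     train_distribute=get_distribution_strategy(num_gpus=2))
# classifier = tf.estimator.Estimator(model_fn=mnist_model_fn, config=run_config)
```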