- 18 Jun, 2019 1 commit
-
nnigania authored
* Add a new perf test for NCF, and change some names.
* Make NCF use the data from the GCP bucket, and remove the need to re-download data more than one day old. Reorganize the perf-zero tests.
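
A hedged sketch of the staleness check described above, assuming the dataset is cached at a local path (`needs_download` and the one-day cutoff are illustrative helpers, not the repo's actual code):

```
import os
import time

_ONE_DAY_SECONDS = 24 * 60 * 60

def needs_download(path):
    """Return True if the cached file is missing or more than a day old."""
    if not os.path.exists(path):
        return True
    return time.time() - os.path.getmtime(path) > _ONE_DAY_SECONDS
```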
-
- 20 Apr, 2019 1 commit
-
Shining Sun authored
* Remove contrib imports, or move them inline
* Use the exposed API for FixedLenFeature
* Replace tf.logging with absl logging
* Change GFile to v2 APIs
* Replace tf.logging with absl logging in movielens
* Fix an import bug
* Change gfile to v2 APIs in code
* Swap to Keras optimizer v2
* Bug fix for optimizer
* Change tf.log to tf.keras.backend.log
* Change the loss function to a Keras loss
* Convert another loss to a Keras loss
* Resolve comments and fix lint
* Add a docstring
* Fix existing tests and add new tests for DS
* Add tests for multi-replica
* Fix lint
* Resolve comments
* Make estimator run in TF 2.0
* Use compat v1 loss
* Fix lint issue
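
A hedged before/after sketch of the TF 2.x migrations listed above; these are representative one-liners, not the exact diffs from the commit:

```
from absl import logging
import tensorflow as tf

logging.info("starting eval")            # was: tf.logging.info("starting eval")

f = tf.io.gfile.GFile("ratings.csv")     # was: tf.gfile.Open("ratings.csv")

x = tf.constant([0.5, 2.0])
log_x = tf.keras.backend.log(x)          # was: tf.log(x)

optimizer = tf.keras.optimizers.Adam()   # Keras optimizer v2
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```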
-
- 27 Mar, 2019 1 commit
-
cclauss authored
* from NCF_input import NCFDataset for line 181

The type `NCFDataset` is used in the type declaration on line 181, but it is never imported. [flake8](http://flake8.pycqa.org) testing of https://github.com/tensorflow/models on Python 3.7.1:

```
$ flake8 . --count --select=E9,F63,F72,F82 --show-source --statistics
./official/recommendation/data_preprocessing.py:180:3: F821 undefined name 'NCFDataset'
  # type: (str, str, dict, typing.Optional[str], bool, typing.Optional[str]) -> (NCFDataset, typing.Callable)
  ^
1     F821 undefined name 'NCFDataset'
1
```

E901, E999, F821, F822, and F823 are the "showstopper" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. These five are different from most other flake8 issues, which are merely "style violations" -- useful for readability, but they do not affect runtime safety.

* F821: undefined name `name`
* F822: undefined name `name` in `__all__`
* F823: local variable `name` referenced before assignment
* E901: SyntaxError or IndentationError
* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree

* int, int, data_pipeline.BaseDataConstructor
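
A minimal, runnable illustration (a hypothetical file, not the repo's code) of why F821 fires here: `NCFDataset` appears only inside a `# type:` comment, which never executes, yet flake8 still requires the name to be importable. The commit's fix is the missing `from NCF_input import NCFDataset`.

```
import typing

def instantiate_pipeline(dataset, data_dir):
  # type: (str, str) -> (NCFDataset, typing.Callable)  # F821 until NCFDataset is imported
  ...
```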
-
- 18 Mar, 2019 1 commit
-
Bruce Fontaine authored
* Add support for TPUEstimator to the data processing pipeline, and add the ability to store epochs in a user-specified location.
-
- 07 Jan, 2019 5 commits
-
Taylor Robie authored
-
Taylor Robie authored
-
Taylor Robie authored
Add a bisection-based producer for increased scalability, enable fully deterministic data production, and use the materialized and bisection producers to check each other (via expected output MD5s)
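
A hedged sketch of the cross-check idea described above (illustrative only; the placeholder lists stand in for the two producers' outputs):

```
import hashlib

import numpy as np

def batches_md5(batches):
    """Digest a sequence of numpy batches so two pipelines can be compared."""
    md5 = hashlib.md5()
    for batch in batches:
        md5.update(np.ascontiguousarray(batch).tobytes())
    return md5.hexdigest()

# With fully deterministic production, both producers should emit identical bytes.
materialized_batches = [np.arange(8, dtype=np.int64)]  # placeholder data
bisection_batches = [np.arange(8, dtype=np.int64)]
assert batches_md5(materialized_batches) == batches_md5(bisection_batches)
```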
-
Taylor Robie authored
-
Taylor Robie authored
* 2nd half of rough replacement pass
* fix dataset map functions
* reduce bias in sample selection
* cache pandas work on a daily basis
* cleanup and fix batch check for multi gpu
* multi device fix
* fix treatment of eval data padding
* print data producer
* replace epoch overlap with padding and masking
* move type and shape info into the producer class and update run.sh with larger batch size hyperparams
* remove xla for multi GPU
* more cleanup
* remove model runner altogether
* bug fixes
* address subtle pipeline hang and improve producer __repr__
* fix crash
* fix assert
* use popen_helper to create pools
* add StreamingFilesDataset and abstract data storage to a separate class
* bug fix
* fix wait bug and add manual stack trace print
* more bug fixes and refactor valid point mask to work with TPU sharding
* misc bug fixes and adjust dtypes
* address crash from decoding bools
* fix remaining dtypes and change record writer pattern since it does not append
* fix synthetic data
* use TPUStrategy instead of TPUEstimator
* minor tweaks around moving to TPUStrategy
* cleanup some old code
* delint and simplify permutation generation
* remove low level tf layer definition, use single table with slice for keras, and misc fixes
* missed minor point on removing tf layer definition
* fix several bugs from recombining layer definitions
* delint and add docstrings
* Update ncf_test.py. Section for identical inputs and different outputs was removed.
* update data test to run against the new producer class
-
- 20 Dec, 2018 1 commit
-
Alexandre Passos authored
-
- 07 Nov, 2018 1 commit
-
Reed authored
This tag should match EVAL_HP_NUM_NEG.
-
- 03 Nov, 2018 1 commit
-
Reed authored
I've noticed sometimes the async process's pool processes do not die when ncf_main.py ends and kills the async process. This commit fixes the issue.
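
A hedged sketch in the spirit of the fix described above (the `atexit` hook is an assumption, not necessarily the commit's exact mechanism): register pool teardown on the parent so workers cannot outlive it.

```
import atexit
import multiprocessing

def get_pool(num_workers):
    """Create a worker pool that is forcibly terminated when this process exits."""
    pool = multiprocessing.Pool(processes=num_workers)
    atexit.register(pool.terminate)  # ensure pool workers die with the parent
    return pool
```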
-
- 30 Oct, 2018 1 commit
-
Taylor Robie authored
-
- 29 Oct, 2018 1 commit
-
Reed authored
The option is --nouse_estimator
-
- 26 Oct, 2018 1 commit
-
Reed authored
--ml_perf now just changes the model to make it MLPerf compliant. --output_ml_perf_compliance_logging adds the MLPerf compliance logs.
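
A hedged sketch of how the two flags described above might be declared with absl (the flag names come from the commit message; the definitions are illustrative):

```
from absl import flags

flags.DEFINE_bool(
    "ml_perf", False,
    "If True, change the model so that it is MLPerf compliant.")
flags.DEFINE_bool(
    "output_ml_perf_compliance_logging", False,
    "If True, emit MLPerf compliance logging.")
```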
-
- 24 Oct, 2018 1 commit
-
Taylor Robie authored
* first pass at __getattr__ abuse logger
* first pass at adding tags to NCF
* minor formatting updates
* fix tag name
* convert metrics to python floats
* getting closer...
* direct mlperf logs to a file
* small tweaks and add stitching
* update tags
* fix tag and add a sudo call
* tweak format of run.sh
* delint
* use distribution strategies for evaluation
* address PR comments
* delint and fix test
* adjust flag validation for xla
* add prefix to distinguish log stitching
* fix index bug
* fix clear cache for root user
* dockerize cache drop
* TIL some regex magic
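
A hedged sketch of the "__getattr__ abuse" idea from the first bullet (an illustration, not the repo's actual helper): wrap the namespace of MLPerf tag constants so that merely accessing a tag also logs it.

```
class _TagLogger(object):
    """Logs every tag constant that is read off this object."""

    def __init__(self, tags, log_fn=print):
        self._tags = dict(tags)
        self._log_fn = log_fn

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for tag names.
        value = self._tags[name]
        self._log_fn(":::MLP ncf {}".format(value))  # illustrative log format
        return value

TAGS = _TagLogger({"EVAL_HP_NUM_NEG": "eval_hp_num_neg"})
_ = TAGS.EVAL_HP_NUM_NEG  # the access itself emits a compliance log line
```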
-
- 20 Oct, 2018 1 commit
-
Reed authored
-
- 18 Oct, 2018 1 commit
-
Taylor Robie authored
* intermediate commit
* finish replacing spillover with resampled padding
* intermediate commit
* resolve merge conflict
* intermediate commit
* further consolidate the data pipeline
* complete first pass at data pipeline refactor
* remove some leftover code
* fix test
* remove resampling, and move train padding logic into neumf.py
* small tweaks
* fix weight bug
* address PR comments
* fix dict zip. (Reed led me astray)
* delint
* make data test deterministic and delint
* Reed didn't lead me astray. I just can't read.
* more delinting
* even more delinting
* use resampling for last batch padding
* pad last batch with unique data
* Revert "pad last batch with unique data". This reverts commit cbdf46efcd5c7907038a24105b88d38e7f1d6da2.
* move padded batch to the beginning
* delint
* fix step check for synthetic data
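
A hedged numpy sketch of the last-batch padding discussed above (a standalone illustration, not neumf.py's actual logic): pad the final partial batch up to the static batch size and carry a mask so padded rows do not contribute to the loss or metrics.

```
import numpy as np

def pad_last_batch(users, items, batch_size):
    """Pad a partial batch to `batch_size`, returning a validity mask."""
    num_valid = len(users)
    pad = batch_size - num_valid
    mask = np.concatenate([np.ones(num_valid, bool), np.zeros(pad, bool)])
    users = np.concatenate([users, np.zeros(pad, users.dtype)])
    items = np.concatenate([items, np.zeros(pad, items.dtype)])
    return users, items, mask

u, i, m = pad_last_batch(np.array([1, 2, 3]), np.array([7, 8, 9]), batch_size=8)
```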
-
- 14 Oct, 2018 1 commit
-
Taylor Robie authored
* move flagfile into the cache_dir
* remove duplicate code
* delint
-
- 13 Oct, 2018 1 commit
-
shizhiw authored
* Use data_dir instead of flags.FLAGS.data_dir in data_preprocessing.py.
* Replace multiprocessing pool with popen_helper.get_pool() in data_preprocessing.
-
- 11 Oct, 2018 3 commits
-
shizhiw authored
* Use data_dir instead of flags.FLAGS.data_dir in data_preprocessing.py.
-
Shawn Wang authored
Add comments, exit the async process after waiting too long for the flagfile, and create the data_dir directory in case it does not exist.
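
A hedged sketch of the two behaviors described above (the timeout value and helper names are illustrative):

```
import os
import time

def wait_for_flagfile(flagfile_path, timeout_sec=300):
    """Poll for the flagfile; give up (so the process can exit) on timeout."""
    deadline = time.time() + timeout_sec
    while not os.path.exists(flagfile_path):
        if time.time() > deadline:
            raise RuntimeError("Timed out waiting for %s" % flagfile_path)
        time.sleep(1)

def ensure_data_dir(data_dir):
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)  # make the directory in case it does not exist
```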
-
Shawn Wang authored
-
- 10 Oct, 2018 1 commit
-
Reed authored
* Add --use_synthetic_data option to NCF.
* Add comment to _SYNTHETIC_BATCHES_PER_EPOCH
* Fix test
* Hopefully fix lint issue
-
- 09 Oct, 2018 1 commit
-
Shawn Wang authored
-
- 03 Oct, 2018 1 commit
-
Taylor Robie authored
* move evaluation from numpy to tensorflow
* fix syntax error
* don't use sigmoid to convert logits. there is too much precision loss.
* WIP: add logit metrics
* continue refactor of NCF evaluation
* fix syntax error
* fix bugs in eval loss calculation
* fix eval loss reweighting
* remove numpy based metric calculations
* fix logging hooks
* fix sigmoid to softmax bug
* fix comment
* catch rare PIPE error and address some PR comments
* fix metric test and address PR comments
* delint and fix python2
* fix test and address PR comments
* extend eval to TPUs
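
A standalone numeric illustration of the precision-loss point in the third bullet (not the repo's eval code): in float32, sigmoid saturates to exactly 1.0 for large logits, so distinct scores become indistinguishable, while the raw logits preserve the ordering.

```
import numpy as np

logits = np.array([20.0, 25.0], dtype=np.float32)
probs = 1.0 / (1.0 + np.exp(-logits))

print(probs)                   # [1. 1.] -- both saturate, so ranking ties
print(logits.argsort()[::-1])  # [1 0]   -- raw logits still rank correctly
```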
-
- 20 Sep, 2018 1 commit
-
Taylor Robie authored
* bug fixes and add seed
* more random corrections
* make cleanup more robust
* return cleanup fn
* delint and address PR comments
* delint and fix tests
* delinting is never done
* add pipeline hashing
* delint
-
- 14 Sep, 2018 1 commit
-
Reed authored
Sometimes it takes longer than 15 seconds, and occasionally longer than 1 minute, for the async process to spawn and create the alive file.
-
- 05 Sep, 2018 1 commit
-
Reed authored
* Fix spurious "did not start correctly" error. The error "Generation subprocess did not start correctly" would occur if the async process started up after the main process checked for the subproc_alive file.
* Add error message
-
- 22 Aug, 2018 1 commit
-
Reed authored
* Fix convergence issues for MLPerf. Thank you to @robieta for helping me find these issues, and for providing an algorithm for the `get_hit_rate_and_ndcg_mlperf` function. This change causes every forked process to set a new seed, so that forked processes do not generate the same set of random numbers. This improves evaluation hit rates. Additionally, it adds a flag, --ml_perf, that makes further changes so that the evaluation hit rate can match the MLPerf reference implementation. I ran 4 times with --ml_perf and 4 times without. Without --ml_perf, the highest hit rates achieved by each run were 0.6278, 0.6287, 0.6289, and 0.6241. With --ml_perf, the highest hit rates were 0.6353, 0.6356, 0.6367, and 0.6353.
* fix lint error
* Fix failing test
* Address @robieta's feedback
* Address more feedback
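
A hedged sketch of the reseeding fix described above (the initializer name and seed recipe are illustrative): workers forked from the same parent inherit identical NumPy RNG state, so each one must reseed itself or they all draw the same "random" numbers.

```
import multiprocessing
import os
import time

import numpy as np

def _reseed_worker():
    # Mix the PID into the seed so every forked worker gets a distinct stream.
    np.random.seed((os.getpid() + int(time.time() * 1e6)) % 2**32)

def _draw(upper):
    return np.random.randint(upper)

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=4, initializer=_reseed_worker)
    print(pool.map(_draw, [1000] * 4))  # values now differ across workers
    pool.close()
```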
-
- 18 Aug, 2018 1 commit
-
Reed authored
This is done by using a higher Pickle protocol version, which the Python docs describe as being "slightly more efficient". This reduces the file write time at the beginning from 2 1/2 minutes to 5 seconds.
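
An illustration of the kind of change described above (the file name and payload are placeholders): Python 2's default pickle protocol 0 is ASCII-based, while `pickle.HIGHEST_PROTOCOL` selects a compact binary format that is far faster to write for large objects.

```
import pickle

payload = list(range(10 ** 6))  # stand-in for the precomputed training data

with open("/tmp/ncf_cache.pickle", "wb") as f:
    # was: pickle.dump(payload, f)  (implicit protocol 0 on Python 2)
    pickle.dump(payload, f, protocol=pickle.HIGHEST_PROTOCOL)
```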
-
- 02 Aug, 2018 2 commits
-
Reed authored
-
Reed authored
The data_async_generation.py process would print to stderr, but the main process would redirect its stderr to a pipe. The main process never read from the pipe, so when the pipe was full, data_async_generation.py would stall on a write to stderr. This change makes data_async_generation.py not write to stdout/stderr.
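
A hedged sketch of the fix described above (the command line is illustrative): point the child's stdout/stderr at os.devnull instead of a pipe nobody drains, so a chatty subprocess can never block on a full pipe buffer.

```
import os
import subprocess

with open(os.devnull, "w") as devnull:
    proc = subprocess.Popen(
        ["python", "data_async_generation.py"],  # illustrative invocation
        stdout=devnull,
        stderr=devnull,
    )
```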
-
- 31 Jul, 2018 5 commits
-
Reed authored
-
Reed authored
-
Taylor Robie authored
* add indirection file
* remove unused imports
* fix import
-
Reed authored
-
Reed authored
-
- 30 Jul, 2018 1 commit
-
Taylor Robie authored
* intermediate commit
* ncf now working
* reorder pipeline
* allow batched decode for file backed dataset
* fix bug
* more tweaks
* parallelize false negative generation
* shared pool hack
* workers ignore sigint
* intermediate commit
* simplify buffer backed dataset creation to fixed length record approach only. (more cleanup needed)
* more tweaks
* simplify pipeline
* fix misplaced cleanup() calls. (validation works!)
* more tweaks
* sixify memoryview usage
* more sixification
* fix bug
* add future imports
* break up training input pipeline
* more pipeline tuning
* first pass at moving negative generation to async
* refactor async pipeline to use files instead of ipc
* refactor async pipeline
* move expansion and concatenation from reduce worker to generation workers
* abandon complete async due to interactions with the tensorflow threadpool
* cleanup
* remove performance_comparison.py
* experiment with rough generator + interleave pipeline
* yet more pipeline tuning
* update on-the-fly pipeline
* refactor preprocessing, and move train generation behind a GRPC server
* fix leftover call
* intermediate commit
* intermediate commit
* fix index error in data pipeline, and add logging to train data server
* make sharding more robust to imbalance
* correctly sample with replacement
* file buffers are no longer needed for this branch
* tweak sampling methods
* add README for data pipeline
* fix eval sampling, and vectorize eval metrics
* add spillover and static training batch sizes
* clean up cruft from earlier iterations
* rough delint
* delint 2 / n
* add type annotations
* update run script
* make run.sh a bit nicer
* change embedding initializer to match reference
* rough pass at pure estimator model_fn
* impose static shape hack (revisit later)
* refinements
* fix dir error in run.sh
* add documentation
* add more docs and fix an assert
* old data test is no longer valid. Keeping it around as reference for the new one
* rough draft of data pipeline validation script
* don't rely on shuffle default
* tweaks and documentation
* add separate eval batch size for performance
* initial commit
* terrible hacking
* mini hacks
* missed a bug
* messing about trying to get TPU running
* TFRecords based TPU attempt
* bug fixes
* don't log remotely
* more bug fixes
* TPU tweaks and bug fixes
* more tweaks
* more adjustments
* rework model definition
* tweak data pipeline
* refactor async TFRecords generation
* temp commit to run.sh
* update log behavior
* fix logging bug
* add check for subprocess start to avoid cryptic hangs
* unify deserialize and make it TPU compliant
* delint
* remove gRPC pipeline code
* fix logging bug
* delint and remove old test files
* add unit tests for NCF pipeline
* delint
* clean up run.sh, and add run_tpu.sh
* forgot the most important line
* fix run.sh bugs
* yet more bash debugging
* small tweak to add keras summaries to model_fn
* Clean up sixification issues
* address PR comments
* delinting is never over
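
A hedged sketch of the "fixed length record approach" mentioned above (the record layout, file name, and sizes are assumptions, not the repo's actual format): when every example is a fixed number of bytes, shards can be read with tf.data.FixedLengthRecordDataset and decoded with decode_raw.

```
import tensorflow as tf

RECORD_BYTES = 8  # e.g. one int32 user id + one int32 item id per example

dataset = tf.data.FixedLengthRecordDataset("train_shard.bin", RECORD_BYTES)

def _decode(record):
    ids = tf.io.decode_raw(record, tf.int32)  # two int32s per record
    return {"user_id": ids[0], "item_id": ids[1]}

dataset = dataset.map(_decode).batch(1024)
```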
-