"docs/advanced_features/router.md" did not exist on "18ea841f408c01a28c1a1db92f37ae95cfa12523"
  1. 18 Oct, 2018 2 commits
    • Reorder NCF data pipeline (#5536) · 19d4eaaf
      Taylor Robie authored
      * intermediate commit
      
      finish replacing spillover with resampled padding
      
      intermediate commit
      
      * resolve merge conflict
      
      * intermediate commit
      
      * further consolidate the data pipeline
      
      * complete first pass at data pipeline refactor
      
      * remove some leftover code
      
      * fix test
      
      * remove resampling, and move train padding logic into neumf.py
      
      * small tweaks
      
      * fix weight bug
      
      * address PR comments
      
      * fix dict zip. (Reed led me astray)
      
      * delint
      
      * make data test deterministic and delint
      
      * Reed didn't lead me astray. I just can't read.
      
      * more delinting
      
      * even more delinting
      
      * use resampling for last batch padding
      
      * pad last batch with unique data
      
      * Revert "pad last batch with unique data"
      
      This reverts commit cbdf46efcd5c7907038a24105b88d38e7f1d6da2.
      
      * move padded batch to the beginning
      
      * delint
      
      * fix step check for synthetic data
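
      The padding items above ("remove resampling, and move train padding logic into neumf.py", "use resampling for last batch padding", "move padded batch to the beginning", "fix weight bug") all serve one goal: keeping the training batch size static by padding the final partial batch and masking the padding out of the loss with zero weights. A minimal sketch of that idea, assuming a NumPy-based pipeline (the function and argument names are hypothetical, not the actual neumf.py code):

      ```python
      import numpy as np

      def pad_final_batch(users, items, labels, batch_size):
          """Pad a partial batch to batch_size and mask the padding.

          Returns padded arrays plus a weight vector that zeroes out the
          padded rows so they contribute nothing to the loss.
          """
          n = len(users)
          pad = batch_size - n
          if pad == 0:
              return users, items, labels, np.ones(n, dtype=np.float32)
          # Repeat the first example to fill the batch; the zero weights
          # below make the repeats invisible to the optimizer.
          users = np.concatenate([users, np.full(pad, users[0])])
          items = np.concatenate([items, np.full(pad, items[0])])
          labels = np.concatenate([labels, np.zeros(pad, dtype=labels.dtype)])
          weights = np.concatenate(
              [np.ones(n, dtype=np.float32), np.zeros(pad, dtype=np.float32)])
          return users, items, labels, weights
      ```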
    • Delint. · 3ec25e5d
      Shawn Wang authored
  2. 17 Oct, 2018 2 commits
  3. 14 Oct, 2018 1 commit
  4. 13 Oct, 2018 1 commit
  5. 11 Oct, 2018 5 commits
  6. 10 Oct, 2018 2 commits
  7. 09 Oct, 2018 2 commits
  8. 05 Oct, 2018 1 commit
  9. 03 Oct, 2018 1 commit
    • Move evaluation to .evaluate() (#5413) · c494582f
      Taylor Robie authored
      * move evaluation from numpy to tensorflow
      
      fix syntax error
      
      don't use sigmoid to convert logits. there is too much precision loss.
      
      WIP: add logit metrics
      
      continue refactor of NCF evaluation
      
      fix syntax error
      
      fix bugs in eval loss calculation
      
      fix eval loss reweighting
      
      remove numpy based metric calculations
      
      fix logging hooks
      
      fix sigmoid to softmax bug
      
      fix comment
      
      catch rare PIPE error and address some PR comments
      
      * fix metric test and address PR comments
      
      * delint and fix python2
      
      * fix test and address PR comments
      
      * extend eval to TPUs
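
      One line in this message deserves unpacking: "don't use sigmoid to convert logits. there is too much precision loss." In float32 the sigmoid saturates, so large logits all map to exactly 1.0, and the relative order that the ranking metrics depend on is destroyed. A small self-contained demonstration (not code from the change itself):

      ```python
      import numpy as np

      logits = np.array([20.0, 25.0, 30.0], dtype=np.float32)
      probs = 1.0 / (1.0 + np.exp(-logits))

      print(probs)                # [1. 1. 1.]  indistinguishable after sigmoid
      print(np.argsort(-logits))  # [2 1 0]     raw logits still rank cleanly
      ```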
  10. 02 Oct, 2018 1 commit
  11. 20 Sep, 2018 1 commit
  12. 14 Sep, 2018 1 commit
  13. 11 Sep, 2018 1 commit
  14. 05 Sep, 2018 2 commits
    • Fix spurious "did not start correctly" error. (#5252) · 7babedc5
      Reed authored
      * Fix spurious "did not start correctly" error.
      
      The error "Generation subprocess did not start correctly" would occur if the async process started up after the main process checked for the subproc_alive file.
      
      * Add error message
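
      The fix described here amounts to waiting for the liveness file rather than checking for it once. A hedged sketch of that pattern (the subproc_alive file name comes from the commit text; the timeout, sleep interval, and function name are invented):

      ```python
      import os
      import time

      def wait_for_subprocess(alive_file, timeout_sec=300):
          """Poll for the async process's liveness file instead of checking
          once, which races with the subprocess's startup."""
          deadline = time.time() + timeout_sec
          while time.time() < deadline:
              if os.path.exists(alive_file):
                  return
              time.sleep(1)
          raise RuntimeError(
              "Generation subprocess did not start correctly: no {} "
              "after {} seconds.".format(alive_file, timeout_sec))
      ```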
    • Fix crash caused by race in the async process. (#5250) · 5856878d
      Reed authored
      When constructing the evaluation records, data_async_generation.py would copy the records into the final directory. The main process would wait until the eval records existed. However, the main process would sometimes read the eval records before they were fully copied, causing a DataLossError.
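
      The usual cure for this kind of race is to make publication atomic: copy the records to a temporary name on the destination filesystem, then rename, so a reader polling for the final path sees either nothing or a complete file. A sketch under those assumptions (not the actual data_async_generation.py code):

      ```python
      import os
      import shutil

      def publish_records(src_path, final_path):
          """Copy eval records so a concurrent reader never observes a
          partially written file."""
          tmp_path = final_path + ".incomplete"
          shutil.copyfile(src_path, tmp_path)
          # rename() is atomic within a filesystem on POSIX, so the main
          # process sees all of the data or none of it.
          os.rename(tmp_path, final_path)
      ```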
  15. 22 Aug, 2018 1 commit
    • Fix convergence issues for MLPerf. (#5161) · 64710c05
      Reed authored
      * Fix convergence issues for MLPerf.
      
      Thank you to @robieta for helping me find these issues, and for providing an algorithm for the `get_hit_rate_and_ndcg_mlperf` function.
      
      This change causes every forked process to set a new seed, so that forked processes do not generate the same set of random numbers. This improves evaluation hit rates.
      
      Additionally, it adds a flag, --ml_perf, that makes further changes so that the evaluation hit rate can match the MLPerf reference implementation.
      
      I ran 4 times with --ml_perf and 4 times without. Without --ml_perf, the highest hit rates achieved by each run were 0.6278, 0.6287, 0.6289, and 0.6241. With --ml_perf, the highest hit rates were 0.6353, 0.6356, 0.6367, and 0.6353.
      
      * fix lint error
      
      * Fix failing test
      
      * Address @robieta's feedback
      
      * Address more feedback
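
      The seeding fix is worth a sketch: a forked child inherits the parent's NumPy RNG state, so a pool of workers will all draw identical "random" negatives unless each one reseeds itself. A minimal illustration (the seed derivation and all names are hypothetical):

      ```python
      import multiprocessing
      import os
      import struct

      import numpy as np

      def reseed_worker():
          """Pool initializer: give each forked worker its own seed.

          Without this, every fork samples the same negatives, which
          lowers the evaluation hit rate.
          """
          np.random.seed(struct.unpack("<L", os.urandom(4))[0])

      def sample_negatives(num_items):
          # Each worker now draws a distinct set of negative item ids.
          return np.random.randint(num_items, size=4)

      if __name__ == "__main__":
          pool = multiprocessing.Pool(4, initializer=reseed_worker)
          print(pool.map(sample_negatives, [100000] * 4))
      ```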
  16. 18 Aug, 2018 1 commit
    • Speed up cache construction. (#5131) · 5aee67b4
      Reed authored
      This is done by using a higher Pickle protocol version, which the Python docs describe as being "slightly more efficient". This reduces the file write time at the beginning from 2 1/2 minutes to 5 seconds.
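
      The whole change is effectively one argument: Python 2's default pickle protocol is 0 (ASCII), and asking for the highest binary protocol is what turns a multi-minute write into seconds. A sketch (the file name and payload are placeholders):

      ```python
      import pickle

      data = {"users": list(range(10 ** 6))}  # stand-in for the real cache

      # Protocol 0, the Python 2 default, serializes to ASCII and is slow;
      # the highest binary protocol writes the same object far faster.
      with open("cache.pickle", "wb") as f:
          pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
      ```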
  17. 02 Aug, 2018 2 commits
  18. 01 Aug, 2018 1 commit
  19. 31 Jul, 2018 8 commits
  20. 30 Jul, 2018 1 commit
    • NCF pipeline refactor (take 2) and initial TPU port. (#4935) · 6518c1c7
      Taylor Robie authored
      * intermediate commit
      
      * ncf now working
      
      * reorder pipeline
      
      * allow batched decode for file backed dataset
      
      * fix bug
      
      * more tweaks
      
      * parallelize false negative generation
      
      * shared pool hack
      
      * workers ignore sigint
      
      * intermediate commit
      
      * simplify buffer backed dataset creation to fixed length record approach only. (more cleanup needed)
      
      * more tweaks
      
      * simplify pipeline
      
      * fix misplaced cleanup() calls. (validation works!)
      
      * more tweaks
      
      * sixify memoryview usage
      
      * more sixification
      
      * fix bug
      
      * add future imports
      
      * break up training input pipeline
      
      * more pipeline tuning
      
      * first pass at moving negative generation to async
      
      * refactor async pipeline to use files instead of ipc
      
      * refactor async pipeline
      
      * move expansion and concatenation from reduce worker to generation workers
      
      * abandon complete async due to interactions with the tensorflow threadpool
      
      * cleanup
      
      * remove performance_comparison.py
      
      * experiment with rough generator + interleave pipeline
      
      * yet more pipeline tuning
      
      * update on-the-fly pipeline
      
      * refactor preprocessing, and move train generation behind a GRPC server
      
      * fix leftover call
      
      * intermediate commit
      
      * intermediate commit
      
      * fix index error in data pipeline, and add logging to train data server
      
      * make sharding more robust to imbalance
      
      * correctly sample with replacement
      
      * file buffers are no longer needed for this branch
      
      * tweak sampling methods
      
      * add README for data pipeline
      
      * fix eval sampling, and vectorize eval metrics
      
      * add spillover and static training batch sizes
      
      * clean up cruft from earlier iterations
      
      * rough delint
      
      * delint 2 / n
      
      * add type annotations
      
      * update run script
      
      * make run.sh a bit nicer
      
      * change embedding initializer to match reference
      
      * rough pass at pure estimator model_fn
      
      * impose static shape hack (revisit later)
      
      * refinements
      
      * fix dir error in run.sh
      
      * add documentation
      
      * add more docs and fix an assert
      
      * old data test is no longer valid. Keeping it around as reference for the new one
      
      * rough draft of data pipeline validation script
      
      * don't rely on shuffle default
      
      * tweaks and documentation
      
      * add separate eval batch size for performance
      
      * initial commit
      
      * terrible hacking
      
      * mini hacks
      
      * missed a bug
      
      * messing about trying to get TPU running
      
      * TFRecords based TPU attempt
      
      * bug fixes
      
      * don't log remotely
      
      * more bug fixes
      
      * TPU tweaks and bug fixes
      
      * more tweaks
      
      * more adjustments
      
      * rework model definition
      
      * tweak data pipeline
      
      * refactor async TFRecords generation
      
      * temp commit to run.sh
      
      * update log behavior
      
      * fix logging bug
      
      * add check for subprocess start to avoid cryptic hangs
      
      * unify deserialize and make it TPU compliant
      
      * delint
      
      * remove gRPC pipeline code
      
      * fix logging bug
      
      * delint and remove old test files
      
      * add unit tests for NCF pipeline
      
      * delint
      
      * clean up run.sh, and add run_tpu.sh
      
      * forgot the most important line
      
      * fix run.sh bugs
      
      * yet more bash debugging
      
      * small tweak to add keras summaries to model_fn
      
      * Clean up sixification issues
      
      * address PR comments
      
      * delinting is never over
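
      Several of these items ("allow batched decode for file backed dataset", "simplify buffer backed dataset creation to fixed length record approach only") describe one pipeline shape: a file of fixed-size binary records read through tf.data and decoded a batch at a time. A sketch of that shape against the TF 1.x API (the record layout, one int32 user id plus one int32 item id, is invented for illustration):

      ```python
      import tensorflow as tf

      RECORD_BYTES = 8  # hypothetical layout: int32 user id + int32 item id

      def make_dataset(filenames, batch_size):
          """File-backed dataset of fixed-length records, decoded per batch."""
          dataset = tf.data.FixedLengthRecordDataset(filenames, RECORD_BYTES)

          # Batch the raw byte strings first so the decode below runs once
          # per batch instead of once per record.
          dataset = dataset.batch(batch_size)

          def decode(raw_bytes):
              pairs = tf.reshape(tf.decode_raw(raw_bytes, tf.int32), (-1, 2))
              return {"user_id": pairs[:, 0], "item_id": pairs[:, 1]}

          return dataset.map(decode, num_parallel_calls=4)
      ```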
  21. 12 Jul, 2018 1 commit
  22. 25 Jun, 2018 1 commit
  23. 20 Jun, 2018 1 commit
    • Wide Deep refactor and deep movies (#4506) · 20070ca4
      Taylor Robie authored
      * begin branch
      
      * finish download script
      
      * rename download to dataset
      
      * intermediate commit
      
      * intermediate commit
      
      * misc tweaks
      
      * intermediate commit
      
      * intermediate commit
      
      * intermediate commit
      
      * delint and update census test.
      
      * add movie tests
      
      * delint
      
      * fix py2 issue
      
      * address PR comments
      
      * intermediate commit
      
      * intermediate commit
      
      * intermediate commit
      
      * finish wide deep transition to vanilla movielens
      
      * delint
      
      * intermediate commit
      
      * intermediate commit
      
      * intermediate commit
      
      * intermediate commit
      
      * fix import
      
      * add default ncf csv construction
      
      * change default on download_if_missing
      
      * shard and vectorize example serialization
      
      * fix import
      
      * update ncf data unittests
      
      * delint
      
      * delint
      
      * more delinting
      
      * fix wide-deep movielens serialization
      
      * address PR comments
      
      * add file_io tests
      
      * investigate wide-deep test failure
      
      * remove hard coded path and properly use flags.
      
      * address file_io test PR comments
      
      * missed a hash_bucket_size
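
      The hash_bucket_size in the last item is a Wide & Deep staple: categorical features with large or open vocabularies are hashed into a fixed number of buckets. A hedged TF 1.x sketch (the column name and sizes are illustrative, not the model's actual configuration):

      ```python
      import tensorflow as tf

      # Hash an open-vocabulary categorical feature into a fixed number of
      # buckets; hash_bucket_size trades collision rate against model size.
      genres = tf.feature_column.categorical_column_with_hash_bucket(
          key="genres", hash_bucket_size=1000)

      # The wide part consumes the sparse column directly; the deep part
      # needs a dense representation, hence the embedding.
      wide_columns = [genres]
      deep_columns = [tf.feature_column.embedding_column(genres, dimension=16)]
      ```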