1. 25 Oct, 2018 1 commit
  2. 24 Oct, 2018 1 commit
      Add logging calls to NCF (#5576) · 780f5265
      Taylor Robie authored
      * first pass at __getattr__ abuse logger
      
      * first pass at adding tags to NCF
      
      * minor formatting updates
      
      * fix tag name
      
      * convert metrics to python floats
      
      * getting closer...
      
      * direct mlperf logs to a file
      
      * small tweaks and add stitching
      
      * update tags
      
      * fix tag and add a sudo call
      
      * tweak format of run.sh
      
      * delint
      
      * use distribution strategies for evaluation
      
      * address PR comments
      
      * delint and fix test
      
      * adjust flag validation for xla
      
      * add prefix to distinguish log stitching
      
      * fix index bug
      
      * fix clear cache for root user
      
      * dockerize cache drop
      
      * TIL some regex magic
  3. 19 Oct, 2018 1 commit
  4. 18 Oct, 2018 1 commit
      Reorder NCF data pipeline (#5536) · 19d4eaaf
      Taylor Robie authored
      * intermediate commit
      
      finish replacing spillover with resampled padding
      
      intermediate commit
      
      * resolve merge conflict
      
      * intermediate commit
      
      * further consolidate the data pipeline
      
      * complete first pass at data pipeline refactor
      
      * remove some leftover code
      
      * fix test
      
      * remove resampling, and move train padding logic into neumf.py
      
      * small tweaks
      
      * fix weight bug
      
      * address PR comments
      
      * fix dict zip. (Reed led me astray)
      
      * delint
      
      * make data test deterministic and delint
      
      * Reed didn't lead me astray. I just can't read.
      
      * more delinting
      
      * even more delinting
      
      * use resampling for last batch padding
      
      * pad last batch with unique data
      
      * Revert "pad last batch with unique data"
      
      This reverts commit cbdf46efcd5c7907038a24105b88d38e7f1d6da2.
      
      * move padded batch to the beginning
      
      * delint
      
      * fix step check for synthetic data
  5. 14 Oct, 2018 1 commit
  6. 11 Oct, 2018 2 commits
  7. 09 Oct, 2018 2 commits
  8. 03 Oct, 2018 1 commit
      Move evaluation to .evaluate() (#5413) · c494582f
      Taylor Robie authored
      * move evaluation from numpy to tensorflow
      
      fix syntax error
      
      don't use sigmoid to convert logits. there is too much precision loss.
      
      WIP: add logit metrics
      
      continue refactor of NCF evaluation
      
      fix syntax error
      
      fix bugs in eval loss calculation
      
      fix eval loss reweighting
      
      remove numpy based metric calculations
      
      fix logging hooks
      
      fix sigmoid to softmax bug
      
      fix comment
      
      catch rare PIPE error and address some PR comments
      
      * fix metric test and address PR comments
      
      * delint and fix python2
      
      * fix test and address PR comments
      
      * extend eval to TPUs
  9. 20 Sep, 2018 1 commit
  10. 11 Sep, 2018 1 commit
  11. 05 Sep, 2018 1 commit
      Fix crash caused by race in the async process. (#5250) · 5856878d
      Reed authored
      When constructing the evaluation records, data_async_generation.py would copy the records into the final directory. The main process would wait until the eval records existed. However, the main process would sometimes read the eval records before they were fully copied, causing a DataLossError.
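      The race above is a classic partially-written-file hazard: the reader treats the mere existence of a path as proof that its contents are complete. A minimal sketch of one common way to close such a race, assuming a write-then-rename scheme rather than the exact fix applied in data_async_generation.py (the helper name is illustrative):

      ```python
      import os
      import tempfile


      def write_records_atomically(payload, final_path):
          """Write payload to a temp file, then rename it into place."""
          directory = os.path.dirname(final_path) or "."
          fd, tmp_path = tempfile.mkstemp(dir=directory)
          try:
              with os.fdopen(fd, "wb") as f:
                  f.write(payload)
                  f.flush()
                  os.fsync(f.fileno())
              # rename is atomic within a single POSIX filesystem, so readers
              # see either no file or the complete records, never a partial copy.
              os.rename(tmp_path, final_path)
          except Exception:
              os.remove(tmp_path)
              raise
      ```

      Because the rename is atomic, a main process polling for final_path can safely open the file as soon as it appears.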
  12. 22 Aug, 2018 1 commit
      Fix convergence issues for MLPerf. (#5161) · 64710c05
      Reed authored
      * Fix convergence issues for MLPerf.
      
      Thank you to @robieta for helping me find these issues, and for providing an algorithm for the `get_hit_rate_and_ndcg_mlperf` function.
      
      This change causes every forked process to set a new seed, so that forked processes do not generate the same set of random numbers. This improves evaluation hit rates (a sketch of this per-fork reseeding pattern follows this entry).
      
      Additionally, it adds a flag, --ml_perf, that makes further changes so that the evaluation hit rate can match the MLPerf reference implementation.
      
      I ran 4 times with --ml_perf and 4 times without. Without --ml_perf, the highest hit rates achieved by each run were 0.6278, 0.6287, 0.6289, and 0.6241. With --ml_perf, the highest hit rates were 0.6353, 0.6356, 0.6367, and 0.6353.
      
      * fix lint error
      
      * Fix failing test
      
      * Address @robieta's feedback
      
      * Address more feedback
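      A minimal sketch of the per-fork reseeding described in this commit, assuming numpy is the RNG being reseeded and using a multiprocessing.Pool initializer; the sampling function and pool size are illustrative, not the actual NCF code:

      ```python
      import multiprocessing
      import os

      import numpy as np


      def _reseed_worker():
          # Give each forked worker its own seed so workers do not all
          # inherit (and replay) the parent's RNG state.
          np.random.seed(int.from_bytes(os.urandom(4), "little"))


      def sample_negatives(num_items):
          # Illustrative stand-in for negative sampling.
          return np.random.randint(0, num_items, size=8).tolist()


      if __name__ == "__main__":
          with multiprocessing.Pool(processes=4, initializer=_reseed_worker) as pool:
              print(pool.map(sample_negatives, [1000] * 4))
      ```

      Without the initializer, forked workers start from identical RNG state and can draw the same "random" negatives, which is the duplication this change removes.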
  13. 02 Aug, 2018 1 commit
      Fix bug where data_async_generation.py would freeze. (#4989) · 58037d2c
      Reed authored
      The data_async_generation.py process would print to stderr, but the main process redirected that child's stderr to a pipe. The main process never read from the pipe, so once the pipe filled up, data_async_generation.py would stall on a write to stderr. This change makes data_async_generation.py not write to stdout/stderr.
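      A minimal sketch of the deadlock and one way to sidestep it, assuming the async generator is launched via subprocess (only the script name comes from this repository; the launch code is illustrative):

      ```python
      import subprocess
      import sys

      # Problematic pattern: the child's stderr goes to a pipe that nobody
      # reads, so a chatty child eventually blocks on write() once the OS
      # pipe buffer fills, and appears to freeze.
      # child = subprocess.Popen([sys.executable, "data_async_generation.py"],
      #                          stderr=subprocess.PIPE)

      # Safer pattern: send the child's output to /dev/null (or a log file)
      # so it can never block on a full pipe; the actual fix goes further
      # and stops the script from writing to stdout/stderr at all.
      child = subprocess.Popen([sys.executable, "data_async_generation.py"],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL)
      child.wait()
      ```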
  14. 30 Jul, 2018 1 commit
      NCF pipeline refactor (take 2) and initial TPU port. (#4935) · 6518c1c7
      Taylor Robie authored
      * intermediate commit
      
      * ncf now working
      
      * reorder pipeline
      
      * allow batched decode for file backed dataset
      
      * fix bug
      
      * more tweaks
      
      * parallelize false negative generation
      
      * shared pool hack
      
      * workers ignore sigint
      
      * intermediate commit
      
      * simplify buffer backed dataset creation to fixed length record approach only. (more cleanup needed)
      
      * more tweaks
      
      * simplify pipeline
      
      * fix misplaced cleanup() calls. (validation works!)
      
      * more tweaks
      
      * sixify memoryview usage
      
      * more sixification
      
      * fix bug
      
      * add future imports
      
      * break up training input pipeline
      
      * more pipeline tuning
      
      * first pass at moving negative generation to async
      
      * refactor async pipeline to use files instead of ipc
      
      * refactor async pipeline
      
      * move expansion and concatenation from reduce worker to generation workers
      
      * abandon complete async due to interactions with the tensorflow threadpool
      
      * cleanup
      
      * remove per...