1. 11 Oct, 2018 2 commits
  2. 10 Oct, 2018 1 commit
  3. 09 Oct, 2018 1 commit
  4. 03 Oct, 2018 1 commit
    • Move evaluation to .evaluate() (#5413) · c494582f
      Taylor Robie authored
      * move evaluation from numpy to tensorflow
      
      fix syntax error
      
      don't use sigmoid to convert logits. there is too much precision loss (see the precision sketch after this commit entry).
      
      WIP: add logit metrics
      
      continue refactor of NCF evaluation
      
      fix syntax error
      
      fix bugs in eval loss calculation
      
      fix eval loss reweighting
      
      remove numpy based metric calculations
      
      fix logging hooks
      
      fix sigmoid to softmax bug
      
      fix comment
      
      catch rare PIPE error and address some PR comments
      
      * fix metric test and address PR comments
      
      * delint and fix python2
      
      * fix test and address PR comments
      
      * extend eval to TPUs
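The "too much precision loss" note above refers to ranking on probabilities instead of logits. A minimal, self-contained sketch (illustrative only, not taken from the NCF code) of why a float32 sigmoid is a poor intermediate step for ranking metrics such as HR/NDCG:

import numpy as np

# Three candidate items with distinct, fairly large logits.
logits = np.array([18.0, 20.0, 25.0], dtype=np.float32)

# Converting to probabilities with a sigmoid saturates in float32:
# all three values round to exactly 1.0, so the ranking signal is gone.
probs = (1.0 / (1.0 + np.exp(-logits))).astype(np.float32)
print(probs)                # [1. 1. 1.]
print(np.argsort(-probs))   # ties: the true ordering is lost

# Ranking directly on the logits preserves the ordering exactly.
print(np.argsort(-logits))  # [2 1 0]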
  5. 20 Sep, 2018 1 commit
  6. 14 Sep, 2018 1 commit
  7. 05 Sep, 2018 1 commit
    • Fix spurious "did not start correctly" error. (#5252) · 7babedc5
      Reed authored
      * Fix spurious "did not start correctly" error.
      
      The error "Generation subprocess did not start correctly" would occur if the async process started up after the main process checked for the subproc_alive file.
      
      * Add error message
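The race described above has a standard mitigation: rather than a single existence check, the parent can poll for the liveness file with a deadline. A minimal sketch follows; the path, timeout, and helper name are assumptions for illustration, not the repository's actual code.

import os
import time

def wait_for_subproc_alive(path, timeout_sec=60.0, poll_interval_sec=0.5):
    """Return True once `path` appears, or False after `timeout_sec`."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval_sec)
    return False

# Placeholder path; the real cache location is set elsewhere in the model code.
if not wait_for_subproc_alive("/tmp/ncf_cache/subproc_alive"):
    raise RuntimeError("Generation subprocess did not start correctly "
                       "(liveness file never appeared within the timeout).")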
  8. 22 Aug, 2018 1 commit
    • Fix convergence issues for MLPerf. (#5161) · 64710c05
      Reed authored
      * Fix convergence issues for MLPerf.
      
      Thank you to @robieta for helping me find these issues, and for providing an algorithm for the `get_hit_rate_and_ndcg_mlperf` function.
      
      This change causes every forked process to set a new seed, so that forked processes do not all generate the same set of random numbers; this improves evaluation hit rates (a minimal sketch of the reseeding pattern follows this entry).
      
      Additionally, it adds a flag, --ml_perf, that makes further changes so that the evaluation hit rate can match the MLPerf reference implementation.
      
      I ran 4 times with --ml_perf and 4 times without. Without --ml_perf, the highest hit rates achieved by each run were 0.6278, 0.6287, 0.6289, and 0.6241. With --ml_perf, the highest hit rates were 0.6353, 0.6356, 0.6367, and 0.6353.
      
      * fix lint error
      
      * Fix failing test
      
      * Address @robieta's feedback
      
      * Address more feedback
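The reseeding fix addresses a classic fork pitfall: child processes inherit the parent's NumPy RNG state and therefore draw identical "random" negatives. A minimal sketch of the general pattern (illustrative only; the worker and sampling functions are placeholders, not the repository code):

import multiprocessing
import os

import numpy as np

def _init_worker(base_seed):
    # Each forked worker derives its own seed (here from its pid), so the
    # workers stop sharing the parent's inherited RNG state.
    np.random.seed((base_seed + os.getpid()) % (2 ** 32))

def sample_negatives(num_items):
    # Stand-in for per-worker negative sampling.
    return np.random.randint(0, num_items, size=5)

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=_init_worker, initargs=(123,))
    print(pool.map(sample_negatives, [1000] * 4))  # draws now differ rather than repeating
    pool.close()
    pool.join()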
  9. 18 Aug, 2018 1 commit
    • Speed up cache construction. (#5131) · 5aee67b4
      Reed authored
      This is done by using a higher pickle protocol version, which the Python docs describe as being "slightly more efficient". This reduces the cache file write time at the beginning of a run from 2.5 minutes to 5 seconds (a minimal sketch follows this entry).
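A minimal sketch of the change, assuming the cache is written with Python's pickle module: on Python 2 the default protocol is 0 (ASCII), and passing an explicit binary protocol makes large NumPy-heavy payloads far cheaper to serialize. The payload and file name below are placeholders.

import pickle

import numpy as np

cache = {"train_data": np.arange(10 ** 7, dtype=np.int32)}  # placeholder payload

# Explicitly request the highest protocol instead of the slow default.
with open("cache.pickle", "wb") as f:
    pickle.dump(cache, f, protocol=pickle.HIGHEST_PROTOCOL)

with open("cache.pickle", "rb") as f:
    restored = pickle.load(f)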
  10. 02 Aug, 2018 2 commits
  11. 31 Jul, 2018 5 commits
  12. 30 Jul, 2018 1 commit
    • NCF pipeline refactor (take 2) and initial TPU port. (#4935) · 6518c1c7
      Taylor Robie authored
      * intermediate commit
      
      * ncf now working
      
      * reorder pipeline
      
      * allow batched decode for file backed dataset
      
      * fix bug
      
      * more tweaks
      
      * parallelize false negative generation
      
      * shared pool hack
      
      * workers ignore SIGINT (see the pool-initializer sketch after this entry)
      
      * intermediate commit
      
      * simplify buffer-backed dataset creation to the fixed-length record approach only (more cleanup needed)
      
      * more tweaks
      
      * simplify pipeline
      
      * fix misplaced cleanup() calls (validation works!)
      
      * more tweaks
      
      * sixify memoryview usage
      
      * more sixification
      
      * fix bug
      
      * add future imports
      
      * break up training input pipeline
      
      * more pipeline tuning
      
      * first pass at moving negative generation to async
      
      * refactor async pipeline to use files instead of ipc
      
      * refactor async pipeline
      
      * move expansion and concatenation from reduce worker to generation workers
      
      * abandon complete async due to interactions with the tensorflow threadpool
      
      * cleanup
      
      * remove per...
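The "workers ignore SIGINT" item above refers to a common multiprocessing pattern: pool workers install SIG_IGN for SIGINT in their initializer, so a Ctrl-C is delivered only to the parent, which can then shut the pool down cleanly instead of every worker dying mid-task. A minimal sketch (illustrative only; the worker function is a placeholder, not the repository code):

import multiprocessing
import signal

def _ignore_sigint():
    # Workers ignore SIGINT; only the parent handles KeyboardInterrupt.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def generate_negatives(chunk_id):
    # Stand-in for the per-chunk false-negative generation work.
    return chunk_id * 2

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=_ignore_sigint)
    try:
        print(pool.map(generate_negatives, range(8)))
    except KeyboardInterrupt:
        pool.terminate()  # the parent decides how to wind the workers down
    else:
        pool.close()
    pool.join()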