- 10 Jul, 2020 (2 commits)
  - Hongkun Yu authored
    PiperOrigin-RevId: 320540920
  - Hongkun Yu authored
    PiperOrigin-RevId: 320505320
- 26 Jun, 2020 (1 commit)
  - Chen Chen authored
    PiperOrigin-RevId: 318387106
- 25 May, 2020 (1 commit)
  - A. Unique TensorFlower authored
    PiperOrigin-RevId: 313030129
- 22 May, 2020 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 312841381
- 25 Mar, 2020 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 302955540
- 16 Mar, 2020 (1 commit)
  - Chen Chen authored
    PiperOrigin-RevId: 301240555
- 03 Mar, 2020 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 298520611
- 28 Feb, 2020 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 297839074
- 18 Dec, 2019 (1 commit)
  - Alan Chiao authored
    has been added).
    PiperOrigin-RevId: 286109114
- 27 Nov, 2019 (1 commit)
  - Jaehong Kim authored
    PiperOrigin-RevId: 282695365
- 01 Nov, 2019 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 277973281
- 31 Oct, 2019 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 277793274
- 11 Oct, 2019 (1 commit)
  - Yeqing Li authored
    PiperOrigin-RevId: 274241934
- 29 Sep, 2019 (1 commit)
  - Hongkun Yu authored
    PiperOrigin-RevId: 271873759
- 08 Jan, 2019 (1 commit)
  - Taylor Robie authored
- 07 Jan, 2019 (1 commit)
  - Taylor Robie authored
    * 2nd half of rough replacement pass
    * fix dataset map functions
    * reduce bias in sample selection
    * cache pandas work on a daily basis
    * cleanup and fix batch check for multi gpu
    * multi device fix
    * fix treatment of eval data padding
    * print data producer
    * replace epoch overlap with padding and masking
    * move type and shape info into the producer class and update run.sh with larger batch size hyperparams
    * remove xla for multi GPU
    * more cleanup
    * remove model runner altogether
    * bug fixes
    * address subtle pipeline hang and improve producer __repr__
    * fix crash
    * fix assert
    * use popen_helper to create pools
    * add StreamingFilesDataset and abstract data storage to a separate class
    * bug fix
    * fix wait bug and add manual stack trace print
    * more bug fixes and refactor valid point mask to work with TPU sharding
    * misc bug fixes and adjust dtypes
    * address crash from decoding bools
    * fix remaining dtypes and change record writer pattern since it does not append
    * fix synthetic data
    * use TPUStrategy instead of TPUEstimator
    * minor tweaks around moving to TPUStrategy
    * cleanup some old code
    * delint and simplify permutation generation
    * remove low level tf layer definition, use single table with slice for keras, and misc fixes
    * missed minor point on removing tf layer definition
    * fix several bugs from recombining layer definitions
    * delint and add docstrings
    * Update ncf_test.py. Section for identical inputs and different outputs was removed.
    * update data test to run against the new producer class
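The "use TPUStrategy instead of TPUEstimator" change above maps onto running the Keras NCF model inside a distribution-strategy scope. A minimal sketch, assuming current `tf.distribute` names (TF 2.3+) rather than the contrib-era API this commit predates, with a toy model and dataset standing in for the real NCF producer:

```python
import tensorflow as tf

# Connect to a TPU and build a TPUStrategy (TF 2.3+ names; an assumption, since
# the original change targeted an earlier API surface).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # "" = auto-detect on a TPU VM
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Placeholder data standing in for the NCF data producer's output.
features = tf.random.uniform((1024, 4))
labels = tf.cast(tf.random.uniform((1024, 1)) > 0.5, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(128, drop_remainder=True)

with strategy.scope():
    # Toy stand-in for the Keras NCF model mentioned in the commit.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit(dataset, epochs=1)
```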
- 30 Oct, 2018 (1 commit)
  - Taylor Robie authored
- 24 Oct, 2018 (1 commit)
  - Taylor Robie authored
    * first pass at __getattr__ abuse logger
    * first pass at adding tags to NCF
    * minor formatting updates
    * fix tag name
    * convert metrics to python floats
    * getting closer...
    * direct mlperf logs to a file
    * small tweaks and add stitching
    * update tags
    * fix tag and add a sudo call
    * tweak format of run.sh
    * delint
    * use distribution strategies for evaluation
    * address PR comments
    * delint and fix test
    * adjust flag validation for xla
    * add prefix to distinguish log stitching
    * fix index bug
    * fix clear cache for root user
    * dockerize cache drop
    * TIL some regex magic
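A minimal sketch of the "__getattr__ abuse logger" idea from the commit above: attribute lookups on the logger object become tagged, timestamped log lines. The tag names, line format, and log path are illustrative, not the real MLPerf compliance format:

```python
import json
import time


class TagLogger(object):
    """Turns `logger.some_tag(value)` calls into tagged, timestamped log lines."""

    def __init__(self, log_path):
        self._log_path = log_path

    def __getattr__(self, tag):
        # Any unknown attribute is treated as a tag name and returns a
        # function that appends one log line for that tag.
        def _log(value=None):
            line = ":::TAG {:.3f} {} {}".format(time.time(), tag, json.dumps(value))
            with open(self._log_path, "a") as f:
                f.write(line + "\n")
        return _log


logger = TagLogger("/tmp/ncf_tags.log")   # illustrative path
logger.eval_accuracy(0.635)               # illustrative tag
logger.run_start()                        # tags need no pre-registration
```

The appeal of the pattern is that adding a new tag requires no code change in the logger; any `logger.<tag>(value)` call is accepted and written out.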
- 30 Jul, 2018 (1 commit)
  - Taylor Robie authored
    * intermediate commit
    * ncf now working
    * reorder pipeline
    * allow batched decode for file backed dataset
    * fix bug
    * more tweaks
    * parallelize false negative generation
    * shared pool hack
    * workers ignore sigint
    * intermediate commit
    * simplify buffer backed dataset creation to fixed length record approach only. (more cleanup needed)
    * more tweaks
    * simplify pipeline
    * fix misplaced cleanup() calls. (validation works!)
    * more tweaks
    * sixify memoryview usage
    * more sixification
    * fix bug
    * add future imports
    * break up training input pipeline
    * more pipeline tuning
    * first pass at moving negative generation to async
    * refactor async pipeline to use files instead of ipc
    * refactor async pipeline
    * move expansion and concatenation from reduce worker to generation workers
    * abandon complete async due to interactions with the tensorflow threadpool
    * cleanup
    * remove per...
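The "fixed length record approach" and "batched decode for file backed dataset" items above correspond to reading a binary buffer file with tf.data and decoding whole batches of records at once. A minimal, self-contained sketch, assuming an illustrative two-int32 record layout:

```python
import numpy as np
import tensorflow as tf

# Write a tiny file of fixed-length records (assumed layout: int32 user id,
# int32 item id per record) so the example is self-contained.
records = np.array([[1, 10], [2, 20], [3, 30]], dtype=np.int32)
with open("train_shard.bin", "wb") as f:
    f.write(records.tobytes())

BYTES_PER_RECORD = 2 * 4  # two int32 fields per record


def decode_batch(raw_bytes):
    # Batched decode: a batch of raw byte strings becomes an int32 [batch, 2] tensor.
    return tf.reshape(tf.io.decode_raw(raw_bytes, tf.int32), (-1, 2))


dataset = (
    tf.data.FixedLengthRecordDataset("train_shard.bin", BYTES_PER_RECORD)
    .batch(3)
    .map(decode_batch)
)

for batch in dataset:
    print(batch.numpy())  # [[ 1 10] [ 2 20] [ 3 30]]
```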
- 20 Jun, 2018 (1 commit)
  - Taylor Robie authored
    * begin branch
    * finish download script
    * rename download to dataset
    * intermediate commit
    * intermediate commit
    * misc tweaks
    * intermediate commit
    * intermediate commit
    * intermediate commit
    * delint and update census test.
    * add movie tests
    * delint
    * fix py2 issue
    * address PR comments
    * intermediate commit
    * intermediate commit
    * intermediate commit
    * finish wide deep transition to vanilla movielens
    * delint
    * intermediate commit
    * intermediate commit
    * intermediate commit
    * intermediate commit
    * fix import
    * add default ncf csv construction
    * change default on download_if_missing
    * shard and vectorize example serialization
    * fix import
    * update ncf data unittests
    * delint
    * delint
    * more delinting
    * fix wide-deep movielens serialization
    * address PR comments
    * add file_io tests
    * investigate wide-deep test failure
    * remove hard coded path and properly use flags.
    * address file_io test PR comments
    * missed a hash_bucket_size
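The "shard and vectorize example serialization" step above boils down to writing tf.train.Example protos across several TFRecord shards. A minimal sketch with illustrative feature names and shard count:

```python
import tensorflow as tf


def make_example(user_id, item_id, rating):
    # One MovieLens-style rating as a tf.train.Example (feature names are illustrative).
    feature = {
        "user_id": tf.train.Feature(int64_list=tf.train.Int64List(value=[user_id])),
        "item_id": tf.train.Feature(int64_list=tf.train.Int64List(value=[item_id])),
        "rating": tf.train.Feature(float_list=tf.train.FloatList(value=[rating])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))


rows = [(1, 10, 4.0), (2, 20, 3.5), (3, 30, 5.0)]
num_shards = 2
writers = [
    tf.io.TFRecordWriter("ratings-{:05d}-of-{:05d}.tfrecord".format(i, num_shards))
    for i in range(num_shards)
]
for i, (user, item, rating) in enumerate(rows):
    # Round-robin sharding; the real pipeline also vectorizes the serialization step.
    writers[i % num_shards].write(make_example(user, item, rating).SerializeToString())
for writer in writers:
    writer.close()
```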
- 04 Jun, 2018 (1 commit)
  - Taylor Robie authored
    * port changes from previous branch now that transformer util changes are in master
    * fix incorrect count
    * correct (hopefully) treatment of batch_size
    * set eval_metrics to a dummy function for now
    * add some comments
    * start bringing metrics to transformer TPU
    * resolve logits shape
    * metrics are now working except for tf.py_func metrics
    * increase batch_size for tpu, and create summary host call
    * fix host call
    * reduce tpu default batch size further
    * tune batch sizes
    * add minibatch loss to summary
    * handle case of single_iteration_train_steps > number points in an epoch
    * begin to incorporate hooks
    * add sleep workarounds
    * disable hooks altogether
    * generalize host call function and move to newly created tpu utils module
    * remove all traces of params as an object
    * switch from to address some PR comments, and change the number of data points.
    * minor tweaks
    * add tpu dry run for testing, and use matmul for TPU embedding
    * infeed/outfeed queue issue is fixed. Sleeps are no longer necessary
    * add some documentation.
    * cleanup and address PR comments
    * delint
    * add accelerator __init__
    * fix embedding
    * missed PR comment
    * address PR comments
    * fix validator bug
    * rewrite cloud storage validator, and add oauth dependency to requirements.txt
    * delint
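The "use matmul for TPU embedding" workaround mentioned above expresses an embedding lookup as a one-hot times table matmul, which at the time lowered to TPU more readily than a gather. A minimal sketch with illustrative shapes:

```python
import tensorflow as tf

vocab_size, embedding_dim = 1000, 64
table = tf.Variable(tf.random.normal([vocab_size, embedding_dim]))

ids = tf.constant([3, 17, 42])                 # ids to look up
one_hot = tf.one_hot(ids, depth=vocab_size)    # [batch, vocab_size]
embeddings = tf.matmul(one_hot, table)         # [batch, embedding_dim]

# Numerically equivalent to tf.nn.embedding_lookup(table, ids), which uses a gather.
```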
- 18 May, 2018 (1 commit)
  - Younghee Kwon authored
    * Add boosted_trees to the official models
    * Comments addressed from review, and a test added; using absl.flags instead of argparse.
    * Used help_wrap. Also added instructions for inference.
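The switch to absl.flags noted above replaces an argparse-based CLI with module-level flag definitions. A minimal sketch with hypothetical flag names rather than the model's real ones:

```python
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_integer("n_trees", 100, "Number of trees to train.")
flags.DEFINE_string("train_data", None, "Path to the training CSV.")


def main(_):
    # Flags are parsed by app.run before main is called.
    print("Training {} trees on {}".format(FLAGS.n_trees, FLAGS.train_data))


if __name__ == "__main__":
    app.run(main)
```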
- 28 Mar, 2018 (1 commit)
  - Qianli Scott Zhu authored
    * Add benchmark upload util to bigquery. Also update the benchmark logger and bigquery schema for the errors found during the integration test.
    * Fix lint error.
    * Update test to clear all the env vars during test. This was causing errors since the Kokoro test has TF_PKG=tf-nightly injected during test.
    * Update lintrc to ignore google related package.
    * Another attempt to fix lint import error.
    * Address the review comment.
    * Fix lint error.
    * Another fix for lint.
    * Update test comment for env var clean up.
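The benchmark upload described above amounts to streaming result rows into a BigQuery table. A minimal sketch using the google-cloud-bigquery client; the project, table, and row fields here are placeholders, not the official benchmark schema:

```python
from google.cloud import bigquery

# Placeholder project and table ids; the real util defines its own schema.
client = bigquery.Client(project="my-project")
table_id = "my-project.benchmark_dataset.benchmark_run"

rows = [{
    "test_id": "ncf_synthetic_run",   # illustrative fields
    "exp_per_second": 1234.5,
}]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    raise RuntimeError("BigQuery insert failed: {}".format(errors))
```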
- 27 Mar, 2018 (1 commit)
  - Taylor Robie authored
    * add requirements.txt now that there are dependencies beyond tensorflow
    * direct pip info to README