"examples/language-modeling/run_mlm_no_trainer.py" did not exist on "efb5c0a453ea76a51b16e5160c8fa25036f1f17d"
- 08 Apr, 2021 1 commit
Stas Bekman authored
* synced gpus
* fix
* fix
* need to use t5-small for quality tests
* notes
* complete merge
* fix a disappearing std stream problem
* start zero3 tests
* wip
* tune params
* sorting out the pre-trained model loading
* reworking generate loop wip
* wip
* style
* fix tests
* split the tests
* refactor tests
* wip
* parameterized
* fix
* work out the resume-from-non-ds-checkpoint pass + test
* cleanup
* remove no longer needed code
* split getter/setter functions
* complete the docs
* suggestions
* gpus and their compute capabilities link
* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* style
* remove invalid paramgd
* automatically configure zero3 params that rely on hidden size
* make _get_resized_embeddings zero3-aware
* add test exercising resize_token_embeddings()
* add docstring

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
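For context on "automatically configure zero3 params that rely on hidden size": several ZeRO-3 bucket sizes scale with the model's hidden size, so the integration can fill them in from the model config instead of forcing users to hand-tune them. A minimal sketch of the idea — the key names follow DeepSpeed's ZeRO-3 schema, but the scaling factors here are illustrative assumptions, not necessarily the integration's exact values:

```python
def fill_hidden_size_based_defaults(ds_config: dict, hidden_size: int) -> dict:
    """Sketch: resolve "auto" ZeRO-3 values that depend on the model's hidden size."""
    zero = ds_config.setdefault("zero_optimization", {})
    # illustrative multipliers; the real integration derives its own values
    derived = {
        "reduce_bucket_size": hidden_size * hidden_size,
        "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
        "stage3_param_persistence_threshold": 10 * hidden_size,
    }
    for key, value in derived.items():
        if zero.get(key) == "auto":
            zero[key] = value
    return ds_config
```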
- 17 Mar, 2021 1 commit
Stas Bekman authored
* deepspeed checkpoint loading code plus tests
* style
* style
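A hedged sketch of what the new loading path enables from user code — the config path and checkpoint directory below are placeholders, and `model`/`train_dataset` are assumed to be defined as usual:

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="output",
    deepspeed="ds_config.json",  # placeholder path to a DeepSpeed config
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# resume model/optimizer/scheduler state saved by a previous DeepSpeed run
trainer.train(resume_from_checkpoint="output/checkpoint-500")
```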
- 16 Mar, 2021 1 commit
Cheng Li authored
* pass hf optimizer and scheduler to deepspeed if not specified in ds config
* pass hf optimizer and scheduler to deepspeed if not specified in ds config
* update
* make init_deepspeed support config dict
* fix docstring formatting
* clean up trainer's comments
* add new tests
* fix type
* composite argparse doesn't work
* style
* add a new test, rename others
* document new functionality
* complete tests, add docs
* style
* correct level
* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add new methods to the doc
* must tell DS we are using a non-native optimizer
* add protection against cpu_offload + HF optimizer combo
* fix the cli overrides
* sync docs + tests
* restore AdamW
* better docs
* need new version
* no longer needed
* remove outdated information
* refactor duplicated code

Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
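The gist of this change, as a hedged sketch: when the DeepSpeed config has no `optimizer`/`scheduler` sections, the Trainer-created ones are handed to `deepspeed.initialize` instead. The function name below is illustrative, not the integration's internal code:

```python
import deepspeed

def init_engine(model, hf_optimizer, hf_lr_scheduler, ds_config: dict):
    # If the user configured an optimizer in ds_config, let DeepSpeed build it;
    # otherwise hand over the HF-created one. Same logic for the scheduler.
    kwargs = dict(model=model, config=ds_config)
    if "optimizer" in ds_config:
        kwargs["model_parameters"] = model.parameters()
    else:
        kwargs["optimizer"] = hf_optimizer
    if "scheduler" not in ds_config:
        kwargs["lr_scheduler"] = hf_lr_scheduler
    engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
    return engine, optimizer, lr_scheduler
```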
- 15 Mar, 2021 1 commit
Théo Matussière authored
* split seq2seq script, update docs
* needless diff
* fix readme
* remove test diff
* s/summarization/translation

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* cr
* fix arguments & better mbart/t5 refs
* copyright

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* reword readme

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* s/summarization/translation
* short script names
* fix tests
* fix isort, include mbart doc
* delete old script, update tests
* automate source prefix
* automate source prefix for translation
* s/translation/trans

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* fix script name (short version)
* typos

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* exact parameter

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* remove superfluous source_prefix calls in docs
* rename scripts & warn for source prefix
* black
* flake8

Co-authored-by: theo <theo@matussie.re>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
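Background for "automate source prefix": T5-style checkpoints expect a task prefix on every source sentence, while mBART-style models do not. A hedged sketch of the preprocessing idea — function and argument names are illustrative, not the script's exact ones:

```python
def preprocess(examples, tokenizer,
               source_lang="en", target_lang="ro",
               source_prefix="translate English to Romanian: "):
    # prepend the task prefix only for models that need one (e.g. T5)
    inputs = [source_prefix + ex[source_lang] for ex in examples["translation"]]
    targets = [ex[target_lang] for ex in examples["translation"]]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True)
    labels = tokenizer(text_target=targets, max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```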
- 24 Feb, 2021 1 commit
Stas Bekman authored
* handle get_last_lr() before first step()
* abstract away the lr getting logic
* cleanup
* add test
* move to utils
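The first item guards against schedulers that cannot report a learning rate until `step()` has run at least once. A minimal sketch of the pattern (not the Trainer's exact code):

```python
def get_current_lr(optimizer, lr_scheduler):
    try:
        # some schedulers raise if queried before their first step()
        return lr_scheduler.get_last_lr()[0]
    except (AssertionError, IndexError):
        # fall back to the optimizer's configured learning rate
        return optimizer.param_groups[0]["lr"]
```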
- 22 Feb, 2021 1 commit
Stas Bekman authored
* implement gradient_accumulation_steps support in DeepSpeed integration
* typo
* cleanup
* cleanup
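How the added support looks from the training-loop side, as a hedged sketch — `engine` is a DeepSpeed engine returned by `deepspeed.initialize` and `dataloader` yields model inputs; both are assumed here:

```python
# DeepSpeed handles accumulation internally: call backward()/step() on every
# micro-batch, and the optimizer only updates on accumulation boundaries.
for batch in dataloader:
    loss = engine(**batch).loss
    engine.backward(loss)  # also scales the loss across accumulation steps
    engine.step()          # no-op except every gradient_accumulation_steps calls
```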
- 18 Feb, 2021 1 commit
Stas Bekman authored
* memory tracker metrics
* go back to eval for some consistency
* handle no-gpu case
* deal with stackable eval calls
* restore callback order
* style
* simplify the API
* add test
* docs
* consistently use eval_ prefix
* improve docs
* Update src/transformers/trainer_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* rename method
* style

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
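A hedged sketch of the kind of measurement behind these metrics — the tracker that landed in `trainer_utils.py` does more (including CPU-side tracking); this only shows the per-stage GPU peak delta idea:

```python
import torch

class GpuPeakTracker:
    """Sketch: report a per-stage peak GPU memory delta, skipping no-GPU runs."""

    def start(self):
        if not torch.cuda.is_available():  # handle the no-gpu case
            return
        torch.cuda.reset_peak_memory_stats()
        self.begin = torch.cuda.memory_allocated()

    def stop_and_update(self, metrics: dict, stage: str = "eval"):
        if not torch.cuda.is_available():
            return
        # consistent stage prefix, e.g. eval_mem_gpu_peaked_delta
        metrics[f"{stage}_mem_gpu_peaked_delta"] = (
            torch.cuda.max_memory_allocated() - self.begin
        )
```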
- 17 Feb, 2021 1 commit
Stas Bekman authored
* fix invalid port
* missing requirements
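An assumption about the context: distributed test launches need a valid, unused rendezvous port. A common way to obtain one:

```python
import socket

def find_free_port() -> int:
    # bind to port 0 and let the OS pick any unused port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```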
- 15 Feb, 2021 1 commit
Stas Bekman authored
* fix run_seq2seq.py; porting DeepSpeed tests to it
* unrefactor
* defensive programming
* defensive programming 2
* port the rest of the trainer tests
* style
* a cleaner scripts dir finder
* cleanup
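"A cleaner scripts dir finder" plausibly means resolving the examples directory relative to the test file rather than the working directory; the exact layout below is an assumption:

```python
from pathlib import Path

def get_scripts_dir() -> Path:
    # resolve relative to this file so tests work from any working directory
    return Path(__file__).resolve().parents[1] / "examples" / "seq2seq"
```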
- 11 Feb, 2021 1 commit
Stas Bekman authored
* init devices/setup explicitly
* docs + test
* simplify
* cleanup
* cleanup
* cleanup
* correct the required dist setup
* derive local_rank from env LOCAL_RANK
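A sketch of the last item — deriving the device from the launcher-provided `LOCAL_RANK` environment variable instead of a CLI flag:

```python
import os

import deepspeed
import torch

local_rank = int(os.environ.get("LOCAL_RANK", "-1"))
if local_rank != -1:
    torch.cuda.set_device(local_rank)  # pin this process to its GPU
    deepspeed.init_distributed()       # set up the NCCL process group
```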
- 10 Feb, 2021 1 commit
Stas Bekman authored
* free up memory at the end of train
* rework tests
* consistent formatting
* correction
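The gist of "free up memory at the end of train", as a hedged sketch — which references to drop is my illustration, not the Trainer's exact cleanup:

```python
import gc
import torch

def free_train_memory(trainer):
    # drop optimizer/scheduler references so their GPU state can be reclaimed
    trainer.optimizer = None
    trainer.lr_scheduler = None
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
```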
- 08 Feb, 2021 2 commits
Stas Bekman authored
Stas Bekman authored
* deepspeed bug fixes and tests
* manual wrap?