"vscode:/vscode.git/clone" did not exist on "14ad512a786fc30cffbe9a63c49d5f83762b7a2a"
- 07 Feb, 2019 1 commit
Ruty Rinott authored
Summary:
1. Add a call to binarization to complete the preprocessing pipeline.
2. Add the ability to specify a task to select the dictionary, and add a BERT task.
3. Get rid of function calls that are no longer needed after moving functions from fairseq here.

Reviewed By: jingfeidu
Differential Revision: D13977842
fbshipit-source-id: ec9bbb4e98e62e12c20ba68bb52b8bcc94aee91d

- 05 Feb, 2019 1 commit
Myle Ott authored
Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/489

Differential Revision: D13956810
Pulled By: myleott
fbshipit-source-id: 61ace179d1d3790226c38b3f3e47f5452b5ec514

- 01 Feb, 2019 1 commit
Davide Caroselli authored
Summary: The `preprocess.py` script has been refactored in order to:
1. Use the `options` module for command line argument parsing. This gives `preprocess.py` the ability to load custom modules with the `--user-dir` flag (already implemented for all other binaries).
2. Move dictionary loading and building code to the Task implementation. This allows custom Dictionary classes to be used during the data generation step.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/448
Differential Revision: D13674819
Pulled By: myleott
fbshipit-source-id: b40648a98ed6c08284577e5ec25876e018d8c822

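To make the second point concrete, here is a minimal sketch of the pattern this enables: a custom task, loaded via `--user-dir`, that substitutes its own Dictionary class during preprocessing. The task and class names are hypothetical, and the exact `FairseqTask` hooks may differ between fairseq versions.

```python
# Sketch only: MyDictionary / my_task are hypothetical names, and the
# FairseqTask classmethod hooks may vary across fairseq versions.
from fairseq.data import Dictionary
from fairseq.tasks import FairseqTask, register_task


class MyDictionary(Dictionary):
    """A custom dictionary, e.g. one that adds an extra special symbol."""

    def __init__(self):
        super().__init__()
        self.mask_index = self.add_symbol('<mask>')


@register_task('my_task')
class MyTask(FairseqTask):
    @classmethod
    def load_dictionary(cls, filename):
        # preprocess.py now routes dictionary handling through the task,
        # so the custom class is used during data generation as well
        return MyDictionary.load(filename)
```
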
- 29 Jan, 2019 1 commit
Jingfei Du authored
Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/482
With this change, we can use different dictionary classes when calling `build_dictionary` and `build_and_save_dictionary`.

Reviewed By: liaimi
Differential Revision: D13855100
fbshipit-source-id: 62e6db310b5f078e05c547d2671252233be7b7f0

- 24 Jan, 2019 2 commits
Davide Caroselli authored
Summary: When opening text files without specifying the encoding (i.e. `open(path, "r")` or `open(path, "w")`), Python 3 uses the preferred locale encoding (`locale.getpreferredencoding()`), so the result is platform dependent and can change from one machine to another. I believe fairseq should enforce a single standard (UTF-8 seems like the best choice to me). This pull request explicitly specifies UTF-8 encoding when reading text files.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/460
Differential Revision: D13802525
Pulled By: myleott
fbshipit-source-id: 672fd55707ee559ab36d74bc1c24026166ea2367

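A minimal before/after illustration of the change (`dict.txt` is a stand-in path; the actual call sites are spread across the codebase):

```python
import locale

# platform dependent: uses locale.getpreferredencoding(), which may be
# e.g. cp1252 on Windows but UTF-8 on most Linux machines
print(locale.getpreferredencoding())
with open('dict.txt', 'r') as f:
    lines = f.readlines()

# deterministic across machines: the encoding is pinned explicitly
with open('dict.txt', 'r', encoding='utf-8') as f:
    lines = f.readlines()
```
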
vufg authored
Summary: Although both are supported by Python 3.6, I think it would be better to unify the usage of the string formatting function.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/467
Differential Revision: D13802506
Pulled By: myleott
fbshipit-source-id: 5c4877547b1c4ca806ab54c80ae483cfbaa7827a

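The summary doesn't say which style the codebase settled on; assuming it standardized on `str.format`, the change looks roughly like this:

```python
name, ppl = 'valid', 57.3  # illustrative values

# before: old-style % interpolation in some files, .format in others
print('| %s | ppl %.2f' % (name, ppl))

# after: one consistent style everywhere
print('| {} | ppl {:.2f}'.format(name, ppl))
```
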
- 16 Jan, 2019 1 commit
Davide Caroselli authored
Summary: In a multi-GPU training scenario, the `train.py` script spawns new processes with `torch.multiprocessing.spawn`. Unfortunately those child processes don't inherit the modules imported with `--user-dir`. This pull request fixes the problem: custom module import is now explicit in every `main()` function.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/449
Differential Revision: D13676922
Pulled By: myleott
fbshipit-source-id: 520358d66155697885b878a37e7d0484bddbc1c6

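The underlying issue is that `torch.multiprocessing.spawn` starts each worker in a fresh interpreter, so modules imported dynamically by the parent are invisible to the children. A sketch of the fix under that assumption; `import_user_module` is an illustrative helper, not necessarily fairseq's exact function:

```python
import importlib.util
import sys

import torch.multiprocessing as mp


def import_user_module(user_dir, name='user_module'):
    # hypothetical helper: load a plugin package from an arbitrary path
    spec = importlib.util.spec_from_file_location(name, user_dir + '/__init__.py')
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)


def main(rank, user_dir):
    # without this explicit call, registrations performed by the plugin
    # (register_task, register_model, ...) are missing in the worker
    import_user_module(user_dir)
    ...


if __name__ == '__main__':
    user_dir = '/path/to/plugins'
    import_user_module(user_dir)  # parent process
    mp.spawn(main, args=(user_dir,), nprocs=2)
```
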
- 05 Jan, 2019 1 commit
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/283
Pull Request resolved: https://github.com/pytorch/fairseq/pull/428

Differential Revision: D13564190
Pulled By: myleott
fbshipit-source-id: 3b62282d7069c288f5bdd1dd2c120788cee4abb5

- 06 Dec, 2018 1 commit
Myle Ott authored
Summary: Not switching to Black formatting just yet, but adding `fmt: off` directives in case we decide to later.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/399
Differential Revision: D13364674
Pulled By: myleott
fbshipit-source-id: a20a11a18be3d583ee30eff770278fb4bd05b93c

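For reference, the directives are plain comments that Black honors, leaving the enclosed region exactly as written (the config values below are illustrative):

```python
# fmt: off
ARCH_CONFIG = {
    'fconv_lm':     {'layers': 13, 'embed_dim':  850},
    'fconv_lm_big': {'layers': 14, 'embed_dim': 1024},
}
# fmt: on

# outside the off/on pair, Black reformats code as usual
```
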
- 18 Nov, 2018 1 commit
Myle Ott authored
Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/372

Differential Revision: D13114426
Pulled By: myleott
fbshipit-source-id: 6c24b96a3556a0ecd3d1f350642a884254a40bd3

- 10 Nov, 2018 1 commit
Ruty Rinott authored
Summary: Step 2 of the pipeline for LM training: it assumes tokenized text data as input, splits it into train/validation/test, and runs binarization (step a_ii in https://fb.quip.com/kazzAxvZHBj9).

Reviewed By: borguz
Differential Revision: D10454705
fbshipit-source-id: 74e8679041f5507c4e404c1b719547c2ae9ed983

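A rough sketch of the split step, assuming one tokenized sentence per line of input; the ratios, output layout, and the binarization hand-off are illustrative, not the internal pipeline's:

```python
import random


def split_corpus(path, ratios=(0.98, 0.01, 0.01), seed=1):
    # assumes one tokenized sentence per line; ratios are illustrative
    with open(path, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    random.Random(seed).shuffle(lines)
    n_train = int(ratios[0] * len(lines))
    n_valid = int(ratios[1] * len(lines))
    splits = {
        'train': lines[:n_train],
        'valid': lines[n_train:n_train + n_valid],
        'test': lines[n_train + n_valid:],
    }
    for name, part in splits.items():
        with open('{}.{}'.format(path, name), 'w', encoding='utf-8') as out:
            out.writelines(part)
    # binarization (mapping tokens to dictionary indices and writing a
    # binary dataset) would then run on each split, e.g. via preprocess.py
```
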
- 25 Sep, 2018 1 commit
Sergey Edunov authored
- 03 Sep, 2018 1 commit
Myle Ott authored
- 31 Jul, 2018 1 commit
alvations authored
- 21 Jun, 2018 1 commit
Myle Ott authored
- 15 Jun, 2018 3 commits
alexeib authored
This implements the convolutional language model from https://arxiv.org/pdf/1612.08083.pdf

There are 3 modes for constructing batches:
- token block: fill each sample with a specified number of tokens without regard for sentence delimiters - this is what was used for training in the paper
- complete: fill each sample with a specified number of tokens but make sure it contains only complete sentences (i.e. if the next sentence goes over the token block limit, move it to the next sample) - this was used for evaluation in the paper
- eos: one sentence per sample (skip blank lines)

Some results:
- GCNN-13 - GBW - 37.46
- GCNN-14B - GBW - 33.88
- GCNN-8 - Wiki103 - 43.76
- GCNN-14 - Wiki103 - 35.66

train:
python train.py /private/home/abaevski/data/wiki103 --save-dir /tmp --fp16 --max-epoch 35 --save-interval 1 --save-interval-updates 1000 --keep-interval-updates 25 --arch fconv_lm --optimizer nag --lr 1.0 --lr-scheduler reduce_lr_on_plateau --lr-shrink 0.5 --decoder-embed-dim 280 --decoder-layers '[(850, 6)] * 3 + [(850,1)] + [(850,5)] * 4 + [(850,1)] + [(850,4)] * 3 + [(1024,4)] + [(2048, 4)]' --clip-norm 0.1 --dropout 0.2 --weight-decay 5e-06 --criterion cross_entropy --max-tokens 1024 --max-target-positions 1024 --seed 1 --log-format json --log-interval 500

eval:
python eval_lm.py ~abaevski/data/wiki103 --path '/checkpoint02/abaevski/2018-04-27/lm_wiki.fp16.mxup300000.fconv.adam.lrs=reduce_lr_on_plateau.emb280.layers(850,6)*3+(850,1)+(850,5)*4+(850,1)+(850,4)*3+(1024,1)+(2048,4).lr0.0005.clp0.1.drp0.3.wd0.0.crt=cross_entropy.mxtk2048.smptk256.seed1.ngpu8/checkpoint_last.pt'

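A toy sketch of the grouping rules behind the three modes (fairseq's real implementation lives in its dataset code; this is a simplification over pre-tokenized sentences):

```python
def make_blocks(sentences, block_size, mode='token_block'):
    """Group tokenized sentences (lists of tokens) into samples."""
    if mode == 'token_block':
        # concatenate everything, then cut every block_size tokens
        stream = [tok for sent in sentences for tok in sent]
        return [stream[i:i + block_size]
                for i in range(0, len(stream), block_size)]
    elif mode == 'complete':
        # only whole sentences per sample; a sentence that would overflow
        # the current sample starts the next one instead
        samples, current = [], []
        for sent in sentences:
            if current and len(current) + len(sent) > block_size:
                samples.append(current)
                current = []
            current = current + sent
        if current:
            samples.append(current)
        return samples
    elif mode == 'eos':
        # one sentence per sample, skipping blank lines
        return [sent for sent in sentences if sent]
    raise ValueError('unknown mode: {}'.format(mode))
```
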
Myle Ott authored
Myle Ott authored
- 05 Mar, 2018 1 commit
Sergey Edunov authored
* Allow more flexible pre-processing and generation
* Addressing CR comments
* Small fix

- 27 Feb, 2018 2 commits
Myle Ott authored
Myle Ott authored
This PR includes breaking API changes to modularize fairseq-py and adds support for distributed training across multiple nodes. Changes:
- c7033ef: add support for distributed training! See updated README for usage.
- e016299: modularize fairseq-py, adding support for register_model, register_criterion, register_optimizer, etc.
- 154e440: update LSTM implementation to use PackedSequence objects in the encoder, better following best practices and improving perf
- 90c2973 and 1da6265: improve unit test coverage

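The `register_*` decorators follow the standard registry pattern; a generic sketch of the idea (not fairseq's exact code):

```python
# Generic decorator-based registry, as used for models, criterions and
# optimizers; fairseq's actual implementation differs in its details.
_MODEL_REGISTRY = {}


def register_model(name):
    def wrapper(cls):
        if name in _MODEL_REGISTRY:
            raise ValueError('duplicate model name: {}'.format(name))
        _MODEL_REGISTRY[name] = cls
        return cls
    return wrapper


@register_model('my_lstm')
class MyLSTMModel:
    @classmethod
    def build_model(cls, args):
        ...  # construct and return the model instance


def build_model(name, args):
    # command-line flags like --arch resolve through the registry
    return _MODEL_REGISTRY[name].build_model(args)
```
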
- 13 Nov, 2017 1 commit
Myle Ott authored
- 08 Nov, 2017 2 commits
Louis Martin authored
* Add <eos> for unk replacement
* Add IndexedRawTextDataset to load raw text files
* Replace unk with original string
* Add load_raw_text_dataset() and --output-format
* Move has_binary_files to data.py

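A toy sketch of the unk-replacement idea from the list above (names are illustrative; in practice the alignment is typically derived from attention scores):

```python
def replace_unk(hypo_tokens, src_words, alignment, unk='<unk>'):
    # alignment[i] is the source position aligned to target position i;
    # when the model emits <unk>, copy the raw source word instead
    return [src_words[alignment[i]] if tok == unk else tok
            for i, tok in enumerate(hypo_tokens)]


hypo = ['the', 'city', '<unk>', 'is', 'beautiful']
src = ['la', 'ville', 'Lyon', 'est', 'belle']
print(replace_unk(hypo, src, alignment=[0, 1, 2, 3, 4]))
# -> ['the', 'city', 'Lyon', 'is', 'beautiful']
```
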
Myle Ott authored
- 19 Oct, 2017 1 commit
Louis Martin authored
- 15 Sep, 2017 1 commit
Sergey Edunov authored