- 10 Apr, 2019 3 commits
-
Liezl Puzon authored
Summary: I added an upgrade_state_dict function so that loading old models will still work:
- layer_norms[0] --> self_attn_layer_norm
- layer_norms[1] --> final_layer_norm

Reviewed By: pipibjc
Differential Revision: D14689849
fbshipit-source-id: b2809262c11fe9d083e571fa31044798aefd48ce
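A minimal sketch of what such a key migration can look like, assuming the old keys follow a `*.layer_norms.{0,1}.{weight,bias}` pattern (the exact fairseq implementation may differ):

```
import re

def upgrade_state_dict(state_dict):
    """Rename legacy layer-norm keys so old checkpoints keep loading."""
    mapping = {"0": "self_attn_layer_norm", "1": "final_layer_norm"}
    for old_key in list(state_dict.keys()):
        m = re.match(r"(.*)\.layer_norms\.([01])\.(weight|bias)$", old_key)
        if m:
            prefix, idx, param = m.groups()
            new_key = "{}.{}.{}".format(prefix, mapping[idx], param)
            state_dict[new_key] = state_dict.pop(old_key)
    return state_dict
```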
-
Kritika Singh authored
Summary: Used in fairspeq/train.py.

Reviewed By: myleott, yqwangustc
Differential Revision: D14841512
fbshipit-source-id: 02fd7b58841c32e2797e3159e65f2bef36f02da1
-
Peng-Jen Chen authored
Summary:
- Add language token to MultilingualTranslation task
- Add back-translation and denoising loss to MultilingualTranslation task

Pull Request resolved: https://github.com/pytorch/fairseq/pull/620
Reviewed By: liezl200
Differential Revision: D14756873
Pulled By: pipibjc
fbshipit-source-id: 89d668db26848fd95f446edf5923bab2113636f7
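As an illustration of the language-token part, a hedged sketch; the helper and the idea of storing the language marker as a regular dictionary token are assumptions here, not the exact fairseq code:

```
import torch

def prepend_lang_token(tokens, lang_token_index):
    """Prepend a language-identifying token to a 1D tensor of token indices."""
    return torch.cat([tokens.new_tensor([lang_token_index]), tokens])
```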
-
- 09 Apr, 2019 2 commits
-
Kartikay Khandelwal authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/628

Updating embedding layers in TransformerSentenceEncoder to be compatible with the transformer model.

Reviewed By: liezl200
Differential Revision: D14836883
fbshipit-source-id: 2240f61bf40b191d01b4efdaac4dd7562b4166c6
-
Kartikay Khandelwal authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/626

While training a model on multiple GPUs, the current fairseq train workflow fails while creating the directory from which to load a checkpoint. This happens because multiple nodes attempt to create the same directory at once, racing despite the os.makedirs option "exist_ok=True". Fix this by making sure only rank 0 creates the directory.

Reviewed By: myleott
Differential Revision: D14841304
fbshipit-source-id: c9b73ba804de97e2cb19a616189fefce476d8c74
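A sketch of the fix described above (the function name is illustrative; the actual call site lives in fairseq's distributed training setup):

```
import os
import torch.distributed as dist

def make_save_dir(save_dir):
    """Let only rank 0 create the directory, then make everyone wait for it."""
    if not dist.is_initialized() or dist.get_rank() == 0:
        os.makedirs(save_dir, exist_ok=True)
    if dist.is_initialized():
        dist.barrier()  # no worker proceeds until the directory exists
```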
-
- 07 Apr, 2019 1 commit
-
Haoran Li authored
Summary: There is a recurring wait-timeout issue when using multiple nodes; even setting copylocallytempdir:/ doesn't help (e.g. f105637629). It works after moving distributed_init to after get_batch_iterator (e.g. f106520580).

Reviewed By: myleott
Differential Revision: D14817769
fbshipit-source-id: edbb101a28d8082241c7bdd8c5500c9dad27647c
-
- 05 Apr, 2019 3 commits
-
Liezl Puzon authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/605

Eval and log on a subset of directions for multimodel training. This reduces code duplication in PyTorch Translate's semi_supervised task and will enable clean multitask setups in the future.

Reviewed By: pipibjc, dpacgopinath
Differential Revision: D14672779
fbshipit-source-id: 1342c71781f0824cc56a38ad1c1822e34eaef337
-
Kartikay Khandelwal authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/622

Updating some defaults to more meaningful values.

Reviewed By: rutyrinott
Differential Revision: D14761263
fbshipit-source-id: 7ac670aa370f315ddfb511c63273583a6062c569
-
Kartikay Khandelwal authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/621

In this commit, I add some modules to Fairseq needed to set up BERT/XLM-style pretraining.

Reviewed By: borguz
Differential Revision: D14719663
fbshipit-source-id: 1c5c36b6b2cde1c9bcd3c9e9ac853d2b7ae64102
-
- 04 Apr, 2019 1 commit
-
Jay Mahadeokar authored
Summary: This diff adds:
1. An aligned training task, specifically for cross-entropy criterion training using prod data and prod-like models.
2. A few changes to correctly register the task and criterions.
3. Changes to the trainer code for propagating the accuracy metrics we care about during training.

A couple of things are hacky right now:
- The reporting is not modular (this needs to be thought about in general for fairseq).
- The get_dummy_batch logic could be specific to the task instead of specific to the dataset.

Reviewed By: myleott
Differential Revision: D14670482
fbshipit-source-id: dc077247b2ae9d26a8e842a386ec5faa5771e836
-
- 03 Apr, 2019 2 commits
-
James Cross authored
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/429
Pull Request resolved: https://github.com/pytorch/fairseq/pull/618

PyTorch export for transformer models was broken because, as written, they used a placeholder `None` value during inference for the variable `key_padding_mask` to indicate no padding, but PyTorch is unable to trace such values. This diff adds a minor hack to allow the use of an empty tensor for the same purpose.

Reviewed By: jmp84
Differential Revision: D14581730
fbshipit-source-id: 2ea4664c20ecab8478c578b2182a85319140036c
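A hedged sketch of the workaround (simplified; the real change lives in the transformer's attention code, and the helper name here is illustrative):

```
import torch

def compute_key_padding_mask(src_tokens, padding_idx):
    """Build a padding mask, using an empty tensor rather than None so that
    torch.jit.trace can follow the value."""
    mask = src_tokens.eq(padding_idx)
    if not mask.any():
        mask = torch.empty(0)  # "no padding" placeholder that tracing can handle
    return mask
```

Downstream code then tests `mask.numel() > 0` instead of `mask is not None`.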
-
Paco Guzman authored
Summary: Sorts dictionaries lexicographically before creating the counter. This makes distributed preprocessing deterministic.

Reviewed By: myleott
Differential Revision: D14678214
fbshipit-source-id: 7a9e2f0cb367e8fb76da01e108dda4c6c5aab505
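The idea, as a hedged sketch with illustrative names: when per-worker counts are merged in lexicographic key order, the counter's insertion order is fixed, so frequency ties break the same way on every run.

```
from collections import Counter

def merge_word_counts(shard_counts):
    """Merge per-worker Counters in sorted key order for determinism."""
    merged = Counter()
    for counts in shard_counts:
        for word in sorted(counts):  # lexicographic order fixes insertion order
            merged[word] += counts[word]
    return merged
```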
-
- 02 Apr, 2019 3 commits
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/613

Differential Revision: D14712311
Pulled By: myleott
fbshipit-source-id: 3e7646629b539c10b6af89dece2c0c564f31125f
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/614

Differential Revision: D14712321
Pulled By: myleott
fbshipit-source-id: 8ef973c5d30ebccf0df0f1cabdddd590248a8f8d
-
Yash Kumar Atri authored
Summary: Corrects a syntax error in an assert statement caused by a stray character before the error message. The assertion and the code work fine now; tested with the wmt-ende task.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/598
Differential Revision: D14712846
Pulled By: myleott
fbshipit-source-id: 3f708aa2362ceecba19174750f9ffc9238537512
-
- 29 Mar, 2019 5 commits
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/606

Differential Revision: D14680968
Pulled By: myleott
fbshipit-source-id: 8044d828a8167199c10f2aee24f7e611feb91802
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/607

Differential Revision: D14681031
Pulled By: myleott
fbshipit-source-id: 466ee526a30543218e2b7138fb651db866ae5ab3
-
Stefan Schweter authored
Summary: Hi, currently the link to the language model README is broken on the `examples/language_model/transformer_lm` page. This PR fixes the link :)

Pull Request resolved: https://github.com/pytorch/fairseq/pull/600
Differential Revision: D14680985
Pulled By: myleott
fbshipit-source-id: 62291efbf4ece2af54fae45c408c2759863f9847
-
Facebook Community Bot authored
Summary: This pull request was created automatically because we noticed your project was missing a Code of Conduct file. Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors. This PR was crafted with love by Facebook's Open Source Team.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/603
Differential Revision: D14680981
Pulled By: myleott
fbshipit-source-id: 653262641554735d89f96c392c72fb311e53a451
-
Felix Wu authored
Summary: The file `unfold1d.py` has the same name as the `unfold1d` function, which causes an error when using DynamicConv1dTBC with `unfold=True`. This doesn't affect NMT models, which don't use the unfolding mode. I renamed `unfold1d.py` to `unfold.py` to fix this bug. Originally we would get a `TypeError` when running this code:

```
import torch
from fairseq.modules import LightweightConv1dTBC, DynamicConv1dTBC

x = torch.rand(4, 10, 8)
m = LightweightConv1dTBC(8, 4, 3)
o = m(x, unfold=True)
m = DynamicConv1dTBC(8, 4, 3)
o = m(x, unfold=True)
```

Pull Request resolved: https://github.com/pytorch/fairseq/pull/593
Differential Revision: D14597117
Pulled By: myleott
fbshipit-source-id: 59752fd7ff62c53a4aba8b56b83155291e5f5792
-
- 26 Mar, 2019 1 commit
-
Haoran Li authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/597
Pull Request resolved: https://github.com/facebookresearch/pytext/pull/424

Fixes two issues:
1. The new LayerNorm has issues when exporting.
2. Fix tensorboard writing by using the "RAW" operator_export_type.

Differential Revision: D14610694
fbshipit-source-id: 1b859f54c571a90766128ab28539a9901375c3e6
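For reference, a hedged sketch of what the "RAW" operator export type looks like in use (API as of the PyTorch versions of this era; the actual call site is inside the tensorboard graph writer, and the details here are assumptions):

```
import torch
import torch.nn as nn

model = nn.Linear(4, 4)
dummy_input = torch.zeros(1, 4)
# Export the graph without operator conversion, as used for tensorboard writing.
torch.onnx.export(
    model, dummy_input, "graph.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.RAW,
)
```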
-
- 19 Mar, 2019 2 commits
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/587

Differential Revision: D14517597
Pulled By: myleott
fbshipit-source-id: 4831ea5a9da1c2e207529a4ab3c4d0b070f5f34e
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/586

Differential Revision: D14517550
Pulled By: myleott
fbshipit-source-id: fab68a8f597a98cf28d812d89eff845c5776b65b
-
- 16 Mar, 2019 1 commit
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/580

Differential Revision: D14494390
Pulled By: myleott
fbshipit-source-id: 524cc16a106f2af630357e2ebdf7dde35fa7d494
-
- 15 Mar, 2019 1 commit
-
Myle Ott authored
Summary: Changelog:
- 998ba4f: Add language models from Baevski & Auli (2018)
- 4294c4f6: Add mixture of experts code from Shen et al. (2019)
- 00493490: Add example for multilingual training
- 48d9afbe: Speed improvements, including fused operators from apex
- 44d27e64: Add Tensorboard support
- d17fa851: Add Adadelta optimizer
- 9e1c880f: Add `FairseqEncoderModel`
- b65c579b: Add `FairseqTask.inference_step` to modularize generate.py
- 2ad1178e: Add back `--curriculum`
- Misc bug fixes and other features

Pull Request resolved: https://github.com/pytorch/fairseq/pull/577
Differential Revision: D14481233
Pulled By: myleott
fbshipit-source-id: 4ff8625ef1c0b24273fc65df7c5658e3c932e8b7
-
- 14 Mar, 2019 2 commits
-
Myle Ott authored
Summary:
* Add FusedLayerNorm and FusedAdam
* Softmax and zero-grad optimizations

Pull Request resolved: https://github.com/pytorch/fairseq/pull/531
Differential Revision: D14218457
Pulled By: myleott
fbshipit-source-id: 5656b2d0152cd85f77dc21ec0e1439ec04b9fa89
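A common pattern for such optional fused kernels, shown here as a hedged sketch (fairseq's exact guard may differ):

```
# Prefer apex's fused CUDA kernel when available, else use the stock module.
try:
    from apex.normalization import FusedLayerNorm as LayerNorm
except ImportError:
    from torch.nn import LayerNorm  # fall back when apex isn't installed
```

With this guard in place, every call site that constructs `LayerNorm` picks up the fused kernel transparently when apex is present.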
-
Wen-Ding Li authored
Summary: Add `\` to fix for the shell command. Pull Request resolved: https://github.com/pytorch/fairseq/pull/561 Differential Revision: D14460091 Pulled By: myleott fbshipit-source-id: 3658ca41e69bcd00d4ad8ec2d79ddcc6a8de586e
-
- 13 Mar, 2019 1 commit
-
Qing Sun authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/571

Enable sampling from Fairseq.

Reviewed By: akinh
Differential Revision: D13981666
fbshipit-source-id: 2af1bd67701a73a2c76a9255bd8381d6a7518876
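The core of sampling-based generation, as a hedged sketch (the function below is illustrative; the actual change threads a sampling option through the sequence generator):

```
import torch

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from the softmax distribution instead of argmax."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

vocab_logits = torch.randn(32000)  # scores over the vocabulary
next_token = sample_next_token(vocab_logits, temperature=0.8)
```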
-
- 12 Mar, 2019 3 commits
-
Dmytro Okhonko authored
Summary: sequence_generator assumes that the model input is a 2D tensor of longs, but it can also be something like a 3D tensor of floats, and we should be able to handle this as long as the first dimension is the batch size, followed by the source lengths.

Reviewed By: myleott
Differential Revision: D14420044
fbshipit-source-id: bf8b1e42ad1873f7b803c1a377b0af21648db015
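A hedged illustration of the relaxed shape contract (shapes only; the generator change itself is more involved):

```
import torch

# 2D long input: (batch, src_len) token indices
tokens = torch.randint(0, 100, (8, 20), dtype=torch.long)
# 3D float input: (batch, src_len, feature_dim), e.g. acoustic features
features = torch.rand(8, 20, 40)
# Both are acceptable as long as dim 0 is batch and dim 1 is source length.
```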
-
Dmytro Okhonko authored
Summary: Adds the Adadelta optimizer to fairseq as a wrapper around torch.optim.Adadelta.

Reviewed By: myleott
Differential Revision: D14418635
fbshipit-source-id: 6bf5ec008e905a4a2cbf7415e9492f5eea3ff07f
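A minimal sketch of the wrapper pattern (class name and interface are illustrative; fairseq registers its optimizers through a decorator, omitted here):

```
import torch

class AdadeltaWrapper:
    """Delegate to torch.optim.Adadelta behind a small, uniform interface."""

    def __init__(self, params, lr=1.0, rho=0.9, eps=1e-6, weight_decay=0.0):
        self._optimizer = torch.optim.Adadelta(
            params, lr=lr, rho=rho, eps=eps, weight_decay=weight_decay
        )

    def step(self):
        self._optimizer.step()

    def zero_grad(self):
        self._optimizer.zero_grad()
```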
-
Dmytro Okhonko authored
Summary: Base class for encoder-only models; some models don't have a decoder.

Reviewed By: myleott
Differential Revision: D14413406
fbshipit-source-id: f36473b91dcf3c835fd6d50e2eb6002afa75f11a
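The shape of such a base class, as a hedged sketch (not the exact FairseqEncoderModel code):

```
import torch.nn as nn

class EncoderOnlyModel(nn.Module):
    """Encoder-only models simply forward to their encoder; there is no decoder."""

    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, src_tokens, src_lengths):
        return self.encoder(src_tokens, src_lengths)
```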
-
- 11 Mar, 2019 2 commits
-
Matt Le authored
Summary: This allows one to call fairseq_cli functions from within Python without dispatching to bash.

Reviewed By: myleott
Differential Revision: D14404719
fbshipit-source-id: 044eb652045bb15fc40e72ecbaf6fb10df9f8c61
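Usage might look like the following hedged sketch; the `cli_main` entry-point name and the argument list are assumptions here, not confirmed by the commit:

```
import sys
from fairseq_cli import train

# Invoke training programmatically instead of shelling out to bash.
sys.argv = ["train.py", "data-bin/my_dataset", "--arch", "transformer"]
train.cli_main()
```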
-
Jose Fonollosa authored
Summary: Without parentheses, the regex pattern is not correct, and the checkpoints are not sorted in descending order.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/567
Differential Revision: D14404380
Pulled By: myleott
fbshipit-source-id: 98cd0cfa8c92b78a03ffbb94840bc0f7a118eca1
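A hedged sketch of the corrected logic, simplified from fairseq's checkpoint-listing helper: the parentheses create the capture group whose integer value drives the descending sort.

```
import os
import re

def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt"):
    """Return matching checkpoint files, newest (highest number) first."""
    prog = re.compile(pattern)
    entries = []
    for fname in os.listdir(path):
        m = prog.fullmatch(fname)
        if m is not None:
            entries.append((int(m.group(1)), fname))
    return [os.path.join(path, f) for _, f in sorted(entries, reverse=True)]
```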
-
- 04 Mar, 2019 2 commits
-
Louis MARTIN authored
Summary: Accessing sys.stdin.fileno() raises an error in multiple contexts (pytest, joblib, jupyter...), so accessing it at the top level of the file can cause other scripts to crash when they import fairseq. It is therefore moved inside a method of MultiprocessingPdb so it is only accessed at runtime, if needed. See issue #517.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/553
Differential Revision: D14309284
Pulled By: myleott
fbshipit-source-id: 6ca36f2053a86ebc02e2d6f025459c6a78c592e7
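A hedged sketch of the pattern, simplified from fairseq's pdb module: the file descriptor is only resolved inside interaction(), at runtime.

```
import os
import pdb
import sys

class MultiprocessingPdb(pdb.Pdb):
    """Defer sys.stdin.fileno() to runtime so importing this module is safe."""

    _stdin_fd = None

    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            if MultiprocessingPdb._stdin_fd is None:
                MultiprocessingPdb._stdin_fd = sys.stdin.fileno()  # only now
            sys.stdin = os.fdopen(MultiprocessingPdb._stdin_fd)
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin
```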
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/554

Differential Revision: D14300596
Pulled By: myleott
fbshipit-source-id: f38c8e58daef99d5e4b97dd423e4142e4294a4f0
-
- 02 Mar, 2019 1 commit
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/551

Differential Revision: D14295227
Pulled By: myleott
fbshipit-source-id: 404f2a2697a62ce0dbf22e5ab2e1cf932acc83ac
-
- 01 Mar, 2019 4 commits
-
James King authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/548

Differential Revision: D14286021
Pulled By: myleott
fbshipit-source-id: 7c725304185e63787220371a812ec860e178872c
-
Myle Ott authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/550

Differential Revision: D14286008
Pulled By: myleott
fbshipit-source-id: 6055acf98023fdd01f85ac3d7c4e7fb786e54389
-
Kartikay Khandelwal authored
Summary: The current BERTDataset has a lot of the components needed for generic MaskedLM training but is too restrictive in the assumptions it makes: two blocks being masked, the special tokens used for the sentence embedding as well as the separator, etc. In this diff I refactor this dataset and at the same time make some of the parameters configurable, including the probabilities associated with masking.

Reviewed By: rutyrinott
Differential Revision: D14222467
fbshipit-source-id: e9f78788dfe7f56646ba09c62967c4c0bd30aed8
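As an illustration, a hedged sketch of configurable masking; the function and the default probability below are illustrative, not the refactored dataset's exact API:

```
import torch

def mask_tokens(tokens, mask_index, mask_prob=0.15):
    """Replace each token with the mask token with probability mask_prob."""
    mask = torch.rand(tokens.shape) < mask_prob
    masked = tokens.clone()
    masked[mask] = mask_index
    return masked, mask
```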
-
JingboWang1997 authored
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/546

Differential Revision: D14272808
Pulled By: myleott
fbshipit-source-id: e993450354e7d7561b14b56c12d4859a8ee7121b
-