Commit 3d764a3d authored by Myle Ott, committed by Facebook Github Bot

Update torch.hub usage

Summary: Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/770

Differential Revision: D16491911

Pulled By: myleott

fbshipit-source-id: 8dd2b76f8fa24183640ae9d1129ea47ded77d43d
parent b49ea81c
@@ -13,14 +13,14 @@ Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381); WMT'1
 Interactive generation from the full ensemble via PyTorch Hub:
 ```
 >>> import torch
+>>> torch.hub.list('pytorch/fairseq')
+[..., 'transformer.wmt14.en-fr', 'transformer.wmt16.en-de', 'transformer.wmt18.en-de', ... ]
 >>> en2de_ensemble = torch.hub.load(
 ...   'pytorch/fairseq',
-...   'transformer',
-...   model_name_or_path='transformer.wmt18.en-de',
+...   'transformer.wmt18.en-de',
 ...   checkpoint_file='wmt18.model1.pt:wmt18.model2.pt:wmt18.model3.pt:wmt18.model4.pt:wmt18.model5.pt',
 ...   data_name_or_path='.',
 ...   tokenizer='moses',
-...   aggressive_dash_splits=True,
 ...   bpe='subword_nmt',
 ... )
 >>> len(en2de_ensemble.models)
...
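The `checkpoint_file` argument above packs the five ensemble checkpoints into a single `':'`-separated string. As a minimal sketch (assuming only that `':'` is the separator, as the value in the diff suggests), splitting recovers the individual checkpoint names:

```python
# ':'-separated ensemble checkpoints, as passed to checkpoint_file above
checkpoint_file = ('wmt18.model1.pt:wmt18.model2.pt:wmt18.model3.pt:'
                   'wmt18.model4.pt:wmt18.model5.pt')

# Splitting on ':' yields one entry per ensemble member
checkpoints = checkpoint_file.split(':')
print(checkpoints)
print(len(checkpoints))  # → 5, matching len(en2de_ensemble.models)
```

This is why `len(en2de_ensemble.models)` in the doctest counts five models: one per checkpoint in the string.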
@@ -13,13 +13,13 @@ Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)
 Interactive generation via PyTorch Hub:
 ```
 >>> import torch
+>>> torch.hub.list('pytorch/fairseq')
+[..., 'transformer_lm.gbw.adaptive_huge', 'transformer_lm.wiki103.adaptive', ...]
 >>> lm = torch.hub.load(
 ...   'pytorch/fairseq',
-...   'transformer_lm',
-...   model_name_or_path='transformer_lm.wiki103.adaptive',
+...   'transformer_lm.wiki103.adaptive',
 ...   data_name_or_path='./data-bin',
 ...   tokenizer='moses',
-...   aggressive_dash_splits=True,
 ...   no_escape=True,
 ...   beam=1,
 ...   sampling=True,
...
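The new `torch.hub.list('pytorch/fairseq')` call works because, after this commit, each pretrained model name is its own hub entry point rather than an argument to a generic one. A hypothetical sketch of that registry pattern (the dict and helper functions are illustrative assumptions, not fairseq's actual code):

```python
# Hypothetical registry: each pretrained model name is its own entry point,
# so a hub.list()-style call can enumerate them and a hub.load()-style call
# takes the name directly. Names come from the diff; defaults are assumed.
PRETRAINED = {
    'transformer_lm.wiki103.adaptive': {'data_name_or_path': './data-bin'},
    'transformer_lm.gbw.adaptive_huge': {'data_name_or_path': './data-bin'},
    'transformer.wmt16.en-de': {'bpe': 'subword_nmt'},
}

def hub_list():
    """Stand-in for torch.hub.list('pytorch/fairseq')."""
    return sorted(PRETRAINED)

def hub_load(name, **overrides):
    """Stand-in for torch.hub.load: the entry name selects the pretrained
    config directly; keyword arguments (beam, sampling, ...) override it."""
    cfg = dict(PRETRAINED[name])
    cfg.update(overrides)
    return cfg

print(hub_list())  # every pretrained model is discoverable by name
lm = hub_load('transformer_lm.wiki103.adaptive', beam=1, sampling=True)
print(lm)
```

The payoff is discoverability: users can list what is available before loading, instead of needing to know both a generic entry point and a separate `model_name_or_path` value.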
@@ -16,13 +16,13 @@ Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381); WMT'1
 Interactive generation via PyTorch Hub:
 ```
 >>> import torch
+>>> torch.hub.list('pytorch/fairseq')
+[..., 'transformer.wmt14.en-fr', 'transformer.wmt16.en-de', 'transformer.wmt18.en-de', ... ]
 >>> en2de = torch.hub.load(
 ...   'pytorch/fairseq',
-...   'transformer',
-...   model_name_or_path='transformer.wmt16.en-de',
+...   'transformer.wmt16.en-de',
 ...   data_name_or_path='.',
 ...   tokenizer='moses',
-...   aggressive_dash_splits=True,
 ...   bpe='subword_nmt',
 ... )
 >>> print(en2de.models[0].__class__)
...
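All three hunks make the same API change: the old calls pass a generic entry point plus a `model_name_or_path` keyword, while the new calls pass the pretrained model name as the entry point itself. A sketch of that equivalence (the dict-returning stand-ins are assumptions for demonstration, not fairseq's implementation):

```python
# Stand-ins contrasting the old and new torch.hub call shapes in this diff.

def load_old(entry, model_name_or_path=None, **kwargs):
    """Before this commit: generic entry ('transformer') plus
    model_name_or_path selecting the pretrained weights."""
    return {'model': model_name_or_path, **kwargs}

def load_new(entry, **kwargs):
    """After this commit: the pretrained model name is the entry itself."""
    return {'model': entry, **kwargs}

old = load_old('transformer',
               model_name_or_path='transformer.wmt16.en-de',
               tokenizer='moses', bpe='subword_nmt')
new = load_new('transformer.wmt16.en-de',
               tokenizer='moses', bpe='subword_nmt')
print(old == new)  # → True: both calls resolve the same pretrained model
```

The new form is one argument shorter and matches what `torch.hub.list` reports, so the listed names can be passed to `torch.hub.load` verbatim.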