MarianMT
-----------------------------------------------------------------------------------------------------------------------

**Bugs:** If you see something strange, file a `GitHub Issue
<https://github.com/huggingface/transformers/issues/new?assignees=sshleifer&labels=&template=bug-report.md&title>`__
and assign @patrickvonplaten.

Translations should be similar, but not identical, to the output in the test set linked to in each model card.

Implementation Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Each model is about 298 MB on disk; there are more than 1,000 models.
- The list of supported language pairs can be found `here <https://huggingface.co/Helsinki-NLP>`__.
- Models were originally trained by `Jörg Tiedemann
  <https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann>`__ using the `Marian
  <https://marian-nmt.github.io/>`__ C++ library, which supports fast training and translation.
- All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented
  in a model card.
- The 80 opus models that require BPE preprocessing are not supported.
- The modeling code is the same as :class:`~transformers.BartForConditionalGeneration` with a few minor
  modifications, checked in the sketch after this list:

    - static (sinusoid) positional embeddings (:obj:`MarianConfig.static_position_embeddings=True`)
    - a new final_logits_bias (:obj:`MarianConfig.add_bias_logits=True`)
    - no layernorm_embedding (:obj:`MarianConfig.normalize_embedding=False`)
    - the model starts generating with :obj:`pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses
      :obj:`</s>`).
- Code to bulk convert models can be found in ``convert_marian_to_pytorch.py``.
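
These configuration differences can be verified on any converted checkpoint. A minimal sketch, using
:obj:`Helsinki-NLP/opus-mt-en-de` purely as an example:

.. code-block:: python

    from transformers import MarianConfig

    # download only the configuration of one converted checkpoint
    config = MarianConfig.from_pretrained('Helsinki-NLP/opus-mt-en-de')

    print(config.static_position_embeddings)  # True: static (sinusoid) positional embeddings
    print(config.add_bias_logits)             # True: a final_logits_bias is used
    print(config.normalize_embedding)         # False: no layernorm_embedding
    print(config.pad_token_id)                # generation starts with this token id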

Naming
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- All model names use the following format: :obj:`Helsinki-NLP/opus-mt-{src}-{tgt}`; see the example after this list.
- The language codes used to name models are inconsistent. Two-digit codes can usually be found `here
  <https://developers.google.com/admin-sdk/directory/v1/languages>`__; three-digit codes require googling "language
  code {code}".
- Codes formatted like :obj:`es_AR` are usually :obj:`code_{region}`. That one is Spanish from Argentina.
- The models were converted in two stages. The first 1,000 models use ISO-639-2 codes to identify languages; the
  second group uses a combination of ISO-639-5 and ISO-639-2 codes.
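
For example, a name following this format can be assembled from a language pair. This is just a sketch, and the pair
must actually exist under the `Helsinki-NLP <https://huggingface.co/Helsinki-NLP>`__ organization:

.. code-block:: python

    src, tgt = 'en', 'de'  # example pair; any pair listed on the hub works
    model_name = f'Helsinki-NLP/opus-mt-{src}-{tgt}'
    print(model_name)  # Helsinki-NLP/opus-mt-en-de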


Examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Since Marian models are smaller than many other translation models available in the library, they can be useful for
  fine-tuning experiments and integration tests.
- `Fine-tune on TPU
  <https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/train_distil_marian_enro_tpu.sh>`__
- `Fine-tune on GPU
  <https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/train_distil_marian_enro.sh>`__
- `Fine-tune on GPU with pytorch-lightning
  <https://github.com/huggingface/transformers/blob/master/examples/seq2seq/distil_marian_no_teacher.sh>`__

Multilingual Models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- All model names use the following format: :obj:`Helsinki-NLP/opus-mt-{src}-{tgt}`.
- If a model can output multiple languages, you should specify a language code by prepending the desired output
  language to the :obj:`src_text`.
- You can see a model's supported language codes in its model card, under target constituents, like in `opus-mt-en-roa
  <https://huggingface.co/Helsinki-NLP/opus-mt-en-roa>`__.
- Note that if a model is only multilingual on the source side, like :obj:`Helsinki-NLP/opus-mt-roa-en`, no language
  codes are required.

New multi-lingual models from the `Tatoeba-Challenge repo <https://github.com/Helsinki-NLP/Tatoeba-Challenge>`__
require 3-character language codes:

.. code-block:: python

    from transformers import MarianMTModel, MarianTokenizer

    src_text = [
        '>>fra<< this is a sentence in english that we want to translate to french',
        '>>por<< This should go to portuguese',
        '>>esp<< And this to Spanish'
    ]

    model_name = 'Helsinki-NLP/opus-mt-en-roa'
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    print(tokenizer.supported_language_codes)
    model = MarianMTModel.from_pretrained(model_name)
    translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text))
    tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
    # ["c'est une phrase en anglais que nous voulons traduire en français",
    #  'Isto deve ir para o português.',
    #  'Y esto al español']
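
:obj:`tokenizer.supported_language_codes` lists the :obj:`>>code<<` tokens a checkpoint accepts as target-language
prefixes, so printing it (as above) is a quick way to check which codes can be prepended to :obj:`src_text`.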

Code to see available pretrained models:

.. code-block:: python

    from transformers.hf_api import HfApi
    model_list = HfApi().model_list()
    org = "Helsinki-NLP"
    model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)]
    suffix = [x.split('/')[1] for x in model_ids]
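    # old-style multilingual checkpoints keep uppercase language-group names (e.g. en-ROMANCE)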
    old_style_multi_models = [f'{org}/{s}' for s in suffix if s != s.lower()]


Old Style Multi-Lingual Models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These are the old style multi-lingual models ported from the OPUS-MT-Train repo, along with the members of each
language group:

.. code-block:: python

    ['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU',
     'Helsinki-NLP/opus-mt-ROMANCE-en',
     'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA',
     'Helsinki-NLP/opus-mt-de-ZH',
     'Helsinki-NLP/opus-mt-en-CELTIC',
     'Helsinki-NLP/opus-mt-en-ROMANCE',
     'Helsinki-NLP/opus-mt-es-NORWAY',
     'Helsinki-NLP/opus-mt-fi-NORWAY',
     'Helsinki-NLP/opus-mt-fi-ZH',
     'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI',
     'Helsinki-NLP/opus-mt-sv-NORWAY',
     'Helsinki-NLP/opus-mt-sv-ZH']
    GROUP_MEMBERS = {
     'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
     'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
     'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
     'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
     'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
     'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
     'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
    }
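
A small helper sketch (the :obj:`find_groups` function is hypothetical) to look up which group(s) a language code
belongs to, using the mapping above:

.. code-block:: python

    def find_groups(code, group_members=GROUP_MEMBERS):
        # return the names of all language groups whose members include the given code
        return [group for group, members in group_members.items() if code in members]

    print(find_groups('es_AR'))  # ['ROMANCE']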



Example of translating English to many Romance languages, using old-style two-character language codes:


.. code-block:: python

    from transformers import MarianMTModel, MarianTokenizer

    src_text = [
        '>>fr<< this is a sentence in english that we want to translate to french',
        '>>pt<< This should go to portuguese',
        '>>es<< And this to Spanish'
    ]

    model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    print(tokenizer.supported_language_codes)
    model = MarianMTModel.from_pretrained(model_name)
    translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text))
    tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
    # ["c'est une phrase en anglais que nous voulons traduire en fran莽ais", 'Isto deve ir para o portugu锚s.',  'Y esto al espa帽ol']



MarianConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.MarianConfig
    :members:


MarianTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.MarianTokenizer
    :members: prepare_seq2seq_batch


MarianMTModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.MarianMTModel


TFMarianMTModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFMarianMTModel