Multi-lingual models
=======================================================================================================================

Most of the models available in this library are mono-lingual models (English, Chinese and German). A few
multi-lingual models are available and use different mechanisms from mono-lingual models. This page details the usage
of these models.

The models that currently support multiple languages are BERT, XLM and XLM-RoBERTa.

XLM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

XLM has a total of 10 different checkpoints, only one of which is mono-lingual. The 9 remaining model checkpoints can
be split into two categories: the checkpoints that make use of language embeddings, and those that don't.

XLM & Language Embeddings
-----------------------------------------------------------------------------------------------------------------------

This section concerns the following checkpoints:

- ``xlm-mlm-ende-1024`` (Masked language modeling, English-German)
- ``xlm-mlm-enfr-1024`` (Masked language modeling, English-French)
- ``xlm-mlm-enro-1024`` (Masked language modeling, English-Romanian)
- ``xlm-mlm-xnli15-1024`` (Masked language modeling, XNLI languages)
- ``xlm-mlm-tlm-xnli15-1024`` (Masked language modeling + Translation, XNLI languages)
- ``xlm-clm-enfr-1024`` (Causal language modeling, English-French)
- ``xlm-clm-ende-1024`` (Causal language modeling, English-German)

These checkpoints require language embeddings that specify the language used at inference time. These language
embeddings are represented as a tensor of the same shape as the input ids passed to the model. The values in
these tensors depend on the language used and are identifiable using the ``lang2id`` and ``id2lang`` attributes from
the tokenizer.

Here is an example using the ``xlm-clm-enfr-1024`` checkpoint (Causal language modeling, English-French):


.. code-block::

    >>> import torch
    >>> from transformers import XLMTokenizer, XLMWithLMHeadModel

    >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
    >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")


The different languages this model/tokenizer handles, as well as the ids of these languages, are visible using the
``lang2id`` attribute:

.. code-block::

    >>> print(tokenizer.lang2id)
    {'en': 0, 'fr': 1}
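
The inverse mapping, from ids to languages, is available through the ``id2lang`` attribute:

.. code-block::

    >>> print(tokenizer.id2lang)
    {0: 'en', 1: 'fr'}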


These ids should be used when passing a language parameter during a forward pass of the model. Let's define our inputs:

.. code-block::

    >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1


We should now define the language embedding by using the previously defined language id. We want to create a tensor
filled with the appropriate language ids, of the same size as ``input_ids``. For English, the id is 0:

.. code-block::

    >>> language_id = tokenizer.lang2id['en']  # 0
    >>> langs = torch.tensor([language_id] * input_ids.shape[1])  # torch.tensor([0, 0, 0, ..., 0])

    >>> # We reshape it to be of size (batch_size, sequence_length)
    >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)


You can then feed it all as input to your model:

.. code-block::

    >>> outputs = model(input_ids, langs=langs)
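
The ``outputs`` hold the language modeling scores, from which a prediction can be read off. A minimal sketch, assuming
the logits are the first element of the outputs and greedily taking the most likely token at the last position:

.. code-block::

    >>> logits = outputs[0]  # shape (batch_size, sequence_length, vocab_size)
    >>> next_token_id = torch.argmax(logits[0, -1]).item()  # greedy pick for the next token
    >>> print(tokenizer.decode([next_token_id]))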


The example `run_generation.py
<https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py>`__ can generate
text using the CLM checkpoints from XLM, leveraging these language embeddings.

XLM without Language Embeddings
-----------------------------------------------------------------------------------------------------------------------

This section concerns the following checkpoints:

- ``xlm-mlm-17-1280`` (Masked language modeling, 17 languages)
- ``xlm-mlm-100-1280`` (Masked language modeling, 100 languages)

These checkpoints do not require language embeddings at inference time. Unlike the previously-mentioned XLM
checkpoints, these models are used to obtain generic sentence representations.
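
For instance, a sentence representation could be obtained from ``xlm-mlm-17-1280`` by mean-pooling the last hidden
states; this is a minimal sketch, and mean-pooling is only one common choice among several:

.. code-block::

    >>> import torch
    >>> from transformers import XLMTokenizer, XLMModel

    >>> tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-17-1280")
    >>> model = XLMModel.from_pretrained("xlm-mlm-17-1280")

    >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # no langs tensor is needed
    >>> hidden_states = model(input_ids)[0]  # shape (batch_size, sequence_length, hidden_size)
    >>> sentence_representation = hidden_states.mean(dim=1)  # mean-pool over the sequence dimension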


BERT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

BERT has two checkpoints that can be used for multi-lingual tasks:

- ``bert-base-multilingual-uncased`` (Masked language modeling + Next sentence prediction, 102 languages)
- ``bert-base-multilingual-cased`` (Masked language modeling + Next sentence prediction, 104 languages)

These checkpoints do not require language embeddings at inference time. They should identify the language used in the
context and infer accordingly.
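
For example, here is a minimal sketch of filling a masked token in a French sentence with
``bert-base-multilingual-cased``; the example sentence is only for illustration:

.. code-block::

    >>> import torch
    >>> from transformers import BertTokenizer, BertForMaskedLM

    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
    >>> model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")

    >>> # No language information is passed; the model infers the language from the context
    >>> text = f"Paris est la {tokenizer.mask_token} de la France."
    >>> input_ids = torch.tensor([tokenizer.encode(text)])
    >>> mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    >>> logits = model(input_ids)[0]  # shape (batch_size, sequence_length, vocab_size)
    >>> print(tokenizer.decode([torch.argmax(logits[0, mask_index]).item()]))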

XLM-RoBERTa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

XLM-RoBERTa was trained on 2.5TB of newly created clean CommonCrawl data in 100 languages. It provides strong gains
over previously released multi-lingual models like mBERT or XLM on downstream tasks like classification, sequence
labeling and question answering.

Two XLM-RoBERTa checkpoints can be used for multi-lingual tasks:

- ``xlm-roberta-base`` (Masked language modeling, 100 languages)
- ``xlm-roberta-large`` (Masked language modeling, 100 languages)
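
As with multi-lingual BERT, no language embeddings are required. A minimal sketch of filling a masked token with
``xlm-roberta-base``, with a German example sentence chosen only for illustration:

.. code-block::

    >>> import torch
    >>> from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM

    >>> tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
    >>> model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

    >>> text = f"Berlin ist die {tokenizer.mask_token} von Deutschland."
    >>> input_ids = torch.tensor([tokenizer.encode(text)])
    >>> mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    >>> logits = model(input_ids)[0]  # shape (batch_size, sequence_length, vocab_size)
    >>> print(tokenizer.decode([torch.argmax(logits[0, mask_index]).item()]))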