"vscode:/vscode.git/clone" did not exist on "f56b8033f09c01f1217d944f18c45355e4bdc65b"
deberta.rst 3.12 KB
Newer Older
Pengcheng He's avatar
Pengcheng He committed
1
DeBERTa
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The DeBERTa model was proposed in `DeBERTa: Decoding-enhanced BERT with Disentangled Attention
<https://arxiv.org/abs/2006.03654>`__ by Pengcheng He, Xiaodong Liu, Jianfeng Gao and Weizhu Chen. It is based on
Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019.

It builds on RoBERTa with disentangled attention and an enhanced mask decoder, and is trained on half of the data
used for RoBERTa.

The abstract from the paper is the following:

*Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pre-training and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half
of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.*

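To make the disentangled attention idea concrete, here is a toy sketch of the score computation (this is not the
library's implementation, which additionally handles multiple heads and the exact relative-distance bucketing used in
the model): the attention scores are the sum of content-to-content, content-to-position and position-to-content
terms, scaled by the square root of 3d as in the paper.

.. code-block:: python

    import torch

    hidden = 64                                   # toy hidden size
    seq_len, max_rel = 6, 4                       # sequence length, maximum relative distance

    content = torch.randn(seq_len, hidden)        # H: content vector of each token
    rel_embed = torch.randn(2 * max_rel, hidden)  # P: shared relative position embeddings

    # separate projections for content and position (single head for readability)
    q_c, k_c = torch.randn(hidden, hidden), torch.randn(hidden, hidden)
    q_r, k_r = torch.randn(hidden, hidden), torch.randn(hidden, hidden)

    Qc, Kc = content @ q_c, content @ k_c
    Qr, Kr = rel_embed @ q_r, rel_embed @ k_r

    # relative distance delta(i, j), clamped and shifted into [0, 2 * max_rel - 1]
    pos = torch.arange(seq_len)
    delta = (pos[:, None] - pos[None, :]).clamp(-max_rel, max_rel - 1) + max_rel

    c2c = Qc @ Kc.T                               # content-to-content
    c2p = torch.gather(Qc @ Kr.T, 1, delta)       # content-to-position, indexed by delta(i, j)
    p2c = torch.gather(Kc @ Qr.T, 1, delta).T     # position-to-content, indexed by delta(j, i)

    scores = (c2c + c2p + p2c) / (3 * hidden) ** 0.5
    attention = torch.softmax(scores, dim=-1)     # (seq_len, seq_len) attention weights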

The original code can be found `here <https://github.com/microsoft/DeBERTa>`__.

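As a quick usage sketch, the model can be used like any other encoder in the library. The ``microsoft/deberta-base``
checkpoint name below is an assumption; substitute any available DeBERTa checkpoint:

.. code-block:: python

    import torch
    from transformers import DebertaModel, DebertaTokenizer

    # assumed checkpoint name; replace with any available DeBERTa checkpoint
    tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
    model = DebertaModel.from_pretrained("microsoft/deberta-base")

    inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)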

DebertaConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaConfig
    :members:

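As a minimal sketch, a randomly initialized model can be built from a default configuration (the defaults roughly
correspond to the base architecture):

.. code-block:: python

    from transformers import DebertaConfig, DebertaModel

    # default configuration
    configuration = DebertaConfig()

    # model with randomly initialized weights built from that configuration
    model = DebertaModel(configuration)

    # the configuration can be accessed back from the model
    configuration = model.config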

DebertaTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


DebertaModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaModel
    :members:


DebertaPreTrainedModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaPreTrainedModel
    :members:


DebertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaForSequenceClassification
    :members:
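
A minimal fine-tuning-style sketch (the checkpoint name is an assumption, and the classification head on top of the
pre-trained encoder is freshly initialized, so it would normally be fine-tuned before use):

.. code-block:: python

    import torch
    from transformers import DebertaForSequenceClassification, DebertaTokenizer

    # assumed checkpoint name; the 2-way classification head is randomly initialized
    tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
    model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)

    inputs = tokenizer("The movie was great!", return_tensors="pt")
    labels = torch.tensor([1])  # toy label for the single example

    outputs = model(**inputs, labels=labels)
    loss, logits = outputs[:2]  # loss for training, logits over the two classes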