.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Perplexity of fixed-length models
=======================================================================================================================

Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
models) and is not well defined for masked language models like BERT (see :doc:`summary of the models
<model_summary>`).

Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized
sequence :math:`X = (x_0, x_1, \dots, x_t)`, then the perplexity of :math:`X` is,

.. math::

    \text{PPL}(X)
    = \exp \left\{ -\frac{1}{t}\sum_{i=1}^t \log p_\theta (x_i|x_{<i}) \right\}

where :math:`\log p_\theta (x_i|x_{<i})` is the log-likelihood of the ith token conditioned on the preceding tokens
:math:`x_{<i}` according to our model. Intuitively, it can be thought of as a measure of the model's uncertainty when
predicting the next token: a perplexity of :math:`k` means the model is, on average, as uncertain as if it were
choosing uniformly among :math:`k` tokens at each step. Importantly, this means that the tokenization procedure has a
direct impact on a model's perplexity, which should always be taken into consideration when comparing different
models.
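
To make the definition concrete, here is a minimal sketch that computes perplexity from a handful of hypothetical
per-token log-likelihoods (the values below are made up purely for illustration):

.. code-block:: python

    import torch

    # hypothetical values of log p(x_i | x_{<i}) for a 5-token sequence
    log_probs = torch.tensor([-2.1, -0.8, -3.5, -1.2, -0.4])

    # perplexity is the exponentiated average negative log-likelihood
    ppl = torch.exp(-log_probs.mean())
    print(ppl)  # tensor(4.9530)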

This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more
intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this
`fantastic blog post on The Gradient <https://thegradient.pub/understanding-evaluation-metrics-for-language-models/>`_.
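
The equivalence is straightforward to verify in code: exponentiating the cross-entropy loss between a model's logits
and the target tokens recovers the perplexity. A small sketch with random stand-in logits:

.. code-block:: python

    import torch
    import torch.nn.functional as F

    # random stand-in logits for 4 positions over a 10-token vocabulary
    logits = torch.randn(4, 10)
    targets = torch.randint(0, 10, (4,))

    # cross-entropy is the average negative log-likelihood of the targets,
    # so exponentiating it yields the perplexity
    ppl = torch.exp(F.cross_entropy(logits, targets))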

Calculating PPL with fixed-length models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively
factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below.
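
That is, the sequence likelihood factors by the chain rule as

.. math::

    p_\theta(X) = \prod_{i=1}^t p_\theta (x_i|x_{<i})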

.. image:: imgs/ppl_full.gif
    :width: 600
    :alt: Full decomposition of a sequence with unlimited context length

When working with approximate models, however, we typically have a constraint on the number of tokens the model can
process. The largest version of :doc:`GPT-2 <model_doc/gpt2>`, for example, has a fixed length of 1024 tokens, so we
cannot calculate :math:`p_\theta(x_t|x_{<t})` directly when :math:`t` is greater than 1024.
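
The context size is exposed on the model config. As a quick check (using the ``gpt2-large`` checkpoint that the
example below loads):

.. code-block:: python

    from transformers import GPT2Config

    # the fixed context length (maximum number of positions) for GPT-2 large
    print(GPT2Config.from_pretrained('gpt2-large').n_positions)  # 1024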

Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max
input size is :math:`k`, we then approximate the likelihood of a token :math:`x_t` by conditioning only on the
:math:`k-1` tokens that precede it rather than the entire context. When evaluating the model's perplexity of a
sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed
log-likelihoods of each segment independently.

.. image:: imgs/ppl_chunked.gif
    :width: 600
    :alt: Suboptimal PPL not taking advantage of full available context

This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor
approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will
have less context at most of the prediction steps.
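
As a rough sketch (reusing the ``model``, ``device``, and ``encodings`` objects defined in the full example below),
the chunked approach looks like this:

.. code-block:: python

    import torch

    max_length = model.config.n_positions

    nlls = []
    for begin_loc in range(0, encodings.input_ids.size(1), max_length):
        # score each fixed-length chunk independently, with no overlap
        end_loc = min(begin_loc + max_length, encodings.input_ids.size(1))
        input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
        with torch.no_grad():
            # the loss is the average negative log-likelihood over the chunk
            loss = model(input_ids, labels=input_ids)[0]
        nlls.append(loss * input_ids.size(1))

    ppl = torch.exp(torch.stack(nlls).sum() / end_loc)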

Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly
sliding the context window so that the model has more context when making each prediction.

.. image:: imgs/ppl_sliding.gif
    :width: 600
    :alt: Sliding window PPL taking advantage of all available context

This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more
favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good
practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding
by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to
make predictions at each step.

Example: Calculating perplexity with GPT-2 in 🤗 Transformers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's demonstrate this process with GPT-2.

.. code-block:: python

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    device = 'cuda'
    model_id = 'gpt2-large'
    model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
    tokenizer = GPT2TokenizerFast.from_pretrained(model_id)

We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since
this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire
dataset in memory.

.. code-block:: python

    from datasets import load_dataset
    test = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test')
    encodings = tokenizer('\n\n'.join(test['text']), return_tensors='pt')

With 馃 Transformers, we can simply pass the ``input_ids`` as the ``labels`` to our model, and the average
log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in
the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating
as context to be included in our loss, so we can set these targets to ``-100`` so that they are ignored. The following
is an example of how we could do this with a stride of ``512``. This means that the model will have at least 512 tokens
for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens
available to condition on).

.. code-block:: python

    from tqdm import tqdm

    max_length = model.config.n_positions
    stride = 512

    nlls = []
    for i in tqdm(range(0, encodings.input_ids.size(1), stride)):
        begin_loc = max(i + stride - max_length, 0)
        end_loc = min(i + stride, encodings.input_ids.size(1))
        trg_len = end_loc - i    # may be different from stride on last loop
        input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
        target_ids = input_ids.clone()
        # mask out the tokens that only serve as context so they don't count in the loss
        target_ids[:, :-trg_len] = -100

        with torch.no_grad():
            outputs = model(input_ids, labels=target_ids)
            # the loss is the average negative log-likelihood over the target tokens,
            # so multiplying by trg_len recovers the summed negative log-likelihood
            neg_log_likelihood = outputs[0] * trg_len

        nlls.append(neg_log_likelihood)

    ppl = torch.exp(torch.stack(nlls).sum() / end_loc)

Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window
strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction,
and the better the reported perplexity will typically be.

When we run the above with ``stride = 1024``, i.e. no overlap, the resulting PPL is ``19.64``, which is about the same
as the ``19.93`` reported in the GPT-2 paper. By using ``stride = 512`` and thereby employing our strided
sliding-window strategy, this drops to ``16.53``. This is not only a more favorable score, but is calculated in a way
that is closer to the true autoregressive decomposition of a sequence likelihood.
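
To reproduce this comparison, the evaluation loop above can be wrapped in a small helper (``compute_ppl`` is our own
sketch, not a library function; it reuses the ``model``, ``device``, and ``encodings`` objects from above):

.. code-block:: python

    def compute_ppl(stride, max_length=model.config.n_positions):
        """Evaluate perplexity over ``encodings`` with the given stride."""
        nlls = []
        for i in range(0, encodings.input_ids.size(1), stride):
            begin_loc = max(i + stride - max_length, 0)
            end_loc = min(i + stride, encodings.input_ids.size(1))
            trg_len = end_loc - i
            input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
            target_ids = input_ids.clone()
            target_ids[:, :-trg_len] = -100
            with torch.no_grad():
                # total negative log-likelihood over the trg_len target tokens
                nlls.append(model(input_ids, labels=target_ids)[0] * trg_len)
        return torch.exp(torch.stack(nlls).sum() / end_loc)

    print(compute_ppl(stride=1024))  # no overlap: ~19.64
    print(compute_ppl(stride=512))   # sliding window: ~16.53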