<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# DialoGPT

## Overview

DialoGPT was proposed in [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao,
Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 model trained on 147M conversation-like exchanges extracted from
Reddit.

The abstract from the paper is the following:

*We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained
transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems
that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
generation and the development of more intelligent open-domain dialogue systems.*

Tips:

- DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
  than the left.
- DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful
  at response generation in open-domain dialogue systems.
- DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on [DialoGPT's model card](https://huggingface.co/microsoft/DialoGPT-medium); a minimal version of that loop is sketched after this list.
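
A minimal sketch of such a chat loop, adapted from the model card (the number of turns and the `max_length` value are illustrative choices, not requirements):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Chat for 5 turns, carrying the full conversation history forward.
chat_history_ids = None
for step in range(5):
    # Encode the new user input, ended by the end-of-text token.
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")

    # Append the new user input to the chat history (if there is one).
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # Generate a response, capping the total sequence at 1000 tokens.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # Print only the newly generated tokens.
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```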

Training:

In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: *We
follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language
modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1,..., x_N (N is the
sequence length), ended by the end-of-text token.* For more information please refer to the original paper.
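
As a concrete illustration, here is one way to flatten a dialogue session into a single training string as described above; the example turns are made up, and the resulting `input_ids` would then be fed to any standard causal language modeling training loop:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")

# A multi-turn dialogue session (illustrative turns, not from the training data).
dialogue = [
    "Does money buy happiness?",
    "Depends how much money you spend on it.",
    "What is the best way to buy happiness?",
]

# Concatenate all turns into one long text, each turn ended by the
# end-of-text token, per the paper's formulation.
flattened = "".join(turn + tokenizer.eos_token for turn in dialogue)
input_ids = tokenizer(flattened, return_tensors="pt").input_ids
```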

DialoGPT's architecture is based on the GPT2 model, so one can refer to [GPT2's documentation page](gpt2).

The original code can be found [here](https://github.com/microsoft/DialoGPT).