### INTERFACE FOR ENCODER AND TASK SPECIFIC MODEL ###
...
@@ -378,9 +426,21 @@ class DilBertPreTrainedModel(PreTrainedModel):
DILBERT_START_DOCSTRING=r"""
Smaller, faster, cheaper, lighter: DilBERT
DilBERT is a small, fast, cheap and light Transformer model
trained by distilling Bert base. It has 40% fewer parameters than
`bert-base-uncased` and runs 60% faster, while preserving over 95% of
Bert's performance as measured on the GLUE language understanding benchmark.
Here are the differences between the interface of Bert and DilBert:
- DilBert doesn't have `token_type_ids`: you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`); see the usage sketch after this list.
- DilBert doesn't have options to select the input positions (`position_ids` input). This could be added if necessary though, just let us know if you need this option.
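A minimal usage sketch for the two-segment case (illustrative only, not part of this diff; the `DilBertTokenizer`/`DilBertModel` class names and the `dilbert-base-uncased` checkpoint name are assumed to follow this PR's naming)::

    import torch

    tokenizer = DilBertTokenizer.from_pretrained('dilbert-base-uncased')
    model = DilBertModel.from_pretrained('dilbert-base-uncased')

    # Join the two segments with the separation token instead of passing `token_type_ids`
    text = "Who was Jim Henson? " + tokenizer.sep_token + " Jim Henson was a puppeteer"
    input_ids = torch.tensor([tokenizer.encode(text)])

    outputs = model(input_ids)        # no `token_type_ids`, no `position_ids`
    last_hidden_states = outputs[0]   # (batch_size, sequence_length, hidden_size)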
For more information on DilBERT, you should check TODO(Link): Link to Medium
@add_start_docstrings("""DilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of
the hidden-states output to compute `span start logits` and `span end logits`). """,