Let's do a very quick overview of PyTorch-Transformers. Here are two quick-start examples using `Bert` and `GPT2` with pre-trained models.
See the [documentation](#documentation) for the details of all the models and classes.
### BERT example
First, let's prepare a tokenized input from a text string using `BertTokenizer`.
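Here is a minimal sketch of this step followed by a forward pass through `BertModel` (a sketch only, not necessarily the library's exact released example; it assumes the `bert-base-uncased` checkpoint and an illustrative pair of sentences):

```python
import torch
from pytorch_transformers import BertTokenizer, BertModel

# Load the pre-trained WordPiece tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize a pair of sentences, including the special tokens BERT expects
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)

# Convert tokens to vocabulary indices and build the segment (sentence A/B) ids
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0] * 7 + [1] * (len(tokenized_text) - 7)  # the first 7 tokens form sentence A

# Convert inputs to PyTorch tensors (batch size 1)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# Load the pre-trained model weights and switch to evaluation mode
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Encode the input; the first element of the output tuple is the last hidden layer
with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    last_hidden_states = outputs[0]  # shape: (1, sequence_length, 768)
```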
### OpenAI GPT-2
Here is a quick-start example using the `GPT2Tokenizer` and `GPT2LMHeadModel` classes with OpenAI's pre-trained model to predict the next token from a text prompt.
First, let's prepare a tokenized input from our text string using `GPT2Tokenizer`.
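Here is a minimal sketch of the tokenization and next-token prediction (again a sketch rather than the library's exact released example; it assumes OpenAI's pre-trained `gpt2` weights, and the prompt is taken from the assertion below):

```python
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the pre-trained byte-pair-encoding tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Encode the text prompt into token indices
text = "Who was Jim Henson? Jim Henson was a"
indexed_tokens = tokenizer.encode(text)

# Convert the indices to a PyTorch tensor (batch size 1)
tokens_tensor = torch.tensor([indexed_tokens])

# Load the pre-trained model weights and switch to evaluation mode
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

# Predict the logits for every position; the first element of the output tuple is the logits
with torch.no_grad():
    outputs = model(tokens_tensor)
    predictions = outputs[0]

# Take the most likely token after the last position and decode the full sequence
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
```

With the pre-trained weights, the most likely next token should be " man", so the decoded string completes the prompt: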
```python
assert predicted_text == 'Who was Jim Henson? Jim Henson was a man'
```
Detailed examples for each model class of each model architecture (Bert, GPT, GPT-2, Transformer-XL, XLNet and XLM) can be found in the [documentation](#documentation) below and in the [full documentation](https://huggingface.co/pytorch-transformers/).
## Quick tour of the fine-tuning/usage scripts
The library comprises several example scripts with SOTA performances for NLU and NLG tasks: