*This model was released on 2019-07-26 and added to Hugging Face Transformers on 2020-11-16.*
# RoBERTa
[RoBERTa](https://huggingface.co/papers/1907.11692) improves [BERT](./bert) pretraining, demonstrating that BERT was significantly undertrained and that careful training design matters. The changes include dynamic masking, packing full sentences and dropping the next-sentence prediction objective, larger batches, and a byte-level BPE tokenizer.
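Dynamic masking means a fresh mask pattern is sampled every time a sequence is fed to the model, instead of being fixed once during preprocessing. A minimal sketch of the idea using [`DataCollatorForLanguageModeling`], which masks on the fly, is shown below; the 15% masking rate mirrors the paper, but this is an illustration, not the original training code.

```py
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
# mlm_probability=0.15 matches the masking rate used in the paper
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("Plants create energy through a process known as photosynthesis.")
# Each call samples a new mask, so the same sentence is masked
# differently every epoch rather than once at preprocessing time
for _ in range(2):
    batch = collator([encoding["input_ids"]])
    print(tokenizer.decode(batch["input_ids"][0]))
```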
You can find all the original RoBERTa checkpoints under the [Facebook AI](https://huggingface.co/FacebookAI) organization.
> [!TIP]
> Click on the RoBERTa models in the right sidebar for more examples of how to apply RoBERTa to different language tasks.
The example below demonstrates how to predict the `<mask>` token with [`Pipeline`], [`AutoModel`], and from the command line.
```py
import torch
from transformers import pipeline
pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/roberta-base",
    dtype=torch.float16,
    device=0
)
pipeline("Plants create <mask> through a process known as photosynthesis.")
```
```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/roberta-base",
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/roberta-base",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
inputs = tokenizer("Plants create <mask> through a process known as photosynthesis.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits
masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print(f"The predicted token is: {predicted_token}")
```
```bash
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/roberta-base --device 0
```
## Notes
- RoBERTa doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Separate your segments with the separation token `tokenizer.sep_token` or `</s>`, as in the sketch below.
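A quick way to check this behavior is to encode a sentence pair: the tokenizer joins the two segments with a double separator and returns no `token_type_ids`.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
encoding = tokenizer("Plants store energy.", "They use photosynthesis.")
print(encoding.keys())
# dict_keys(['input_ids', 'attention_mask'])
print(tokenizer.decode(encoding["input_ids"]))
# <s>Plants store energy.</s></s>They use photosynthesis.</s>
```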
## RobertaConfig
[[autodoc]] RobertaConfig
## RobertaTokenizer
[[autodoc]] RobertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## RobertaTokenizerFast
[[autodoc]] RobertaTokenizerFast
- build_inputs_with_special_tokens
## RobertaModel
[[autodoc]] RobertaModel
- forward
## RobertaForCausalLM
[[autodoc]] RobertaForCausalLM
- forward
## RobertaForMaskedLM
[[autodoc]] RobertaForMaskedLM
- forward
## RobertaForSequenceClassification
[[autodoc]] RobertaForSequenceClassification
- forward
## RobertaForMultipleChoice
[[autodoc]] RobertaForMultipleChoice
- forward
## RobertaForTokenClassification
[[autodoc]] RobertaForTokenClassification
- forward
## RobertaForQuestionAnswering
[[autodoc]] RobertaForQuestionAnswering
- forward