*This model was released on 2019-11-10 and added to Hugging Face Transformers on 2020-11-16.*
# CamemBERT

[CamemBERT](https://huggingface.co/papers/1911.03894) is a language model based on [RoBERTa](./roberta), but trained specifically on French text from the OSCAR dataset, making it more effective for French language tasks.

What sets CamemBERT apart is that it was trained on a large, high-quality collection of French data rather than a mix of many languages. This helps it capture French better than many multilingual models.

Common applications of CamemBERT include masked language modeling (fill-mask prediction), text classification (sentiment analysis), token classification (named entity recognition), and sentence pair classification (entailment tasks).

You can find all the original CamemBERT checkpoints under the [ALMAnaCH](https://huggingface.co/almanach/models?search=camembert) organization.

> [!TIP]
> This model was contributed by the [ALMAnaCH (Inria)](https://huggingface.co/almanach) team.
>
> Click on the CamemBERT models in the right sidebar for more examples of how to apply CamemBERT to different NLP tasks.

The examples below demonstrate how to predict the `<mask>` token with [`Pipeline`], [`AutoModel`], and from the command line.

```python
import torch
from transformers import pipeline

pipeline = pipeline("fill-mask", model="camembert-base", dtype=torch.float16, device=0)
pipeline("Le camembert est un délicieux fromage <mask>.")
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForMaskedLM.from_pretrained(
    "camembert-base", dtype="auto", device_map="auto", attn_implementation="sdpa"
)

inputs = tokenizer("Le camembert est un délicieux fromage <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```

```bash
echo -e "Le camembert est un délicieux fromage <mask>." | transformers run --task fill-mask --model camembert-base --device 0
```
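The same checkpoints also back the task-specific heads documented in the API reference below. The snippet that follows is a minimal sketch of text classification with [`CamembertForSequenceClassification`]: loading `camembert-base` this way attaches a randomly initialized classification head, so the label names, example sentence, and any predictions are illustrative only until you fine-tune on your own labeled French data.

```python
# Minimal sketch: French sentiment classification with a CamemBERT sequence classification head.
# Note: the classification head is randomly initialized here, so predictions are meaningless
# until the model is fine-tuned; the label names below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base",
    num_labels=2,
    id2label={0: "NEGATIVE", 1: "POSITIVE"},
    label2id={"NEGATIVE": 0, "POSITIVE": 1},
)

inputs = tokenizer("Ce fromage est absolument délicieux !", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```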
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for the available options. The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 8-bits.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForMaskedLM.from_pretrained(
    "almanach/camembert-large",
    quantization_config=quant_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("almanach/camembert-large")

inputs = tokenizer("Le camembert est un délicieux fromage <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```

## CamembertConfig

[[autodoc]] CamembertConfig

## CamembertTokenizer

[[autodoc]] CamembertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## CamembertTokenizerFast

[[autodoc]] CamembertTokenizerFast

## CamembertModel

[[autodoc]] CamembertModel

## CamembertForCausalLM

[[autodoc]] CamembertForCausalLM

## CamembertForMaskedLM

[[autodoc]] CamembertForMaskedLM

## CamembertForSequenceClassification

[[autodoc]] CamembertForSequenceClassification

## CamembertForMultipleChoice

[[autodoc]] CamembertForMultipleChoice

## CamembertForTokenClassification

[[autodoc]] CamembertForTokenClassification

## CamembertForQuestionAnswering

[[autodoc]] CamembertForQuestionAnswering