---
language: french
---

# CamemBERT 

CamemBERT is a state-of-the-art language model for French, based on the RoBERTa architecture and pretrained on the French subcorpus of the newly available multilingual corpus OSCAR.

CamemBERT was originally evaluated on four downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI). It improved the state of the art for most tasks over previous monolingual and multilingual approaches, confirming the effectiveness of large pretrained language models for French.

CamemBERT was trained and evaluated by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.

The preprint is available here: [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894)
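
Below is a minimal usage sketch for masked-token prediction with the Hugging Face `transformers` fill-mask pipeline. It assumes the library is installed and that the checkpoint is published on the Hub under the `camembert-base` identifier (an assumption about the hosting name, not stated in this README).

```python
# Minimal sketch: masked-token prediction with CamemBERT.
# Assumes `transformers` is installed and the checkpoint is
# available under the Hub identifier "camembert-base".
from transformers import pipeline

camembert_fill_mask = pipeline(
    "fill-mask",
    model="camembert-base",
    tokenizer="camembert-base",
)

# CamemBERT uses "<mask>" as its mask token.
results = camembert_fill_mask("Le camembert est <mask> !")
for r in results:
    print(f"{r['score']:.3f}  {r['sequence']}")
```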