---
language:
- ru
---

# rubert-base-cased

RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for RuBERT [1].
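A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the id `DeepPavlov/rubert-base-cased` and that the `transformers` and `torch` packages are installed:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub id; adjust if the model is hosted under a different name.
MODEL_ID = "DeepPavlov/rubert-base-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a Russian sentence and extract contextual embeddings.
inputs = tokenizer("Привет, мир!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state has shape (batch, seq_len, 768).
print(outputs.last_hidden_state.shape)
```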


[1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).