---
language:
- russian
---

# rubert-base-cased

RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia
and news data. We used this training data to build a vocabulary of Russian subtokens and took the multilingual version
of BERT-base as the initialization for RuBERT [1].

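A minimal usage sketch with the 🤗 Transformers library, assuming the model is published on the Hugging Face Hub under the `DeepPavlov/rubert-base-cased` identifier (adjust the name if your copy lives elsewhere):

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hub identifier for this model card.
MODEL_ID = "DeepPavlov/rubert-base-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Tokenize a Russian sentence and run it through the encoder.
inputs = tokenizer("Москва - столица России.", return_tensors="pt")
outputs = model(**inputs)

# The [CLS] token's final hidden state, shape (1, 768) for this 768-hidden model.
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)
```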

[1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language.
arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).