---
language: ms
---

# Bahasa Tiny-BERT Model

General Distilled Tiny BERT language model for Malay and Indonesian. 

## Pretraining Corpus

The `tiny-bert-bahasa-cased` model was distilled on ~1.8 billion words, covering both standard and social-media language structures. Below is the list of data we distilled on:

1. [Wikipedia dump](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [Local Instagram](https://github.com/huseinzol05/Malaya-Dataset#instagram).
3. [Local Twitter](https://github.com/huseinzol05/Malaya-Dataset#twitter-1).
4. [Local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
5. [Local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
6. [Local Singlish/Manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
7. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
8. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
9. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).

Preprocessing steps can be reproduced from [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).

## Distilling details

- This model was distilled using the huawei-noah TinyBERT [repository](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) on 3 Titan V100 32GB VRAM GPUs. A conceptual sketch of the distillation objective is shown below.
- All steps can be reproduced from [Malaya/pretrained-model/tiny-bert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/tiny-bert).
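For intuition only, here is a minimal sketch of the TinyBERT-style layer-wise distillation objective (MSE between projected student hidden states and teacher hidden states, plus MSE between attention maps). The function name, tensor shapes, and sizes are illustrative assumptions, not taken from the actual training scripts linked above.

```python
import torch
import torch.nn.functional as F

def tinybert_layer_loss(student_hidden, teacher_hidden,
                        student_attn, teacher_attn, proj):
    """Illustrative TinyBERT-style loss for one mapped student/teacher layer pair.

    student_hidden: (batch, seq, d_student), teacher_hidden: (batch, seq, d_teacher)
    student_attn / teacher_attn: (batch, heads, seq, seq) attention matrices
    proj: nn.Linear(d_student, d_teacher) lifting student states to teacher size
    """
    # Hidden-state distillation: match teacher hidden states after projection.
    hidden_loss = F.mse_loss(proj(student_hidden), teacher_hidden)
    # Attention distillation: match the teacher's attention matrices.
    attn_loss = F.mse_loss(student_attn, teacher_attn)
    return hidden_loss + attn_loss

# Example with random tensors (hypothetical sizes: student hidden 312, teacher hidden 768).
proj = torch.nn.Linear(312, 768)
s_h, t_h = torch.randn(2, 8, 312), torch.randn(2, 8, 768)
s_a, t_a = torch.randn(2, 12, 8, 8), torch.randn(2, 12, 8, 8)
loss = tinybert_layer_loss(s_h, t_h, s_a, t_a, proj)
```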

## Load Distilled Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it like this:

```python
from transformers import AlbertTokenizer, BertModel

model = BertModel.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/tiny-bert-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)
```

We used [google/sentencepiece](https://github.com/google/sentencepiece) to train the tokenizer, so it must be loaded with `AlbertTokenizer`.
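As a quick sanity check, here is a minimal sketch (assuming `torch` is installed and `model`/`tokenizer` are the objects created above; the input sentence is just an illustrative example) that encodes a short Malay sentence and runs it through the encoder:

```python
import torch

sentence = 'makan ayam dengan nasi'  # illustrative input
inputs = tokenizer(sentence, return_tensors='pt')  # recent transformers versions

with torch.no_grad():
    outputs = model(**inputs)

# The first element is the last hidden state, shape (batch, seq_len, hidden_size).
last_hidden_state = outputs[0]
print(last_hidden_state.shape)
```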

## Example using AutoModelWithLMHead

```python
from transformers import AlbertTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/tiny-bert-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))
```

The output is:

```text
[{'sequence': '[CLS] makan ayam dengan berbual[SEP]',
  'score': 0.00015769545279908925,
  'token': 17859},
 {'sequence': '[CLS] makan ayam dengan kembar[SEP]',
  'score': 0.0001448775001335889,
  'token': 8289},
 {'sequence': '[CLS] makan ayam dengan memaklumkan[SEP]',
  'score': 0.00013484008377417922,
  'token': 6881},
 {'sequence': '[CLS] makan ayam dengan Senarai[SEP]',
  'score': 0.00013061291247140616,
  'token': 11698},
 {'sequence': '[CLS] makan ayam dengan Tiga[SEP]',
  'score': 0.00012453157978598028,
  'token': 4232}]
```

## Results

For further details on model performance, see the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.

## Acknowledgement

Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train BERT for Bahasa.