# Sentence-BERT
## Paper
`Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks`
- https://arxiv.org/pdf/1908.10084.pdf

## Model Architecture

<div align=center>
    <img src="./doc/model.png" width=300 height=400/>
</div>

## Algorithm
For each sentence pair, sentence A and sentence B are passed through the network to produce the embeddings u and v. The similarity of the two embeddings is computed with cosine similarity, and the result is compared against the gold similarity score. This allows the network to be fine-tuned to recognize sentence similarity.

<div align=center>
    <img src="./doc/infer.png" width=500 height=520/>
</div>
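
For orientation, here is a minimal sketch of this scoring step using the `sentence-transformers` API (the checkpoint name is the repository's default for inference; any SBERT model works the same way):

```python
from sentence_transformers import SentenceTransformer, util

# Encode both sentences into embeddings u and v, then score them with cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
u = model.encode("A man is playing a guitar.")
v = model.encode("Someone is strumming a guitar.")
print(util.cos_sim(u, v).item())  # in [-1, 1]; higher means more similar
```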

## Environment Setup
1. In the commands below, adjust the `-v` mount paths, `docker_name`, and `imageID` to your actual setup.
2. In `transformers/trainer_pt_utils.py`, change line 37 to:
```python
try:
    from torch.optim.lr_scheduler import _LRScheduler as LRScheduler
except ImportError:
    from torch.optim.lr_scheduler import LRScheduler
```
<div align=center>
    <img src="./doc/example.png"/>
</div>
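
If you are unsure where the installed copy of this file lives, one quick way to locate it (assuming `transformers` is already installed in the active environment):

```python
# Print the path of the installed trainer_pt_utils.py so the patch above can be applied.
import transformers.trainer_pt_utils as m

print(m.__file__)
```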

### Docker (Method 1)

```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-centos7.6-dtk24.04-py310

docker run -it -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro --shm-size=32G --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name docker_name imageID bash

cd /your_code_path/sentence-bert_pytorch
pip install -e .
pip install -U huggingface_hub hf_transfer
export HF_ENDPOINT=https://hf-mirror.com
```
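
The `HF_ENDPOINT` export routes Hugging Face downloads through the hf-mirror.com mirror. As an illustration, the default inference checkpoint can be pre-fetched like this (the `local_dir` value is an arbitrary choice):

```python
import os

# Must be set before huggingface_hub is imported; mirrors the export above.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

from huggingface_hub import snapshot_download

snapshot_download("sentence-transformers/all-MiniLM-L6-v2", local_dir="./all-MiniLM-L6-v2")
```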

### Dockerfile (Method 2)

```bash
cd ./docker
cp ../requirements.txt requirements.txt

docker build --no-cache -t sbert:latest .
docker run -it -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro --shm-size=32G --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name docker_name sbert:latest bash

cd /your_code_path/sentence-bert_pytorch
pip install -e .
pip install -U huggingface_hub hf_transfer
export HF_ENDPOINT=https://hf-mirror.com
```

### Anaconda (Method 3)
1. The DCU-specific deep learning libraries required by this project can be downloaded from the 光合开发者社区: https://developer.hpccube.com/tool/

```bash
DTK stack: dtk24.04
python: 3.10
torch: 2.1.0
```

Tip: the DTK stack, Python, and torch versions above must strictly match one another.

2. Install the remaining, non-DCU-specific libraries from requirements.txt:

```bash
cd /your_code_path/sentence-bert_pytorch
pip install -e .
pip install -U huggingface_hub hf_transfer
export HF_ENDPOINT=https://hf-mirror.com
```

## Dataset
The model is fine-tuned on a combination of multiple datasets totaling more than 1 billion sentence pairs. Each dataset is sampled with a weighted probability, which is specified in the data_config.json file.
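
As a hedged sketch of what such weighted sampling looks like (the field names below are assumed; consult data_config.json for the actual schema):

```python
import json
import random

# Assumed schema: a list of {"name": ..., "weight": ...} entries.
with open("data_config.json") as f:
    sources = json.load(f)

names = [s["name"] for s in sources]
weights = [s["weight"] for s in sources]
print(random.choices(names, weights=weights, k=1)[0])  # source of the next training batch
```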

Because the full data is large, only the [Simple Wikipedia Version 1.0](https://cs.pomona.edu/~dkauchak/simplification/) dataset is used for demonstration here; it is provided under datasets/simple_wikipedia_v1. For the full data, see the Model card of [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).

The dataset directory structure is as follows:
```
├── datasets
│   ├──stsbenchmark.tsv.gz
│   ├──simple_wikipedia_v1
│       ├──simple_wiki_pair.txt # generated (see below)
│       ├──wiki.simple
│       └──wiki.unsimplified
```

The inference data must be converted to txt format; see [gen_simple_wikipedia_v1.py](./gen_simple_wikipedia_v1.py), which generates `simple_wiki_pair.txt`.
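
As a rough sketch of what that conversion does (assuming the two wiki files are line-aligned and the pair file is tab-separated; [gen_simple_wikipedia_v1.py](./gen_simple_wikipedia_v1.py) is the authoritative version):

```python
# Zip the two line-aligned files into tab-separated sentence pairs.
with open("datasets/simple_wikipedia_v1/wiki.simple", encoding="utf-8") as fs, \
     open("datasets/simple_wikipedia_v1/wiki.unsimplified", encoding="utf-8") as fu, \
     open("datasets/simple_wikipedia_v1/simple_wiki_pair.txt", "w", encoding="utf-8") as out:
    for simple, unsimplified in zip(fs, fu):
        out.write(f"{simple.strip()}\t{unsimplified.strip()}\n")
```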

## Training
By default, fine-tuning starts from the pretrained [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model; for details on the pretraining procedure, see its model card.

### Single-node multi-GPU
```bash
bash finetune.sh
```

### Single-node single-GPU
```bash
python finetune.py
```
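
For orientation only, here is a minimal sentence-transformers fine-tuning loop in the same spirit (this is a generic sketch, not the repository's finetune.py; the loss and hyperparameters are illustrative):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Start from the default pretrained backbone and fine-tune on sentence pairs.
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")
train_examples = [
    InputExample(texts=["A man is playing a guitar.", "Someone is strumming a guitar."]),
    InputExample(texts=["A dog runs in the park.", "A dog is running outside."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```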

## Inference
1. Download a pretrained model from the [pretrained models](https://www.sbert.net/docs/pretrained_models.html) list; the current default is [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).

2. Run the command below. The test data defaults to `./datasets/simple_wikipedia_v1/simple_wiki_pair.txt`; set `--data_path` to another file to evaluate it, with contents formatted like [simple_wiki_pair.txt](./datasets/simple_wikipedia_v1/simple_wiki_pair.txt).

```bash
python infer.py --data_path ./datasets/simple_wikipedia_v1/simple_wiki_pair.txt --model_name_or_path all-MiniLM-L6-v2
```
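
For reference, a hedged sketch of what such a pair-scoring pass looks like (not the repository's infer.py; the tab-separated pair format is assumed):

```python
from sentence_transformers import SentenceTransformer, util

# Score each sentence pair in the file with cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
with open("./datasets/simple_wikipedia_v1/simple_wiki_pair.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 2:
            continue  # skip malformed lines
        score = util.cos_sim(model.encode(parts[0]), model.encode(parts[1])).item()
        print(f"{score:.4f}\t{parts[0]}\t{parts[1]}")
```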

## Results

<div align=center>
    <img src="./doc/results.png"/>
</div>

### Accuracy
Not available yet.

## Application Scenarios
### Algorithm Category
NLP

### Key Application Industries
Education, cybersecurity, government

## Source Repository and Issue Feedback
- https://developer.hpccube.com/codes/modelzoo/sentence-bert_pytorch

## References
- https://github.com/UKPLab/sentence-transformers