<!-- EasyStart v0.1 Complete User Manual (single-file edition) -->
<p align="center">
  <img src="images/logo.png" alt="EasyStart" width="180"/>
</p>

<h1 align="center">
  EasyStart v0.1: One-Click Launch, Zero-Barrier LLM Testing
</h1>

<p align="center">
  <a href="#scene1">Environment Check</a> ·
  <a href="#scene2">Check + Download + Inference</a> ·
  <a href="#scene3">Batch Local Inference</a>
</p>

---

> **In one sentence**  
> Whether you are doing delivery, evaluation, or batch experiments, a single command takes care of the environment, the model, and inference.

---

## 🚀 Quick Start

| Scenario | One-Click Command |
|------|-----------|
| 1️⃣ Environment check only | `git clone http://developer.sourcefind.cn/codes/jerrrrry/easystart_v0.1.git && cd easystart_v0.1/1_env_check && bash start.sh` |
| 2️⃣ Environment check + model download + LLM inference | `git clone http://developer.sourcefind.cn/codes/jerrrrry/easystart_v0.1.git && cd "easystart_v0.1/2_env_check&model_download&llm_inference" && bash start.sh` |
| 3️⃣ Environment check + batch local model inference | `git clone http://developer.sourcefind.cn/codes/jerrrrry/easystart_v0.1.git && cd "easystart_v0.1/3_env_check&batches_llm_inference" && bash start.sh` |

---

<a name="scene1"></a>
## 📦 1️⃣ Environment Check (`1_env_check`)
- ✅ ROCm bandwidth test
- ✅ 4/8-card RCCL bandwidth
- ✅ DCU environment check (released by 贵哥)
- ✅ ACS monitoring
- ✅ CPU & DCU status
- ✅ Storage & memory
- ✅ Network connectivity

📁 Results output: `./outputs/env_check_outputs`

<p align="center">
  <img src="images/1.png" width="600"/>
</p>

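For reference, here is a minimal end-to-end sketch of Scenario 1. It only repeats the commands already listed in the Quick Start table; the report path is relative to the directory where `start.sh` is run.

```bash
# Clone the repo, run the environment check, then browse the generated reports.
git clone http://developer.sourcefind.cn/codes/jerrrrry/easystart_v0.1.git
cd easystart_v0.1/1_env_check
bash start.sh

# Reports are collected under ./outputs/env_check_outputs
ls -R ./outputs/env_check_outputs
```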
---

<a name="scene2"></a>
## 📦 2️⃣ Environment Check + Model Download + Inference (`2_env_check&model_download&llm_inference`)

### ① List the models to test
Add the models to `download-list.cfg` in the following format:
`<model_id>;<local_save_path>` (the model ID is the model's ModelScope ID)
> Multiple models can be listed at once for batch download and testing.  
<p align="center">
  <img src="images/3.png" width="400"/>
</p>

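As a hypothetical illustration of that format (the model IDs below are only examples of ModelScope IDs and the save paths are placeholders; replace both with your own entries):

```bash
# Write two example entries into download-list.cfg.
# Format per line: <modelscope_model_id>;<local_save_path>
cat > download-list.cfg <<'EOF'
Qwen/Qwen2.5-7B-Instruct;./outputs/models/Qwen2.5-7B-Instruct
Qwen/Qwen2.5-1.5B-Instruct;./outputs/models/Qwen2.5-1.5B-Instruct
EOF
```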
### ② Configure the inference parameters  
Edit `model_to_test.cfg` and fill in the inference parameters in the specified format.  
<p align="center">
  <img src="images/4.png" width="400"/>
</p>

### ③ Run the script
```bash
bash start.sh
```

### ④ Check the results

- Environment report: `./outputs/env_check_outputs`
- Inference results: `./outputs/inference_outputs`
- Downloaded models: `./outputs/models`

<p align="center">
  <img src="images/5.png" width="600"/>
  <img src="images/6.png" width="600"/>
  <img src="images/7.png" width="600"/>
</p>

<a name="scene3"></a>
## 📦 3️⃣ Environment Check + Batch Local Model Inference (`3_env_check&batches_llm_inference`)

### ① Mount your local models

Map your local model directory into the container with the volume mount `-v /your/local/model/path:/workspace/models`.


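Where exactly this flag goes is not shown here; as a rough sketch, assuming `start.sh` wraps a `docker run` invocation (an assumption, as is the image name below), the mount would sit alongside the other flags:

```bash
# Hypothetical illustration only: the real docker run command lives inside start.sh,
# and your_inference_image:latest is a placeholder, not the project's actual image.
docker run -it \
  -v /your/local/model/path:/workspace/models \
  your_inference_image:latest \
  bash
```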
### ② Configure the inference parameters

Edit `model_to_test.cfg` in the same directory and fill in the test parameters in the specified format.

### ③ Run the script
```bash
bash start.sh
```

### ④ Check the results

All inference results are written to `./outputs/inference_outputs`.

<p align="center">
  <img src="images/8.png" width="600"/>
</p>


## 📝 Tips

- All scripts run on Docker; make sure Docker and the ROCm environment are installed.
- Before your first run, start with Scenario 1 to confirm the environment is healthy.
- If you run into any problem, feel free to open an Issue.

<p align="center">
  Made with ❤️ by <strong>SourceFind</strong>
</p>