"vscode:/vscode.git/clone" did not exist on "59433ca1ae76445908e63fe4e2f9aceaf95baf0c"
Commit 2ceabd9d authored by Rayyyyy

Add icon and SCNet

parent 172aa5d8
```bash
pip install mmengine==0.10.3
# Check the bitsandbytes version: if it already matches the one in your environment this step can be skipped, otherwise reinstall it
pip install bitsandbytes-0.37.0+das1.0+gitd3d888f.abi0.dtk2404.torch2.1-py3-none-any.whl
```
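Because these wheels are pinned to the DTK/DCU stack, a quick import check before launching a long job can save time. Below is a minimal sketch of such a check; it is a hypothetical helper, not part of this repo:

```python
# Hypothetical sanity check: confirm the pinned libraries import cleanly
# and an accelerator is visible before starting a long fine-tune.
from importlib.metadata import version

import torch
import mmengine

print("torch       :", torch.__version__)        # expect the dtk2404 build of 2.1
print("mmengine    :", mmengine.__version__)     # expect 0.10.3
print("bitsandbytes:", version("bitsandbytes"))  # expect 0.37.0+das1.0...
# DCU devices are surfaced through the CUDA-compatible API of the DTK stack
print("accelerator visible:", torch.cuda.is_available())
```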
2. Download a pretrained model as described in [Pretrained weights](#pretrained-weights); the current example uses the [Meta-Llama-3-8B-Instruct](http://113.200.138.88:18080/aimodels/Meta-Llama-3-8B-Instruct) model;
3. In [llama3_8b_instruct_qlora_alpaca_e3_M.py](./llama3_8b_instruct_qlora_alpaca_e3_M.py), set `pretrained_model_name_or_path` and `data_path` to your local model and data paths;
4. Adjust `max_length`, `batch_size`, `accumulative_counts`, `max_epochs`, `lr`, `save_steps`, `evaluation_freq`, and the `r` and `lora_alpha` parameters under model.lora to match your hardware and training needs (a sketch of these knobs follows step 6); the default parameters fit a 4×32 GB setup;
5. Set the ${DCU_NUM} parameter to the number of DCU cards to use. For other datasets, adjust the `SYSTEM`, `evaluation_inputs`, `dataset_map_fn`, `train_dataloader.sampler`, and `train_cfg` settings in llama3_8b_instruct_qlora_alpaca_e3_M.py; see the comments in the code for details. The alpaca dataset is the default. **`--work-dir` sets the directory where the model is saved**
6. Run
```bash
bash finetune.sh
# finetune.sh wraps the xtuner launch:
NPROC_PER_NODE=${DCU_NUM} xtuner train ./llama3_8b_instruct_qlora_alpaca_e3_M.py
```
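As referenced in step 4, here is a hedged sketch of the knobs steps 3–5 name, in the flat Python layout xtuner configs use. All paths and values are illustrative placeholders, not the contents of llama3_8b_instruct_qlora_alpaca_e3_M.py:

```python
# Illustrative values only -- consult the real config and its comments (step 5).
pretrained_model_name_or_path = "/data/models/Meta-Llama-3-8B-Instruct"  # step 3 (assumed local path)
data_path = "/data/datasets/alpaca"                                      # step 3 (assumed local path)

max_length = 2048          # longer sequences raise memory use
batch_size = 1             # per-DCU micro-batch
accumulative_counts = 16   # gradient-accumulation steps
max_epochs = 3
lr = 2e-4
save_steps = 500           # checkpoint interval
evaluation_freq = 500      # steps between sample generations

# QLoRA adapter size (model.lora): lower r / lora_alpha -> less memory, less capacity
lora_r = 64
lora_alpha = 16

# Effective global batch per optimizer step:
#   batch_size * accumulative_counts * DCU_NUM, e.g. 1 * 16 * 4 = 64 on 4 cards
```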
## Inference
To download pretrained models, see the [Pretrained weights](#pretrained-weights) section below. Different models require different model parallel (MP) values, i.e. the number of processes the checkpoint is sharded across at inference time, as shown in the following table:
| Model | MP |
|--------|----|
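The table rows and this repo's inference commands are collapsed in the diff above. Purely as a hedged illustration of how an MP value is consumed, Meta's reference implementation launches one process per model-parallel shard; the script name and flags below are Meta's, not necessarily this repo's entry point:

```bash
# Illustration only: Meta reference-style launch, one process per MP shard.
MP=1  # set to the MP value for your model from the table above
torchrun --nproc_per_node ${MP} example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/original/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/original/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```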
Manufacturing, broadcast media, home furnishing, education
## Pretrained weights
Download the pretrained models from [SCNet AIModels](http://113.200.138.88:18080/aimodels):
- [Meta-Llama-3-8B](http://113.200.138.88:18080/aimodels/Meta-Llama-3-8B)
- [Meta-Llama-3-8B-Instruct](http://113.200.138.88:18080/aimodels/Meta-Llama-3-8B-Instruct)
- [Meta-Llama-3-70B](http://113.200.138.88:18080/aimodels/Meta-Llama-3-70B)
- [Meta-Llama-3-70B-Instruct](http://113.200.138.88:18080/aimodels/Meta-Llama-3-70B-Instruct)

Alternatively, download from Hugging Face:
1. Environment setup
```bash
pip install -U huggingface_hub hf_transfer
export HF_ENDPOINT=https://hf-mirror.com
export HF_HUB_ENABLE_HF_TRANSFER=1  # enable the hf_transfer accelerator installed above
```
2. Download the pretrained models; obtain the **token** parameter from your Hugging Face account
- Meta-Llama-3-8B model
```bash
mkdir Meta-Llama-3-8B
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B --token hf_*
```
- Meta-Llama-3-8B-Instruct model
```bash
mkdir Meta-Llama-3-8B-Instruct
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct --token hf_*
```
- Meta-Llama-3-70B model
```bash
mkdir Meta-Llama-3-70B
huggingface-cli download meta-llama/Meta-Llama-3-70B --include "original/*" --local-dir Meta-Llama-3-70B --token hf_*
```
- Meta-Llama-3-70B-Instruct model
```bash
mkdir Meta-Llama-3-70B-Instruct
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct --token hf_*
```
The model directory structure is as follows:
```bash
......
```
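The listing above is collapsed in this diff. For orientation only, a checkpoint downloaded with the original/* filter typically looks like the following, based on Meta's published release layout for the 8B variants:

```bash
Meta-Llama-3-8B-Instruct/
└── original/
    ├── consolidated.00.pth   # model weights (a single shard for the 8B models)
    ├── params.json           # model hyperparameters
    └── tokenizer.model       # tiktoken tokenizer
```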
icon.png · 53.8 KB (new file)