# Qwen2.5-VL-7B-LlamaFactory
## Project Overview
This project performs supervised fine-tuning (SFT) of Qwen2.5-VL-7B-Instruct on the ChartQA dataset, using LLaMA-Factory with the DeepSpeed engine.
---
## Environment Setup
### 1. Pull the Image
```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/vllm:0.8.5-ubuntu22.04-dtk25.04.1-rc5-das1.6-py3.10-20250724
```
### 2. Create the Container
```bash
docker run -dit \
  --name Qwen2_5_SFT \
  --network=host \
  --ipc=host \
  --shm-size=256G \
  --group-add video \
  --cap-add=SYS_PTRACE \
  --device /dev/kfd \
  --device /dev/dri \
  -v /opt/hyhal:/opt/hyhal:ro \
  -v $(pwd):/workspace \
  -w /workspace \
  --privileged \
  -u root \
  --ulimit stack=-1:-1 \
  --ulimit memlock=-1:-1 \
  --security-opt seccomp=unconfined \
  image.sourcefind.cn:5000/dcu/admin/base/vllm:0.8.5-ubuntu22.04-dtk25.04.1-rc5-das1.6-py3.10-20250724 \
  /bin/bash
```
Enter the container:
```bash
docker exec -it Qwen2_5_SFT bash
```
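Once inside the container, you can optionally check that the DCU devices are visible to PyTorch before training. This is only a quick sanity check; it assumes the image's PyTorch build exposes the accelerators through the CUDA-compatible device API (typical for ROCm/DTK builds):

```python
# check_devices.py -- quick sanity check inside the container (assumes the
# image's PyTorch build exposes DCUs through the CUDA-compatible device API)
import torch

print("torch version:", torch.__version__)
print("devices available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  [{i}]", torch.cuda.get_device_name(i))
```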
---
## Test Procedure
### 1. Clone the Code & Install
```bash
git clone http://developer.sourcefind.cn/codes/bw-bestperf/qwen2.5-vl-7b-llamafactory.git
cd qwen2.5-vl-7b-llamafactory
pip install -e ".[torch,metrics]" --no-build-isolation -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### 2. Convert and Register the Dataset
**The ChartQA dataset must be converted into a JSON format that LLaMA-Factory can train on.**
ChartQA dataset: https://www.modelscope.cn/datasets/swift/ChartQA/files
The script `scripts/convert_chartqa.py` converts the dataset into image + JSON format; remember to set `data_dir` in the script to your dataset path.
**Move the generated dataset to the /data directory and register it in /data/data_info.json.**
```bash
# Generate the image + JSON dataset
python scripts/convert_chartqa.py
# Move the results into the data directory
mv scripts/llamafactory_chartqa.json ./data/
mv scripts/chartqa ./data/
```
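For reference, the converted records follow LLaMA-Factory's ShareGPT-style multimodal layout (one `messages` list plus an `images` list per sample, matching the registration entry below). The sketch below is illustrative only; the repository's `scripts/convert_chartqa.py` is authoritative, and the raw-file layout and field names (`imgname`, `query`, `label`) are assumptions based on the public ChartQA release:

```python
# convert_chartqa_sketch.py -- illustrative only; the repo's scripts/convert_chartqa.py
# is the authoritative converter. The raw ChartQA file layout and the field names
# "imgname", "query", "label" are assumptions based on the public ChartQA release.
import json
from pathlib import Path

data_dir = Path("./ChartQA")          # assumed location of the raw dataset
out_json = Path("llamafactory_chartqa.json")

samples = []
for qa_file in sorted(data_dir.glob("train/train_*.json")):
    for item in json.loads(qa_file.read_text()):
        samples.append({
            # <image> marks where the picture is inserted into the prompt
            "messages": [
                {"role": "user", "content": "<image>" + item["query"]},
                {"role": "assistant", "content": str(item["label"])},
            ],
            # image path is resolved relative to the LLaMA-Factory data directory
            "images": [f"chartqa/{item['imgname']}"],
        })

out_json.write_text(json.dumps(samples, ensure_ascii=False, indent=2))
print(f"wrote {len(samples)} samples to {out_json}")
```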
Register the dataset in `/data/data_info.json`:
```json
"chartqa": {
  "file_name": "llamafactory_chartqa.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "messages",
    "images": "images"
  },
  "tags": {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant"
  }
},
```
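To confirm the registration lines up with the converted files, a small check like the following can help. It assumes you run it from the repository root and that the registry file and converted dataset both sit under `data/`, as in the steps above:

```python
# check_dataset_registration.py -- sanity-check the entry added above; assumes you
# run it from the repository root, with the registry file and converted dataset
# both under data/ as in the steps above.
import json
from pathlib import Path

registry = json.loads(Path("data/data_info.json").read_text())
entry = registry["chartqa"]

dataset = json.loads((Path("data") / entry["file_name"]).read_text())
print("samples:", len(dataset))

# verify that the first few image paths actually exist under data/
for sample in dataset[:5]:
    for img in sample["images"]:
        path = Path("data") / img
        print(path, "OK" if path.exists() else "MISSING")
```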
### 3. Download the Model
Model link: https://modelscope.cn/models/Qwen/Qwen2.5-VL-7B-Instruct/summary
Install ModelScope:
```bash
pip install modelscope
```
Download the model:
```bash
cd /workspace
modelscope download --model Qwen/Qwen2.5-VL-7B-Instruct --local_dir ./Qwen2.5-VL-7B-Instruct
```
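Optionally, check that the download is complete before training. The file names below are the usual Hugging Face layout and are assumptions; the exact shard count depends on the release:

```python
# verify_model_download.py -- rough completeness check for the local model copy;
# expected file names follow the usual Hugging Face layout and are assumptions.
from pathlib import Path

model_dir = Path("./Qwen2.5-VL-7B-Instruct")
expected = ["config.json", "tokenizer_config.json", "preprocessor_config.json"]

missing = [name for name in expected if not (model_dir / name).exists()]
shards = list(model_dir.glob("*.safetensors"))

print("weight shards found:", len(shards))
if missing:
    print("missing files:", ", ".join(missing))
else:
    print("basic config/tokenizer files present")
```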
---
## Test Script (8 GPUs)
Script: `examples/train_lora/qwen2_5vl_chartqa.sh`. Note that `--model_name_or_path` points to `/workspace/models/Qwen2.5-VL-7B-Instruct`; adjust it to wherever you downloaded the model (e.g. `/workspace/Qwen2.5-VL-7B-Instruct` from the step above).
```bash
#!/usr/bin/env bash
set -ex

DATESTR=`date +%Y%m%d-%H%M%S`
OUTPUT_DIR=output/${RUN_NAME}-${DATESTR}-${LR}
CACHE_DIR=cache
MASTER_PORT=$(shuf -n 1 -i 10000-65535)

export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export HSA_FORCE_FINE_GRAIN_PCIE=1
export NCCL_LAUNCH_MODE=GROUP
# export NCCL_DEBUG=INFO
export NCCL_P2P_DISABLE=0
# export MASTER_ADDR="127.0.0.1"
# export MASTER_PORT=59992
export LLAMA_NN=1
export TORCH_NCCL_TIMEOUT=3600000
export TORCH_DISTRIBUTED_DEFAULT_TIMEOUT=1800
export NCCL_MAX_NCHANNELS=16
export NCCL_MIN_NCHANNELS=20
export NCCL_P2P_LEVEL=SYS
export ROCBLAS_COMPUTETYPE_FP16R=0
export TOKENIZERS_PARALLELISM=false
# Temporarily silence Python warnings via an environment variable
export PYTHONWARNINGS="ignore"

mkdir -p $OUTPUT_DIR

deepspeed --num_gpus 8 --num_nodes 1 --master_port=$MASTER_PORT src/train.py \
  --stage sft \
  --do_train \
  --lora_rank 8 \
  --lora_alpha 8 \
  --lora_target all \
  --resize_vocab True \
  --optim adamw_torch \
  --model_name_or_path /workspace/models/Qwen2.5-VL-7B-Instruct \
  --dataset chartqa \
  --template qwen2_vl \
  --finetuning_type lora \
  --output_dir $OUTPUT_DIR \
  --overwrite_cache \
  --overwrite_output_dir True \
  --warmup_steps 100 \
  --max_grad_norm 1.0 \
  --weight_decay 0.1 \
  --ddp_timeout 120000000 \
  --per_device_train_batch_size 15 \
  --gradient_accumulation_steps 15 \
  --lr_scheduler_type cosine \
  --logging_steps 10 \
  --learning_rate 1e-4 \
  --num_train_epochs 2 \
  --max_samples 1000 \
  --plot_loss \
  --bf16 \
  --logging_dir qwen2.5_vl_prof/logs \
  --deepspeed examples/deepspeed/ds_z2_config.json \
  --dataloader_num_workers 32 2>&1 | tee -a ${OUTPUT_DIR}/train.log
```
Launch command:
```bash
cd qwen2.5-vl-7b-llamafactory
bash examples/train_lora/qwen2_5vl_chartqa.sh
```
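Training progress is appended to `train.log` inside the run's output directory (see the `tee -a` at the end of the script). A rough helper for pulling the logged loss values out of that file is sketched below; it assumes the dict-style progress lines that the Hugging Face `Trainer` prints every `--logging_steps` steps, so adjust the pattern if your log looks different:

```python
# extract_loss.py -- pull {'loss': ...} progress lines out of a train.log produced
# by the script above. The dict-style line format is the usual Hugging Face Trainer
# output; adjust the regex if your log differs.
import ast
import re
import sys
from pathlib import Path

log_path = Path(sys.argv[1] if len(sys.argv) > 1 else "train.log")
pattern = re.compile(r"\{'loss':.*?\}")

for line in log_path.read_text(errors="ignore").splitlines():
    match = pattern.search(line)
    if match:
        record = ast.literal_eval(match.group(0))  # e.g. {'loss': 0.91, 'epoch': 0.3, ...}
        print(f"epoch {record.get('epoch')}\tloss {record.get('loss')}")
```

Usage: `python extract_loss.py output/<run-name>/train.log`.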
---
## Contribution Guide
Contributions to the qwen2.5-vl-7b-llamafactory project are welcome! Please follow these steps:
1. Fork the repository and create a new branch for your feature or fix.
2. Write clear, well-structured commit messages.
3. Open a Pull Request briefly describing what you changed and why.
4. Follow the project's coding conventions and testing standards.
5. Take part in code review and discuss improvements constructively.
---
## License
This project is licensed under the Apache 2.0 License; see the [LICENSE](./LICENSE) file for details.
---
Thank you for your interest and support! If you have any questions, please open an Issue or contact the maintainers.