ModelZoo / MobileVLM_pytorch

Commit d59d40db, authored Aug 26, 2024 by dcuai
Switch to the dtk24.04.1 image
parent 17bb554d

Showing 1 changed file with 12 additions and 22 deletions.

README.md (+12 −22)
@@ -22,15 +22,12 @@ mv mobilevlm_pytorch MobileVLM # drop the framework-name suffix
### Docker (Method 1)
```
-docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-centos7.6-dtk23.10-py38
+docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
-# Replace <your IMAGE ID> with the ID of the docker image pulled above; this image is: ffa1f63239fc
+# Replace <your IMAGE ID> with the ID of the docker image pulled above
docker run -it --shm-size=32G -v $PWD/MobileVLM:/home/MobileVLM -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name mobilevlm <your IMAGE ID> bash
cd /home/MobileVLM
pip install -r requirements.txt
-# deepspeed, flash_attn2, and bitsandbytes can be installed from the whl.zip archive:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-pip install bitsandbytes-0.43.0-py3-none-any.whl
```
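A quick way to confirm the DCU is visible inside the container before continuing (a minimal sketch, assuming the dtk image ships a ROCm/HIP-backed torch build that exposes the device through the usual torch.cuda API):
```
# Inside the running container: the reported device count should be non-zero
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"
```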
### Dockerfile (Method 2)
```
@@ -38,31 +35,20 @@ cd MobileVLM/docker
docker build --no-cache -t mobilevlm:latest .
docker run --shm-size=32G --name mobilevlm -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video -v $PWD/../../MobileVLM:/home/MobileVLM -it mobilevlm bash
# If building the environment through the Dockerfile takes a long time, comment out the pip install inside it and install the Python libraries after the container starts: pip install -r requirements.txt.
-# deepspeed, flash_attn2, and bitsandbytes can be installed from the whl.zip archive:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-pip install bitsandbytes-0.43.0-py3-none-any.whl
```
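The deferred install mentioned in the comment above can also be triggered from the host once the container is up; a minimal sketch, assuming the container keeps the name mobilevlm used in the docker run line:
```
# Install the Python dependencies after the container has started,
# instead of during the Dockerfile build
docker exec -it mobilevlm bash -c "cd /home/MobileVLM && pip install -r requirements.txt"
```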
### Anaconda (Method 3)
1. The special deep-learning libraries this project needs for DCU cards can be downloaded and installed from the Hygon developer community:
- https://developer.hpccube.com/tool/
```
-DTK driver: dtk23.10
+DTK driver: dtk24.04.1
-python: python3.8
+python: python3.10
torch: 2.1.0
torchvision: 0.16.0
triton: 2.1.0
-apex: 0.1
+apex: 1.1.0
deepspeed: 0.12.3
-flash_attn: 2.0.4
+flash-attn: 2.0.4
-bitsandbytes: 0.43.0
+bitsandbytes: 0.42.0
```
-# flash_attn2 and bitsandbytes can be installed from the whl.zip archive:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-pip install bitsandbytes-0.43.0-py3-none-any.whl
```
`Tips: the dtk driver, python, torch, and other DCU-related tool versions above must correspond strictly, one to one.`
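To confirm that an environment matches the version list above (a minimal sketch; it only assumes the packages are installed under their usual pip distribution names):
```
# Print the versions of the DCU-related packages and compare with the list above
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
pip list | grep -iE "deepspeed|flash|bitsandbytes|apex|triton"
```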
@@ -75,6 +61,10 @@ pip install -r requirements.txt # requirements.txt
## Inference
Pretrained weights `mtgv/MobileVLM_V2-1.7B`, download: https://huggingface.co/mtgv/MobileVLM_V2-1.7B
-- Model-weight quick download center [SCNet AIModels]:(http://113.200.138.88:18080/aimodels)
+- [Model-weight quick download links]: [mtgv/MobileVLM_V2-1.7B]:(http://113.200.138.88:18080/aimodels/MobileVLM_V2-1.7B)
+[openai/clip-vit-large-patch14-336]:(http://113.200.138.88:18080/aimodels/clip-vit-large-patch14-336)
```
export HIP_VISIBLE_DEVICES=0
python infer.py # single node, single card
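# Note (an aside, not from the diff): like CUDA_VISIBLE_DEVICES, HIP_VISIBLE_DEVICES
# accepts a comma-separated list of card indices, e.g. export HIP_VISIBLE_DEVICES=0,1
# to expose two cards to the process.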
...