Commit 60577dcc authored by dcuai

Update dtk24.04.1

parent 7cc0d202
@@ -19,19 +19,11 @@ mv minicpm_classify_pytorch MiniCPM_classify # drop the framework-name suffix
### Docker (Method 1)
```
-docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-centos7.6-dtk23.10-py38
+docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
-# Replace <your IMAGE ID> with the ID of the Docker image pulled above; for this image it is ffa1f63239fc
+# Replace <your IMAGE ID> with the ID of the Docker image pulled above
docker run -it --shm-size=32G -v $PWD/MiniCPM_classify:/home/MiniCPM_classify -v /opt/hyhal:/opt/hyhal --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name minicpm_classify <your IMAGE ID> bash
cd /home/MiniCPM_classify
pip install -r finetune/requirements.txt  # finetune/requirements.txt
-# deepspeed, flash_attn2 and xformers can be installed from the wheels in whl.zip:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-# xformers
-tar -xvf xformers-0.0.23.tar
-cd xformers-0.0.23
-pip install xformers==0.0.23 --no-deps
-bash patch_xformers.rocm.sh
```
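Once the Method 1 container is running, a quick sanity check is worth doing before fine-tuning. The sketch below is not part of the original steps; it assumes the image ships the DCU build of PyTorch that exposes the ROCm/HIP-compatible APIs (as the dtk24.04.1 tag suggests), and the device-node paths are the ones forwarded via the --device flags above.
```
# Run inside the container started above (sanity-check sketch; assumptions noted in the text)
ls /dev/kfd /dev/dri        # device nodes forwarded by the --device flags
python -c "import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())"
python -c "import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no DCU visible')"
```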
### Dockerfile (Method 2)
```
@@ -39,21 +31,13 @@ cd MiniCPM_classify/docker
docker build --no-cache -t minicpm_classify:latest .
docker run --shm-size=32G --name minicpm_classify -v /opt/hyhal:/opt/hyhal --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video -v $PWD/../../MiniCPM_classify:/home/MiniCPM_classify -it minicpm_classify bash
# If installing the environment through the Dockerfile takes a long time, comment out the pip install inside it and install the Python libraries after the container starts: pip install -r requirements.txt.
-# deepspeed, flash_attn2 and xformers can be installed from the wheels in whl.zip:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-# xformers
-tar -xvf xformers-0.0.23.tar
-cd xformers-0.0.23
-pip install xformers==0.0.23 --no-deps
-bash patch_xformers.rocm.sh
```
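If the pip step in the Dockerfile is commented out as suggested above, the dependencies can be installed once the container is up. The snippet below is only a sketch: it reuses the finetune/requirements.txt path from Method 1 and assumes deepspeed, flash_attn and xformers are already provided by the dtk24.04.1 base image or by requirements.txt (this commit removes the manual wheel installs), with each package exposing a __version__ attribute.
```
# Inside the container built in Method 2 (sketch; paths and preinstalled packages are assumptions, see text)
cd /home/MiniCPM_classify
pip install -r finetune/requirements.txt
# confirm the DCU-specific packages import and report their versions
python -c "import deepspeed, flash_attn, xformers; print(deepspeed.__version__, flash_attn.__version__, xformers.__version__)"
```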
### Anaconda (Method 3)
1. The DCU-specific deep-learning libraries required by this project can be downloaded and installed from the developer community (光合开发者社区):
- https://developer.hpccube.com/tool/
```
-DTK driver: dtk23.10
+DTK driver: dtk24.04.1
-python: python3.8
+python: python3.10
torch:2.1.0
torchvision:0.16.0
triton:2.1.0
@@ -62,18 +46,6 @@ deepspeed:0.12.3
flash_attn:2.0.4
xformers:0.0.23
```
-```
-# deepspeed, flash_attn2 and xformers can be installed from the wheels in whl.zip:
-pip install deepspeed-0.12.3+git299681e.abi0.dtk2310.torch2.1.0a0-cp38-cp38-linux_x86_64.whl
-pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl
-# xformers
-tar -xvf xformers-0.0.23.tar
-cd xformers-0.0.23
-pip install xformers==0.0.23 --no-deps
-bash patch_xformers.rocm.sh
-```
`Tips: the DTK driver, python, torch and the other DCU-related tools listed above must use exactly these matching versions.`
2. Install the remaining, non-special libraries according to requirements.txt.
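Because the tip above requires the versions to match exactly, it helps to print what the active conda environment actually contains and compare it with the list. This is only a sketch; it assumes the packages are visible to pip and that the DCU PyTorch build reports a HIP version string.
```
# Compare the installed versions against the list above (sketch)
python --version                  # expected: 3.10 for dtk24.04.1
pip list | grep -Ei "^(torch|torchvision|triton|deepspeed|flash|xformers)"
python -c "import torch; print(torch.__version__, torch.version.hip)"
```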