ModelZoo / Qwen1.5_vllm · Commits

Commit b4fa15d2 ("update readme"), authored Jun 20, 2024 by zhuwenwen; parent c6a37f4c.
Showing 2 changed files with 15 additions and 5 deletions: Dockerfile (+3, -2), README.md (+12, -3).
Dockerfile

```diff
-FROM image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.3.3-dtk24.04-centos7.6-py310-v1
-ENV LANG C.UTF-8
+FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
+ENV LANG C.UTF-8
+RUN pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```
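The new base image adds three exact pip pins. As a small stdlib-only sketch (the helper name is ours, not from this repo), the `name==version` pins can be pulled out of such a RUN line, e.g. to keep a requirements file in sync with the Dockerfile:

```python
import re

# The pip install line from the updated Dockerfile above
RUN_LINE = (
    "pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3 "
    "-i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com"
)

def extract_pins(run_line: str) -> dict:
    """Return {package: version} for every exact `==` pin in a pip install line."""
    # A version must start with a digit, so index URLs and host flags are skipped.
    return dict(re.findall(r"([A-Za-z0-9_.-]+)==([0-9][^\s]*)", run_line))

pins = extract_pins(RUN_LINE)
```

Anything not matching `name==digits...` (the mirror URL, the `--trusted-host` flag) is ignored by the pattern.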
README.md

```diff
@@ -2,7 +2,7 @@
 * @Author: zhuww
 * @email: zhuww@sugon.com
 * @Date: 2024-05-24 14:15:07
-* @LastEditTime: 2024-05-24 15:24:01
+* @LastEditTime: 2024-06-20 08:40:01
 -->
 # Qwen1.5
```
````diff
@@ -27,12 +27,15 @@ Qwen1.5 is Alibaba Cloud's open-source large language model series and the beta version of Qwen2.0.
 Pull the inference Docker image from [光源](https://www.sourcefind.cn/#/image/dcu/custom):
 ```
-docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.3.3-dtk24.04-centos7.6-py310-v1
+docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
 # Replace <Image ID> with the ID of the Docker image pulled above
 # <Host Path>: path on the host
 # <Container Path>: mount path inside the container
 docker run -it --name qwen1.5_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal -v <Host Path>:<Container Path> <Image ID> /bin/bash
+pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
 ```
+`Tips: on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
 ### Dockerfile (Method 2)
 ```
@@ -41,6 +44,7 @@ docker run -it --name qwen1.5_vllm --privileged --shm-size=64G --device=/dev/kf
 docker build -t qwen1.5:latest .
 docker run -it --name qwen1.5_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal -v <Host Path>:<Container Path> qwen1.5:latest /bin/bash
 ```
+`Tips: on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
 ### Anaconda (Method 3)
 ```
````
```diff
@@ -56,7 +60,7 @@ pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3
 * flash_attn: 2.0.4
 * python: python3.10
-`Tips: the dtk driver, python, torch and other DCU-related tool versions above must correspond exactly; currently usable only on K100_AI`
+`Tips: on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
```
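The environment pins openai==1.23.3, whose client targets vLLM's OpenAI-compatible server. As a minimal stdlib-only sketch, this is the shape of the request such a server expects at `/v1/chat/completions` (the port, model name, and helper function are assumptions, not from this repo):

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible /v1/chat/completions request (sketch)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint of a vLLM server started with
# `python -m vllm.entrypoints.openai.api_server`
req = build_chat_request("http://localhost:8000", "Qwen1.5-7B-Chat", "你好")
```

Sending the request (e.g. with `urllib.request.urlopen`) requires a running server inside one of the containers above.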
## Dataset

None
@@ -79,11 +83,16 @@ cd dist && pip install vllm*

| Base model | Chat model | GPTQ model |
| --- | --- | --- |
| [Qwen-7B](https://huggingface.co/Qwen/Qwen-7B) | [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | |
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) | [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | |
| [Qwen-72B](https://huggingface.co/Qwen/Qwen-72B) | [Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) | |
| [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | [Qwen1.5-7B-Chat-GPTQ-Int4](https://huggingface.co/Qwen/Qwen1.5-7B-Chat-GPTQ-Int4) |
| [Qwen1.5-14B](https://huggingface.co/Qwen/Qwen1.5-14B) | [Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) | [Qwen1.5-14B-Chat-GPTQ-Int4](https://huggingface.co/Qwen/Qwen1.5-14B-Chat-GPTQ-Int4) |
| [Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B) | [Qwen1.5-32B-Chat](https://huggingface.co/Qwen/Qwen1.5-32B-Chat) | [Qwen1.5-32B-Chat-GPTQ-Int4](https://huggingface.co/Qwen/Qwen1.5-32B-Chat-GPTQ-Int4) |
| [Qwen1.5-72B](https://huggingface.co/Qwen/Qwen1.5-72B) | [Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) | [Qwen1.5-72B-Chat-GPTQ-Int4](https://huggingface.co/Qwen/Qwen1.5-72B-Chat-GPTQ-Int4) |
| [Qwen1.5-110B](https://huggingface.co/Qwen/Qwen1.5-110B) | [Qwen1.5-110B-Chat](https://huggingface.co/Qwen/Qwen1.5-110B-Chat) | [Qwen1.5-110B-Chat-GPTQ-Int4](https://huggingface.co/Qwen/Qwen1.5-110B-Chat-GPTQ-Int4) |
| [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | [Qwen2-7B-Instruct-GPTQ-Int4](https://huggingface.co/Qwen/Qwen2-7B-Instruct-GPTQ-Int4) |
| [Qwen2-72B](https://huggingface.co/Qwen/Qwen2-72B) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | [Qwen2-72B-Instruct-GPTQ-Int4](https://huggingface.co/Qwen/Qwen2-72B-Instruct-GPTQ-Int4) |

### Offline Batch Inference
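The offline batch-inference content is collapsed in this diff view. A minimal sketch of vLLM's offline API in the 0.3.x style (the model path and sampling values are our assumptions, not from this repo):

```python
def build_sampling_kwargs(temperature: float = 0.7, top_p: float = 0.8,
                          max_tokens: int = 256) -> dict:
    """Keyword arguments accepted by vllm.SamplingParams (vLLM 0.3.x)."""
    return {"temperature": temperature, "top_p": top_p, "max_tokens": max_tokens}

def run_offline_batch(model_path: str, prompts: list) -> list:
    """Generate completions for a batch of prompts; requires a DCU/GPU machine."""
    # Deferred import: vllm is only available inside the images described above.
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_path, trust_remote_code=True)
    outputs = llm.generate(prompts, SamplingParams(**build_sampling_kwargs()))
    return [out.outputs[0].text for out in outputs]

# Inside one of the containers above:
# run_offline_batch("Qwen/Qwen1.5-7B-Chat", ["介绍一下Qwen1.5"])
```

The deferred import keeps the module importable on machines without vllm installed; the actual call only works inside the prepared environment.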