Commit c3145190 authored by zhuwenwen

update fa

parent 14c71428
-FROM image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.3.3-dtk24.04-centos7.6-py310-v1
+FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
 ENV LANG C.UTF-8
+RUN pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
\ No newline at end of file
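As a quick local sanity check of the updated Dockerfile, something like the sketch below could be used; the `llama:latest` tag mirrors the README instructions further down, and the `pip list` filter is only illustrative, not part of the repo.
```
# Build the image from the updated Dockerfile (tag name follows the README below)
docker build -t llama:latest .
# Confirm the pinned packages were installed, using a throwaway container
docker run --rm llama:latest pip list | grep -Ei "aiohttp|outlines|openai"
```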
@@ -2,7 +2,7 @@
 * @Author: zhuww
 * @email: zhuww@sugon.com
 * @Date: 2024-04-25 10:38:07
-* @LastEditTime: 2024-06-12 15:47:01
+* @LastEditTime: 2024-06-20 09:00:01
 -->
 # LLAMA
@@ -28,12 +28,15 @@ LLama is a collection of foundation language models with parameters ranging from 7B to 65B. On trillions of
 A docker image for inference can be pulled from [光源](https://www.sourcefind.cn/#/image/dcu/custom):
 ```
-docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.3.3-dtk24.04-centos7.6-py310-v1
+docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
 # Replace <Image ID> with the ID of the docker image pulled above
 # <Host Path>: path on the host
 # <Container Path>: mapped path inside the container
 docker run -it --name llama_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal -v <Host Path>:<Container Path> <Image ID> /bin/bash
+pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
 ```
+`Tips: when running on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
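For the K100/Z100L note above, the swap would roughly look like the sketch below; the wheel filename is a placeholder for whatever file the forum thread provides, not a real file in this repo.
```
# Remove the flash_attn bundled with the image
pip uninstall -y flash-attn
# Install the wheel downloaded from https://forum.hpccube.com/thread/515
# (<flash_attn wheel>.whl is a placeholder for the actual file name)
pip install ./<flash_attn wheel>.whl
# Quick import check
python -c "import flash_attn; print(flash_attn.__version__)"
```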
 ### Dockerfile (Method 2)
 ```
@@ -42,6 +45,7 @@ docker run -it --name llama_vllm --privileged --shm-size=64G --device=/dev/kfd
 docker build -t llama:latest .
 docker run -it --name llama_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal -v <Host Path>:<Container Path> llama:latest /bin/bash
 ```
+`Tips: when running on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
 ### Anaconda (Method 3)
 ```
@@ -57,7 +61,7 @@ pip install aiohttp==3.9.1 outlines==0.0.37 openai==1.23.3
 * flash_attn: 2.0.4
 * python: python3.10
-`Tips: the dtk driver, python, torch and the other DCU-related tool versions above must match strictly one-to-one. Currently usable only on K100_AI`
+`Tips: when running on K100/Z100L, flash_attn must be replaced; download link: https://forum.hpccube.com/thread/515`
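With the versions above in place, a minimal end-to-end check (not part of this repo) is to start vLLM's OpenAI-compatible server and send a single request; `<Model Path>`, the port, and the prompt are placeholders, and the exact flags should be confirmed against the installed vLLM version.
```
# Start the OpenAI-compatible server (model path and port are placeholders)
python -m vllm.entrypoints.openai.api_server --model <Model Path> --port 8000 &
# Send one completion request; "model" must match the value passed to --model
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<Model Path>", "prompt": "Hello", "max_tokens": 32}'
```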
 ## Dataset
...