Commit 8a64423d authored by zhuwenwen

update README.md

parent 0a52ac24
@@ -17,7 +17,10 @@ LLama is a collection of foundation language models ranging from 7B to 65B parameters. Trained on trillions of
Pull the inference Docker image from [光源](https://www.sourcefind.cn/#/service-details):
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:fastertransformer-dtk23.04-latest
# <Image ID>: replace with the ID of the image pulled above
# <Host Path>: path on the host
# <Container Path>: path inside the container
docker run -it --name llama --shm-size=32G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v <Host Path>:<Container Path> <Image ID> /bin/bash
```
Image version dependencies:
@@ -70,7 +73,7 @@ python ../examples/cpp/llama/huggingface_llama_convert.py \
data_type = 0 (FP32) or 1 (FP16)
```bash
./bin/gpt_gemm 1 1 20 32 128 11008 32000 1 1
```
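The corrected head_num / size_per_head / inter_size values match llama-7B's geometry. As a sanity check, the numbers can be derived as follows (a sketch: the 8/3-then-round-to-a-multiple-of-256 rule is LLaMA's SwiGLU FFN sizing convention, not something `gpt_gemm` computes for you):

```shell
# llama-7B: hidden_size = head_num * size_per_head = 32 * 128
head_num=32; size_per_head=128
hidden=$((head_num * size_per_head))
echo "hidden_size=$hidden"     # 4096
# LLaMA's SwiGLU FFN: inter_size = 8/3 * hidden_size, rounded up to a multiple of 256
inter=$(( ( (8 * hidden / 3 + 255) / 256 ) * 256 ))
echo "inter_size=$inter"       # 11008
```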
The parameters above correspond to
@@ -81,7 +84,7 @@ data_type = 0 (FP32) or 1 (FP16)
2. Configure `../examples/cpp/llama/llama_config.ini`
When data_type = 1, set data_type = fp16; when data_type = 0, set data_type = fp32. tensor_para_size must match the tp number used during model conversion; model_name=llama_7B; model_dir points to the corresponding model weights; request_batch_size is the inference batch size; request_output_len is the output length. The input start ids can be changed by editing `../examples/cpp/llama/start_ids.csv`.
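The settings named above can be pictured as a config fragment like the following (a hypothetical sketch: the section name and the model_dir path are assumptions, check the shipped `llama_config.ini` for the actual layout):

```
; hypothetical fragment, keys taken from the description above
[ft_instance_hyperparameter]
data_type = fp16              ; use fp32 when data_type was 0 at conversion
tensor_para_size = 1          ; must equal the tp used for model conversion
model_name = llama_7B
model_dir = /workspace/models/llama-7b-ft   ; hypothetical weight path
request_batch_size = 1
request_output_len = 32
```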
3. Run
...