Commit e187d8b6 authored by wanglch

Update README.md

parent e136b88e
@@ -95,7 +95,7 @@ $ tree ./data/
 ```
 ### Model Weight Download
-1. Option 1: download the model in Hugging Face format. Taking the 7B model as an example, first download the pretrained [LLaMA weights](https://huggingface.co/decapoda-research/llama-7b-hf), then convert them to TencentPretrain format:
+1. Option 1: download the model in Hugging Face format. Taking the 7B model as an example, first download the pretrained [llama-7b-hf](http://113.200.138.88:18080/aimodels/llama-7b-hf), then convert them to TencentPretrain format:
 ```commandline
 python3 scripts/convert_llama_from_huggingface_to_tencentpretrain.py --input_model_path $LLaMA_HF_PATH \
   --output_model_path models/llama-7b.bin --type 7B
@@ -235,6 +235,14 @@ For inference with TencentPretrain-format models, see [llama_inference_pytorch](https://deve
 `Healthcare, Education, Research, Finance`
+## Pretrained Weights
+Quick download center for pretrained weights: [SCNet AIModels](http://113.200.138.88:18080/aimodels)
+The pretrained weights used in this project can be downloaded from the quick-download channel:
+[llama-7b-hf](http://113.200.138.88:18080/aimodels/llama-7b-hf)
 ## Source Repository and Issue Feedback
 - https://developer.hpccube.com/codes/modelzoo/llama_tencentpretrain_pytorch
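The download-then-convert step described in the README diff can be sketched as a small POSIX shell wrapper. This is a minimal sketch, not part of the commit: the `LLaMA_HF_PATH` default of `./llama-7b-hf` is an assumption, while the converter script path and its flags are taken verbatim from the diff above.

```shell
#!/bin/sh
# Sketch of the README's conversion step.
# Assumption: LLaMA_HF_PATH points at a directory of Hugging Face-format
# weights; it defaults here to ./llama-7b-hf for illustration only.
LLaMA_HF_PATH="${LLaMA_HF_PATH:-./llama-7b-hf}"

if [ -d "$LLaMA_HF_PATH" ]; then
    # Convert the downloaded Hugging Face weights to TencentPretrain format
    # (command as given in the README).
    mkdir -p models
    python3 scripts/convert_llama_from_huggingface_to_tencentpretrain.py \
        --input_model_path "$LLaMA_HF_PATH" \
        --output_model_path models/llama-7b.bin --type 7B
    STATUS="converted"
else
    # Weights not present yet: point the user at the download step.
    STATUS="missing weights: download $LLaMA_HF_PATH first"
    echo "$STATUS"
fi
```

Running the wrapper before the weights are in place prints the reminder instead of invoking the converter, so it is safe to re-run at any point in the setup.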