Commit f7684949 authored by Rayyyyy

add download info for the 70B models to README

parent 1c339d9a
### Dockerfile (Method 2)
```bash
cd docker
docker build --no-cache -t llama3:latest .
docker run -it -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro --shm-size=32G --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name docker_name imageID bash
All models support sequence lengths of up to 8192 tokens, but the cache is pre-allocated based on the max_seq_len and max_batch_size values; set them according to your hardware.
**Tips:**
- `--nproc_per_node` must be set according to the model's MP value (see the table above).
- Set the `max_seq_len` and `max_batch_size` parameters as needed.
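To see how these two values drive the pre-allocated KV cache, the back-of-envelope arithmetic can be sketched as below. All model-shape numbers (layer count, KV heads, head dimension, fp16) are illustrative assumptions for an 8B-class model, not figures from this README:

```bash
# Rough KV-cache size estimate. The model-shape numbers are assumptions
# (8B-class model, fp16), used only to show how max_seq_len and
# max_batch_size scale the pre-allocated cache.
max_seq_len=8192
max_batch_size=4
layers=32        # assumed transformer layer count
kv_heads=8       # assumed KV heads (grouped-query attention)
head_dim=128     # assumed head dimension
bytes_per_elem=2 # fp16
# the leading factor 2 covers both the K and the V cache
cache_bytes=$((2 * layers * kv_heads * head_dim * bytes_per_elem * max_seq_len * max_batch_size))
echo "$((cache_bytes / 1024 / 1024)) MiB"
```

Halving either `max_seq_len` or `max_batch_size` halves this figure, which is why the Tips above suggest tuning them to your hardware.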
### Pretrained Models
```
2. Download the pretrained models; the **token** argument is obtained from your huggingface account.
- Meta-Llama-3-8B model
```bash
mkdir Meta-Llama-3-8B
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B --token hf_*
```
- Meta-Llama-3-8B-Instruct model
```bash
mkdir Meta-Llama-3-8B-Instruct
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct --token hf_*
```
- Meta-Llama-3-70B model
```bash
mkdir Meta-Llama-3-70B
huggingface-cli download meta-llama/Meta-Llama-3-70B --include "original/*" --local-dir Meta-Llama-3-70B --token hf_*
```
- Meta-Llama-3-70B-Instruct model
```bash
mkdir Meta-Llama-3-70B-Instruct
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct --token hf_*
```
The model directory structure is as follows:
```bash
├── llama3_pytorch
│ ├── Meta-Llama-3-8B
│ ├── original
│ ├── consolidated.00.pth
│ ├── params.json
│ └── tokenizer.model
│ ├── Meta-Llama-3-8B-Instruct
│ ├── original
│ ├── consolidated.00.pth
│ ├── params.json
│ └── tokenizer.model
│ ├── Meta-Llama-3-70B
│ ├── original
│ ├── consolidated.00.pth
│ ├── consolidated.01.pth
│ ├── consolidated.02.pth
│ ├── consolidated.03.pth
│ ├── consolidated.04.pth
│ ├── consolidated.05.pth
│ ├── consolidated.06.pth
│ ├── consolidated.07.pth
│ ├── params.json
│ └── tokenizer.model
│ ├── Meta-Llama-3-70B-Instruct
│ ├── original
│ ├── consolidated.00.pth
│ ├── consolidated.01.pth
│ ├── consolidated.02.pth
│ ├── consolidated.03.pth
│ ├── consolidated.04.pth
│ ├── consolidated.05.pth
│ ├── consolidated.06.pth
│ ├── consolidated.07.pth
│ ├── params.json
│ └── tokenizer.model
```
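After downloading, a quick shell loop can confirm that all eight consolidated shards of a 70B checkpoint are in place. This is only a sketch: the path below follows the directory tree above and assumes you run it from the `llama3_pytorch` directory.

```bash
# Sanity check: per the tree above, the 70B checkpoints ship as eight
# shards, consolidated.00.pth .. consolidated.07.pth. Prints any shard
# that is missing from the download directory.
dir=Meta-Llama-3-70B/original
for i in 0 1 2 3 4 5 6 7; do
  f=$(printf '%s/consolidated.%02d.pth' "$dir" "$i")
  [ -f "$f" ] || echo "missing: $f"
done
```

The same loop works for Meta-Llama-3-70B-Instruct by changing `dir`; the 8B checkpoints have a single `consolidated.00.pth` and need no loop.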