Commit 95365b8d authored by wangkaixiong

init

parent 8ab929e8
# docker_test
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin http://10.0.51.48/wangkaixiong/docker_test.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](http://10.0.51.48/wangkaixiong/docker_test/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
# https://blog.csdn.net/wbsu2004/article/details/136443260
version: '3'
services:
  vllm-test:
    image: dcu_ai:vllm-0.5.0-ubuntu20.04-dtk24.04.1-py3.10_v1.0
    container_name: vllm_wkx
    restart: unless-stopped
    # ports:
    #   - 28120:22
    volumes:
      - /opt/hyhal:/opt/hyhal
      - /datav/ai-oem/Models:/datasets
      - ./test_script:/test
    devices:
      - /dev/kfd
      - /dev/mkfd
      - /dev/dri
    environment:
      - LD_LIBRARY_PATH=/workspace/package/rocblas-install/lib:$LD_LIBRARY_PATH
      - LLAMA_NN=1
      - HSA_FORCE_FINE_GRAIN_PCIE=1
      - NCCL_LAUNCH_MODE=GROUP
      - NCCL_P2P_LEVEL=SYS
      - NCCL_MAX_NCHANNELS=20
      - NCCL_MIN_NCHANNELS=20
      # - HIP_VISIBLE_DEVICES=4,5,6,7
      # - LD_LIBRARY_PATH=/workspace/package/rocblas-install/lib:$LD_LIBRARY_PATH
    shm_size: 32g
    command: bash -c "source /opt/dtk/env.sh && cd /test && python test.py && tail -f /dev/null"
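To bring the service up by hand (outside the wrapper script below), something like the following should work; this is a minimal sketch assuming the Docker Compose v2 plugin is available on the host, with the service and marker-file names taken from this commit:
```
# Start the vllm-test service defined in docker-compose.yml
docker compose up -d
# Follow the benchmark output
docker compose logs -f vllm-test
# Tear the service down once test_script/finished has been created
docker compose down
```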
#!/bin/bash
apt install -y docker-compose
# Marker file written by the container when the benchmarks finish
file_path="./test_script/finished"
rm -rf "$file_path"
docker compose up &
# Poll until the marker file exists
while true; do
    if [ -e "$file_path" ]; then
        echo "File $file_path exists; stopping the wait loop."
        break
    else
        echo "File $file_path not found yet; checking again..."
    fi
    sleep 1 # wait 1 second
done
docker compose down
echo "Continuing with the next test"
DTKROOT=/opt/dtk
C_INCLUDE_PATH=/opt/dtk/include:/opt/hyhal/include:/opt/dtk/llvm/include:/opt/dtk/include:/opt/hyhal/include:/opt/dtk/llvm/include:
LANGUAGE=en_US.UTF-8
MIOPEN_FIND_MODE=3
HOSTNAME=79cfa5f821d3
OS_VERSION=ubuntu20.04
SHLVL=1
LD_LIBRARY_PATH=/opt/dtk/hip/lib:/opt/dtk/llvm/lib:/opt/dtk/lib:/opt/dtk/lib64:/opt/hyhal/lib:/opt/hyhal/lib64:/workspace/package/rocblas-install/lib:/opt/dtk-24.04.2/hip/lib:/opt/dtk-24.04.2/llvm/lib:/opt/dtk-24.04.2/lib:/opt/dtk-24.04.2/lib64:/opt/hyhal/lib:/opt/hyhal/lib64:/opt/dtk-24.04.2/hip/lib:/opt/dtk-24.04.2/llvm/lib:/opt/dtk-24.04.2/lib:/opt/dtk-24.04.2/lib64:/opt/hyhal/lib:/opt/hyhal/lib64::/opt/mpi/lib:/opt/mpi/lib
NCCL_LAUNCH_MODE=GROUP
HOME=/root
OLDPWD=/
NCCL_MIN_NCHANNELS=20
NCCL_MAX_NCHANNELS=20
NCCL_P2P_LEVEL=SYS
HYHAL_PATH=/opt/hyhal
AMDGPU_TARGETS=gfx906;gfx926;gfx928
CPLUS_INCLUDE_PATH=/opt/dtk/include:/opt/hyhal/include:/opt/dtk/llvm/include:/opt/dtk/include:/opt/hyhal/include:/opt/dtk/llvm/include:
_=/usr/local/bin/python
MOFED_VERSION=5.8-3.0.7.0
HIP_PATH=/opt/dtk/hip
PATH=/opt/dtk/bin:/opt/dtk/llvm/bin:/opt/dtk/hip/bin:/opt/dtk/hip/bin/hipify:/opt/hyhal/bin:/opt/dtk/bin:/opt/dtk/llvm/bin:/opt/dtk/hip/bin:/opt/dtk/hip/bin/hipify:/opt/hyhal/bin:/opt/mpi/bin:/opt/hwloc/bin/:/opt/cmake/bin/:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PLATFORM=x86_64
LANG=en_US.UTF-8
ROCM_PATH=/opt/dtk
HSA_FORCE_FINE_GRAIN_PCIE=1
SHELL=/bin/bash
LLAMA_NN=1
PWD=/test
LC_ALL=en_US.UTF-8
PYTHONPATH=/usr/local/lib/python3.10/site-packages:/usr/local/:
TZ=Asia/Shanghai
MANPATH=/opt/mpi/share/man:
/usr/local/lib/python3.10/site-packages/triton/compiler/compiler.py:70: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert(False, "unsupported target")
INFO: Please install lmslim if you want to infer gptq or awq model.
Namespace(backend='vllm', dataset=None, input_len=1, output_len=2000, model='/datasets/Llama-2-7b-chat-hf/', tokenizer='/datasets/Llama-2-7b-chat-hf/', quantization=None, tensor_parallel_size=1, n=1, use_beam_search=False, num_prompts=32, seed=0, hf_max_batch_size=None, trust_remote_code=True, max_model_len=None, dtype='float16', gpu_memory_utilization=0.9, enforce_eager=False, kv_cache_dtype='auto', quantization_param_path=None, device='cuda', enable_prefix_caching=False, enable_chunked_prefill=False, max_num_batched_tokens=None, download_dir=None, output_json=None, distributed_executor_backend=None)
INFO 10-16 16:23:17 llm_engine.py:161] Initializing an LLM engine (v0.5.0) with config: model='/datasets/Llama-2-7b-chat-hf/', speculative_config=None, tokenizer='/datasets/Llama-2-7b-chat-hf/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/datasets/Llama-2-7b-chat-hf/)
INFO 10-16 16:23:18 selector.py:55] Using ROCmFlashAttention backend.
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:23:18.616631 41 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94211286087552
I1016 16:23:19.494025 41 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
I1016 16:23:19.497731 41 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94211258560320
I1016 16:23:19.501755 41 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94211259053008
INFO 10-16 16:23:19 selector.py:55] Using ROCmFlashAttention backend.
INFO 10-16 16:23:30 model_runner.py:159] Loading model weights took 12.5523 GB
INFO 10-16 16:24:24 gpu_executor.py:83] # GPU blocks: 5452, # CPU blocks: 512
INFO 10-16 16:24:38 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 10-16 16:24:38 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 10-16 16:24:51 model_runner.py:981] Graph capturing finished in 13 secs.
Processed prompts: 0%| | 0/32 [00:00<?, ?it/s, Generation Speed: 0.00 toks/s] Processed prompts: 3%|▎ | 1/32 [02:39<1:22:10, 159.05s/it, Generation Speed: 12.57 toks/s] Processed prompts: 100%|██████████| 32/32 [02:39<00:00, 4.97s/it, Generation Speed: 402.38 toks/s]
Throughput: 0.20 requests/s, 402.53 tokens/s
Elapsed_time: 159.07 s
Generate Throughput: 402.33 tokens/s
TTFT mean: 0.1981 s
I1016 16:27:31.328974 41 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
INFO: Please install lmslim if you want to infer gptq or awq model.
Namespace(backend='vllm', dataset=None, input_len=100, output_len=100, model='/datasets/Qwen1.5-32B-Chat/', tokenizer='/datasets/Qwen1.5-32B-Chat/', quantization=None, tensor_parallel_size=4, n=1, use_beam_search=False, num_prompts=64, seed=0, hf_max_batch_size=None, trust_remote_code=True, max_model_len=None, dtype='float16', gpu_memory_utilization=0.9, enforce_eager=False, kv_cache_dtype='auto', quantization_param_path=None, device='cuda', enable_prefix_caching=False, enable_chunked_prefill=False, max_num_batched_tokens=None, download_dir=None, output_json=None, distributed_executor_backend=None)
WARNING 10-16 16:27:43 config.py:1224] Casting torch.bfloat16 to torch.float16.
2024-10-16 16:27:45,537 INFO worker.py:1786 -- Started a local Ray instance.
INFO 10-16 16:27:51 config.py:629] Defaulting to use mp for distributed inference
INFO 10-16 16:27:51 config.py:645] Disabled the custom all-reduce kernel because it is not supported on AMD GPUs.
INFO 10-16 16:27:51 llm_engine.py:161] Initializing an LLM engine (v0.5.0) with config: model='/datasets/Qwen1.5-32B-Chat/', speculative_config=None, tokenizer='/datasets/Qwen1.5-32B-Chat/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, disable_custom_all_reduce=True, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/datasets/Qwen1.5-32B-Chat/)
INFO 10-16 16:27:51 selector.py:55] Using ROCmFlashAttention backend.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
(VllmWorkerProcess pid=8109) INFO 10-16 16:27:56 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8109) INFO 10-16 16:27:56 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:27:56.435388 8109 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94595162264032
(VllmWorkerProcess pid=8110) INFO 10-16 16:27:56 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8110) INFO 10-16 16:27:56 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:27:56.554723 8110 ProcessGroupNCCL.cpp:686] [Rank 2] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94549760811168
(VllmWorkerProcess pid=8111) INFO 10-16 16:27:56 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8111) INFO 10-16 16:27:56 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:27:56.597939 8111 ProcessGroupNCCL.cpp:686] [Rank 3] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94803335683968
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:27:56.601776 475 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94697219919904
I1016 16:27:57.681679 475 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
I1016 16:27:57.684379 475 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94697668204736
I1016 16:27:57.684387 8109 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94595171359856
I1016 16:27:57.684439 8111 ProcessGroupNCCL.cpp:686] [Rank 3] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94803345643920
I1016 16:27:57.684448 8110 ProcessGroupNCCL.cpp:686] [Rank 2] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94549773974320
(VllmWorkerProcess pid=8109) INFO 10-16 16:27:57 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=8109) INFO 10-16 16:27:57 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=8111) INFO 10-16 16:27:57 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=8111) INFO 10-16 16:27:57 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=8110) INFO 10-16 16:27:57 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=8110) INFO 10-16 16:27:57 pynccl.py:65] vLLM is using nccl==2.18.3
INFO 10-16 16:27:57 utils.py:623] Found nccl from library librccl.so.1
INFO 10-16 16:27:57 pynccl.py:65] vLLM is using nccl==2.18.3
I1016 16:27:57.960775 475 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94697662069104
I1016 16:27:57.961210 8110 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94549770765664
I1016 16:27:57.961233 8111 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94803338812128
I1016 16:27:57.961279 8109 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94595171362048
WARNING 10-16 16:27:57 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=8111) WARNING 10-16 16:27:57 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=8109) WARNING 10-16 16:27:57 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=8110) WARNING 10-16 16:27:57 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
INFO 10-16 16:27:58 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8110) INFO 10-16 16:27:58 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8111) INFO 10-16 16:27:58 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8109) INFO 10-16 16:27:58 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=8111) INFO 10-16 16:28:10 model_runner.py:159] Loading model weights took 15.1963 GB
(VllmWorkerProcess pid=8109) INFO 10-16 16:28:10 model_runner.py:159] Loading model weights took 15.1963 GB
(VllmWorkerProcess pid=8110) INFO 10-16 16:28:11 model_runner.py:159] Loading model weights took 15.1963 GB
INFO 10-16 16:28:11 model_runner.py:159] Loading model weights took 15.1963 GB
I1016 16:28:12.718705 475 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
INFO 10-16 16:28:59 distributed_gpu_executor.py:56] # GPU blocks: 32793, # CPU blocks: 4096
(VllmWorkerProcess pid=8110) INFO 10-16 16:29:14 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=8110) INFO 10-16 16:29:14 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=8111) INFO 10-16 16:29:14 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=8111) INFO 10-16 16:29:14 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 10-16 16:29:15 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 10-16 16:29:15 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=8109) INFO 10-16 16:29:15 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=8109) INFO 10-16 16:29:15 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 10-16 16:29:42 model_runner.py:981] Graph capturing finished in 27 secs.
(VllmWorkerProcess pid=8109) INFO 10-16 16:29:42 model_runner.py:981] Graph capturing finished in 27 secs.
(VllmWorkerProcess pid=8110) INFO 10-16 16:29:42 model_runner.py:981] Graph capturing finished in 28 secs.
(VllmWorkerProcess pid=8111) INFO 10-16 16:29:42 model_runner.py:981] Graph capturing finished in 28 secs.
WARNING 10-16 16:29:42 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
Processed prompts: 0%| | 0/64 [00:00<?, ?it/s, Generation Speed: 0.00 toks/s] Processed prompts: 2%|▏ | 1/64 [00:12<13:21, 12.72s/it, Generation Speed: 7.86 toks/s] Processed prompts: 100%|██████████| 64/64 [00:12<00:00, 5.03it/s, Generation Speed: 503.01 toks/s]
INFO 10-16 16:29:55 multiproc_worker_utils.py:154] Terminating local vLLM worker processes
(VllmWorkerProcess pid=8109) INFO 10-16 16:29:55 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=8110) INFO 10-16 16:29:55 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=8111) INFO 10-16 16:29:55 multiproc_worker_utils.py:255] Worker exiting
Throughput: 5.02 requests/s, 1003.77 tokens/s
Elapsed_time: 12.75 s
Generate Throughput: 501.88 tokens/s
TTFT mean: 1.7136 s
I1016 16:29:56.392279 8109 ProcessGroupNCCL.cpp:874] [Rank 1] Destroyed 1communicators on CUDA device 1
I1016 16:29:56.516038 8110 ProcessGroupNCCL.cpp:874] [Rank 2] Destroyed 1communicators on CUDA device 2
I1016 16:29:56.664018 8111 ProcessGroupNCCL.cpp:874] [Rank 3] Destroyed 1communicators on CUDA device 3
I1016 16:29:56.968946 8109 ProcessGroupNCCL.cpp:874] [Rank 1] Destroyed 1communicators on CUDA device 1
I1016 16:29:57.076572 8110 ProcessGroupNCCL.cpp:874] [Rank 2] Destroyed 1communicators on CUDA device 2
I1016 16:29:57.213865 8111 ProcessGroupNCCL.cpp:874] [Rank 3] Destroyed 1communicators on CUDA device 3
I1016 16:30:15.474757 475 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
I1016 16:30:16.021216 475 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
INFO: Please install lmslim if you want to infer gptq or awq model.
Namespace(backend='vllm', dataset=None, input_len=100, output_len=100, model='/datasets/Qwen2-72B-Instruct/', tokenizer='/datasets/Qwen2-72B-Instruct/', quantization=None, tensor_parallel_size=8, n=1, use_beam_search=False, num_prompts=128, seed=0, hf_max_batch_size=None, trust_remote_code=True, max_model_len=None, dtype='float16', gpu_memory_utilization=0.9, enforce_eager=False, kv_cache_dtype='auto', quantization_param_path=None, device='cuda', enable_prefix_caching=False, enable_chunked_prefill=False, max_num_batched_tokens=None, download_dir=None, output_json=None, distributed_executor_backend=None)
WARNING 10-16 16:30:28 config.py:1224] Casting torch.bfloat16 to torch.float16.
2024-10-16 16:30:30,557 INFO worker.py:1786 -- Started a local Ray instance.
INFO 10-16 16:30:33 config.py:629] Defaulting to use mp for distributed inference
INFO 10-16 16:30:33 config.py:645] Disabled the custom all-reduce kernel because it is not supported on AMD GPUs.
INFO 10-16 16:30:33 llm_engine.py:161] Initializing an LLM engine (v0.5.0) with config: model='/datasets/Qwen2-72B-Instruct/', speculative_config=None, tokenizer='/datasets/Qwen2-72B-Instruct/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=8, disable_custom_all_reduce=True, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/datasets/Qwen2-72B-Instruct/)
INFO 10-16 16:30:33 selector.py:55] Using ROCmFlashAttention backend.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
INFO: Please install lmslim if you want to infer gptq or awq model.
(VllmWorkerProcess pid=17072) INFO 10-16 16:30:38 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17072) INFO 10-16 16:30:38 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:38.736608 17072 ProcessGroupNCCL.cpp:686] [Rank 5] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94090694074192
(VllmWorkerProcess pid=17069) INFO 10-16 16:30:38 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17068) INFO 10-16 16:30:38 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17069) INFO 10-16 16:30:38 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:38.786351 17069 ProcessGroupNCCL.cpp:686] [Rank 2] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94255256256112
(VllmWorkerProcess pid=17068) INFO 10-16 16:30:38 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:38.811360 17068 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94766911471760
(VllmWorkerProcess pid=17074) INFO 10-16 16:30:39 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17074) INFO 10-16 16:30:39 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:39.091476 17074 ProcessGroupNCCL.cpp:686] [Rank 7] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94742731339680
(VllmWorkerProcess pid=17073) INFO 10-16 16:30:39 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17071) INFO 10-16 16:30:39 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17070) INFO 10-16 16:30:39 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17073) INFO 10-16 16:30:39 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:39.149852 17073 ProcessGroupNCCL.cpp:686] [Rank 6] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94878171542064
(VllmWorkerProcess pid=17071) INFO 10-16 16:30:39 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:39.155395 17071 ProcessGroupNCCL.cpp:686] [Rank 4] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=93997087099104
(VllmWorkerProcess pid=17070) INFO 10-16 16:30:39 multiproc_worker_utils.py:233] Worker ready; awaiting tasks
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:39.164678 17070 ProcessGroupNCCL.cpp:686] [Rank 3] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94652536430096
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1016 16:30:39.167346 9444 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94272506368032
I1016 16:30:40.310235 9444 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
I1016 16:30:40.314352 9444 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94272873201408
I1016 16:30:40.314357 17070 ProcessGroupNCCL.cpp:686] [Rank 3] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94652545306064
I1016 16:30:40.314363 17072 ProcessGroupNCCL.cpp:686] [Rank 5] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94090694095584
I1016 16:30:40.314405 17071 ProcessGroupNCCL.cpp:686] [Rank 4] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=93997100210416
I1016 16:30:40.314409 17069 ProcessGroupNCCL.cpp:686] [Rank 2] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94255269656048
I1016 16:30:40.314388 17073 ProcessGroupNCCL.cpp:686] [Rank 6] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94878182445472
I1016 16:30:40.314486 17074 ProcessGroupNCCL.cpp:686] [Rank 7] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94742741465008
I1016 16:30:40.314513 17068 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94766921226160
INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17069) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17071) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17073) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17069) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17071) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17073) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17068) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17068) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17070) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17070) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17074) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17072) INFO 10-16 16:30:40 utils.py:623] Found nccl from library librccl.so.1
(VllmWorkerProcess pid=17074) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
(VllmWorkerProcess pid=17072) INFO 10-16 16:30:40 pynccl.py:65] vLLM is using nccl==2.18.3
I1016 16:30:40.632812 9444 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94272950340288
I1016 16:30:40.632882 17068 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94766921633088
I1016 16:30:40.632970 17069 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94255267050448
I1016 16:30:40.633188 17072 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94090699339168
I1016 16:30:40.633219 17071 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=93997097205936
I1016 16:30:40.633245 17070 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94652536451488
I1016 16:30:40.633351 17073 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94878178933680
I1016 16:30:40.633558 17074 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=94742731349600
WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17071) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17068) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17069) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17072) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17070) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17073) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17074) WARNING 10-16 16:30:40 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
(VllmWorkerProcess pid=17068) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17070) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17069) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17074) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17072) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17073) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17071) INFO 10-16 16:30:40 selector.py:55] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17069) INFO 10-16 16:31:16 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17074) INFO 10-16 16:31:16 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17073) INFO 10-16 16:31:17 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17072) INFO 10-16 16:31:17 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17068) INFO 10-16 16:31:17 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17071) INFO 10-16 16:31:17 model_runner.py:159] Loading model weights took 16.9987 GB
(VllmWorkerProcess pid=17070) INFO 10-16 16:31:17 model_runner.py:159] Loading model weights took 16.9987 GB
INFO 10-16 16:31:18 model_runner.py:159] Loading model weights took 16.9987 GB
I1016 16:31:40.260812 9444 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
INFO 10-16 16:32:31 distributed_gpu_executor.py:56] # GPU blocks: 49408, # CPU blocks: 6553
(VllmWorkerProcess pid=17072) INFO 10-16 16:32:50 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17072) INFO 10-16 16:32:50 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17069) INFO 10-16 16:32:51 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17069) INFO 10-16 16:32:51 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 10-16 16:32:52 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 10-16 16:32:52 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17070) INFO 10-16 16:32:52 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17070) INFO 10-16 16:32:52 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17074) INFO 10-16 16:32:52 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17074) INFO 10-16 16:32:52 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17068) INFO 10-16 16:32:53 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17068) INFO 10-16 16:32:53 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17073) INFO 10-16 16:32:53 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17073) INFO 10-16 16:32:53 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17071) INFO 10-16 16:32:53 model_runner.py:905] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=17071) INFO 10-16 16:32:53 model_runner.py:909] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=17072) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 44 secs.
(VllmWorkerProcess pid=17073) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
(VllmWorkerProcess pid=17074) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
(VllmWorkerProcess pid=17068) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
(VllmWorkerProcess pid=17071) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
(VllmWorkerProcess pid=17070) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 42 secs.
(VllmWorkerProcess pid=17069) INFO 10-16 16:33:35 model_runner.py:981] Graph capturing finished in 44 secs.
WARNING 10-16 16:33:35 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
Processed prompts: 0%| | 0/128 [00:00<?, ?it/s, Generation Speed: 0.00 toks/s] Processed prompts: 1%| | 1/128 [00:26<55:17, 26.12s/it, Generation Speed: 3.83 toks/s] Processed prompts: 100%|██████████| 128/128 [00:26<00:00, 4.90it/s, Generation Speed: 489.96 toks/s]
INFO 10-16 16:34:01 multiproc_worker_utils.py:154] Terminating local vLLM worker processes
(VllmWorkerProcess pid=17069) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17073) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17068) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17072) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17074) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17071) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
(VllmWorkerProcess pid=17070) INFO 10-16 16:34:01 multiproc_worker_utils.py:255] Worker exiting
Throughput: 4.89 requests/s, 978.21 tokens/s
Elapsed_time: 26.17 s
Generate Throughput: 489.11 tokens/s
TTFT mean: 7.9979 s
I1016 16:34:02.772032 17072 ProcessGroupNCCL.cpp:874] [Rank 5] Destroyed 1communicators on CUDA device 5
I1016 16:34:02.782814 17073 ProcessGroupNCCL.cpp:874] [Rank 6] Destroyed 1communicators on CUDA device 6
I1016 16:34:02.783049 17074 ProcessGroupNCCL.cpp:874] [Rank 7] Destroyed 1communicators on CUDA device 7
I1016 16:34:02.806020 17069 ProcessGroupNCCL.cpp:874] [Rank 2] Destroyed 1communicators on CUDA device 2
I1016 16:34:03.035131 17068 ProcessGroupNCCL.cpp:874] [Rank 1] Destroyed 1communicators on CUDA device 1
I1016 16:34:03.088078 17071 ProcessGroupNCCL.cpp:874] [Rank 4] Destroyed 1communicators on CUDA device 4
I1016 16:34:03.129138 17070 ProcessGroupNCCL.cpp:874] [Rank 3] Destroyed 1communicators on CUDA device 3
I1016 16:34:03.375077 17069 ProcessGroupNCCL.cpp:874] [Rank 2] Destroyed 1communicators on CUDA device 2
I1016 16:34:03.381778 17072 ProcessGroupNCCL.cpp:874] [Rank 5] Destroyed 1communicators on CUDA device 5
I1016 16:34:03.388473 17074 ProcessGroupNCCL.cpp:874] [Rank 7] Destroyed 1communicators on CUDA device 7
I1016 16:34:03.390317 17073 ProcessGroupNCCL.cpp:874] [Rank 6] Destroyed 1communicators on CUDA device 6
I1016 16:34:03.611083 17068 ProcessGroupNCCL.cpp:874] [Rank 1] Destroyed 1communicators on CUDA device 1
I1016 16:34:03.649227 17071 ProcessGroupNCCL.cpp:874] [Rank 4] Destroyed 1communicators on CUDA device 4
I1016 16:34:03.678121 17070 ProcessGroupNCCL.cpp:874] [Rank 3] Destroyed 1communicators on CUDA device 3
I1016 16:34:19.893970 9444 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
I1016 16:34:20.428572 9444 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
import os
from datetime import datetime

# Get the current time and format it as a timestamp for this run's log folder
now = datetime.now()
folder_name = now.strftime("%Y%m%d%H%M")
save_path = "/test/log"
save_log_path = os.path.join(save_path, folder_name)
mark_file = "/test/finished"

# Dump the container environment for debugging, then run the benchmark scripts
os.system('env > env.log')
os.chdir("/workspace/vllm_benchmark")
os.system(f"rm -rf {mark_file}")
os.makedirs(save_log_path, exist_ok=True)
os.system(f"bash test_benchmark_llama2_7b.sh 2>&1 | tee {save_log_path}/test_benchmark_llama2_7b.log")
os.system(f"bash test_benchmark_qwen1.5_32b.sh 2>&1 | tee {save_log_path}/test_benchmark_qwen1.5_32b.log")
os.system(f"bash test_benchmark_qwen2_72b.sh 2>&1 | tee {save_log_path}/test_benchmark_qwen2_72b.log")
os.system(f"touch {mark_file}")