Unverified commit 71deddc8 authored by binmakeswell, committed by GitHub

[doc] resize figure (#2705)

* [doc] resize figure
parent 6a8cd687
@@ -219,14 +219,14 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
 - 最高可提升单机训练速度7.73倍,单卡推理速度1.42倍
 <p id="ChatGPT-1GPU" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT-1GPU.jpg" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT-1GPU.jpg" width=450/>
 </p>
 - 单卡模型容量最多提升10.3倍
 - 最小demo训练流程最低仅需1.62GB显存 (任意消费级GPU)
 <p id="inference" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/LoRA%20data.jpg" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/LoRA%20data.jpg" width=600/>
 </p>
 - 提升单卡的微调模型容量3.7倍
@@ -221,14 +221,14 @@ A low-cost [ChatGPT](https://openai.com/blog/chatgpt/) equivalent implementation
 - Up to 7.73 times faster for single server training and 1.42 times faster for single-GPU inference
 <p id="ChatGPT-1GPU" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT-1GPU.jpg" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT-1GPU.jpg" width=450/>
 </p>
 - Up to 10.3x growth in model capacity on one GPU
 - A mini demo training process requires only 1.62GB of GPU memory (any consumer-grade GPU)
 <p id="inference" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/LoRA%20data.jpg" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/LoRA%20data.jpg" width=600/>
 </p>
 - Increase the capacity of the fine-tuning model by up to 3.7 times on a single GPU
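For context, the commit relies on the standard GitHub-flavored Markdown technique of embedding raw HTML for figures: plain Markdown image syntax (`![alt](url)`) has no size attribute, so an inline `<img>` with a `width` attribute inside a centered `<p>` is used instead. The resulting pattern, with the new 450 px cap, looks like:

```html
<!-- Center the figure and cap its rendered width (value in pixels) -->
<p id="ChatGPT-1GPU" align="center">
  <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT-1GPU.jpg" width=450/>
</p>
```

GitHub strips most HTML attributes when rendering READMEs but allows `width`, `align`, and `id`, which is why this pattern (rather than CSS) is the usual way to resize README figures.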