Unverified Commit 1124d95d authored by Minki Kim, committed by GitHub

๐ŸŒ [i18n-KO] Translated `gptq.md` to Korean (#32293)



* fix: manual edits

* fix: manual edits2

* fix: delete files

* fix: resolve suggestions
Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* fix: resolve suggestions
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------
Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
parent b7fb393f
@@ -143,8 +143,8 @@
       title: (번역중) Getting started
     - local: in_translation
       title: (번역중) bitsandbytes
-    - local: in_translation
-      title: (번역중) GPTQ
+    - local: quantization/gptq
+      title: GPTQ
     - local: in_translation
       title: (번역중) AWQ
     - local: in_translation
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# GPTQ [[gptq]]

<Tip>

Try GPTQ quantization with PEFT in this [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb) and learn more about its details in this [blog post](https://huggingface.co/blog/gptq-integration)!

</Tip>

The [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes the error. The weights are quantized to int4, but they're restored to fp16 on the fly during inference. This can save memory usage by about 4x because the int4 weights are dequantized in a fused kernel rather than in the GPU's global memory, and you can also expect a speedup in inference because the lower bitwidth takes less time to communicate.

Before you begin, make sure the following libraries are installed:
```bash
pip install auto-gptq
pip install --upgrade accelerate optimum transformers
```
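Optionally, you can confirm which versions were picked up. This is a minimal sanity-check sketch using Python's standard `importlib.metadata`; the names are the pip distribution names from the commands above.
```py
# Sketch: print the installed version of each required package.
from importlib.metadata import version

for pkg in ["auto-gptq", "accelerate", "optimum", "transformers"]:
    print(pkg, version(pkg))
```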
๋ชจ๋ธ์„ ์–‘์žํ™”ํ•˜๋ ค๋ฉด(ํ˜„์žฌ ํ…์ŠคํŠธ ๋ชจ๋ธ๋งŒ ์ง€์›๋จ) [`GPTQConfig`] ํด๋ž˜์Šค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์–‘์žํ™”ํ•  ๋น„ํŠธ ์ˆ˜, ์–‘์žํ™”๋ฅผ ์œ„ํ•œ ๊ฐ€์ค‘์น˜ ๊ต์ • ๋ฐ์ดํ„ฐ์…‹, ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ์…‹์„ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```
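[`GPTQConfig`] also exposes a few more knobs beyond `bits`, `dataset`, and `tokenizer`. As an illustrative (not prescriptive) sketch, `group_size` and `desc_act` control quantization granularity and column ordering; the values below are simply the defaults.
```py
# Illustrative only: group_size and desc_act are additional GPTQConfig options.
gptq_config = GPTQConfig(
    bits=4,
    dataset="c4",
    tokenizer=tokenizer,
    group_size=128,  # quantize weights in groups of 128 columns
    desc_act=False,  # set True to quantize columns in order of decreasing activation size
)
```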
์ž์‹ ์˜ ๋ฐ์ดํ„ฐ์…‹์„ ๋ฌธ์ž์—ด ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, GPTQ ๋…ผ๋ฌธ์—์„œ ์‚ฌ์šฉํ•œ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ์…‹์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค.
```py
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```
์–‘์žํ™”ํ•  ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  `gptq_config`์„ [`~AutoModelForCausalLM.from_pretrained`] ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋ชจ๋ธ์„ ๋ฉ”๋ชจ๋ฆฌ์— ๋งž์ถ”๊ธฐ ์œ„ํ•ด `device_map="auto"`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋ธ์„ ์ž๋™์œผ๋กœ CPU๋กœ ์˜คํ”„๋กœ๋“œํ•˜๊ณ , ์–‘์žํ™”๋ฅผ ์œ„ํ•ด ๋ชจ๋ธ ๋ชจ๋“ˆ์ด CPU์™€ GPU ๊ฐ„์— ์ด๋™ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.
```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```
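As a quick sanity check, you can inspect the quantized model's memory footprint with [`~PreTrainedModel.get_memory_footprint`]; a minimal sketch, which should show roughly the 4x reduction mentioned above compared to the fp16 model:
```py
# Sketch: report the quantized model's memory footprint in megabytes.
print(f"{quantized_model.get_memory_footprint() / 1e6:.1f} MB")
```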
๋ฐ์ดํ„ฐ์…‹์ด ๋„ˆ๋ฌด ์ปค์„œ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ๋ฅผ ๋Œ€๋น„ํ•œ ๋””์Šคํฌ ์˜คํ”„๋กœ๋“œ๋Š” ํ˜„์žฌ ์ง€์›ํ•˜์ง€ ์•Š๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿด ๋•Œ๋Š” `max_memory` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋””๋ฐ”์ด์Šค(GPU ๋ฐ CPU)์—์„œ ์‚ฌ์šฉํ•  ๋ฉ”๋ชจ๋ฆฌ ์–‘์„ ํ• ๋‹นํ•ด ๋ณด์„ธ์š”:
```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "30GiB", 1: "46GiB", "cpu": "30GiB"}, quantization_config=gptq_config)
```
<Tip warning={true}>

Depending on your hardware and the number of model parameters, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the relatively lightweight [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model on a free-tier Google Colab GPU, but it can take ~4 hours to quantize a 175B parameter model on an NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub to see if a GPTQ-quantized version of the model already exists.

</Tip>
๋ชจ๋ธ์ด ์–‘์žํ™”๋˜๋ฉด, ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ Hub์— ํ‘ธ์‹œํ•˜์—ฌ ์‰ฝ๊ฒŒ ๊ณต์œ ํ•˜๊ณ  ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`GPTQConfig`]๋ฅผ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•ด [`~PreTrainedModel.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”:
```py
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```
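If you'd rather not make the checkpoint public right away, [`~PreTrainedModel.push_to_hub`] also accepts a `private` flag; a small sketch (the repo name is the same illustrative one used above):
```py
# Sketch: push the quantized model and tokenizer to a private Hub repo instead.
quantized_model.push_to_hub("opt-125m-gptq", private=True)
tokenizer.push_to_hub("opt-125m-gptq", private=True)
```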
์–‘์žํ™”๋œ ๋ชจ๋ธ์„ ๋กœ์ปฌ์— ์ €์žฅํ•˜๋ ค๋ฉด [`~PreTrainedModel.save_pretrained`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด `device_map` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์–‘์žํ™”๋˜์—ˆ์„ ๊ฒฝ์šฐ, ์ €์žฅํ•˜๊ธฐ ์ „์— ์ „์ฒด ๋ชจ๋ธ์„ GPU๋‚˜ CPU๋กœ ์ด๋™ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋ธ์„ CPU์— ์ €์žฅํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•ฉ๋‹ˆ๋‹ค:
```py
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
# device_map์ด ์„ค์ •๋œ ์ƒํƒœ์—์„œ ์–‘์žํ™”๋œ ๊ฒฝ์šฐ
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```
์–‘์žํ™”๋œ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜๋ ค๋ฉด [`~PreTrainedModel.from_pretrained`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ , `device_map="auto"`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU์— ๋ชจ๋ธ์„ ์ž๋™์œผ๋กœ ๋ถ„์‚ฐ์‹œ์ผœ ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์œผ๋ฉด์„œ ๋ชจ๋ธ์„ ๋” ๋น ๋ฅด๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
```py
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
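To verify that the reloaded model works end to end, you can run a short generation. This is a minimal smoke-test sketch; the prompt and `max_new_tokens` value are arbitrary.
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("{your_username}/opt-125m-gptq")
inputs = tokenizer("GPTQ quantization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```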
## ExLlama [[exllama]]

[ExLlama](https://github.com/turboderp/exllama) is a Python/C++/CUDA implementation of the [Llama](model_doc/llama) model designed for faster inference with 4-bit GPTQ weights (check out these [benchmarks](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark)). The ExLlama kernel is activated by default when you create a [`GPTQConfig`] object. To boost inference speed even further, configure the `exllama_config` parameter to use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels:
```py
import torch
from transformers import AutoModelForCausalLM, GPTQConfig
gptq_config = GPTQConfig(bits=4, exllama_config={"version":2})
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config)
```
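To see whether ExLlamaV2 actually helps on your setup, you can time a generation pass before and after switching kernels. The snippet below is a rough benchmarking sketch that continues from the model loaded above and assumes a CUDA GPU; the proper benchmarks linked earlier control for many more variables.
```py
import time
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("{your_username}/opt-125m-gptq")
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)

model.generate(**inputs, max_new_tokens=32)  # warmup pass
torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=32)
torch.cuda.synchronize()
print(f"generation latency: {time.perf_counter() - start:.2f}s")
```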
<Tip warning={true}>

Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you're finetuning a quantized model with PEFT.

</Tip>
ExLlama ์ปค๋„์€ ์ „์ฒด ๋ชจ๋ธ์ด GPU์— ์žˆ์„ ๋•Œ๋งŒ ์ง€์›๋ฉ๋‹ˆ๋‹ค. AutoGPTQ(๋ฒ„์ „ 0.4.2 ์ด์ƒ)๋กœ CPU์—์„œ ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ ExLlama ์ปค๋„์„ ๋น„ํ™œ์„ฑํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด config.json ํŒŒ์ผ์˜ ์–‘์žํ™” ์„ค์ •์—์„œ ExLlama ์ปค๋„๊ณผ ๊ด€๋ จ๋œ ์†์„ฑ์„ ๋ฎ์–ด์จ์•ผ ํ•ฉ๋‹ˆ๋‹ค.
```py
import torch
from transformers import AutoModelForCausalLM, GPTQConfig
gptq_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="cpu", quantization_config=gptq_config)
```