"src/array/vscode:/vscode.git/clone" did not exist on "e18c2ab408f060b7cd988e3e237ef2b59b955454"
integrations.mdx 5.9 KB
Newer Older
1
# Integrations

bitsandbytes is widely integrated with many of the libraries in the Hugging Face and wider PyTorch ecosystem. This guide provides a brief overview of the integrations and how to use bitsandbytes with them. For more details, you should refer to the linked documentation for each library.

## Transformers

> [!TIP]
> Learn more in the bitsandbytes Transformers integration [guide](https://huggingface.co/docs/transformers/quantization#bitsandbytes).

With Transformers, it's easy to load any model in 4-bit or 8-bit precision and quantize it on the fly. To configure the quantization parameters, specify them in the [`~transformers.BitsAndBytesConfig`] class.

For example, to load a model, quantize it to 4-bit, and use the bfloat16 data type for compute:

> [!WARNING]
> bfloat16 is the ideal `compute_dtype` if your hardware supports it. While the default `compute_dtype`, float32, ensures backward compatibility (due to wide-ranging hardware support) and numerical stability, it is large and slows down computations. In contrast, float16 is smaller and faster but can lead to numerical instabilities. bfloat16 combines the best aspects of both; it offers the numerical stability of float32 and the reduced memory footprint and speed of a 16-bit data type. Check if your hardware supports bfloat16 and configure it using the `bnb_4bit_compute_dtype` parameter in [`~transformers.BitsAndBytesConfig`]!

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map="auto",
    quantization_config=quantization_config,
)
```
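Following the warning above, you can also select the compute dtype at runtime instead of hardcoding it. Here is a minimal sketch that prefers bfloat16 when the GPU supports it and otherwise keeps the float32 default:

```py
import torch
from transformers import BitsAndBytesConfig

# Prefer bfloat16 if the GPU supports it; otherwise keep the float32 default
# for numerical stability (at the cost of memory and speed).
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float32

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=compute_dtype,
)
```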

### 8-bit optimizers

You can use any of the 8-bit or paged optimizers with Transformers by passing them to the [`~transformers.Trainer`] class on initialization. All bitsandbytes optimizers are supported by passing the correct string in the [`~transformers.TrainingArguments`] `optim` parameter. For example, to load a [`~bitsandbytes.optim.PagedAdamW32bit`] optimizer:

```py
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    ...,
    optim="paged_adamw_32bit",
)
trainer = Trainer(model, training_args, ...)
trainer.train()
```

## PEFT

> [!TIP]
> Learn more in the bitsandbytes PEFT integration [guide](https://huggingface.co/docs/peft/developer_guides/quantization#quantization).

PEFT builds on the bitsandbytes Transformers integration, and extends it for training with a few more steps. Let's prepare the 4-bit model from the section above for training.

Call the [`~peft.prepare_model_for_kbit_training`] function to prepare the model for training. This only works for Transformers models!

```py
from peft import prepare_model_for_kbit_training

model_4bit = prepare_model_for_kbit_training(model_4bit)
```

Set up a [`~peft.LoraConfig`] to use QLoRA:

```py
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules="all-linear",
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
```

Now call the [`~peft.get_peft_model`] function on your model and config to create a trainable [`PeftModel`].

```py
from peft import get_peft_model

model = get_peft_model(model_4bit, config)
```
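At this point `model` is a regular [`PeftModel`]. As a quick sanity check, you can confirm that only the injected LoRA adapter weights are trainable while the 4-bit base weights stay frozen:

```py
# Reports the number of trainable (LoRA) parameters versus the total parameter count.
model.print_trainable_parameters()
```

From here, training works as in the 8-bit optimizers example above: pass `model` to [`~transformers.Trainer`] together with a paged optimizer such as `paged_adamw_32bit`.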

## Accelerate

> [!TIP]
> Learn more in the bitsandbytes Accelerate integration [guide](https://huggingface.co/docs/accelerate/usage_guides/quantization).

bitsandbytes is also easily usable from Accelerate. You can quantize any PyTorch model by passing a [`~accelerate.utils.BnbQuantizationConfig`] with your desired settings to the [`~accelerate.utils.load_and_quantize_model`] function.

```py
import torch
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from mingpt.model import GPT

model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024

# Instantiate the model skeleton without allocating memory for its weights.
with init_empty_weights():
    empty_model = GPT(model_config)

bnb_quantization_config = BnbQuantizationConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # optional
    bnb_4bit_use_double_quant=True,         # optional
    bnb_4bit_quant_type="nf4",              # optional
)

# `weights_location` is the path to the pretrained checkpoint to load and quantize.
quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=bnb_quantization_config,
    device_map="auto",
)
```

## PyTorch Lightning and Lightning Fabric

bitsandbytes is available from:

- [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale.
- [Lightning Fabric](https://lightning.ai/docs/fabric/stable/), a fast and lightweight way to scale PyTorch models without boilerplate.

Learn more in the bitsandbytes PyTorch Lightning integration [guide](https://lightning.ai/docs/pytorch/stable/common/precision_intermediate.html#quantization-via-bitsandbytes).

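As a concrete starting point, here is a minimal sketch of the Lightning Fabric route using the `BitsandbytesPrecision` plugin described in the guide linked above; the toy model and the `"nf4"` mode are only illustrative, and a CUDA GPU with bitsandbytes installed is required:

```py
import torch
from lightning.fabric import Fabric
from lightning.fabric.plugins import BitsandbytesPrecision

# Quantize the model's Linear layers to NF4 and run compute in bfloat16.
precision = BitsandbytesPrecision(mode="nf4", dtype=torch.bfloat16)
fabric = Fabric(devices=1, plugins=precision)
fabric.launch()

# A toy model standing in for your actual network.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 2),
)
model = fabric.setup_module(model)  # Linear layers are replaced with 4-bit bitsandbytes layers
```

The same plugin can be passed to the PyTorch Lightning `Trainer` through its `plugins` argument.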

## Lit-GPT

bitsandbytes is integrated with [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models. Lit-GPT is based on Lightning Fabric, and it can be used for quantization during training, finetuning, and inference.

Learn more in the bitsandbytes Lit-GPT integration [guide](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).

## Blog posts

To learn about some of the bitsandbytes integrations in more detail, take a look at the following blog posts:

- [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
- [A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration)