# bitsandbytes

bitsandbytes makes large language models accessible via k-bit quantization for PyTorch. It provides three main features that dramatically reduce memory consumption for inference and training:

* 8-bit optimizers use block-wise quantization to maintain 32-bit optimizer performance at a small fraction of the memory cost.
* LLM.int8() or 8-bit quantization enables large language model inference with only half the required memory and without any performance degradation. This method is based on vector-wise quantization, which quantizes most features to 8 bits while separately handling outliers with 16-bit matrix multiplication.
* QLoRA or 4-bit quantization enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.
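To build intuition for the block-wise quantization idea behind the 8-bit optimizers, here is a minimal, illustrative sketch in NumPy (not the bitsandbytes implementation, which runs on GPU with fused CUDA kernels). Each block is scaled by its own absolute maximum before rounding to int8, so an outlier value only degrades precision within its own block rather than across the whole tensor:

```python
import numpy as np

def quantize_blockwise(x, block_size=4):
    """Illustrative absmax block-wise quantization to int8.

    Each block gets its own scale (its absolute maximum), so an
    outlier only hurts precision inside its block.
    """
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block_size                      # pad to a whole number of blocks
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                         # avoid division by zero for all-zero blocks
    q = np.round(blocks / scales * 127).astype(np.int8)
    return q, scales.squeeze(1)

def dequantize_blockwise(q, scales):
    """Map int8 codes back to float32 using the per-block scales."""
    return (q.astype(np.float32) / 127) * scales[:, None]

# The outlier (100.0) sits in the second block, so the first
# block's small values keep their precision.
x = np.array([0.1, -0.2, 0.05, 0.3, 100.0, -0.4, 0.2, 0.1])
q, s = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, s).reshape(-1)[: len(x)]
```

With a single global scale, the 100.0 outlier would force every value onto a grid with step ~0.8; per-block scales confine that damage to one block.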

# License

bitsandbytes is MIT licensed.

We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization.