# Installation Guide

Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations. The library primarily supports CUDA-based GPUs, but the team is actively working on enabling support for additional backends like CPU, AMD ROCm, Intel XPU, and Gaudi HPU.

## Table of Contents

- [CUDA](#cuda)
  - [Installation via PyPI](#cuda-pip)
  - [Compile from Source](#cuda-compile)
  - [Preview Wheels from `main`](#cuda-preview)
- [Multi-Backend Preview](#multi-backend)
  - [Supported Backends](#multi-backend-supported-backends)
  - [Pre-requisites](#multi-backend-pre-requisites)
  - [Installation](#multi-backend-pip)
  - [Compile from Source](#multi-backend-compile)

## CUDA[[cuda]]

`bitsandbytes` is currently supported on NVIDIA GPUs with [Compute Capability](https://developer.nvidia.com/cuda-gpus) 6.0+.
The library can be built using CUDA Toolkit versions as old as **11.8**.
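Feature availability comes down to comparing the device's (major, minor) compute capability against the requirements in the table below. A minimal illustrative sketch (the `supports_feature` helper and example values are not part of the library; at runtime you would read the device's capability from `torch.cuda.get_device_capability()`):

```python
def supports_feature(device_cc: tuple, required_cc: tuple) -> bool:
    """Compare compute capabilities as (major, minor) tuples."""
    return device_cc >= required_cc

# Requirements from the table below
LLM_INT8 = (7, 5)         # Turing or newer
EIGHT_BIT_OPTIM = (6, 0)  # Pascal or newer

# e.g. an RTX 3090 reports (8, 6); a GTX 1080 reports (6, 1)
print(supports_feature((8, 6), LLM_INT8))         # True
print(supports_feature((6, 1), LLM_INT8))         # False
print(supports_feature((6, 1), EIGHT_BIT_OPTIM))  # True
```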
| **Feature**                     | **CC Required** | **Example Hardware Requirement**            |
|---------------------------------|-----------------|---------------------------------------------|
| LLM.int8()                      | 7.5+            | Turing (RTX 20 series, T4) or newer GPUs    |
| 8-bit optimizers/quantization   | 6.0+            | Pascal (GTX 10X0 series, P100) or newer GPUs|
| NF4/FP4 quantization            | 6.0+            | Pascal (GTX 10X0 series, P100) or newer GPUs|
> [!WARNING]
> Support for Maxwell GPUs is deprecated and will be removed in a future release.
> Beginning with `v0.48.0`, Maxwell support is not included in the PyPI distributions and must be built from source.
> For the best results, a Turing generation device or newer is recommended.
### Installation via PyPI[[cuda-pip]]
This is the most straightforward and recommended installation option.
The currently distributed `bitsandbytes` packages are built with the following configurations:
| **OS**             | **CUDA Toolkit** | **Host Compiler**    | **Targets**
|--------------------|------------------|----------------------|--------------
| **Linux x86-64**   | 11.8 - 12.6      | GCC 11.2             | sm60, sm70, sm75, sm80, sm86, sm89, sm90
| **Linux x86-64**   | 12.8 - 12.9      | GCC 11.2             | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
| **Linux aarch64**  | 11.8 - 12.6      | GCC 11.2             | sm75, sm80, sm90
| **Linux aarch64**  | 12.8 - 12.9      | GCC 11.2             | sm75, sm80, sm90, sm100, sm120
| **Windows x86-64** | 11.8 - 12.6      | MSVC 19.43+ (VS2022) | sm50, sm60, sm75, sm80, sm86, sm89, sm90
| **Windows x86-64** | 12.8 - 12.9      | MSVC 19.43+ (VS2022) | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
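The `sm` targets listed above are just compute capabilities with the dot removed (CC 8.6 → `sm86`, CC 10.0 → `sm100`), so you can check whether a distributed wheel covers your GPU. A small hypothetical helper:

```python
def sm_target(major: int, minor: int) -> str:
    """Map a compute capability such as (8, 6) to its sm target name."""
    return f"sm{major}{minor}"

# Targets of the Linux x86-64 wheels built with CUDA 12.8 - 12.9 (from the table above)
linux_cuda128_targets = {"sm70", "sm75", "sm80", "sm86", "sm89", "sm90", "sm100", "sm120"}

print(sm_target(8, 6) in linux_cuda128_targets)  # True  (e.g. RTX 3090)
print(sm_target(6, 0) in linux_cuda128_targets)  # False (Pascal: use the CUDA 11.8 - 12.6 wheels)
```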
Use `pip` or `uv` to install:
```bash
pip install bitsandbytes

# Or, with uv:
uv pip install bitsandbytes
```
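After installation, a quick way to confirm the package is importable (a minimal sketch; recent releases also ship a fuller diagnostic via `python -m bitsandbytes`):

```python
import importlib.util

# Gracefully check that bitsandbytes is installed before importing it
if importlib.util.find_spec("bitsandbytes") is None:
    print("bitsandbytes is not installed")
else:
    import bitsandbytes as bnb
    print("bitsandbytes", bnb.__version__)
```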

### Compile from source[[cuda-compile]]

> [!TIP]
> Don't hesitate to compile from source! The process is straightforward and resilient. This might be needed for older CUDA Toolkit versions, older Linux distributions, or other less common configurations.
For Linux and Windows systems, compiling from source allows you to customize the build configuration. Detailed platform-specific instructions follow below (consult the `CMakeLists.txt` if you want to check the specifics and explore some additional options):

<hfoptions id="source">
<hfoption id="Linux">

To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. Make sure you have a compiler installed to compile C++ (`gcc`, `make`, headers, etc.). It is recommended to use GCC 9 or newer.

For example, to install a compiler and CMake on Ubuntu:
```bash
apt-get install -y build-essential cmake
```

You should also install the CUDA Toolkit by following the [NVIDIA CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html). The minimum supported CUDA Toolkit version is **11.8**.

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

> [!TIP]
> If you have multiple versions of the CUDA Toolkit installed or it is in a non-standard location, please refer to CMake CUDA documentation for how to configure the CUDA compiler.
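For example, CMake's standard `CMAKE_CUDA_COMPILER` variable selects a specific `nvcc` (the path below is hypothetical; adjust it to your toolkit installation):

```bash
# Point the build at a specific CUDA toolkit instead of the one on PATH
cmake -DCOMPUTE_BACKEND=cuda -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.4/bin/nvcc -S .
```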

</hfoption>
<hfoption id="Windows">

Compiling from source on Windows requires Visual Studio with C++ support as well as an installation of the CUDA Toolkit.
To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. You should also install the CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) from NVIDIA. The minimum supported CUDA Toolkit version is **11.8**.

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .
cmake --build . --config Release
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com/Jamezo97), [rickardp](https://github.com/rickardp), [akx](https://github.com/akx) for their amazing contributions to make bitsandbytes compatible with Windows.
</hfoption>
</hfoptions>
### Preview Wheels from `main`[[cuda-preview]]
If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*):
<hfoptions id="OS">
<hfoption id="Linux">

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
# x86_64 (most users)
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl
# ARM/aarch64
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl
```

</hfoption>
<hfoption id="Windows">

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl
```
</hfoption>
</hfoptions>
## Multi-Backend Preview[[multi-backend]]
> [!WARNING]
> This functionality is an early technical preview and is not recommended for production use. We are in the process of upstreaming improved support for AMD and Intel hardware into the main project.
We provide an early preview of support for AMD and Intel hardware as part of a development branch.
### Supported Backends[[multi-backend-supported-backends]]
| **Backend** | **Supported Versions** | **Python versions** | **Architecture Support** | **Status** |
|-------------|------------------------|---------------------------|-------------------------|------------|
| **AMD ROCm** | 6.1+                   | 3.10+                     | minimum CDNA - `gfx90a`, RDNA - `gfx1100` | Alpha      |
| **Intel CPU** | v2.4.0+                  | 3.10+                     | Intel CPU | Alpha |
| **Intel GPU** | v2.7.0+                  | 3.10+                     | Intel GPU | Experimental |
| **Ascend NPU** | 2.1.0+ (`torch_npu`)         | 3.10+                     | Ascend NPU | Experimental |

For each supported backend, follow the respective instructions below:

### Pre-requisites[[multi-backend-pre-requisites]]

To use this preview version of `bitsandbytes` with `transformers`, be sure to install:

```bash
pip install "transformers>=4.45.1"
```
<hfoptions id="backend">
<hfoption id="AMD ROCm">
> [!WARNING]
> Pre-compiled binaries are only built for ROCm versions `6.1.2`/`6.2.4`/`6.3.2` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. [Find the pip install instructions here](#multi-backend-pip).
>
> Other supported versions without pre-compiled binaries [can be compiled from source with these instructions](#multi-backend-compile).
>
> **Windows is not supported for the ROCm backend**
> [!TIP]
> If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`!

```bash
# Create a docker container with the ROCm image, which includes ROCm libraries
docker pull rocm/dev-ubuntu-22.04:6.3.4-complete
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/dev-ubuntu-22.04:6.3.4-complete
apt-get update && apt-get install -y git && cd home
# Install a PyTorch build compatible with the ROCm version above
pip install torch --index-url https://download.pytorch.org/whl/rocm6.3/
```
</hfoption>
<hfoption id="Intel XPU">
* A compatible PyTorch version with Intel XPU support is required. It is recommended to use the latest stable release. See [Getting Started on Intel GPU](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html) for guidance.
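To confirm that your PyTorch build actually exposes an XPU device, you can run a quick check (illustrative; the `torch.xpu` namespace is only present in recent XPU-enabled PyTorch builds):

```python
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed")
else:
    import torch
    # torch.xpu mirrors the torch.cuda device API on XPU-enabled builds
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        print("XPU device:", torch.xpu.get_device_name(0))
    else:
        print("No usable XPU device detected")
```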

</hfoption>
</hfoptions>

### Installation

You can install the pre-built wheels for each backend, or compile from source for custom configurations.

#### Pre-built Wheel Installation (recommended)[[multi-backend-pip]]

<hfoptions id="platform">
<hfoption id="Linux">
This wheel provides support for ROCm and Intel XPU platforms.

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl'
```

</hfoption>
<hfoption id="Windows">
This wheel provides support for the Intel XPU platform.
```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-win_amd64.whl'
```

</hfoption>
</hfoptions>

#### Compile from Source[[multi-backend-compile]]

<hfoptions id="backend">
<hfoption id="AMD ROCm">

#### AMD GPU

`bitsandbytes` is supported on ROCm 6.1 through 6.4.

```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, ROCm backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=hip -S .  # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

</hfoption>
<hfoption id="Intel CPU + GPU">
#### Intel CPU + GPU (XPU)
The CPU backend compiles C++ code, while the XPU backend compiles SYCL code.
Run `export bnb_device=xpu` if you are using an XPU, or `export bnb_device=cpu` if you are using a CPU.
```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=$bnb_device -S .
make
pip install -e .
```

</hfoption>
<hfoption id="Ascend NPU">

#### Ascend NPU

Please refer to [the official Ascend installation instructions](https://www.hiascend.com/document/detail/zh/Pytorch/60RC3/configandinstg/instg/insg_0001.html) for guidance on how to install the necessary `torch_npu` dependency.
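Before building, you can verify that both `torch` and `torch_npu` are importable (a minimal, illustrative check):

```python
import importlib.util

# The Ascend backend needs torch_npu installed alongside PyTorch
for pkg in ("torch", "torch_npu"):
    status = "found" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
```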
```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, Ascend NPU backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/

# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```
</hfoption>
</hfoptions>