# Installation Guide

Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations. The library primarily supports CUDA-based GPUs, but the team is actively working on enabling support for additional backends like CPU, AMD ROCm, Intel XPU, and Gaudi HPU.

## Table of Contents

- [CUDA](#cuda)
  - [Installation via PyPI](#cuda-pip)
  - [Compile from Source](#cuda-compile)
  - [Preview Wheels from `main`](#cuda-preview)
- [Multi-Backend Preview](#multi-backend)
  - [Supported Backends](#multi-backend-supported-backends)
  - [Pre-requisites](#multi-backend-pre-requisites)
  - [Installation](#multi-backend-pip)
  - [Compile from Source](#multi-backend-compile)

## CUDA[[cuda]]

`bitsandbytes` is currently supported on NVIDIA GPUs with [Compute Capability](https://developer.nvidia.com/cuda-gpus) 5.0+.
The library can be built using CUDA Toolkit versions as old as **11.6** on Windows and **11.4** on Linux.

| **Feature**                     | **CC Required** | **Example Hardware Requirement**            |
|---------------------------------|-----------------|---------------------------------------------|
| LLM.int8()                      | 7.5+            | Turing (RTX 20 series, T4) or newer GPUs             |
| 8-bit optimizers/quantization   | 5.0+            | Maxwell (GTX 900 series, TITAN X, M40) or newer GPUs |
| NF4/FP4 quantization            | 5.0+            | Maxwell (GTX 900 series, TITAN X, M40) or newer GPUs |

> [!WARNING]
> Support for Maxwell GPUs is deprecated and will be removed in a future release. For the best results, a Turing generation device or newer is recommended.
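
To check your own GPU against the table above, here is a small sketch. The thresholds are taken directly from the table; the commented-out query at the bottom assumes a CUDA-enabled PyTorch build.

```python
# Minimum compute capability per feature, taken from the table above.
FEATURE_MIN_CC = {
    "LLM.int8()": (7, 5),
    "8-bit optimizers/quantization": (5, 0),
    "NF4/FP4 quantization": (5, 0),
}

def supported_features(cc):
    """Return the features supported at compute capability `cc`, e.g. (8, 6)."""
    return [name for name, min_cc in FEATURE_MIN_CC.items() if cc >= min_cc]

# On a CUDA machine with PyTorch installed, query the device directly:
# import torch
# cc = torch.cuda.get_device_capability(0)  # e.g. (8, 6) for an RTX 3090
# print(supported_features(cc))
```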

### Installation via PyPI[[cuda-pip]]

This is the most straightforward and recommended installation option.

The currently distributed `bitsandbytes` packages are built with the following configurations:

| **OS**             | **CUDA Toolkit** | **Host Compiler**    | **Targets**
|--------------------|------------------|----------------------|--------------
| **Linux x86-64**   | 11.8 - 12.6      | GCC 11.2             | sm50, sm60, sm75, sm80, sm86, sm89, sm90
| **Linux x86-64**   | 12.8             | GCC 11.2             | sm75, sm80, sm86, sm89, sm90, sm100, sm120
| **Linux aarch64**  | 11.8 - 12.6      | GCC 11.2             | sm75, sm80, sm90
| **Linux aarch64**  | 12.8             | GCC 11.2             | sm75, sm80, sm90, sm100
| **Windows x86-64** | 11.8 - 12.6      | MSVC 19.43+ (VS2022) | sm50, sm60, sm75, sm80, sm86, sm89, sm90
| **Windows x86-64** | 12.8             | MSVC 19.43+ (VS2022) | sm75, sm80, sm86, sm89, sm90, sm100, sm120

Use `pip` or `uv` to install:

```bash
pip install bitsandbytes
```
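
After installing, a minimal sanity check (a sketch; the printed version depends on your environment):

```python
import importlib.util

# Check that the package is importable before diving into model code.
spec = importlib.util.find_spec("bitsandbytes")
if spec is None:
    print("bitsandbytes is not installed in this environment")
else:
    import bitsandbytes as bnb
    print(f"bitsandbytes {bnb.__version__} is installed")
```

You can also run `python -m bitsandbytes` for a fuller diagnostic of the detected setup.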

### Compile from Source[[cuda-compile]]

> [!TIP]
> Don't hesitate to compile from source! The process is pretty straightforward and resilient. Compiling from source may be necessary for older CUDA Toolkit versions, older Linux distributions, or other less common configurations.

For Linux and Windows systems, compiling from source allows you to customize the build configuration. See the detailed platform-specific instructions below, and check the `CMakeLists.txt` if you want to explore the specifics and some additional options:

<hfoptions id="source">
<hfoption id="Linux">

To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. Make sure you have a compiler installed to compile C++ (`gcc`, `make`, headers, etc.). It is recommended to use GCC 9 or newer.

For example, to install a compiler and CMake on Ubuntu:

```bash
apt-get install -y build-essential cmake
```

You should also install the CUDA Toolkit by following the [NVIDIA CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html). The current minimum supported CUDA Toolkit version that we test with is **11.8**.

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

> [!TIP]
> If you have multiple versions of the CUDA Toolkit installed or it is in a non-standard location, please refer to CMake CUDA documentation for how to configure the CUDA compiler.
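
For example, assuming a toolkit installed under `/usr/local/cuda-12.4` (an example path, not a requirement), you can point CMake at its `nvcc` explicitly via the standard `CMAKE_CUDA_COMPILER` variable:

```bash
# Select a specific CUDA toolkit when several are installed
# (the path is an example; adjust it to your installation):
cmake -DCOMPUTE_BACKEND=cuda -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.4/bin/nvcc -S .
make
```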

</hfoption>
<hfoption id="Windows">

Compilation from source on Windows systems requires Visual Studio with C++ support as well as an installation of the CUDA Toolkit.

To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. You should also install the CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) from NVIDIA. The current minimum supported CUDA Toolkit version that we test with is **11.8**.

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .
cmake --build . --config Release
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com/Jamezo97), [rickardp](https://github.com/rickardp), and [akx](https://github.com/akx) for their amazing contributions to make `bitsandbytes` compatible with Windows.

</hfoption>
</hfoptions>

### Preview Wheels from `main`[[cuda-preview]]

If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*):

<hfoptions id="OS">
<hfoption id="Linux">

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!

# x86_64 (most users)
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl

# ARM/aarch64
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl
```

</hfoption>
<hfoption id="Windows">

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl
```
</hfoption>
</hfoptions>


## Multi-Backend Preview[[multi-backend]]

> [!WARNING]
> This functionality exists as an early technical preview and is not recommended for production use. We are in the process of upstreaming improved support for AMD and Intel hardware into the main project.

We provide an early preview of support for AMD and Intel hardware as part of a development branch.

### Supported Backends[[multi-backend-supported-backends]]

| **Backend** | **Supported Versions** | **Python versions** | **Architecture Support** | **Status** |
|-------------|------------------------|---------------------------|-------------------------|------------|
| **AMD ROCm** | 6.1+                   | 3.10+                     | minimum CDNA - `gfx90a`, RDNA - `gfx1100` | Alpha      |
| **Intel CPU** | v2.4.0+                  | 3.10+                     | Intel CPU | Alpha |
| **Intel GPU** | v2.7.0+                  | 3.10+                     | Intel GPU | Experimental |
| **Ascend NPU** | 2.1.0+ (`torch_npu`)         | 3.10+                     | Ascend NPU | Experimental |

For each supported backend, follow the respective instructions below:

### Pre-requisites[[multi-backend-pre-requisites]]

To use this preview version of `bitsandbytes` with `transformers`, be sure to install:

```bash
pip install "transformers>=4.45.1"
```
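
As a sketch of how `bitsandbytes` is used through `transformers`, the settings below follow the `BitsAndBytesConfig` API; the model name is only an example, and the heavyweight calls are commented out so the snippet runs anywhere:

```python
# Quantization settings to pass through transformers (keys follow the
# BitsAndBytesConfig API; verify against your installed transformers version).
quant_kwargs = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
}

# With transformers installed and a supported accelerator available:
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "facebook/opt-125m",  # example model only
#     quantization_config=BitsAndBytesConfig(**quant_kwargs),
# )
print(sorted(quant_kwargs))
```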

<hfoptions id="backend">
<hfoption id="AMD ROCm">

> [!WARNING]
> Pre-compiled binaries are only built for ROCm versions `6.1.2`/`6.2.4`/`6.3.2` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. [Find the pip install instructions here](#multi-backend-pip).
>
> Other supported versions that lack pre-compiled binaries [can be built from source with these instructions](#multi-backend-compile).
>
> **Windows is not supported for the ROCm backend.**

> [!TIP]
> If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`!

```bash
# Create a docker container with the ROCm image, which includes ROCm libraries
docker pull rocm/dev-ubuntu-22.04:6.3.4-complete
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/dev-ubuntu-22.04:6.3.4-complete
apt-get update && apt-get install -y git && cd /home

# Install PyTorch compatible with the ROCm version above
pip install torch --index-url https://download.pytorch.org/whl/rocm6.3/
```

</hfoption>
<hfoption id="Intel XPU">

* A compatible PyTorch version with Intel XPU support is required. It is recommended to use the latest stable release. See [Getting Started on Intel GPU](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html) for guidance.
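
A small sketch to confirm that your PyTorch build exposes the XPU backend (`torch.xpu` only exists in recent releases, hence the guard):

```python
import importlib.util

# Check whether the installed PyTorch build exposes the XPU backend.
# Older builds may lack the `torch.xpu` module entirely.
xpu_ready = False
if importlib.util.find_spec("torch") is not None:
    import torch
    xpu_ready = hasattr(torch, "xpu") and torch.xpu.is_available()
print(f"XPU available: {xpu_ready}")
```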

</hfoption>
</hfoptions>

### Installation

You can install the pre-built wheels for each backend, or compile from source for custom configurations.

#### Pre-built Wheel Installation (recommended)[[multi-backend-pip]]

<hfoptions id="platform">
<hfoption id="Linux">
This wheel provides support for ROCm and Intel XPU platforms.

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl'
```

</hfoption>
<hfoption id="Windows">
This wheel provides support for the Intel XPU platform.

```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-win_amd64.whl'
```

</hfoption>
</hfoptions>

#### Compile from Source[[multi-backend-compile]]

<hfoptions id="backend">
<hfoption id="AMD ROCm">

#### AMD GPU

bitsandbytes is supported from ROCm 6.1 through ROCm 6.4.
```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, ROCm backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/

# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=hip -S .  # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

</hfoption>
<hfoption id="Intel CPU + GPU">

#### Intel CPU + GPU (XPU)

The CPU backend compiles the C++ sources, while the XPU backend compiles the SYCL sources.
Run `export bnb_device=xpu` if you are using an XPU, or `export bnb_device=cpu` if you are using a CPU.
```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=$bnb_device -S .
make
pip install -e .
```

</hfoption>
<hfoption id="Ascend NPU">

#### Ascend NPU

Please refer to [the official Ascend installation instructions](https://www.hiascend.com/document/detail/zh/Pytorch/60RC3/configandinstg/instg/insg_0001.html) for guidance on how to install the necessary `torch_npu` dependency.

```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, Ascend NPU backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/

# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```
</hfoption>
</hfoptions>