# Installation

## CUDA

bitsandbytes is only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, a multi-backend effort is under way and currently in alpha release; check [the respective section below](#multi-backend) if you're interested in helping us with early feedback.
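
To check which CUDA version your installed PyTorch build was compiled against (the version bitsandbytes detects by default), you can query it directly. A minimal check, assuming PyTorch is already installed:

```bash
python -c "import torch; print(torch.version.cuda)"
```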

The latest version of bitsandbytes builds on:

| OS | CUDA | Compiler |
|---|---|---|
| Linux | 11.7 - 12.3 | GCC 11.4 |
|  | 12.4+ | GCC 13.2 |
| Windows | 11.7 - 12.4 | MSVC 19.38+ (VS2022 17.8.0+) |

> [!TIP]
> MacOS support is still a work in progress! Subscribe to this [issue](https://github.com/TimDettmers/bitsandbytes/issues/1020) to get notified about discussions and to track the integration progress.

For Linux systems, make sure your hardware meets the following requirements to use bitsandbytes features.

| **Feature** | **Hardware requirement** |
|---|---|
| LLM.int8() | NVIDIA Turing (RTX 20 series, T4) or Ampere (RTX 30 series, A4-A100) GPUs |
| 8-bit optimizers/quantization | NVIDIA Kepler (GTX 780 or newer) |
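
If you're not sure which generation your GPU belongs to, you can query its CUDA compute capability (Kepler is 3.x, Turing is 7.5, Ampere is 8.x). A minimal check, assuming PyTorch with CUDA support is installed:

```bash
python -c "import torch; print(torch.cuda.get_device_capability())"
```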

> [!WARNING]
> bitsandbytes >= 0.39.1 no longer includes Kepler binaries in pip installations. Using bitsandbytes on Kepler requires manual compilation; follow the general compilation steps and use `cuda11x_nomatmul_kepler` for Kepler-targeted compilation.

To install from PyPI:

```bash
pip install bitsandbytes
```
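
To verify that the package and its compiled library load correctly after installation, you can run the bundled diagnostic entry point (assuming a recent bitsandbytes release):

```bash
python -m bitsandbytes
```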

### Compile from source[[compile]]

For Linux and Windows systems, you can compile bitsandbytes from source. Installing from source allows for more build options with different CMake configurations.

<hfoptions id="source">
<hfoption id="Linux">

To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. Make sure you also have a C++ compiler and build tools (gcc, make, headers, etc.) installed. For example, to install a compiler and CMake on Ubuntu:

```bash
apt-get install -y build-essential cmake
```

You should also install the CUDA Toolkit by following the [NVIDIA CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) from NVIDIA. The current expected CUDA Toolkit version is **11.1+**; it is recommended to use **GCC >= 7.3**, and at least **GCC >= 6** is required.

Refer to the following table if you're using another CUDA Toolkit version.

| CUDA Toolkit | GCC |
|---|---|
| >= 11.4.1 | >= 11 |
| >= 12.0 | >= 12 |
| >= 12.4 | >= 13 |
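
Before configuring the build, you can confirm which CUDA Toolkit and GCC versions are on your path:

```bash
nvcc --version
gcc --version
```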

Now to install the bitsandbytes package from source, run the following commands:

```bash
git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

> [!TIP]
> If you have multiple versions of CUDA installed or installed it in a non-standard location, please refer to the CMake CUDA documentation for how to configure the CUDA compiler.
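
As a sketch, one way to point the build at a specific toolkit is to pass the CUDA compiler path explicitly via CMake's standard `CMAKE_CUDA_COMPILER` variable (the path below is only an example for a CUDA 12.1 install under `/usr/local`):

```bash
cmake -DCOMPUTE_BACKEND=cuda -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc -S .
```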

</hfoption>
<hfoption id="Windows">

Windows systems require Visual Studio with C++ support as well as an installation of the CUDA SDK.

To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. You should also install the CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) from NVIDIA.

Refer to the following table if you're using another CUDA Toolkit version.

| CUDA Toolkit | MSVC |
|---|---|
| >= 11.6 | 19.30+ (VS2022) |
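
To check which MSVC toolchain and CMake versions are active, you can run both from a Developer Command Prompt (running `cl` without arguments prints its version banner):

```bash
cl
cmake --version
```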

```bash
git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
cmake --build . --config Release
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com/Jamezo97), [rickardp](https://github.com/rickardp), [akx](https://github.com/akx) for their amazing contributions to make bitsandbytes compatible with Windows.

</hfoption>
</hfoptions>

### PyTorch CUDA versions

Some bitsandbytes features may need a newer CUDA version than the one currently supported by PyTorch binaries from Conda and pip. In this case, you should follow these instructions to load a precompiled bitsandbytes binary.

1. Determine the path of the CUDA version you want to use. Common paths include:

* `/usr/local/cuda`
* `/usr/local/cuda-XX.X` where `XX.X` is the CUDA version number
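
For example, to list the CUDA Toolkit installations already present under `/usr/local`:

```bash
ls -d /usr/local/cuda*
```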

Then locally install the CUDA version you need with this script from bitsandbytes:

```bash
wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh
# Syntax cuda_install CUDA_VERSION INSTALL_PREFIX EXPORT_TO_BASH
#   CUDA_VERSION in {110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121, 122, 123, 124, 125}
#   EXPORT_TO_BASH in {0, 1} with 0=False and 1=True

# For example, the following installs CUDA 11.7 to ~/local/cuda-11.7 and exports the path to your .bashrc

bash install_cuda.sh 117 ~/local 1
```

2. Set the environment variables `BNB_CUDA_VERSION` and `LD_LIBRARY_PATH` to manually override the CUDA version installed by PyTorch.

> [!TIP]
> It is recommended to add the following lines to the `.bashrc` file to make them permanent.

```bash
export BNB_CUDA_VERSION=<VERSION>
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<PATH>
```

For example, to use a local install path:

```bash
export BNB_CUDA_VERSION=117
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7
```

3. Now when you launch bitsandbytes with these environment variables, the PyTorch CUDA version is overridden by the new CUDA version (in this example, version 11.7) and a different bitsandbytes library is loaded.
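
To confirm the override takes effect, you can import bitsandbytes in the same shell session; the native library is selected at import time, and recent versions typically print a warning noting the `BNB_CUDA_VERSION` override (a quick sanity check, assuming the variables above are exported):

```bash
python -c "import bitsandbytes as bnb; print(bnb.__version__)"
```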

## Multi-backend preview release compilation[[multi-backend]]

Please follow these steps to install bitsandbytes with device-specific backend support other than CUDA:

<hfoptions id="backend">
<hfoption id="AMD ROCm">

### AMD GPU

bitsandbytes is fully supported from ROCm 6.1 onwards (currently in alpha release).

> [!TIP]
> If you already have ROCm and PyTorch installed, skip the Docker steps below and make sure your torch version matches your ROCm installation. To install torch for a specific ROCm version, please refer to step 3 of the wheels installation section in the [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) guide.

```bash
# Create a docker container with latest pytorch. It comes with ROCm and pytorch preinstalled
docker pull rocm/pytorch:latest
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/pytorch:latest

# Clone bitsandbytes repo, ROCm backend is currently enabled on multi-backend-refactor branch
git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/

# Install dependencies
pip install -r requirements-dev.txt

# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=hip -S .  # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

</hfoption>
<hfoption id="Intel CPU + GPU">

### Intel CPU

> [!TIP]
> The Intel CPU backend only supports building from source for now; please follow the instructions below.

Similar to the CUDA case, you can compile bitsandbytes from source for Linux and Windows systems.

The commands below are for Linux. To install on Windows, adapt them following the same pattern described in [the Windows tab of the section above on compiling from source](#compile).

```bash
git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install intel_extension_for_pytorch
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cpu -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```

</hfoption>
<hfoption id="Apple Silicon (MPS)">

WIP

</hfoption>
</hfoptions>