### 0.43.1

#### Improvements and New Features:

- Improved the serialization format for 8-bit weights; this change is fully backwards compatible. (#1164, thanks to @younesbelkada for the contributions and @akx for the review).
- Added CUDA 12.4 support to the Linux x86-64 build workflow, expanding the library's compatibility with the latest CUDA versions. (#1171, kudos to @matthewdouglas for this addition).
- Docs enhancement: Improved the instructions for installing the library from source. (#1149, special thanks to @stevhliu for the enhancements).

#### Bug Fixes:

- Fixed 4-bit quantization with blocksize=4096, where an illegal memory access was encountered. (#1160, thanks @matthewdouglas for fixing and @YLGH for reporting)
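
A minimal sketch of the now-working path (assuming a CUDA-enabled build; shapes are illustrative):

```python
# Hypothetical sanity check for the fixed blocksize=4096 path.
import torch
import bitsandbytes.functional as F

weights = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
quantized, state = F.quantize_4bit(weights, blocksize=4096, quant_type="nf4")
restored = F.dequantize_4bit(quantized, state)
print((weights.float() - restored.float()).abs().mean())  # small quantization error expected
```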

#### Internal Improvements:

- Tests: improve memory usage (#1147, thanks @matthewdouglas)
- Add CUDA 12.4 to docs/install helper (#1136, thanks @matthewdouglas)
- Minor type/doc fixes (#1128, thanks @akx)
- Reformat Python code with Ruff (#1081, thanks @akx)
- Rework of CUDA/native-library setup and diagnostics (#1041, thanks @akx)


### 0.43.0
#### Improvements and New Features:
- QLoRA + FSDP official support is now live! https://github.com/TimDettmers/bitsandbytes/pull/970 by @warner-benjamin and team - with FSDP you can train very large models (70b scale) on multiple 24GB consumer-type GPUs. See https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html for more details.
- Introduced improvements to the CI process for enhanced performance and efficiency during builds, specifically enabling more effective cross-compilation on Linux platforms. This was accomplished by deprecating Make and migrating to CMake, as well as implementing new corresponding workflows. Huge thanks go to @wkpark, @rickardp, @matthewdouglas and @younesbelkada; #1055, #1050, #1111.
- Windows should be officially supported in bitsandbytes if you install the library from source. See: https://huggingface.co/docs/bitsandbytes/main/en/index for more details
- Updated installation instructions to provide more comprehensive guidance for users. This includes clearer explanations and additional tips for various setup scenarios, making the library more accessible to a broader audience (@rickardp, #1047).
- Enhanced the library's compatibility and setup process, including fixes for CPU-only installations and improvements in CUDA setup error messaging. This effort aims to streamline the installation process and improve user experience across different platforms and setups (@wkpark, @akx, #1038, #996, #1012).
- Set up new documentation at https://huggingface.co/docs/bitsandbytes/main with extensive new sections and content to help users better understand and utilize the library. Especially notable are the new API docs (big thanks to @stevhliu and @mishig25 from HuggingFace, #1012). The API docs were further improved in #1075.
#### Bug Fixes:
- Addressed a race condition in kEstimateQuantiles, enhancing the reliability of quantile estimation in concurrent environments (@pnunna93, #1061).
- Fixed various minor issues, including typos in code comments and documentation, to improve code clarity and prevent potential confusion (@Brian Vaughan, #1063).
#### Backwards Compatibility
- After upgrading from `v0.42` to `v0.43`, when using 4bit quantization, models may generate slightly different outputs (approximately up to the 2nd decimal place) due to a fix in the code. For anyone interested in the details, [see this comment](https://github.com/TimDettmers/bitsandbytes/discussions/1094#discussioncomment-8984069).
#### Internal and Build System Enhancements:
- Implemented several enhancements to the internal and build systems, including adjustments to the CI workflows, portability improvements, and build artifact management. These changes contribute to a more robust and flexible development process, ensuring the library's ongoing quality and maintainability (@rickardp, @akx, @wkpark, @matthewdouglas; #949, #1053, #1045, #1037).
#### Contributors:
This release is made possible thanks to the many active contributors that submitted PRs and many others who contributed to discussions, reviews, and testing. Your efforts greatly enhance the library's quality and user experience. It's truly inspiring to work with such a dedicated and competent group of volunteers and professionals!

We give a special thanks to @TimDettmers for managing to find a little bit of time for valuable consultations on critical topics, despite preparing for and touring the states applying for professor positions. We wish him the utmost success!

We also extend our gratitude to the broader community for your continued support, feedback, and engagement, which play a crucial role in driving the library's development forward.

### 0.42.0

Features:

- 4-bit serialization now supported. This enables 4-bit load/store. Thank you @poedator #753
- the bitsandbytes library now has a version attribute: `bitsandbytes.__version__` @rasbt #710
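
A minimal sketch of both features (assuming a CUDA build; the file name is illustrative):

```python
# Sketch: the new version attribute and 4-bit weight serialization.
import torch
import bitsandbytes as bnb

print(bnb.__version__)  # e.g. "0.42.0"

layer = bnb.nn.Linear4bit(64, 64).cuda()         # weights are quantized on .cuda()
torch.save(layer.state_dict(), "linear4bit.pt")  # 4-bit weights now serialize
```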

Bug fixes:

- Fixed bugs in dynamic exponent data type creation. Thank you @RossM, @KohakuBlueleaf, @ArrowM #659 #227 #262 #152
- Fixed an issue where 4-bit serialization would fail for layers without double quantization #868. Thank you, @poedator
- Fixed an issue where calling .to() or .cuda() on a 4-bit layer twice would result in an error #867. Thank you, @jph00
- Fixed a bug where a missing access permission in a path searched for CUDA would lead to an error @osma #677
- Fixed a bug where the GOOGLE_VM_CONFIG_LOCK_FILE variable could cause errors in colab environments @akrentsel @xaptronic #715 #883 #622
- Fixed a bug where kgetColRowStats (LLM.int8()) would fail for certain dimensions @LucQueen #905
- Fixed a bug where the adjusted regular Embedding layer was not available via bnb.nn.Embedding @neel04 #563
- Fixed a missing scipy requirement @dulalbert #525
### 0.41.3
Bug fixes:
- Fixed an issue where 4-bit serialization would fail for layers without double quantization #868. Thank you, @poedator
- Fixed an issue where calling .to() or .cuda() on a 4-bit layer twice would result in an error #867. Thank you, @jph00
### 0.41.2
Feature:
- 4-bit serialization now supported. This enables 4-bit load/store. Thank you @poedator #753
### 0.41.1
Bug fixes:

- Fixed bugs in dynamic exponent data type creation. Thank you @RossM, @KohakuBlueleaf, @ArrowM #659 #227 #262 #152

### 0.41.0

Features:
- Added precompiled CUDA 11.8 binaries to support H100 GPUs without compilation #571
- CUDA SETUP now no longer looks for libcuda and libcudart and instead relies on the PyTorch CUDA libraries. To manually override this behavior, see: how_to_use_nonpytorch_cuda.md. Thank you @rapsealk
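
For the manual override, a hedged sketch following how_to_use_nonpytorch_cuda.md (treat the exact variable value as an assumption for your setup):

```python
# Sketch: point bitsandbytes at a non-PyTorch CUDA runtime (assumed workflow).
import os

os.environ["BNB_CUDA_VERSION"] = "122"  # e.g. CUDA 12.2; set before the import below
import bitsandbytes  # noqa: E402
```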
Bug fixes:
- Fixed a bug where the default type of absmax was undefined, which led to errors if the default type was different from torch.float32. #553
- Fixed a missing scipy dependency in requirements.txt. #544
- Fixed a bug where a view operation could cause an error in 8-bit layers.
- Fixed a bug where CPU bitsandbytes would fail during import. #593 Thank you @bilelomrani
- Fixed a bug where a non-existent LD_LIBRARY_PATH variable led to a failure in python -m bitsandbytes #588
- Removed outdated get_cuda_lib_handle calls that led to errors. #595 Thank you @ihsanturk
- Fixed bug where read-permission was assumed for a file. #497
- Fixed a bug where prefetchAsync led to errors on GPUs that support unified memory but not prefetching (Maxwell, SM52). #470 #451 #453 #477 Thank you @jllllll and @stoperro
Documentation:
- Improved documentation for GPUs that do not support 8-bit matmul. #529
- Added description and pointers for the NF4 data type. #543
User experience:
- Improved handling of default compute_dtype for Linear4bit Layers, so that compute_dtype = input_dtype if the input data type is stable enough (float32, bfloat16, but not float16).
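
A minimal sketch of the new default (assuming a CUDA build):

```python
# Sketch: with compute_dtype left unset, a stable input dtype is adopted.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear4bit(128, 128, compute_dtype=None).cuda()
x = torch.randn(1, 128, dtype=torch.bfloat16, device="cuda")
out = layer(x)  # bf16 input -> bf16 compute; fp16 input would not trigger this
```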

Performance:

- Improved 4-bit inference performance for A100 GPUs. This slightly degraded performance for A40/RTX 3090 and RTX 4090 GPUs.

### 0.40.2

Bug fixes:

- Fixed a bug where a non-existent LD_LIBRARY_PATH variable led to a failure in python -m bitsandbytes #588
- Removed outdated get_cuda_lib_handle calls that led to errors. #595 Thank you @ihsanturk
- Fixed bug where read-permission was assumed for a file. #497
- Fixed a bug where prefetchAsync led to errors on GPUs that support unified memory but not prefetching (Maxwell, SM52). #470 #451 #453 #477 Thank you @jllllll and @stoperro

### 0.40.1

Features:

- Added precompiled CUDA 11.8 binaries to support H100 GPUs without compilation #571
- CUDA SETUP now no longer looks for libcuda and libcudart and instead relies on the PyTorch CUDA libraries. To manually override this behavior, see: how_to_use_nonpytorch_cuda.md. Thank you @rapsealk

Bug fixes:
- Fixed a bug where the default type of absmax was undefined, which led to errors if the default type was different from torch.float32. #553
- Fixed a missing scipy dependency in requirements.txt. #544
- Fixed a bug where a view operation could cause an error in 8-bit layers.
- Fixed a bug where CPU bitsandbytes would fail during import. #593 Thank you @bilelomrani
Documentation:
- Improved documentation for GPUs that do not support 8-bit matmul. #529
- Added description and pointers for the NF4 data type. #543

### 0.40.0

Features:

- Added 4-bit inference kernels for batch size=1. Currently supported are the NF4 and FP4 data types.
- Added support for quantization of bfloat16 input data.

Bug fixes:

- Added `device` variable for bitsandbytes layers to be compatible with PyTorch layers.
Deprecated:
- Binaries for CUDA 11.2, 11.6 no longer ship with `pip install bitsandbytes` and need to be compiled from source.
### 0.39.0

Features:

- 4-bit matrix multiplication for Float4 and NormalFloat4 data types.
- Added 4-bit quantization routines
- Double quantization routines for 4-bit quantization
- Paged optimizers for Adam and Lion (see the sketch after this list).
- bfloat16 gradient / weight support for Adam and Lion with 8 or 32-bit states.
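
For the paged optimizers, a minimal sketch (assuming a CUDA build; `PagedAdamW` is one of the paged variants):

```python
# Sketch: a paged optimizer whose state can spill to CPU memory under pressure.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(512, 512).cuda()
optimizer = bnb.optim.PagedAdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 512, device="cuda")).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```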

Bug fixes:
- Fixed a bug where 8-bit models consumed twice the expected memory after serialization
Deprecated:
- Kepler binaries (GTX 700s and Tesla K40/K80) are no longer provided via pip and need to be compiled from source. Kepler support might be fully removed in the future.

### 0.38.1

Features:

- Added Int8 SwitchBack layers
- Added Fake FP8 layers for research purposes (available under `bnb.research.nn. ...`)
### 0.38.0

#### 8-bit Lion, Load/Store 8-bit Models directly from/to HF Hub
Features:

- Support for 32 and 8-bit Lion has been added. Thank you @lucidrains
- Support for serialization of Linear8bitLt layers (LLM.int8()). This allows storing and loading 8-bit weights directly from the HuggingFace Hub (see the sketch after this list). Thank you @myrab
- New bug report feature: `python -m bitsandbytes` now gives extensive debugging details to diagnose CUDA setup failures.
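
A hedged sketch of the 8-bit round trip (CUDA build assumed; the file name is illustrative):

```python
# Sketch: serialize an 8-bit layer's weights after quantization.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False).cuda()  # quantizes on .cuda()
torch.save(layer.state_dict(), "int8_layer.pt")  # int8 weights now serialize
```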
Bug fixes:
- Fixed a bug where some bitsandbytes methods failed in a model-parallel setup on multiple GPUs. Thank you @tonylins
- Fixed a bug where cudart.so libraries could not be found in newer PyTorch releases.
Improvements:
- Improved the CUDA Setup procedure by doing a more extensive search for CUDA libraries
Deprecated:
- Devices with compute capability 3.0 (GTX 700s, K10) and 3.2 (Tegra K1, Jetson TK1) are now deprecated and support will be removed in 0.39.0.
- Support for CUDA 10.0 and 10.2 will be removed in bitsandbytes 0.39.0
### 0.37.0

#### Int8 Matmul + backward support for all GPUs

Features:
- Int8 MatmulLt now supports backward through inversion of the ColTuring/ColAmpere format. Slow, but memory efficient. Big thanks to @borzunov
- Int8 now supported on all GPUs. On devices with compute capability \< 7.5, the Int8 weights are cast to 16/32-bit for the matrix multiplication. Contributed by @borzunov

Improvements:

- Improved logging for the CUDA detection mechanism.

### 0.36.0

#### Improvements, Ada/Hopper support, fake k-bit quantization.

Features:

- CUDA 11.8 and 12.0 support added
- support for Ada and Hopper GPUs added (compute capability 8.9 and 9.0)
- support for fake k-bit block-wise quantization for Int, Float, quantile quantization, and dynamic exponent data types added
- Added CUDA instruction generator to fix some installations.
- Added additional block sizes for quantization {64, 128, 256, 512, 1024} (see the sketch after this list)
- Added SRAM Quantile algorithm to quickly estimate less than 256 quantiles
- Added option to suppress the bitsandbytes welcome message (@Cyberes)
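
A sketch of the new block sizes (CUDA build assumed; the welcome-message switch is taken from the referenced contribution and should be treated as an assumption):

```python
# Sketch: block-wise quantization with one of the new block sizes.
import os

os.environ["BITSANDBYTES_NOWELCOME"] = "1"  # assumed switch to silence the banner

import torch  # noqa: E402
import bitsandbytes.functional as F  # noqa: E402

x = torch.randn(4096, device="cuda")
q, state = F.quantize_blockwise(x, blocksize=256)
x_restored = F.dequantize_blockwise(q, state)
```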

Regression:

- Compute capability 3.0 removed: the GTX 600 and 700 series are no longer supported (except GTX 780 and GTX 780 Ti)

Bug fixes:

- fixed a bug where too long directory names would crash the CUDA SETUP #35 (@tomaarsen)
- fixed a bug where CPU installations on Colab would run into an error #34 (@tomaarsen)
- fixed an issue where the default CUDA version with fast-DreamBooth was not supported #52
- fixed a bug where the CUDA setup failed due to a wrong function call.
- fixed a bug in the CUDA Setup which led to an incomprehensible error if no GPU was detected.
- fixed a bug where the CUDA setup failed when the CUDA runtime was found but not the CUDA library.
- fixed a bug where not finding the CUDA runtime led to an incomprehensible error.
- fixed a bug where a missing CUDA installation resulted in an error instead of loading the CPU library
- fixed a bug where the CC version of the GPU was not detected appropriately (@BlackHC)
- fixed a bug in CPU quantization which led to errors when the input buffer exceeded 2^31 elements

Improvements:
- multiple improvements in formatting, removal of unused imports, and slight performance improvements (@tomaarsen)
- StableEmbedding layer now has device and dtype parameters to make it 1:1 replaceable with regular Embedding layers (@lostmsu; see the sketch after this list)
- runtime performance of block-wise quantization slightly improved
- added error message for the case multiple libcudart.so are installed and bitsandbytes picks the wrong one
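
For the StableEmbedding change, a minimal sketch:

```python
# Sketch: StableEmbedding now mirrors nn.Embedding's device/dtype arguments.
import torch
import bitsandbytes as bnb

emb = bnb.nn.StableEmbedding(1000, 64, device="cpu", dtype=torch.float32)
tokens = torch.randint(0, 1000, (2, 8))
vectors = emb(tokens)  # drop-in replacement for torch.nn.Embedding
```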
### 0.35.4
Bug fixes:
- Fixed a bug where the CUDA setup failed when the CUDA runtime was found but not the CUDA library.
- Fixed a bug where not finding the CUDA runtime led to an incomprehensible error.
### 0.35.3
Bug fixes:
- Fixed a bug in the CUDA Setup which led to an incomprehensible error if no GPU was detected.
### 0.35.2

Bug fixes:
- Fixed a bug where the CUDA setup failed due to a wrong function call.
### 0.35.1
Features:
- Added CUDA instruction generator to fix some installations.
Bug fixes:
- Fixed a problem where warning messages would be displayed even though everything worked correctly.
### 0.35.0
#### CUDA 11.8 support and bug fixes

Features:
298
299

- CUDA 11.8 support added and binaries added to the PyPI release.

Bug fixes:

- fixed a bug where too long directory names would crash the CUDA SETUP #35 (thank you @tomaarsen)
- fixed a bug where CPU installations on Colab would run into an error #34 (thank you @tomaarsen)
- fixed an issue where the default CUDA version with fast-DreamBooth was not supported #52
### 0.34.0
#### Bug fixes and memory efficient backprop

Features:

- Linear8bitLt layer now supports `memory_efficient_backward=True` which enables backprop of gradients through frozen weights.
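
A minimal sketch (assuming a CUDA build):

```python
# Sketch: gradients flow to the input even though the 8-bit weights are frozen.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(
    256, 256, has_fp16_weights=False, memory_efficient_backward=True
).cuda()
x = torch.randn(4, 256, dtype=torch.float16, device="cuda", requires_grad=True)
layer(x).sum().backward()
print(x.grad.shape)  # gradient w.r.t. the input of the frozen layer
```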

Bug fixes:
- fixed an issue where too many threads were created in blockwise quantization on the CPU for large tensors
### 0.33.0
#### Various bug fixes

Features:

- CPU quantization now supports a variable `blocksize` to trade off quantization speed against precision.

Bug fixes:
- fixed an issue in CPU quantization where tensors with more than 2^31 elements would fail 19a7adca7a6c9bf7061a384d7e9d9b13676a1a88
- fixed a bug where cpu binaries would fail if no GPU was detected eab4d8232d558f2e6bd7f7cc3d00e2e6e94f4e80
- fixed an issue where cpu binaries caused additional stdout messages 92a3363096e10ad6a5c4e944af898bd1186d806a
- fixed an import of bnb.utils 2e630b55f51d454f3bd723dffda68a07ef93190c
We thank @mryab, @mbrukman, @chessgecko, @dbaranchuk for pull requests with bug fixes and new features.
### 0.32.0
#### 8-bit Inference Performance Enhancements
We added performance enhancements for small models. This makes small models about 2x faster for LLM.int8() inference.

Features:

- Int32 dequantization now supports fused biases.
- Linear8bitLt now uses a fused bias implementation.
- Changed `.data.storage().data_ptr()` to `.data.data_ptr()` to enhance inference performance.

Bug fixes:

- Now throws an error if LLM.int8() is used on a GPU that is not supported.
- Enhanced error messaging if the CUDA SETUP fails.
### 0.31.0
#### 8-bit Inference and Packaging Update
Features:
- added direct outlier extraction. This enables outlier extraction without fp16 weights and without performance degradation.
- Added an automatic CUDA SETUP procedure and packaged all binaries into a single bitsandbytes package.
### 0.30.0
#### 8-bit Inference Update
Features:

- Added 8-bit matrix multiplication from cuBLAS and cuBLASLt, as well as multiple GEMM kernels (GEMM, GEMMEx, GEMMLt)
- Added 8-bit Linear layers with 8-bit Params that perform memory-efficient inference with an option for 8-bit mixed-precision matrix decomposition for inference without performance degradation (see the sketch after this list)
- Added quantization methods for "fake" quantization, optimized kernels for vector-wise quantization and equalization, and optimized cuBLASLt transformations
- CPU only build now available (Thank you, @mryab)
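
A hedged sketch of the mixed-precision 8-bit layer (CUDA build assumed; the threshold value is illustrative):

```python
# Sketch: LLM.int8()-style layer; activation outliers above the threshold stay in fp16.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False, threshold=6.0).cuda()
x = torch.randn(1, 1024, dtype=torch.float16, device="cuda")
y = layer(x)
```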

Deprecated:

- Pre-compiled releases for CUDA 9.2, 10.0, and 10.2 are no longer available

### 0.26.0:

Features:

- Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizer.
- Added AdamW (a copy of Adam with weight decay initialized to 1e-2; see the sketch after this list). #10
- Introduced ModuleConfig overrides which can seamlessly be used at initialization time of a module.
- Added `bnb.nn.Embedding` layer which runs at 32-bit but without the layernorm. This works well if you need to fine-tune pretrained models that do not have an embedding layer norm. #19
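
A minimal sketch of the new pieces (hyperparameters are illustrative):

```python
# Sketch: the layernorm-free 32-bit bnb.nn.Embedding with the new AdamW.
import torch
import bitsandbytes as bnb

emb = bnb.nn.Embedding(1000, 64)                        # no embedding layer norm
optimizer = bnb.optim.AdamW(emb.parameters(), lr=1e-3)  # Adam with weight decay
```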

Bug fixes:

- Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
- Fixed an unsafe use of eval. #8
- Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (`bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())`).  #13 #15
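
The override workflow that fix refers to, as a sketch (model and keys are illustrative):

```python
# Sketch: register all parameters first, then pin specific ones to 32-bit states.
import torch
import bitsandbytes as bnb

model = torch.nn.Sequential(torch.nn.Embedding(100, 32), torch.nn.Linear(32, 2))
manager = bnb.optim.GlobalOptimManager.get_instance()
manager.register_parameters(model.parameters())             # register the whole model first
manager.override_config(model[0].weight, "optim_bits", 32)  # keep embeddings in 32-bit
optimizer = bnb.optim.Adam8bit(model.parameters())
```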

Docs:

- Added instructions on how to solve "\_\_fatbinwrap\_" errors.

### 0.0.25:

Features:

- Added `skip_zeros` for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models (see the sketch after this list).
- Added support for Kepler GPUs. (#4)
- Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
- Made compilation more user-friendly.
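
In today's code, `skip_zeros` surfaces on the low-level optimizer base class; a hedged sketch (treat the exact constructor and import path as assumptions):

```python
# Sketch: skip_zeros avoids updates for zero-valued gradient entries (sparse models).
import torch
from bitsandbytes.optim.optimizer import Optimizer2State  # assumed import path

p = torch.nn.Parameter(torch.randn(1024, device="cuda"))
optimizer = Optimizer2State("adam", [p], lr=1e-3, skip_zeros=True)
```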

Bug fixes:
- fixed "undefined symbol: \_\_fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)
Docs:
- Added docs with instructions to compile from source.
### 0.0.24:
- Fixed a bug where a float/half conversion led to a compilation error for CUDA 11.1 on Turing GPUs.
- Removed the Apex dependency for bnb LAMB
### 0.0.23:
Bugs:
- Unified quantization API: each quantization function now returns `Q, S` where `Q` is the quantized tensor and `S` the quantization state which may hold absolute max values, a quantization map or more. For dequantization all functions now accept the inputs `Q, S` so that `Q` is dequantized with the quantization state `S` (see the sketch after this list).
- Fixed an issue where the CUDA 11.1 binary was not compiled with the right headers
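
A sketch of the unified shape of the API (block-wise quantization shown; per the feature note below, it also runs on CPU tensors):

```python
# Sketch: quantizers return (Q, S); dequantizers accept (Q, S).
import torch
import bitsandbytes.functional as F

x = torch.randn(4096)
Q, S = F.quantize_blockwise(x)        # Q: quantized tensor, S: quantization state
x_restored = F.dequantize_blockwise(Q, S)
```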

API changes:

- Block-wise quantization for optimizers now enabled by default

Features:

- Block-wise quantization routines now support CPU Tensors.

### 0.0.22:

- Fixed an error where a `reset_parameters()` call on the `StableEmbedding` would lead to an error in older PyTorch versions (from 1.7.0).

### 0.0.21

- Ampere, RTX 30 series GPUs now compatible with the library.