### 0.0.21:
- Ampere, RTX 30 series GPUs now compatible with the library.

### 0.0.22:

- Fixed an error where a `reset_parameters()` call on the `StableEmbedding` would lead to an error in older PyTorch versions (from 1.7.0).

### 0.0.23:

Bugs:
 - Unified quantization API: each quantization function now returns `Q, S` where `Q` is the quantized tensor and `S` the quantization state which may hold absolute max values, a quantization map or more. For dequantization all functions now accept the inputs `Q, S` so that `Q` is dequantized with the quantization state `S`.
 - Fixed an issue where the CUDA 11.1 binary was not compiled with the right headers
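
The unified `Q, S` convention can be sketched as follows. This is a pure-Python illustration of block-wise absmax quantization, not the library's actual function signatures; the names `quantize_blockwise`/`dequantize_blockwise` and the dict layout of `S` are assumptions for illustration only.

```python
def quantize_blockwise(values, blocksize=4):
    """Symmetric int8 quantization per block; S stores the per-block absmax."""
    Q, absmax = [], []
    for i in range(0, len(values), blocksize):
        block = values[i:i + blocksize]
        amax = max(abs(v) for v in block) or 1.0  # avoid division by zero
        absmax.append(amax)
        Q.extend(round(v / amax * 127) for v in block)
    S = {"absmax": absmax, "blocksize": blocksize}
    return Q, S

def dequantize_blockwise(Q, S):
    """Recover approximate floats from Q using the quantization state S."""
    bs = S["blocksize"]
    return [q / 127 * S["absmax"][i // bs] for i, q in enumerate(Q)]
```

A round trip `dequantize_blockwise(*quantize_blockwise(x))` returns `x` up to the quantization error of one int8 step per block.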

API changes:
 - Block-wise quantization for optimizers now enabled by default

Features:
 - Block-wise quantization routines now support CPU Tensors.


### 0.0.24:

- Fixed a bug where a float/half conversion led to a compilation error for CUDA 11.1 on Turing GPUs.
- Removed Apex dependency for bnb LAMB.

### 0.0.25:

Features:
 - Added `skip_zeros` for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
 - Added support for Kepler GPUs. (#4)
 - Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
 - Made compilation more user-friendly.
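
The `skip_zeros` behavior above can be sketched as follows. This is a hypothetical illustration of the semantics, not the library's API: an exactly-zero gradient entry is treated as "no gradient" rather than "a gradient of zero", so stateful terms such as weight decay do not drift the parameters of untouched sparse rows.

```python
def step(params, grads, lr=0.1, weight_decay=0.01, skip_zeros=True):
    """Illustrative SGD-with-weight-decay step honoring skip_zeros."""
    out = []
    for p, g in zip(params, grads):
        if skip_zeros and g == 0.0:
            out.append(p)  # sparse entry: no update at all
        else:
            out.append(p - lr * (g + weight_decay * p))
    return out
```

Without `skip_zeros`, the zero-gradient entry would still be decayed by `lr * weight_decay * p` on every step.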

Bug fixes:
 - Fixed `undefined symbol: __fatbinwrap_38` error for P100 GPUs on CUDA 10.1 (#5)

Docs:
 - Added docs with instructions to compile from source.

### 0.26.0:

Features:
 - Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizers.
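
The 32-bit variant of the Adagrad update can be sketched as a minimal scalar step; this omits the 8-bit block-wise state entirely, and the function name and signature are illustrative, not the library's API.

```python
def adagrad_step(param, grad, state_sum, lr=0.01, eps=1e-10):
    """One Adagrad update: accumulate squared gradients, scale the step
    by the inverse root of the accumulator. No gradient clipping."""
    state_sum += grad * grad
    param -= lr * grad / (state_sum ** 0.5 + eps)
    return param, state_sum
```

The 8-bit block-wise version quantizes `state_sum` between steps, in the same way as the other block-wise optimizers in the library.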