Commit b844e104 authored by Tim Dettmers

Updated docs (#32) and changelog.

parent 62b6a939
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -117,3 +117,16 @@ Features:
 Bug fixes:
 - fixed an issue where too many threads were created in blockwise quantization on the CPU for large tensors
+
+### 0.35.0
+
+#### CUDA 11.8 support and bug fixes
+
+Features:
+ - CUDA 11.8 support added and binaries added to the PyPI release.
+
+Bug fixes:
+ - fixed a bug where overly long directory names would crash the CUDA SETUP #35 (thank you @tomaarsen)
+ - fixed a bug where CPU installations on Colab would run into an error #34 (thank you @tomaarsen)
+ - fixed an issue where the default CUDA version used by fast-DreamBooth was not supported #52
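The blockwise-quantization fix in the notes above refers to the library's functional API. As a rough illustration of the round trip that code path performs (a minimal sketch, assuming the `quantize_blockwise`/`dequantize_blockwise` functions and their `(tensor, state)` return convention from the 0.35.x releases; exact signatures may differ):

```python
# Sketch: blockwise quantization round trip with bitsandbytes.
# Assumes bitsandbytes.functional exposes quantize_blockwise /
# dequantize_blockwise as in the 0.35.x releases.
import torch
import bitsandbytes.functional as bnbf

x = torch.randn(4096, 4096)  # a large tensor, as in the CPU bug fix above

# quantize_blockwise returns the 8-bit tensor plus the state
# (per-block absmax values and the quantization code) needed to invert it
x_q, quant_state = bnbf.quantize_blockwise(x)

# dequantize_blockwise reconstructs an approximation of the input
x_dq = bnbf.dequantize_blockwise(x_q, quant_state)

print((x - x_dq).abs().max())  # small quantization error, not exactly zero
```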
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ Resources:
 - [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/)
 
 ## TL;DR
+**Requirements**
+A Linux distribution (e.g., Ubuntu) with CUDA >= 10.0. LLM.int8() requires a Turing or Ampere GPU.
 **Installation**:
 ``pip install bitsandbytes``
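To make the TL;DR concrete, here is the drop-in-optimizer pattern the README describes in its later sections; the model and hyperparameters below are placeholders, not part of this commit:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # placeholder model

# Instead of torch.optim.Adam(...), use the 8-bit drop-in replacement:
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.995))

# The training step is unchanged from the 32-bit optimizer.
loss = model(torch.randn(8, 1024, device="cuda")).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```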
@@ -52,6 +54,8 @@ Hardware requirements:
 Supported CUDA versions: 10.2 - 11.7
 
+The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment.
+
 The requirements can best be fulfilled by installing PyTorch via Anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website.
 
 ## Using bitsandbytes
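The "Using bitsandbytes" section that this hunk leads into covers LLM.int8() inference. For orientation, a hedged sketch of that pattern (assuming `bnb.nn.Linear8bitLt` with the outlier `threshold` from the LLM.int8() paper; the layer sizes are placeholders):

```python
import torch
import bitsandbytes as bnb

# LLM.int8() drop-in for torch.nn.Linear; threshold=6.0 routes outlier
# feature dimensions through fp16, as described in the paper.
linear_8bit = bnb.nn.Linear8bitLt(
    1024, 4096, bias=True, has_fp16_weights=False, threshold=6.0
)
linear_8bit = linear_8bit.cuda()  # weights are quantized to int8 on transfer

x = torch.randn(8, 1024, dtype=torch.float16, device="cuda")
y = linear_8bit(x)
print(y.shape)  # torch.Size([8, 4096])
```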
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@ def read(fname):
 setup(
     name=f"bitsandbytes",
-    version=f"0.34.0",
+    version=f"0.35.0",
     author="Tim Dettmers",
     author_email="dettmers@cs.washington.edu",
     description="8-bit optimizers and matrix multiplication routines.",