The `bitsandbytes` library is a lightweight Python wrapper around CUDA custom functions.
The library includes quantization primitives for 8-bit & 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
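As a rough usage sketch (the layer sizes and learning rate below are illustrative, not taken from this README), the quantized layers and the 8-bit optimizer are intended as drop-in replacements for their `torch.nn` / `torch.optim` counterparts:

```python
import torch
import bitsandbytes as bnb

# 8-bit linear layer (LLM.int8()-style inference); sizes are arbitrary examples.
int8_linear = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False)

# 4-bit linear layer for weight-only quantization.
fp4_linear = bnb.nn.Linear4bit(1024, 1024)

# Weights are actually quantized once the modules are moved to a supported
# device, e.g. int8_linear.to("cuda").

# 8-bit Adam as a drop-in replacement for torch.optim.Adam.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```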
There are ongoing efforts to support further hardware backends, e.g. Intel CPU + GPU, AMD GPU, Apple Silicon, and possibly NPUs.
**Please head to the official documentation page:**
## `bitsandbytes` multi-backend _alpha_ release is out!
🚀 Big news! After months of hard work and incredible community contributions, we're thrilled to announce the **bitsandbytes multi-backend _alpha_ release**! 💥
Now supporting:
- 🔥 **AMD GPUs** (ROCm)
- ⚡ **Intel CPUs** & **GPUs**
We’d love your early feedback! 🙏
👉 [Instructions for your `pip install` here](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend)
We're super excited about these recent developments and grateful for any constructive input or support you can give to help us make this a reality (e.g. by helping with the upcoming Apple Silicon backend or reporting bugs). BNB is a community project, and we look forward to your collaboration 🤗