Commit 59593890 authored by Matthew Douglas

Temporary updates for release

parent 19fe95ac
@@ -104,6 +104,7 @@ jobs:
           retention-days: 7

   build-shared-libs-rocm:
+    if: false # Temporarily disabled
     strategy:
       matrix:
         os: [ubuntu-22.04]
@@ -151,7 +152,7 @@ jobs:
     needs:
       - build-shared-libs
       - build-shared-libs-cuda
-      - build-shared-libs-rocm
+      #- build-shared-libs-rocm
     strategy:
       matrix:
         os: [ubuntu-22.04, ubuntu-22.04-arm, windows-latest, macos-latest]
@@ -26,7 +26,7 @@ bitsandbytes has the following minimum requirements for all platforms:
 #### Accelerator support:

 <small>Note: this table reflects the status of the current development branch. For the latest stable release, see the
-[document in the v0.46.0 tag](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/0.46.0/README.md#accelerator-support).
+[document in the 0.47.0 tag](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/0.47.0/README.md#accelerator-support).
 </small>

 ##### Legend:
@@ -73,9 +73,9 @@ bitsandbytes has the following minimum requirements for all platforms:
       CDNA: gfx90a, gfx942<br>
       RDNA: gfx1100
     </td>
-    <td></td>
-    <td>〰️</td>
-    <td></td>
+    <td>🚧</td>
+    <td>🚧</td>
+    <td>🚧</td>
   </tr>
   <tr>
     <td></td>
@@ -85,16 +85,16 @@ bitsandbytes has the following minimum requirements for all platforms:
       Arc A-Series (Alchemist)<br>
       Arc B-Series (Battlemage)
     </td>
-    <td></td>
-    <td></td>
+    <td>🚧</td>
+    <td>🚧</td>
     <td>🚧</td>
   </tr>
   <tr>
     <td></td>
     <td>🟪 Intel Gaudi <br><code>hpu</code></td>
     <td>Gaudi1, Gaudi2, Gaudi3</td>
-    <td></td>
-    <td>〰️</td>
+    <td>🚧</td>
+    <td>🚧</td>
     <td></td>
   </tr>
   <tr>
@@ -139,8 +139,8 @@ bitsandbytes has the following minimum requirements for all platforms:
       Arc A-Series (Alchemist) <br>
       Arc B-Series (Battlemage)
     </td>
-    <td></td>
-    <td></td>
+    <td>🚧</td>
+    <td>🚧</td>
     <td>🚧</td>
   </tr>
   <tr>
@@ -35,17 +35,6 @@ supported_torch_devices = {
 if torch.cuda.is_available():
     from .backends.cuda import ops as cuda_ops

-if hasattr(torch, "xpu") and torch.xpu.is_available():
-    from .backends.xpu import ops as xpu_ops
-
-if importlib.util.find_spec("habana_frameworks") and importlib.util.find_spec("habana_frameworks.torch"):
-    # In case not automatically imported
-    import habana_frameworks.torch
-
-    if hasattr(torch, "hpu") and torch.hpu.is_available():
-        from .backends.hpu import ops as hpu_ops
-

 def _import_backends():
     """
@@ -74,6 +74,7 @@ test = [
 package-data = { "*" = ["libbitsandbytes*.*"] }

 [tool.setuptools.packages.find]
+exclude = ["*backends.xpu", "*backends.hpu", "*backends.triton"]
 include = ["bitsandbytes*"]

 [tool.setuptools.dynamic]