Unverified Commit ef4b079a authored by Tim Dettmers, committed by GitHub

Merge pull request #402 from alexrs/patch-1

Update README.md
parents b26454cb dae7041a
@@ -102,7 +102,7 @@ For straight Int8 matrix multiplication with mixed precision decomposition you can use
bnb.matmul(..., threshold=6.0)
```
-For instructions how to use LLM.int8() inference layers in your own code, see the TL;DR above or for extended instruction see [this blog post](https://github.com/huggingface/transformers).
+For instructions on how to use LLM.int8() inference layers in your own code, see the TL;DR above, or for extended instructions see [this blog post](https://huggingface.co/blog/hf-bitsandbytes-integration).
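
For context, a minimal sketch of what such calls can look like, assuming a CUDA device and fp16 tensors; the tensor names, shapes, and the `Linear8bitLt` arguments below are illustrative for this example, not taken from this diff:

```python
import torch
import bitsandbytes as bnb

# Illustrative fp16 inputs on a CUDA device (names and shapes are assumptions for this sketch).
A = torch.randn(8, 1024, dtype=torch.float16, device="cuda")     # activations
W = torch.randn(4096, 1024, dtype=torch.float16, device="cuda")  # linear-layer style weight

# Int8 matmul with mixed precision decomposition: feature dimensions whose values
# exceed the threshold are handled in fp16, the rest in int8.
out = bnb.matmul(A, W, threshold=6.0)  # computes A @ W.t(), like F.linear

# LLM.int8() inference layer as a drop-in replacement for torch.nn.Linear,
# using the same outlier threshold as the snippet above.
int8_linear = bnb.nn.Linear8bitLt(
    1024, 4096, bias=True, has_fp16_weights=False, threshold=6.0
).cuda()
out2 = int8_linear(A)
```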
### Using the 8-bit Optimizers