Unverified Commit dae7041a authored by Alejandro Rodríguez Salamanca's avatar Alejandro Rodríguez Salamanca Committed by GitHub

Update README.md

parent 9e7cdc9e
@@ -102,7 +102,7 @@ For straight Int8 matrix multiplication with mixed precision decomposition you can use:
bnb.matmul(..., threshold=6.0)
```
-For instructions how to use LLM.int8() inference layers in your own code, see the TL;DR above or for extended instruction see [this blog post](https://github.com/huggingface/transformers).
+For instructions how to use LLM.int8() inference layers in your own code, see the TL;DR above or for extended instruction see [this blog post](https://huggingface.co/blog/hf-bitsandbytes-integration).
### Using the 8-bit Optimizers
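The `threshold=6.0` argument in the diff above controls mixed-precision decomposition: columns of the input that contain activation outliers (magnitude above the threshold) are multiplied in higher precision, while the remaining columns go through the Int8 path. The sketch below illustrates that idea with NumPy only; it is not the bitsandbytes implementation (the real `bnb.matmul` runs CUDA kernels on GPU tensors and uses row/column-wise scaling), and `matmul_mixed` and `_absmax_quantize` are hypothetical helper names.

```python
import numpy as np

def _absmax_quantize(X):
    # Tensor-wise absmax int8 quantization: map the largest magnitude
    # to 127 and return the int8 tensor plus its dequantization scale.
    # (The actual LLM.int8() scheme scales per row/column.)
    scale = max(float(np.abs(X).max()), 1e-8) / 127.0 if X.size else 1.0
    return np.round(X / scale).astype(np.int8), scale

def matmul_mixed(A, B, threshold=6.0):
    # Columns of A holding any value with magnitude above `threshold`
    # are treated as outlier dimensions and multiplied in float32.
    outlier = np.any(np.abs(A) > threshold, axis=0)
    out = A[:, outlier].astype(np.float32) @ B[outlier, :].astype(np.float32)

    # Regular dimensions: quantize both operands to int8, accumulate
    # in int32, then dequantize with the product of the two scales.
    A_q, sa = _absmax_quantize(A[:, ~outlier])
    B_q, sb = _absmax_quantize(B[~outlier, :])
    acc = A_q.astype(np.int32) @ B_q.astype(np.int32)
    return out + acc.astype(np.float32) * (sa * sb)
```

Because the few outlier columns stay in high precision, the quantization error comes only from the well-behaved regular columns, which is why the result stays close to the full-precision product.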