Unverified Commit d80c9a34 authored by j-gc, committed by GitHub

[`Docs`] fixed minor typo (#29555)

parent 4f27ee93
@@ -49,7 +49,7 @@ Starting with version `aqlm 1.0.2`, AQLM supports Parameter-Efficient Fine-Tunin
### AQLM configurations
-AQLM quantization setpus vary mainly on the number of codebooks used as well as codebook sizes in bits. The most popular setups, as well as inference kernels they support are:
+AQLM quantization setups vary mainly on the number of codebooks used as well as codebook sizes in bits. The most popular setups, as well as inference kernels they support are:
| Kernel | Number of codebooks | Codebook size, bits | Notation | Accuracy | Speedup | Fast GPU inference | Fast CPU inference |
|---|---------------------|---------------------|----------|-------------|-------------|--------------------|--------------------|
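For context, a minimal sketch of loading a pre-quantized AQLM checkpoint with `transformers`. The hub repo id below is an illustrative example of a 1x16 setup (one 16-bit codebook) and is an assumption, not something taken from this diff; it requires the `aqlm` package to be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative 1x16 setup: one codebook, 16 bits per codebook.
# The repo id below is assumed for the sake of the example.
model_id = "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place the quantized weights on the available device(s)
)

inputs = tokenizer("AQLM compresses weights by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```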