Unverified commit dafe3702, authored by Billy Cao, committed by GitHub

[DOCS] Fix typo for llava next docs (#29829)

Fix typo for llava next docs
parent c5f0288b
@@ -101,7 +101,7 @@ print(processor.decode(output[0], skip_special_tokens=True))

The model can be loaded in 8 or 4 bits, greatly reducing the memory requirements while maintaining the performance of the original model. First make sure to install bitsandbytes, `pip install bitsandbytes`, and make sure to have access to a CUDA compatible GPU device. Simply change the snippet above with:

```python
-from transformers import LlavaNextForConditionalGeneration, BitsandBytesConfig
+from transformers import LlavaNextForConditionalGeneration, BitsAndBytesConfig

 # specify how to quantize the model
 quantization_config = BitsAndBytesConfig(
```
...
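For context, the diff above only shows the start of the docs snippet. A minimal sketch of what the full quantized-loading example looks like with the corrected `BitsAndBytesConfig` import is below; the checkpoint name is the one used elsewhere in the LLaVA-NeXT docs, and running it requires a CUDA-compatible GPU with `bitsandbytes` installed, so treat it as an illustrative configuration fragment rather than a verified script:

```python
import torch
from transformers import LlavaNextForConditionalGeneration, BitsAndBytesConfig

# specify how to quantize the model: 4-bit weights with fp16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# load the model in 4 bits; device_map="auto" places layers on available GPUs
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    quantization_config=quantization_config,
    device_map="auto",
)
```

The rest of the generation code (processor, prompt, `model.generate`) is unchanged from the unquantized snippet earlier in the doc.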