Unverified commit 14fc1a24 authored by Kirill, committed by GitHub

Fix quantization docs typo (#22666)

parent 3876fc68
@@ -33,7 +33,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 model_id = "bigscience/bloom-1b7"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id, device_map == "auto", load_in_8bit=True)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
 ```
 Then, use your model as you would usually use a [`PreTrainedModel`].
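The typo being fixed swaps an equality comparison (`device_map == "auto"`) for the intended keyword argument (`device_map="auto"`). A minimal sketch of why the typo'd form fails, using a hypothetical `from_pretrained_stub` in place of the real `AutoModelForCausalLM.from_pretrained` (which would download a 1.7B-parameter model):

```python
def from_pretrained_stub(model_id, *, device_map=None, load_in_8bit=False):
    # Hypothetical stand-in for AutoModelForCausalLM.from_pretrained;
    # it simply records the arguments it receives.
    return {"model_id": model_id, "device_map": device_map, "load_in_8bit": load_in_8bit}

# Correct form: `device_map="auto"` passes a keyword argument.
model = from_pretrained_stub("bigscience/bloom-1b7", device_map="auto", load_in_8bit=True)
print(model["device_map"])  # auto

# Typo'd form: `device_map == "auto"` is an equality comparison, not a
# keyword argument. Since no variable named `device_map` exists here,
# Python raises NameError before the call is even made.
try:
    from_pretrained_stub("bigscience/bloom-1b7", device_map == "auto")
except NameError:
    print("NameError: name 'device_map' is not defined")
```

Even if a `device_map` variable did happen to exist in scope, the comparison would evaluate to a bare `True`/`False` passed positionally, so the model would never receive the `"auto"` placement setting.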