Commit dca730a0 authored by Konrad

docs and errors update

parent 4b0c49a1
@@ -46,7 +46,7 @@ This mode supports a number of command-line arguments, the details of which can
- `--system_instruction`: Specifies a system instruction string to prepend to the prompt.
-- `--apply_chat_template` : If this flag is on, a chat template will be applied to the prompt. For Hugging Face models, the chat template is taken from the tokenizer, if the tokenizer does not have a chat template, a default one will be applied. For other models, a generic chat template is used.
+- `--apply_chat_template` : If this flag is on, a chat template will be applied to the prompt. For Hugging Face models, the chat template is taken from the tokenizer, if the tokenizer does not have a chat template, a default one will be applied. For other models, chat templating is not currently implemented.
- `--fewshot_as_multiturn` : If this flag is on, the Fewshot examples are treated as a multi-turn conversation. Questions are provided as user content and answers are provided as assistant responses. Requires `--num_fewshot` to be set to be greater than 0, and `--apply_chat_template` to be on.
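The interplay of these flags can be sketched in plain Python. This is a hedged illustration of the described behavior, not the harness's actual code: `build_messages` and `render_chat` are hypothetical names, and the `<|role|>` fallback format is an assumption standing in for the generic template.

```python
def build_messages(question, fewshot=(), system_instruction=None):
    """Fold few-shot (question, answer) pairs into a multi-turn chat,
    mirroring --fewshot_as_multiturn and --system_instruction."""
    messages = []
    if system_instruction:
        messages.append({"role": "system", "content": system_instruction})
    for q, a in fewshot:
        # Each few-shot example becomes a user turn plus an assistant turn.
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages


def render_chat(messages, tokenizer_template=None):
    """Render a chat history to a prompt string, preferring the
    tokenizer's own template and falling back to a generic one."""
    if tokenizer_template is not None:
        return tokenizer_template(messages)
    # Generic fallback: role-tagged blocks plus a generation cue.
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)
```

For example, `render_chat(build_messages("Q2?", fewshot=[("Q1?", "A1")]))` produces a prompt in which the few-shot answer appears as an assistant turn rather than inline text.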
@@ -125,7 +125,7 @@ class LM(abc.ABC):
A string representing the chat history in a format that can be used as input to the LM.
"""
raise NotImplementedError(
-        "To use this model with chat templates, please implement the 'apply_chat_template' method."
+        "To use this model with chat templates, please implement the 'apply_chat_template' method for your model type."
)
@classmethod
@@ -185,8 +185,12 @@ class LM(abc.ABC):
@property
def tokenizer_name(self) -> str:
"""Must be defined for LM subclasses which implement Chat Templating.
Should return the name of the tokenizer or chat template used.
Used only to fingerprint caches when requests are cached with `--cache_requests`; not used otherwise.
"""
raise NotImplementedError(
-        "To use this model with chat templates, please implement the 'get_tokenizer_name' property."
+        "To use this model with chat templates, please implement the 'tokenizer_name' property."
)
def set_cache_hook(self, cache_hook) -> None:
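A subclass would override both hooks the commit touches. The following is a minimal sketch under stated assumptions: the stripped-down `LM` base reproduces only the two members shown in the diff, and `MyChatLM`, its template format, and the tokenizer name are all illustrative, not the harness's actual identifiers.

```python
import abc


class LM(abc.ABC):
    """Minimal stand-in for the abstract base class shown in the diff."""

    def apply_chat_template(self, chat_history) -> str:
        raise NotImplementedError(
            "To use this model with chat templates, please implement the "
            "'apply_chat_template' method for your model type."
        )

    @property
    def tokenizer_name(self) -> str:
        raise NotImplementedError(
            "To use this model with chat templates, please implement the "
            "'tokenizer_name' property."
        )


class MyChatLM(LM):
    @property
    def tokenizer_name(self) -> str:
        # Only used to fingerprint the request cache under --cache_requests.
        return "my-org/my-tokenizer"

    def apply_chat_template(self, chat_history) -> str:
        # Generic role-tagged rendering; a real subclass would defer to
        # its tokenizer's own chat template where one exists.
        return "\n".join(f"{m['role']}: {m['content']}" for m in chat_history)
```

A subclass that skips either override still constructs fine but raises the corresponding `NotImplementedError` the moment chat templating or request caching is exercised.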