"torchvision/vscode:/vscode.git/clone" did not exist on "bdd690e555efc60092ac0c255b69bca1793a047f"
Unverified Commit 1e2cf2b5 authored by Didier Durand's avatar Didier Durand Committed by GitHub

fix server_arguments typo (#3499)

parent 9490d157
...@@ -44,13 +44,13 @@ Please consult the documentation below to learn more about the parameters you ma
* `tokenizer_path`: Defaults to the `model_path`.
* `tokenizer_mode`: By default `auto`; see [here](https://huggingface.co/docs/transformers/en/main_classes/tokenizer) for the different modes.
* `load_format`: The format the weights are loaded in. Defaults to `*.safetensors`/`*.bin`.
* `trust_remote_code`: If `True`, will use locally cached config files, otherwise use remote configs in HuggingFace.
* `dtype`: Dtype used for the model, defaults to `bfloat16`.
* `kv_cache_dtype`: Dtype of the kv cache, defaults to the `dtype`.
* `context_length`: The number of tokens our model can process, *including the input*. Note that extending the default might lead to strange behavior.
* `device`: The device the model is placed on, defaults to `cuda`.
* `chat_template`: The chat template to use. Deviating from the default might lead to unexpected responses. For multi-modal chat templates, refer to [here](https://docs.sglang.ai/backend/openai_api_vision.html#Chat-Template).
* `is_embedding`: Set to true to perform [embedding](https://docs.sglang.ai/backend/openai_api_embeddings.html) / [encode](https://docs.sglang.ai/backend/native_api.html#Encode-(embedding-model)) and [reward](https://docs.sglang.ai/backend/native_api.html#Classify-(reward-model)) tasks.
* `revision`: Adjust if a specific version of the model should be used.
* `skip_tokenizer_init`: Set to true to provide the tokens to the engine and get the output tokens directly, typically used in RLHF.
* `json_model_override_args`: Override model config with the provided JSON.
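The parameters above are passed as CLI flags when launching the server (underscores become hyphens). A minimal sketch of such a launch command, assuming a hypothetical model path and port rather than values prescribed by this document:

```shell
# Hedged example: model path, context length, and port are illustrative placeholders.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --tokenizer-mode auto \
  --dtype bfloat16 \
  --context-length 8192 \
  --device cuda \
  --port 30000
```

Flags not given on the command line fall back to the defaults described above (e.g. `--tokenizer-path` defaults to the model path).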
...