"tests/python/test_tokenizer.py" did not exist on "5ea40abf613e47bb56a0c06f695644d55671f585"
Unverified Commit 7efcb5e0 authored by Nicholas Broad, committed by GitHub

remove LORA_ADAPTERS_PATH (#2563)

specify how to call local adapters
parent dd8691b7
@@ -36,7 +36,7 @@ To use LoRA in TGI, when starting the server, you can specify the list of LoRA m
LORA_ADAPTERS=predibase/customer_support,predibase/dbpedia
```
-additionally, you can specify the path to the LoRA models using the `LORA_ADAPTERS_PATH` environment variable. For example:
+To use a locally stored LoRA adapter, use `adapter-name=/path/to/adapter`, as seen below. When you want to use this adapter, set `"parameters": {"adapter_id": "adapter-name"}`.
```bash
LORA_ADAPTERS=myadapter=/some/path/to/adapter,myadapter2=/another/path/to/adapter
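# Not part of this commit — a hedged sketch of how the variable above might be
# passed when launching TGI with Docker. The image tag, model id, and host
# paths are illustrative placeholders; each adapter directory is assumed to
# contain the adapter weights and its adapter_config.json.
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v /host/adapters/myadapter:/some/path/to/adapter \
    -v /host/adapters/myadapter2:/another/path/to/adapter \
    -e LORA_ADAPTERS=myadapter=/some/path/to/adapter,myadapter2=/another/path/to/adapter \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id mistralai/Mistral-7B-v0.1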
@@ -72,6 +72,22 @@ curl 127.0.0.1:3000/generate \
}'
```
If you are using a locally stored LoRA adapter that was registered at startup as `LORA_ADAPTERS=myadapter=/some/path/to/adapter`, here is an example payload:
```bash
curl 127.0.0.1:3000/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": "Hello who are you?",
        "parameters": {
            "max_new_tokens": 40,
            "adapter_id": "myadapter"
        }
    }'
```
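As a hedged companion sketch (not part of this commit), the same endpoint can target the second locally stored adapter from the startup example, `myadapter2`, by changing `adapter_id`; omitting `adapter_id` entirely should route the request to the base model:
```bash
# Illustrative request against the second locally stored adapter.
curl 127.0.0.1:3000/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": "Hello who are you?",
        "parameters": {
            "max_new_tokens": 40,
            "adapter_id": "myadapter2"
        }
    }'
```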
> **Note:** The LoRA feature is new and still being improved. If you encounter any issues or have any feedback, please let us know by opening an issue on the [GitHub repository](https://github.com/huggingface/text-generation-inference/issues/new/choose). Additionally, documentation and an improved client library will be published soon.
An updated tutorial with detailed examples will be published soon. Stay tuned!