# Quick Tour

The easiest way to get started is to use the official Docker container. Install Docker by following [their installation instructions](https://docs.docker.com/get-docker/).
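
Since the example below runs on an Nvidia GPU, it is worth confirming up front that Docker can see the GPU at all. A minimal sanity check, assuming the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) is already installed:

```bash
# With the toolkit set up, the NVIDIA runtime injects the driver utilities
# into the container, so nvidia-smi should list your GPU(s)
docker run --rm --gpus all ubuntu nvidia-smi
```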

## Launching TGI

Let's say you want to deploy the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model with TGI on an Nvidia GPU. Here is an example of how to do that:

```bash
model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.4.1 \
    --model-id $model
```
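
The first launch downloads the model weights, so it can take a while before the server starts accepting requests. As a quick readiness check, here is a minimal sketch assuming the port mapping above; TGI serves a `/health` route that returns `200 OK` once the model is loaded:

```bash
# Returns HTTP 200 once the model is loaded and the server is ready
curl -i 127.0.0.1:8080/health
```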

<Tip>

If you want to serve gated or private models, please refer to
[this guide](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/gated_model_access)
for detailed instructions.

</Tip>
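
For a sense of what that involves, here is a minimal sketch: export a read token from your Hugging Face account and pass it into the container as an environment variable. The variable name `HF_TOKEN` is an assumption here (older releases used `HUGGING_FACE_HUB_TOKEN`); defer to the guide above for your TGI version:

```bash
# A read token can be created at https://huggingface.co/settings/tokens
export HF_TOKEN=<your_read_token>

docker run --gpus all --shm-size 1g -e HF_TOKEN=$HF_TOKEN -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.4.1 \
    --model-id $model
```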

### Supported hardware

TGI supports various hardware. Make sure to check the [Using TGI with Nvidia GPUs](./installation_nvidia), [Using TGI with AMD GPUs](./installation_amd), [Using TGI with Intel GPUs](./installation_intel), [Using TGI with Gaudi](./installation_gaudi), and [Using TGI with Inferentia](./installation_inferentia) guides, depending on which hardware you would like to deploy TGI on.

## Consuming TGI

Once TGI is running, you can send requests to the `generate` endpoint or to the OpenAI Chat Completion API-compatible [Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api). To learn more about how to query the endpoints, check the [Consuming TGI](./basic_tutorials/consuming_tgi) section, where we show examples with utility libraries and UIs. Below is a simple snippet to query the `generate` endpoint; a Messages API example follows the snippets.

<inferencesnippet>
<python>

```python
import requests

headers = {
    "Content-Type": "application/json",
}

data = {
    'inputs': 'What is Deep Learning?',
    'parameters': {
        'max_new_tokens': 20,
    },
}

response = requests.post('http://127.0.0.1:8080/generate', headers=headers, json=data)
print(response.json())
# {'generated_text': '\n\nDeep Learning is a subset of Machine Learning that is concerned with the development of algorithms that can'}
```
</python>
<js>

```js
async function query() {
    const response = await fetch(
        'http://127.0.0.1:8080/generate',
        {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                'inputs': 'What is Deep Learning?',
                'parameters': {
                    'max_new_tokens': 20
                }
            })
        }
    );
    // Parse and return the JSON body so the caller receives the result
    return await response.json();
}

query().then((response) => {
    console.log(JSON.stringify(response));
});
// {"generated_text":"\n\nDeep Learning is a subset of Machine Learning that is concerned with the development of algorithms that can"}
```

</js>
<curl>

```curl
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

</curl>
</inferencesnippet>
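
As mentioned above, here is a Messages API counterpart: a minimal sketch that sends the same prompt as a chat request to the OpenAI-compatible `/v1/chat/completions` route. The body follows the OpenAI schema, and the `"model": "tgi"` value is a placeholder, since the server only hosts the model it was launched with; see the [Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) docs for the full schema.

```bash
curl 127.0.0.1:8080/v1/chat/completions \
    -X POST \
    -d '{"model":"tgi","messages":[{"role":"user","content":"What is Deep Learning?"}],"max_tokens":20}' \
    -H 'Content-Type: application/json'
```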

<Tip>

To see all possible deployment flags and options, you can use the `--help` flag. It's possible to configure the number of shards, quantization, generation parameters, and more.

```bash
docker run ghcr.io/huggingface/text-generation-inference:2.4.1 --help
```

</Tip>