# Quick Tour

The easiest way to get started is to use the official Docker container. Install Docker by following [their installation instructions](https://docs.docker.com/get-docker/).

## Launching TGI

Let's say you want to deploy the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model with TGI on an Nvidia GPU. Here is an example of how to do that:

```bash
model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.3.0 \
    --model-id $model
```
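
The container downloads the model weights on first launch, so the server can take a little while before it accepts requests. As a minimal readiness check, assuming the port mapping from the command above, you can poll the `/health` route until it returns a 200 status code:

```bash
# Wait until the server reports healthy (weights downloaded and model loaded)
until curl -s -o /dev/null -w "%{http_code}" 127.0.0.1:8080/health | grep -q 200; do
    sleep 5
done
echo "TGI is ready"
```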

### Supported hardware

TGI supports various hardware. Make sure to check the [Using TGI with Nvidia GPUs](./installation_nvidia), [Using TGI with AMD GPUs](./installation_amd), [Using TGI with Intel GPUs](./installation_intel), [Using TGI with Gaudi](./installation_gaudi), and [Using TGI with Inferentia](./installation_inferentia) guides, depending on which hardware you would like to deploy TGI on.

## Consuming TGI

Once TGI is running, you can send requests to the `generate` endpoint or to the OpenAI Chat Completion API-compatible [Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api). To learn more about how to query the endpoints, check the [Consuming TGI](./basic_tutorials/consuming_tgi) section, where we show examples with utility libraries and UIs. Below you can see a simple snippet to query the `generate` endpoint, followed by an example request to the Messages API.

<inferencesnippet>
<python>

```python
import requests

headers = {
    "Content-Type": "application/json",
}

data = {
    'inputs': 'What is Deep Learning?',
    'parameters': {
        'max_new_tokens': 20,
    },
}

response = requests.post('http://127.0.0.1:8080/generate', headers=headers, json=data)
print(response.json())
# {'generated_text': '\n\nDeep Learning is a subset of Machine Learning that is concerned with the development of algorithms that can'}
```
</python>
<js>

```js
async function query() {
    const response = await fetch(
        'http://127.0.0.1:8080/generate',
        {
            method: 'POST',
            headers: { 'Content-Type': 'application/json'},
            body: JSON.stringify({
                'inputs': 'What is Deep Learning?',
                'parameters': {
                    'max_new_tokens': 20
                }
            })
        }
    );
    return await response.json();
}

query().then((response) => {
    console.log(JSON.stringify(response));
});
/// {"generated_text":"\n\nDeep Learning is a subset of Machine Learning that is concerned with the development of algorithms that can"}
```

</js>
<curl>

```curl
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

</curl>
</inferencesnippet>
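
The snippets above use the `generate` route. Since TGI also implements the OpenAI-compatible Messages API mentioned earlier, a request to the `/v1/chat/completions` route can be sketched the same way, assuming the server from the launch command is still listening on port 8080 (the `model` field is just a placeholder here):

```bash
curl 127.0.0.1:8080/v1/chat/completions \
    -X POST \
    -d '{"model":"tgi","messages":[{"role":"user","content":"What is Deep Learning?"}],"max_tokens":20}' \
    -H 'Content-Type: application/json'
```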

<Tip>

To see all possible deployment flags and options, you can use the `--help` flag. It's possible to configure the number of shards, quantization, generation parameters, and more, as shown in the sketch after the command below.

```bash
docker run ghcr.io/huggingface/text-generation-inference:2.3.0 --help
```
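
For example, a launch command that shards the model across two GPUs and caps the sequence lengths might look like the sketch below; the flags and values shown are illustrative, so rely on the `--help` output for the authoritative list. It reuses the `$model` and `$volume` variables from the launch snippet above.

```bash
# Shard across two GPUs and cap the prompt / total token budgets
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.3.0 \
    --model-id $model \
    --num-shard 2 \
    --max-input-tokens 1024 \
    --max-total-tokens 2048
```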

</Tip>