# Text Generation Inference

<div align="center">

![architecture](assets/architecture.jpg)

</div>

A Rust and gRPC server for text generation inference.

## Features

- [Dynamic batching of incoming requests](https://github.com/huggingface/text-generation-inference/blob/main/router/src/batcher.rs#L88) for increased total throughput (illustrated in the sketch below)
- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- 45ms per token generation for BLOOM with 8xA100 80GB
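
Dynamic batching groups requests that arrive close together into a single forward pass instead of running them one by one. Below is a minimal, illustrative Python sketch of the idea; the real implementation is the Rust batcher linked above, and the queue, batch-size, and timeout values here are made up for the example.

```python
# Illustrative sketch only: the production batcher lives in router/src/batcher.rs (Rust).
import asyncio

MAX_BATCH_SIZE = 8  # assumed knob, not the server's actual default
MAX_WAIT_S = 0.05   # assumed knob, not the server's actual default


async def batching_loop(queue: asyncio.Queue) -> None:
    """Group requests that arrive close together into one model forward pass."""
    while True:
        # Block until at least one request is available
        batch = [await queue.get()]
        deadline = asyncio.get_running_loop().time() + MAX_WAIT_S
        # Fill the batch until it is full or the wait budget is exhausted
        while len(batch) < MAX_BATCH_SIZE:
            remaining = deadline - asyncio.get_running_loop().time()
            if remaining <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break
        # One generation step over the whole batch amortizes the model call
        print(f"generating for a batch of {len(batch)} request(s)")


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(batching_loop(queue))
    for i in range(10):
        await queue.put(f"request {i}")
        await asyncio.sleep(0.01)
    await asyncio.sleep(0.2)
    task.cancel()


asyncio.run(main())
```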

## Officially supported models

- BLOOM
- BLOOMZ
- BLOOM-560m

Other models are supported on a best-effort basis using:

`AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")`

or

`AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")`
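
For example, a best-effort causal LM can be loaded and queried like this (a sketch: `gpt2` is just a placeholder model id, and `device_map="auto"` requires the `accelerate` package):

```python
# Sketch of best-effort loading; substitute any Hub causal LM id for "gpt2"
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the weights across the available devices (needs accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Testing API", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```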

## Load Tests for BLOOM

See `k6/load_test.js`

|                                                              | avg       | min          | med       | max        | p(90)     | p(95)     | RPS      |
|--------------------------------------------------------------|-----------|--------------|-----------|------------|-----------|-----------|----------|
| [Original code](https://github.com/huggingface/transformers_bloom_parallel) | 8.9s      | 1s           | 9.12s     | 16.69s     | 13.7s     | 14.26s    | 5.9      |
| New batching logic                                           | **5.44s** | **959.53ms** | **5.28s** | **13.12s** | **7.78s** | **8.92s** | **9.08** |

## Install

```shell
make install
```

## Run

### BLOOM-560m

```shell
make run-bloom-560m
```

### BLOOM

First, download the weights:

```shell
make download-bloom
```

```shell
make run-bloom # Requires 8xA100 80GB
```

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

```shell
make run-bloom-quantize # Requires 8xA100 40GB
```
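
In plain Transformers, bitsandbytes quantization corresponds roughly to the standard `load_in_8bit` flag; the following is a hedged sketch, not the server's exact code path:

```python
# Sketch only: 8-bit loading via bitsandbytes, not the server's actual code path
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",  # weights are quantized to int8 as they are loaded
    device_map="auto",
    load_in_8bit=True,   # requires the bitsandbytes package
)
```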

## Test

```shell
curl 127.0.0.1:3000/generate \
    -v \
    -X POST \
    -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
    -H 'Content-Type: application/json'
```
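
The same request can be sent from Python, e.g. with `requests` (assuming the server listens on the port shown above):

```python
import requests

# Mirrors the curl call above
response = requests.post(
    "http://127.0.0.1:3000/generate",
    json={"inputs": "Testing API", "parameters": {"max_new_tokens": 9}},
)
print(response.json())
```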

## Develop

```shell
make server-dev
make router-dev
```

## TODO:

- [ ] Add tests for the `server/model` logic
- [ ] Backport custom CUDA kernels to Transformers
- [ ] Install safetensors with pip