# Text Generation Inference

<div align="center">

![architecture](assets/architecture.jpg)

</div>

A Rust and gRPC server for text generation inference.

## Features

- [Dynamic batching of incoming requests](https://github.com/huggingface/text-generation-inference/blob/main/router/src/batcher.rs#L88) for increased total throughput (see the example after this list)
- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- 45ms per-token generation latency for BLOOM on 8x A100 80GB
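
As a quick way to see dynamic batching in action, you can fire several concurrent requests at a running server. This is a minimal sketch, assuming a server is already running locally on port 3000 (see the Run section below); the payload is the same one used in the Test section:

```shell
# Send 4 concurrent generation requests; the router batches them dynamically.
for i in 1 2 3 4; do
  curl 127.0.0.1:3000/generate \
      -X POST \
      -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
      -H 'Content-Type: application/json' &
done
wait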

## Supported models

- BLOOM
- BLOOMZ
- BLOOM-560m

## Load Tests for BLOOM

See `k6/load_test.js`
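
Assuming you have [k6](https://k6.io) installed and a server running, the load test can be launched with:

```shell
k6 run k6/load_test.js
```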

| Implementation                                               | avg       | min          | med       | max        | p(90)     | p(95)     | RPS      |
|--------------------------------------------------------------|-----------|--------------|-----------|------------|-----------|-----------|----------|
| [Original code](https://github.com/huggingface/transformers_bloom_parallel) | 8.9s      | 1s           | 9.12s     | 16.69s     | 13.7s     | 14.26s    | 5.9      |
| New batching logic                                           | **5.44s** | **959.53ms** | **5.28s** | **13.12s** | **7.78s** | **8.92s** | **9.08** |

## Install

```shell
make install
```
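
Note: a working Rust toolchain and Python environment are assumed here, as `make install` sets up both the Rust router and the Python model server.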

## Run 

### BLOOM-560m

```shell
make run-bloom-560m
```

### BLOOM

First you need to download the weights:

```shell
make download-bloom
```

Then run:

```shell
make run-bloom # Requires 8xA100 80GB
```

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

```shell
make run-bloom-quantize # Requires 8xA100 40GB
```

## Test

```shell
curl 127.0.0.1:3000/generate \
    -v \
    -X POST \
    -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
    -H 'Content-Type: application/json'
```
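
If the request succeeds, the server answers with a JSON body containing the generated text, roughly of the form `{"generated_text": "..."}` (the exact response schema is an assumption here; the generated output will vary with the model and parameters).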

## Develop

```shell
make server-dev   # start the Python model server in development mode
make router-dev   # start the Rust router in development mode
```

## TODO:

- [ ] Support AutoModelForSeq2SeqLM
- [ ] Add tests for the `server/model` logic
- [ ] Backport custom CUDA kernels to Transformers
- [ ] Install safetensors with pip