# Text Generation Inference

<div align="center">

![architecture](assets/architecture.jpg)

</div>

A Rust and gRPC server for text generation inference.

## Features

- [Dynamic batching of incoming requests](https://github.com/huggingface/text-generation-inference/blob/main/router/src/batcher.rs#L88) for increased total throughput
- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- 45ms per token generation for BLOOM with 8xA100 80GB
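The dynamic batching feature above can be illustrated with a toy sketch: requests that arrive while a batch is being assembled are grouped and run through the model together, so concurrent callers share a single forward pass. This is a conceptual Python sketch only, not the router's actual logic, which lives in `router/src/batcher.rs` and is more involved (for example, it can merge new requests into in-flight decoding batches). The function name is illustrative.

```python
import queue

# Conceptual sketch of dynamic batching (NOT the router's actual logic):
# block until at least one request arrives, then greedily pull every
# request already waiting, up to max_batch_size, so a burst of concurrent
# requests is served by one model call instead of one call each.
def drain_batch(q: "queue.Queue", max_batch_size: int) -> list:
    batch = [q.get()]  # wait for the first request
    while len(batch) < max_batch_size:
        try:
            batch.append(q.get_nowait())  # take whatever is already queued
        except queue.Empty:
            break
    return batch
```

Serving a burst in one model call rather than one call per request is where the throughput gain measured in the load tests comes from.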

## Supported models

- BLOOM
- BLOOM-560m

## Load Tests for BLOOM

See `k6/load_test.js`

|                                                              | avg       | min          | med       | max        | p(90)     | p(95)     | RPS      |
|--------------------------------------------------------------|-----------|--------------|-----------|------------|-----------|-----------|----------|
| [Original code](https://github.com/huggingface/transformers_bloom_parallel) | 8.9s      | 1s           | 9.12s     | 16.69s     | 13.7s     | 14.26s    | 5.9      |
| New batching logic                                           | **5.44s** | **959.53ms** | **5.28s** | **13.12s** | **7.78s** | **8.92s** | **9.08** |

## Install

```shell
make install
```

## Run

### BLOOM-560m

```shell
make run-bloom-560m
```

### BLOOM

First you need to download the weights:

```shell
make download-bloom
```

```shell
make run-bloom # Requires 8xA100 80GB
```

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

```shell
make run-bloom-quantize # Requires 8xA100 40GB
```
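The VRAM saving comes from storing weights as 8-bit integers instead of 16-bit floats. The toy, pure-Python sketch below illustrates the basic absmax quantization idea; it is not bitsandbytes' implementation (LLM.int8() is considerably more sophisticated, e.g. it handles outlier features separately in higher precision), and the function names are illustrative only.

```python
# Toy sketch of absmax int8 quantization (illustrative only, NOT the
# bitsandbytes implementation): every value is scaled into [-127, 127]
# by the largest absolute value and stored as an integer, roughly
# halving memory versus fp16 weights at the cost of a small rounding error.

def absmax_quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]
```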

## Test

```shell
curl 127.0.0.1:3000/generate \
    -v \
    -X POST \
    -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
    -H 'Content-Type: application/json'
```
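The same request can be issued from Python. The sketch below mirrors the curl example using only the standard library; the URL, port, and JSON schema come from that example, while the helper names are illustrative and not part of the project.

```python
import json
import urllib.request

# Build the request body used by the /generate endpoint
# (mirrors the curl example; helper names are illustrative).
def build_generate_payload(inputs: str, max_new_tokens: int = 9) -> str:
    return json.dumps(
        {"inputs": inputs, "parameters": {"max_new_tokens": max_new_tokens}}
    )

def generate(inputs: str, max_new_tokens: int = 9,
             url: str = "http://127.0.0.1:3000/generate"):
    """POST to a running server (start one first, e.g. `make run-bloom-560m`)."""
    req = urllib.request.Request(
        url,
        data=build_generate_payload(inputs, max_new_tokens).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```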

## Develop

```shell
make server-dev
make router-dev
```

## TODO

- [ ] Support AutoModelForSeq2SeqLM
- [ ] Add tests for the `server/model` logic
- [ ] Backport custom CUDA kernels to Transformers
- [ ] Install safetensors with pip