# LLM Text Generation Inference

<div align="center">

![architecture](assets/architecture.jpg)

</div>

A Rust and gRPC server for text generation inference on large language models.

## Features

- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- [Dynamic batching of incoming requests](https://github.com/huggingface/text-generation-inference/blob/main/router/src/batcher.rs#L88) for increased total throughput
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- 45ms per-token generation for BLOOM with 8x A100 80GB
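
The dynamic batching idea can be sketched in a few lines of Python. This is a toy illustration only, not the server's actual Rust batcher (linked above): while the model is busy with one forward pass, newly arrived requests accumulate in a queue and are drained together into the next batch.

```python
# Toy sketch of dynamic batching (illustrative; the real logic lives in the
# Rust router's batcher, not here).
from queue import Empty, Queue


def drain_batch(pending: Queue, max_batch_size: int) -> list:
    """Pop up to max_batch_size requests that arrived while the model was busy.

    Returns immediately with whatever is queued instead of waiting for a
    full batch, so latency stays low under light load.
    """
    batch = []
    while len(batch) < max_batch_size:
        try:
            batch.append(pending.get_nowait())
        except Empty:
            break  # queue drained; run the batch we have
    return batch
```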

## Officially supported models

- BLOOM
- BLOOM-560m

Other models are supported on a best-effort basis using `AutoModelForCausalLM.from_pretrained(<model>, torch_dtype=torch.float16, device_map="auto")`.
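
As a standalone sketch of that fallback (hypothetical snippet, not part of this repo; `bigscience/bloom-560m` stands in for any Hub model with a causal-LM head):

```python
# Hedged sketch of the best-effort fallback load. Assumes transformers and
# accelerate are installed; recent transformers resolves a string dtype like
# "float16" to torch.float16 internally.


def fallback_load_args(model_id: str) -> dict:
    """Arguments mirroring the README's AutoModelForCausalLM fallback call."""
    return {
        "pretrained_model_name_or_path": model_id,
        "torch_dtype": "float16",  # halve memory vs float32
        "device_map": "auto",      # shard layers across visible devices
    }


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "bigscience/bloom-560m"  # any causal-LM Hub id works here
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(**fallback_load_args(model_id))
```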

## Load Tests for BLOOM

See `k6/load_test.js`.

|                                                              | avg       | min          | med       | max        | p(90)     | p(95)     | RPS      |
|--------------------------------------------------------------|-----------|--------------|-----------|------------|-----------|-----------|----------|
| [Original code](https://github.com/huggingface/transformers_bloom_parallel) | 8.9s      | 1s           | 9.12s     | 16.69s     | 13.7s     | 14.26s    | 5.9      |
| New batching logic                                           | **5.44s** | **959.53ms** | **5.28s** | **13.12s** | **7.78s** | **8.92s** | **9.08** |

## Install

```shell
make install
```

## Run

### BLOOM-560m

```shell
make run-bloom-560m
```

### BLOOM

First you need to download the weights:

```shell
make download-bloom
```

```shell
make run-bloom # Requires 8xA100 80GB
```

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

```shell
make run-bloom-quantize # Requires 8xA100 40GB
```

## Test

```shell
curl 127.0.0.1:3000/generate \
    -v \
    -X POST \
    -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
    -H 'Content-Type: application/json'
```
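
The same request can be made from Python using only the standard library. This is a sketch assuming a server started above is listening on `127.0.0.1:3000`; the endpoint and payload shape come from the curl example.

```python
# Stdlib-only equivalent of the curl call above (a sketch; assumes the server
# from `make run-bloom-560m` is running locally).
import json
import urllib.request

GENERATE_URL = "http://127.0.0.1:3000/generate"


def build_request(inputs: str, max_new_tokens: int) -> tuple:
    """Return (url, json_body) matching the curl example's payload."""
    body = json.dumps(
        {"inputs": inputs, "parameters": {"max_new_tokens": max_new_tokens}}
    )
    return GENERATE_URL, body


if __name__ == "__main__":
    url, body = build_request("Testing API", 9)
    req = urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # generated text as JSON
```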

## Develop

```shell
make server-dev
make router-dev
```

## TODO

- [ ] Add tests for the `server/model` logic
- [ ] Backport custom CUDA kernels to Transformers
- [ ] Install safetensors with pip