"examples/vscode:/vscode.git/clone" did not exist on "c0cb5e22bb3550c6f04672e579c0a7b8e2784c11"
README.md 3.62 KB
Newer Older
lvskiller's avatar
readme  
lvskiller committed
1
2
# Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models

<br>
<p align="center">
    <img src="images/logo_monkey.png" width="300"/>
</p>
<br>

<div align="center">
Zhang Li*, Biao Yang*, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu†, Xiang Bai†
</div>
<div align="center">
<strong>Huazhong University of Science and Technology, Kingsoft</strong>
</div>

<p align="center">
<a href="updating">Paper</a>&nbsp&nbsp | &nbsp&nbsp<a href="http://221.232.49.195:7680/">Demo</a>&nbsp&nbsp | &nbsp&nbsp<a href="Monkey Model">Monkey Models</a>&nbsp | &nbsp <a href="updating">Tutorial</a>
</p>
-----

**Monkey** introduces a training-efficient approach that raises the supported input resolution to 896 x 1344 pixels without pretraining from scratch. To bridge the gap between simple text labels and high-resolution inputs, we propose a multi-level description generation method that automatically provides rich information, guiding the model to learn the contextual associations between scenes and objects. With the synergy of these two designs, our model achieves excellent results on multiple benchmarks. Compared with various LMMs, including GPT4V, Monkey shows promising performance in image captioning by attending to textual information and capturing fine details within images; its higher input resolution also enables remarkable performance on document images with dense text.

## Spotlights

- **Contextual associations.** Our method demonstrates a superior ability to infer the relationships between targets when answering questions, delivering more comprehensive and insightful answers.
- **Supports resolutions up to 1344 x 896.** Surpassing the 448 x 448 resolution typically employed for LMMs, this significant increase in resolution improves the ability to discern and understand easily overlooked or tightly clustered objects and dense text (see the toy tiling sketch after this list).
- **Enhanced general performance.** We carried out testing across 16 diverse datasets; the Monkey model performs impressively on tasks such as Image Captioning, General Visual Question Answering, Text-centric Visual Question Answering, and Document-oriented Visual Question Answering.
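
To make the resolution numbers above concrete, here is a toy sketch of slicing a 1344 x 896 input into 448 x 448 tiles with Pillow. This is purely illustrative, not the repository's actual preprocessing code; the tile size is an assumption taken from the 448 x 448 base resolution mentioned above.

```python
# Illustrative only: slice a high-resolution image into 448 x 448 tiles.
# Tile size and target resolution are assumptions based on the numbers
# quoted in this README, not Monkey's actual preprocessing pipeline.
from PIL import Image

TILE = 448                      # base resolution typically used by LMM vision encoders
TARGET_W, TARGET_H = 1344, 896  # maximum resolution supported by Monkey

def tile_image(path: str) -> list:
    """Resize an image to the target resolution and cut it into TILE x TILE crops."""
    img = Image.open(path).convert("RGB").resize((TARGET_W, TARGET_H))
    tiles = []
    for top in range(0, TARGET_H, TILE):
        for left in range(0, TARGET_W, TILE):
            tiles.append(img.crop((left, top, left + TILE, top + TILE)))
    return tiles  # a 3 x 2 grid -> 6 tiles for a 1344 x 896 input

tiles = tile_image("images/caption_1.png")
print(len(tiles))  # 6
```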

## Performance

<br>

<p align="center">
    <img src="images/radar.png" width="800"/>
</p>
<br>


## Demo

Have a try with the provided [Demo](http://221.232.49.195:7680/). Simply upload or capture an image from your desktop or phone, then click Generate.
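
If you would rather run inference locally once the weights are available, here is a minimal sketch assuming a Qwen-VL-style chat interface (Monkey is built on the Qwen-VL codebase; see the Acknowledgement below). The checkpoint path is a placeholder and the exact API may differ, so consult the released code for authoritative usage.

```python
# A minimal local-inference sketch, assuming a Qwen-VL-style chat interface
# since Monkey builds on the Qwen-VL codebase. The checkpoint path below is
# a placeholder, not a real model ID; check the release for the actual one.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "path/to/monkey-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="auto", trust_remote_code=True
).eval()

# Qwen-VL-style prompt: interleave an image reference with a text question.
query = tokenizer.from_list_format([
    {"image": "images/qa_1.png"},
    {"text": "Describe the details in this image."},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```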

## Cases

Our model can accurately describe the details in the image.

<br>
<p align="center">
    <img src="images/caption_1.png" width="700"/>
</p>
<br>

Our model has also demonstrated strong capabilities in fine-grained question answering.

<br>
<p align="center">
    <img src="images/qa_1.png" width="700"/>
</p>
<br>

We have also achieved impressive performance on document-based tasks.

<br>
<p align="center">
    <img src="images/Doc_Chart.png" width="700"/>
</p>
<br>

We qualitatively compare Monkey with existing LMMs, including GPT4V and Qwen-VL, with inspiring results. You can try it yourself using the provided demo.

<br>
<p align="center">
    <img src="images/compare.png" width="800"/>
</p>
<br>

## Acknowledgement


[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git): the codebase we built upon. Thanks to the authors of Qwen for providing the framework.



## Copyright
We welcome suggestions to help us improve the little Monkey. For any queries, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn.