# Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models


<br>
<p align="center">
    <img src="images/logo_monkey.png" width="300"/>
</p>
<br>

<div align="center">
Zhang Li*, Biao Yang*, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu†, Xiang Bai†
</div>
<div align="center">
<strong>Huazhong University of Science and Technology, Kingsoft</strong>
</div>

<p align="center">
<a href="updating">Paper</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="http://221.232.49.195:7680/">Demo</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="Monkey Model">Monkey Models</a>&nbsp; | &nbsp;<a href="updating">Tutorial</a>
</p>
-----

**Monkey** brings a training-efficient approach that effectively increases the input resolution up to 896 x 1344 pixels without pretraining from scratch. To bridge the gap between simple text labels and high input resolution, we propose a multi-level description generation method that automatically provides rich information to guide the model in learning the contextual associations between scenes and objects. With the synergy of these two designs, our model achieves strong results on multiple benchmarks. Compared with various LMMs, including GPT4V, Monkey shows promising performance in image captioning by attending to textual information and capturing fine details within images; its higher input resolution also enables remarkable performance on document images with dense text.
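
As a rough illustration of the resolution idea (not the repository's actual training or inference code), the sketch below shows how an 896 x 1344 input could be divided into the 448 x 448 windows that a standard-resolution vision encoder expects; the window size, file name, and resize step are assumptions made for the example.

```python
# Illustrative sketch only: tile a high-resolution image into 448 x 448
# windows, the size typically handled by standard LMM vision encoders.
# The real Monkey pipeline may differ; see the paper for details.
from PIL import Image


def tile_image(img, window=448):
    """Split an image whose sides are multiples of `window` into
    non-overlapping window x window crops, in row-major order."""
    width, height = img.size
    return [
        img.crop((left, top, left + window, top + window))
        for top in range(0, height, window)
        for left in range(0, width, window)
    ]


# "example.jpg" is a placeholder; resize to 1344 x 896 (width x height)
# before tiling, matching the resolution discussed above.
image = Image.open("example.jpg").resize((1344, 896))
crops = tile_image(image)
print(len(crops))  # 6 local windows of 448 x 448 each
```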

## Spotlights

- **Contextual associations.** Our method shows a superior ability to infer the relationships between targets when answering questions, which leads to more comprehensive and insightful answers.
- **Support for resolution up to 1344 x 896.** Surpassing the standard 448 x 448 resolution typically employed for LMMs, this significant increase in resolution improves the ability to discern inconspicuous or tightly clustered objects and dense text.
- **Enhanced general performance.** We carried out testing across 16 diverse datasets, where the Monkey model delivers impressive performance on tasks such as Image Captioning, General Visual Question Answering, Text-centric Visual Question Answering, and Document-oriented Visual Question Answering.

## Performance

<br>

<p align="center">
    <img src="images/radar.png" width="800"/>
</p>
<br>


## Demo

Give it a try with the provided [Demo](http://221.232.49.195:7680/). Simply upload an image, or capture one from your desktop or phone, then click Generate. You can generate multiple times to get more information, and you can also produce a Chinese answer via “生成中文描述” (Generate Chinese Description):

<br>
<p align="center">
    <img src="images/generation.png" width="900"/>
</p>
<br>
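
If you would rather run inference locally than use the web demo, the snippet below is a minimal sketch that assumes the released checkpoint follows the Qwen-VL-style chat interface Monkey is built on (see Acknowledgement); the Hugging Face identifier, image path, and prompt are placeholders, so adjust them to the officially released weights and tutorial.

```python
# Minimal local-inference sketch. Assumes a Hugging Face checkpoint
# (placeholder id below) that exposes the Qwen-VL-style chat interface;
# adapt it to the officially released Monkey weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "echo840/Monkey"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="cuda", trust_remote_code=True
).eval()

# Pair a local image with an instruction, then ask the model to answer.
query = tokenizer.from_list_format([
    {"image": "images/caption_1.png"},  # any local image path
    {"text": "Generate a detailed caption in English."},
])
response, _ = model.chat(tokenizer, query=query, history=None)
print(response)
```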
    
## Cases

Our model can accurately describe the details in the image.

<br>
<p align="center">
    <img src="images/caption_1.png" width="700"/>
</p>
<br>

In addition, our model demonstrates promising capabilities in fine-grained question answering.

<br>
<p align="center">
    <img src="images/qa_1.png" width="700"/>
</p>
<br>

We have also achieved impressive performance on document-based tasks.

<br>
<p align="center">
    <img src="images/Doc_Chart.png" width="700"/>
</p>
<br>

We qualitatively compare Monkey with existing LMMs, including GPT4V and Qwen-VL, with inspiring results. You can try it yourself using the provided demo.

<br>
<p align="center">
    <img src="images/compare.png" width="800"/>
</p>
<br>

## Acknowledgement


[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git): the codebase we built upon. Thanks to the authors of Qwen for providing the framework.



## Copyright
We welcome suggestions to help us improve the little Monkey. For any query, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn