<p align="left">
        <a href="README.md">English</a>&nbsp | &nbsp<a href="README_cn.md">中文</a>&nbsp
</p>
<br><br>

# Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models

<br>
<p align="center">
    <img src="images/Logo-Monkey2.gif" width="300"/>
<p>
<br>

<div align="center">
Zhang Li*, Biao Yang*, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu†, Xiang Bai†
</div>
<div align="center">
<strong>Huazhong University of Science and Technology, Kingsoft</strong>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2311.06607">Paper</a>&nbsp&nbsp | &nbsp&nbsp<a href="http://27.17.169.154:7681/">Demo_chat</a>&nbsp&nbsp | &nbsp&nbsp<a href="http://huggingface.co/datasets/echo840/Detailed_Caption">Detailed Caption</a>&nbsp&nbsp | &nbsp&nbsp<a href="http://huggingface.co/echo840/Monkey">Model Weight</a>&nbsp&nbsp  |  <a href="https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/">Model Weight in wisemodel</a>&nbsp&nbsp| <a href="https://wisemodel.cn/space/gradio/huakeMonkey">Demo in wisemodel</a>&nbsp&nbsp
<!--     | &nbsp&nbsp<a href="Monkey Model">Monkey Models</a>&nbsp | &nbsp <a href="http://huggingface.co/echo840/Monkey">Tutorial</a> -->
</p>

-----
  
**Monkey** brings a training-efficient approach that raises the input resolution up to 896 x 1344 pixels without pretraining from scratch. To bridge the gap between simple text labels and high input resolution, we propose a multi-level description generation method, which automatically provides rich information to guide the model in learning the contextual associations between scenes and objects. With the synergy of these two designs, our model achieves excellent results on multiple benchmarks. Compared with various LMMs, including GPT4V, Monkey demonstrates promising performance in image captioning by paying attention to textual information and capturing fine details within images; its higher input resolution also enables remarkable performance on document images with dense text.
    
## News 
* ```2024.1.3  ``` 🚀🚀🚀 Released the basic data generation pipeline. [Data Generation](./data_generation)
* ```2023.12.21``` 🚀🚀🚀 The JSON file used for Monkey training is provided.
* ```2023.12.16``` 🚀🚀🚀 Monkey can be trained using 8 NVIDIA 3090 GPUs. See subsection [train](#Train) for details.
* ```2023.11.25``` 🚀🚀🚀 Monkey-chat demo is released. 
* ```2023.11.06``` 🚀🚀🚀 Monkey [paper](https://arxiv.org/abs/2311.06607) is released.


## Spotlights

- **Contextual associations.** Our method demonstrates a superior ability to infer relationships between targets when answering questions, delivering more comprehensive and insightful results.
- **Support for resolution up to 1344 x 896.** Surpassing the standard 448 x 448 resolution typically employed for LMMs, this significant increase in resolution improves the ability to discern and understand inconspicuous or tightly clustered objects and dense text.
- **Enhanced general performance.** We carried out testing across 16 diverse datasets, and the Monkey model achieves impressive performance on tasks such as Image Captioning, General Visual Question Answering, Text-centric Visual Question Answering, and Document-oriented Visual Question Answering.


## Environment

```bash
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```
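
As a quick sanity check after installation (a minimal sketch; it assumes `torch` and `transformers` are pulled in by `requirements.txt`), you can verify that the core packages import and that a GPU is visible:

```python
# Sanity check: core packages import and CUDA is visible.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```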


## Demo

The demo is fast and easy to use. Simply upload an image from your desktop or phone, or capture one directly.
[Demo_chat](http://27.17.184.204:7681/) has also been launched as an upgraded version of the original demo, delivering an enhanced interactive experience.

Before 14/11/2023, we observed that on some random pictures Monkey can achieve more accurate results than GPT4V.
<br>
<p align="center">
    <img src="images/demo_gpt4v_compare4.png" width="900"/>
<p>
<br>

We also provide the source code and model weights for the original demo, allowing you to customize certain parameters for a more personalized experience. The specific steps are as follows:
 1. Make sure you have configured the [environment](#environment).
 2. You can choose to use the demo offline or online:
- **Offline:** 
	- Download the [Model Weight](http://huggingface.co/echo840/Monkey). 
	- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path. 
	- Run the demo using the following command: 
	```
	python demo.py
	```
- **Online:** 
	- Run the demo and download model weights online with the following command: 
	```
	python demo.py -c echo840/Monkey 
	```

To generate more detailed captions, we provide some prompt examples so that you can conduct more interesting explorations. You can modify the two variables below in the `caption` function to use different prompts for the caption task:
```
query = "Generate the detailed caption in English. Answer: "
chat_query = "Generate the detailed caption in English. Answer: "
```
- Generate the detailed caption in English.
- Explain the visual content of the image in great detail.
- Analyze the image in a comprehensive and detailed manner.
- Describe the image in as much detail as possible in English without duplicating it.
- Describe the image in as much detail as possible in English, including as many elements from the image as possible, but without repetition.
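
For example, to try one of these prompts, simply reassign the two variables (a minimal sketch using the same variable names as the snippet above):

```python
# Swap in any of the prompts listed above.
query = "Describe the image in as much detail as possible in English without duplicating it. Answer: "
chat_query = "Describe the image in as much detail as possible in English without duplicating it. Answer: "
```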


## Dataset

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

The data from our multi-level description generation method is now open-sourced and available for download at [Link](https://huggingface.co/datasets/echo840/Detailed_Caption). Examples:

<br>
<p align="center">
    <img src="images/detailed_caption.png" width="1000"/>
<p>
<br>
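
To browse the released data programmatically, here is a minimal sketch (it assumes the dataset loads with the standard Hugging Face `datasets` API; the split name and field names depend on the released schema):

```python
# Load and inspect the detailed-caption data from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("echo840/Detailed_Caption", split="train")  # split name assumed
print(len(ds))  # number of records
print(ds[0])    # inspect one record's fields
```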

## Evaluate

We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the `evaluate_vqa.py` file, facilitating a quick verification of results.  The specific operations are as follows:

 1. Make sure you have configured the [environment](#environment).
 2. Modify `sys.path.append("pathto/Monkey")` so that it points to your Monkey code directory.
 3. Prepare the datasets required for evaluation. 
 4. Run the evaluation code.

 Take ESTVQA as an example:
 - Prepare data according to the following directory structure:
```
├── data
|	├── estvqa
|		├── test_image
|			├── {image_path0}
|			├── {image_path1}
|				  ·
|				  ·
|	├── estvqa.jsonl
```
 - Example of the format of each line of the annotated `.jsonl` file (a validation sketch is given after these steps):
```
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
 - Modify the dictionary `ds_collections`:
```
ds_collections = {
	'estvqa_test': {
		'test': 'data/estvqa/estvqa.jsonl',
		'metric': 'anls',
		'max_new_tokens': 100,
	},
	...
}
```
 - Run the following command:
```
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
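
Here `EVAL_PTH` and `SAVE_NAME` are presumably the path of the checkpoint to evaluate and the name under which results are saved; check `eval/eval.sh` for the exact argument semantics.

As mentioned above, you can also validate the `.jsonl` annotation file before launching the evaluation. The sketch below assumes only the four fields shown earlier; the file path is illustrative:

```python
# Validate an ESTVQA-style .jsonl annotation file line by line.
import json

required = {"image", "question", "answer", "question_id"}
with open("data/estvqa/estvqa.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        missing = required - record.keys()
        assert not missing, f"line {i}: missing fields {missing}"
print("Annotation file looks well-formed.")
```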


## Train

We also offer Monkey's model definition and training code, which you can explore above. You can start training by executing `finetune_ds_debug.sh`.

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

**ATTENTION:** Specify the path to your training data, which should be a JSON file consisting of a list of conversations.
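
As an illustration, such a file might look like the following. This is a sketch assuming Monkey follows Qwen-VL's conversation format, which it builds on; the field values are hypothetical:

```python
# Write a minimal one-record training file in the assumed conversation format.
import json

conversations = [
    {
        "id": "0",
        "conversations": [
            {"from": "user", "value": "<img>path/to/image.jpg</img> Generate the detailed caption in English. Answer: "},
            {"from": "assistant", "value": "A red double-decker bus parked on a rainy street beside ..."},
        ],
    },
]

with open("train_monkey.json", "w", encoding="utf-8") as f:
    json.dump(conversations, f, ensure_ascii=False, indent=2)
```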

Inspired by Qwen-VL, we freeze the Large Language Model (LLM) and introduce LoRA into four linear layers ```"c_attn", "attn.c_proj", "w1", "w2"``` for training. This step makes it possible to train Monkey using 8 NVIDIA 3090 GPUs. The specific implementation code is in ```model_qwen_nvdia3090.py```.
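
For illustration only, an equivalent LoRA configuration with the `peft` library might look like the sketch below. Note that the repository implements this directly in ```model_qwen_nvdia3090.py``` rather than through `peft`, and the rank and scaling values here are hypothetical:

```python
# Hypothetical peft-based equivalent of adding LoRA to the four linear layers.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                  # rank (hypothetical value)
    lora_alpha=32,         # scaling factor (hypothetical value)
    target_modules=["c_attn", "attn.c_proj", "w1", "w2"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # `model` is the loaded Monkey model
model.print_trainable_parameters()
```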

 - Add LoRA: You need to replace the contents of ```model_qwen.py``` with the contents of ```model_qwen_nvdia3090.py```.
 - Freeze LLM: You need to freeze all modules except the LoRA and Resampler modules in ```finetune_multitask.py```, as sketched below.
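
A minimal sketch of the freezing step (not the repository's exact code; the substring matching is an assumption and may need adjusting to the real parameter names in ```finetune_multitask.py```):

```python
# Freeze everything except LoRA and Resampler parameters.
for name, param in model.named_parameters():
    param.requires_grad = ("lora" in name.lower()) or ("resampler" in name.lower())

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```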


## Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "echo840/Monkey"
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='cuda', trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
tokenizer.padding_side = 'left'
tokenizer.pad_token_id = tokenizer.eod_id
img_path = ""
question = ""
query = f'<img>{img_path}</img> {question} Answer: ' #VQA
# query = f'<img>{img_path}</img> Generate the detailed caption in English: ' #detailed caption

input_ids = tokenizer(query, return_tensors='pt', padding='longest')
attention_mask = input_ids.attention_mask
input_ids = input_ids.input_ids

pred = model.generate(
            input_ids=input_ids.cuda(),
            attention_mask=attention_mask.cuda(),
            do_sample=False,
            num_beams=1,
            max_new_tokens=512,
            min_new_tokens=1,
            length_penalty=1,
            num_return_sequences=1,
            output_hidden_states=True,
            use_cache=True,
            pad_token_id=tokenizer.eod_id,
            eos_token_id=tokenizer.eod_id,
            )
response = tokenizer.decode(pred[0][input_ids.size(1):].cpu(), skip_special_tokens=True).strip()
print(response)
```
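
Note that this snippet uses greedy decoding (`do_sample=False`, `num_beams=1`), so the output is deterministic for a given checkpoint and prompt; raising `num_beams` or enabling sampling trades that determinism for more varied responses.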

## Performance

<br>

<p align="center">
    <img src="images/radar_1.png" width="800"/>
<p>
<br>


## Cases

Our model can accurately describe the details in the image.

<br>
<p align="center">
    <img src="images/caption_1.png" width="700"/>
<p>
<br>

Our model performs particularly well on dense-text question answering tasks. For example, given the dense text of an item label, Monkey can accurately answer various questions about the item, and its performance compares impressively with other LMMs, including GPT4V.

<br>
<p align="center">
    <img src="images/dense_text_1.png" width="700"/>
<p>
<br>

<br>
<p align="center">
    <img src="images/dense_text_2.png" width="700"/>
<p>
<br>

Monkey also performs well in everyday scenes. It can complete various Q&A and captioning tasks, describing the details in an image thoroughly, down to an inconspicuous watermark.

<br>
<p align="center">
    <img src="images/qa_caption.png" width="700"/>
<p>
<br>

We qualitatively compare Monkey with existing LMMs, including GPT4V and Qwen-VL, with encouraging results. You can try it yourself using the provided demo.

<br>
<p align="center">
    <img src="images/compare.png" width="800"/>
<p>
<br>


## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entry:

```BibTeX
@article{li2023monkey,
  title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
  author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
  journal={arXiv preprint arXiv:2311.06607},
  year={2023}
}
```

If you find Monkey cute, please give us a star. It would be a great encouragement for us.


## Acknowledgement

[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git): the codebase we built upon. Thanks to the authors of Qwen-VL for providing the framework.


## Copyright
We welcome suggestions to help us improve Monkey. For any queries, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us through email or by opening an issue. Thanks!