<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7ySieU.png" width="500" style="margin-bottom: 0.2;"/>
</p>

<h3 align="center"> <a href="https://arxiv.org/abs/2311.06607">Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models</a></h3>
<h2></h2>

<h5 align="center"> Please give us a star ⭐ to follow the latest updates. </h5>

<h5 align="center">

 
[![arXiv](https://img.shields.io/badge/Arxiv-2311.06607-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2311.06607) 
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/Yuliang-Liu/Monkey/blob/main/LICENSE) 
[![GitHub issues](https://img.shields.io/github/issues/Yuliang-Liu/Monkey?color=critical&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aopen+is%3Aissue)
[![GitHub closed issues](https://img.shields.io/github/issues-closed/Yuliang-Liu/Monkey?color=success&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aissue+is%3Aclosed)  <br>
</h5>


<details open><summary>💡 Monkey series projects ✨</summary><p>

>[CVPR'24] [**Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models**](https://arxiv.org/abs/2311.06607)<br>
> Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai <br>
[![Paper](https://img.shields.io/badge/Paper-CVPR'24_Highlight-red)](README.md)
[![Source_code](https://img.shields.io/badge/Code-Available-white)](README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7681/)
[![Detailed Caption](https://img.shields.io/badge/Detailed_Caption-yellow)](http://huggingface.co/datasets/echo840/Detailed_Caption)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](http://huggingface.co/echo840/Monkey)
[![Model Weight in Wisemodel](https://img.shields.io/badge/Model_Weight_in_Wisemodel-gray)](https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/)
[![Demo in Wisemodel](https://img.shields.io/badge/Demo_in_Wisemodel-blue)](https://wisemodel.cn/space/gradio/huakeMonkey)



> [**TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document**](https://arxiv.org/abs/2403.04473)<br>
> Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai <br>
[![arXiv](https://img.shields.io/badge/Arxiv-2403.04473-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2403.04473) 
[![Source_code](https://img.shields.io/badge/Code-Available-white)](monkey_model/text_monkey/README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7684/)
[![Data](https://img.shields.io/badge/Data-yellow)](https://www.modelscope.cn/datasets/lvskiller/TextMonkey_data)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](https://www.modelscope.cn/models/lvskiller/TextMonkey)

## News 
* ```2024.4.13 ``` 🚀 Source code for [TextMonkey](monkey_model/text_monkey/README.md) is released.
* ```2024.4.5  ``` 🚀 Monkey is selected as a CVPR 2024 Highlight paper.
* ```2024.3.8  ``` 🚀 We release the paper [TextMonkey](https://arxiv.org/abs/2403.04473).
* ```2024.2.27 ``` 🚀 Monkey is accepted by CVPR 2024. 
* ```2024.1.3  ``` 🚀 We release the basic data generation pipeline: [Data Generation](./data_generation).
* ```2023.12.16``` 🚀 Monkey can be trained using 8 NVIDIA 3090 GPUs. See the [Train](#train) section for details.
* ```2023.11.06``` 🚀 We release the paper [Monkey](https://arxiv.org/abs/2311.06607).

## 🐳 Model Zoo

Monkey-Chat
| Model|Language Model|Transformers (HF) |MMBench-Test|CCBench|MME|SeedBench_IMG|MathVista-MiniTest|HallusionBench-Avg|AI2D Test|OCRBench|
|---------------|---------|-----------------------------------------|---|---|---|---|---|---|---|---|
|Monkey-Chat|Qwen-7B|[🤗echo840/Monkey-Chat](https://huggingface.co/echo840/Monkey-Chat)|72.4|48|1887.4|68.9|34.8|39.3|68.5|534|


## Environment

```bash
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```
You can download the corresponding version of flash-attention from https://github.com/Dao-AILab/flash-attention/releases/ and install it with the following command:
```bash
pip install flash_attn-2.3.5+cu117torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl --no-build-isolation
```


## Train

We also offer Monkey's model definition and training code, which you can explore above. You can run the training code by executing `finetune_ds_debug.sh` or `finetune_textmonkey.sh`.

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

**ATTENTION:** Specify the path to your training data, which should be a JSON file consisting of a list of conversations.
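
For reference, here is a minimal sketch of what one record in such a conversation file might look like. This assumes the Qwen-VL-style schema that Monkey's training code builds on; the image path, question, and answer are placeholders, and the exact field names should be checked against the downloaded JSON:
```
[
	{
		"id": "sample_0",
		"conversations": [
			{"from": "user", "value": "<img>path/to/example.jpg</img> What is written on the sign?"},
			{"from": "assistant", "value": "The sign reads \"Open 24 Hours\"."}
		]
	}
]
```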

Inspired by Qwen-VL, we freeze the Large Language Model (LLM) and introduce LoRA into four linear layers ```"c_attn", "attn.c_proj", "w1", "w2"``` for training. This makes it possible to train Monkey using 8 NVIDIA 3090 GPUs. The specific implementation code is in ```modeling_qwen_nvdia3090.py```; a conceptual sketch follows the list below.

 - Add LoRA: You need to replace the contents of ```modeling_qwen.py``` with the contents of ```modeling_qwen_nvdia3090.py```.
 - Freeze LLM: You need to freeze other modules except LoRA and Resampler modules in ```finetune_multitask.py```.
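
For illustration only, here is roughly how the same four projection layers could be wrapped with LoRA using the Hugging Face `peft` library. The repository ships its own hand-written implementation in `modeling_qwen_nvdia3090.py`, so treat this as a conceptual sketch rather than the actual training code; the resampler module name and the LoRA hyperparameters below are assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model (Monkey builds on Qwen-VL, hence trust_remote_code).
model = AutoModelForCausalLM.from_pretrained("echo840/Monkey", trust_remote_code=True)

# Inject LoRA adapters into the four linear layers named above;
# get_peft_model freezes every other parameter automatically.
lora_config = LoraConfig(
    r=16,                  # LoRA rank (assumed value)
    lora_alpha=32,         # scaling factor (assumed value)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_attn", "attn.c_proj", "w1", "w2"],
)
model = get_peft_model(model, lora_config)

# Un-freeze the resampler so it trains alongside the LoRA adapters
# ("resampler" is a guess at the module name; check the model definition).
for name, param in model.named_parameters():
    if "resampler" in name.lower():
        param.requires_grad = True

model.print_trainable_parameters()  # sanity check: only LoRA + resampler should train
```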

## Inference
Run the inference code:
```
python ./inference.py --model_path MODEL_PATH  --image_path IMAGE_PATH  --question "YOUR_QUESTION"
```
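
For example, a hypothetical invocation (the weight directory, image path, and question below are placeholders):
```
python ./inference.py --model_path ./weights/Monkey --image_path ./examples/street_sign.jpg --question "What is written on the sign?"
```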


## Demo

The demo is fast and easy to use: simply upload an image from your desktop or phone, or capture one directly.

[Demo_chat](http://vlrlab-monkey.xyz:7681) has also been launched as an upgraded version of the original demo, delivering an enhanced interactive experience.

We also provide the source code and model weights for the original demo, allowing you to customize certain parameters for a more unique experience. The specific operations are as follows:
 1. Make sure you have configured the [environment](#environment).
 2. You can choose to use the demo offline or online:
- **Offline:** 
	- Download the [Model Weight](http://huggingface.co/echo840/Monkey). 
	- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path. 
	- Run the demo using the following command: 
	```
	python demo.py
	```
- **Online:** 
	- Run the demo and download model weights online with the following command: 
	```
	python demo.py -c echo840/Monkey 
	```

As of 14/11/2023, we have observed that for some randomly chosen pictures Monkey can achieve more accurate results than GPT-4V.
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS2yq.jpg" width="666"/>
</p>
<br>

As of 31/1/2024, Monkey-Chat ranked fifth in the Multimodal Model category on [OpenCompass](https://opencompass.org.cn/home).
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yShXL.jpg" width="666"/>
</p>
<br>

 
## Dataset

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

The data from our multi-level description generation method is now open-sourced and available for download at [Link](https://huggingface.co/datasets/echo840/Detailed_Caption). Examples:

<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS6Ss.jpg" width="666"/>
</p>
<br>
You can download the training images of Monkey from [Train](https://pan.baidu.com/s/1svSjXTxWpI-3boALgSeLlw). Extraction code: 4hdh

You can download the test images and JSONL files of Monkey from [Test](https://pan.baidu.com/s/1ABrQKeE9QBeKvtGzXfM8Eg). Extraction code: 5h71

The images are from CC3M, COCO Caption, TextCaps, VQAV2, OKVQA, GQA, ScienceQA, VizWiz, TextVQA, OCRVQA, ESTVQA, STVQA, AI2D and DUE_Benchmark. When using the data, you must comply with the protocols of the original datasets.

## Evaluate

We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the `evaluate_vqa.py` file, enabling quick verification of results. The specific operations are as follows:

 1. Make sure you have configured the [environment](#environment).
 2. Modify `sys.path.append("pathto/Monkey")`  to the project path.
 3. Prepare the datasets required for evaluation. 
 4. Run the evaluation code.

 Take ESTVQA as an example:
 - Prepare data according to the following directory structure:
```
├── data
|	├── estvqa
|		├── test_image
|			├── {image_path0}
|			├── {image_path1}
|				  ·
|				  ·
|	├── estvqa.jsonl
```
 - Example of the format of each line of the annotated `.jsonl` file:
```
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
 - Modify the dictionary `ds_collections`:
```
ds_collections = {
	'estvqa_test': {
		'test': 'data/estvqa/estvqa.jsonl',
		'metric': 'anls',
		'max_new_tokens': 100,
	},
	...
}
```
 - Run the following command:
```
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
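
The `anls` entry above refers to Average Normalized Levenshtein Similarity, the standard metric for text-centric VQA. As a rough illustration of how it scores a prediction against the ground-truth answers (using the usual 0.5 similarity threshold; this is a sketch, not the repository's exact implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def anls(prediction: str, answers: list, threshold: float = 0.5) -> float:
    """1 - normalized edit distance, zeroed below the threshold,
    taking the best match over all reference answers."""
    best = 0.0
    for answer in answers:
        pred, gt = prediction.strip().lower(), answer.strip().lower()
        similarity = 1 - levenshtein(pred, gt) / max(len(pred), len(gt), 1)
        best = max(best, similarity if similarity >= threshold else 0.0)
    return best

print(anls("pizzeria", ["pizzeria"]))  # 1.0 for an exact match
```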

## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:

```BibTeX
@inproceedings{li2023monkey,
  title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
  author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
@article{liu2024textmonkey,
  title={TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document},
  author={Liu, Yuliang and Yang, Biao and Liu, Qiang and Li, Zhang and Ma, Zhiyin and Zhang, Shuo and Bai, Xiang},
  journal={arXiv preprint arXiv:2403.04473},
  year={2024}
}
```

## Acknowledgement

[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git), [LLaMA](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [OpenCompass](https://github.com/open-compass/opencompass), [InternLM](https://github.com/InternLM/InternLM). 


## Copyright
We welcome suggestions to help us improve Monkey. For any queries, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us through email or by opening an issue. Thanks!