<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7ySieU.png" width="500" style="margin-bottom: 0.2;"/>
</p>

<h3 align="center"> <a href="https://arxiv.org/abs/2311.06607">Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models</a></h3>
<h2></h2>

<h5 align="center"> Please give us a star ⭐ for the latest update.  </h5>

<h5 align="center">

 
[![arXiv](https://img.shields.io/badge/Arxiv-2311.06607-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2311.06607) 
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/Yuliang-Liu/Monkey/blob/main/LICENSE) 
[![GitHub issues](https://img.shields.io/github/issues/Yuliang-Liu/Monkey?color=critical&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aopen+is%3Aissue)
[![GitHub closed issues](https://img.shields.io/github/issues-closed/Yuliang-Liu/Monkey?color=success&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aissue+is%3Aclosed)  <br>
</h5>


<details open><summary>💡 Monkey series projects ✨</summary><p>

>[CVPR'24] [**Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models**](https://arxiv.org/abs/2311.06607)<br>
> Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai <br>
[![Paper](https://img.shields.io/badge/Paper-CVPR'24_Highlight-red)](README.md)
[![Source_code](https://img.shields.io/badge/Code-Available-white)](README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7681/)
[![Detailed Caption](https://img.shields.io/badge/Detailed_Caption-yellow)](http://huggingface.co/datasets/echo840/Detailed_Caption)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](http://huggingface.co/echo840/Monkey)
[![Model Weight in Wisemodel](https://img.shields.io/badge/Model_Weight_in_Wisemodel-gray)](https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/)
[![Demo in Wisemodel](https://img.shields.io/badge/Demo_in_Wisemodel-blue)](https://wisemodel.cn/space/gradio/huakeMonkey)



> [**TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document**](https://arxiv.org/abs/2403.04473)<br>
> Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai <br>
[![arXiv](https://img.shields.io/badge/Arxiv-2403.04473-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2403.04473) 
[![Source_code](https://img.shields.io/badge/Code-Available-white)](monkey_model/text_monkey/README.md)
[![Data](https://img.shields.io/badge/Data-yellow)](https://huggingface.co/datasets/MelosY/TextMonkey_Data/tree/main)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](https://www.modelscope.cn/models/lvskiller/TextMonkey)

## News 
* ```2024.4.13 ``` 🚀 Source code for [TextMonkey](monkey_model/text_monkey/README.md) is released.
* ```2024.4.5  ``` 🚀 Monkey is nominated as a CVPR 2024 Highlight paper.
* ```2024.3.8  ``` 🚀 We release the paper [TextMonkey](https://arxiv.org/abs/2403.04473).
* ```2024.2.27 ``` 🚀 Monkey is accepted by CVPR 2024. 
* ```2024.1.3  ``` 🚀 We release the basic data generation pipeline. [Data Generation](./data_generation)
* ```2023.12.16``` 🚀 Monkey can be trained using 8 NVIDIA 3090 GPUs. See subsection [train](#Train) for details.
* ```2023.11.06``` 🚀 We release the paper [Monkey](https://arxiv.org/abs/2311.06607).

## 🐳 Model Zoo

Monkey-Chat
| Model|Language Model|Transformers(HF) |MMBench-Test|CCBench|MME|SeedBench_IMG|MathVista-MiniTest|HallusionBench-Avg|AI2D Test|OCRBench|
|---------------|---------|-----------------------------------------|---|---|---|---|---|---|---|---|
|Monkey-Chat|Qwen-7B|[🤗echo840/Monkey-Chat](https://huggingface.co/echo840/Monkey-Chat)|72.4|48|1887.4|68.9|34.8|39.3|68.5|534|


## Environment

```bash
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```
You can download the corresponding version of flash-attention from https://github.com/Dao-AILab/flash-attention/releases/ and install it with the following command:
```bash
pip install flash_attn-2.3.5+cu117torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl --no-build-isolation
```


## Train

We also offer Monkey's model definition and training code, which you can explore in this repository. You can launch training by executing `finetune_ds_debug.sh` or `finetune_textmonkey.sh`.

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

**ATTENTION:** Specify the path to your training data, which should be a JSON file consisting of a list of conversations, as illustrated below.
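
For illustration only, here is a hedged sketch of what such a conversation list might look like, loosely following Qwen-VL's fine-tuning convention; the exact field names here are assumptions, so treat the downloadable JSON file above as the authoritative schema:
```json
[
  {
    "id": "0",
    "conversations": [
      { "from": "user", "value": "<img>path/to/image.jpg</img>\nWhat is written on the sign?" },
      { "from": "assistant", "value": "The sign reads \"Open 24 Hours\"." }
    ]
  }
]
```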

Inspired by Qwen-VL, we freeze the Large Language Model (LLM) and introduce LoRA into four linear layers, ```"c_attn", "attn.c_proj", "w1", "w2"```, for training. This makes it possible to train Monkey using 8 NVIDIA 3090 GPUs. The specific implementation code is in ```modeling_qwen_nvdia3090.py```; a hedged sketch of the same idea follows the list below.

 - Add LoRA: Replace the contents of ```modeling_qwen.py``` with the contents of ```modeling_qwen_nvdia3090.py```.
 - Freeze LLM: Freeze all modules except the LoRA and Resampler modules in ```finetune_multitask.py```.
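
For readers who want to prototype the same layer targeting outside this repo, below is a minimal sketch using the `peft` library. It is not the repo's own implementation (which lives in ```modeling_qwen_nvdia3090.py```), and the rank, alpha, and dropout values are illustrative assumptions:
```python
# A minimal LoRA sketch with peft -- NOT the repo's implementation.
# Rank/alpha/dropout below are assumed values, not the paper's settings.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("echo840/Monkey", trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling factor
    lora_dropout=0.05,    # assumed dropout
    target_modules=["c_attn", "attn.c_proj", "w1", "w2"],  # the four layers named above
)
model = get_peft_model(model, lora_config)  # wraps the model and freezes base weights

# Keep the Resampler trainable alongside the LoRA adapters, as in the repo's setup.
for name, param in model.named_parameters():
    if "resampler" in name.lower():
        param.requires_grad = True

model.print_trainable_parameters()
```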

## Inference
Run the inference code for Monkey and Monkey-Chat:
```bash
python ./inference.py --model_path MODEL_PATH --image_path IMAGE_PATH --question "YOUR_QUESTION"
```
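
If you would rather call the model from Python, the following is a minimal sketch using the `transformers` remote-code interface; the prompt template is an assumption modeled on Qwen-VL-style prompting, so treat `inference.py` as the authoritative reference:
```python
# A hedged inference sketch -- see inference.py for the authoritative version.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "echo840/Monkey"
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="cuda", trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)

# Assumed Qwen-VL-style prompt: the image path is wrapped in <img> tags.
prompt = "<img>./example.jpg</img> Describe this image in detail. Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.input_ids.size(1):], skip_special_tokens=True))
```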


## Demo

The demo is fast and easy to use: simply upload an image from your desktop or phone, or capture one directly.
[Demo_chat](http://vlrlab-monkey.xyz:7681) has also been launched as an upgraded version of the original demo, delivering an enhanced interactive experience.

We also provide the source code and model weights for the original demo, allowing you to customize certain parameters for a more personalized experience. The specific operations are as follows:
 1. Make sure you have configured the [environment](#environment).
 2. You can choose to use the demo offline or online:
- **Offline:**
	- Download the [Model Weight](http://huggingface.co/echo840/Monkey).
	- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path.
	- Run the demo using the following command:
	```bash
	python demo.py
	```
- **Online:**
	- Run the demo and download model weights online with the following command:
	```bash
	python demo.py -c echo840/Monkey
	```

For TextMonkey, you can download the model weights from [Model Weight](https://www.modelscope.cn/models/lvskiller/TextMonkey) and run the demo code:
```bash
python demo_textmonkey.py -c model_path
```

As of 14/11/2023, we had observed that Monkey can achieve more accurate results than GPT-4V on some randomly sampled images.
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS2yq.jpg" width="666"/>
</p>
<br>

As of 31/1/2024, Monkey-Chat ranked fifth in the Multimodal Model category on [OpenCompass](https://opencompass.org.cn/home).
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yShXL.jpg" width="666"/>
</p>
<br>

 
## Dataset

The JSON file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

The data from our multi-level description generation method is now open-sourced and available for download at [Link](https://huggingface.co/datasets/echo840/Detailed_Caption). Examples:

<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS6Ss.jpg" width="666"/>
</p>
<br>
You can download the training images of Monkey from [Train](https://pan.baidu.com/s/1svSjXTxWpI-3boALgSeLlw). Extraction code: 4hdh

You can download the test images and JSONL files of Monkey from [Test](https://pan.baidu.com/s/1ABrQKeE9QBeKvtGzXfM8Eg). Extraction code: 5h71

The images are from CC3M, COCO Caption, TextCaps, VQAV2, OKVQA, GQA, ScienceQA, VizWiz, TextVQA, OCRVQA, ESTVQA, STVQA, AI2D and DUE_Benchmark. When using the data, it is necessary to comply with the protocols of the original dataset.

## Evaluate

We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the `evaluate_vqa.py` file, facilitating quick verification of results. The specific operations are as follows:

 1. Make sure you have configured the [environment](#environment).
 2. Modify `sys.path.append("pathto/Monkey")` to point to the project path.
 3. Prepare the datasets required for evaluation. 
 4. Run the evaluation code.

 Take ESTVQA as an example:
 - Prepare data according to the following directory structure:
```
├── data
|	├── estvqa
|		├── test_image
|			├── {image_path0}
|			├── {image_path1}
|				  ·
|				  ·
|	├── estvqa.jsonl
```
 - Example of the format of each line of the annotated `.jsonl` file (a small sanity-check snippet appears after this list):
```json
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
 - Modify the dictionary `ds_collections`:
```python
ds_collections = {
	'estvqa_test': {
		'test': 'data/estvqa/estvqa.jsonl',
		'metric': 'anls',
		'max_new_tokens': 100,
	},
	...
}
```
 - Run the following command:
```bash
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
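
As a quick sanity check before launching evaluation, you might verify that every line of the annotation file parses and carries the expected keys; this is a suggestion of ours, not part of the repo:
```python
# Optional sanity check for the annotation file (not part of the repo).
import json

with open("data/estvqa/estvqa.jsonl") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        missing = {"image", "question", "answer", "question_id"} - record.keys()
        assert not missing, f"line {i} is missing keys: {missing}"
```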


## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:

```BibTeX
@inproceedings{li2023monkey,
  title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
  author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
@article{liu2024textmonkey,
  title={TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document},
  author={Liu, Yuliang and Yang, Biao and Liu, Qiang and Li, Zhang and Ma, Zhiyin and Zhang, Shuo and Bai, Xiang},
  journal={arXiv preprint arXiv:2403.04473},
  year={2024}
}
```

## Acknowledgement

[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git), [LLAMA](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [OpenCompass](https://github.com/open-compass/opencompass), [InternLM](https://github.com/InternLM/InternLM). 


## Copyright
We welcome suggestions to help us improve Monkey. For any queries, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us through email or by opening an issue. Thanks!