<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7ySieU.png" width="500" style="margin-bottom: 0.2;"/>
</p>

<h3 align="center"> <a href="https://arxiv.org/abs/2311.06607">Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models</a></h3>
<h2></h2>

<h5 align="center"> Please give us a star ⭐ to stay tuned for the latest updates. </h5>

<h5 align="center">

 
[![arXiv](https://img.shields.io/badge/Arxiv-2311.06607-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2311.06607) 
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/Yuliang-Liu/Monkey/blob/main/LICENSE) 
[![GitHub issues](https://img.shields.io/github/issues/Yuliang-Liu/Monkey?color=critical&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aopen+is%3Aissue)
[![GitHub closed issues](https://img.shields.io/github/issues-closed/Yuliang-Liu/Monkey?color=success&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aissue+is%3Aclosed)  <br>
</h5>


<details open><summary>💡 Monkey series projects ✨</summary><p>

>[CVPR'24] [**Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models**](https://arxiv.org/abs/2311.06607)<br>
> Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai <br>
[![Paper](https://img.shields.io/badge/Paper-CVPR'24_Highlight-red)](README.md)
[![Source_code](https://img.shields.io/badge/Code-Available-white)](README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7681/)
[![Detailed Caption](https://img.shields.io/badge/Detailed_Caption-yellow)](http://huggingface.co/datasets/echo840/Detailed_Caption)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](http://huggingface.co/echo840/Monkey)
[![Model Weight in Wisemodel](https://img.shields.io/badge/Model_Weight_in_Wisemodel-gray)](https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/)
[![Demo in Wisemodel](https://img.shields.io/badge/Demo_in_Wisemodel-blue)](https://wisemodel.cn/space/gradio/huakeMonkey)



> [**TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document**](https://arxiv.org/abs/2403.04473)<br>
> Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai <br>
[![arXiv](https://img.shields.io/badge/Arxiv-2403.04473-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2403.04473) 
[![Source_code](https://img.shields.io/badge/Code-Available-white)](monkey_model/text_monkey/README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7684/)
[![Data](https://img.shields.io/badge/Data-yellow)](https://www.modelscope.cn/datasets/lvskiller/TextMonkey_data)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](https://www.modelscope.cn/models/lvskiller/TextMonkey)
lvskiller's avatar
lvskiller committed
42

Yuliang Liu's avatar
Yuliang Liu committed
43
    
Yuliang Liu's avatar
Yuliang Liu committed
44
## News 
Melos's avatar
Melos committed
45
* ```2024.4.13 ``` 🚀 Source code for [TextMonkey](monkey_model/text_monkey/README.md) is released.
* ```2024.4.5  ``` 🚀 Monkey is selected as a CVPR 2024 Highlight paper.
* ```2024.3.8  ``` 🚀 We release the paper [TextMonkey](https://arxiv.org/abs/2403.04473).
* ```2024.2.27 ``` 🚀 Monkey is accepted by CVPR 2024. 
* ```2024.1.3  ``` 🚀 We release the basic data generation pipeline. See [Data Generation](./data_generation).
* ```2023.12.16``` 🚀 Monkey can be trained using 8 NVIDIA 3090 GPUs. See subsection [train](#Train) for details.
* ```2023.11.06``` 🚀 We release the paper [Monkey](https://arxiv.org/abs/2311.06607).

## 🐳 Model Zoo

Monkey-Chat
| Model|Language Model|Transformers (HF)|MMBench-Test|CCBench|MME|SeedBench_IMG|MathVista-MiniTest|HallusionBench-Avg|AI2D Test|OCRBench|
|---------------|---------|-----------------------------------------|---|---|---|---|---|---|---|---|
|Monkey-Chat|Qwen-7B|[🤗echo840/Monkey-Chat](https://huggingface.co/echo840/Monkey-Chat)|72.4|48|1887.4|68.9|34.8|39.3|68.5|534|


## Environment

```bash
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```


## Train

We also provide Monkey's model definition and training code, which you can explore in this repository. You can launch training by running `finetune_ds_debug.sh`.

The JSON file used for Monkey training can be downloaded from [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

**ATTENTION:** Specify the path to your training data, which should be a JSON file consisting of a list of conversations.
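
The exact schema is defined by the downloadable file above; purely as an illustration, one entry in the Qwen-VL-style conversation format that Monkey builds on might look like the following (the image path and texts here are hypothetical):

```
[
    {
        "id": "0",
        "conversations": [
            {
                "from": "user",
                "value": "<img>path/to/image.jpg</img> Describe this image in detail."
            },
            {
                "from": "assistant",
                "value": "A short description of the image."
            }
        ]
    }
]
```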

Inspired by Qwen-VL, we freeze the Large Language Model (LLM) and introduce LoRA into four linear layers, ```"c_attn", "attn.c_proj", "w1", "w2"```, for training. This makes it possible to train Monkey using 8 NVIDIA 3090 GPUs. The specific implementation is in ```modeling_qwen_nvdia3090.py```.

 - Add LoRA: replace the contents of ```modeling_qwen.py``` with the contents of ```modeling_qwen_nvdia3090.py```.
 - Freeze LLM: freeze all modules except the LoRA and Resampler modules in ```finetune_multitask.py```, as sketched below.
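
For orientation, here is a minimal sketch of this recipe using the third-party `peft` library. It is an illustration only, not the repository's implementation (which lives in ```modeling_qwen_nvdia3090.py```); the rank/alpha values and the parameter-name matching are assumptions you should adapt.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (any Monkey/Qwen-VL-style checkpoint loads the same way).
model = AutoModelForCausalLM.from_pretrained(
    "echo840/Monkey", trust_remote_code=True
)

# Inject LoRA into the four linear layers named above (r/alpha are hypothetical).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn", "attn.c_proj", "w1", "w2"],
)
model = get_peft_model(model, lora_config)

# Freeze everything except the LoRA adapters and the resampler.
# NOTE: the substring match is an assumption about parameter names; in
# Qwen-VL-style code the resampler instance may be named "attn_pool",
# so adjust the keys to your checkpoint's actual module names.
trainable_keys = ("lora", "attn_pool", "resampler")
for name, param in model.named_parameters():
    param.requires_grad = any(k in name.lower() for k in trainable_keys)

model.print_trainable_parameters()  # sanity-check what will actually train
```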

## Inference
Run the inference code:
```
python ./inference.py --model_path MODEL_PATH  --image_path IMAGE_PATH  --question YOUR_QUESTION
```
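
For reference, below is a rough sketch of what a manual inference call might look like via the standard Hugging Face loading path. The `<img>...</img>` prompt template is assumed from the Qwen-VL lineage and the generation parameters are illustrative; treat `inference.py` as the authoritative reference.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "echo840/Monkey"  # or a local model path
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="cuda", trust_remote_code=True
).eval()

# Qwen-VL-style prompt: the image is referenced inline via <img> tags.
# The image path and question below are placeholders.
query = "<img>path/to/image.jpg</img> Describe this image. Answer:"
inputs = tokenizer(query, return_tensors="pt").to(model.device)

output = model.generate(**inputs, do_sample=False, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(
    output[0][inputs["input_ids"].size(1):], skip_special_tokens=True
)
print(answer)
```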


## Demo

The demo is fast and easy to use: simply upload an image from your desktop or phone, or capture one directly.
[Demo_chat](http://vlrlab-monkey.xyz:7681) has also been launched as an upgraded version of the original demo, delivering an enhanced interactive experience.

We also provide the source code and model weights for the original demo, allowing you to customize certain parameters for a more tailored experience. The specific steps are as follows:
 1. Make sure you have configured the [environment](#environment).
 2. You can choose to use the demo offline or online:
- **Offline:** 
	- Download the [Model Weight](http://huggingface.co/echo840/Monkey). 
	- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path. 
	- Run the demo using the following command: 
	```
	python demo.py
	```
- **Online:** 
	- Run the demo and download model weights online with the following command: 
	```
	python demo.py -c echo840/Monkey 
	```

Prior to 14/11/2023, we observed that on some randomly sampled images Monkey can achieve more accurate results than GPT-4V.
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS2yq.jpg" width="666"/>
</p>
<br>

As of 31/1/2024, Monkey-Chat ranked fifth in the Multimodal Model category on [OpenCompass](https://opencompass.org.cn/home).
<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yShXL.jpg" width="666"/>
</p>
<br>

 
## Dataset

The JSON file used for Monkey training can be downloaded from [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).

The data from our multi-level description generation method is now open-sourced and available for download at [Link](https://huggingface.co/datasets/echo840/Detailed_Caption). Examples:

<br>
<p align="center">
    <img src="https://v1.ax1x.com/2024/04/13/7yS6Ss.jpg" width="666"/>
</p>
<br>
You can download the training images from [Train](https://pan.baidu.com/s/1svSjXTxWpI-3boALgSeLlw). Extraction code: 4hdh

You can download the test images and JSONL files from [Test](https://pan.baidu.com/s/1ABrQKeE9QBeKvtGzXfM8Eg). Extraction code: 5h71

The images are from CC3M, COCO Caption, TextCaps, VQAV2, OKVQA, GQA, ScienceQA, VizWiz, TextVQA, OCRVQA, ESTVQA, STVQA, AI2D and DUE_Benchmark. When using the data, it is necessary to comply with the protocols of the original datasets.

## Evaluate

We provide evaluation code for 14 Visual Question Answering (VQA) datasets in `evaluate_vqa.py`, enabling quick verification of results. The specific steps are as follows:

 1. Make sure you have configured the [environment](#environment).
 2. Modify `sys.path.append("pathto/Monkey")` to your project path.
 3. Prepare the datasets required for evaluation. 
 4. Run the evaluation code.

 Take ESTVQA as an example:
 - Prepare data according to the following directory structure:
```
├── data
|	├── estvqa
|		├── test_image
|			├── {image_path0}
|			├── {image_path1}
|				  ·
|				  ·
|		├── estvqa.jsonl
```
 - Example of the format of each line of the annotated `.jsonl` file:
```
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
 - Modify the dictionary `ds_collections`:
```
ds_collections = {
	'estvqa_test': {
		'test': 'data/estvqa/estvqa.jsonl',
		'metric': 'anls',
		'max_new_tokens': 100,
	},
	...
}
```
 - Run the following command:
```
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
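
For reference, the `'metric': 'anls'` entry above selects Average Normalized Levenshtein Similarity, the standard metric for scene-text VQA benchmarks such as ESTVQA and ST-VQA. Below is a minimal self-contained sketch of the per-question computation; the function names are ours, not from `evaluate_vqa.py`, and the 0.5 threshold follows the common ANLS definition.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit-distance dynamic program over two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def anls(prediction: str, references: list[str], threshold: float = 0.5) -> float:
    """Per-question ANLS: best normalized similarity over all reference
    answers, zeroed out when it falls below the threshold."""
    best = 0.0
    p = prediction.strip().lower()
    for ref in references:
        r = ref.strip().lower()
        nl = levenshtein(p, r) / max(len(p), len(r), 1)
        best = max(best, 1.0 - nl)
    return best if best >= threshold else 0.0


# The dataset-level score is the mean of per-question ANLS values.
print(anls("pizzeria", ["pizzeria"]))  # 1.0 (exact match)
print(anls("pizza", ["pizzeria"]))     # 1 - 3/8 = 0.625 (minor OCR-style miss)
```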


## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:

```BibTeX
@inproceedings{li2023monkey,
  title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
  author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
@article{liu2024textmonkey,
  title={TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document},
  author={Liu, Yuliang and Yang, Biao and Liu, Qiang and Li, Zhang and Ma, Zhiyin and Zhang, Shuo and Bai, Xiang},
  journal={arXiv preprint arXiv:2403.04473},
  year={2024}
}
```

## Acknowledgement

[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git), [LLaMA](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [OpenCompass](https://github.com/open-compass/opencompass), [InternLM](https://github.com/InternLM/InternLM).


## Copyright
We welcome suggestions to help us improve Monkey. For any queries, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us via email or by opening an issue. Thanks!