# Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization

[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat/shell/internvl2.0_mpo)  [\[🆕 Blog\]](https://internvl.github.io/blog/2024-11-14-InternVL-2.0-MPO/)  [\[📜 Paper\]](https://arxiv.org/abs/2411.10442) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/internvl2.0/preference_optimization.html)

## Introduction

Existing open-source multimodal large language models (MLLMs) generally follow a training process involving pre-training and supervised fine-tuning. However, these models suffer from distribution shifts, which limit their multimodal reasoning ability, particularly their Chain-of-Thought (CoT) performance.

To address this, we introduce a preference optimization (PO) process to enhance the multimodal reasoning capabilities of MLLMs. Specifically, (1) on the data side, we design an automated preference data construction pipeline to create [MMPR](https://huggingface.co/datasets/OpenGVLab/MMPR), a high-quality, large-scale multimodal reasoning preference dataset; and (2) on the model side, we explore integrating PO with MLLMs, developing a simple yet effective method, termed Mixed Preference Optimization (MPO), which boosts multimodal CoT performance.

Our approach demonstrates improved performance across multiple benchmarks, particularly in multimodal reasoning tasks. Notably, our model, [InternVL2-8B-MPO](https://huggingface.co/OpenGVLab/InternVL2-8B-MPO), achieves an accuracy of 67.0 on MathVista, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10$`\times`$ larger InternVL2-76B. We hope this study could inspire further advancements in MLLMs.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/sy8aVC1Y5wtAjG-OQzrDI.jpeg)

## MMPR Dataset

MMPR is a large-scale, high-quality multimodal reasoning preference dataset containing about 3 million samples.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/mmXL47UPDFwYOWdn9Z6j5.jpeg)
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/6fnvI_wCd9JXAs6vYthaG.jpeg)

To construct this dataset, we propose an efficient data construction pipeline. Specifically, we categorize the multimodal data into **samples with clear ground truths** and **samples without clear ground truths**.

- **For samples with clear ground truths:**
  the model is prompted to first provide the reasoning process and then give the final answer in a format such as `Final Answer: ***`.
  Responses matching the ground-truth answer constitute the positive set $\mathcal{Y}_p$, while those that do not match make up the negative set $\mathcal{Y}_n$. Additionally, responses that fail to provide a clear final answer are also merged into $\mathcal{Y}_n$.
  Given these responses labeled as positive or negative, we build preference pairs by selecting a chosen response $y_c$ from $\mathcal{Y}_p$ and a rejected response $y_r$ from $\mathcal{Y}_n$.

- **For samples without clear ground truths:**
  we propose a simple yet effective method: Dropout Next-Token Prediction (Dropout NTP).
  Specifically, we use the responses generated by InternVL2-8B as chosen answers.
  Given the chosen answer, we truncate it by half and then prompt InternVL2-8B to complete the remaining
  portion of the truncated answer without access to the image input.
  This generated completion serves as the rejected answer for the paired sample.
  It is worth noting that while the responses generated by InternVL2-8B may not be perfect,
  the completions generated without the image input introduce more hallucinations than those
  generated with the image input.
  Therefore, the preference ordering between the chosen and rejected responses holds. A code sketch of both pairing procedures follows this list.
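
Below is a minimal sketch of these two pairing procedures. The answer-matching rule, the sampling loop, and the `complete_without_image` callable are illustrative placeholders rather than the released pipeline.

```python
import random
import re


def extract_final_answer(response: str):
    """Return the text following 'Final Answer:' if the response contains it."""
    match = re.search(r"Final Answer:\s*(.+)", response)
    return match.group(1).strip() if match else None


def build_pairs_with_ground_truth(question, responses, ground_truth, num_pairs=2):
    """Correctness-based pairing: split sampled responses into positive/negative
    sets by comparing their final answers with the ground truth, then pair a
    chosen response with a rejected one."""
    positives, negatives = [], []
    for resp in responses:
        answer = extract_final_answer(resp)
        # Responses without a clear final answer are also merged into the negative set.
        if answer is not None and answer == str(ground_truth):
            positives.append(resp)
        else:
            negatives.append(resp)

    pairs = []
    if positives and negatives:
        for _ in range(num_pairs):
            pairs.append({
                "question": question,
                "chosen": random.choice(positives),
                "rejected": random.choice(negatives),
            })
    return pairs


def build_pair_without_ground_truth(question, chosen, complete_without_image):
    """Dropout NTP: truncate the chosen answer and complete it *without* the image.
    `complete_without_image(question, prefix)` stands in for a text-only generation call."""
    prefix = chosen[: len(chosen) // 2]
    completion = complete_without_image(question, prefix)
    return {"question": question, "chosen": chosen, "rejected": prefix + completion}
```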

The data construction pipeline is open-sourced; see our [document](https://internvl.readthedocs.io/en/latest/internvl2.0/preference_optimization.html#generate-additional-preference-data) for more details.

## Mixed Preference Optimization

The key insight behind MPO is that *an effective PO process should enable the model to learn the relative preference between pairs of responses, the absolute quality of individual responses, and the process for generating preferred responses.* We define the training objective as a combination of
preference loss $`\mathcal{L}_{\text{p}}`$,
quality loss $`\mathcal{L}_{\text{q}}`$,
and generation loss $`\mathcal{L}_{\text{g}}`$,
referred to as Mixed Preference Optimization:

```math
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
```

where $w_{*}$ represents the weight assigned to each loss component.
In this work, we empirically compare different variants of preference loss.
Based on the experimental results, we use DPO as our preference loss and BCO as our quality loss.

Specifically, DPO serves as the preference loss, enabling the model to learn the relative preference between chosen and rejected responses.
This algorithm optimizes the following loss function:

```math
\mathcal{L}_{\text{p}}=-\log \sigma\left(\beta \log \frac{\pi_\theta\left(y_c \mid x\right)}{\pi_0\left(y_c \mid x\right)}-\beta \log \frac{\pi_\theta\left(y_r \mid x\right)}{\pi_0\left(y_r \mid x\right)}\right),
```

where $\beta$ is the KL penalty coefficient, and $x$, $y_c$, and $y_r$ are the user query, the chosen response, and the rejected response, respectively.
The policy model $\pi_\theta$ is initialized from the reference model $\pi_0$.
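
A minimal PyTorch sketch of this preference term, assuming sequence-level log-probabilities have already been computed under the policy $\pi_\theta$ and the frozen reference model $\pi_0$ (tensor names are illustrative):

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * (chosen log-ratio - rejected log-ratio)), averaged over the batch."""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```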

Additionally, the BCO loss is employed as the quality loss, helping the model understand the absolute quality of individual responses.
The loss function is defined as:

```math
\mathcal{L}_{\text{q}}=\mathcal{L}_{\text{q}}^+ + \mathcal{L}_{\text{q}}^-,
```

where $`\mathcal{L}_{\text{q}}^{+}`$ and $`\mathcal{L}_{\text{q}}^{-}`$ represent the losses for the chosen and rejected responses, respectively.
Each response type's loss is calculated independently, requiring the model to differentiate the absolute quality of individual responses. The loss terms are given by:

```math
\mathcal{L}_{\text{q}}^+=-\log \sigma\left(\beta \log \frac{\pi_\theta\left(y_c \mid x\right)}{\pi_0\left(y_c \mid x\right)} - \delta\right),
```

```math
\mathcal{L}_{\text{q}}^-=-\log \sigma\left(-\left(\beta \log \frac{\pi_\theta\left(y_r \mid x\right)}{\pi_0\left(y_r \mid x\right)} - \delta\right) \right),
```

where $\delta$ represents the reward shift, calculated as the moving average of previous rewards to stabilize training.
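
A sketch of this quality term in the same style; here the reward shift `delta` is passed in as a scalar, whereas in training it would be maintained as a moving average of past rewards:

```python
import torch
import torch.nn.functional as F


def bco_quality_loss(policy_chosen_logp: torch.Tensor,
                     policy_rejected_logp: torch.Tensor,
                     ref_chosen_logp: torch.Tensor,
                     ref_rejected_logp: torch.Tensor,
                     delta: float,
                     beta: float = 0.1) -> torch.Tensor:
    """Score chosen and rejected responses independently against the reward shift delta."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    loss_pos = -F.logsigmoid(chosen_reward - delta).mean()
    loss_neg = -F.logsigmoid(-(rejected_reward - delta)).mean()
    return loss_pos + loss_neg
```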

Finally, the SFT loss is used as the generation loss to help the model learn the generation process of preferred responses.
The loss function is defined as:

```math
\mathcal{L}_{\text{g}}=-\frac{\log\pi_\theta\left(y_c \mid x\right)}{\left| y_c \right|}.
```
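
Putting the three terms together, the sketch below combines the `dpo_loss` and `bco_quality_loss` helpers from the snippets above with a length-normalized SFT term; the weights `w_p`, `w_q`, and `w_g` and the per-token log-probabilities of the chosen response are assumed inputs:

```python
def mpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp,
             chosen_token_logps, delta,
             w_p=1.0, w_q=1.0, w_g=1.0, beta=0.1):
    """Weighted sum of the preference (DPO), quality (BCO), and generation (SFT) losses."""
    l_p = dpo_loss(policy_chosen_logp, policy_rejected_logp,
                   ref_chosen_logp, ref_rejected_logp, beta)
    l_q = bco_quality_loss(policy_chosen_logp, policy_rejected_logp,
                           ref_chosen_logp, ref_rejected_logp, delta, beta)
    # Generation loss: length-normalized negative log-likelihood of the chosen response.
    l_g = -chosen_token_logps.mean()
    return w_p * l_p + w_q * l_q + w_g * l_g
```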

## Models and Performance

Our [InternVL2-8B-MPO](https://huggingface.co/OpenGVLab/InternVL2-8B-MPO) achieves superior performance across 8 benchmarks, particularly excelling in multimodal reasoning tasks.
**On the MathVista benchmark, our model achieves an accuracy of 67.0%**, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10$`\times`$ larger InternVL2-76B.
**On the MathVision benchmark, our model achieves an accuracy of 25.7%**, establishing a new state-of-the-art performance among open-source models.
These results demonstrate the effectiveness of our preference optimization approach in enhancing multimodal reasoning capabilities.

Additionally, on the POPE benchmark, our model exhibits a 1.2-point improvement over InternVL2-8B, demonstrating the effectiveness of the perception data contained in our MMPR dataset in mitigating hallucinations.

Furthermore, our model also shows superior performance compared to InternVL2-8B on complex VQA benchmarks, indicating that its general abilities are also improved, benefiting from enhanced reasoning and reduced hallucinations.

| Model Name              | M3CoT | MathVista | MathVision MINI | MMVet (GPT4-Turbo) | LLaVA-Bench | POPE | CRPE | MMHalBench |
| ----------------------- | :---: | :-------: | :-------------: | :----------------: | :---------: | :--: | :--: | :--------: |
| Gemini-1.5-Pro          |   -   |   63.9    |      19.2       |         -          |      -      |  -   |  -   |     -      |
| GPT-4o                  | 64.3  |   63.8    |      30.4       |        69.1        |    97.6     | 86.9 | 76.6 |    4.0     |
| GPT-4o-Mini             | 61.9  |   52.4    |      27.3       |        66.9        |    95.4     | 85.1 | 73.1 |    3.6     |
| LLaVA-1.5-13B           | 39.5  |   27.6    |      11.1       |        36.3        |    70.7     | 85.9 | 55.6 |    2.4     |
| Qwen2-VL-7B             | 57.8  |   58.2    |      21.1       |        60.6        |    67.7     | 88.1 | 74.4 |    3.4     |
| MiniCPM-V-2-6-8B        | 56.0  |   60.6    |      23.4       |        57.4        |    83.4     | 87.3 | 75.2 |    3.6     |
| LLaVA-OneVision-7B      | 52.3  |   63.2    |      18.4       |        51.4        |    79.9     | 88.4 | 73.7 |    3.1     |
| InternVL2-26B           | 58.2  |   59.4    |      23.4       |        62.1        |    92.3     | 88.0 | 75.6 |    3.7     |
| InternVL2-40B           | 63.6  |   63.7    |      21.4       |        65.5        |    100.5    | 88.4 | 77.3 |    3.9     |
| InternVL2-76B           | 65.4  |   67.5    |      23.7       |        65.7        |    99.3     | 89.0 | 77.8 |    3.8     |
| InternVL2-Pro           | 65.6  |   66.3    |      18.8       |        69.4        |    99.5     | 88.2 | 77.6 |    3.7     |
| InternVL2-8B            | 59.3  |   58.3    |      20.4       |        54.2        |    73.2     | 86.9 | 75.5 |    3.3     |
| InternVL2-8B-MPO (ours) | 79.2  |   67.0    |      25.7       |        56.2        |    76.7     | 88.1 | 75.4 |    3.5     |

## Train

Please refer to [our document](https://internvl.readthedocs.io/en/latest/internvl2.0/preference_optimization.html) for more details about how to train with our data.

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{wang2024mpo,
  title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
  author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2411.10442},
  year={2024}
}
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```