<ahref="">The Young's First ``Large'' Vision Language Model</a>
</p>
## Release
- [2024/4/21] 🔥🔥🔥 For OneChart, we have released the web demo on the [Project Page](https://onechartt.github.io/). Have fun!!
- [2024/4/21] 🔥🔥🔥 We present the Vary-tiny LAVIS codebase (for training from scratch) and the Vary-600k dataset (300K English and 300K Chinese pages) [here](https://github.com/Ucas-HaoranWei/Vary-tiny-600k)!!!
- [2024/4/15] 🔥🔥🔥 We release the chart-parsing model OneChart [here](https://github.com/LingyvKong/OneChart).
- [2024/4/12] 🔥🔥🔥 We will release a chart-parsing model based on Vary-tiny next week. The model supports both English and Chinese charts.
- [2024/3/16] 🔥🔥🔥 Many friends are very interested in Vary-tiny (OPT-125M), so I have open-sourced it [here](https://huggingface.co/HaoranWei/Vary-tiny-opt125M/tree/main), a version for dense PDF OCR and object detection.
- [2024/1/23] 🔥 Evaluation code will be available soon.
- [2024/1/23] 🔥🔥🔥 You only need a single GTX 1080Ti to experience all the features of current LVLMs.
**Usage and License Notices**: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, GPT-4, Qwen, and LLaVA.
## Contents
- [Install](#install)
- [Vary-toy Weights](#vary-toy-weights)
- [Demo](#demo)
- [Train](#train)
## Note
If you have built the original [Vary](https://github.com/Ucas-HaoranWei/Vary), please rebuild this repo !!!
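A minimal rebuild sketch (the editable-install workflow and the package name `vary` are assumptions, not taken from this README; check the repo's `setup.py`/`pyproject.toml` for the actual name):

```bash
# Remove any previously installed Vary package, then reinstall this repo in editable mode
pip uninstall -y vary
pip install -e .
```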
## Install
1. Clone this repository and navigate to the Vary folder
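For example (a sketch; the repository URL below is assumed from this project's GitHub organization, and the cloned folder name may differ):

```bash
# Clone the repo and move into it
git clone https://github.com/Ucas-HaoranWei/Vary-toy.git
cd Vary-toy
```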
If you swap in a new base language model, we encourage you to extract the new vision vocabulary weights for it, as sketched below!!!
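A minimal extraction sketch in PyTorch, assuming the checkpoint is a plain `state_dict` and the vision vocabulary's tensors share a common key prefix; the prefix `model.vision_tower_high.` and the file names here are hypothetical, so inspect your checkpoint's keys first:

```python
import torch

# Hypothetical key prefix for the new vision vocabulary; inspect
# state_dict.keys() on your checkpoint to find the real one.
VOCAB_PREFIX = "model.vision_tower_high."

# Load the full checkpoint on CPU and keep only the vision-vocabulary tensors.
state_dict = torch.load("vary-toy.pth", map_location="cpu")
vocab_weights = {k: v for k, v in state_dict.items() if k.startswith(VOCAB_PREFIX)}

# Save the extracted weights so they can be paired with a new base LLM.
torch.save(vocab_weights, "vision_vocab.pth")
print(f"Extracted {len(vocab_weights)} tensors")
```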
## Contact
If you have any questions about the code or the paper, please email `weihaoran18@mails.ucas.ac.cn`.
## Discussion
Vary-toy is not a toy. We have designed two excellent models based on it: Vary-document (specialized for document/PDF processing) and Vary-plot (for chart analysis). You can see their impressive performance at [Vary-family](https://github.com/Ucas-HaoranWei/Vary-family).
## Citation
If you find our work useful in your research, please consider citing Vary:
```bibtex
@article{wei2023vary,
  title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models},
  author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2312.06109},
  year={2023}
}
@article{wei2024small,
  title={Small Language Model Meets with Reinforced Vision Vocabulary},
  author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yu, En and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2401.12503},
  year={2024}
}
```