"git@developer.sourcefind.cn:modelzoo/solov2-pytorch.git" did not exist on "a8ec6fb3a7f23d8fd98fa906bfe5de3fd58a8f75"
Commit 6f73ea6b authored by mashun1's avatar mashun1
Browse files

omniparser

parents
Pipeline #2421 failed with stages
in 0 seconds
weights/icon_caption_blip2
weights/icon_caption_florence
weights/icon_detect/
weights/icon_detect_v1_5/
weights/icon_detect_v1_5_2/
.gradio
__pycache__/
debug.ipynb
util/__pycache__/
index.html?linkid=2289031
wget-log
weights/icon_caption_florence_v2/
weights
.gradio
FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.3.0-ubuntu22.04-dtk24.04.3-py3.10
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More_considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.
# OmniParser
## Paper
`OmniParser for Pure Vision Based GUI Agent`
* https://arxiv.org/pdf/2408.00203
## Model Structure
The method uses three modules:
1. Interactable icon detection model (YOLOv8): parses interactable regions on the screen and marks candidate interactive icons.
2. Icon captioning model (Florence-2): extracts the functional semantics of each detected icon and provides a descriptive label for it.
3. OCR module: recognizes on-screen text such as button labels and prompt messages, which is essential for understanding the UI context.
![alt text](readme_imgs/arch.png)
## Algorithm
The algorithm fuses the outputs of a fine-tuned interactable icon detection model, a fine-tuned icon captioning model, and an OCR module. This combination yields a structured, DOM-like representation of the UI, together with a screenshot overlaid with bounding boxes of potentially interactable elements.
![alt text](readme_imgs/alg.png)
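A minimal end-to-end sketch, adapted from `demo.py` in this repository, shows how the three modules are wired together; the weight and image paths are placeholders that must match your local layout:

```python
from util.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model

device = 'cuda'
# 1. interactable-icon detection model (YOLOv8)
som_model = get_yolo_model('weights/OmniParser-v2/icon_detect/model.pt')
som_model.to(device)
# 2. icon captioning model (Florence-2)
caption_model_processor = get_caption_model_processor(
    model_name="florence2", model_name_or_path="weights/OmniParser-v2/icon_caption", device=device)

image_path = 'imgs/word.png'  # any screenshot
# 3. OCR pass: recognized text plus its bounding boxes
(text, ocr_bbox), _ = check_ocr_box(
    image_path, display_img=False, output_bb_format='xyxy', goal_filtering=None,
    easyocr_args={'paragraph': False, 'text_threshold': 0.9}, use_paddleocr=True)

# fuse detections, captions and OCR into a labeled screenshot (base64) and a structured element list
draw_bbox_config = {'text_scale': 0.8, 'text_thickness': 2, 'text_padding': 3, 'thickness': 3}  # see demo.py for a size-dependent choice
labeled_img_b64, label_coordinates, parsed_content_list = get_som_labeled_img(
    image_path, som_model, BOX_TRESHOLD=0.05, output_coord_in_ratio=True, ocr_bbox=ocr_bbox,
    draw_bbox_config=draw_bbox_config, caption_model_processor=caption_model_processor,
    ocr_text=text, use_local_semantics=True, iou_threshold=0.7)
print(parsed_content_list)  # one entry per detected text/icon element
```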
## Environment Setup
### Docker (Option 1)
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.3.0-ubuntu22.04-dtk24.04.3-py3.10
docker run --shm-size 50g --network=host --name=dpskr1 --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to this project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash
pip install https://download.sourcefind.cn:65024/directlink/4/paddle/DAS1.3/paddlepaddle-2.6.1+das.opt1.dtk24043-cp310-cp310-manylinux_2_28_x86_64.whl
pip install -r requirements.txt
### Dockerfile (Option 2)
docker build -t <IMAGE_NAME>:<TAG> .
docker run --shm-size 50g --network=host --name=dpskr1 --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to this project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash
pip install https://download.sourcefind.cn:65024/directlink/4/paddle/DAS1.3/paddlepaddle-2.6.1+das.opt1.dtk24043-cp310-cp310-manylinux_2_28_x86_64.whl
pip install -r requirements.txt
### Anaconda (Option 3)
1. The DCU-specific deep learning libraries required by this project can be downloaded and installed from the 光合开发者社区 (developer community): https://developer.hpccube.com/tool/
```
torch: 2.3.0
torchvision: 0.18.1
paddlepaddle: 2.6.1
```
2. Install the remaining (non DCU-specific) libraries directly from requirements.txt (see the version check sketched below):
```
pip install -r requirements.txt
```
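Optionally, a quick version check can confirm that the DCU-specific wheels were picked up (a sanity check only, not part of the official setup):

```python
import torch, torchvision, paddle  # paddlepaddle imports as `paddle`
print('torch:', torch.__version__)              # expected 2.3.0
print('torchvision:', torchvision.__version__)  # expected 0.18.1
print('paddlepaddle:', paddle.__version__)      # expected 2.6.1
```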
## Dataset
## Training
## Inference
### Command Line
```bash
python demo.py --img_path <path to image>
```
### WebUI
```bash
python gradio_demo.py
```
## Results
![alt text](readme_imgs/demo.png)
### Accuracy
## Application Scenarios
### Algorithm Category
`Object Detection`
### Key Application Industries
`E-commerce, education, broadcast media`
## Pretrained Weights
OmniParser-v2.0: [huggingface](https://hf-mirror.com/microsoft/OmniParser-v2.0) | [SCNet fast download channel]()
## Source Repository & Issue Reporting
* https://developer.sourcefind.cn/codes/modelzoo/omniparser_pytorch
## References
* https://github.com/microsoft/OmniParser/
# OmniParser: Screen Parsing tool for Pure Vision Based GUI Agent
<p align="center">
<img src="imgs/logo.png" alt="Logo">
</p>
[![arXiv](https://img.shields.io/badge/Paper-green)](https://arxiv.org/abs/2408.00203)
[![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
📢 [[Project Page](https://microsoft.github.io/OmniParser/)] [[V2 Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/)] [[Models V2](https://huggingface.co/microsoft/OmniParser-v2.0)] [[Models V1.5](https://huggingface.co/microsoft/OmniParser)] [[HuggingFace Space Demo](https://huggingface.co/spaces/microsoft/OmniParser-v2)]
**OmniParser** is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.
## News
- [2025/2] We release OmniParser V2 [checkpoints](https://huggingface.co/microsoft/OmniParser-v2.0). [Watch Video](https://1drv.ms/v/c/650b027c18d5a573/EWXbVESKWo9Buu6OYCwg06wBeoM97C6EOTG6RjvWLEN1Qg?e=alnHGC)
- [2025/2] We introduce OmniTool: Control a Windows 11 VM with OmniParser + your vision model of choice. OmniTool supports out of the box the following large language models - OpenAI (4o/o1/o3-mini), DeepSeek (R1), Qwen (2.5VL) or Anthropic Computer Use. [Watch Video](https://1drv.ms/v/c/650b027c18d5a573/EehZ7RzY69ZHn-MeQHrnnR4BCj3by-cLLpUVlxMjF4O65Q?e=8LxMgX)
- [2025/1] V2 is coming. We achieve new state-of-the-art results of 39.5% on the new grounding benchmark [Screen Spot Pro](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding/tree/main) with OmniParser v2 (will be released soon)! Read more details [here](https://github.com/microsoft/OmniParser/tree/master/docs/Evaluation.md).
- [2024/11] We release an updated version, OmniParser V1.5, which features 1) more fine-grained/small icon detection, and 2) prediction of whether each screen element is interactable or not. See examples in demo.ipynb.
- [2024/10] OmniParser was the #1 trending model on huggingface model hub (starting 10/29/2024).
- [2024/10] Feel free to check out our demo on [huggingface space](https://huggingface.co/spaces/microsoft/OmniParser)! (stay tuned for OmniParser + Claude Computer Use)
- [2024/10] Both the Interactive Region Detection Model and the Icon functional description model are released! [Huggingface models](https://huggingface.co/microsoft/OmniParser)
- [2024/09] OmniParser achieves the best performance on [Windows Agent Arena](https://microsoft.github.io/WindowsAgentArena/)!
## Install
First clone the repo, then install the environment:
```bash
cd OmniParser
conda create -n "omni" python==3.12
conda activate omni
pip install -r requirements.txt
```
Ensure you have the V2 weights downloaded into the weights folder (the caption weights folder must be named icon_caption_florence). If not, download them with:
```
# download the model checkpoints to local directory OmniParser/weights/
for f in icon_detect/{train_args.yaml,model.pt,model.yaml} icon_caption/{config.json,generation_config.json,model.safetensors}; do huggingface-cli download microsoft/OmniParser-v2.0 "$f" --local-dir weights; done
mv weights/icon_caption weights/icon_caption_florence
```
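As a quick sanity check (not part of the official instructions), you can verify that the weights landed in the layout this README expects:

```python
# verify the two key files of the V2 layout exist after the download above
from pathlib import Path

for p in ["weights/icon_detect/model.pt",
          "weights/icon_caption_florence/model.safetensors"]:
    print(p, "OK" if Path(p).exists() else "MISSING")
```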
<!-- ## [deprecated]
Then download the model ckpts files in: https://huggingface.co/microsoft/OmniParser, and put them under weights/, default folder structure is: weights/icon_detect, weights/icon_caption_florence, weights/icon_caption_blip2.
For v1:
convert the safetensor to .pt file.
```python
python weights/convert_safetensor_to_pt.py
For v1.5:
download 'model_v1_5.pt' from https://huggingface.co/microsoft/OmniParser/tree/main/icon_detect_v1_5, make a new dir: weights/icon_detect_v1_5, and put it inside the folder. No weight conversion is needed.
``` -->
## Examples:
We put together a few simple examples in the demo.ipynb.
## Gradio Demo
To run the Gradio demo, simply run:
```bash
python gradio_demo.py
```
## Model Weights License
For the model checkpoints on the huggingface model hub, please note that the icon_detect model is under the AGPL license, since it inherits the license from the original YOLO model, while icon_caption_blip2 and icon_caption_florence are under the MIT license. Please refer to the LICENSE file in the folder of each model: https://huggingface.co/microsoft/OmniParser.
## 📚 Citation
Our technical report can be found [here](https://arxiv.org/abs/2408.00203).
If you find our work useful, please consider citing our work:
```
@misc{lu2024omniparserpurevisionbased,
title={OmniParser for Pure Vision Based GUI Agent},
author={Yadong Lu and Jianwei Yang and Yelong Shen and Ahmed Awadallah},
year={2024},
eprint={2408.00203},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.00203},
}
```
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from util.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model\n",
"import torch\n",
"from ultralytics import YOLO\n",
"from PIL import Image\n",
"device = 'cuda'\n",
"model_path='weights/OmniParser-v2/icon_detect/model.pt'\n",
"\n",
"som_model = get_yolo_model(model_path)\n",
"\n",
"som_model.to(device)\n",
"print('model to {}'.format(device))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# two choices for caption model: fine-tuned blip2 or florence2\n",
"import importlib\n",
"# import util.utils\n",
"# importlib.reload(utils)\n",
"from util.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model\n",
"caption_model_processor = get_caption_model_processor(model_name=\"florence2\", model_name_or_path=\"weights/OmniParser-v2/icon_caption\", device=device)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"som_model.device, type(som_model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# reload utils\n",
"import importlib\n",
"import utils\n",
"importlib.reload(utils)\n",
"# from utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model\n",
"\n",
"image_path = 'imgs/google_page.png'\n",
"image_path = 'imgs/windows_home.png'\n",
"# image_path = 'imgs/windows_multitab.png'\n",
"# image_path = 'imgs/omni3.jpg'\n",
"# image_path = 'imgs/ios.png'\n",
"image_path = 'imgs/word.png'\n",
"# image_path = 'imgs/excel2.png'\n",
"\n",
"image = Image.open(image_path)\n",
"image_rgb = image.convert('RGB')\n",
"print('image size:', image.size)\n",
"\n",
"box_overlay_ratio = max(image.size) / 3200\n",
"draw_bbox_config = {\n",
" 'text_scale': 0.8 * box_overlay_ratio,\n",
" 'text_thickness': max(int(2 * box_overlay_ratio), 1),\n",
" 'text_padding': max(int(3 * box_overlay_ratio), 1),\n",
" 'thickness': max(int(3 * box_overlay_ratio), 1),\n",
"}\n",
"BOX_TRESHOLD = 0.05\n",
"\n",
"import time\n",
"start = time.time()\n",
"ocr_bbox_rslt, is_goal_filtered = check_ocr_box(image_path, display_img = False, output_bb_format='xyxy', goal_filtering=None, easyocr_args={'paragraph': False, 'text_threshold':0.9}, use_paddleocr=True)\n",
"text, ocr_bbox = ocr_bbox_rslt\n",
"cur_time_ocr = time.time() \n",
"\n",
"dino_labled_img, label_coordinates, parsed_content_list = get_som_labeled_img(image_path, som_model, BOX_TRESHOLD = BOX_TRESHOLD, output_coord_in_ratio=True, ocr_bbox=ocr_bbox,draw_bbox_config=draw_bbox_config, caption_model_processor=caption_model_processor, ocr_text=text,use_local_semantics=True, iou_threshold=0.7, scale_img=False, batch_size=128)\n",
"cur_time_caption = time.time() \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot dino_labled_img it is in base64\n",
"import base64\n",
"import matplotlib.pyplot as plt\n",
"import io\n",
"plt.figure(figsize=(15,15))\n",
"\n",
"image = Image.open(io.BytesIO(base64.b64decode(dino_labled_img)))\n",
"plt.axis('off')\n",
"\n",
"plt.imshow(image)\n",
"# print(len(parsed_content_list))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"df = pd.DataFrame(parsed_content_list)\n",
"df['ID'] = range(len(df))\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"parsed_content_list"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
from util.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model
import torch
from ultralytics import YOLO
from PIL import Image
if __name__ == "__main__":
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--img_path", type=str, required=True)
args = parser.parse_args()
image_path = args.img_path
device = 'cuda'
model_path='weights/OmniParser-v2/icon_detect/model.pt'
som_model = get_yolo_model(model_path)
som_model.to(device)
print('model to {}'.format(device))
# two choices for caption model: fine-tuned blip2 or florence2
import importlib
# import util.utils
# importlib.reload(utils)
from util.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model
caption_model_processor = get_caption_model_processor(model_name="florence2", model_name_or_path="weights/OmniParser-v2/icon_caption", device=device)
# reload util.utils so local edits are picked up without restarting
import importlib
import util.utils as utils
importlib.reload(utils)
# from utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model
# image_path = 'imgs/google_page.png'
# image_path = 'imgs/windows_home.png'
# image_path = 'imgs/windows_multitab.png'
# image_path = 'imgs/omni3.jpg'
# image_path = 'imgs/ios.png'
# image_path = 'imgs/word.png'
# image_path = 'imgs/excel2.png'
image = Image.open(image_path)
image_rgb = image.convert('RGB')
print('image size:', image.size)
box_overlay_ratio = max(image.size) / 3200
draw_bbox_config = {
'text_scale': 0.8 * box_overlay_ratio,
'text_thickness': max(int(2 * box_overlay_ratio), 1),
'text_padding': max(int(3 * box_overlay_ratio), 1),
'thickness': max(int(3 * box_overlay_ratio), 1),
}
BOX_TRESHOLD = 0.05
import time
start = time.time()
ocr_bbox_rslt, is_goal_filtered = check_ocr_box(image_path, display_img = False, output_bb_format='xyxy', goal_filtering=None, easyocr_args={'paragraph': False, 'text_threshold':0.9}, use_paddleocr=True)
text, ocr_bbox = ocr_bbox_rslt
cur_time_ocr = time.time()
dino_labled_img, label_coordinates, parsed_content_list = get_som_labeled_img(image_path, som_model, BOX_TRESHOLD = BOX_TRESHOLD, output_coord_in_ratio=True, ocr_bbox=ocr_bbox,draw_bbox_config=draw_bbox_config, caption_model_processor=caption_model_processor, ocr_text=text,use_local_semantics=True, iou_threshold=0.7, scale_img=False, batch_size=128)
cur_time_caption = time.time()
# plot dino_labled_img it is in base64
import base64
import matplotlib.pyplot as plt
import io
plt.figure(figsize=(15,15))
image = Image.open(io.BytesIO(base64.b64decode(dino_labled_img)))
plt.axis('off')
plt.imshow(image)
plt.savefig("demo.png")
# print(len(parsed_content_list))
# Eval setup for ScreenSpot Pro
We adapt the eval code from the official ScreenSpot Pro (SS Pro) [repo](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding/tree/main). This folder contains the inference script and results for this benchmark. We are going through the legal review process to release OmniParser v2; once it is done, we will update the file so that it can load the v2 model.
1. eval/ss_pro_gpt4o_omniv2.py: contains the prompt we use; it can be used as a drop-in replacement for this [file](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding/blob/main/models/gpt4x.py) in the original SS Pro repo.
2. eval/logs_sspro_omniv2.json: contains the inference results for SS Pro using GPT-4o + OmniParser v2.
import os
import re
import ast
import base64
from io import BytesIO
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import openai
from openai import BadRequestError
model_name = "gpt-4o-2024-05-13"
OPENAI_KEY = os.environ.get("OPENAI_API_KEY")
def convert_pil_image_to_base64(image):
buffered = BytesIO()
image.save(buffered, format="PNG")
return base64.b64encode(buffered.getvalue()).decode()
from models.utils import get_som_labeled_img, check_ocr_box, get_caption_model_processor, get_yolo_model
import torch
from ultralytics import YOLO
from PIL import Image
device = 'cuda' if torch.cuda.is_available() else 'cpu'
SOM_MODEL_PATH='...'
CAPTION_MODEL_PATH='...'
som_model = get_yolo_model(SOM_MODEL_PATH)
som_model.to(device)
print('model to {}'.format(device))
# two choices for caption model: fine-tuned blip2 or florence2
import importlib
caption_model_processor = get_caption_model_processor(model_name="florence2", model_name_or_path=CAPTION_MODEL_PATH, device=device)
def omniparser_parse(image, image_path):
box_overlay_ratio = max(image.size) / 3200
draw_bbox_config = {
'text_scale': 0.8 * box_overlay_ratio,
'text_thickness': max(int(2 * box_overlay_ratio), 1),
'text_padding': max(int(3 * box_overlay_ratio), 1),
'thickness': max(int(3 * box_overlay_ratio), 1),
}
BOX_TRESHOLD = 0.05
ocr_bbox_rslt, is_goal_filtered = check_ocr_box(image_path, display_img = False, output_bb_format='xyxy', goal_filtering=None, easyocr_args={'paragraph': False, 'text_threshold':0.5, 'canvas_size':max(image.size), 'decoder':'beamsearch', 'beamWidth':10, 'batch_size':256}, use_paddleocr=False)
text, ocr_bbox = ocr_bbox_rslt
dino_labled_img, label_coordinates, parsed_content_list = get_som_labeled_img(image_path, som_model, BOX_TRESHOLD = BOX_TRESHOLD, output_coord_in_ratio=True, ocr_bbox=ocr_bbox,draw_bbox_config=draw_bbox_config, caption_model_processor=caption_model_processor, ocr_text=text,use_local_semantics=True, iou_threshold=0.7, scale_img=False, batch_size=128)
return dino_labled_img, label_coordinates, parsed_content_list
def reformat_messages(parsed_content_list):
screen_info = ""
for idx, element in enumerate(parsed_content_list):
element['idx'] = idx
if element['type'] == 'text':
screen_info += f'''<p id={idx} class="text" alt="{element['content']}"> </p>\n'''
# screen_info += f'ID: {idx}, Text: {element["content"]}\n'
elif element['type'] == 'icon':
screen_info += f'''<img id={idx} class="icon" alt="{element['content']}"> </img>\n'''
# screen_info += f'ID: {idx}, Icon: {element["content"]}\n'
return screen_info
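# Illustrative example (not part of the original eval script): the screen_info string
# reformat_messages builds for a tiny, made-up parsed_content_list.
#   sample = [{"type": "text", "content": "File"}, {"type": "icon", "content": "search icon"}]
#   reformat_messages(sample) ->
#     <p id=0 class="text" alt="File"> </p>
#     <img id=1 class="icon" alt="search icon"> </img>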
PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT = '''Please generate the next move according to the UI screenshot and task instruction. You will be presented with a screenshot image. Also you will be given each bounding box's description in a list. To complete the task, You should choose a related bbox to click based on the bbox descriptions.
Task instruction: {}.
Here is the list of all detected bounding boxes by IDs and their descriptions: {}. Keep in mind the description for Text Boxes are likely more accurate than the description for Icon Boxes.
Requirement: 1. You should first give a reasonable description of the current screenshot, and give a short analysis of how can the user task be achieved. 2. Then make an educated guess of bbox id to click in order to complete the task based on the bounding boxes descriptions. 3. Your answer should follow the following format: {{"Analysis": xxx, "Click BBox ID": "y"}}. Do not include any other info. Some examples: {}. The task is to {}. Retrieve the bbox id where its description matches the task instruction. Now start your answer:'''
# PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT_v1 = "The instruction is to {}. \nHere is the list of all detected bounding boxes by IDs and their descriptions: {}. \nKeep in mind the description for Text Boxes are likely more accurate than the description for Icon Boxes. \n Requirement: 1. You should first give a reasonable description of the current screenshot, and give a step by step analysis of how can the user task be achieved. 2. Then make an educated guess of bbox id to click in order to complete the task using both the visual information from the screenshot image and the bounding boxes descriptions. 3. Your answer should follow the following format: {{'Analysis': 'xxx', 'Click BBox ID': 'y'}}. Please do not include any other info."
PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT_v1 = "The instruction is to {}. \nHere is the list of all detected bounding boxes by IDs and their descriptions: {}. \nKeep in mind the description for Text Boxes are likely more accurate than the description for Icon Boxes. \n Requirement: 1. You should first give a reasonable description of the current screenshot, and give a some analysis of how can the user instruction be achieved by a single click. 2. Then make an educated guess of bbox id to click in order to complete the task using both the visual information from the screenshot image and the bounding boxes descriptions. REMEMBER: the task instruction must be achieved by one single click. 3. Your answer should follow the following format: {{'Analysis': 'xxx', 'Click BBox ID': 'y'}}. Please do not include any other info."
FEWSHOT_EXAMPLE = '''Example 1: Task instruction: Next page. \n{"Analysis": "Based on the screenshot and icon descriptions, I should click on the next page icon, which is labeled with box ID x in the bounding box list", "Click BBox ID": "x"}\n\n
Example 2: Task instruction: Search on google. \n{"Analysis": "Based on the screenshot and icon descriptions, I should click on the 'Search' box, which is labeled with box ID y in the bounding box list", "Click BBox ID": "y"}'''
from azure.identity import AzureCliCredential, DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI
from models.utils import get_pred_phi3v, extract_dict_from_text, get_phi3v_model_dict
class GPT4XModel():
def __init__(self, model_name="gpt-4o-2024-05-13", use_managed_identity=False):
self.client = openai.OpenAI(
api_key=OPENAI_KEY,
)
self.model_name = model_name
self.override_generation_config = {"temperature": 0.0}  # assumed default; read later via ['temperature'] and updatable through set_generation_config
if model_name == 'phi35v':
self.model_dict = get_phi3v_model_dict()
def load_model(self):
pass
def set_generation_config(self, **kwargs):
self.override_generation_config.update(kwargs)
def ground_only_positive_phi35v(self, instruction, image):
if isinstance(image, str):
image_path = image
assert os.path.exists(image_path) and os.path.isfile(image_path), "Invalid input image path."
image = Image.open(image_path).convert('RGB')
assert isinstance(image, Image.Image), "Invalid input image."
base64_image = convert_pil_image_to_base64(image)
dino_labled_img, label_coordinates, parsed_content_list = omniparser_parse(image, image_path)
screen_info = reformat_messages(parsed_content_list)
prompt_origin = PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT.format(instruction, screen_info, FEWSHOT_EXAMPLE, instruction)
# prompt_origin = PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT_v1.format(instruction, screen_info)
# Use the get_pred_phi3v function to get predictions
icon_id, bbox, click_point, response_text = get_pred_phi3v(prompt_origin, (base64_image, dino_labled_img), label_coordinates, id_key='Click ID', model_dict=self.model_dict)
result_dict = {
"result": "positive",
"bbox": bbox,
"point": click_point,
"raw_response": response_text,
'dino_labled_img': dino_labled_img,
'screen_info': screen_info,
}
return result_dict
def ground_only_positive(self, instruction, image):
if isinstance(image, str):
image_path = image
assert os.path.exists(image_path) and os.path.isfile(image_path), "Invalid input image path."
image = Image.open(image_path).convert('RGB')
assert isinstance(image, Image.Image), "Invalid input image."
base64_image = convert_pil_image_to_base64(image)
dino_labled_img, label_coordinates, parsed_content_list = omniparser_parse(image, image_path)
screen_info = reformat_messages(parsed_content_list)
# prompt_origin = PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT.format(screen_info, FEWSHOT_EXAMPLE, instruction)
prompt_origin = PROMPT_TEMPLATE_SEECLICK_PARSED_CONTENT_v1.format(instruction, screen_info)
try:
response = self.client.chat.completions.create(
model=self.model_name,
messages=[
{
"role": "system",
"content": [
# {"type": "text", "text": "You are an expert in using electronic devices and interacting with graphic interfaces. You should not call any external tools."}
{"type": "text", "text": '''You are an expert at completing instructions on GUI screens.
You will be presented with two images. The first is the original screenshot. The second is the same screenshot with some numeric tags. You will also be provided with some descriptions of the bbox, and your task is to choose the numeric bbox idx you want to click in order to complete the user instruction.'''}
],
},
{
"role": "user",
"content": [
{
"type": "text",
"text": prompt_origin
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{base64_image}",
}
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{dino_labled_img}",
}
},
],
}
],
temperature=self.override_generation_config['temperature'],
max_tokens=2048,
)
response_text = response.choices[0].message.content
except BadRequestError as e:
print("OpenAI BadRequestError:", e)
return None
# Extract bounding box
# print("------")
# print(grounding_prompt)
print("------")
print(response_text)
# print("------")
# Try getting groundings
# bbox = extract_first_bounding_box(response_text)
# click_point = extract_first_point(response_text)
# if not click_point and bbox:
# click_point = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]
response_text = response_text.replace('```json', '').replace('```', '') #TODO: fix this
try:
response_text = ast.literal_eval(response_text)
icon_id = response_text['Click BBox ID']
bbox = label_coordinates[str(icon_id)]
click_point = [bbox[0] + bbox[2]/2, bbox[1] + bbox[3]/2]
except:
print('error parsing, use regex to parse!!!')
response_text = extract_dict_from_text(response_text)
icon_id = response_text['Click BBox ID']
bbox = label_coordinates[str(icon_id)]
click_point = [bbox[0] + bbox[2]/2, bbox[1] + bbox[3]/2]
result_dict = {
"result": "positive",
"bbox": bbox,
"point": click_point,
"raw_response": response_text,
'dino_labled_img': dino_labled_img,
'screen_info': screen_info,
}
return result_dict
def ground_allow_negative(self, instruction, image=None):
if isinstance(image, str):
image_path = image
assert os.path.exists(image_path) and os.path.isfile(image_path), "Invalid input image path."
image = Image.open(image_path).convert('RGB')
assert isinstance(image, Image.Image), "Invalid input image."
base64_image = convert_pil_image_to_base64(image)
try:
response = self.client.chat.completions.create(
model=self.model_name,
messages=[
{
"role": "system",
"content": [
{"type": "text", "text": "You are an expert in using electronic devices and interacting with graphic interfaces. You should not call any external tools."}
],
},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{base64_image}",
}
},
{
"type": "text",
"text": "You are asked to find the bounding box of an UI element in the given screenshot corresponding to a given instruction.\n"
"Don't output any analysis. Output your result in the format of [[x0,y0,x1,y1]], with x and y ranging from 0 to 1. \n"
"If such element does not exist, output only the text 'Target not existent'.\n"
"The instruction is:\n"
f"{instruction}\n"
}
],
}
],
temperature=self.override_generation_config['temperature'],
max_tokens=2048,
)
response_text = response.choices[0].message.content
except BadRequestError as e:
print("OpenAI BadRequestError:", e)
return {
"result": "failed"
}
# Extract bounding box
# print("------")
# print(grounding_prompt)
print("------")
print(response_text)
# print("------")
if "not existent" in response_text.lower():
return {
"result": "negative",
"bbox": None,
"point": None,
"raw_response": response_text
}
# Try getting groundings
bbox = extract_first_bounding_box(response_text)
click_point = extract_first_point(response_text)
if not click_point and bbox:
click_point = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]
result_dict = {
"result": "positive" if bbox or click_point else "negative",
"bbox": bbox,
"point": click_point,
"raw_response": response_text
}
return result_dict
def ground_with_uncertainty(self, instruction, image=None):
if isinstance(image, str):
image_path = image
assert os.path.exists(image_path) and os.path.isfile(image_path), "Invalid input image path."
image = Image.open(image_path).convert('RGB')
assert isinstance(image, Image.Image), "Invalid input image."
base64_image = convert_pil_image_to_base64(image)
try:
response = self.client.chat.completions.create(
model=self.model_name,
messages=[
{
"role": "system",
"content": [
{"type": "text", "text": "You are an expert in using electronic devices and interacting with graphic interfaces. You should not call any external tools."}
],
},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{base64_image}",
}
},
{
"type": "text",
"text": "You are asked to find the bounding box of an UI element in the given screenshot corresponding to a given instruction.\n"
"- If such element does not exist in the screenshot, output only the text 'Target not existent'."
"- If you are sure such element exists and you are confident in finding it, output your result in the format of [[x0,y0,x1,y1]], with x and y ranging from 0 to 1. \n"
"Please find out the bounding box of the UI element corresponding to the following instruction: \n"
"The instruction is:\n"
f"{instruction}\n"
}
],
}
],
temperature=self.override_generation_config['temperature'],
max_tokens=2048,
)
response_text = response.choices[0].message.content
except BadRequestError as e:
print("OpenAI BadRequestError:", e)
return {
"result": "failed"
}
# Extract bounding box
# print("------")
# print(grounding_prompt)
print("------")
print(response_text)
# print("------")
if "not found" in response_text.lower():
return {
"result": "negative",
"bbox": None,
"point": None,
"raw_response": response_text
}
# Try getting groundings
bbox = extract_first_bounding_box(response_text)
click_point = extract_first_point(response_text)
if not click_point and bbox:
click_point = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]
result_dict = {
"result": "positive",
"bbox": bbox,
"point": click_point,
"raw_response": response_text
}
return result_dict
def extract_first_bounding_box(text):
# Regular expression pattern to match the first bounding box in the format [[x0,y0,x1,y1]]
# This captures the entire float value using \d for digits and optional decimal points
pattern = r"\[\[(\d+\.\d+|\d+),(\d+\.\d+|\d+),(\d+\.\d+|\d+),(\d+\.\d+|\d+)\]\]"
# Search for the first match in the text
match = re.search(pattern, text, re.DOTALL)
if match:
# Capture the bounding box coordinates as floats
bbox = [float(match.group(1)), float(match.group(2)), float(match.group(3)), float(match.group(4))]
return bbox
return None
def extract_first_point(text):
# Regular expression pattern to match the first point in the format [[x0,y0]]
# This captures the entire float value using \d for digits and optional decimal points
pattern = r"\[\[(\d+\.\d+|\d+),(\d+\.\d+|\d+)\]\]"
# Search for the first match in the text
match = re.search(pattern, text, re.DOTALL)
if match:
point = [float(match.group(1)), float(match.group(2))]
return point
return None
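# Illustrative usage of the two regex helpers above (not part of the original file):
#   extract_first_bounding_box("The target is at [[0.12,0.30,0.25,0.42]].")  -> [0.12, 0.3, 0.25, 0.42]
#   extract_first_point("Click [[0.5,0.5]] to proceed.")                     -> [0.5, 0.5]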
from typing import Optional
import gradio as gr
import numpy as np
import torch
from PIL import Image
import io
import base64, os
from util.utils import check_ocr_box, get_yolo_model, get_caption_model_processor, get_som_labeled_img
import torch
from PIL import Image
yolo_model = get_yolo_model(model_path='weights/OmniParser-v2/icon_detect/model.pt')
caption_model_processor = get_caption_model_processor(model_name="florence2", model_name_or_path="weights/OmniParser-v2/icon_caption")
# caption_model_processor = get_caption_model_processor(model_name="blip2", model_name_or_path="weights/icon_caption_blip2")
MARKDOWN = """
# OmniParser for Pure Vision Based General GUI Agent 🔥
<div>
<a href="https://arxiv.org/pdf/2408.00203">
<img src="https://img.shields.io/badge/arXiv-2408.00203-b31b1b.svg" alt="Arxiv" style="display:inline-block;">
</a>
</div>
OmniParser is a screen parsing tool to convert general GUI screen to structured elements.
"""
DEVICE = torch.device('cuda')
# @spaces.GPU
# @torch.inference_mode()
# @torch.autocast(device_type="cuda", dtype=torch.bfloat16)
def process(
image_input,
box_threshold,
iou_threshold,
use_paddleocr,
imgsz
) -> Optional[Image.Image]:
image_save_path = 'imgs/saved_image_demo.png'
image_input.save(image_save_path)
image = Image.open(image_save_path)
box_overlay_ratio = image.size[0] / 3200
draw_bbox_config = {
'text_scale': 0.8 * box_overlay_ratio,
'text_thickness': max(int(2 * box_overlay_ratio), 1),
'text_padding': max(int(3 * box_overlay_ratio), 1),
'thickness': max(int(3 * box_overlay_ratio), 1),
}
# import pdb; pdb.set_trace()
ocr_bbox_rslt, is_goal_filtered = check_ocr_box(image_save_path, display_img = False, output_bb_format='xyxy', goal_filtering=None, easyocr_args={'paragraph': False, 'text_threshold':0.9}, use_paddleocr=use_paddleocr)
text, ocr_bbox = ocr_bbox_rslt
# print('prompt:', prompt)
dino_labled_img, label_coordinates, parsed_content_list = get_som_labeled_img(image_save_path, yolo_model, BOX_TRESHOLD = box_threshold, output_coord_in_ratio=True, ocr_bbox=ocr_bbox,draw_bbox_config=draw_bbox_config, caption_model_processor=caption_model_processor, ocr_text=text,iou_threshold=iou_threshold, imgsz=imgsz,)
image = Image.open(io.BytesIO(base64.b64decode(dino_labled_img)))
print('finish processing')
parsed_content_list = '\n'.join([f'icon {i}: ' + str(v) for i,v in enumerate(parsed_content_list)])
# parsed_content_list = str(parsed_content_list)
return image, str(parsed_content_list)
with gr.Blocks() as demo:
gr.Markdown(MARKDOWN)
with gr.Row():
with gr.Column():
image_input_component = gr.Image(
type='pil', label='Upload image')
# set the threshold for removing the bounding boxes with low confidence, default is 0.05
box_threshold_component = gr.Slider(
label='Box Threshold', minimum=0.01, maximum=1.0, step=0.01, value=0.05)
# set the threshold for removing the bounding boxes with large overlap, default is 0.1
iou_threshold_component = gr.Slider(
label='IOU Threshold', minimum=0.01, maximum=1.0, step=0.01, value=0.1)
use_paddleocr_component = gr.Checkbox(
label='Use PaddleOCR', value=True)
imgsz_component = gr.Slider(
label='Icon Detect Image Size', minimum=640, maximum=1920, step=32, value=640)
submit_button_component = gr.Button(
value='Submit', variant='primary')
with gr.Column():
image_output_component = gr.Image(type='pil', label='Image Output')
text_output_component = gr.Textbox(label='Parsed screen elements', placeholder='Text Output')
submit_button_component.click(
fn=process,
inputs=[
image_input_component,
box_threshold_component,
iou_threshold_component,
use_paddleocr_component,
imgsz_component
],
outputs=[image_output_component, text_output_component]
)
# demo.launch(debug=False, show_error=True, share=True)
demo.launch(share=True, server_port=7861, server_name='0.0.0.0')