Commit 0d97cc8c authored by Sugon_ldc

add new model
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
include eiseg/config/*
include eiseg/resource/*
include eiseg/util/translate/*
Simplified Chinese | [English](README_EN.md)
<div align="center">
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/179460858-7dfb19b1-cabf-4f8a-9e81-eb15b6cc7d5f.png" align="middle" alt="LOGO" width = "500" />
</p>
**An efficient interactive segmentation annotation tool based on PaddlePaddle.**
[![Python 3.6](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/downloads/release/python-360/) [![PaddlePaddle 2.2](https://img.shields.io/badge/paddlepaddle-2.2-blue.svg)](https://www.python.org/downloads/release/python-360/) [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE) [![Downloads](https://pepy.tech/badge/eiseg)](https://pepy.tech/project/eiseg)
</div>
<div align="center">
<table>
<tr>
<td><img src="https://user-images.githubusercontent.com/71769312/179209324-eb074e65-4a32-4568-a1d3-7680331dbf22.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209332-e3bcb1f0-d4d9-44e1-8b2a-8d7fac8996d4.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209312-0febfe78-810d-49b2-9169-eb15f0523af7.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209340-d04a0cec-d9a7-4962-93f1-b4953c6c9f39.gif"></td>
</tr>
<tr>
<td align="center">Generic segmentation</td>
<td align="center">Portrait segmentation</td>
<td align="center">Remote sensing building segmentation</td>
<td align="center">Medical segmentation</td>
</tr>
<tr>
<td><img src="https://user-images.githubusercontent.com/71769312/185751161-f23d0c1b-62c5-4cd2-903f-502037e353a8.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209328-87174780-6c6f-4b53-b2a2-90d289ac1c8a.gif"></td>
<td colspan="2"><img src="https://user-images.githubusercontent.com/71769312/179209342-5b75e61e-d9cf-4702-ba3e-971f47a10f5f.gif"></td>
</tr>
<tr>
<td align="center">Industrial quality inspection</td>
<td align="center">Generic video segmentation</td>
<td align="center" colspan="2">3D medical segmentation</td>
</tr>
</table>
</div>
## <img src="../docs/images/seg_news_icon.png" width="20"/> 最新动态
* [2022-09-16] :fire: EISeg使用的X光胸腔标注模型MUSCLE已经被MICCAI 2022接收,具体可参见[MUSCLE](docs/MUSCLE.md), 标注模型下载[地址](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_resnet50_deeplab_chest_xray.zip).
* [2022-07-20] :fire: EISeg 1.0版本发布!
- 新增用于通用场景视频交互式分割能力,以EISeg交互式分割模型及[MiVOS](https://github.com/hkchengrex/MiVOS)算法为基础,全面提升视频标注体验。详情使用请参考[视频标注](docs/video.md)
- 新增用于腹腔多器官及CT椎骨数据3D分割能力,并提供3D可视化工具,给予医疗领域3D标注新的思路。详情使用请参考[3D标注](docs/video.md)
## <img src="https://user-images.githubusercontent.com/48054808/157795569-9fc77c85-732f-4870-9be0-99a7fe2cff27.png" width="20"/> 简介
EISeg(Efficient Interactive Segmentation)基于飞桨开发的一个高效智能的交互式分割标注软件。它涵盖了通用、人像、遥感、医疗、视频等不同方向的高质量交互式分割模型。 另外,将EISeg获取到的标注应用到PaddleSeg提供的其他分割模型进行训练,便可得到定制化场景的高精度模型,打通分割任务从数据标注到模型训练及预测的全流程。
![4a9ed-a91y1](https://user-images.githubusercontent.com/71769312/141130688-e1529c27-aba8-4bf7-aad8-dda49808c5c7.gif)
## <img src="../docs/images/feature.png" width="20"/> 特性
* 高效的半自动标注工具,已上线多个Top标注平台
* 覆盖遥感、医疗、视频、3D医疗等众多垂类场景
* 多平台兼容,简单易用,支持多类别标签管理
## <img src="../docs/images/chat.png" width="20"/> 技术交流
* 如果您对EISeg有任何问题和建议,欢迎在[GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues)提issue。
* 欢迎您加入EISeg微信群,和大家交流讨论、一起共建EISeg,而且可以**领取重磅学习大礼包🎁**
* 🔥 获取深度学习视频教程、图像分割论文合集
* 🔥 获取PaddleSeg的历次直播视频,最新发版信息和直播动态
* 🔥 获取PaddleSeg自建的人像分割数据集,整理的开源数据集
* 🔥 获取PaddleSeg在垂类场景的预训练模型和应用合集,涵盖人像分割、交互式分割等等
* 🔥 获取PaddleSeg的全流程产业实操范例,包括质检缺陷分割、抠图Matting、道路分割等等
<div align="center">
<img src="https://user-images.githubusercontent.com/35907364/184841582-84a3c12d-0b50-48cc-9762-11fdd56b59eb.jpg" width = "200" />
</div>
## <img src="../docs/images/teach.png" width="20"/> 使用教程
* [安装说明](docs/install.md)
* [图像标注](docs/image.md)
* [视频及3D医疗标注](docs/video.md)
* [遥感特色功能](docs/remote_sensing.md)
* [医疗特色功能](docs/medical.md)
* [数据处理脚本文档](docs/tools.md)
## <img src="../docs/images/anli.png" width="20"/> 更新历史
- 2022.07.20 **1.0.0**:【1】新增交互式视频分割功能【2】新增腹腔多器官3D标注模型【3】新增CT椎骨3D标注模型。
- 2022.04.10 **0.5.0**:【1】新增chest_xray模型;【2】新增MRSpineSeg模型;【3】新增铝板质检标注模型;【4】修复保存shp时可能坐标出错。
- 2021.11.16 **0.4.0**:【1】将动态图预测转换成静态图预测,单次点击速度提升十倍;【2】新增遥感图像标注功能,支持多光谱数据通道的选择;【3】支持大尺幅数据的切片(多宫格)处理;【4】新增医疗图像标注功能,支持读取dicom的数据格式,支持选择窗宽和窗位。
- 2021.09.16 **0.3.0**:【1】初步完成多边形编辑功能,支持对交互标注的结果进行编辑;【2】支持中/英界面;【3】支持保存为灰度/伪彩色标签和COCO格式;【4】界面拖动更加灵活;【5】标签栏可拖动,生成mask的覆盖顺序由上往下覆盖。
- 2021.07.07 **0.2.0**:新增contrib:EISeg,可实现人像和通用图像的快速交互式标注。
## Contributors
- Thanks to developers [Zhiliang Yu](https://github.com/yzl19940819), [Yizhou Chen](https://github.com/geoyee), [Lin Han](https://github.com/linhandev), [Jinrui Ding](https://github.com/Thudjr), [Yiakwy](https://github.com/yiakwy), [GT](https://github.com/GT-ZhangAcer), [Youssef Harby](https://github.com/Youssef-Harby), [Nick Nie](https://github.com/niecongchong) and others, and to the algorithm support of [RITM](https://github.com/saic-vul/ritm_interactive_segmentation) and [MiVOS](https://github.com/hkchengrex/MiVOS).
- Thanks to [LabelMe](https://github.com/wkentaro/labelme) and [LabelImg](https://github.com/tzutalin/labelImg) for their label design.
- Thanks to [Weibin Liao](https://github.com/MrBlankness) for providing the pretrained ResNet50_DeeplabV3+ model.
- Thanks to [Junjie Guo](https://github.com/Guojunjie08) and [Jiajun Feng](https://github.com/richarddddd198) for their technical support on the vertebra model.
## Citation
If our project helps your research, please consider citing:
```latex
@article{hao2022eiseg,
title={EISeg: An Efficient Interactive Segmentation Tool based on PaddlePaddle},
author={Hao, Yuying and Liu, Yi and Chen, Yizhou and Han, Lin and Peng, Juncai and Tang, Shiyu and Chen, Guowei and Wu, Zewu and Chen, Zeyu and Lai, Baohua},
journal={arXiv e-prints},
pages={arXiv--2210},
year={2022}
}
@inproceedings{hao2021edgeflow,
title={Edgeflow: Achieving practical interactive segmentation with edge-guided flow},
author={Hao, Yuying and Liu, Yi and Wu, Zewu and Han, Lin and Chen, Yizhou and Chen, Guowei and Chu, Lutao and Tang, Shiyu and Yu, Zhiliang and Chen, Zeyu and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={1551--1560},
year={2021}
}
```
# EISeg
[![Python 3.6](https://camo.githubusercontent.com/75b8738e1bdfe8a832711925abbc3bd449c1e7e9260c870153ec761cad8dde40/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f707974686f6e2d332e362b2d626c75652e737667)](https://www.python.org/downloads/release/python-360/) [![PaddlePaddle 2.2](https://camo.githubusercontent.com/f792707056617d58db17dca769c9a62832156e183b6eb29dde812b34123c2b18/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f706164646c65706164646c652d322e322d626c75652e737667)](https://www.python.org/downloads/release/python-360/) [![License](https://camo.githubusercontent.com/9330efc6e55b251db7966bffaec1bd48e3aae79348121f596d541991cfec8858/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d417061636865253230322d626c75652e737667)](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/EISeg/LICENSE) [![Downloads](https://camo.githubusercontent.com/d3d7e08bac205f34cee998959f85ffcbe6a9aca4129c20d7a5ec449848826d48/68747470733a2f2f706570792e746563682f62616467652f6569736567)](https://pepy.tech/project/eiseg)
[Chinese (Simplified)](README.md) | [English](README_EN.md) | Arabic
## Latest Developments
- Our paper on interactive segmentation, [EdgeFlow](https://arxiv.org/abs/2109.09406), has been accepted by the ICCV 2021 Workshop.
- EISeg 0.4.0 is released, supporting static-graph inference with fully improved interaction speed, and newly adding remote sensing annotation, medical annotation, and slicing of large images into smaller, workable tiles.
## Introduction
EISeg (Efficient Interactive Segmentation), built on [RITM](https://github.com/saic-vul/ritm_interactive_segmentation) and [EdgeFlow](https://arxiv.org/abs/2109.09406), is an efficient and intelligent interactive segmentation annotation software developed based on PaddlePaddle. It covers a large number of high-quality segmentation models in different directions such as generic scenes, portrait, remote sensing, and medical imaging, making rapid, low-cost annotation of semantic and instance labels convenient. In addition, by applying the annotations obtained with EISeg to train the other segmentation models provided by PaddleSeg, high-performance models for customized scenarios can be created, integrating the whole process of segmentation tasks from data annotation to model training and inference.
[![4a9ed-a91y1](https://user-images.githubusercontent.com/71769312/141130688-e1529c27-aba8-4bf7-aad8-dda49808c5c7.gif)](https://user-images.githubusercontent.com/71769312/141130688-e1529c27-aba8-4bf7-aad8-dda49808c5c7.gif)
## Model Preparation
Please download the model parameters before using EISeg. EISeg 0.4.0 provides four categories of models trained on COCO+LVIS, large-scale portrait data, mapping_challenge, and LiTS (the Liver Tumor Segmentation Challenge), meeting the annotation needs of generic scenes and portraits as well as buildings and livers in medical images. The model architecture corresponds to the network selection module in the EISeg interactive tool, and users need to select a network architecture and load the matching parameters according to their own needs.
| Model Type | Applicable Scenario | Model Architecture | Download Link |
| ---------------------- | ---------------------------------------- | ------------------ | ------------------------------------------------------------ |
| High Performance Model | Generic image annotation | HRNet18_OCR64 | [static_hrnet18_ocr64_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_cocolvis.zip) |
| Lightweight Model | Generic image annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_cocolvis.zip) |
| High Performance Model | Portrait annotation | HRNet18_OCR64 | [static_hrnet18_ocr64_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_human.zip) |
| Lightweight Model | Portrait annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_human.zip) |
| High Performance Model | Generic image annotation | EdgeFlow | [static_edgeflow_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_edgeflow_cocolvis.zip) |
| Lightweight Model | Remote sensing building annotation | HRNet18s_OCR48 | [static_hrnet18_ocr48_rsbuilding_instance](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr48_rsbuilding_instance.zip) |
| Lightweight Model | Medical liver annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_lits](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) |
**Note**: The downloaded model architecture file `*.pdmodel` and its matching parameter file `*.pdiparams` must be placed in the same folder on your machine. When loading a model, you only need to select the path of the `*.pdiparams` parameter file; the `*.pdmodel` file is loaded automatically. When using the `EdgeFlow` model, please turn off `Use Mask`; when using the other models, check `Use Mask`.
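For readers who want to script against these static models outside the GUI, the sketch below is a minimal, illustrative example of loading such a `*.pdmodel`/`*.pdiparams` pair with the Paddle Inference API. The file names and the dummy input shape are placeholders, and real EISeg interactive models also expect extra click inputs, each of which would be fed through its own handle in the same way.
```python
import numpy as np
import paddle.inference as paddle_infer

# Both files must sit in the same folder; the names below are placeholders.
config = paddle_infer.Config("static_hrnet18s_ocr48_cocolvis.pdmodel",
                             "static_hrnet18s_ocr48_cocolvis.pdiparams")
predictor = paddle_infer.create_predictor(config)

# Feed a dummy image batch (the shape here is illustrative only).
input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
fake_image = np.random.rand(1, 3, 400, 400).astype("float32")
input_handle.reshape(fake_image.shape)
input_handle.copy_from_cpu(fake_image)

predictor.run()

# Fetch the first output tensor back to host memory.
output_name = predictor.get_output_names()[0]
output = predictor.get_output_handle(output_name).copy_to_cpu()
print(output.shape)
```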
## Installation
EISeg offers multiple installation methods, of which [pip](#PIP) and [running the source code](#Clone) support Windows, macOS, and Linux. To avoid compatibility problems, installing inside a virtual environment created by Conda is recommended.
Version requirements:
- PaddlePaddle >= 2.2.0
For details on installing PaddlePaddle, please refer to the [official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/windows-pip.html).
### Clone
Clone PaddleSeg to your machine via git:
```
git clone https://github.com/PaddlePaddle/PaddleSeg.git
```
Install the required dependencies (if you need GDAL and SimpleITK, please refer to **Vertical Segmentation** below for installation):
```
pip install -r requirements.txt
```
After installing the dependencies, launch EISeg by running eiseg:
```
cd PaddleSeg\EISeg
python -m eiseg
```
Or you can run exe.py under eiseg:
```
cd PaddleSeg\EISeg\eiseg
python exe.py
```
### PIP
Install via pip as follows:
```
pip install eiseg
```
pip installs the dependencies automatically. After that, enter the following on the command line:
```
eiseg
```
EISeg now launches.
## Usage
After opening the software, complete the following settings before annotating:
1. **Load model parameters**
Select a suitable network and load the corresponding model parameters. EISeg 0.4.0 converts dynamic-graph inference into static-graph inference, comprehensively improving single-click inference speed. After downloading and unzipping a model, the architecture file `*.pdmodel` and its parameter file `*.pdiparams` must be placed in the same directory, and only the path of the `*.pdiparams` file needs to be selected when loading. Initializing a static model takes some time; please wait until it finishes loading. Correctly loaded model parameters are recorded under `Recent model parameters`, where they can be switched easily, and the parameters in use on exit are loaded automatically the next time the software opens.
2. **Open images**
Open a folder containing the images to annotate; they appear in the `Data list` on the main screen.
3. **Add or import labels**
New labels can be created via `Add label`; each label has four columns: pixel value, description, color, and delete. Newly created labels can be saved to a txt file via `Save label list`, and other collaborators can import them via `Load label list`. Imported labels are restored automatically when the software restarts.
4. **Auto save**
Choose a suitable save folder and enable `Auto save`, so that each finished image is saved automatically when you switch images.
Once all of the above is set up, you can start annotating. Keyboard shortcuts are also available to speed up your work, and they can be customized by pressing `E`.
| Shortcut | Function |
| --------------------------------- | ------------------------------ |
| Left mouse button | Add a positive sample point |
| Right mouse button | Add a negative sample point |
| Middle mouse button | Pan the image |
| Ctrl + middle mouse button (wheel) | Zoom the image |
| S | Previous image |
| F | Next image |
| Space | Finish annotation |
| Ctrl+Z | Undo |
| Ctrl+Shift+Z | Clear |
| Ctrl+Y | Redo |
| Ctrl+A | Open an image |
| Shift+A | Open a folder |
| E | Open the shortcut list |
| Backspace | Delete a polygon |
| Double-click (point) | Delete a point |
| Double-click (edge) | Add a point |
## New Features
- **Polygons**
  - Press `Space` to complete an interactive annotation; the polygon boundary then appears.
  - When you need to continue interacting inside a polygon, press `Space` to switch back to interactive mode, after which the polygon can no longer be selected or changed.
  - Polygons can be deleted. Drag an anchor point with the left mouse button, double-click an anchor point to delete it, and double-click an edge to add an anchor point.
  - With `Keep largest connected component` enabled, only the largest region in the image is kept; the remaining small regions are neither displayed nor saved.
- **Save formats**
  - Polygons are recorded and loaded automatically once `Save JSON` or `Save COCO` is set.
  - With no save path specified, results are saved by default to a label folder under the current image folder.
  - If there are images with the same name but different extensions, you can enable `Labels and images with the same extensions`.
  - You can also save grayscale, pseudo-color, or matting labels; see tools 7-9 on the toolbar.
- **Generating masks**
  - Labels can be reordered by dragging their second column; the final mask is generated by overwriting from top to bottom according to the label list.
- **Interface modules**
  - You can choose which interface modules to display under `Display`; the state and position of the interface modules on a normal exit are recorded and restored automatically the next time the software opens.
- **Vertical segmentation**
EISeg now supports remote sensing and medical image segmentation; additional dependencies are required to enable them.
  - Install GDAL for remote sensing image segmentation; please refer to [Remote Sensing Segmentation](docs/remote_sensing_en.md).
  - Install SimpleITK for medical image segmentation; please refer to [Medical Image Segmentation](docs/medical_en.md).
- **Scripting tools**
EISeg currently provides scripting tools covering annotation-to-PaddleX dataset conversion, COCO-format checks, semantic-to-instance label conversion, and more. See [Using the Scripting Tools](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/EISeg/) for details.
## Version Updates
You can find the latest updates [here](README_EN.md#version-updates).
## Contributors
Our gratitude goes to developers including [Yuying Hao](https://github.com/haoyuying), [Lin Han](https://github.com/linhandev), [Yizhou Chen](https://github.com/geoyee), [Yiakwy](https://github.com/yiakwy), [GT](https://github.com/GT-ZhangAcer), [Youssef Harby](https://github.com/Youssef-Harby), [Zhiliang Yu](https://github.com/yzl19940819), [Nick Nie](https://github.com/niecongchong), and to the support of [RITM](https://github.com/saic-vul/ritm_interactive_segmentation).
## Citation
If you find our project useful in your research, please consider citing:
```
@article{hao2021edgeflow,
title={EdgeFlow: Achieving Practical Interactive Segmentation with Edge-Guided Flow},
author={Hao, Yuying and Liu, Yi and Wu, Zewu and Han, Lin and Chen, Yizhou and Chen, Guowei and Chu, Lutao and Tang, Shiyu and Yu, Zhiliang and Chen, Zeyu and others},
journal={arXiv preprint arXiv:2109.09406},
year={2021}
}
```
[Chinese (Simplified)](README.md) | English
<div align="center">
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/179460858-7dfb19b1-cabf-4f8a-9e81-eb15b6cc7d5f.png" align="middle" alt="LOGO" width = "500" />
</p>
**An Efficient Interactive Segmentation Tool based on [PaddlePaddle](https://github.com/paddlepaddle/paddle).**
[![Python 3.6](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/downloads/release/python-360/) [![PaddlePaddle 2.2](https://img.shields.io/badge/paddlepaddle-2.2-blue.svg)](https://www.python.org/downloads/release/python-360/) [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE) [![Downloads](https://pepy.tech/badge/eiseg)](https://pepy.tech/project/eiseg)
</div>
<div align="center">
<table>
<tr>
<td><img src="https://user-images.githubusercontent.com/71769312/179209324-eb074e65-4a32-4568-a1d3-7680331dbf22.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209332-e3bcb1f0-d4d9-44e1-8b2a-8d7fac8996d4.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209312-0febfe78-810d-49b2-9169-eb15f0523af7.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209340-d04a0cec-d9a7-4962-93f1-b4953c6c9f39.gif"></td>
</tr>
<tr>
<td align="center">Generic segmentation</td>
<td align="center">Human segmentation</td>
<td align="center">RS building segmentation</td>
<td align="center">Medical segmentation</td>
</tr>
<tr>
<td><img src="https://user-images.githubusercontent.com/71769312/185751161-f23d0c1b-62c5-4cd2-903f-502037e353a8.gif"></td>
<td><img src="https://user-images.githubusercontent.com/71769312/179209328-87174780-6c6f-4b53-b2a2-90d289ac1c8a.gif"></td>
<td colspan="2"><img src="https://user-images.githubusercontent.com/71769312/179209342-5b75e61e-d9cf-4702-ba3e-971f47a10f5f.gif"></td>
</tr>
<tr>
<td align="center">Industrial quality inspection</td>
<td align="center">Generic video segmentation</td>
<td align="center" colspan="2">3D medical segmentation</td>
</tr>
</table>
</div>
## <img src="../docs/images/seg_news_icon.png" width="20"/> Latest Developments
* [2022-09-16] :fire: MUSCLE, the X-ray chest annotation model used by EISeg, has been accepted by MICCAI 2022. For details, please refer to [MUSCLE](docs/MUSCLE_en.md); the model can be downloaded [here](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_resnet50_deeplab_chest_xray.zip).
* [2022-07-20] :fire: EISeg 1.0 is released!
- Added interactive video object segmentation for general scenes; this work is based on the EISeg interactive segmentation model and [MiVOS](https://github.com/hkchengrex/MiVOS).
- Added 3D segmentation capabilities for abdominal multi-organ and CT vertebral data, along with 3D visualization tools. For details, please refer to [3D Annotations](docs/video.md).
## <img src="https://user-images.githubusercontent.com/48054808/157795569-9fc77c85-732f-4870-9be0-99a7fe2cff27.png" width="20"/> Introduction
EISeg (Efficient Interactive Segmentation) is an efficient and intelligent interactive segmentation annotation software developed based on PaddlePaddle. It covers a large number of high-quality segmentation models in different directions such as generic scenarios, portrait, remote sensing, medical treatment, video, etc., providing convenience to the rapid annotation of semantic and instance labels with reduced cost. In addition, by applying the annotations obtained by EISeg to other segmentation models provided by PaddleSeg for training, high-performance models with customized scenarios can be created, integrating the whole process of segmentation tasks from data annotation to model training and inference.
[![4a9ed-a91y1](https://user-images.githubusercontent.com/71769312/141130688-e1529c27-aba8-4bf7-aad8-dda49808c5c7.gif)](https://user-images.githubusercontent.com/71769312/141130688-e1529c27-aba8-4bf7-aad8-dda49808c5c7.gif)
## <img src="../docs/images/chat.png" width="20"/> Community
* If you have any problems or suggestions regarding EISeg, please open an issue on [GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues).
* Welcome to join the EISeg WeChat group:
<div align="center">
<img src="https://user-images.githubusercontent.com/35907364/184841582-84a3c12d-0b50-48cc-9762-11fdd56b59eb.jpg" width = "200" />
</div>
## <img src="../docs/images/teach.png" width="20"/> Tutorials
* [Installation](docs/install_en.md)
* [Image Annotation](docs/image_en.md)
* [Video Annotation](docs/video_en.md)
* [Remote Sensing](docs/remote_sensing_en.md)
* [Medical Treatment](docs/medical_en.md)
## <img src="../docs/images/anli.png" width="20"/> Version Updates
- 2022.07.20 **1.0.0**: 【1】Added interactive video object segmentation. 【2】Added a 3D annotation model for abdominal multi-organ data. 【3】Added a 3D annotation model for CT vertebrae.
- 2022.04.10 **0.5.0**: 【1】Added the chest_xray interactive model; 【2】Added the MRSpineSeg interactive model; 【3】Added an industrial quality-inspection model; 【4】Fixed a geo-transform / CRS error when saving shapefiles.
- 2021.12.14 **0.4.1**: 【1】Fixed a crash bug; 【2】Added post-labeling processing for remote sensing building images.
- 2021.11.16 **0.4.0**: 【1】Converted dynamic-graph inference into static-graph inference, with a tenfold increase in single-click speed; 【2】Added remote sensing image labeling, supporting selection of multi-spectral data channels; 【3】Supported sliced (multi-grid) processing of large-size data; 【4】Added medical image labeling, supporting the dicom format and selection of window width and window level.
- 2021.09.16 **0.3.0**: 【1】Completed polygon editing, with support for editing the results of interactive annotation; 【2】Supported CH/EN interfaces; 【3】Supported saving grayscale/pseudo-color labels and COCO format; 【4】Made interface dragging more flexible; 【5】Made the label bar draggable; the generated mask is overwritten from top to bottom.
- 2021.07.07 **0.2.0**: Added contrib: EISeg, which enables rapid interactive annotation of portrait and generic images.
## Contributors
- Our gratitude goes to Developers including [Zhiliang Yu](https://github.com/yzl19940819), [Yizhou Chen](https://github.com/geoyee), [Lin Han](https://github.com/linhandev), [Jinrui Ding](https://github.com/Thudjr), [Yiakwy](https://github.com/yiakwy), [GT](https://github.com/GT-ZhangAcer), [Youssef Harby](https://github.com/Youssef-Harby), [Nick Nie](https://github.com/niecongchong) and the support of [RITM](https://github.com/saic-vul/ritm_interactive_segmentation) and [MiVOS](https://github.com/hkchengrex/MiVOS).
- Thanks to [LabelMe](https://github.com/wkentaro/labelme) and [LabelImg](https://github.com/tzutalin/labelImg) for their label design.
- Thanks to [Weibin Liao](https://github.com/MrBlankness) for providing the pretrained ResNet50_DeeplabV3+ model.
- Thanks to [Junjie Guo](https://github.com/Guojunjie08) and [Jiajun Feng](https://github.com/richarddddd198) for their support on the MRSpineSeg model.
## Citation
If you find our project useful in your research, please consider citing:
```
@article{hao2022eiseg,
title={EISeg: An Efficient Interactive Segmentation Tool based on PaddlePaddle},
author={Hao, Yuying and Liu, Yi and Chen, Yizhou and Han, Lin and Peng, Juncai and Tang, Shiyu and Chen, Guowei and Wu, Zewu and Chen, Zeyu and Lai, Baohua},
journal={arXiv e-prints},
pages={arXiv--2210},
year={2022}
}
@inproceedings{hao2021edgeflow,
title={Edgeflow: Achieving practical interactive segmentation with edge-guided flow},
author={Hao, Yuying and Liu, Yi and Wu, Zewu and Han, Lin and Chen, Yizhou and Chen, Guowei and Chu, Lutao and Tang, Shiyu and Yu, Zhiliang and Chen, Zeyu and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={1551--1560},
year={2021}
}
```
<div align="center">
<img src="https://user-images.githubusercontent.com/50255927/189930324-0f3992cd-47f8-487c-b20e-5a59f28f978f.png" align="middle" alt="LOGO" height="60"/><img src="https://user-images.githubusercontent.com/35907364/179460858-7dfb19b1-cabf-4f8a-9e81-eb15b6cc7d5f.png" align="middle" alt="LOGO" height="60"/><img src="https://user-images.githubusercontent.com/50255927/189930342-d32b90e5-ef80-44fb-9eab-c9df25ca0d12.png" align="middle" alt="LOGO" height="60" />
</div>
# MUSCLE - MICCAI 2022
This is an introduction to the paper "MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts".
The paper was published at MICCAI 2022.
## Introduction
The goal of MUSCLE is to improve the performance of deep learning on medical image analysis tasks by pre-training a backbone network.
All code for the paper is implemented with the PaddlePaddle framework.
## Framework
![image](https://user-images.githubusercontent.com/50255927/189317770-c8c9e866-beb2-4eb5-8116-21ab00850ef0.png)
MUSCLE aggregates multiple X-ray image datasets collected from different human body parts and targets a variety of X-ray image analysis tasks.
We propose a Multi-Dataset Momentum Contrastive Representation Learning (MD-MoCo) module and a Multi-task Continual Learning module
to pre-train the backbone network of a deep learning framework in a self-supervised continual learning manner.
The pre-trained model can then be fine-tuned on a target task with a task-specific head and achieve excellent performance.
## Datasets
<table class="tg">
<thead>
<tr>
<th class="tg-cly1">Datasets</th>
<th class="tg-cly1">Body Part</th>
<th class="tg-cly1">Task</th>
<th class="tg-cly1">Train</th>
<th class="tg-cly1">Valid</th>
<th class="tg-cly1">Test</th>
<th class="tg-cly1">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-nrix" colspan="7">Only Used for the first step (MD-MoCo) of MUSCLE</td>
</tr>
<tr>
<td class="tg-cly1">NIHCC</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">112,120</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">112,120</td>
</tr>
<tr>
<td class="tg-cly1">China-Set-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">661</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">661</td>
</tr>
<tr>
<td class="tg-cly1">Montgomery-Set-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">138</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">138</td>
</tr>
<tr>
<td class="tg-cly1">Indiana-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">7,470</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">7,470</td>
</tr>
<tr>
<td class="tg-cly1">RSNA Bone Age</td>
<td class="tg-nrix">Hand</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">10,811</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">10,811</td>
</tr>
<tr>
<td class="tg-nrix" colspan="7">Used for all three steps of MUSCLE</td>
</tr>
<tr>
<td class="tg-cly1">Pneumonia</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Classification</td>
<td class="tg-cly1">4,686</td>
<td class="tg-cly1">585</td>
<td class="tg-cly1">585</td>
<td class="tg-cly1">5,856</td>
</tr>
<tr>
<td class="tg-cly1">MURA</td>
<td class="tg-nrix">Various Bone</td>
<td class="tg-nrix">Classification</td>
<td class="tg-cly1">32,013</td>
<td class="tg-cly1">3,997</td>
<td class="tg-cly1">3,997</td>
<td class="tg-cly1">40,005</td>
</tr>
<tr>
<td class="tg-cly1">Chest Xray Masks and labels</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Segmentation</td>
<td class="tg-cly1">718</td>
<td class="tg-cly1">89</td>
<td class="tg-cly1">89</td>
<td class="tg-cly1">896</td>
</tr>
<tr>
<td class="tg-cly1">TBX</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Detection</td>
<td class="tg-cly1">640</td>
<td class="tg-cly1">80</td>
<td class="tg-cly1">80</td>
<td class="tg-cly1">800</td>
</tr>
<tr>
<td class="tg-cly1">Total</td>
<td class="tg-nrix">N/A</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">169,257</td>
<td class="tg-cly1">4,751</td>
<td class="tg-cly1">4,479</td>
<td class="tg-cly1">178,757</td>
</tr>
</tbody>
</table>
## Experiments
### Experiment Setup
- Backbone networks
  - ResNet-18 and ResNet-50
- Medical image analysis tasks
  - Pneumonia classification (Pneumonia)
  - Skeletal abnormality classification (MURA)
  - Lung segmentation (Lung)
  - Tuberculosis bounding-box detection (TBX)
- Head networks
  - Classification: Fully-Connected (FC) layer
  - Segmentation: DeepLab-V3
  - Detection: FasterRCNN
- Baseline pre-training algorithms
  - **Scratch**: the backbone is initialized with Kaiming's initialization
  - **ImageNet**: the backbone is initialized with the officially released ImageNet pre-trained weights
  - **MD-MoCo**: the backbone is initialized only with the parameters learned by MoCo on multi-source X-ray images
  - **MUSCLE−−**: the same initialization strategy as MUSCLE, but without our Cross-Task Memorization and Cyclic-and-Reshuffled Learning Schedule modules
### Results on X-ray datasets of different body parts
Note: Pneumonia is a dataset of **chest** X-ray images, while MURA consists of **bone** images.
<table class="tg">
<thead>
<tr>
<th class="tg-8d8j">Datasets</th>
<th class="tg-2b7s">Backbones</th>
<th class="tg-7zrl">Pre-train</th>
<th class="tg-2b7s">Acc.</th>
<th class="tg-8d8j">Sen.</th>
<th class="tg-8d8j">Spe.</th>
<th class="tg-2b7s">AUC(95%CI)</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-8d8j" rowspan="10">Pneumonia</td>
<td class="tg-2b7s" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">91.11</td>
<td class="tg-8d8j">93.91</td>
<td class="tg-8d8j">83.54</td>
<td class="tg-2b7s">96.58(95.09-97.81)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">90.09</td>
<td class="tg-8d8j">93.68</td>
<td class="tg-8d8j">80.38</td>
<td class="tg-2b7s">96.05(94.24-97.33)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">96.58</td>
<td class="tg-8d8j">97.19</td>
<td class="tg-8d8j">94.94</td>
<td class="tg-2b7s">98.48(97.14-99.30)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">96.75</td>
<td class="tg-8d8j">97.66</td>
<td class="tg-8d8j">94.30</td>
<td class="tg-2b7s">99.51(99.16-99.77)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">97.26</td>
<td class="tg-8d8j">97.42</td>
<td class="tg-8d8j">96.84</td>
<td class="tg-2b7s">99.61(99.32-99.83) </td>
</tr>
<tr>
<td class="tg-2b7s" rowspan="5">ResNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">91.45</td>
<td class="tg-8d8j">92.51</td>
<td class="tg-8d8j">88.61</td>
<td class="tg-2b7s">96.55(95.08-97.82)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">95.38</td>
<td class="tg-8d8j">95.78</td>
<td class="tg-8d8j">94.30</td>
<td class="tg-2b7s">98.72(98.03-99.33)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">97.09</td>
<td class="tg-8d8j">98.83</td>
<td class="tg-8d8j">92.41</td>
<td class="tg-2b7s">99.53(99.23-99.75)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">96.75</td>
<td class="tg-8d8j">98.36</td>
<td class="tg-8d8j">92.41</td>
<td class="tg-2b7s">99.58(99.30-99.84)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">98.12</td>
<td class="tg-8d8j">98.36</td>
<td class="tg-8d8j">97.47</td>
<td class="tg-2b7s">99.72(99.46-99.92)</td>
</tr>
<tr>
<td class="tg-8d8j" rowspan="10">MURA</td>
<td class="tg-2b7s" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">81.00</td>
<td class="tg-8d8j">68.17</td>
<td class="tg-8d8j">89.91</td>
<td class="tg-2b7s">86.62(85.73-87.55)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">81.88</td>
<td class="tg-8d8j">73.49</td>
<td class="tg-8d8j">87.70</td>
<td class="tg-2b7s">88.11(87.18-89.03)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">82.48</td>
<td class="tg-8d8j">72.27</td>
<td class="tg-8d8j">89,57</td>
<td class="tg-2b7s">88.28(87.28-89.26)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">82.45</td>
<td class="tg-8d8j">74.16</td>
<td class="tg-8d8j">88.21</td>
<td class="tg-2b7s">88.41(87.54-89.26)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">82.62</td>
<td class="tg-8d8j">74.28</td>
<td class="tg-8d8j">88.42</td>
<td class="tg-2b7s">88.5o(87.46-89.57)</td>
</tr>
<tr>
<td class="tg-2b7s" rowspan="5">RcsNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">80.50</td>
<td class="tg-8d8j">65.42</td>
<td class="tg-8d8j">90.97</td>
<td class="tg-2b7s">86.22(85.22-87.35)</td>
</tr>
<tr>
<td class="tg-7zrl">ImngeNet</td>
<td class="tg-2b7s">81.73</td>
<td class="tg-8d8j">68.36</td>
<td class="tg-8d8j">91.01</td>
<td class="tg-2b7s">87.87(86.85-88.85)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">82.35</td>
<td class="tg-8d8j">73.12</td>
<td class="tg-8d8j">88.76</td>
<td class="tg-2b7s">87.89(87.06-88.88)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">81.10</td>
<td class="tg-8d8j">69.03</td>
<td class="tg-8d8j">89.48</td>
<td class="tg-2b7s">87.14(86.10-88.22)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">82.60</td>
<td class="tg-8d8j">74.53</td>
<td class="tg-8d8j">88.21</td>
<td class="tg-2b7s">88.37(87.38-89.32)</td>
</tr>
</tbody>
</table>
![image](https://user-images.githubusercontent.com/50255927/189317679-e3c22309-899b-4f8f-a689-d81e406376b5.png)
### Results on different tasks
Note: Lung is a lung **segmentation** task, while TBX is a **detection** task.
<table class="tg">
<thead>
<tr>
<th class="tg-7zrl" rowspan="2">Backbones</th>
<th class="tg-7zrl" rowspan="2">Pre-train</th>
<th class="tg-8d8j" colspan="2">Lung</th>
<th class="tg-8d8j" colspan="3">TBX</th>
</tr>
<tr>
<th class="tg-2b7s">Dice</th>
<th class="tg-7zrl">mloU</th>
<th class="tg-7zrl">mAP</th>
<th class="tg-7zrl">AP-Active</th>
<th class="tg-7zrl">AP-Latent</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-7zrl" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">95.24</td>
<td class="tg-2b7s">94.00</td>
<td class="tg-2b7s">30.71</td>
<td class="tg-2b7s">56.71</td>
<td class="tg-2b7s">4.72</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">95.26</td>
<td class="tg-2b7s">94.10</td>
<td class="tg-2b7s">29.46</td>
<td class="tg-2b7s">56.27</td>
<td class="tg-2b7s">2.66</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">95.31</td>
<td class="tg-2b7s">94.14</td>
<td class="tg-2b7s">36.00</td>
<td class="tg-2b7s">67.17</td>
<td class="tg-2b7s">4.84</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">95.14</td>
<td class="tg-2b7s">93.90</td>
<td class="tg-2b7s">34.70</td>
<td class="tg-2b7s">63.43</td>
<td class="tg-2b7s">5.97</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">95.37</td>
<td class="tg-2b7s">94.22</td>
<td class="tg-2b7s">36.71</td>
<td class="tg-2b7s">64.84</td>
<td class="tg-2b7s">8.59</td>
</tr>
<tr>
<td class="tg-7zrl" rowspan="5"> <br>ResNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">93.52</td>
<td class="tg-2b7s">92.03</td>
<td class="tg-2b7s">23.93</td>
<td class="tg-2b7s">44.85</td>
<td class="tg-2b7s">3.01</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">93.77</td>
<td class="tg-2b7s">92.43</td>
<td class="tg-2b7s">35.61</td>
<td class="tg-2b7s">58.81</td>
<td class="tg-2b7s">12.42</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">94.33</td>
<td class="tg-2b7s">93.04</td>
<td class="tg-2b7s">36.78</td>
<td class="tg-2b7s">64.37</td>
<td class="tg-2b7s">9.18</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">95.04</td>
<td class="tg-2b7s">93.82</td>
<td class="tg-2b7s">35.14</td>
<td class="tg-2b7s">57.32</td>
<td class="tg-2b7s">12.97</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">95.27</td>
<td class="tg-2b7s">94.10</td>
<td class="tg-2b7s">37.83</td>
<td class="tg-2b7s">63.46</td>
<td class="tg-2b7s">12.21</td>
</tr>
</tbody>
</table>
![image](https://user-images.githubusercontent.com/50255927/189317479-14ecb3de-da80-4df3-b9a0-f1fece7b953f.png)
## Citation
If our project helps your research, please consider citing:
```
@inproceedings{liao2022muscle,
title={MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts},
author={Weibin, Liao and Haoyi, Xiong and Qingzhong, Wang and Yan, Mo and Xuhong, Li and Yi, Liu and Zeyu, Chen and Siyu, Huang and Dejing, Dou},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
year={2022},
organization={Springer}
}
```
<div align="center">
<img src="https://user-images.githubusercontent.com/50255927/189930324-0f3992cd-47f8-487c-b20e-5a59f28f978f.png" align="middle" alt="LOGO" height="60"/><img src="https://user-images.githubusercontent.com/35907364/179460858-7dfb19b1-cabf-4f8a-9e81-eb15b6cc7d5f.png" align="middle" alt="LOGO" height="60"/><img src="https://user-images.githubusercontent.com/50255927/189930342-d32b90e5-ef80-44fb-9eab-c9df25ca0d12.png" align="middle" alt="LOGO" height="60" />
</div>
# MUSCLE - MICCAI 2022
This is the repository for the paper "MUSCLE: Multi-task Self-supervised Continual Learning to
Pre-train Deep Models for X-ray Images of Multiple Body Parts", accepted at MICCAI 2022.
## Introduction
The goal of MUSCLE (*<u>MU</u>lti-task <u>S</u>elf-supervised <u>C</u>ontinual <u>LE</u>arning*) is to pre-train deep neural network (DNN) backbones that
deliver strong performance on medical image analysis tasks.
All code is implemented using PaddlePaddle.
## Framework
![image](https://user-images.githubusercontent.com/50255927/189317770-c8c9e866-beb2-4eb5-8116-21ab00850ef0.png)
MUSCLE aggregates multiple X-ray image datasets collected from different human body
parts, subject to various X-ray analytics tasks. We propose Multi-Dataset Momentum
Contrastive Representation Learning (MD-MoCo) and Multi-task Continual Learning to
pre-train the backbone DNNs in a self-supervised continual learning manner.
The pre-trained models can be fine-tuned for target tasks using task-specific heads
and achieve superb performance.
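The MD-MoCo step follows the standard momentum-contrast recipe: a query encoder is trained by backpropagation while a key encoder is updated as an exponential moving average of it. The function below is a minimal sketch of that update in PaddlePaddle, not the released implementation; `query_encoder`, `key_encoder`, and the momentum value `m` are illustrative.
```python
import paddle

@paddle.no_grad()
def momentum_update(query_encoder: paddle.nn.Layer,
                    key_encoder: paddle.nn.Layer,
                    m: float = 0.999):
    # EMA update of the key encoder, as in MoCo:
    # theta_k <- m * theta_k + (1 - m) * theta_q
    for p_q, p_k in zip(query_encoder.parameters(),
                        key_encoder.parameters()):
        p_k.set_value(m * p_k + (1.0 - m) * p_q)
```
After each optimizer step on the query encoder, `momentum_update` would be called so the key encoder drifts slowly behind it, keeping the contrastive keys consistent across the mixed multi-dataset batches.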
## Datasets
<table class="tg">
<thead>
<tr>
<th class="tg-cly1">Datasets</th>
<th class="tg-cly1">Body Part</th>
<th class="tg-cly1">Task</th>
<th class="tg-cly1">Train</th>
<th class="tg-cly1">Valid</th>
<th class="tg-cly1">Test</th>
<th class="tg-cly1">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-nrix" colspan="7">Only Used for the first step (MD-MoCo) of MUSCLE</td>
</tr>
<tr>
<td class="tg-cly1">NIHCC</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">112,120</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">112,120</td>
</tr>
<tr>
<td class="tg-cly1">China-Set-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">661</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">661</td>
</tr>
<tr>
<td class="tg-cly1">Montgomery-Set-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">138</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">138</td>
</tr>
<tr>
<td class="tg-cly1">Indiana-CXR</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">7,470</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">7,470</td>
</tr>
<tr>
<td class="tg-cly1">RSNA Bone Age</td>
<td class="tg-nrix">Hand</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">10,811</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-mwxe">N/A</td>
<td class="tg-cly1">10,811</td>
</tr>
<tr>
<td class="tg-nrix" colspan="7">Used for all three steps of MUSCLE</td>
</tr>
<tr>
<td class="tg-cly1">Pneumonia</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Classification</td>
<td class="tg-cly1">4,686</td>
<td class="tg-cly1">585</td>
<td class="tg-cly1">585</td>
<td class="tg-cly1">5,856</td>
</tr>
<tr>
<td class="tg-cly1">MURA</td>
<td class="tg-nrix">Various Bone</td>
<td class="tg-nrix">Classification</td>
<td class="tg-cly1">32,013</td>
<td class="tg-cly1">3,997</td>
<td class="tg-cly1">3,997</td>
<td class="tg-cly1">40,005</td>
</tr>
<tr>
<td class="tg-cly1">Chest Xray Masks and labels</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Segmentation</td>
<td class="tg-cly1">718</td>
<td class="tg-cly1">89</td>
<td class="tg-cly1">89</td>
<td class="tg-cly1">896</td>
</tr>
<tr>
<td class="tg-cly1">TBX</td>
<td class="tg-nrix">Chest</td>
<td class="tg-nrix">Detection</td>
<td class="tg-cly1">640</td>
<td class="tg-cly1">80</td>
<td class="tg-cly1">80</td>
<td class="tg-cly1">800</td>
</tr>
<tr>
<td class="tg-cly1">Total</td>
<td class="tg-nrix">N/A</td>
<td class="tg-nrix">N/A</td>
<td class="tg-cly1">169,257</td>
<td class="tg-cly1">4,751</td>
<td class="tg-cly1">4,479</td>
<td class="tg-cly1">178,757</td>
</tr>
</tbody>
</table>
## Experiments
### Experiment setups
- Backbone
  - ResNet-18 and ResNet-50
- Task
  - Pneumonia classification (Pneumonia)
  - Skeletal abnormality classification (MURA)
  - Lung segmentation (Lung)
  - Tuberculosis detection (TBX)
- Head (a fine-tuning sketch follows this list)
  - Fully-Connected (FC) layer for classification tasks
  - DeepLab-V3 for segmentation tasks
  - FasterRCNN for detection tasks
- Baseline pre-training algorithms
  - **Scratch**: the models are initialized with Kaiming's random initialization and fine-tuned on the target datasets
  - **ImageNet**: the models are initialized with the officially released ImageNet pre-trained weights and fine-tuned on the target datasets
  - **MD-MoCo**: the models are pre-trained with multi-dataset MoCo and fine-tuned accordingly
  - **MUSCLE−−**: the models are pre-trained and fine-tuned as in MUSCLE, but with Cross-Task Memorization and the Cyclic and Reshuffled Learning Schedule turned off
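As a rough illustration of the head-attachment step above, the following sketch (not the authors' released code) builds a ResNet-18 backbone in PaddlePaddle and fine-tunes it with an FC head on a dummy binary classification batch; the checkpoint name, input shape, and hyperparameters are illustrative assumptions.
```python
import paddle
from paddle.vision.models import resnet18

# Backbone without the ImageNet fc head (num_classes <= 0 drops the fc
# layer in paddle.vision's ResNet).
backbone = resnet18(num_classes=0)

# Hypothetical MUSCLE pre-trained weights; the file name is a placeholder.
# state = paddle.load("md_moco_resnet18.pdparams")
# backbone.set_state_dict(state)

# Attach a task-specific head: an FC layer for binary pneumonia classification.
model = paddle.nn.Sequential(
    backbone,                  # [N, 512, 1, 1] after global average pooling
    paddle.nn.Flatten(),       # -> [N, 512]
    paddle.nn.Linear(512, 2),  # ResNet-18's global feature dim is 512
)

# Minimal fine-tuning step on a dummy batch.
opt = paddle.optimizer.Adam(learning_rate=1e-4, parameters=model.parameters())
loss_fn = paddle.nn.CrossEntropyLoss()
x = paddle.randn([4, 3, 224, 224])   # stand-in for a batch of X-ray images
y = paddle.randint(0, 2, [4])        # stand-in binary labels
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
opt.clear_grad()
```
The segmentation and detection tasks would swap this FC head for the DeepLab-V3 and FasterRCNN heads named above while reusing the same pre-trained backbone.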
### Results for various body parts
+ Note: Pneumonia consists of **chest** images, while MURA consists of **bone** images.
<div>
<table class="tg">
<thead>
<tr>
<th class="tg-8d8j">Datasets</th>
<th class="tg-2b7s">Backbones</th>
<th class="tg-7zrl">Pre-train</th>
<th class="tg-2b7s">Acc.</th>
<th class="tg-8d8j">Sen.</th>
<th class="tg-8d8j">Spe.</th>
<th class="tg-2b7s">AUC(95%CI)</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-8d8j" rowspan="10">Pneumonia</td>
<td class="tg-2b7s" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">91.11</td>
<td class="tg-8d8j">93.91</td>
<td class="tg-8d8j">83.54</td>
<td class="tg-2b7s">96.58(95.09-97.81)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">90.09</td>
<td class="tg-8d8j">93.68</td>
<td class="tg-8d8j">80.38</td>
<td class="tg-2b7s">96.05(94.24-97.33)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">96.58</td>
<td class="tg-8d8j">97.19</td>
<td class="tg-8d8j">94.94</td>
<td class="tg-2b7s">98.48(97.14-99.30)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">96.75</td>
<td class="tg-8d8j">97.66</td>
<td class="tg-8d8j">94.30</td>
<td class="tg-2b7s">99.51(99.16-99.77)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">97.26</td>
<td class="tg-8d8j">97.42</td>
<td class="tg-8d8j">96.84</td>
<td class="tg-2b7s">99.61(99.32-99.83) </td>
</tr>
<tr>
<td class="tg-2b7s" rowspan="5">ResNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">91.45</td>
<td class="tg-8d8j">92.51</td>
<td class="tg-8d8j">88.61</td>
<td class="tg-2b7s">96.55(95.08-97.82)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">95.38</td>
<td class="tg-8d8j">95.78</td>
<td class="tg-8d8j">94.30</td>
<td class="tg-2b7s">98.72(98.03-99.33)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">97.09</td>
<td class="tg-8d8j">98.83</td>
<td class="tg-8d8j">92.41</td>
<td class="tg-2b7s">99.53(99.23-99.75)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">96.75</td>
<td class="tg-8d8j">98.36</td>
<td class="tg-8d8j">92.41</td>
<td class="tg-2b7s">99.58(99.30-99.84)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">98.12</td>
<td class="tg-8d8j">98.36</td>
<td class="tg-8d8j">97.47</td>
<td class="tg-2b7s">99.72(99.46-99.92)</td>
</tr>
<tr>
<td class="tg-8d8j" rowspan="10">MURA</td>
<td class="tg-2b7s" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">81.00</td>
<td class="tg-8d8j">68.17</td>
<td class="tg-8d8j">89.91</td>
<td class="tg-2b7s">86.62(85.73-87.55)</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">81.88</td>
<td class="tg-8d8j">73.49</td>
<td class="tg-8d8j">87.70</td>
<td class="tg-2b7s">88.11(87.18-89.03)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">82.48</td>
<td class="tg-8d8j">72.27</td>
<td class="tg-8d8j">89,57</td>
<td class="tg-2b7s">88.28(87.28-89.26)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">82.45</td>
<td class="tg-8d8j">74.16</td>
<td class="tg-8d8j">88.21</td>
<td class="tg-2b7s">88.41(87.54-89.26)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">82.62</td>
<td class="tg-8d8j">74.28</td>
<td class="tg-8d8j">88.42</td>
<td class="tg-2b7s">88.5o(87.46-89.57)</td>
</tr>
<tr>
<td class="tg-2b7s" rowspan="5">RcsNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">80.50</td>
<td class="tg-8d8j">65.42</td>
<td class="tg-8d8j">90.97</td>
<td class="tg-2b7s">86.22(85.22-87.35)</td>
</tr>
<tr>
<td class="tg-7zrl">ImngeNet</td>
<td class="tg-2b7s">81.73</td>
<td class="tg-8d8j">68.36</td>
<td class="tg-8d8j">91.01</td>
<td class="tg-2b7s">87.87(86.85-88.85)</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">82.35</td>
<td class="tg-8d8j">73.12</td>
<td class="tg-8d8j">88.76</td>
<td class="tg-2b7s">87.89(87.06-88.88)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">81.10</td>
<td class="tg-8d8j">69.03</td>
<td class="tg-8d8j">89.48</td>
<td class="tg-2b7s">87.14(86.10-88.22)</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">82.60</td>
<td class="tg-8d8j">74.53</td>
<td class="tg-8d8j">88.21</td>
<td class="tg-2b7s">88.37(87.38-89.32)</td>
</tr>
</tbody>
</table>
</div>
![image](https://user-images.githubusercontent.com/50255927/189317679-e3c22309-899b-4f8f-a689-d81e406376b5.png)
### Results for various tasks
+ Note: Lung is a **segmentation** task, while TBX is a **detection** task.
<table class="tg">
<thead>
<tr>
<th class="tg-7zrl" rowspan="2">Backbones</th>
<th class="tg-7zrl" rowspan="2">Pre-train</th>
<th class="tg-8d8j" colspan="2">Lung</th>
<th class="tg-8d8j" colspan="3">TBX</th>
</tr>
<tr>
<th class="tg-2b7s">Dice</th>
<th class="tg-7zrl">mloU</th>
<th class="tg-7zrl">mAP</th>
<th class="tg-7zrl">AP-Active</th>
<th class="tg-7zrl">AP-Latent</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-7zrl" rowspan="5">ResNet-18</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">95.24</td>
<td class="tg-2b7s">94.00</td>
<td class="tg-2b7s">30.71</td>
<td class="tg-2b7s">56.71</td>
<td class="tg-2b7s">4.72</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">95.26</td>
<td class="tg-2b7s">94.10</td>
<td class="tg-2b7s">29.46</td>
<td class="tg-2b7s">56.27</td>
<td class="tg-2b7s">2.66</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">95.31</td>
<td class="tg-2b7s">94.14</td>
<td class="tg-2b7s">36.00</td>
<td class="tg-2b7s">67.17</td>
<td class="tg-2b7s">4.84</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">95.14</td>
<td class="tg-2b7s">93.90</td>
<td class="tg-2b7s">34.70</td>
<td class="tg-2b7s">63.43</td>
<td class="tg-2b7s">5.97</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">95.37</td>
<td class="tg-2b7s">94.22</td>
<td class="tg-2b7s">36.71</td>
<td class="tg-2b7s">64.84</td>
<td class="tg-2b7s">8.59</td>
</tr>
<tr>
<td class="tg-7zrl" rowspan="5"> <br>ResNet-50</td>
<td class="tg-7zrl">Scratch</td>
<td class="tg-2b7s">93.52</td>
<td class="tg-2b7s">92.03</td>
<td class="tg-2b7s">23.93</td>
<td class="tg-2b7s">44.85</td>
<td class="tg-2b7s">3.01</td>
</tr>
<tr>
<td class="tg-7zrl">ImageNet</td>
<td class="tg-2b7s">93.77</td>
<td class="tg-2b7s">92.43</td>
<td class="tg-2b7s">35.61</td>
<td class="tg-2b7s">58.81</td>
<td class="tg-2b7s">12.42</td>
</tr>
<tr>
<td class="tg-7zrl">MD-MoCo</td>
<td class="tg-2b7s">94.33</td>
<td class="tg-2b7s">93.04</td>
<td class="tg-2b7s">36.78</td>
<td class="tg-2b7s">64.37</td>
<td class="tg-2b7s">9.18</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE--</td>
<td class="tg-2b7s">95.04</td>
<td class="tg-2b7s">93.82</td>
<td class="tg-2b7s">35.14</td>
<td class="tg-2b7s">57.32</td>
<td class="tg-2b7s">12.97</td>
</tr>
<tr>
<td class="tg-7zrl">MUSCLE</td>
<td class="tg-2b7s">95.27</td>
<td class="tg-2b7s">94.10</td>
<td class="tg-2b7s">37.83</td>
<td class="tg-2b7s">63.46</td>
<td class="tg-2b7s">12.21</td>
</tr>
</tbody>
</table>
![image](https://user-images.githubusercontent.com/50255927/189317479-14ecb3de-da80-4df3-b9a0-f1fece7b953f.png)
## Citation
If our work is helpful to you, please cite our paper as:
```
@inproceedings{liao2022muscle,
title={MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts},
author={Liao, Weibin and Xiong, Haoyi and Wang, Qingzhong and Mo, Yan and Li, Xuhong and Liu, Yi and Chen, Zeyu and Huang, Siyu and Dou, Dejing},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
year={2022},
organization={Springer}
}
```
Simplified Chinese | [English](image_en.md)
# Image Annotation
The following covers the model downloads for 2D image annotation and the EISeg 2D annotation workflow:
## Model Preparation
Please download the model parameters before using EISeg. EISeg provides seven vertical-domain models trained on COCO+LVIS, large-scale portrait data, mapping_challenge, Chest X-Ray, MRSpineSeg, LiTS, and a Baidu self-built quality-inspection dataset, covering annotation needs for generic scenes, portraits, buildings, medical liver and chest images, vertebrae, and aluminum-plate quality inspection. The model architecture corresponds to the network selection module in the EISeg interactive tool; users should pick the network structure and parameters that match their own scenario.
| Model Type | Applicable Scenario | Model Architecture | Download Link |
| ---------- | -------------------------- | -------------- | ------------------------------------------------------------ |
| High Precision Model | Image annotation in generic scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_cocolvis.zip) |
| Lightweight Model | Image annotation in generic scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_cocolvis.zip) |
| High Precision Model | Image annotation in generic scenarios | EdgeFlow | [static_edgeflow_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_edgeflow_cocolvis.zip) |
| High Precision Model | Portrait annotation | HRNet18_OCR64 | [static_hrnet18_ocr64_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_human.zip) |
| Lightweight Model | Portrait annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_human.zip) |
| Lightweight Model | Remote sensing building annotation | HRNet18s_OCR48 | [static_hrnet18_ocr48_rsbuilding_instance](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr48_rsbuilding_instance.zip) |
| High Precision Model\* | Chest X-ray annotation | Resnet50_Deeplabv3+ | [static_resnet50_deeplab_chest_xray](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_resnet50_deeplab_chest_xray.zip) |
| Lightweight Model | Medical liver annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_lits](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) |
| Lightweight Model\* | MRI vertebra annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_MRSpineSeg](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_MRSpineSeg.zip) |
| Lightweight Model\* | Aluminum plate defect inspection | HRNet18s_OCR48 | [static_hrnet18s_ocr48_aluminium](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_aluminium.zip) |
**NOTE**: The downloaded model structure file `*.pdmodel` and the matching parameter file `*.pdiparams` must be placed in the same directory. When loading a model, select only the `*.pdiparams` file; the `*.pdmodel` file is loaded automatically. When using the `EdgeFlow` model, turn `Use Mask` off; keep `Use Mask` checked for all other models. For the `High Precision Model`, a computer with a GPU is recommended for a smoother annotation experience.
## Usage
After opening the software, complete the following settings before annotating a project:
1. **Load model parameters**
Choose a suitable network and parameters for the annotation scenario. After downloading and unzipping them, put the model structure `*.pdmodel` and the matching parameters `*.pdiparams` in the same directory, and select only the `*.pdiparams` file when loading. Static-graph models take a while to initialize; please wait until loading finishes before the next step. Correctly loaded parameters are recorded under `Recent Model Parameters` for easy switching, and the parameters in use at exit are loaded automatically the next time the software starts.
2. **Load images**
Open an image or an image folder. Loading succeeded when the image appears in the main window and its path shows up in `Data List`.
3. **Add/load labels**
Add or load labels. New labels can be created via `Add Label`; each label has 4 columns for pixel value, description, color, and deletion. Created labels can be saved to a txt file via `Save Label List`, and collaborators can import them via `Load Label List`. Labels imported this way are loaded automatically after restarting the software.
4. **Start annotating**
Annotation starts in interactive segmentation mode. A left click adds a positive point marking foreground; a right click adds a negative point marking background. Refine the region of interest with positive and negative clicks until satisfied, then press Space to finish the interaction; a polygon boundary appears and the tool enters polygon mode. Polygons can be deleted; drag anchors with the left mouse button, double-click an anchor to delete it, and double-click an edge to add an anchor on it.
5. **Set up autosave**
Turn on `Autosave` and choose a folder; annotated images are then saved automatically when switching images.
Once set up, annotation can begin. The commonly used keys/shortcuts are listed below; press `E` to open the shortcut editor.
| Keys/Shortcuts | Function |
| --------------------- | ----------------- |
| Left Mouse Button | Add positive point |
| Right Mouse Button | Add negative point |
| Middle Mouse Button | Pan image |
| Ctrl+Middle Mouse Button (wheel) | Zoom image |
| S | Previous image |
| F | Next image |
| Space | Finish annotation/switch state |
| Ctrl+Z | Undo |
| Ctrl+Shift+Z | Clear |
| Ctrl+Y | Redo |
| Ctrl+A | Open image |
| Shift+A | Open folder |
| E | Open shortcut key list |
| Backspace | Delete polygon |
| Double Click (point) | Delete point |
| Double Click (edge) | Add point |
## Feature Usage Notes
- **Polygons**
    - Press Space to finish an interaction; the polygon boundary then appears.
    - To continue interacting inside a polygon, press Space to switch back to interactive mode; the polygon can then no longer be selected or changed.
    - Polygons can be deleted. Drag an anchor with the left mouse button, double-click an anchor to delete it, and double-click an edge to add an anchor on it.
    - With `Keep Largest Connected Block` on, clicks keep only the largest connected region in the image; smaller regions are neither displayed nor saved.
- **Save formats**
    - With `JSON Save` or `COCO Save` enabled, polygons are recorded and loaded back automatically.
    - If no save path is set, results go by default to the label folder under the current image folder.
    - If images share names but differ in suffix, enable `Use same extension for labels and images`.
    - Grayscale, pseudo-color, and matting outputs are also available; see tools 7-9 in the toolbar.
- **Mask generation**
    - Labels can be reordered by dragging their second column; the generated mask is painted from the top of the label list downward, so labels lower in the list overwrite those above.
- **Interface modules**
    - Choose which interface modules to show under `Display`; module states and positions are recorded on normal exit and restored on the next start.
- **Vertical segmentation**
    EISeg now supports remote sensing and medical image segmentation; extra dependencies are required for these features.
    - For remote sensing images, install GDAL; see [Remote Sensing Annotation](remote_sensing.md) for installation and details.
    - For medical images, install SimpleITK; see [Medical Annotation](medical.md) for installation and details.
- **Video annotation and 3D medical image annotation**
    EISeg now supports video annotation and 3D medical image annotation for abdominal multi-organ and CT vertebra data; extra dependencies are required.
    - For video and 3D medical annotation, install VTK; see [Video Annotation](video.md) for installation and details.
- **Scripting tools**
    EISeg provides scripting tools for converting annotations to PaddleX datasets, splitting COCO-format datasets, converting semantic labels to instance labels, and more; see [Scripting Tools Usage](tools.md).
English | [简体中文](image.md)
# 2D Image Annotation
The following describes how to use EISeg to annotate 2D images; model preparation and usage are covered below:
## Model Preparation
Please download the model parameters before using EISeg. EISeg 0.5.0 provides seven vertical-domain models trained on COCO+LVIS, large-scale portrait data, mapping_challenge, MRSpineSeg, Chest X-Ray, LiTS, and a self-built aluminum plate quality-inspection dataset, meeting the labeling needs of generic and portrait scenarios as well as architectural, medical, and industrial images. The model architecture corresponds to the network selection module in the EISeg interactive tool, and users should select the network structure and parameters according to their own needs.
| Model Type | Applicable Scenarios | Model Architecture | Download Link |
|---------------------| ---------------------------------------- | ------------------ | ------------------------------------------------------------ |
| High Performance Model | Image annotation in generic scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_cocolvis.zip) |
| Lightweight Model | Image annotation in generic scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_cocolvis.zip) |
| High Performance Model | Annotation in portrait scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_human.zip) |
| High Performance Model | Image annotation in generic scenarios | EdgeFlow | [static_edgeflow_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_edgeflow_cocolvis.zip) |
| Lightweight Model | Annotation in portrait scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_human.zip) |
| Lightweight Model | Annotation of remote sensing building | HRNet18s_OCR48 | [static_hrnet18_ocr48_rsbuilding_instance](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr48_rsbuilding_instance.zip) |
| High Performance Model | Annotation of chest Xray in medical scenarios | Resnet50_DeeplabV3+ | [static_resnet50_deeplab_chest_xray \*](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_resnet50_deeplab_chest_xray.zip) |
| Lightweight Model | Annotation of liver in medical scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_lits](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) |
| Lightweight Model | Annotation of Spinal Structures in medical scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_MRSpineSeg](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_MRSpineSeg.zip) |
| Lightweight Model | Annotation of Aluminum plate defects in industrial scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_aluminium ](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_aluminium.zip) |
**NOTE**: The downloaded model structure `*.pdmodel` and the corresponding model parameters `*.pdiparams` should be put into the same directory. When loading the model, you only need to select the `*.pdiparams` file; `*.pdmodel` will be loaded automatically. When using the `EdgeFlow` model, please turn off `Use Mask`; check `Use Mask` when using other models. For the `High Performance Model`, we recommend a computer with a GPU for a smoother annotation experience.
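For reference, such a `*.pdmodel`/`*.pdiparams` pair can also be loaded outside the GUI with the Paddle Inference API; a minimal sketch, with placeholder file names for whichever model you unzipped:
```python
# Minimal sketch: loading a static-graph model pair with Paddle Inference.
# The file names below are placeholders, not the actual unzipped names.
from paddle.inference import Config, create_predictor

config = Config("static_hrnet18_ocr64_cocolvis.pdmodel",    # model structure
                "static_hrnet18_ocr64_cocolvis.pdiparams")  # model parameters
predictor = create_predictor(config)
print(predictor.get_input_names())
```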
## Usage
After opening the software, make the following settings before annotating:
1. **Load Model Parameter**
Select the appropriate network and load the corresponding model parameters. After downloading and decompressing the model and parameters, put the model structure `*.pdmodel` and the corresponding parameters `*.pdiparams` into the same directory; only the `*.pdiparams` file needs to be selected when loading. Initialization of a static-graph model takes some time; please wait patiently until the model is loaded. Correctly loaded model parameters are recorded in `Recent Model Parameters` for easy switching, and the parameters in use at exit are loaded automatically the next time you open the software.
2. **Load Image**
Open the image or image folder. Loading succeeded when the image appears in the main window and its path is shown in `Data List`.
3. **Add/Load Label**
Add/load labels. New labels can be created by `Add Label`, which are divided into 4 columns corresponding to pixel value, description, color and deletion. The newly created labels can be saved as txt files by `Save Label List`, and other collaborators can import labels by `Load Label List`. Labels imported by loading will be loaded automatically after restarting the software.
4. **Annotation**
During interactive annotation, users add positive and negative points with left and right mouse clicks, respectively. After finishing interactive segmentation, press the Space key; the tool generates a polygon around the target border, and the polygon vertices can be adjusted to further improve segmentation accuracy.
5. **Autosave**
You can choose the right folder and have the `autosave` set up, so that the annotated image will be saved automatically when switching images.
Start the annotation when the above are all set up. Here are the commonly used keys/shortcut keys by default, press `E` to modify them as you need.
| Keys/Shortcut Keys | Function |
| --------------------------------- | ------------------------------ |
| Left Mouse Button | Add Positive Sample Points |
| Right Mouse Button | Add Negative Sample Points |
| Middle Mouse Button | Image Panning |
| Ctrl+Middle Mouse Button(wheel) | Image Zooming |
| S | Previous Image |
| F | Next Image |
| Space | Finish Annotation/Switch State |
| Ctrl+Z | Undo |
| Ctrl+Shift+Z | Clear |
| Ctrl+Y | Redo |
| Ctrl+A | Open Image |
| Shift+A | Open Folder |
| E | Open Shortcut Key List |
| Backspace | Delete Polygon |
| Double Click(point) | Delete Point |
| Double Click(edge) | Add Point |
## Instructions for New Functions
- **Polygon**
- Press the Space key to complete interactive annotation; the polygon boundary then appears.
- When you need to continue the interactive process inside the polygon, press Space to switch to interactive mode; the polygon can then no longer be selected or changed.
- The polygon can be deleted. Use the left mouse button to drag an anchor point, double-click an anchor point to delete it, and double-click an edge to add an anchor point.
- With `Keep Maximum Connected Blocks` on, only the largest connected region is kept in the image; the remaining small regions are neither displayed nor saved.
- **Save Format**
- Polygons will be recorded and automatically loaded after setting `JSON Save` or `COCO Save`.
- With no specified save path, the label is saved by default to the label folder under the current image folder.
- If there are images with the same name but different suffixes, you can open `labels and images with the same extensions`.
- You can also save as grayscale, pseudo-color or matting image, see tools 7-9 in the toolbar.
- **Generate mask**
- Labels can be dragged by holding down the second column, and the final generated mask will be overwritten from top to bottom according to the label list.
- **Interface Module**
- You can select the interface module to be presented in `Display`, and the normal exit status and location of the interface module will be recorded, and loaded automatically when you open it next time.
- **Vertical Segmentation**
EISeg now supports segmentation of remote sensing and medical images; additional dependencies need to be installed for these features.
- Install GDAL for remote sensing image segmentation, please refer to [Remote Sensing Segmentation](docs/remote_sensing_en.md)
- Install SimpleITK for medical images segmentation, please refer to [Medical Image Segmentation](docs/medical_en.md)
- **Scripting Tool**
EISeg currently provides scripting tools for converting annotations to PaddleX datasets, splitting COCO-format datasets, converting semantic labels to instance labels, and more. See [Scripting Tools Usage](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/EISeg/) for details.
Simplified Chinese | [English](install_en.md)
## Installation
EISeg offers several installation methods; both [pip](#PIP) and [running the code](#运行代码) are compatible with Windows, Mac OS, and Linux. To avoid environment conflicts, installing in a conda virtual environment is recommended.
Version requirements:
* PaddlePaddle >= 2.2.0
For PaddlePaddle installation, please refer to the [official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/windows-pip.html).
### Clone the Repository
Clone PaddleSeg to your local machine via git:
```shell
git clone https://github.com/PaddlePaddle/PaddleSeg.git
```
Install the required environment (if GDAL and SimpleITK are needed, refer to **Vertical Segmentation** for installation):
```shell
pip install -r requirements.txt
```
After the environment is installed, enter EISeg and launch it by running eiseg directly:
```shell
cd PaddleSeg\EISeg
python -m eiseg
```
Or enter eiseg and run exe.py to launch EISeg:
```shell
cd PaddleSeg\EISeg\eiseg
python exe.py
```
### PIP
Install via pip as follows:
```shell
pip install eiseg
```
pip installs the dependencies automatically. After installation, enter at the command line:
```shell
eiseg
```
to launch the software.
English | [简体中文](install.md)
## Installation
EISeg provides multiple ways of installation, among which [pip](#PIP) and [run code](#run code) are compatible with Windows, Mac OS and Linux. It is recommended to install in a virtual environment created by conda to avoid environment conflicts.
Version Requirements:
- PaddlePaddle >= 2.2.0
For more details of the installation of PaddlePaddle, please refer to our [official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/windows-pip.html).
### Clone
Clone PaddleSeg to your local system through git:
```
git clone https://github.com/PaddlePaddle/PaddleSeg.git
```
Install the required environment (if you need to use GDAL and SimpleITK, please refer to **Vertical Segmentation** for installation).
```
pip install -r requirements.txt
```
Enable EISeg by running eiseg after installing the needed environment:
```
cd PaddleSeg\EISeg
python -m eiseg
```
Or you can run exe.py in eiseg:
```
cd PaddleSeg\EISeg\eiseg
python exe.py
```
### PIP
Install via pip as follows:
```
pip install eiseg
```
pip will install dependencies automatically. After that, enter the following at the command line:
```
eiseg
```
This launches EISeg.
Simplified Chinese | [English](medical_en.md)
# Medical Imaging
The following documents the medical vertical in EISeg, mainly covering environment configuration and features.
## 1 Environment Configuration
The medical components require the additional SimpleITK package for reading medical images; install it as follows:
```shell
pip install SimpleITK
```
## 2 Features
EISeg currently opens **single-slice DICOM images**; support for the NIfTI format and multi-slice DICOM is under development. EISeg determines the image format from the file extension. When opening a single image, select Medical Image from the type drop-down menu at the bottom right, as shown below.
Opening a folder works the same as for natural images. After opening an image with a .dcm suffix, you will be asked whether to enable the medical components.
![med-prompt](https://linhandev.github.io/assets/img/post/Med/med-prompt.png)
Click Yes and the window width/level settings panel appears.
![med-widget](https://linhandev.github.io/assets/img/post/Med/med-widget.png)
Window width and window level focus the display on a chosen intensity range, making CT scans easier to inspect. Each pixel in a CT scan stores a value representing the density of the body at that location; the higher the density, the larger the value, and the data typically range from -1024 to 1024. The human eye cannot distinguish 2048 gray levels when viewing a scan, so a narrower intensity range is usually chosen and its grayscale differences stretched for easier observation. Concretely, the part of the scan whose intensity lies within window level - window width/2 to window level + window width/2 is mapped into a 256-level grayscale image for display.
For inference, EISeg provides a [pre-trained liver segmentation model](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) for medical scenarios; the recommended window width/level is 400/0. The model works best for liver segmentation and can also be used for other tissues or organs.
English | [简体中文](medical.md)
# Medical Imaging
This part presents documents related to medical imaging in EISeg, including its environment configuration and functions.
## 1 Environment Configuration
The SimpleITK package should be additionally installed for reading medical images, please try the following:
```
pip install SimpleITK
```
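Once installed, reading a slice programmatically is a quick way to verify the setup; a minimal sketch, with a hypothetical file name:
```python
import SimpleITK as sitk

# Read a single DICOM slice and convert it to a numpy array (z, y, x).
image = sitk.ReadImage("example_slice.dcm")  # hypothetical file name
array = sitk.GetArrayFromImage(image)
print(array.shape, array.min(), array.max())
```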
## 2 Functions
EISeg can open **single-layer Dicom format images**, while support for the NIfTI format and multiple Dicom slices is under development. EISeg determines the image format by its file extension. To open a single image you need to select Medical Image in the type drop-down menu at the bottom right corner, as shown below.
Opening a folder works the same as for natural images. When opening an image with a .dcm suffix, you will be asked whether to turn on the medical component.
![med-prompt](https://linhandev.github.io/assets/img/post/Med/med-prompt.png)
Click Yes and the settings panel for the image window width and window level appears.
![med-widget](https://linhandev.github.io/assets/img/post/Med/med-widget.png)
The window width and window level serve to limit the intensity range for easier observation of the CT scan. The value stored at each pixel of a CT scan represents the density of the human body at that location, so the higher the density, the larger the value; the data range of the image is usually -1024 to 1024. However, the human eye cannot distinguish 2048 shades of gray when viewing the scan, so a smaller intensity range is usually adopted and the grayscale differences within it stretched, thus facilitating observation. This is done by selecting the section ranging from window level - window width/2 to window level + window width/2 and presenting that data in a 256-grayscale image.
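As an illustration, the mapping described above can be written in a few lines of numpy; this is a sketch, not EISeg's internal code:
```python
import numpy as np

def apply_window(ct, level, width):
    """Map intensities in [level - width/2, level + width/2] to 0-255."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(ct.astype(np.float32), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: the liver preset mentioned below (window width 400, level 0).
# slice_255 = apply_window(ct_slice, level=0, width=400)
```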
For inference, EISeg provides the [pre-trained model for liver segmentation](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) for medical scenarios, with a recommended window width/level of 400/0. This model performs best for liver segmentation and can also be used for other tissues or organs.
Simplified Chinese | [English](remote_sensing_en.md)
# Remote Sensing
The following documents the remote sensing vertical in EISeg, mainly covering environment configuration and features.
## 1 Environment Configuration
EISeg's support for remote sensing data comes from GDAL/OGR. GDAL is an open-source raster spatial data translator library under an X/MIT-style license; OGR is similar in function but mainly supports vector data.
### 1.1 Installing Dependencies
GDAL can be installed as follows:
#### 1.1.1 Windows
Windows users can download the binary wheel (*.whl) matching their Python and system version from [here](https://www.lfd.uci.edu/~gohlke/pythonlibs/#gdal). Taking GDAL-3.3.3-cp39-cp39-win_amd64.whl as an example, enter the download directory:
```shell
cd download
```
Install the dependency:
```shell
pip install GDAL-3.3.3-cp39-cp39-win_amd64.whl
```
#### 1.1.2 Linux/Mac
Mac users are advised to install via conda:
```shell
conda install gdal
```
## 2 Features
The remote sensing features in EISeg are still relatively basic: loading of GTiff-type data, slicing and merging of large remote sensing images, and export of geographic raster/vector data (GTiff/ESRI Shapefile) are essentially complete. In addition, an interactive building segmentation model was trained on more than 400,000 images from various building extraction datasets.
### 2.1 Data Loading
EISeg currently reads only remote sensing images with the *.tif/tiff suffix. Since the training data are all RGB three-channel remote sensing image slices, interactive segmentation is likewise performed on three RGB channels, which means EISeg supports band selection for multi-band data.
When EISeg opens a GTiff image, it reads the current band count, which can be set through the band-settings drop-down list; the default is [b1, b1, b1]. The example below shows setting a true-color composite for Tiangong-1 multispectral data:
![yd6fa-hqvvb](https://user-images.githubusercontent.com/71769312/141137443-a327309e-0987-4b2a-88fd-f698e08d3294.gif)
### 2.2 Slicing Large Images
For large remote sensing images (the largest attempt so far is a 900 MB, 17000*10000 three-channel image), EISeg supports slicing, prediction, and merging, with an overlap of 24 pixels between slices.
![140916007-86076366-62ce-49ba-b1d9-18239baafc90](https://user-images.githubusercontent.com/71769312/141139282-854dcb4f-bcab-4ccc-aa3c-577cc52ca385.png)
Below is a slicing demo on part of Chongqing, taken from Google Earth:
![7kevx-q90hv](https://user-images.githubusercontent.com/71769312/141137447-60b305b1-a8ef-4b06-a45e-6db0b1ef2516.gif)
### 2.3 Saving Geographic Data
When the GTiff image being annotated carries georeferencing, EISeg can be set to save results as georeferenced GTiff and ESRI Shapefile.
- GTiff: the de facto industry image standard for GIS and satellite remote sensing applications.
- ESRI Shapefile: the most common vector data format. Shapefile is a GIS file format developed by the Environmental Systems Research Institute (ESRI) and an industry-standard vector data file supported by virtually all commercial and open-source GIS software.
![82jlu-no59o](https://user-images.githubusercontent.com/71769312/141137726-76457454-5e9c-4ad0-85d6-d03f658ee63c.gif)
### 2.4 Model Choice for Remote Sensing Annotation
For building annotation, [static_hrnet18_ocr48_rsbuilding_instance](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr48_rsbuilding_instance.zip) is recommended.
English | [简体中文](remote_sensing.md)
# Remote Sensing
This part presents documents related to remote sensing in EISeg, including its environment configuration and functions.
## 1 Environment Configuration
EISeg supports remote sensing data with GDAL and OGR. The former is a translator library for raster spatial data formats under the X/MIT style Open Source License, while the latter has similar functions but mainly supports vector data.
### 1.1 Install Dependencies
GDAL can be installed as follows:
#### 1.1.1 Windows
Windows users can download the binary wheel (*.whl) matching their Python and system version [here](https://www.lfd.uci.edu/~gohlke/pythonlibs/#gdal). Here we take GDAL-3.3.3-cp39-cp39-win_amd64.whl as an example; go to the download directory:
```
cd download
```
Install the dependencies:
```
pip install GDAL-3.3.3-cp39-cp39-win_amd64.whl
```
#### 1.1.2 Linux/Mac
Mac users are recommended to install with conda:
```
conda install gdal
```
## 2 Functions
At present, the remote sensing features in EISeg are relatively basic, covering GTiff data loading, slicing and merging of large remote sensing images, and export of geographic raster/vector data (GTiff/ESRI Shapefile). In addition, an interactive model for building segmentation was trained on more than 400,000 images from various building datasets.
### 2.1 Data Loading
For the moment, EISeg can only read remote sensing images with a *.tif/tiff suffix. Since the training data are all RGB three-channel remote sensing image slices, interactive segmentation is performed on the same basis, which means EISeg supports band selection for multi-band data.
When EISeg opens a GTiff image, it obtains the current number of bands, which can be set via the drop-down list of band settings. The default is [b1, b1, b1]. The following example shows the true-color setting for Tiangong-1 multispectral data.
![yd6fa-hqvvb](https://user-images.githubusercontent.com/71769312/141137443-a327309e-0987-4b2a-88fd-f698e08d3294.gif)
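For reference, composing an RGB preview from selected bands can be sketched with GDAL's Python bindings; the file name and band order here are hypothetical:
```python
import numpy as np
from osgeo import gdal

ds = gdal.Open("scene.tif")  # hypothetical GTiff path
print("band count:", ds.RasterCount)
bands = [3, 2, 1]  # hypothetical true-color choice; GDAL bands are 1-indexed
rgb = np.dstack([ds.GetRasterBand(b).ReadAsArray() for b in bands])
```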
### 2.2 Large Image Slicing
EISeg supports slicing large remote sensing images and merging the predictions afterwards (the largest attempt so far is a 900 MB three-channel image of size 17000*10000), with an overlap of 24 pixels between slices.
![140916007-86076366-62ce-49ba-b1d9-18239baafc90](https://user-images.githubusercontent.com/71769312/141139282-854dcb4f-bcab-4ccc-aa3c-577cc52ca385.png)
The following demonstrates the slicing of some districts in Chongqing from Google Earth:
![7kevx-q90hv](https://user-images.githubusercontent.com/71769312/141137447-60b305b1-a8ef-4b06-a45e-6db0b1ef2516.gif)
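The tiling itself follows a standard sliding-window pattern; below is a sketch of computing tile origins with a fixed overlap, where the tile size is a hypothetical choice and only the 24-pixel overlap comes from the text above:
```python
def tile_origins(length, tile=512, overlap=24):
    """Return start offsets along one axis so neighbouring tiles overlap."""
    step = tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:  # make sure the far edge is covered
        starts.append(length - tile)
    return starts

# Example: tile origins for a 17000 x 10000 image as mentioned above.
ys = tile_origins(10000)
xs = tile_origins(17000)
tiles = [(y, x) for y in ys for x in xs]
```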
### 2.3 Geographic Data Saving
When the GTiff images to be labeled carry georeferencing, you can set EISeg to save them as GTiff with georeferencing or as ESRI Shapefile.
- GTiff: a standard image file format for the GIS and satellite remote sensing industries.
- ESRI Shapefile: the most common vector data format. The Shapefile is a GIS file format developed by the U.S. Environmental Systems Research Institute (ESRI) and is the industry-standard vector data file, supported by all commercial and open-source GIS software.
![82jlu-no59o](https://user-images.githubusercontent.com/71769312/141137726-76457454-5e9c-4ad0-85d6-d03f658ee63c.gif)
### 2.4 Labeling Model for Remote Sensing
[static_hrnet18_ocr48_rsbuilding_instance](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr48_rsbuilding_instance.zip) is recommended for building labeling.
# Scripting Tools
The following describes the scripting tools bundled with EISeg, located in EISeg/tool.
## Semantic Labels to Instance Labels
Converts semantic segmentation labels (originally 0/255) into instance segmentation labels; the result is a single-channel image colored with a palette. With `semantic2instance` under `tool`, semantic segmentation data annotated with EISeg can be converted into instance segmentation data. Usage:
```shell
python semantic2instance.py -o label_path -d save_path
```
Where:
- `label_path`: path to the semantic labels, required
- `save_path`: path to save the instance labels, required
![68747470733a2f2f73332e626d702e6f76682f696d67732f323032312f30392f303038633562373638623765343737612e706e67](https://user-images.githubusercontent.com/71769312/141392781-d99ec177-f445-4336-9ab2-0ba7ae75d664.png)
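The core idea, splitting a binary semantic mask into per-object labels, can be sketched with connected components; this is a simplification of what the script does, and the file names are hypothetical:
```python
import cv2
import numpy as np

# Load a 0/255 semantic mask and label each connected foreground region.
mask = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
num_labels, instances = cv2.connectedComponents((mask == 255).astype(np.uint8))
# instances: 0 = background, 1..num_labels-1 = one id per object.
# Assumes fewer than 256 objects so the ids fit in 8 bits.
cv2.imwrite("instance.png", instances.astype(np.uint8))
```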
## Video Splitting Script
Video data is computationally heavy; to avoid running out of GPU memory, it is recommended to split videos into chunks of at most 100 frames before annotating. The script is located at `EISeg/tool/cut_video.py`.
## Medical Slices to Video Script
3D medical annotation is implemented on top of the video annotation algorithm, so medical images must be converted to `mp4` format before annotation. The script is located at `EISeg/tool/medical2video.py`.
Simplified Chinese | [English](video_en.md)
# Video and 3D Medical Annotation
The following documents the interactive video annotation vertical in EISeg, mainly covering model selection, data preparation, and usage steps.
## Environment Configuration
3D display requires the additional VTK package for rendering 3D medical images; install it as follows:
```shell
pip install vtk
```
## Demo
![dance](https://user-images.githubusercontent.com/35907364/175504795-d41f0842-cb18-4675-9763-3e817f168edf.gif)
## Model Selection
The EISeg video annotation tool is an efficient image and video annotation tool developed on Paddle, built on EISeg's interactive segmentation algorithms and the [MIVOS](https://github.com/hkchengrex/MiVOS) interactive video segmentation algorithm.
It covers high-quality interactive video segmentation models for several directions, such as generic scenes, abdominal multi-organ, and CT vertebrae, helping developers annotate videos quickly and at lower cost. As a first step toward 3D medical annotation, we treat medical slices as video frames and use inter-frame propagation to annotate 3D medical images. Combined with EISeg's existing high-precision interactive segmentation algorithms, this further extends the reach of video segmentation algorithms.
Before using EISeg, please download the propagation model parameters first. Choose the interactive segmentation model and propagation model that match your scenario. To use the 3D display feature, check it in the `Display` menu.
![lits](https://user-images.githubusercontent.com/35907364/178422205-40327d43-c7d4-4a5d-87fb-63c08308fb9f.gif)
| Model Type | Applicable Scenario | Model Architecture | Download Link | Matching Propagation Model Download Link |
| -------- | -------------------------- | -------------- | ------------------------------------------------------------ |-------------|
| High Precision Model | Image annotation in generic scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_cocolvis.zip) | [static_propagation](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| Lightweight Model | Image annotation in generic scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_cocolvis.zip) | [static_propagation](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| High Precision Model | Image annotation in generic scenarios | EdgeFlow | [static_edgeflow_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_edgeflow_cocolvis.zip) | [static_propagation](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| High Precision Model | Portrait annotation | HRNet18_OCR64 | [static_hrnet18_ocr64_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_human.zip) | [static_propagation](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| Lightweight Model | Portrait annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_human.zip) | [static_propagation](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| Lightweight Model | Medical liver annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_lits](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) | [static_propagation_lits](https://www.wjx.cn/vm/wWw3pRc.aspx) |
| Lightweight Model | CT vertebra annotation | HRNet18s_OCR48 | [static_hrnet18s_ocr48_MRSpineSeg](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_MRSpineSeg.zip) | [static_propagation_spine](https://www.wjx.cn/vm/wWw3pRc.aspx) |
## Data Preparation
- Video processing is computationally heavy, so a machine with a GPU is recommended for video segmentation and 3D medical image annotation, and annotated clips should not exceed 100 frames. Longer videos can be cut with [cut_video.py](../tool/cut_video.py).
- 3D medical image annotation is based on video propagation, so convert the slice images to video format before annotating, using the script [medical2video.py](../tool/medical2video.py).
## Usage Steps
1. **Load model parameters**
Choose a suitable network and parameters for the annotation scenario. After downloading and unzipping them, put the model structure `*.pdmodel` and the matching parameters `*.pdiparams` in the same directory, and select only the `*.pdiparams` file when loading. Static-graph models take a while to initialize; please wait until loading finishes before the next step. Correctly loaded parameters are recorded under `Recent Model Parameters` for easy switching, and the parameters in use at exit are loaded automatically the next time the software starts.
2. **Load images**
Open an image or an image folder. Loading succeeded when the image appears in the main window and its path shows up in `Data List`. After loading a video file, EISeg automatically brings up the video annotation widgets. **Video annotation is computationally heavy; make sure to run it on a machine with a GPU, and keep each video within 100 frames. Longer videos can be segmented with [cut_video.py](../tool/cut_video.py).**
3. **Add/load labels**
Add or load labels. New labels can be created via `Add Label`; each label has 4 columns for pixel value, description, color, and deletion. Created labels can be saved to a txt file via `Save Label List`, and collaborators can import them via `Load Label List`. Labels imported this way are loaded automatically after restarting the software.
For video annotation, the label set must be fully fixed in advance. **Make the labels cover all classes to be annotated as completely as possible; otherwise the propagation results will suffer.**
4. **Load propagation model parameters**
Choose the propagation model matching the EISeg model from the table above. After downloading and unzipping it, select any one of the files ending in `*.pdiparams`; do not rename the unzipped model or parameter files, or loading will fail.
5. **Interactively segment the reference frame**
Select regions with the left mouse button (add) or right mouse button (remove), then press Space to obtain the segmentation of the reference frame. Note that **the annotation should cover all classes to be annotated and propagated as completely as possible; otherwise the propagation results will suffer**.
6. **Propagate across frames**
Press the `Propagation` button in the video widget; the model automatically finds regions similar to the reference frame and generates annotations.
7. **Revise**
If an intermediate result is unsatisfactory, repeat steps 5-6.
8. **Save**
Click the save button at the bottom left and choose a save path to save the results.
English | [简体中文](video.md)
# Interactive Video Object Segmentation and 3D Medical Imaging Annotation
The following contents are related to interactive video annotation in EISeg, mainly including model selection, data preparation and instructions.
## Environment Configuration
The VTK package should be additionally installed for 3D visualization, please try the following:
```
pip install vtk
```
## Demo
![dance](https://user-images.githubusercontent.com/35907364/175504795-d41f0842-cb18-4675-9763-3e817f168edf.gif)
## Model Selection
Interactive video object segmentation builds on EISeg's interactive segmentation algorithms and the [MIVOS](https://github.com/hkchengrex/MiVOS) algorithm. It is an efficient image and video annotation tool based on PaddlePaddle.
EISeg 1.0 covers high-quality interactive video object segmentation models in different directions such as generic scenes, liver, and CT spinal structures, making it convenient for developers to annotate videos quickly and at lower cost. For 3D medical imaging annotation, we regard medical slice data as video frames and label 3D medical images using the video annotation method.
Before using EISeg, please download the propagation model parameters. If you want to use the 3D display function, you can check the 3D display function in the `Display` menu.
![lits](https://user-images.githubusercontent.com/35907364/178422205-40327d43-c7d4-4a5d-87fb-63c08308fb9f.gif)
| Model Type | Applicable Scenarios | Model Architecture | Download Link | Corresponding Propagation Model Download Link |
| -------- |---------------------------------------------------------| -------------- | ------------------------------------------------------------ |----------------------------------------------------------------|
| High Performance Model | Image annotation in generic scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_cocolvis.zip) | [static_propagation](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| Lightweight Model | Image annotation in generic scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_cocolvis.zip) | [static_propagation](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| High Performance Model | Image annotation in generic scenarios | EdgeFlow | [static_edgeflow_cocolvis](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_edgeflow_cocolvis.zip) | [static_propagation](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| High Performance Model | Annotation in portrait scenarios | HRNet18_OCR64 | [static_hrnet18_ocr64_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18_ocr64_human.zip) | [static_propagation](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| Lightweight Model | Annotation in portrait scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_human](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_human.zip) | [static_propagation](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| Lightweight Model | Annotation of liver in medical scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_lits](https://paddleseg.bj.bcebos.com/eiseg/0.4/static_hrnet18s_ocr48_lits.zip) | [static_propagation_lits](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
| Lightweight Model | Annotation of CT Spinal Structures in medical scenarios | HRNet18s_OCR48 | [static_hrnet18s_ocr48_MRSpineSeg](https://paddleseg.bj.bcebos.com/eiseg/0.5/static_hrnet18s_ocr48_MRSpineSeg.zip) | [static_propagation_spine](https://docs.google.com/forms/d/e/1FAIpQLSc72-EQKVCJnTQIlROY1DYVIYIm50LWyboj70XqIOvHsUa6ng/viewform?usp=sf_link) |
## Data Preparation
- Due to the large computation in video segmentation, it is recommended to use a computer with a graphics card, and annotated videos should not exceed 100 frames. If a video is longer, you can use [cut_video.py](../tool/cut_video.py) to cut it; a sketch of such frame-based splitting follows this list.
- 3D medical imaging annotation is based on interactive video segmentation, so please convert the sliced images into mp4 format before labeling. The script is: [medical2video.py](../tool/medical2video.py).
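For illustration, such frame-based splitting can be sketched with OpenCV; the output naming is hypothetical, and the repository's own script remains [cut_video.py](../tool/cut_video.py):
```python
import cv2

def split_video(path, chunk=100):
    """Split a video into consecutive parts of at most `chunk` frames."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    part, count, writer = 0, 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if count % chunk == 0:  # start a new output file every `chunk` frames
            if writer is not None:
                writer.release()
            writer = cv2.VideoWriter(f"part_{part}.mp4", fourcc, fps, (w, h))
            part += 1
        writer.write(frame)
        count += 1
    if writer is not None:
        writer.release()
    cap.release()
```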
## Usage
After opening the software, make the following settings before annotating:
1. **Load Model Parameter**
Select the appropriate network and load the corresponding model parameters. After downloading and decompressing the model and parameters, put the model structure `*.pdmodel` and the corresponding parameters `*.pdiparams` into the same directory; only the `*.pdiparams` file needs to be selected when loading. Initialization of a static-graph model takes some time; please wait patiently until the model is loaded. Correctly loaded model parameters are recorded in `Recent Model Parameters` for easy switching, and the parameters in use at exit are loaded automatically the next time you open the software.
2. **Load Image**
Open the image or image folder. Loading succeeded when the image appears in the main window and its path is shown in `Data List`.
3. **Add/Load Label**
Add/load labels. New labels can be created by `Add Label`, which are divided into 4 columns corresponding to pixel value, description, color and deletion. The newly created labels can be saved as txt files by `Save Label List`, and other collaborators can import labels by `Load Label List`. Labels imported by loading will be loaded automatically after restarting the software.
4. **Load Propagation Model Parameter**
Select the corresponding propagation model parameters. After downloading and decompressing the right model and parameters, the model structure `*.pdmodel` and the corresponding model parameters `*.pdiparams` should be put into the same directory; only one of the files ending in `*.pdiparams` needs to be selected when loading, and the other files are loaded automatically. Do not rename the decompressed model or parameter files, or loading will fail.
5. **Annotate the Reference Frame**
During interactive annotation, users add positive and negative points with left and right mouse clicks, respectively. After finishing interactive segmentation, press the Space key; the tool generates a polygon around the target border, and the polygon vertices can be adjusted to further improve segmentation accuracy.
6. **Propagation**
Press the `Propagation` button in the video widget; the model automatically finds regions similar to the reference frame and generates annotations for the remaining frames. If an intermediate result is unsatisfactory, repeat steps 5-6.
7. **Autosave**
You can choose the right folder and have `autosave` set up, so that the annotated image will be saved automatically when switching images.
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
import os.path as osp
import logging
from datetime import datetime
from qtpy import QtCore
import cv2
__APPNAME__ = "EISeg"
__VERSION__ = "1.0.0"
pjpath = osp.dirname(osp.realpath(__file__))
sys.path.append(pjpath)
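# OpenCV bundles its own Qt plugins; drop any QT_* environment variables it
# injected so they do not clash with the Qt bindings used by the GUI.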
for k, v in os.environ.items():
if k.startswith("QT_") and "cv2" in v:
del os.environ[k]
# log
settings = QtCore.QSettings(
osp.join(pjpath, "config/setting.txt"), QtCore.QSettings.IniFormat)
logFolder = settings.value("logFolder")
logLevel = bool(settings.value("log"))
logDays = settings.value("logDays")
if logFolder is None or len(logFolder) == 0:
logFolder = osp.normcase(osp.join(pjpath, "log"))
if not osp.exists(logFolder):
os.makedirs(logFolder)
if logLevel:
logLevel = logging.DEBUG
else:
logLevel = logging.CRITICAL
if logDays:
logDays = int(logDays)
else:
logDays = 7
# TODO: delete logs older than logDays
t = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
logger = logging.getLogger("EISeg Logger")
handler = logging.FileHandler(
osp.normcase(osp.join(logFolder, f"eiseg-{t}.log")))
handler.setFormatter(
logging.Formatter(
"%(levelname)s - %(asctime)s - %(filename)s - %(funcName)s - %(message)s"
))
logger.setLevel(logLevel)
logger.addHandler(handler)
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from run import main
if __name__ == "__main__":
main()