"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "e344e3d4021421ec0d631d076daf17f8a4e82e69"
Unverified commit 5451f889, authored by NielsRogge and committed by GitHub

Add DETA (#20983)

* First draft

* Add initial draft of conversion script

* Convert all weights

* Fix config

* Add image processor

* Fix DetaImageProcessor

* Run make fix-copies

* Remove timm dependency

* Fix dummy objects

* Improve loss function

* Remove conv_encoder attribute

* Update conversion scripts

* Improve postprocessing + docs

* Fix copied from statements

* Add tests

* Improve postprocessing

* Improve postprocessing

* Update READMEs

* More improvements

* Fix rebase

* Add is_torchvision_available

* Add torchvision dependency

* Fix typo and README

* Fix bug

* Add copied from

* Fix style

* Apply suggestions

* Fix thanks to @ydshieh

* Fix another dependency check

* Simplify image processor

* Add scipy

* Improve code

* Add threshold argument

* Fix bug

* Set default threshold

* Improve integration test

* Add another integration test

* Update setup.py

* Address review

* Improve deformable attention function

* Improve copied from

* Use relative imports

* Address review

* Replace assertions

* Address review

* Update dummies

* Remove dummies

* Address comments, update READMEs

* Remove custom kernel code

* Add image processor tests

* Add requires_backends

* Add minor comment

* Update scripts

* Update organization name

* Fix defaults, add doc tests

* Add id2label for Objects365

* Fix tests

* Update task guide
Parent commit: 98d88b23
@@ -359,6 +359,7 @@ exotic_models_job = CircleCIJob(
         "pip install --upgrade pip",
         "pip install .[torch,testing,vision]",
         "pip install torchvision",
+        "pip install scipy",
         "pip install 'git+https://github.com/facebookresearch/detectron2.git'",
         "sudo apt install tesseract-ocr",
         "pip install pytesseract",
@@ -367,6 +368,7 @@ exotic_models_job = CircleCIJob(
     tests_to_run=[
         "tests/models/*layoutlmv*",
         "tests/models/*nat",
+        "tests/models/deta",
     ],
     pytest_num_workers=1,
     pytest_options={"durations": 100},
...
@@ -309,6 +309,7 @@ Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://h
 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
+1. **[DETA](https://huggingface.co/docs/transformers/main/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
...
(6 collapsed file diffs not shown)
@@ -414,6 +414,8 @@
       title: Deformable DETR
     - local: model_doc/deit
       title: DeiT
+    - local: model_doc/deta
+      title: DETA
     - local: model_doc/detr
       title: DETR
     - local: model_doc/dinat
...
@@ -88,6 +88,7 @@ The documentation is organized into five sections:
 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
+1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
 1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
 1. **[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
@@ -271,6 +272,7 @@ Flax), PyTorch, and/or TensorFlow.
 | Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
 | Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
 | DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| DETA | ❌ | ❌ | ✅ | ❌ | ❌ |
 | DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
 | DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
 | DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
...
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DETA
## Overview
The DETA model was proposed in [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETA (short for Detection Transformers with Assignment) improves [Deformable DETR](deformable_detr) by replacing the one-to-one bipartite Hungarian matching loss
with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP.
The abstract from the paper is the following:
*Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum suppression (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture.*
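To make the comparison concrete, here is a toy sketch (not DETA's actual implementation; the IoU matrix and threshold are made up) contrasting one-to-one Hungarian matching with IoU-based one-to-many assignment:

```python
import torch
from scipy.optimize import linear_sum_assignment  # scipy is a new dependency of this PR

# Toy IoU matrix between 4 predicted boxes (rows) and 2 ground-truth boxes (columns).
ious = torch.tensor(
    [
        [0.82, 0.10],
        [0.75, 0.05],
        [0.15, 0.60],
        [0.12, 0.55],
    ]
)

# One-to-one (DETR-style): Hungarian matching on the negative-IoU cost,
# so each ground-truth box supervises exactly one prediction.
pred_idx, gt_idx = linear_sum_assignment(-ious.numpy())
print(pred_idx, gt_idx)  # [0 2] [0 1]

# One-to-many (DETA-style): every prediction whose IoU clears a threshold
# becomes a positive, and duplicates are removed at inference time with NMS.
matches = (ious > 0.5).nonzero()
print(matches.tolist())  # [[0, 0], [1, 0], [2, 1], [3, 1]]
```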
Tips:
- One can use [`DetaImageProcessor`] to prepare images and optional targets for the model.
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/jozhang97/DETA).
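A minimal inference sketch (the checkpoint name reuses the author's `jozhang97` namespace from the original repository and is an assumption; the threshold value is illustrative):

```python
import requests
import torch
from PIL import Image

from transformers import DetaForObjectDetection, DetaImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetaImageProcessor.from_pretrained("jozhang97/deta-swin-large")  # assumed checkpoint
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) triples above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score.item():.3f} at {[round(c, 1) for c in box.tolist()]}")
```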
## DetaConfig
[[autodoc]] DetaConfig
## DetaImageProcessor
[[autodoc]] DetaImageProcessor
- preprocess
- post_process_object_detection
## DetaModel
[[autodoc]] DetaModel
- forward
## DetaForObjectDetection
[[autodoc]] DetaForObjectDetection
- forward
@@ -33,7 +33,7 @@ The task illustrated in this tutorial is supported by the following model architectures:
 <!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
-[Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos)
+[Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos)
 <!--End of the generated tip-->
...
@@ -168,6 +168,7 @@ _deps = [
     "tokenizers>=0.11.1,!=0.11.3,<0.14",
     "torch>=1.7,!=1.12.0",
     "torchaudio",
+    "torchvision",
     "pyctcdecode>=0.4.0",
     "tqdm>=4.27",
     "unidic>=1.0.2",
@@ -285,6 +286,7 @@ extras["tf-speech"] = extras["audio"]
 extras["flax-speech"] = extras["audio"]
 extras["vision"] = deps_list("Pillow")
 extras["timm"] = deps_list("timm")
+extras["torch-vision"] = deps_list("torchvision") + extras["vision"]
 extras["natten"] = deps_list("natten")
 extras["codecarbon"] = deps_list("codecarbon")
 extras["video"] = deps_list("decord")
@@ -331,6 +333,7 @@ extras["all"] = (
     + extras["vision"]
     + extras["integrations"]
     + extras["timm"]
+    + extras["torch-vision"]
     + extras["codecarbon"]
     + extras["accelerate"]
     + extras["video"]
@@ -351,6 +354,7 @@ extras["dev-torch"] = (
     + extras["vision"]
     + extras["integrations"]
     + extras["timm"]
+    + extras["torch-vision"]
     + extras["codecarbon"]
     + extras["quality"]
     + extras["ja"]
...
@@ -40,6 +40,7 @@ from .utils import (
     is_timm_available,
     is_tokenizers_available,
     is_torch_available,
+    is_torchvision_available,
     is_vision_available,
     logging,
 )
@@ -236,6 +237,7 @@ _import_structure = {
     "models.decision_transformer": ["DECISION_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "DecisionTransformerConfig"],
     "models.deformable_detr": ["DEFORMABLE_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP", "DeformableDetrConfig"],
     "models.deit": ["DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP", "DeiTConfig"],
+    "models.deta": ["DETA_PRETRAINED_CONFIG_ARCHIVE_MAP", "DetaConfig"],
     "models.detr": ["DETR_PRETRAINED_CONFIG_ARCHIVE_MAP", "DetrConfig"],
     "models.dialogpt": [],
     "models.dinat": ["DINAT_PRETRAINED_CONFIG_ARCHIVE_MAP", "DinatConfig"],
@@ -589,6 +591,7 @@ _import_structure = {
         "is_torch_available",
         "is_torch_neuroncore_available",
         "is_torch_tpu_available",
+        "is_torchvision_available",
         "is_vision_available",
         "logging",
     ],
@@ -797,6 +800,7 @@ else:
         ["DeformableDetrFeatureExtractor", "DeformableDetrImageProcessor"]
     )
     _import_structure["models.deit"].extend(["DeiTFeatureExtractor", "DeiTImageProcessor"])
+    _import_structure["models.deta"].append("DetaImageProcessor")
     _import_structure["models.detr"].extend(["DetrFeatureExtractor", "DetrImageProcessor"])
     _import_structure["models.donut"].extend(["DonutFeatureExtractor", "DonutImageProcessor"])
     _import_structure["models.dpt"].extend(["DPTFeatureExtractor", "DPTImageProcessor"])
@@ -1343,6 +1347,14 @@ else:
             "DeiTPreTrainedModel",
         ]
     )
+    _import_structure["models.deta"].extend(
+        [
+            "DETA_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "DetaForObjectDetection",
+            "DetaModel",
+            "DetaPreTrainedModel",
+        ]
+    )
     _import_structure["models.dinat"].extend(
         [
             "DINAT_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -3681,6 +3693,7 @@ if TYPE_CHECKING:
     )
     from .models.deformable_detr import DEFORMABLE_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP, DeformableDetrConfig
     from .models.deit import DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP, DeiTConfig
+    from .models.deta import DETA_PRETRAINED_CONFIG_ARCHIVE_MAP, DetaConfig
     from .models.detr import DETR_PRETRAINED_CONFIG_ARCHIVE_MAP, DetrConfig
     from .models.dinat import DINAT_PRETRAINED_CONFIG_ARCHIVE_MAP, DinatConfig
     from .models.distilbert import DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, DistilBertConfig, DistilBertTokenizer
@@ -4008,6 +4021,7 @@ if TYPE_CHECKING:
         is_torch_available,
         is_torch_neuroncore_available,
         is_torch_tpu_available,
+        is_torchvision_available,
         is_vision_available,
         logging,
     )
@@ -4168,6 +4182,7 @@ if TYPE_CHECKING:
     from .models.convnext import ConvNextFeatureExtractor, ConvNextImageProcessor
     from .models.deformable_detr import DeformableDetrFeatureExtractor, DeformableDetrImageProcessor
     from .models.deit import DeiTFeatureExtractor, DeiTImageProcessor
+    from .models.deta import DetaImageProcessor
     from .models.detr import DetrFeatureExtractor, DetrImageProcessor
     from .models.donut import DonutFeatureExtractor, DonutImageProcessor
     from .models.dpt import DPTFeatureExtractor, DPTImageProcessor
@@ -4629,6 +4644,12 @@ if TYPE_CHECKING:
         DeiTModel,
         DeiTPreTrainedModel,
     )
+    from .models.deta import (
+        DETA_PRETRAINED_MODEL_ARCHIVE_LIST,
+        DetaForObjectDetection,
+        DetaModel,
+        DetaPreTrainedModel,
+    )
     from .models.dinat import (
         DINAT_PRETRAINED_MODEL_ARCHIVE_LIST,
         DinatBackbone,
...
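Since `DetaImageProcessor` is only exported when torchvision is installed, downstream code can mirror the library's own guard; a small sketch:

```python
from transformers.utils import is_torchvision_available

if is_torchvision_available():
    # DetaImageProcessor relies on torchvision for its NMS-based postprocessing.
    from transformers import DetaImageProcessor

    processor = DetaImageProcessor()
else:
    raise ImportError("DetaImageProcessor requires torchvision (`pip install torchvision`).")
```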
@@ -74,6 +74,7 @@ deps = {
     "tokenizers": "tokenizers>=0.11.1,!=0.11.3,<0.14",
     "torch": "torch>=1.7,!=1.12.0",
     "torchaudio": "torchaudio",
+    "torchvision": "torchvision",
     "pyctcdecode": "pyctcdecode>=0.4.0",
     "tqdm": "tqdm>=4.27",
     "unidic": "unidic>=1.0.2",
...
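The specifier from setup.py's `_deps` lands in this auto-generated table, so it can be sanity-checked at runtime; a quick sketch against a checkout that includes this change:

```python
from transformers.dependency_versions_table import deps

# torchvision is deliberately left unpinned, matching the _deps entry in setup.py.
print(deps["torchvision"])  # torchvision
print(sorted(name for name in deps if name.startswith("torch")))  # ['torch', 'torchaudio', 'torchvision']
```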
@@ -58,6 +58,7 @@ from . import (
     decision_transformer,
     deformable_detr,
     deit,
+    deta,
     detr,
     dialogpt,
     dinat,
...
@@ -64,6 +64,7 @@ CONFIG_MAPPING_NAMES = OrderedDict(
         ("decision_transformer", "DecisionTransformerConfig"),
         ("deformable_detr", "DeformableDetrConfig"),
         ("deit", "DeiTConfig"),
+        ("deta", "DetaConfig"),
         ("detr", "DetrConfig"),
         ("dinat", "DinatConfig"),
         ("distilbert", "DistilBertConfig"),
@@ -230,6 +231,7 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict(
         ("deberta-v2", "DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP"),
         ("deformable_detr", "DEFORMABLE_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP"),
         ("deit", "DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
+        ("deta", "DETA_PRETRAINED_CONFIG_ARCHIVE_MAP"),
         ("detr", "DETR_PRETRAINED_CONFIG_ARCHIVE_MAP"),
         ("dinat", "DINAT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
         ("distilbert", "DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
@@ -389,6 +391,7 @@ MODEL_NAMES_MAPPING = OrderedDict(
         ("decision_transformer", "Decision Transformer"),
         ("deformable_detr", "Deformable DETR"),
         ("deit", "DeiT"),
+        ("deta", "DETA"),
         ("detr", "DETR"),
         ("dialogpt", "DialoGPT"),
         ("dinat", "DiNAT"),
...
@@ -50,6 +50,7 @@ IMAGE_PROCESSOR_MAPPING_NAMES = OrderedDict(
         ("data2vec-vision", "BeitImageProcessor"),
         ("deformable_detr", "DeformableDetrImageProcessor"),
         ("deit", "DeiTImageProcessor"),
+        ("deta", "DetaImageProcessor"),
         ("detr", "DetrImageProcessor"),
         ("dinat", "ViTImageProcessor"),
         ("donut-swin", "DonutImageProcessor"),
...
@@ -64,6 +64,7 @@ MODEL_MAPPING_NAMES = OrderedDict(
         ("decision_transformer_gpt2", "DecisionTransformerGPT2Model"),
         ("deformable_detr", "DeformableDetrModel"),
         ("deit", "DeiTModel"),
+        ("deta", "DetaModel"),
         ("detr", "DetrModel"),
         ("dinat", "DinatModel"),
         ("distilbert", "DistilBertModel"),
@@ -538,6 +539,7 @@ MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES = OrderedDict(
         # Model for Object Detection mapping
         ("conditional_detr", "ConditionalDetrForObjectDetection"),
         ("deformable_detr", "DeformableDetrForObjectDetection"),
+        ("deta", "DetaForObjectDetection"),
         ("detr", "DetrForObjectDetection"),
         ("table-transformer", "TableTransformerForObjectDetection"),
         ("yolos", "YolosForObjectDetection"),
...
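With these mappings in place, the `deta` model type also resolves through the auto classes; a sketch (the checkpoint name is assumed, as above):

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# "deta" in a checkpoint's config.json routes to the classes registered above.
processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = AutoModelForObjectDetection.from_pretrained("jozhang97/deta-swin-large")
print(type(processor).__name__, type(model).__name__)  # DetaImageProcessor DetaForObjectDetection
```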
@@ -545,27 +545,42 @@ def build_position_encoding(config):
     return position_embedding


-def ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):
-    # for debug and test only,
-    # need to use cuda version instead
-    N_, S_, M_, D_ = value.shape
-    _, Lq_, M_, L_, P_, _ = sampling_locations.shape
-    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)
+def multi_scale_deformable_attention(
+    value: Tensor, value_spatial_shapes: Tensor, sampling_locations: Tensor, attention_weights: Tensor
+) -> Tensor:
+    batch_size, _, num_heads, hidden_dim = value.shape
+    _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape
+    value_list = value.split([height * width for height, width in value_spatial_shapes], dim=1)
     sampling_grids = 2 * sampling_locations - 1
     sampling_value_list = []
-    for lid_, (H_, W_) in enumerate(value_spatial_shapes):
-        # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_
-        value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_ * M_, D_, H_, W_)
-        # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2
-        sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1)
-        # N_*M_, D_, Lq_, P_
-        sampling_value_l_ = F.grid_sample(
+    for level_id, (height, width) in enumerate(value_spatial_shapes):
+        # batch_size, height*width, num_heads, hidden_dim
+        # -> batch_size, height*width, num_heads*hidden_dim
+        # -> batch_size, num_heads*hidden_dim, height*width
+        # -> batch_size*num_heads, hidden_dim, height, width
+        value_l_ = (
+            value_list[level_id].flatten(2).transpose(1, 2).reshape(batch_size * num_heads, hidden_dim, height, width)
+        )
+        # batch_size, num_queries, num_heads, num_points, 2
+        # -> batch_size, num_heads, num_queries, num_points, 2
+        # -> batch_size*num_heads, num_queries, num_points, 2
+        sampling_grid_l_ = sampling_grids[:, :, :, level_id].transpose(1, 2).flatten(0, 1)
+        # batch_size*num_heads, hidden_dim, num_queries, num_points
+        sampling_value_l_ = nn.functional.grid_sample(
             value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False
         )
         sampling_value_list.append(sampling_value_l_)
-    # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_, M_, 1, Lq_, L_*P_)
-    attention_weights = attention_weights.transpose(1, 2).reshape(N_ * M_, 1, Lq_, L_ * P_)
-    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights).sum(-1).view(N_, M_ * D_, Lq_)
+    # (batch_size, num_queries, num_heads, num_levels, num_points)
+    # -> (batch_size, num_heads, num_queries, num_levels, num_points)
+    # -> (batch_size, num_heads, 1, num_queries, num_levels*num_points)
+    attention_weights = attention_weights.transpose(1, 2).reshape(
+        batch_size * num_heads, 1, num_queries, num_levels * num_points
+    )
+    output = (
+        (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights)
+        .sum(-1)
+        .view(batch_size, num_heads * hidden_dim, num_queries)
+    )
     return output.transpose(1, 2).contiguous()
@@ -678,7 +693,7 @@ class DeformableDetrMultiscaleDeformableAttention(nn.Module):
         else:
             raise ValueError(f"Last dim of reference_points must be 2 or 4, but got {reference_points.shape[-1]}")
         try:
-            # GPU
+            # custom kernel
             output = MultiScaleDeformableAttentionFunction.apply(
                 value,
                 spatial_shapes,
@@ -688,8 +703,8 @@ class DeformableDetrMultiscaleDeformableAttention(nn.Module):
                 self.im2col_step,
             )
         except Exception:
-            # CPU
-            output = ms_deform_attn_core_pytorch(value, spatial_shapes, sampling_locations, attention_weights)
+            # PyTorch implementation
+            output = multi_scale_deformable_attention(value, spatial_shapes, sampling_locations, attention_weights)
         output = self.output_proj(output)
         return output, attention_weights
...
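The renamed variables double as shape documentation for the pure-PyTorch fallback. A self-contained shape check (sizes are arbitrary; assumes a transformers version that includes this refactor):

```python
import torch
from transformers.models.deformable_detr.modeling_deformable_detr import multi_scale_deformable_attention

batch_size, num_heads, hidden_dim = 2, 8, 32  # hidden_dim = d_model // num_heads
spatial_shapes = torch.tensor([[32, 32], [16, 16]])  # (num_levels, 2) feature-map sizes
num_levels, num_points, num_queries = spatial_shapes.shape[0], 4, 100
seq_len = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())  # flattened multi-scale length

value = torch.rand(batch_size, seq_len, num_heads, hidden_dim)
# Sampling locations are normalized to [0, 1]; the function maps them to [-1, 1] for grid_sample.
sampling_locations = torch.rand(batch_size, num_queries, num_heads, num_levels, num_points, 2)
# Exact normalization is irrelevant for a shape check.
attention_weights = torch.rand(batch_size, num_queries, num_heads, num_levels, num_points).softmax(-1)

output = multi_scale_deformable_attention(value, spatial_shapes, sampling_locations, attention_weights)
print(output.shape)  # torch.Size([2, 100, 256]) == (batch_size, num_queries, num_heads * hidden_dim)
```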