"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "5f721ad6e48c9d846de25c3fefa0e50a306cbf10"
Unverified commit cf6308ef, authored by NielsRogge and committed by GitHub

Improve conditional detr docs (#19154)


Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
parent 2d9853b2
@@ -20,6 +20,10 @@ The abstract from the paper is the following:

*The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.*

+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg"
+alt="drawing" width="600"/>
+
+<small> Conditional DETR shows much faster convergence compared to the original DETR. Taken from the <a href="https://arxiv.org/abs/2108.06152">original paper</a>.</small>

This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The original code can be found [here](https://github.com/Atten4Vis/ConditionalDETR).

...
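Aside (not part of this commit): the key mechanism the abstract describes is that the spatial part of each cross-attention query is predicted from the decoder content embedding and a sinusoidal embedding of a reference point, rather than taken from the content embedding alone. A minimal sketch of that query construction follows; all names, shapes, and the exact FFN are my own illustrative assumptions, so see the paper for the real formulation:

```python
import torch
import torch.nn as nn


def sine_embed(points: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Sinusoidal embedding of normalized 2D reference points: (..., 2) -> (..., dim)."""
    half = dim // 4
    freqs = 10000 ** (torch.arange(half, dtype=torch.float32) / half)
    x = points[..., 0:1] / freqs
    y = points[..., 1:2] / freqs
    return torch.cat([x.sin(), x.cos(), y.sin(), y.cos()], dim=-1)


class ConditionalSpatialQuery(nn.Module):
    """Sketch: an FFN maps the decoder content embedding to a scale vector that
    modulates the sinusoidal embedding of the query's reference point."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )

    def forward(self, decoder_embedding: torch.Tensor, reference_points: torch.Tensor) -> torch.Tensor:
        t = self.ffn(decoder_embedding)      # (batch, num_queries, d_model)
        p_s = sine_embed(reference_points)   # (batch, num_queries, d_model)
        return t * p_s                       # conditional spatial query q_s
```

For example, `ConditionalSpatialQuery()(torch.randn(1, 300, 256), torch.rand(1, 300, 2))` yields one spatial query per object query; because the modulation depends on the decoder embedding, each head can focus on a narrow band near the reference point instead of relying on content features alone.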
@@ -1515,15 +1515,15 @@ class ConditionalDetrModel(ConditionalDetrPreTrainedModel):

Examples:

```python
->>> from transformers import ConditionalDetrFeatureExtractor, ConditionalDetrModel
+>>> from transformers import AutoFeatureExtractor, AutoModel
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

->>> feature_extractor = ConditionalDetrFeatureExtractor.from_pretrained("microsoft/conditional-detr-resnet-50")
->>> model = ConditionalDetrModel.from_pretrained("microsoft/conditional-detr-resnet-50")
+>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/conditional-detr-resnet-50")
+>>> model = AutoModel.from_pretrained("microsoft/conditional-detr-resnet-50")

>>> # prepare image for the model
>>> inputs = feature_extractor(images=image, return_tensors="pt")
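Aside (not part of this diff): the docstring example shown in this hunk continues in the source file with a forward pass. A minimal continuation would look like the sketch below, where the output shape assumes the checkpoint's published defaults of 300 object queries and hidden size 256:

```python
>>> # hypothetical continuation of the example above
>>> outputs = model(**inputs)

>>> # final decoder embeddings, one vector per object query;
>>> # [1, 300, 256] assumes the default num_queries=300 and d_model=256
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 300, 256]
```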
@@ -1683,21 +1683,36 @@ class ConditionalDetrForObjectDetection(ConditionalDetrPreTrainedModel):

Examples:

```python
->>> from transformers import ConditionalDetrFeatureExtractor, ConditionalDetrForObjectDetection
+>>> from transformers import AutoFeatureExtractor, AutoModelForObjectDetection
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

->>> feature_extractor = ConditionalDetrFeatureExtractor.from_pretrained("microsoft/conditional-detr-resnet-50")
->>> model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")
+>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/conditional-detr-resnet-50")
+>>> model = AutoModelForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)

->>> # model predicts bounding boxes and corresponding COCO classes
->>> logits = outputs.logits
->>> bboxes = outputs.pred_boxes
+>>> # convert outputs (bounding boxes and class logits) to COCO API
+>>> target_sizes = torch.tensor([image.size[::-1]])
+>>> results = feature_extractor.post_process(outputs, target_sizes=target_sizes)[0]
+
+>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
+...     box = [round(i, 2) for i in box.tolist()]
+...     # let's only keep detections with score > 0.5
+...     if score > 0.5:
+...         print(
+...             f"Detected {model.config.id2label[label.item()]} with confidence "
+...             f"{round(score.item(), 3)} at location {box}"
+...         )
+Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45]
+Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0]
+Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95]
+Detected remote with confidence 0.683 at location [334.48, 73.49, 366.37, 190.01]
+Detected couch with confidence 0.535 at location [0.52, 1.19, 640.35, 475.1]
```"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
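Aside (my own illustration, not code from this commit): `post_process` is the step where the model's normalized (center_x, center_y, width, height) box predictions become absolute (x0, y0, x1, y1) pixel coordinates, which is why `target_sizes` is built from `image.size[::-1]`, i.e. (height, width); note `torch` is imported earlier in the full docstring. A minimal sketch of that conversion, with function names of my own choosing:

```python
import torch


def center_to_corners(boxes: torch.Tensor) -> torch.Tensor:
    """(cx, cy, w, h) -> (x0, y0, x1, y1), still in normalized [0, 1] coordinates."""
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], dim=-1)


def rescale_to_image(boxes_xyxy: torch.Tensor, target_size: tuple) -> torch.Tensor:
    """Scale normalized corner boxes to pixel coordinates; target_size is (height, width)."""
    img_h, img_w = target_size
    scale = torch.tensor([img_w, img_h, img_w, img_h], dtype=boxes_xyxy.dtype)
    return boxes_xyxy * scale
```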
@@ -1860,16 +1875,21 @@ class ConditionalDetrForSegmentation(ConditionalDetrPreTrainedModel):

>>> import torch
>>> import numpy

->>> from transformers import ConditionalDetrFeatureExtractor, ConditionalDetrForSegmentation
+>>> from transformers import (
+...     AutoFeatureExtractor,
+...     ConditionalDetrConfig,
+...     ConditionalDetrForSegmentation,
+... )
>>> from transformers.models.conditional_detr.feature_extraction_conditional_detr import rgb_to_id

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

->>> feature_extractor = ConditionalDetrFeatureExtractor.from_pretrained(
-...     "facebook/conditional_detr-resnet-50-panoptic"
-... )
->>> model = ConditionalDetrForSegmentation.from_pretrained("facebook/conditional_detr-resnet-50-panoptic")
+>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/conditional-detr-resnet-50")
+
+>>> # randomly initialize all weights of the model
+>>> config = ConditionalDetrConfig()
+>>> model = ConditionalDetrForSegmentation(config)

>>> # prepare image for the model
>>> inputs = feature_extractor(images=image, return_tensors="pt")

...
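Aside (a sketch, not part of this diff): even with the randomly initialized `ConditionalDetrForSegmentation`, the example can run a forward pass. Assuming the standard DETR-style segmentation output with a `pred_masks` field, the continuation would be roughly the following; the predictions themselves are meaningless until the model is trained:

```python
>>> # hypothetical continuation: forward pass through the untrained model
>>> outputs = model(**inputs)

>>> # one low-resolution mask logit map per object query (assumed `pred_masks` field)
>>> masks = outputs.pred_masks
```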
@@ -21,6 +21,7 @@ src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py

src/transformers/models/big_bird/modeling_big_bird.py
src/transformers/models/blenderbot/modeling_blenderbot.py
src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
+src/transformers/models/conditional_detr/modeling_conditional_detr.py
src/transformers/models/convnext/modeling_convnext.py
src/transformers/models/ctrl/modeling_ctrl.py
src/transformers/models/cvt/modeling_cvt.py

...