Commit 68352e5c authored by Baumgartner, Michael

Merge branch '0002_cleanup' into main

parents 8607cb0f 9ef2a30d
@@ -24,5 +24,7 @@ build-job:
     - coverage xml
   artifacts:
     reports:
-      cobertura: coverage.xml
+      coverage_report:
+        coverage_format: cobertura
+        path: coverage.xml
   coverage: '/TOTAL.*\s+(\d+\%)/'
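For context, the `coverage:` keyword above only extracts the total percentage from the job log; what changed in this commit is the report artifact, where the deprecated `cobertura:` key is replaced by the structured `coverage_report:` block. A minimal sketch of what the regular expression is expected to match, assuming the standard `coverage report` footer line (the sample log line below is made up):

```python
import re

# Same pattern as in the CI config above, without the surrounding '/' delimiters.
pattern = re.compile(r"TOTAL.*\s+(\d+\%)")

# Hypothetical final line of `coverage report` as it would appear in the job log.
log_line = "TOTAL                         1234    210    83%"
match = pattern.search(log_line)
assert match is not None and match.group(1) == "83%"
```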
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Copyright (c) [yyyy] [name of copyright owner],
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those
of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.
BSD 3-Clause License
Copyright (c) [yyyy] [name of copyright owner],
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -199,41 +199,3 @@
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
################################################################################
Some parts of nndet/core/boxes/anchors.py, coder.py, matcher.py are derived
from torchvision and thus licensed under:
BSD 3-Clause License
Copyright (c) Soumith Chintala 2016,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
# License
This work is licensed under **multiple** licenses, please check the respective code files for more info.
\ No newline at end of file
@@ -403,11 +403,10 @@ The final model directory will contain multiple subfolders with different inform
- `val_analysis[_preprocessed]` *experimental*: provides additional analysis information of the predictions. This feature is marked as experimental since it uses a simplified matching algorithm and should only be used to gain an intuition of potential improvements.

The following section contains some additional information regarding the metrics which are computed by nnDetection. They can be found in `[val/test]_results/results_boxes.json`:
-- `AP_IoU_0.10_MaxDet_100`: is the main metric used for the evaluation in our paper. It is evaluated at an IoU threshold of `0.1` and `100` predictions per image are allows. Note that this is a hard limit and if images contain much more instances this leads to wrong results.
+- `AP_IoU_0.10_MaxDet_100`: is the main metric used for the evaluation in our paper. It is evaluated at an IoU threshold of `0.1` with `100` predictions per image. Note that this is a hard limit; if images contain many more instances, this leads to wrong results.
- `mAP_IoU_0.10_0.50_0.05_MaxDet_100`: is the typically reported COCO mAP metric evaluated at multiple IoU values. *The IoU thresholds are different from those of the COCO evaluation to account for the generally lower IoU in 3D data.*
- `[num]_AP_IoU_0.10_MaxDet_100`: AP metric computed per class
-- `AR`: is only added for additional information. Since most AR metrics refer to a single IoU threshold it only reflects the max recall.
-- `FROC_score_IoU_0.10` *experimental*: Experimental FROC score. The implementation is still undergoing additional testing and might be subject to change. Also see the docstring for additional information on the multi-class case. Additional features might be added in the future.
+- `FROC_score_IoU_0.10`: FROC score with default FPPI values (1/8, 1/4, 1/2, 1, 2, 4, 8). Note (in contrast to the AP implementation): the multi-class case does not compute the metric per class but puts all predictions/ground truth into a single large pool (similar to AP_pool from https://arxiv.org/abs/2102.01066), so inter-class calibration is important here. In most cases, simply averaging the `[num]_FROC` scores manually to assign the same weight to each class should be preferred.
- case evaluation *experimental*: It is possible to run case evaluations with nnDetection, but this is still experimental, undergoing additional testing, and might be changed in the future.

## nnU-Net for Detection
...
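The FROC entry above recommends manually averaging the per-class scores so that every class receives the same weight. A minimal sketch of how that could look, assuming `results_boxes.json` is a flat dictionary whose per-class keys are prefixed with the class index (e.g. `0_FROC_score_IoU_0.10`); the path and key layout are assumptions, not taken from the repository:

```python
import json
from pathlib import Path

# Hypothetical path; adjust to the [val/test]_results directory of your run.
results = json.loads(Path("test_results/results_boxes.json").read_text())

# Collect per-class FROC scores (assumed key pattern "<class>_FROC_score_IoU_0.10").
froc_keys = [k for k in results if "FROC_score_IoU_0.10" in k and k.split("_")[0].isdigit()]
if froc_keys:
    macro_froc = sum(results[k] for k in froc_keys) / len(froc_keys)
    print(f"class-averaged FROC over {len(froc_keys)} classes: {macro_froc:.3f}")
```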
defaults:
  - _self_
  - train: v001
  - prep: process
...
""" # Modifications licensed under:
Parts of this code are from torchvision and thus licensed under # SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
#
# Parts of this code are from torchvision (https://github.com/pytorch/vision) licensed under
# SPDX-FileCopyrightText: 2016 Soumith Chintala
# SPDX-License-Identifier: BSD-3-Clause
BSD 3-Clause License
Copyright (c) Soumith Chintala 2016,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import torch
from typing import Callable, Sequence, List, Tuple, TypeVar, Union
@@ -172,7 +146,7 @@ class AnchorGenerator2D(torch.nn.Module):
        shifts_x = torch.arange(0, size0, dtype=torch.float, device=device) * stride0
        shifts_y = torch.arange(0, size1, dtype=torch.float, device=device) * stride1
-       shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
+       shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x, indexing="ij")
        shift_x = shift_x.reshape(-1)
        shift_y = shift_y.reshape(-1)
        shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)
@@ -387,7 +361,7 @@ class AnchorGenerator3D(AnchorGenerator2D):
        shifts_y = torch.arange(0, size1, dtype=dtype, device=device) * stride1
        shifts_z = torch.arange(0, size2, dtype=dtype, device=device) * stride2
-       shift_x, shift_y, shift_z = torch.meshgrid(shifts_x, shifts_y, shifts_z)
+       shift_x, shift_y, shift_z = torch.meshgrid(shifts_x, shifts_y, shifts_z, indexing="ij")
        shift_x = shift_x.reshape(-1)
        shift_y = shift_y.reshape(-1)
        shift_z = shift_z.reshape(-1)
...
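For background on the two `torch.meshgrid` changes above: PyTorch 1.10 added the `indexing` argument and warns when it is omitted, and `indexing="ij"` reproduces the previous default behaviour, so the generated anchor shifts stay identical. A minimal sketch illustrating the unchanged flattening logic (feature map sizes and strides are made up):

```python
import torch

# Assumes PyTorch >= 1.10, where torch.meshgrid accepts `indexing`.
ys = torch.arange(0, 3, dtype=torch.float) * 8.0  # e.g. feature map size 3, stride 8
xs = torch.arange(0, 4, dtype=torch.float) * 8.0  # e.g. feature map size 4, stride 8

grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")  # matrix ("ij") indexing, as before
assert grid_y.shape == grid_x.shape == (3, 4)

# Flattened exactly like the shifts in AnchorGenerator2D above.
shifts = torch.stack((grid_x.reshape(-1), grid_y.reshape(-1),
                      grid_x.reshape(-1), grid_y.reshape(-1)), dim=1)
assert shifts.shape == (12, 4)
```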
""" # Modifications licensed under:
Parts of this code are from torchvision and thus licensed under # SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
BSD 3-Clause License #
# Parts of this code are from torchvision (https://github.com/pytorch/vision) licensed under
Copyright (c) Soumith Chintala 2016, # SPDX-FileCopyrightText: Soumith Chintala 2016
All rights reserved. # SPDX-License-Identifier: BSD-3-Clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
from __future__ import division
...
from nndet.core.boxes.matcher.base import Matcher, MatcherType
from nndet.core.boxes.matcher.iou import IoUMatcher
from nndet.core.boxes.matcher.atss import ATSSMatcher
# Modifications licensed under:
# SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
#
# Parts of this code are from mmdetection licensed under
# SPDX-FileCopyrightText: 2018-2023 OpenMMLab
# SPDX-License-Identifier: Apache-2.0
from typing import Sequence, Callable, Tuple
import torch
from torch import Tensor
from loguru import logger
from nndet.core.boxes.ops import box_iou, box_center_dist, center_in_boxes
from nndet.core.boxes.matcher.base import Matcher
INF = 100  # not really inf but sufficient here
class ATSSMatcher(Matcher):
def __init__(self,
num_candidates: int,
similarity_fn: Callable[[Tensor, Tensor], Tensor] = box_iou,
center_in_gt: bool = True,
):
"""
Compute matching based on ATSS
https://arxiv.org/abs/1912.02424
`Bridging the Gap Between Anchor-based and Anchor-free Detection
via Adaptive Training Sample Selection`
Args:
num_candidates: number of positions to select candidates from
similarity_fn: function for similarity computation between
boxes and anchors
center_in_gt: If disabled, matched anchor center points do not need
to lie within the ground truth box.
"""
super().__init__(similarity_fn=similarity_fn)
self.num_candidates = num_candidates
self.min_dist = 0.01
self.center_in_gt = center_in_gt
logger.info(f"Running ATSS Matching with num_candidates={self.num_candidates} "
f"and center_in_gt {self.center_in_gt}.")
def compute_matches(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches according to ATSS for a single image
Adapted from
https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/assigners/atss_assigner.py
https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py
Args:
boxes: anchors are matched to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each box
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
num_gt = boxes.shape[0]
num_anchors = anchors.shape[0]
distances, _, anchors_center = box_center_dist(boxes, anchors) # num_boxes x anchors
# select candidates based on center distance
candidate_idx = []
start_idx = 0
for level, apl in enumerate(num_anchors_per_level):
end_idx = start_idx + apl
selectable_k = min(self.num_candidates * num_anchors_per_loc, apl)
_, idx = distances[:, start_idx: end_idx].topk(selectable_k, dim=1, largest=False)
# idx shape [num_boxes x selectable_k]
candidate_idx.append(idx + start_idx)
start_idx = end_idx
# [num_boxes x num_candidates] (index of candidate anchors)
candidate_idx = torch.cat(candidate_idx, dim=1)
match_quality_matrix = self.similarity_fn(boxes, anchors) # [num_boxes x anchors]
candidate_overlaps = match_quality_matrix.gather(1, candidate_idx) # [num_boxes, n_candidates]
# compute adaptive iou threshold
overlaps_mean_per_gt = candidate_overlaps.mean(dim=1) # [num_boxes]
overlaps_std_per_gt = candidate_overlaps.std(dim=1) # [num_boxes]
overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt # [num_boxes]
is_pos = candidate_overlaps >= overlaps_thr_per_gt[:, None] # [num_boxes x n_candidates]
if self.center_in_gt: # can discard all candidates in case of very small objects :/
# center point of selected anchors needs to lie within the ground truth
boxes_idx = torch.arange(num_gt, device=boxes.device, dtype=torch.long)[:, None]\
.expand_as(candidate_idx).contiguous() # [num_boxes x n_candidates]
is_in_gt = center_in_boxes(
anchors_center[candidate_idx.view(-1)], boxes[boxes_idx.view(-1)], eps=self.min_dist)
is_pos = is_pos & is_in_gt.view_as(is_pos) # [num_boxes x n_candidates]
# in case one anchor is assigned to multiple boxes, use the box with the highest IoU
for ng in range(num_gt):
candidate_idx[ng, :] += ng * num_anchors
overlaps_inf = torch.full_like(match_quality_matrix, -INF).view(-1)
index = candidate_idx.view(-1)[is_pos.view(-1)]
overlaps_inf[index] = match_quality_matrix.view(-1)[index]
overlaps_inf = overlaps_inf.view_as(match_quality_matrix)
matched_vals, matches = overlaps_inf.max(dim=0)
matches[matched_vals == -INF] = self.BELOW_LOW_THRESHOLD
# print(f"Num matches {(matches >= 0).sum()}, Adapt IoU {overlaps_thr_per_gt}")
return match_quality_matrix, matches
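A minimal usage sketch for the matcher defined above, using made-up 2D boxes and anchors in `(x1, y1, x2, y2)` format (the import path follows the new `nndet.core.boxes.matcher` layout introduced in this commit; all values are illustrative):

```python
import torch
from nndet.core.boxes.matcher.atss import ATSSMatcher

# One ground-truth box and a tiny anchor grid: 4 positions x 2 anchors per
# position on a single pyramid level (8 anchors in total).
boxes = torch.tensor([[4.0, 4.0, 20.0, 20.0]])
anchors = torch.stack([
    torch.tensor([x, y, x + 16.0, y + 16.0])
    for x in (0.0, 8.0) for y in (0.0, 8.0) for _ in range(2)
])

matcher = ATSSMatcher(num_candidates=3, center_in_gt=True)
match_quality, matches = matcher(
    boxes=boxes,
    anchors=anchors,
    num_anchors_per_level=[8],
    num_anchors_per_loc=2,
)
# matches[i] is the index of the assigned box (0 here) or
# Matcher.BELOW_LOW_THRESHOLD (-1) for background anchors.
print(match_quality.shape, matches)
```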
# SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
from typing import Sequence, Callable, Tuple, TypeVar
from abc import ABC
import torch
from torch import Tensor
from nndet.core.boxes.ops import box_iou
class Matcher(ABC):
BELOW_LOW_THRESHOLD: int = -1
BETWEEN_THRESHOLDS: int = -2
def __init__(self, similarity_fn: Callable[[Tensor, Tensor], Tensor] = box_iou):
"""
Matches boxes and anchors to each other
Args:
similarity_fn: function for similarity computation between
boxes and anchors
"""
self.similarity_fn = similarity_fn
def __call__(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches for a single image
Args:
boxes: anchors are matched to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each box
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
if boxes.numel() == 0:
# no ground truth
num_anchors = anchors.shape[0]
match_quality_matrix = torch.tensor([]).to(anchors)
matches = torch.empty(num_anchors, dtype=torch.int64).fill_(self.BELOW_LOW_THRESHOLD)
return match_quality_matrix, matches
else:
# at least one ground truth
return self.compute_matches(
boxes=boxes, anchors=anchors,
num_anchors_per_level=num_anchors_per_level,
num_anchors_per_loc=num_anchors_per_loc,
)
def compute_matches(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches
Args:
boxes: anchors are matched to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each box
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
raise NotImplementedError
MatcherType = TypeVar('MatcherType', bound=Matcher)
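The base class above only asks subclasses to implement `compute_matches`; a purely illustrative sketch of a custom matcher with a fixed IoU cutoff (not part of nnDetection) shows how small that contract is:

```python
from nndet.core.boxes.matcher.base import Matcher


class FixedThresholdMatcher(Matcher):
    """Illustrative matcher: keep an anchor if its best box IoU reaches a fixed threshold."""

    def __init__(self, threshold: float = 0.5):
        super().__init__()  # defaults to box_iou as similarity_fn
        self.threshold = threshold

    def compute_matches(self, boxes, anchors, num_anchors_per_level, num_anchors_per_loc):
        match_quality_matrix = self.similarity_fn(boxes, anchors)  # [N, M]
        matched_vals, matches = match_quality_matrix.max(dim=0)    # best box per anchor
        matches[matched_vals < self.threshold] = self.BELOW_LOW_THRESHOLD
        return match_quality_matrix, matches
```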
""" # Modifications licensed under:
Parts of this code are from torchvision and thus licensed under # SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
#
# Parts of this code are from torchvision (https://github.com/pytorch/vision) licensed under
# SPDX-FileCopyrightText: 2016 Soumith Chintala
# SPDX-License-Identifier: BSD-3-Clause
BSD 3-Clause License
Copyright (c) Soumith Chintala 2016, from typing import Callable, Tuple
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
from typing import Sequence, Callable, Tuple, TypeVar
from abc import ABC
import torch
from torch import Tensor
from loguru import logger
-from nndet.core.boxes.ops import box_iou, box_center_dist, center_in_boxes
+from nndet.core.boxes.ops import box_iou
+from nndet.core.boxes.matcher.base import Matcher
INF = 100 # not really inv but here it is sufficient
class Matcher(ABC):
BELOW_LOW_THRESHOLD: int = -1
BETWEEN_THRESHOLDS: int = -2
def __init__(self, similarity_fn: Callable[[Tensor, Tensor], Tensor] = box_iou):
"""
Matches boxes and anchors to each other
Args:
similarity_fn: function for similarity computation between
boxes and anchors
"""
self.similarity_fn = similarity_fn
def __call__(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches for a single image
Args:
boxes: anchors are matches to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each boxes
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
if boxes.numel() == 0:
# no ground truth
num_anchors = anchors.shape[0]
match_quality_matrix = torch.tensor([]).to(anchors)
matches = torch.empty(num_anchors, dtype=torch.int64).fill_(self.BELOW_LOW_THRESHOLD)
return match_quality_matrix, matches
else:
# at least one ground truth
return self.compute_matches(
boxes=boxes, anchors=anchors,
num_anchors_per_level=num_anchors_per_level,
num_anchors_per_loc=num_anchors_per_loc,
)
def compute_matches(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches
Args:
boxes: anchors are matches to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each boxes
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
raise NotImplementedError
class IoUMatcher(Matcher):
@@ -225,110 +120,3 @@ class IoUMatcher(Matcher):
        logger.info(f"Inbetween IoU ranging from {match_bet_min} to {match_bet_max}")
        logger.info(f"Max background IoU: {matched_vals[below_low_threshold].max()}")
        logger.info("#################################")
class ATSSMatcher(Matcher):
def __init__(self,
num_candidates: int,
similarity_fn: Callable[[Tensor, Tensor], Tensor] = box_iou,
center_in_gt: bool = True,
):
"""
Compute matching based on ATSS
https://arxiv.org/abs/1912.02424
`Bridging the Gap Between Anchor-based and Anchor-free Detection
via Adaptive Training Sample Selection`
Args:
num_candidates: number of positions to select candidates from
similarity_fn: function for similarity computation between
boxes and anchors
center_in_gt: If diabled, matched anchor center points do not need
to lie withing the ground truth box.
"""
super().__init__(similarity_fn=similarity_fn)
self.num_candidates = num_candidates
self.min_dist = 0.01
self.center_in_gt = center_in_gt
logger.info(f"Running ATSS Matching with num_candidates={self.num_candidates} "
f"and center_in_gt {self.center_in_gt}.")
def compute_matches(self,
boxes: torch.Tensor,
anchors: torch.Tensor,
num_anchors_per_level: Sequence[int],
num_anchors_per_loc: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Compute matches according to ATTS for a single image
Adapted from
(https://github.com/sfzhang15/ATSS/blob/79dfb28bd1/atss_core/modeling/rpn/atss
/loss.py#L180-L184)
Args:
boxes: anchors are matches to these boxes (e.g. ground truth)
[N, dims * 2](x1, y1, x2, y2, (z1, z2))
anchors: anchors to match [M, dims * 2](x1, y1, x2, y2, (z1, z2))
num_anchors_per_level: number of anchors per feature pyramid level
num_anchors_per_loc: number of anchors per position
Returns:
Tensor: matrix which contains the similarity from each boxes
to each anchor [N, M]
Tensor: vector which contains the matched box index for all
anchors (if background `BELOW_LOW_THRESHOLD` is used
and if it should be ignored `BETWEEN_THRESHOLDS` is used)
[M]
"""
num_gt = boxes.shape[0]
num_anchors = anchors.shape[0]
distances, boxes_center, anchors_center = box_center_dist(boxes, anchors) # num_boxes x anchors
# select candidates based on center distance
candidate_idx = []
start_idx = 0
for level, apl in enumerate(num_anchors_per_level):
end_idx = start_idx + apl
topk = min(self.num_candidates * num_anchors_per_loc, apl)
_, idx = distances[:, start_idx: end_idx].topk(topk, dim=1, largest=False)
# idx shape [num_boxes x topk]
candidate_idx.append(idx + start_idx)
start_idx = end_idx
# [num_boxes x num_candidates] (index of candidate anchors)
candidate_idx = torch.cat(candidate_idx, dim=1)
match_quality_matrix = self.similarity_fn(boxes, anchors) # [num_boxes x anchors]
candidate_ious = match_quality_matrix.gather(1, candidate_idx) # [num_boxes, n_candidates]
# compute adaptive iou threshold
iou_mean_per_gt = candidate_ious.mean(dim=1) # [num_boxes]
iou_std_per_gt = candidate_ious.std(dim=1) # [num_boxes]
iou_thresh_per_gt = iou_mean_per_gt + iou_std_per_gt # [num_boxes]
is_pos = candidate_ious >= iou_thresh_per_gt[:, None] # [num_boxes x n_candidates]
if self.center_in_gt: # can discard all candidates in case of very small objects :/
# center point of selected anchors needs to lie within the ground truth
boxes_idx = torch.arange(num_gt, device=boxes.device, dtype=torch.long)[:, None]\
.expand_as(candidate_idx).contiguous() # [num_boxes x n_candidates]
is_in_gt = center_in_boxes(
anchors_center[candidate_idx.view(-1)], boxes[boxes_idx.view(-1)], eps=self.min_dist)
is_pos = is_pos & is_in_gt.view_as(is_pos) # [num_boxes x n_candidates]
# in case on anchor is assigned to multiple boxes, use box with highest IoU
# TODO: think about a better way to do this
for ng in range(num_gt):
candidate_idx[ng, :] += ng * num_anchors
ious_inf = torch.full_like(match_quality_matrix, -INF).view(-1)
index = candidate_idx.view(-1)[is_pos.view(-1)]
ious_inf[index] = match_quality_matrix.view(-1)[index]
ious_inf = ious_inf.view_as(match_quality_matrix)
matched_vals, matches = ious_inf.max(dim=0)
matches[matched_vals == -INF] = self.BELOW_LOW_THRESHOLD
# print(f"Num matches {(matches >= 0).sum()}, Adapt IoU {iou_thresh_per_gt}")
return match_quality_matrix, matches
MatcherType = TypeVar('MatcherType', bound=Matcher)
# Modifications licensed under:
# SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
#
# Parts of this code are from torchvision (https://github.com/pytorch/vision) licensed under
# SPDX-FileCopyrightText: 2016 Soumith Chintala
# SPDX-License-Identifier: BSD-3-Clause
import torch
import torch.nn as nn
@@ -355,7 +364,7 @@ class BaseRetinaNet(AbstractModel):
        keep_idxs = probs > self.score_thresh
        probs, idx = probs[keep_idxs], idx[keep_idxs]
-       anchor_idxs = idx // self.num_foreground_classes
+       anchor_idxs = torch.div(idx, self.num_foreground_classes, rounding_mode="floor")
        labels = idx % self.num_foreground_classes
        boxes = boxes[anchor_idxs]
...
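The floor-division change above keeps the same integer result; `torch.div(..., rounding_mode="floor")` is the replacement PyTorch recommends since it deprecated `__floordiv__` on tensors. A quick sketch of the equivalence for the index/label decomposition used here (values are illustrative):

```python
import torch

idx = torch.arange(12)          # flattened (anchor, class) indices, as in a topk output
num_foreground_classes = 3      # illustrative value

anchor_idxs = torch.div(idx, num_foreground_classes, rounding_mode="floor")
labels = idx % num_foreground_classes

# Identical to the previous `idx // num_foreground_classes` for non-negative indices.
assert torch.equal(anchor_idxs, idx // num_foreground_classes)
```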
// Modifications licensed under:
// SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
// SPDX-License-Identifier: Apache-2.0
//
// Parts of this code are from torchvision licensed under
// SPDX-FileCopyrightText: 2016 Soumith Chintala
// SPDX-License-Identifier: BSD-3-Clause
/* adopted from
https://github.com/pytorch/vision/blob/master/torchvision/csrc/nms.h on Nov 15 2019
no cpu support, but could be added with this interface.
...
// Parts of this code are from torchvision licensed under
// SPDX-FileCopyrightText: 2016 Soumith Chintala
// SPDX-License-Identifier: BSD-3-Clause
#pragma once
#define CUDA_1D_KERNEL_LOOP(i, n) \
...
// Modifications licensed under:
// SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
// SPDX-License-Identifier: Apache-2.0
//
// Parts of this code are from torchvision licensed under
// SPDX-FileCopyrightText: 2016 Soumith Chintala
// SPDX-License-Identifier: BSD-3-Clause
/*
NMS implementation in CUDA from pytorch framework
(https://github.com/pytorch/vision/tree/master/torchvision/csrc/cuda on Nov 13 2019)

Adapted for additional 3D capability by G. Ramien, DKFZ Heidelberg

Parts of this code are from torchvision and thus licensed under
BSD 3-Clause License
Copyright (c) Soumith Chintala 2016,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <torch/extension.h>
#include <ATen/ATen.h>
...
// Modifications licensed under:
// SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
// SPDX-License-Identifier: Apache-2.0
//
// Parts of this code are from torchvision licensed under
// SPDX-FileCopyrightText: 2016 Soumith Chintala
// SPDX-License-Identifier: BSD-3-Clause
#include <torch/extension.h>
#include "cpu/nms.cpp"
...
""" # Modifications licensed under:
Some parts are adapted from https://github.com/cocodataset/cocoapi : # SPDX-FileCopyrightText: 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
# SPDX-License-Identifier: Apache-2.0
Copyright (c) 2014, Piotr Dollar and Tsung-Yi Lin #
All rights reserved. # Parts of this code are from cocoapi licensed under
# SPDX-FileCopyrightText: 2014, Piotr Dollar and Tsung-Yi Lin
Redistribution and use in source and binary forms, with or without # SPDX-License-Identifier: BSD-2-Clause-Views
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those
of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.
"""
"""
For the remaining parts:
Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import time
import numpy as np
@@ -137,7 +97,6 @@ class COCOMetric(DetectionMetric):
        results = {}
        results.update(self.compute_ap(dataset_statistics))
-       results.update(self.compute_ar(dataset_statistics))
        if self.verbose:
            toc = time.time()
@@ -149,15 +108,14 @@ class COCOMetric(DetectionMetric):
        Compute AP metrics
        Args:
-           results_list (List[Dict[int, Dict[str, np.ndarray]]]): list with results per image (in list)
-               per category (dict). Inner Dict contains multiple results obtained by :func:`box_matching_batch`.
-               `dtMatches`: matched detections [T, D], where T = number of thresholds, D = number of detections
-               `gtMatches`: matched ground truth boxes [T, G], where T = number of thresholds, G = number of
-                   ground truth
-               `dtScores`: prediction scores [D] detection scores
-               `gtIgnore`: ground truth boxes which should be ignored [G] indicate whether ground truth
-                   should be ignored
-               `dtIgnore`: detections which should be ignored [T, D], indicate which detections should be ignored
+           dataset_statistics (dict): computed statistics over dataset
+               `counts`: Number of thresholds, Number recall thresholds, Number of classes, Number of max
+                   detection thresholds
+               `recall`: Computed recall values [num_iou_th, num_classes, num_max_detections]
+               `precision`: Precision values at specified recall thresholds
+                   [num_iou_th, num_recall_th, num_classes, num_max_detections]
+               `scores`: Scores corresponding to specified recall thresholds
+                   [num_iou_th, num_recall_th, num_classes, num_max_detections]
        """
        results = {}
        if self.iou_range:  # mAP
@@ -186,47 +144,6 @@ class COCOMetric(DetectionMetric):
                                              iou_idx=[idx], cls_idx=cls_idx, max_det_idx=-1)
        return results
def compute_ar(self, dataset_statistics: dict) -> dict:
"""
Compute AR metrics
Args:
results_list (List[Dict[int, Dict[str, np.ndarray]]]): list with results per image (in list)
per category (dict). Inner Dict contains multiple results obtained by :func:`box_matching_batch`.
`dtMatches`: matched detections [T, D], where T = number of thresholds, D = number of detections
`gtMatches`: matched ground truth boxes [T, G], where T = number of thresholds, G = number of
ground truth
`dtScores`: prediction scores [D] detection scores
`gtIgnore`: ground truth boxes which should be ignored [G] indicate whether ground truth
should be ignored
`dtIgnore`: detections which should be ignored [T, D], indicate which detections should be ignored
"""
results = {}
for max_det_idx, max_det in enumerate(self.max_detections): # mAR
key = f"mAR_IoU_{self.iou_range[0]:.2f}_{self.iou_range[1]:.2f}_{self.iou_range[2]:.2f}_MaxDet_{max_det}"
results[key] = self.select_ar(dataset_statistics, max_det_idx=max_det_idx)
if self.per_class:
for cls_idx, cls_str in enumerate(self.classes): # per class results
key = (f"{cls_str}_"
f"mAR_IoU_{self.iou_range[0]:.2f}_{self.iou_range[1]:.2f}_{self.iou_range[2]:.2f}_"
f"MaxDet_{max_det}")
results[key] = self.select_ar(dataset_statistics,
cls_idx=cls_idx, max_det_idx=max_det_idx)
for idx in self.iou_list_idx: # AR@IoU
key = f"AR_IoU_{self.iou_thresholds[idx]:.2f}_MaxDet_{self.max_detections[-1]}"
results[key] = self.select_ar(dataset_statistics, iou_idx=idx, max_det_idx=-1)
if self.per_class:
for cls_idx, cls_str in enumerate(self.classes): # per class results
key = (f"{cls_str}_"
f"AR_IoU_{self.iou_thresholds[idx]:.2f}_"
f"MaxDet_{self.max_detections[-1]}")
results[key] = self.select_ar(dataset_statistics, iou_idx=idx,
cls_idx=cls_idx, max_det_idx=-1)
return results
    @staticmethod
    def select_ap(dataset_statistics: dict, iou_idx: Union[int, List[int]] = None,
                  cls_idx: Union[int, Sequence[int]] = None, max_det_idx: int = -1) -> np.ndarray:
@@ -257,42 +174,6 @@ class COCOMetric(DetectionMetric):
        prec = prec[..., max_det_idx]
        return np.mean(prec)
@staticmethod
def select_ar(dataset_statistics: dict, iou_idx: Union[int, Sequence[int]] = None,
cls_idx: Union[int, Sequence[int]] = None,
max_det_idx: int = -1) -> np.ndarray:
"""
Compute average recall
Args:
dataset_statistics (dict): computed statistics over dataset
`counts`: Number of thresholds, Number recall thresholds, Number of classes, Number of max
detection thresholds
`recall`: Computed recall values [num_iou_th, num_classes, num_max_detections]
`precision`: Precision values at specified recall thresholds
[num_iou_th, num_recall_th, num_classes, num_max_detections]
`scores`: Scores corresponding to specified recall thresholds
[num_iou_th, num_recall_th, num_classes, num_max_detections]
iou_idx: index of IoU values to select for evaluation(if None, all values are used)
cls_idx: class indices to select, if None all classes will be selected
max_det_idx (int): index to select max detection threshold from data
Returns:
np.ndarray: recall value
"""
rec = dataset_statistics["recall"]
if iou_idx is not None:
rec = rec[iou_idx]
if cls_idx is not None:
rec = rec[..., cls_idx, :]
rec = rec[..., max_det_idx]
if len(rec[rec > -1]) == 0:
rec = -1
else:
rec = np.mean(rec[rec > -1])
return rec
    def compute_statistics(self, results_list: List[Dict[int, Dict[str, np.ndarray]]]
                           ) -> Dict[str, Union[np.ndarray, List]]:
        """
...