Commit 7fda1f66 authored by jshilong's avatar jshilong Committed by ChaimZhu

[Fix] Fix bugs and remove show-related code in datasets

parent ff1e5b4e
# Structure Aware Single-stage 3D Object Detection from Point Cloud

> [Structure Aware Single-stage 3D Object Detection from Point Cloud](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.pdf)

<!-- [ALGORITHM] -->
......
...@@ -8,31 +8,31 @@ We list some potential troubles encountered by users and developers, along with

The required versions of MMCV, MMDetection and MMSegmentation for different versions of MMDetection3D are as below. Please install the correct version of MMCV, MMDetection and MMSegmentation to avoid installation issues; a pinning example follows the table.
| MMDetection3D version | MMDetection version | MMSegmentation version | MMCV version |
| :-------------------: | :---------------------: | :--------------------: | :------------------------: |
| master | mmdet>=2.24.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.4.8, <=1.6.0 |
| v1.0.0rc3 | mmdet>=2.24.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.4.8, <=1.6.0 |
| v1.0.0rc2 | mmdet>=2.24.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.4.8, <=1.6.0 |
| v1.0.0rc1 | mmdet>=2.19.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.4.8, <=1.5.0 |
| v1.0.0rc0 | mmdet>=2.19.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.17, <=1.5.0 |
| 0.18.1 | mmdet>=2.19.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.17, <=1.5.0 |
| 0.18.0 | mmdet>=2.19.0, <=3.0.0 | mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.17, <=1.5.0 |
| 0.17.3 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.17.2 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.17.1 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.17.0 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.16.0 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.15.0 | mmdet>=2.14.0, <=3.0.0 | mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0 |
| 0.14.0 | mmdet>=2.10.0, <=2.11.0 | mmseg==0.14.0 | mmcv-full>=1.3.1, <=1.4.0 |
| 0.13.0 | mmdet>=2.10.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.4.0 |
| 0.12.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.4.0 |
| 0.11.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.3.0 |
| 0.10.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.3.0 |
| 0.9.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.3.0 |
| 0.8.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.1.5, <=1.3.0 |
| 0.7.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.1.5, <=1.3.0 |
| 0.6.0 | mmdet>=2.4.0, <=2.11.0 | Not required | mmcv-full>=1.1.3, <=1.2.0 |
| 0.5.0 | 2.3.0 | Not required | mmcv-full==1.0.5 |
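For example, to match the `master` row, you could pin the companion packages explicitly. A hedged sketch (note that MMSegmentation's pip package is named `mmsegmentation`, and the `cu113/torch1.10` part of the mmcv-full index URL must match your own CUDA/PyTorch combination):

```shell
pip install 'mmdet>=2.24.0,<=3.0.0' 'mmsegmentation>=0.20.0,<=1.0.0'
# swap cu113/torch1.10 for your own CUDA/PyTorch combination
pip install 'mmcv-full>=1.4.8,<=1.6.0' -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```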
- If you face the error shown below when importing open3d:
......
# Prerequisites

In this section we demonstrate how to prepare an environment with PyTorch.

MMDetection3D works on Linux, Windows (experimental support) and macOS, and requires the following packages:
...@@ -40,6 +41,7 @@ conda install pytorch torchvision cpuonly -c pytorch
We recommend that users follow our best practices to install MMDetection3D. However, the whole process is highly customizable. See the [Customize Installation](#customize-installation) section for more information.

## Best Practices

Assuming that you already have CUDA 11.0 installed, here is a full script for quick installation of MMDetection3D with conda.
Otherwise, you should refer to the step-by-step installation instructions in the next section.
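A sketch of such a script (the environment name and the unpinned package versions are illustrative assumptions, not the elided original; prebuilt `mmcv-full` wheels can instead be fetched via the `-f` index URL shown in the Customize Installation section):

```shell
conda create -n openmmlab python=3.8 -y
conda activate openmmlab
# CUDA 11.0 pairs with the PyTorch 1.7 series (an assumption; adjust to your setup)
conda install pytorch torchvision cudatoolkit=11.0 -c pytorch -y
pip install mmcv-full mmdet mmsegmentation
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip install -e .
```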
...@@ -57,7 +59,6 @@ pip install -e .
**Step 1.** Install [MMDetection](https://github.com/open-mmlab/mmdetection).

```shell
pip install mmdet
```
...@@ -103,20 +104,20 @@ pip install -v -e .  # or "python setup.py develop"
Note:

1. The git commit id will be written to the version number with step d, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
   It is recommended that you run step d each time you pull some updates from GitHub. If C++/CUDA code is modified, then this step is compulsory.

   > Important: Be sure to remove the `./build` folder if you reinstall mmdet3d with a different CUDA/PyTorch version.

   ```shell
   pip uninstall mmdet3d
   rm -rf ./build
   find . -name "*.so" | xargs rm
   ```

2. Following the above instructions, MMDetection3D is installed in `dev` mode; any local modifications made to the code will take effect without reinstalling it (unless you submit some commits and want to update the version number).

3. If you would like to use `opencv-python-headless` instead of `opencv-python`, you can install it before installing MMCV.

4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions`, either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
...@@ -135,14 +136,13 @@ you can install it before installing MMCV.
We also support Minkowski Engine as a sparse convolution backend. If necessary, please follow the original [installation guide](https://github.com/NVIDIA/MinkowskiEngine#installation) or use `pip`:

```shell
conda install openblas-devel -c anaconda
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=/opt/conda/include" --install-option="--blas=openblas"
```

5. For now, the code cannot be built for a CPU-only environment (where CUDA isn't available).
## Verification

### Verify with point cloud demo
...@@ -160,7 +160,7 @@ python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_secon
```
If you want to input a `ply` file, you can use the following function and convert it to `bin` format. Then you can use the converted `bin` file to generate the demo.
Note that you need to install `pandas` and `plyfile` before using this script. This function can also be used for data preprocessing when training on `ply` data.

```python
import numpy as np
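import pandas as pd
from plyfile import PlyData


# The original function body is elided here; what follows is a minimal sketch
# under stated assumptions (the ``plyfile``/``pandas`` APIs and a flat float32
# layout for the output ``bin``), not the verbatim original.
def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # parse the ply file
    data = plydata.elements[0].data  # structured array of vertex records
    data_pd = pd.DataFrame(data)  # one column per property (x, y, z, ...)
    data_np = np.zeros(data_pd.shape, dtype=np.float32)
    for i, name in enumerate(data[0].dtype.names):  # copy property by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)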
...@@ -206,6 +206,7 @@ More demos about single/multi-modality and indoor/outdoor 3D detection can be fo
## Customize Installation

### CUDA Versions

When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:

- For Ampere-based NVIDIA GPUs, such as GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
...@@ -229,8 +230,6 @@ For example, the following command installs mmcv-full built for PyTorch 1.10.x an
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```

### Using MMDetection3D with Docker

We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/master/docker/Dockerfile) to build an image.
......
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import tempfile
import warnings
from os import path as osp
from typing import Callable, List, Optional, Union
...@@ -11,8 +9,6 @@ from mmengine.dataset import BaseDataset
from mmdet3d.datasets import DATASETS
from ..core.bbox import get_box_type
from .pipelines import Compose
from .utils import extract_result_dict, get_loading_pipeline
@DATASETS.register_module()
...@@ -153,7 +149,7 @@ class Det3DDataset(BaseDataset):
return ann_info
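# NOTE: ``parse_ann_info`` below now returns ``None`` when a sample carries no
# instances; subclasses such as ``KittiDataset`` and ``ScanNetDataset`` turn
# that ``None`` into their own empty-gt arrays.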
def parse_ann_info(self, info: dict) -> Optional[dict]:
"""Process the `instances` in data info to `ann_info`.

In `Custom3DDataset`, we simply concatenate all the field
...@@ -165,7 +161,7 @@ class Det3DDataset(BaseDataset):
info (dict): Info dict.

Returns:
    dict | None: Processed `ann_info`.
"""
# add s or gt prefix for most keys after concat
name_mapping = {
...@@ -179,16 +175,22 @@ class Det3DDataset(BaseDataset):
}

instances = info['instances']
# empty gt
if len(instances) == 0:
    return None
else:
    keys = list(instances[0].keys())
    ann_info = dict()
    for ann_name in keys:
        temp_anns = [item[ann_name] for item in instances]
        if 'label' in ann_name:
            temp_anns = [
                self.label_mapping[item] for item in temp_anns
            ]
        temp_anns = np.array(temp_anns)
        if ann_name in name_mapping:
            ann_name = name_mapping[ann_name]
        ann_info[ann_name] = temp_anns

return ann_info
def parse_data_info(self, info: dict) -> dict:
...@@ -257,138 +259,3 @@ class Det3DDataset(BaseDataset):
example['data_sample'].gt_instances_3d.labels_3d) == 0:
return None
return example
def format_results(self,
outputs,
pklfile_prefix=None,
submission_prefix=None):
"""Format the results to pkl file.
Args:
outputs (list[dict]): Testing results of the dataset.
pklfile_prefix (str): The prefix of pkl files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: (outputs, tmp_dir), outputs is the detection results,
tmp_dir is the temporary directory created for saving pkl
files when ``pklfile_prefix`` is not specified.
"""
if pklfile_prefix is None:
    tmp_dir = tempfile.TemporaryDirectory()
    pklfile_prefix = osp.join(tmp_dir.name, 'results')
else:
    tmp_dir = None
out = f'{pklfile_prefix}.pkl'
mmcv.dump(outputs, out)
return outputs, tmp_dir
def evaluate(self,
results,
metric=None,
iou_thr=(0.25, 0.5),
logger=None,
show=False,
out_dir=None,
pipeline=None):
"""Evaluate.
Evaluation in indoor protocol.
Args:
results (list[dict]): List of results.
metric (str | list[str], optional): Metrics to be evaluated.
Defaults to None.
iou_thr (list[float]): AP IoU thresholds. Defaults to (0.25, 0.5).
logger (logging.Logger | str, optional): Logger used for printing
related information during evaluation. Defaults to None.
show (bool, optional): Whether to visualize.
Default: False.
out_dir (str, optional): Path to save the visualization results.
Default: None.
pipeline (list[dict], optional): Raw data loading pipeline
used for visualization. Default: None.
Returns:
dict: Evaluation results.
"""
from mmdet3d.core.evaluation import indoor_eval
assert isinstance(
results, list), f'Expect results to be list, got {type(results)}.'
assert len(results) > 0, 'Expect length of results > 0.'
assert len(results) == len(self.data_infos)
assert isinstance(
results[0], dict
), f'Expect elements in results to be dict, got {type(results[0])}.'
gt_annos = [info['annos'] for info in self.data_infos]
label2cat = {i: cat_id for i, cat_id in enumerate(self.CLASSES)}
ret_dict = indoor_eval(
gt_annos,
results,
iou_thr,
label2cat,
logger=logger,
box_type_3d=self.box_type_3d,
box_mode_3d=self.box_mode_3d)
if show:
self.show(results, out_dir, pipeline=pipeline)
return ret_dict
# TODO: check where this method is used
def _build_default_pipeline(self):
"""Build the default pipeline for this dataset."""
raise NotImplementedError('_build_default_pipeline is not implemented '
f'for dataset {self.__class__.__name__}')
# TODO: check where this method is used
def _get_pipeline(self, pipeline):
"""Get data loading pipeline in self.show/evaluate function.
Args:
pipeline (list[dict]): Input pipeline. If None is given,
get from self.pipeline.
"""
if pipeline is None:
if not hasattr(self, 'pipeline') or self.pipeline is None:
warnings.warn(
'Use default pipeline for data loading, this may cause '
'errors when data is on ceph')
return self._build_default_pipeline()
loading_pipeline = get_loading_pipeline(self.pipeline.transforms)
return Compose(loading_pipeline)
return Compose(pipeline)
# TODO: check where this method is used
def _extract_data(self, index, pipeline, key, load_annos=False):
"""Load data using input pipeline and extract data according to key.
Args:
index (int): Index for accessing the target data.
pipeline (:obj:`Compose`): Composed data loading pipeline.
key (str | list[str]): One single or a list of data key.
load_annos (bool): Whether to load data annotations.
If True, need to set self.test_mode as False before loading.
Returns:
np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]:
A single or a list of loaded data.
"""
assert pipeline is not None, 'data loading pipeline is not provided'
# when we want to load ground-truth via pipeline (e.g. bbox, seg mask)
# we need to set self.test_mode as False so that we have 'annos'
if load_annos:
original_test_mode = self.test_mode
self.test_mode = False
input_dict = self.get_data_info(index)
self.pre_pipeline(input_dict)
example = pipeline(input_dict)
# extract data items according to keys
if isinstance(key, str):
data = extract_result_dict(example, key)
else:
data = [extract_result_dict(example, k) for k in key]
if load_annos:
self.test_mode = original_test_mode
return data
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import tempfile
from os import path as osp
from typing import Callable, List, Optional, Union
import mmcv
import numpy as np
import torch
from mmcv.utils import print_log
from mmdet3d.datasets import DATASETS
from ..core import show_multi_modality_result, show_result
from ..core.bbox import (Box3DMode, CameraInstance3DBoxes, Coord3DMode,
                         LiDARInstance3DBoxes, points_cam2img)
from .det3d_dataset import Det3DDataset
from .pipelines import Compose
@DATASETS.register_module()
...@@ -91,7 +82,7 @@ class KittiDataset(Det3DDataset):
if 'plane' in info:
# convert ground plane to velodyne coordinates
plane = np.array(info['plane'])
lidar2cam = np.array(info['images']['CAM2']['lidar2cam'])
reverse = np.linalg.inv(lidar2cam)
(plane_norm_cam, plane_off_cam) = (plane[:3],
...@@ -130,15 +121,13 @@ class KittiDataset(Det3DDataset):
- difficulty (int): Difficulty defined by KITTI.
0, 1, 2 represent easy, moderate and hard, respectively.
"""
ann_info = super().parse_ann_info(info)
if ann_info is None:
    ann_info = dict()
    # empty instance
    ann_info['gt_bboxes_3d'] = np.zeros((0, 7), dtype=np.float32)
    ann_info['gt_labels_3d'] = np.zeros(0, dtype=np.int64)
bbox_labels_3d = ann_info['gt_labels_3d']
bbox_labels_3d = np.array(bbox_labels_3d)
ann_info['gt_labels_3d'] = bbox_labels_3d
ann_info['gt_labels'] = copy.deepcopy(ann_info['gt_labels_3d'])
ann_info = self._remove_dontcare(ann_info)
# in kitti, lidar2cam = R0_rect @ Tr_velo_to_cam
lidar2cam = np.array(info['images']['CAM2']['lidar2cam'])
# convert gt_bboxes_3d to velodyne coordinates with `lidar2cam`
...@@ -146,510 +135,4 @@ class KittiDataset(Det3DDataset):
ann_info['gt_bboxes_3d']).convert_to(self.box_mode_3d,
np.linalg.inv(lidar2cam))
ann_info['gt_bboxes_3d'] = gt_bboxes_3d
return ann_info
def format_results(self,
outputs,
pklfile_prefix=None,
submission_prefix=None):
"""Format the results to pkl file.
Args:
outputs (list[dict]): Testing results of the dataset.
pklfile_prefix (str): The prefix of pkl files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
submission_prefix (str): The prefix of submitted files. It
includes the file path and the prefix of filename, e.g.,
"a/b/prefix". If not specified, a temp file will be created.
Default: None.
Returns:
tuple: (result_files, tmp_dir), result_files is a dict containing
the result filepaths, tmp_dir is the temporary directory created
for saving these files when pklfile_prefix is not specified.
"""
if pklfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
pklfile_prefix = osp.join(tmp_dir.name, 'results')
else:
tmp_dir = None
if not isinstance(outputs[0], dict):
result_files = self.bbox2result_kitti2d(outputs, self.CLASSES,
pklfile_prefix,
submission_prefix)
elif 'pts_bbox' in outputs[0] or 'img_bbox' in outputs[0]:
result_files = dict()
for name in outputs[0]:
results_ = [out[name] for out in outputs]
pklfile_prefix_ = pklfile_prefix + name
if submission_prefix is not None:
submission_prefix_ = submission_prefix + name
else:
submission_prefix_ = None
if 'img' in name:
result_files = self.bbox2result_kitti2d(
results_, self.CLASSES, pklfile_prefix_,
submission_prefix_)
else:
result_files_ = self.bbox2result_kitti(
results_, self.CLASSES, pklfile_prefix_,
submission_prefix_)
result_files[name] = result_files_
else:
result_files = self.bbox2result_kitti(outputs, self.CLASSES,
pklfile_prefix,
submission_prefix)
return result_files, tmp_dir
def evaluate(self,
results,
metric=None,
logger=None,
pklfile_prefix=None,
submission_prefix=None,
show=False,
out_dir=None,
pipeline=None):
"""Evaluation in KITTI protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str], optional): Metrics to be evaluated.
Default: None.
logger (logging.Logger | str, optional): Logger used for printing
related information during evaluation. Default: None.
pklfile_prefix (str, optional): The prefix of pkl files, including
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
submission_prefix (str, optional): The prefix of submission data.
If not specified, the submission data will not be generated.
Default: None.
show (bool, optional): Whether to visualize.
Default: False.
out_dir (str, optional): Path to save the visualization results.
Default: None.
pipeline (list[dict], optional): Raw data loading pipeline
used for visualization. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
result_files, tmp_dir = self.format_results(results, pklfile_prefix)
from mmdet3d.core.evaluation import kitti_eval
gt_annos = [info['annos'] for info in self.data_infos]
if isinstance(result_files, dict):
ap_dict = dict()
for name, result_files_ in result_files.items():
eval_types = ['bbox', 'bev', '3d']
if 'img' in name:
eval_types = ['bbox']
ap_result_str, ap_dict_ = kitti_eval(
gt_annos,
result_files_,
self.CLASSES,
eval_types=eval_types)
for ap_type, ap in ap_dict_.items():
ap_dict[f'{name}/{ap_type}'] = float('{:.4f}'.format(ap))
print_log(
f'Results of {name}:\n' + ap_result_str, logger=logger)
else:
if metric == 'img_bbox':
ap_result_str, ap_dict = kitti_eval(
gt_annos, result_files, self.CLASSES, eval_types=['bbox'])
else:
ap_result_str, ap_dict = kitti_eval(gt_annos, result_files,
self.CLASSES)
print_log('\n' + ap_result_str, logger=logger)
if tmp_dir is not None:
tmp_dir.cleanup()
if show or out_dir:
self.show(results, out_dir, show=show, pipeline=pipeline)
return ap_dict
def bbox2result_kitti(self,
net_outputs,
class_names,
pklfile_prefix=None,
submission_prefix=None):
"""Convert 3D detection results to kitti format for evaluation and test
submission.
Args:
net_outputs (list[np.ndarray]): List of arrays storing the
inferred bounding boxes and scores.
class_names (list[String]): A list of class names.
pklfile_prefix (str): The prefix of pkl file.
submission_prefix (str): The prefix of submission file.
Returns:
list[dict]: A list of dictionaries with the kitti format.
"""
assert len(net_outputs) == len(self.data_infos), \
'invalid list length of network outputs'
if submission_prefix is not None:
mmcv.mkdir_or_exist(submission_prefix)
det_annos = []
print('\nConverting prediction to KITTI format')
for idx, pred_dicts in enumerate(
mmcv.track_iter_progress(net_outputs)):
annos = []
info = self.data_infos[idx]
sample_idx = info['image']['image_idx']
image_shape = info['image']['image_shape'][:2]
box_dict = self.convert_valid_bboxes(pred_dicts, info)
anno = {
'name': [],
'truncated': [],
'occluded': [],
'alpha': [],
'bbox': [],
'dimensions': [],
'location': [],
'rotation_y': [],
'score': []
}
if len(box_dict['bbox']) > 0:
box_2d_preds = box_dict['bbox']
box_preds = box_dict['box3d_camera']
scores = box_dict['scores']
box_preds_lidar = box_dict['box3d_lidar']
label_preds = box_dict['label_preds']
for box, box_lidar, bbox, score, label in zip(
box_preds, box_preds_lidar, box_2d_preds, scores,
label_preds):
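# clip the projected 2D box to the image: ``image_shape`` is (h, w),
# so its reverse gives the (w, h) upper bound for the max corner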
bbox[2:] = np.minimum(bbox[2:], image_shape[::-1])
bbox[:2] = np.maximum(bbox[:2], [0, 0])
anno['name'].append(class_names[int(label)])
anno['truncated'].append(0.0)
anno['occluded'].append(0)
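# KITTI's alpha is the observation angle: the global yaw of the box
# minus the azimuth of its center as seen from the sensor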
anno['alpha'].append(
-np.arctan2(-box_lidar[1], box_lidar[0]) + box[6])
anno['bbox'].append(bbox)
anno['dimensions'].append(box[3:6])
anno['location'].append(box[:3])
anno['rotation_y'].append(box[6])
anno['score'].append(score)
anno = {k: np.stack(v) for k, v in anno.items()}
annos.append(anno)
else:
anno = {
'name': np.array([]),
'truncated': np.array([]),
'occluded': np.array([]),
'alpha': np.array([]),
'bbox': np.zeros([0, 4]),
'dimensions': np.zeros([0, 3]),
'location': np.zeros([0, 3]),
'rotation_y': np.array([]),
'score': np.array([]),
}
annos.append(anno)
if submission_prefix is not None:
curr_file = f'{submission_prefix}/{sample_idx:06d}.txt'
with open(curr_file, 'w') as f:
bbox = anno['bbox']
loc = anno['location']
dims = anno['dimensions'] # lhw -> hwl
for idx in range(len(bbox)):
print(
'{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} '
'{:.4f} {:.4f} {:.4f} '
'{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format(
anno['name'][idx], anno['alpha'][idx],
bbox[idx][0], bbox[idx][1], bbox[idx][2],
bbox[idx][3], dims[idx][1], dims[idx][2],
dims[idx][0], loc[idx][0], loc[idx][1],
loc[idx][2], anno['rotation_y'][idx],
anno['score'][idx]),
file=f)
annos[-1]['sample_idx'] = np.array(
[sample_idx] * len(annos[-1]['score']), dtype=np.int64)
det_annos += annos
if pklfile_prefix is not None:
if not pklfile_prefix.endswith(('.pkl', '.pickle')):
out = f'{pklfile_prefix}.pkl'
mmcv.dump(det_annos, out)
print(f'Result is saved to {out}.')
return det_annos
def bbox2result_kitti2d(self,
net_outputs,
class_names,
pklfile_prefix=None,
submission_prefix=None):
"""Convert 2D detection results to kitti format for evaluation and test
submission.
Args:
net_outputs (list[np.ndarray]): List of arrays storing the
inferred bounding boxes and scores.
class_names (list[String]): A list of class names.
pklfile_prefix (str): The prefix of pkl file.
submission_prefix (str): The prefix of submission file.
Returns:
list[dict]: A list of dictionaries with the KITTI format.
"""
assert len(net_outputs) == len(self.data_infos), \
'invalid list length of network outputs'
det_annos = []
print('\nConverting prediction to KITTI format')
for i, bboxes_per_sample in enumerate(
mmcv.track_iter_progress(net_outputs)):
annos = []
anno = dict(
name=[],
truncated=[],
occluded=[],
alpha=[],
bbox=[],
dimensions=[],
location=[],
rotation_y=[],
score=[])
sample_idx = self.data_infos[i]['image']['image_idx']
num_example = 0
for label in range(len(bboxes_per_sample)):
bbox = bboxes_per_sample[label]
for i in range(bbox.shape[0]):
anno['name'].append(class_names[int(label)])
anno['truncated'].append(0.0)
anno['occluded'].append(0)
anno['alpha'].append(0.0)
anno['bbox'].append(bbox[i, :4])
# set dimensions (height, width, length) to zero
anno['dimensions'].append(
np.zeros(shape=[3], dtype=np.float32))
# set the 3D translation to (-1000, -1000, -1000)
anno['location'].append(
np.ones(shape=[3], dtype=np.float32) * (-1000.0))
anno['rotation_y'].append(0.0)
anno['score'].append(bbox[i, 4])
num_example += 1
if num_example == 0:
annos.append(
dict(
name=np.array([]),
truncated=np.array([]),
occluded=np.array([]),
alpha=np.array([]),
bbox=np.zeros([0, 4]),
dimensions=np.zeros([0, 3]),
location=np.zeros([0, 3]),
rotation_y=np.array([]),
score=np.array([]),
))
else:
anno = {k: np.stack(v) for k, v in anno.items()}
annos.append(anno)
annos[-1]['sample_idx'] = np.array(
[sample_idx] * num_example, dtype=np.int64)
det_annos += annos
if pklfile_prefix is not None:
# save file in pkl format
pklfile_path = (
pklfile_prefix[:-4] if pklfile_prefix.endswith(
('.pkl', '.pickle')) else pklfile_prefix)
mmcv.dump(det_annos, pklfile_path)
if submission_prefix is not None:
# save file in submission format
mmcv.mkdir_or_exist(submission_prefix)
print(f'Saving KITTI submission to {submission_prefix}')
for i, anno in enumerate(det_annos):
sample_idx = self.data_infos[i]['image']['image_idx']
cur_det_file = f'{submission_prefix}/{sample_idx:06d}.txt'
with open(cur_det_file, 'w') as f:
bbox = anno['bbox']
loc = anno['location']
dims = anno['dimensions'][::-1] # lhw -> hwl
for idx in range(len(bbox)):
print(
'{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f} '
'{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format(
anno['name'][idx],
anno['alpha'][idx],
*bbox[idx], # 4 float
*dims[idx], # 3 float
*loc[idx], # 3 float
anno['rotation_y'][idx],
anno['score'][idx]),
file=f,
)
print(f'Result is saved to {submission_prefix}')
return det_annos
def convert_valid_bboxes(self, box_dict, info):
"""Convert the predicted boxes into valid ones.
Args:
box_dict (dict): Box dictionaries to be converted.
- boxes_3d (:obj:`LiDARInstance3DBoxes`): 3D bounding boxes.
- scores_3d (torch.Tensor): Scores of boxes.
- labels_3d (torch.Tensor): Class labels of boxes.
info (dict): Data info.
Returns:
dict: Valid predicted boxes.
- bbox (np.ndarray): 2D bounding boxes.
- box3d_camera (np.ndarray): 3D bounding boxes in
camera coordinate.
- box3d_lidar (np.ndarray): 3D bounding boxes in
LiDAR coordinate.
- scores (np.ndarray): Scores of boxes.
- label_preds (np.ndarray): Class label predictions.
- sample_idx (int): Sample index.
"""
# TODO: refactor this function
box_preds = box_dict['boxes_3d']
scores = box_dict['scores_3d']
labels = box_dict['labels_3d']
sample_idx = info['image']['image_idx']
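# wrap the predicted yaw angles into [-pi, pi) before converting coordinates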
box_preds.limit_yaw(offset=0.5, period=np.pi * 2)
if len(box_preds) == 0:
return dict(
bbox=np.zeros([0, 4]),
box3d_camera=np.zeros([0, 7]),
box3d_lidar=np.zeros([0, 7]),
scores=np.zeros([0]),
label_preds=np.zeros([0, 4]),
sample_idx=sample_idx)
rect = info['calib']['R0_rect'].astype(np.float32)
Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32)
P2 = info['calib']['P2'].astype(np.float32)
img_shape = info['image']['image_shape']
P2 = box_preds.tensor.new_tensor(P2)
box_preds_camera = box_preds.convert_to(Box3DMode.CAM, rect @ Trv2c)
box_corners = box_preds_camera.corners
box_corners_in_image = points_cam2img(box_corners, P2)
# box_corners_in_image: [N, 8, 2]
minxy = torch.min(box_corners_in_image, dim=1)[0]
maxxy = torch.max(box_corners_in_image, dim=1)[0]
box_2d_preds = torch.cat([minxy, maxxy], dim=1)
# Post-processing
# check box_preds_camera
image_shape = box_preds.tensor.new_tensor(img_shape)
valid_cam_inds = ((box_2d_preds[:, 0] < image_shape[1]) &
(box_2d_preds[:, 1] < image_shape[0]) &
(box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0))
# check box_preds
limit_range = box_preds.tensor.new_tensor(self.pcd_limit_range)
valid_pcd_inds = ((box_preds.center > limit_range[:3]) &
(box_preds.center < limit_range[3:]))
valid_inds = valid_cam_inds & valid_pcd_inds.all(-1)
if valid_inds.sum() > 0:
return dict(
bbox=box_2d_preds[valid_inds, :].numpy(),
box3d_camera=box_preds_camera[valid_inds].tensor.numpy(),
box3d_lidar=box_preds[valid_inds].tensor.numpy(),
scores=scores[valid_inds].numpy(),
label_preds=labels[valid_inds].numpy(),
sample_idx=sample_idx)
else:
return dict(
bbox=np.zeros([0, 4]),
box3d_camera=np.zeros([0, 7]),
box3d_lidar=np.zeros([0, 7]),
scores=np.zeros([0]),
label_preds=np.zeros([0, 4]),
sample_idx=sample_idx)
def _build_default_pipeline(self):
"""Build the default pipeline for this dataset."""
pipeline = [
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
file_client_args=dict(backend='disk')),
dict(
type='DefaultFormatBundle3D',
class_names=self.CLASSES,
with_label=False),
dict(type='Collect3D', keys=['points'])
]
if self.modality['use_camera']:
pipeline.insert(0, dict(type='LoadImageFromFile'))
return Compose(pipeline)
def show(self, results, out_dir, show=True, pipeline=None):
"""Results visualization.
Args:
results (list[dict]): List of bounding boxes results.
out_dir (str): Output directory of visualization result.
show (bool): Whether to visualize the results online.
Default: False.
pipeline (list[dict], optional): Raw data loading pipeline
used for visualization. Default: None.
"""
assert out_dir is not None, 'Expect out_dir, got none.'
pipeline = self._get_pipeline(pipeline)
for i, result in enumerate(results):
if 'pts_bbox' in result.keys():
result = result['pts_bbox']
data_info = self.data_infos[i]
pts_path = data_info['point_cloud']['velodyne_path']
file_name = osp.split(pts_path)[-1].split('.')[0]
points, img_metas, img = self._extract_data(
i, pipeline, ['points', 'img_metas', 'img'])
points = points.numpy()
# for now we convert points into depth mode
points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR,
Coord3DMode.DEPTH)
gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy()
show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR,
Box3DMode.DEPTH)
pred_bboxes = result['boxes_3d'].tensor.numpy()
show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR,
Box3DMode.DEPTH)
show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir,
file_name, show)
# multi-modality visualization
if self.modality['use_camera'] and 'lidar2img' in img_metas.keys():
img = img.numpy()
# need to transpose channel to first dim
img = img.transpose(1, 2, 0)
show_pred_bboxes = LiDARInstance3DBoxes(
pred_bboxes, origin=(0.5, 0.5, 0))
show_gt_bboxes = LiDARInstance3DBoxes(
gt_bboxes, origin=(0.5, 0.5, 0))
show_multi_modality_result(
img,
show_gt_bboxes,
show_pred_bboxes,
img_metas['lidar2img'],
out_dir,
file_name,
box_mode='lidar',
show=show)
...@@ -367,7 +367,9 @@ class ObjectSample(BaseTransform):
gt_labels_3d = input_dict['gt_labels_3d']

if self.use_ground_plane and 'plane' in input_dict['ann_info']:
    ground_plane = input_dict['plane']
    assert ground_plane is not None, '`use_ground_plane` is True ' \
        'but find plane is None'
    input_dict['plane'] = ground_plane
else:
    ground_plane = None
......
...@@ -62,7 +62,7 @@ class ScanNetDataset(Det3DDataset):
metainfo: dict = None,
data_prefix: dict = dict(
pts='points',
pts_instance_mask='instance_mask',
pts_semantic_mask='semantic_mask'),
pipeline: List[Union[dict, Callable]] = [],
modality=dict(use_camera=False, use_lidar=True),
...@@ -116,12 +116,6 @@ class ScanNetDataset(Det3DDataset):
dict: Data information that will be passed to the data
preprocessing pipelines. It includes the following keys:
"""
# TODO: whether all depth modality is pts ?
if self.modality['use_lidar']:
info['lidar_points']['lidar_path'] = \
osp.join(
self.data_prefix.get('pts', ''),
info['lidar_points']['lidar_path'])
info['axis_align_matrix'] = self._get_axis_align_matrix(info)
info['pts_instance_mask_path'] = osp.join(
self.data_prefix.get('pts_instance_mask', ''),
...@@ -143,7 +137,12 @@ class ScanNetDataset(Det3DDataset):
dict: Processed `ann_info`
"""
ann_info = super().parse_ann_info(info)
# empty gt
if ann_info is None:
    ann_info = dict()
    ann_info['gt_bboxes_3d'] = np.zeros((0, 6), dtype=np.float32)
    ann_info['gt_labels_3d'] = np.zeros((0, ), dtype=np.int64)
# to target box structure
ann_info['gt_bboxes_3d'] = DepthInstance3DBoxes(
ann_info['gt_bboxes_3d'],
box_dim=ann_info['gt_bboxes_3d'].shape[-1],
......
...@@ -352,24 +352,26 @@ def update_scannet_infos(pkl_path, out_dir):
anns = ori_info_dict['annos']
temp_data_info['axis_align_matrix'] = anns['axis_align_matrix'].tolist()
if anns['gt_num'] == 0:
    instance_list = []
else:
    num_instances = len(anns['name'])
    ignore_class_name = set()
    instance_list = []
    for instance_id in range(num_instances):
        empty_instance = get_empty_instance()
        empty_instance['bbox_3d'] = anns['gt_boxes_upright_depth'][
            instance_id].tolist()

        if anns['name'][instance_id] in METAINFO['CLASSES']:
            empty_instance['bbox_label_3d'] = METAINFO[
                'CLASSES'].index(anns['name'][instance_id])
        else:
            ignore_class_name.add(anns['name'][instance_id])
            empty_instance['bbox_label_3d'] = -1

        empty_instance = clear_instance_unused_keys(empty_instance)
        instance_list.append(empty_instance)
temp_data_info['instances'] = instance_list
temp_data_info, _ = clear_data_info_unused_keys(temp_data_info)
converted_list.append(temp_data_info)
......