Commit 7fda1f66 authored by jshilong, committed by ChaimZhu

[Fix]Fix bugs and remove show related codes in dataset

parent ff1e5b4e
# Structure Aware Single-stage 3D Object Detection from Point Cloud
> [Structure Aware Single-stage 3D Object Detection from Point Cloud](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.pdf)
<!-- [ALGORITHM] -->
@@ -9,29 +9,29 @@ We list some potential troubles encountered by users and developers, along with
The required versions of MMCV, MMDetection and MMSegmentation for different versions of MMDetection3D are as below. Please install the correct version of MMCV, MMDetection and MMSegmentation to avoid installation issues.
| MMDetection3D version | MMDetection version      | MMSegmentation version  | MMCV version                |
| :-------------------: | :----------------------: | :---------------------: | :-------------------------: |
| master                | mmdet>=2.24.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.6.0  |
| v1.0.0rc3             | mmdet>=2.24.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.6.0  |
| v1.0.0rc2             | mmdet>=2.24.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.6.0  |
| v1.0.0rc1             | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.5.0  |
| v1.0.0rc0             | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
| 0.18.1                | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
| 0.18.0                | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
| 0.17.3                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.17.2                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.17.1                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.17.0                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.16.0                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.15.0                | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
| 0.14.0                | mmdet>=2.10.0, \<=2.11.0 | mmseg==0.14.0           | mmcv-full>=1.3.1, \<=1.4.0  |
| 0.13.0                | mmdet>=2.10.0, \<=2.11.0 | Not required            | mmcv-full>=1.2.4, \<=1.4.0  |
| 0.12.0                | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.2.4, \<=1.4.0  |
| 0.11.0                | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.2.4, \<=1.3.0  |
| 0.10.0                | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.2.4, \<=1.3.0  |
| 0.9.0                 | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.2.4, \<=1.3.0  |
| 0.8.0                 | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.1.5, \<=1.3.0  |
| 0.7.0                 | mmdet>=2.5.0, \<=2.11.0  | Not required            | mmcv-full>=1.1.5, \<=1.3.0  |
| 0.6.0                 | mmdet>=2.4.0, \<=2.11.0  | Not required            | mmcv-full>=1.1.3, \<=1.2.0  |
| 0.5.0                 | 2.3.0                    | Not required            | mmcv-full==1.0.5            |
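As a sanity check against the table above, installed versions can be compared to the required ranges programmatically. A minimal sketch (the `in_range` helper and the `MASTER_REQUIREMENTS` dict are ours, with the master-branch ranges copied from the table; it handles plain numeric versions only, not rc/dev suffixes):

```python
def _v(s):
    """'1.4.8' -> (1, 4, 8), so versions compare numerically per component."""
    return tuple(int(x) for x in s.split('.'))


def in_range(version, low, high):
    """True if low <= version <= high under component-wise comparison."""
    return _v(low) <= _v(version) <= _v(high)


# Requirements for the master branch, copied from the table above.
MASTER_REQUIREMENTS = {
    'mmdet': ('2.24.0', '3.0.0'),
    'mmseg': ('0.20.0', '1.0.0'),
    'mmcv-full': ('1.4.8', '1.6.0'),
}
```

For real version strings with suffixes such as `1.0.0rc3`, `packaging.version.parse` would be the more robust comparison.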
- If you face the error shown below when importing open3d:
# Prerequisites

In this section we demonstrate how to prepare an environment with PyTorch.

MMDetection3D works on Linux, Windows (experimental support) and macOS and requires the following packages:
@@ -40,6 +41,7 @@ conda install pytorch torchvision cpuonly -c pytorch
We recommend that users follow our best practices to install MMDetection3D. However, the whole process is highly customizable. See [Customize Installation](#customize-installation) section for more information.

## Best Practices

Assuming that you already have CUDA 11.0 installed, here is a full script for quick installation of MMDetection3D with conda.
Otherwise, you should refer to the step-by-step installation instructions in the next section.
@@ -57,7 +59,6 @@ pip install -e .
**Step 1.** Install [MMDetection](https://github.com/open-mmlab/mmdetection).

```shell
pip install mmdet
```
@@ -103,7 +104,7 @@ pip install -v -e . # or "python setup.py develop"
Note:

1. The git commit id will be written to the version number with step d, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
   It is recommended that you run step d each time you pull some updates from GitHub. If C++/CUDA codes are modified, then this step is compulsory.

> Important: Be sure to remove the `./build` folder if you reinstall mmdet with a different CUDA/PyTorch version.
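When automating reinstalls, the cleanup in the note above can be scripted. A hedged sketch (the `clean_build` helper is ours; it mirrors `rm -rf ./build` run from the repository root):

```python
import shutil
from pathlib import Path


def clean_build(repo_root='.'):
    """Remove the stale `./build` folder before reinstalling.

    Safe to call even when the folder does not exist (``ignore_errors=True``
    makes the removal a no-op in that case). Returns True once it is gone.
    """
    build_dir = Path(repo_root) / 'build'
    shutil.rmtree(build_dir, ignore_errors=True)
    return not build_dir.exists()
```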
@@ -116,7 +117,7 @@ It is recommended that you run step d each time you pull some updates from githu
2. Following the above instructions, MMDetection3D is installed in `dev` mode; any local modifications made to the code will take effect without the need to reinstall it (unless you submit some commits and want to update the version number).
3. If you would like to use `opencv-python-headless` instead of `opencv-python`,
   you can install it before installing MMCV.
4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
@@ -142,7 +143,6 @@ you can install it before installing MMCV.
5. The code cannot be built for a CPU-only environment (where CUDA isn't available) for now.

## Verification

### Verify with point cloud demo
@@ -160,7 +160,7 @@ python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_secon
```

If you want to input a `ply` file, you can use the following function and convert it to `bin` format. Then you can use the converted `bin` file to generate demo.
Note that you need to install `pandas` and `plyfile` before using this script. This function can also be used for data preprocessing for training `ply data`.

```python
import numpy as np
@@ -206,6 +206,7 @@ More demos about single/multi-modality and indoor/outdoor 3D detection can be fo
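The conversion function itself is truncated in the diff above. As an illustration of the same idea, here is a dependency-free sketch that handles ASCII PLY files only (the function name, and parsing via `struct` instead of `pandas`/`plyfile`, are our assumptions, not the project's script):

```python
import struct


def convert_ascii_ply_to_bin(input_path, output_path, fields=('x', 'y', 'z')):
    """Convert an ASCII PLY point cloud to a flat little-endian float32 file.

    Assumes a well-formed, vertex-only ASCII PLY; binary PLY files need the
    `plyfile` package mentioned in the docs above.
    """
    with open(input_path) as f:
        lines = f.read().splitlines()
    assert lines[0].strip() == 'ply', 'not a PLY file'
    props = []          # vertex property names, in file order
    n_vertices = 0
    body_start = 0
    for i, line in enumerate(lines[1:], start=1):
        tokens = line.split()
        if tokens[:2] == ['element', 'vertex']:
            n_vertices = int(tokens[2])
        elif tokens[0] == 'property':
            props.append(tokens[-1])          # e.g. 'property float x' -> 'x'
        elif tokens[0] == 'end_header':
            body_start = i + 1
            break
    idx = [props.index(name) for name in fields]   # columns to keep
    with open(output_path, 'wb') as out:
        for line in lines[body_start:body_start + n_vertices]:
            values = line.split()
            # '<%df' = little-endian float32, one per selected field
            out.write(struct.pack('<%df' % len(idx),
                                  *(float(values[j]) for j in idx)))
```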
## Customize Installation

### CUDA Versions

When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:

- For Ampere-based NVIDIA GPUs, such as GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
@@ -229,8 +230,6 @@ For example, the following command install mmcv-full built for PyTorch 1.10.x an
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```
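The find-links URL above follows a `cu<CUDA>/torch<major.minor>` pattern. A small sketch that assembles it for other combinations (the URL scheme is generalized from this single example, so verify the target index actually exists before relying on it):

```python
def mmcv_index_url(cuda_version: str, torch_version: str) -> str:
    """Build the mmcv-full find-links URL for a CUDA/PyTorch combination.

    Follows the pattern of the command above, e.g. ('11.3', '1.10.0') ->
    .../dist/cu113/torch1.10/index.html. The scheme is an assumption
    generalized from that one example.
    """
    cu = 'cu' + cuda_version.replace('.', '')                  # '11.3' -> 'cu113'
    torch = 'torch' + '.'.join(torch_version.split('.')[:2])   # '1.10.0' -> 'torch1.10'
    return f'https://download.openmmlab.com/mmcv/dist/{cu}/{torch}/index.html'
```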
### Using MMDetection3D with Docker

We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/master/docker/Dockerfile) to build an image.
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import tempfile
import warnings
from os import path as osp
from typing import Callable, List, Optional, Union
@@ -11,8 +9,6 @@ from mmengine.dataset import BaseDataset
from mmdet3d.datasets import DATASETS
from ..core.bbox import get_box_type
from .pipelines import Compose
from .utils import extract_result_dict, get_loading_pipeline
@DATASETS.register_module()
@@ -153,7 +149,7 @@ class Det3DDataset(BaseDataset):
        return ann_info

    def parse_ann_info(self, info: dict) -> Optional[dict]:
        """Process the `instances` in data info to `ann_info`

        In `Custom3DDataset`, we simply concatenate all the field
@@ -165,7 +161,7 @@ class Det3DDataset(BaseDataset):
            info (dict): Info dict.

        Returns:
            dict | None: Processed `ann_info`
        """
        # add s or gt prefix for most keys after concat
        name_mapping = {
@@ -179,12 +175,18 @@ class Det3DDataset(BaseDataset):
        }
        instances = info['instances']
        # empty gt
        if len(instances) == 0:
            return None
        else:
            keys = list(instances[0].keys())
            ann_info = dict()
            for ann_name in keys:
                temp_anns = [item[ann_name] for item in instances]
                if 'label' in ann_name:
                    temp_anns = [
                        self.label_mapping[item] for item in temp_anns
                    ]
                temp_anns = np.array(temp_anns)
                if ann_name in name_mapping:
                    ann_name = name_mapping[ann_name]
@@ -257,138 +259,3 @@ class Det3DDataset(BaseDataset):
                example['data_sample'].gt_instances_3d.labels_3d) == 0:
            return None
        return example
    def format_results(self,
                       outputs,
                       pklfile_prefix=None,
                       submission_prefix=None):
        """Format the results to pkl file.

        Args:
            outputs (list[dict]): Testing results of the dataset.
            pklfile_prefix (str): The prefix of pkl files. It includes
                the file path and the prefix of filename, e.g., "a/b/prefix".
                If not specified, a temp file will be created. Default: None.

        Returns:
            tuple: (outputs, tmp_dir), outputs is the detection results,
                tmp_dir is the temporal directory created for saving json
                files when ``jsonfile_prefix`` is not specified.
        """
        if pklfile_prefix is None:
            tmp_dir = tempfile.TemporaryDirectory()
            pklfile_prefix = osp.join(tmp_dir.name, 'results')
        out = f'{pklfile_prefix}.pkl'
        mmcv.dump(outputs, out)
        return outputs, tmp_dir

    def evaluate(self,
                 results,
                 metric=None,
                 iou_thr=(0.25, 0.5),
                 logger=None,
                 show=False,
                 out_dir=None,
                 pipeline=None):
        """Evaluate.

        Evaluation in indoor protocol.

        Args:
            results (list[dict]): List of results.
            metric (str | list[str], optional): Metrics to be evaluated.
                Defaults to None.
            iou_thr (list[float]): AP IoU thresholds. Defaults to (0.25, 0.5).
            logger (logging.Logger | str, optional): Logger used for printing
                related information during evaluation. Defaults to None.
            show (bool, optional): Whether to visualize.
                Default: False.
            out_dir (str, optional): Path to save the visualization results.
                Default: None.
            pipeline (list[dict], optional): raw data loading for showing.
                Default: None.

        Returns:
            dict: Evaluation results.
        """
        from mmdet3d.core.evaluation import indoor_eval
        assert isinstance(
            results, list), f'Expect results to be list, got {type(results)}.'
        assert len(results) > 0, 'Expect length of results > 0.'
        assert len(results) == len(self.data_infos)
        assert isinstance(
            results[0], dict
        ), f'Expect elements in results to be dict, got {type(results[0])}.'
        gt_annos = [info['annos'] for info in self.data_infos]
        label2cat = {i: cat_id for i, cat_id in enumerate(self.CLASSES)}
        ret_dict = indoor_eval(
            gt_annos,
            results,
            iou_thr,
            label2cat,
            logger=logger,
            box_type_3d=self.box_type_3d,
            box_mode_3d=self.box_mode_3d)
        if show:
            self.show(results, out_dir, pipeline=pipeline)
        return ret_dict

    # TODO: check where this method is used
    def _build_default_pipeline(self):
        """Build the default pipeline for this dataset."""
        raise NotImplementedError('_build_default_pipeline is not implemented '
                                  f'for dataset {self.__class__.__name__}')

    # TODO: check where this method is used
    def _get_pipeline(self, pipeline):
        """Get data loading pipeline in self.show/evaluate function.

        Args:
            pipeline (list[dict]): Input pipeline. If None is given,
                get from self.pipeline.
        """
        if pipeline is None:
            if not hasattr(self, 'pipeline') or self.pipeline is None:
                warnings.warn(
                    'Use default pipeline for data loading, this may cause '
                    'errors when data is on ceph')
                return self._build_default_pipeline()
            loading_pipeline = get_loading_pipeline(self.pipeline.transforms)
            return Compose(loading_pipeline)
        return Compose(pipeline)

    # TODO: check where this method is used
    def _extract_data(self, index, pipeline, key, load_annos=False):
        """Load data using input pipeline and extract data according to key.

        Args:
            index (int): Index for accessing the target data.
            pipeline (:obj:`Compose`): Composed data loading pipeline.
            key (str | list[str]): One single or a list of data key.
            load_annos (bool): Whether to load data annotations.
                If True, need to set self.test_mode as False before loading.

        Returns:
            np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]:
                A single or a list of loaded data.
        """
        assert pipeline is not None, 'data loading pipeline is not provided'
        # when we want to load ground-truth via pipeline (e.g. bbox, seg mask)
        # we need to set self.test_mode as False so that we have 'annos'
        if load_annos:
            original_test_mode = self.test_mode
            self.test_mode = False
        input_dict = self.get_data_info(index)
        self.pre_pipeline(input_dict)
        example = pipeline(input_dict)

        # extract data items according to keys
        if isinstance(key, str):
            data = extract_result_dict(example, key)
        else:
            data = [extract_result_dict(example, k) for k in key]
        if load_annos:
            self.test_mode = original_test_mode
        return data
@@ -367,7 +367,9 @@ class ObjectSample(BaseTransform):
        gt_labels_3d = input_dict['gt_labels_3d']
        if self.use_ground_plane and 'plane' in input_dict['ann_info']:
            ground_plane = input_dict['plane']
            assert ground_plane is not None, '`use_ground_plane` is True ' \
                'but find plane is None'
            input_dict['plane'] = ground_plane
        else:
            ground_plane = None
@@ -62,7 +62,7 @@ class ScanNetDataset(Det3DDataset):
                 metainfo: dict = None,
                 data_prefix: dict = dict(
                     pts='points',
                     pts_instance_mask='instance_mask',
                     pts_semantic_mask='semantic_mask'),
                 pipeline: List[Union[dict, Callable]] = [],
                 modality=dict(use_camera=False, use_lidar=True),
@@ -116,12 +116,6 @@ class ScanNetDataset(Det3DDataset):
            dict: Data information that will be passed to the data
                preprocessing pipelines. It includes the following keys:
        """
        # TODO: whether all depth modality is pts ?
        if self.modality['use_lidar']:
            info['lidar_points']['lidar_path'] = \
                osp.join(
                    self.data_prefix.get('pts', ''),
                    info['lidar_points']['lidar_path'])
        info['axis_align_matrix'] = self._get_axis_align_matrix(info)
        info['pts_instance_mask_path'] = osp.join(
            self.data_prefix.get('pts_instance_mask', ''),
@@ -143,7 +137,12 @@ class ScanNetDataset(Det3DDataset):
            dict: Processed `ann_info`
        """
        ann_info = super().parse_ann_info(info)
        # empty gt
        if ann_info is None:
            ann_info = dict()
            ann_info['gt_bboxes_3d'] = np.zeros((0, 6), dtype=np.float32)
            ann_info['gt_labels_3d'] = np.zeros((0, ), dtype=np.int64)
        # to target box structure
        ann_info['gt_bboxes_3d'] = DepthInstance3DBoxes(
            ann_info['gt_bboxes_3d'],
            box_dim=ann_info['gt_bboxes_3d'].shape[-1],
@@ -352,7 +352,9 @@ def update_scannet_infos(pkl_path, out_dir):
    anns = ori_info_dict['annos']
    temp_data_info['axis_align_matrix'] = anns['axis_align_matrix'].tolist()
    if anns['gt_num'] == 0:
        instance_list = []
    else:
        num_instances = len(anns['name'])
        ignore_class_name = set()
        instance_list = []
@@ -362,8 +364,8 @@ def update_scannet_infos(pkl_path, out_dir):
                instance_id].tolist()
            if anns['name'][instance_id] in METAINFO['CLASSES']:
                empty_instance['bbox_label_3d'] = METAINFO[
                    'CLASSES'].index(anns['name'][instance_id])
            else:
                ignore_class_name.add(anns['name'][instance_id])
                empty_instance['bbox_label_3d'] = -1