"...git@developer.sourcefind.cn:OpenDAS/fairscale.git" did not exist on "56add6d5166f870cdbe210d8b5b25921d439655a"
Unverified commit 32f197e5 authored by VVsssssk, committed by GitHub

[Docs]Docs about useful_tools (#1791)

* fix usetools

* fix comments
parent 7dfaf22b
...@@ -55,81 +55,6 @@ average iter time: 1.1959 s/iter
   
## Visualization
### Results
To see the prediction results of a trained model, you can run the following command:
```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --show --show-dir ${SHOW_DIR}
```
After running this command, the plotted results, including the input data and the network outputs visualized on the input (e.g. `***_points.obj` and `***_pred.obj` in the single-modality 3D detection task), will be saved in `${SHOW_DIR}`.
To see the prediction results during evaluation, you can run the following command:
```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --eval 'mAP' --eval-options 'show=True' 'out_dir=${SHOW_DIR}'
```
After running this command, you will obtain the input data, the network outputs and the ground-truth labels visualized on the input (e.g. `***_points.obj`, `***_pred.obj`, `***_gt.obj`, `***_img.png` and `***_pred.png` in the multi-modality detection task) in `${SHOW_DIR}`. When `show` is enabled, [Open3D](http://www.open3d.org/) is used to visualize the results online. If you are running the test on a remote server without a GUI, online visualization is not supported; in that case you can set `show=False` to only save the output results in `${SHOW_DIR}`.
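For instance, a headless evaluation that only dumps the result files can be run as below (a minimal sketch derived from the command above, with the visualization window disabled):
```bash
# Headless variant: skip the online Open3D window and only write
# the visualization files to ${SHOW_DIR}
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --eval 'mAP' --eval-options 'show=False' 'out_dir=${SHOW_DIR}'
```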
For offline visualization, you have two options.
To visualize the results with the `Open3D` backend, you can run the following command:
```bash
python tools/misc/visualize_results.py ${CONFIG_FILE} --result ${RESULTS_PATH} --show-dir ${SHOW_DIR}
```
![](../../resources/open3d_visual.*)
Alternatively, you can use 3D visualization software such as [MeshLab](http://www.meshlab.net/) to open the files under `${SHOW_DIR}` and inspect the 3D detection output. Specifically, open `***_points.obj` to see the input point cloud and `***_pred.obj` to see the predicted 3D bounding boxes. This allows inference and result generation to be done on a remote server, while users can open the results on their own host with a GUI.
**Notice**: The visualization API is still somewhat unstable, as we plan to refactor these parts together with MMDetection in the future.
### Dataset
We also provide scripts to visualize datasets without running inference. You can use `tools/misc/browse_dataset.py` to show the loaded data and ground truth online and save them to disk. Currently we support single-modality 3D detection and 3D segmentation on all datasets, multi-modality 3D detection on KITTI and SUN RGB-D, as well as monocular 3D detection on nuScenes. To browse the KITTI dataset, you can run the following command:
```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/kitti-3d-3class.py --task det --output-dir ${OUTPUT_DIR} --online
```
**Notice**: Once `--output-dir` is specified, the images of the views selected by the user will be saved when pressing `_ESC_` in the Open3D window. If you don't have a monitor, you can remove the `--online` flag to only save the visualization results and browse them offline, as shown below.
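A minimal sketch of that headless variant (the same command as above, with `--online` dropped):
```shell
# Save the KITTI visualizations to ${OUTPUT_DIR} without opening an Open3D window
python tools/misc/browse_dataset.py configs/_base_/datasets/kitti-3d-3class.py --task det --output-dir ${OUTPUT_DIR}
```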
To verify data consistency and the effect of data augmentation, you can also add the `--aug` flag to visualize the data after augmentation, using the command below:
```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/kitti-3d-3class.py --task det --aug --output-dir ${OUTPUT_DIR} --online
```
If you also want to show 2D images with the 3D bounding boxes projected onto them, you need to find a config that supports multi-modality data loading, and then change the `--task` argument to `multi_modality-det`. An example is shown below:
```shell
python tools/misc/browse_dataset.py configs/mvxnet/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class.py --task multi_modality-det --output-dir ${OUTPUT_DIR} --online
```
![](../../resources/browse_dataset_multi_modality.png)
You can simply browse different datasets with different configs, e.g. visualizing the ScanNet dataset for the 3D semantic segmentation task:
```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/scannet-seg.py --task seg --output-dir ${OUTPUT_DIR} --online
```
![](../../resources/browse_dataset_seg.png)
And browse the nuScenes dataset for the monocular 3D detection task:
```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/nus-mono3d.py --task mono-det --output-dir ${OUTPUT_DIR} --online
```
![](../../resources/browse_dataset_mono.png)
 
## Model Serving
**Note**: This tool is still experimental; only SECOND is currently supported to be served with [`TorchServe`](https://pytorch.org/serve/). We will support more models in the future.
......
...@@ -173,16 +173,18 @@ def load_json_logs(json_logs):
     log_dicts = [dict() for _ in json_logs]
     for json_log, log_dict in zip(json_logs, log_dicts):
         with open(json_log, 'r') as log_file:
+            epoch = 1
             for line in log_file:
                 log = json.loads(line.strip())
-                # skip lines without `epoch` field
-                if 'epoch' not in log:
+                # skip lines only contains one key
+                if not len(log) > 1:
                     continue
-                epoch = log.pop('epoch')
                 if epoch not in log_dict:
                     log_dict[epoch] = defaultdict(list)
                 for k, v in log.items():
                     log_dict[epoch][k].append(v)
+                if 'epoch' in log.keys():
+                    epoch = log['epoch'] + 1
     return log_dicts
......
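For context, `load_json_logs` is a helper in the training-log analysis tool. A typical invocation might look like the sketch below; the script path, subcommand and flag names are assumptions based on common MMDetection3D usage, not something shown in this diff:
```shell
# Hypothetical example: plot the training loss curve from a JSON training log
# (script path, subcommand and flag names are assumed)
python tools/analysis_tools/analyze_logs.py plot_curve ${LOG_JSON} --keys loss --out loss_curve.png
```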
...@@ -2,9 +2,10 @@
 import argparse

 import torch
-from mmcv import Config, DictAction
+from mmengine import Config, DictAction

-from mmdet3d.models import build_model
+from mmdet3d.registry import MODELS
+from mmdet3d.utils import register_all_modules

 try:
     from mmcv.cnn import get_model_complexity_info
...@@ -42,7 +43,7 @@ def parse_args():
 def main():
+    register_all_modules()
     args = parse_args()

     if args.modality == 'point':
...@@ -64,21 +65,11 @@ def main():
     if args.cfg_options is not None:
         cfg.merge_from_dict(args.cfg_options)

-    model = build_model(
-        cfg.model,
-        train_cfg=cfg.get('train_cfg'),
-        test_cfg=cfg.get('test_cfg'))
+    model = MODELS.build(cfg.model)
     if torch.cuda.is_available():
         model.cuda()
     model.eval()
-
-    if hasattr(model, 'forward_dummy'):
-        model.forward = model.forward_dummy
-    else:
-        raise NotImplementedError(
-            'FLOPs counter is currently not supported for {}'.format(
-                model.__class__.__name__))

     flops, params = get_model_complexity_info(model, input_shape)
     split_line = '=' * 30
     print(f'{split_line}\nInput shape: {input_shape}\n'
......
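For reference, the FLOPs counter modified above is usually invoked directly on a config file. A sketch of such a run is shown below; the script path and the `--shape` placeholder follow common MMDetection3D conventions and are assumptions rather than something stated in this diff:
```shell
# Hypothetical example: count FLOPs and parameters for a model defined by ${CONFIG_FILE}
# (script path and --shape semantics are assumed, not confirmed by this diff)
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```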