# Customize Models

We basically categorize model components into 6 types:

- encoder: The voxel encoder and middle encoder used in voxel-based methods before the backbone, e.g., `HardVFE` and `PointPillarsScatter`.
- backbone: Usually an FCN network to extract feature maps, e.g., `ResNet`, `SECOND`.
- neck: The component between backbones and heads, e.g., `FPN`, `SECONDFPN`.
- head: The component for specific tasks, e.g., `bbox prediction` and `mask prediction`.
- RoI extractor: The part for extracting RoI features from feature maps, e.g., `Single3DRoIAwareExtractor`; it is used inside RoI heads such as `H3DRoIHead` and `PartAggregationROIHead`.
- loss: The component in heads for calculating losses, e.g., `FocalLoss`, `L1Loss`, and `GHMLoss`.
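
These components are stitched together in the model config. Below is a heavily abridged sketch of a voxel-based detector; the detector type and the specific component pairings are only illustrative, most arguments are omitted, and RoI extractors only appear in two-stage models such as the PartA2 example later on this page.

```python
# Abridged, illustrative sketch: the component pairings are not a working config.
model = dict(
    type='VoxelNet',
    voxel_encoder=dict(type='HardVFE'),               # encoder (voxel encoder)
    middle_encoder=dict(type='PointPillarsScatter'),  # encoder (middle encoder)
    backbone=dict(type='SECOND'),                     # backbone
    neck=dict(type='SECONDFPN'),                      # neck
    bbox_head=dict(                                   # head
        type='Anchor3DHead',
        loss_cls=dict(type='mmdet.FocalLoss'),        # losses live inside the head
        loss_bbox=dict(type='mmdet.SmoothL1Loss')))
```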

## Develop new components

### Add a new encoder

Here we show how to develop new components with an example of HardVFE.

#### 1. Define a new voxel encoder (e.g. HardVFE: Voxel feature encoder used in HV-SECOND)

Create a new file `mmdet3d/models/voxel_encoders/voxel_encoder.py`.

```python
import torch.nn as nn

from mmdet3d.registry import MODELS


@MODELS.register_module()
class HardVFE(nn.Module):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):  # should return a tuple
        pass
```

#### 2. Import the module

You can either add the following line to `mmdet3d/models/voxel_encoders/__init__.py`:

```python
from .voxel_encoder import HardVFE
```

or alternatively add

```python
custom_imports = dict(
    imports=['mmdet3d.models.voxel_encoders.voxel_encoder'],
    allow_failed_imports=False)
```

to the config file to avoid modifying the original code.

#### 3. Use the voxel encoder in your config file

```python
model = dict(
    ...
    voxel_encoder=dict(
        type='HardVFE',
        arg1=xxx,
        arg2=yyy),
    ...
)
```
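
To sanity-check that the new encoder is registered, you can build it from its config through the registry. This is only a minimal sketch; the argument values are placeholders matching the skeleton above, and it requires the import (or `custom_imports`) step so that registration has happened.

```python
from mmdet3d.registry import MODELS

# Placeholder arguments matching the HardVFE skeleton above.
encoder = MODELS.build(dict(type='HardVFE', arg1=1, arg2=2))  # a HardVFE instance
```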

### Add a new backbone

Here we show how to develop new components with an example of [SECOND](https://www.mdpi.com/1424-8220/18/10/3337) (Sparsely Embedded Convolutional Detection).

#### 1. Define a new backbone (e.g. SECOND)

Create a new file `mmdet3d/models/backbones/second.py`.

```python
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class SECOND(BaseModule):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):  # should return a tuple
        pass
```

#### 2. Import the module

You can either add the following line to `mmdet3d/models/backbones/__init__.py`:

```python
from .second import SECOND
```

or alternatively add

```python
custom_imports = dict(
    imports=['mmdet3d.models.backbones.second'],
    allow_failed_imports=False)
```

to the config file to avoid modifying the original code.

#### 3. Use the backbone in your config file

```python
model = dict(
    ...
    backbone=dict(
        type='SECOND',
        arg1=xxx,
        arg2=yyy),
    ...
)
```

### Add a new neck

#### 1. Define a new neck (e.g. SECONDFPN)

Create a new file `mmdet3d/models/necks/second_fpn.py`.

```python
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class SECONDFPN(BaseModule):

    def __init__(self,
                 in_channels=[128, 128, 256],
                 out_channels=[256, 256, 256],
                 upsample_strides=[1, 2, 4],
                 norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01),
                 upsample_cfg=dict(type='deconv', bias=False),
                 conv_cfg=dict(type='Conv2d', bias=False),
                 use_conv_for_no_stride=False,
                 init_cfg=None):
        pass

    def forward(self, x):
        # implementation is ignored
        pass
```

#### 2. Import the module

You can either add the following line to `mmdet3d/models/necks/__init__.py`:

```python
from .second_fpn import SECONDFPN
```

or alternatively add

```python
custom_imports = dict(
    imports=['mmdet3d.models.necks.second_fpn'],
    allow_failed_imports=False)
```

to the config file to avoid modifying the original code.

#### 3. Use the neck in your config file

```python
model = dict(
    ...
    neck=dict(
        type='SECONDFPN',
        in_channels=[64, 128, 256],
        upsample_strides=[1, 2, 4],
        out_channels=[128, 128, 128]),
    ...
)
```

### Add a new head

Here we show how to develop a new head with [PartA2 Head](https://arxiv.org/abs/1907.03670) as an example.

**Note**: The `PartA2 RoI Head` in this example is used in the second stage. For one-stage heads, please refer to the examples in `mmdet3d/models/dense_heads/`, which are more commonly used in 3D detection for autonomous driving due to their simplicity and high efficiency.
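
For reference, a one-stage head follows the same registration pattern as the encoder and backbone skeletons above. A minimal sketch (the class name `MyDenseHead` and its arguments are hypothetical, not an existing head):

```python
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class MyDenseHead(BaseModule):
    """Hypothetical one-stage head skeleton for `mmdet3d/models/dense_heads/`."""

    def __init__(self, num_classes, in_channels, init_cfg=None):
        super().__init__(init_cfg=init_cfg)

    def forward(self, x):  # predict scores and boxes directly from neck features
        pass
```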

First, add a new bbox head in `mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py`.
`PartA2 RoI Head` implements a new bbox head for object detection.
To implement a bbox head, we basically need to implement two functions of the new module, as shown below. Sometimes other related functions such as `loss` and `get_targets` are also required.

```python
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class PartA2BboxHead(BaseModule):
    """PartA2 RoI head."""

    def __init__(self,
                 num_classes,
                 seg_in_channels,
                 part_in_channels,
                 seg_conv_channels=None,
                 part_conv_channels=None,
                 merge_conv_channels=None,
                 down_conv_channels=None,
                 shared_fc_channels=None,
                 cls_channels=None,
                 reg_channels=None,
                 dropout_ratio=0.1,
                 roi_feat_size=14,
                 with_corner_loss=True,
                 bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),
                 conv_cfg=dict(type='Conv1d'),
                 norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01),
                 loss_bbox=dict(
                     type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0),
                 loss_cls=dict(
                     type='CrossEntropyLoss',
                     use_sigmoid=True,
                     reduction='none',
                     loss_weight=1.0),
                 init_cfg=None):
        super(PartA2BboxHead, self).__init__(init_cfg=init_cfg)

    def forward(self, seg_feats, part_feats):
        pass
```
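
If the bbox head also needs `get_targets` and `loss`, they are usually added as extra methods on the same class. The snippet below is only a hypothetical sketch for illustration; it is not the actual `PartA2BboxHead` API, whose real signatures can be found in the code.

```python
from mmengine.model import BaseModule


class PartA2BboxHead(BaseModule):
    """Continuation of the skeleton above; signatures are hypothetical."""

    def get_targets(self, sampling_results, rcnn_train_cfg):
        # Build per-RoI classification and regression targets from the
        # assigned and sampled proposals.
        pass

    def loss(self, cls_score, bbox_pred, rois, targets):
        # Return a dict of loss terms computed from the head outputs.
        pass
```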

Second, implement a new RoI head if necessary. We plan to inherit the new `PartAggregationROIHead` from `Base3DRoIHead`, which already implements the following functions.

```python
from mmdet.models.roi_heads import BaseRoIHead

from mmdet3d.registry import MODELS, TASK_UTILS


class Base3DRoIHead(BaseRoIHead):
    """Base class for 3d RoIHeads."""

    def __init__(self,
                 bbox_head=None,
                 bbox_roi_extractor=None,
                 mask_head=None,
                 mask_roi_extractor=None,
                 train_cfg=None,
                 test_cfg=None,
                 init_cfg=None):
        super(Base3DRoIHead, self).__init__(
            bbox_head=bbox_head,
            bbox_roi_extractor=bbox_roi_extractor,
            mask_head=mask_head,
            mask_roi_extractor=mask_roi_extractor,
            train_cfg=train_cfg,
            test_cfg=test_cfg,
            init_cfg=init_cfg)

    def init_bbox_head(self, bbox_roi_extractor: dict,
                       bbox_head: dict) -> None:
        """Initialize box head and box roi extractor.

        Args:
            bbox_roi_extractor (dict or ConfigDict): Config of box
                roi extractor.
            bbox_head (dict or ConfigDict): Config of box in box head.
        """
        self.bbox_roi_extractor = MODELS.build(bbox_roi_extractor)
        self.bbox_head = MODELS.build(bbox_head)

    def init_assigner_sampler(self):
        """Initialize assigner and sampler."""
        self.bbox_assigner = None
        self.bbox_sampler = None
        if self.train_cfg:
            if isinstance(self.train_cfg.assigner, dict):
                self.bbox_assigner = TASK_UTILS.build(self.train_cfg.assigner)
            elif isinstance(self.train_cfg.assigner, list):
                self.bbox_assigner = [
                    TASK_UTILS.build(res) for res in self.train_cfg.assigner
                ]
            self.bbox_sampler = TASK_UTILS.build(self.train_cfg.sampler)

    def init_mask_head(self):
        """Initialize mask head, skip since ``PartAggregationROIHead`` does not
        have one."""
        pass
```

The modification of `PartAggregationROIHead` mainly lies in the bbox forward logic, and it inherits the other logic from `Base3DRoIHead`.
In `mmdet3d/models/roi_heads/part_aggregation_roi_head.py`, we implement the new RoI head as follows:

```python
from typing import Dict, List, Tuple

from mmdet.models.task_modules import AssignResult, SamplingResult
from mmengine import ConfigDict
from torch import Tensor
from torch.nn import functional as F

from mmdet3d.registry import MODELS
from mmdet3d.structures import bbox3d2roi
from mmdet3d.utils import InstanceList
from ...structures.det3d_data_sample import SampleList
from .base_3droi_head import Base3DRoIHead


@MODELS.register_module()
class PartAggregationROIHead(Base3DRoIHead):
    """Part aggregation roi head for PartA2.

    Args:
        semantic_head (ConfigDict): Config of semantic head.
        num_classes (int): The number of classes.
        seg_roi_extractor (ConfigDict): Config of seg_roi_extractor.
        bbox_roi_extractor (ConfigDict): Config of part_roi_extractor.
        bbox_head (ConfigDict): Config of bbox_head.
        train_cfg (ConfigDict): Training config.
        test_cfg (ConfigDict): Testing config.
    """

    def __init__(self,
                 semantic_head: dict,
                 num_classes: int = 3,
                 seg_roi_extractor: dict = None,
                 bbox_head: dict = None,
                 bbox_roi_extractor: dict = None,
                 train_cfg: dict = None,
                 test_cfg: dict = None,
                 init_cfg: dict = None) -> None:
        super(PartAggregationROIHead, self).__init__(
            bbox_head=bbox_head,
            bbox_roi_extractor=bbox_roi_extractor,
            train_cfg=train_cfg,
            test_cfg=test_cfg,
            init_cfg=init_cfg)
        self.num_classes = num_classes
        assert semantic_head is not None
        self.init_seg_head(seg_roi_extractor, semantic_head)

    def init_seg_head(self, seg_roi_extractor: dict,
                      semantic_head: dict) -> None:
        """Initialize semantic head and seg roi extractor.

        Args:
            seg_roi_extractor (dict): Config of seg
                roi extractor.
            semantic_head (dict): Config of semantic head.
        """
        self.semantic_head = MODELS.build(semantic_head)
        self.seg_roi_extractor = MODELS.build(seg_roi_extractor)

    @property
    def with_semantic(self):
        """bool: whether the head has semantic branch"""
        return hasattr(self,
                       'semantic_head') and self.semantic_head is not None

    def predict(self,
                feats_dict: Dict,
                rpn_results_list: InstanceList,
                batch_data_samples: SampleList,
                rescale: bool = False,
                **kwargs) -> InstanceList:
        """Perform forward propagation of the roi head and predict detection
        results on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            batch_data_samples (List[:obj:`Det3DDataSample`]): The Data
                samples. It usually includes information such as
                `gt_instance_3d`, `gt_panoptic_seg_3d` and `gt_sem_seg_3d`.
            rescale (bool): If True, return boxes in original image space.
                Defaults to False.

        Returns:
            list[:obj:`InstanceData`]: Detection results of each sample
            after the post process.
            Each item usually contains following keys.

            - scores_3d (Tensor): Classification scores, has a shape
              (num_instances, )
            - labels_3d (Tensor): Labels of bboxes, has a shape
              (num_instances, ).
            - bboxes_3d (BaseInstance3DBoxes): Prediction of bboxes,
              contains a tensor with shape (num_instances, C), where
              C >= 7.
        """
        assert self.with_bbox, 'Bbox head must be implemented in PartA2.'
        assert self.with_semantic, 'Semantic head must be implemented' \
                                   ' in PartA2.'

        batch_input_metas = [
            data_samples.metainfo for data_samples in batch_data_samples
        ]
        voxels_dict = feats_dict.pop('voxels_dict')
        # TODO: Split predict semantic and bbox
        results_list = self.predict_bbox(feats_dict, voxels_dict,
                                         batch_input_metas, rpn_results_list,
                                         self.test_cfg)
        return results_list

    def predict_bbox(self, feats_dict: Dict, voxel_dict: Dict,
                     batch_input_metas: List[dict],
                     rpn_results_list: InstanceList,
                     test_cfg: ConfigDict) -> InstanceList:
        """Perform forward propagation of the bbox head and predict detection
        results on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            voxel_dict (dict): Contains information of voxels.
            batch_input_metas (list[dict], Optional): Batch image meta info.
                Defaults to None.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            test_cfg (Config): Test config.

        Returns:
            list[:obj:`InstanceData`]: Detection results of each sample
            after the post process.
            Each item usually contains following keys.

            - scores_3d (Tensor): Classification scores, has a shape
              (num_instances, )
            - labels_3d (Tensor): Labels of bboxes, has a shape
              (num_instances, ).
            - bboxes_3d (BaseInstance3DBoxes): Prediction of bboxes,
              contains a tensor with shape (num_instances, C), where
              C >= 7.
        """
        ...

    def loss(self, feats_dict: Dict, rpn_results_list: InstanceList,
             batch_data_samples: SampleList, **kwargs) -> dict:
        """Perform forward propagation and loss calculation of the detection
        roi on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            batch_data_samples (List[:obj:`Det3DDataSample`]): The Data
                samples. It usually includes information such as
                `gt_instance_3d`, `gt_panoptic_seg_3d` and `gt_sem_seg_3d`.

        Returns:
            dict[str, Tensor]: A dictionary of loss components
        """
        assert len(rpn_results_list) == len(batch_data_samples)
        losses = dict()
        batch_gt_instances_3d = []
        batch_gt_instances_ignore = []
        voxels_dict = feats_dict.pop('voxels_dict')
        for data_sample in batch_data_samples:
            batch_gt_instances_3d.append(data_sample.gt_instances_3d)
            if 'ignored_instances' in data_sample:
                batch_gt_instances_ignore.append(data_sample.ignored_instances)
            else:
                batch_gt_instances_ignore.append(None)
        if self.with_semantic:
            semantic_results = self._semantic_forward_train(
                feats_dict, voxels_dict, batch_gt_instances_3d)
            losses.update(semantic_results.pop('loss_semantic'))

        sample_results = self._assign_and_sample(rpn_results_list,
                                                 batch_gt_instances_3d)
        if self.with_bbox:
            feats_dict.update(semantic_results)
            bbox_results = self._bbox_forward_train(feats_dict, voxels_dict,
                                                    sample_results)
            losses.update(bbox_results['loss_bbox'])

        return losses
```

Here we omit the details of the other functions. Please see the [code](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/roi_heads/part_aggregation_roi_head.py) for more details.

Last, the users need to add the modules in
`mmdet3d/models/roi_heads/bbox_heads/__init__.py` and `mmdet3d/models/roi_heads/__init__.py` so that the corresponding registry can find and load them.
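
The added lines are roughly the following, one import in each `__init__.py`, following the same pattern as the earlier sections:

```python
# mmdet3d/models/roi_heads/bbox_heads/__init__.py
from .parta2_bbox_head import PartA2BboxHead

# mmdet3d/models/roi_heads/__init__.py
from .part_aggregation_roi_head import PartAggregationROIHead
```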

Alternatively, the users can add

```python
custom_imports = dict(
    imports=[
        'mmdet3d.models.roi_heads.part_aggregation_roi_head',
        'mmdet3d.models.roi_heads.bbox_heads.parta2_bbox_head'
    ],
    allow_failed_imports=False)
```

to the config file and achieve the same goal.

The config of `PartAggregationROIHead` is as follows:

```python
model = dict(
    ...
    roi_head=dict(
        type='PartAggregationROIHead',
        num_classes=3,
        semantic_head=dict(
            type='PointwiseSemanticHead',
            in_channels=16,
            extra_width=0.2,
            seg_score_thr=0.3,
            num_classes=3,
            loss_seg=dict(
                type='mmdet.FocalLoss',
                use_sigmoid=True,
                reduction='sum',
                gamma=2.0,
                alpha=0.25,
                loss_weight=1.0),
            loss_part=dict(
                type='mmdet.CrossEntropyLoss',
                use_sigmoid=True,
                loss_weight=1.0)),
        seg_roi_extractor=dict(
            type='Single3DRoIAwareExtractor',
            roi_layer=dict(
                type='RoIAwarePool3d',
                out_size=14,
                max_pts_per_voxel=128,
                mode='max')),
        bbox_roi_extractor=dict(
            type='Single3DRoIAwareExtractor',
            roi_layer=dict(
                type='RoIAwarePool3d',
                out_size=14,
                max_pts_per_voxel=128,
                mode='avg')),
        bbox_head=dict(
            type='PartA2BboxHead',
            num_classes=3,
            seg_in_channels=16,
            part_in_channels=4,
            seg_conv_channels=[64, 64],
            part_conv_channels=[64, 64],
            merge_conv_channels=[128, 128],
            down_conv_channels=[128, 256],
            bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),
            shared_fc_channels=[256, 512, 512, 512],
            cls_channels=[256, 256],
            reg_channels=[256, 256],
            dropout_ratio=0.1,
            roi_feat_size=14,
            with_corner_loss=True,
            loss_bbox=dict(
                type='mmdet.SmoothL1Loss',
                beta=1.0 / 9.0,
                reduction='sum',
                loss_weight=1.0),
            loss_cls=dict(
                type='mmdet.CrossEntropyLoss',
                use_sigmoid=True,
                reduction='sum',
                loss_weight=1.0))),
    ...
)
```

Since MMDetection 2.0, the config system has supported config inheritance, so that users can focus on their modifications.
The second stage of the PartA2 head mainly uses a new `PartAggregationROIHead` and a new `PartA2BboxHead`; their arguments are set according to the `__init__` function of each module.
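
For example, a downstream config can inherit an existing PartA2 config and only override the fields it changes; nested dictionaries are merged recursively. The base file path below is illustrative:

```python
_base_ = ['./parta2_base_config.py']  # illustrative path to an existing PartA2 config

model = dict(
    roi_head=dict(
        bbox_head=dict(
            dropout_ratio=0.2)))  # only the overridden fields need to be listed
```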

### Add a new loss

Assume you want to add a new loss `MyLoss` for bounding box regression.
To add a new loss function, the users need to implement it in `mmdet3d/models/losses/my_loss.py`.
The decorator `weighted_loss` enables the loss to be weighted for each element.

```python
import torch
import torch.nn as nn
from mmdet.models.losses.utils import weighted_loss

from mmdet3d.registry import MODELS


@weighted_loss
def my_loss(pred, target):
    assert pred.size() == target.size() and target.numel() > 0
    loss = torch.abs(pred - target)
    return loss


@MODELS.register_module()
class MyLoss(nn.Module):

    def __init__(self, reduction='mean', loss_weight=1.0):
        super(MyLoss, self).__init__()
        self.reduction = reduction
        self.loss_weight = loss_weight

    def forward(self,
                pred,
                target,
                weight=None,
                avg_factor=None,
                reduction_override=None):
        assert reduction_override in (None, 'none', 'mean', 'sum')
        reduction = (
            reduction_override if reduction_override else self.reduction)
        loss_bbox = self.loss_weight * my_loss(
            pred, target, weight, reduction=reduction, avg_factor=avg_factor)
        return loss_bbox
```
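
Thanks to the decorator, `my_loss` also accepts an optional `weight` that is broadcast against the element-wise loss, plus `reduction` and `avg_factor` arguments. A quick sketch of how it can be called (assuming `my_loss` from the snippet above is importable):

```python
import torch

pred = torch.rand(4, 7)
target = torch.rand(4, 7)
weight = torch.tensor([1., 1., 0., 1.]).unsqueeze(-1)  # mask out the third sample

# Element-wise |pred - target|, multiplied by `weight`, then mean-reduced.
loss = my_loss(pred, target, weight)
```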

Then the users need to add it to `mmdet3d/models/losses/__init__.py`.

```python
from .my_loss import MyLoss, my_loss
```

Alternatively, you can add

```python
custom_imports = dict(
    imports=['mmdet3d.models.losses.my_loss'],
    allow_failed_imports=False)
```

to the config file and achieve the same goal.

To use it, modify the corresponding `loss_xxx` field.
Since `MyLoss` is for bounding box regression, you need to modify the `loss_bbox` field in the head.

```python
loss_bbox=dict(type='MyLoss', loss_weight=1.0)
```
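
For instance, inside a head configuration it would sit roughly as follows (the head type is illustrative and other fields are omitted; any head that accepts a `loss_bbox` config works the same way):

```python
model = dict(
    bbox_head=dict(
        type='Anchor3DHead',  # illustrative head type
        loss_bbox=dict(type='MyLoss', loss_weight=1.0)))
```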