OpenDAS / dcnv3 · Commits

Commit 41b18fd8, authored Jan 06, 2025 by zhe chen
Parent: ff20ea39

Use pre-commit to reformat code
Showing 20 changed files with 362 additions and 421 deletions (390 files changed in the full commit).
autonomous_driving/Online-HD-Map-Construction/src/models/heads/polyline_generator.py  (+56, -60)
autonomous_driving/Online-HD-Map-Construction/src/models/losses/__init__.py  (+0, -2)
autonomous_driving/Online-HD-Map-Construction/src/models/losses/detr_loss.py  (+9, -10)
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/__init__.py  (+0, -1)
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/base_mapper.py  (+10, -11)
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/vectormapnet.py  (+41, -50)
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/__init__.py  (+0, -2)
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/base_transformer.py  (+2, -14)
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/deformable_transformer.py  (+24, -27)
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/fp16_dattn.py  (+30, -36)
autonomous_driving/Online-HD-Map-Construction/tools/evaluate_submission.py  (+13, -8)
autonomous_driving/Online-HD-Map-Construction/tools/mmdet_test.py  (+1, -2)
autonomous_driving/Online-HD-Map-Construction/tools/mmdet_train.py  (+0, -1)
autonomous_driving/Online-HD-Map-Construction/tools/test.py  (+16, -18)
autonomous_driving/Online-HD-Map-Construction/tools/train.py  (+20, -21)
autonomous_driving/Online-HD-Map-Construction/tools/visualization/renderer.py  (+43, -36)
autonomous_driving/Online-HD-Map-Construction/tools/visualization/visualize.py  (+29, -26)
autonomous_driving/README.md  (+16, -18)
autonomous_driving/occupancy_prediction/CITATION.cff  (+1, -1)
autonomous_driving/occupancy_prediction/CODE_OF_CONDUCT.md  (+51, -77)
autonomous_driving/Online-HD-Map-Construction/src/models/heads/polyline_generator.py (view file @ 41b18fd8)

import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.runner import auto_fp16, force_fp32
from mmdet.models import HEADS
from torch.distributions.categorical import Categorical

from .detgen_utils.causal_trans import (CausalTransformerDecoder,
                                        CausalTransformerDecoderLayer)
from .detgen_utils.utils import (dequantize_verts,
                                 generate_square_subsequent_mask,
                                 quantize_verts, top_k_logits, top_p_logits)


@HEADS.register_module(force=True)
class PolylineGenerator(nn.Module):
...
@@ -63,7 +63,7 @@ class PolylineGenerator(nn.Module):
        self.fp16_enabled = False
        self.coord_dim = coord_dim  # if we use xyz else 2 when we use xy
        self.kp_coord_dim = coord_dim if coord_dim == 2 else 2  # XXX
        self.register_buffer('canvas_size', torch.tensor(canvas_size))

        # initialize the model
...
@@ -126,7 +126,7 @@ class PolylineGenerator(nn.Module):
        # Discrete vertex value embeddings
        vert_embeddings = self.vertex_embed(bbox)
        return vert_embeddings + (bbox_embedding + coord_embeddings)[None]

    def _prepare_context(self, batch, context):
        """Prepare class label and vertex context."""
...
@@ -189,7 +189,7 @@ class PolylineGenerator(nn.Module):
        # Aggregate embeddings
        embeddings = vert_embeddings + \
            (coord_embeddings + pos_embeddings)[None]
        embeddings = torch.cat([condition_embedding, embeddings], dim=1)
        return embeddings
...
@@ -218,24 +218,22 @@ class PolylineGenerator(nn.Module):
        sizes = [size, polyline_length.max()]
        polyline_logits = []
        for c_idx, size in zip([c1, c2], sizes):
            new_batch = assign_batch(batch, c_idx, size)
            _poly_logits = self._forward_train(new_batch, context, **kwargs)
            polyline_logits.append(_poly_logits)

        # maybe imporve the speed
        for i, (_poly_logits, size) in enumerate(zip(polyline_logits, sizes)):
            if size < sizes[1]:
                _poly_logits = F.pad(
                    _poly_logits, (0, 0, 0, sizes[1] - size), 'constant', 0)
            polyline_logits[i] = _poly_logits
        polyline_logits = torch.cat(polyline_logits, 0)
        polyline_logits = polyline_logits[revert_idx]

        cat_dist = Categorical(logits=polyline_logits)
        return {'polylines': cat_dist}

    def forward_train(self, batch: dict, context: dict, **kwargs):
        """
...
@@ -247,7 +245,7 @@ class PolylineGenerator(nn.Module):
        if False:
            polyline_logits = self._forward_train(batch, context, **kwargs)
            cat_dist = Categorical(logits=polyline_logits)
            return {'polylines': cat_dist}
        else:
            return self.sperate_forward(batch, context, **kwargs)
...
@@ -271,7 +269,7 @@ class PolylineGenerator(nn.Module):
        return logits

    @force_fp32(apply_to=('global_context_embedding',
                          'sequential_context_embeddings', 'cache'))
    def body(self,
             seqs,
             global_context_embedding=None,
...
@@ -314,28 +312,27 @@ class PolylineGenerator(nn.Module):
        decoder_outputs = decoder_outputs.transpose(0, 1)
        # since we only need the predict seq
        decoder_outputs = decoder_outputs[:, condition_len - 1:]

        # Get logits and optionally process for sampling
        logits = self._project_to_logits(decoder_outputs)

        # y mask
        _vert_mask = torch.arange(logits.shape[-1], device=logits.device)
        vertices_mask_y = (_vert_mask < self.canvas_size[1] + 1)
        vertices_mask_y[0] = False  # y position doesn't have stop sign
        logits[:, 1::self.coord_dim] = logits[:, 1::self.coord_dim] * \
            vertices_mask_y - ~vertices_mask_y * 1e9

        if self.coord_dim > 2:
            # z mask
            _vert_mask = torch.arange(logits.shape[-1], device=logits.device)
            vertices_mask_z = (_vert_mask < self.canvas_size[2] + 1)
            vertices_mask_z[0] = False  # y position doesn't have stop sign
            logits[:, 2::self.coord_dim] = logits[:, 2::self.coord_dim] * \
                vertices_mask_z - ~vertices_mask_z * 1e9

        logits = logits / temperature
        logits = top_k_logits(logits, top_k)
        logits = top_p_logits(logits, top_p)

        if return_logits:
...
@@ -352,7 +349,7 @@ class PolylineGenerator(nn.Module):
        mask = gt['polyline_masks']
        loss = -torch.sum(
            pred['polylines'].log_prob(gt['polylines']) * mask * weight) \
            / weight.sum()
        return {'seq': loss}
...
@@ -395,16 +392,15 @@ class PolylineGenerator(nn.Module):
        samples = torch.empty(
            [batch_size, 0], dtype=torch.int32, device=device)
        max_sample_length = max_sample_length or self.max_seq_length
        seq_len = max_sample_length * self.coord_dim + 1

        cache = None
        decoded_tokens = \
            torch.zeros((batch_size, seq_len),
                        device=device, dtype=torch.long)
        remain_idx = torch.arange(batch_size, device=device)
        for i in range(seq_len):
            # While-loop body for autoregression calculation.
            pred_dist, cache = self.body(
                samples,
...
@@ -417,22 +413,22 @@ class PolylineGenerator(nn.Module):
                is_training=False)
            samples = pred_dist.sample()
            decoded_tokens[remain_idx, i] = samples[:, -1]

            # Stopping conditions for autoregressive calculation.
            if not (decoded_tokens[:, :i + 1] != 0).all(-1).any():
                break

            # update state, check the new position is zero.
            valid_idx = (samples[:, -1] != 0).nonzero(as_tuple=True)[0]
            remain_idx = remain_idx[valid_idx]
            cache = cache[:, :, valid_idx]
            global_context = global_context[valid_idx]
            seq_context = seq_context[valid_idx]
            samples = samples[valid_idx]

        # decoded_tokens = torch.cat(decoded_tokens,dim=1)
        decoded_tokens = decoded_tokens[:, :i + 1]
        outputs = self.post_process(decoded_tokens, seq_len,
                                    device, only_return_complete)
...
@@ -455,7 +451,7 @@ class PolylineGenerator(nn.Module):
        _polyline_mask = torch.arange(sample_seq_length)[None].to(device)

        # Get largest stopping point for incomplete samples.
        valid_polyline_len = torch.full_like(polyline[:, 0], sample_seq_length)
        zero_inds = (polyline == 0).type(torch.int32).argmax(-1)
        # Real length
...
@@ -463,7 +459,7 @@ class PolylineGenerator(nn.Module):
        polyline_mask = _polyline_mask < valid_polyline_len[:, None]
        # Mask faces beyond stopping token with zeros
        polyline = polyline * polyline_mask

        # Pad to maximum size with zeros
        pad_size = max_seq_len - sample_seq_length
...
@@ -486,25 +482,25 @@ class PolylineGenerator(nn.Module):
        }
        return outputs


def find_best_sperate_plan(idx, array):
    h = array[-1] - array[idx]
    w = idx
    cost = h * w
    return cost


def get_chunk_idx(polyline_length):
    _polyline_length, polyline_length_idx = torch.sort(polyline_length)
    costs = []
    for i in range(len(_polyline_length)):
        cost = find_best_sperate_plan(i, _polyline_length)
        costs.append(cost)
    seperate_point = torch.stack(costs).argmax()
    chunk1 = polyline_length_idx[:seperate_point + 1]
    chunk2 = polyline_length_idx[seperate_point + 1:]
    revert_idx = torch.argsort(polyline_length_idx)
...
@@ -517,9 +513,9 @@ def assign_bev(feat, idx):

def assign_batch(batch, idx, size):
    new_batch = {}
    for k, v in batch.items():
        new_batch[k] = v[idx]
        if new_batch[k].ndim > 1:
            new_batch[k] = new_batch[k][:, :size]
    return new_batch
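The `get_chunk_idx` / `find_best_sperate_plan` helpers above split a batch of variable-length polylines into a "short" chunk and a "long" chunk so less padding is wasted, and `sperate_forward` then runs `_forward_train` once per chunk and re-pads with `F.pad` before concatenating. A minimal standalone sketch of the same heuristic, using plain Python lists in place of torch tensors (nothing from the repo is imported; names mirror the diff but the list-based variant is illustrative):

```python
# Sketch of the chunk-splitting heuristic from the diff above, tensor-free.

def find_best_sperate_plan(idx, array):
    # `array` is sorted ascending. Splitting after position `idx` saves
    # roughly (gap to the longest sequence) * (number of shorter sequences)
    # cells of padding versus padding everything to the maximum length.
    h = array[-1] - array[idx]   # length gap to the longest sequence
    w = idx                      # how many shorter sequences benefit
    return h * w                 # padded cells saved by this split

def get_chunk_idx(polyline_length):
    order = sorted(range(len(polyline_length)), key=lambda i: polyline_length[i])
    lengths = [polyline_length[i] for i in order]
    costs = [find_best_sperate_plan(i, lengths) for i in range(len(lengths))]
    sep = max(range(len(costs)), key=lambda i: costs[i])  # argmax, as in the diff
    chunk1 = order[:sep + 1]   # indices of the shorter polylines
    chunk2 = order[sep + 1:]   # indices of the longer polylines
    return chunk1, chunk2

chunk1, chunk2 = get_chunk_idx([3, 10, 2, 9, 4])
print(chunk1, chunk2)  # → [2, 0, 4] [3, 1]
```

The argmax picks the split with the largest padding saving, which here separates the three short polylines (lengths 2, 3, 4) from the two long ones (9, 10); `torch.argsort(polyline_length_idx)` in the diff is the `revert_idx` needed to restore the original batch order afterwards.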
autonomous_driving/Online-HD-Map-Construction/src/models/losses/__init__.py (view file @ 41b18fd8)

from .detr_loss import LinesLoss, MasksLoss, LenLoss
autonomous_driving/Online-HD-Map-Construction/src/models/losses/detr_loss.py (view file @ 41b18fd8)

import mmcv
import torch
from mmdet.models.builder import LOSSES
from mmdet.models.losses import l1_loss
from mmdet.models.losses.utils import weighted_loss
from torch import nn as nn
from torch.nn import functional as F


@mmcv.jit(derivate=True, coderize=True)
...
@@ -77,7 +75,7 @@ class LinesLoss(nn.Module):
            loss = smooth_l1_loss(
                pred, target, weight, reduction=reduction,
                avg_factor=avg_factor, beta=self.beta)
        return loss * self.loss_weight


@mmcv.jit(derivate=True, coderize=True)
...
@@ -123,7 +121,8 @@ class MasksLoss(nn.Module):
            loss = bce(pred, target, weight, reduction=reduction,
                       avg_factor=avg_factor)
        return loss * self.loss_weight


@mmcv.jit(derivate=True, coderize=True)
@weighted_loss
...
@@ -167,4 +166,4 @@ class LenLoss(nn.Module):
            loss = ce(pred, target, weight, reduction=reduction,
                      avg_factor=avg_factor)
        return loss * self.loss_weight
\ No newline at end of file
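All three losses in this file follow the same mmdet convention: an elementwise criterion is multiplied by a per-element `weight`, averaged over `avg_factor`, and finally scaled by a configured `loss_weight`. A rough pure-Python sketch of that reduction pattern (a stand-in for mmdet's `weighted_loss` wrapper, not its actual API; `weighted_smooth_l1` is a hypothetical name):

```python
# Stand-in for the weight / avg_factor / loss_weight pattern shared by
# LinesLoss, MasksLoss and LenLoss above. Scalars instead of tensors.

def smooth_l1(x, beta=1.0):
    # Quadratic near zero, linear beyond beta (the Huber-style smooth L1).
    d = abs(x)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def weighted_smooth_l1(pred, target, weight, avg_factor, loss_weight=1.0, beta=1.0):
    # Elementwise loss * weight, summed, averaged over avg_factor,
    # then scaled by the configured loss_weight.
    total = sum(smooth_l1(p - t, beta) * w for p, t, w in zip(pred, target, weight))
    return loss_weight * total / avg_factor

print(weighted_smooth_l1([0.0, 2.0], [0.0, 0.0], [1.0, 1.0], avg_factor=2.0))  # → 0.75
```

`avg_factor` lets the caller normalize by, e.g., the number of positive samples rather than the tensor size, which is why the classes above thread it through to the underlying criterion instead of using a plain mean.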
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/__init__.py (view file @ 41b18fd8)

from .vectormapnet import VectorMapNet
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/base_mapper.py (view file @ 41b18fd8)

from abc import ABCMeta

import torch.nn as nn
from mmcv.runner import auto_fp16
from mmcv.utils import print_log
from mmdet.utils import get_root_logger
from mmdet3d.models.builder import DETECTORS

MAPPERS = DETECTORS


class BaseMapper(nn.Module, metaclass=ABCMeta):
    """Base class for mappers."""
...
@@ -40,7 +39,7 @@ class BaseMapper(nn.Module, metaclass=ABCMeta):
        return ((hasattr(self, 'roi_head') and self.roi_head.with_mask)
                or (hasattr(self, 'mask_head') and self.mask_head is not None))

    # @abstractmethod
    def extract_feat(self, imgs):
        """Extract features from images."""
        pass
...
@@ -48,11 +47,11 @@ class BaseMapper(nn.Module, metaclass=ABCMeta):
    def forward_train(self, *args, **kwargs):
        pass

    # @abstractmethod
    def simple_test(self, img, img_metas, **kwargs):
        pass

    # @abstractmethod
    def aug_test(self, imgs, img_metas, **kwargs):
        """Test function with test time augmentation."""
        pass
...
autonomous_driving/Online-HD-Map-Construction/src/models/mapers/vectormapnet.py (view file @ 41b18fd8)

import mmcv
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmdet3d.models.builder import build_backbone, build_head, build_neck
from torch.nn.utils.rnn import pad_sequence
from torchvision.models.resnet import resnet18, resnet50

from .base_mapper import MAPPERS, BaseMapper


@MAPPERS.register_module()
...
@@ -31,8 +27,7 @@ class VectorMapNet(BaseMapper):
                 model_name=None,
                 **kwargs):
        super(VectorMapNet, self).__init__()

        # Attribute
        self.model_name = model_name
        self.last_epoch = None
        self.only_det = only_det
...
@@ -59,11 +54,10 @@ class VectorMapNet(BaseMapper):
        )

        # BEV
        if hasattr(self.backbone, 'bev_w'):
            self.bev_w = self.backbone.bev_w
            self.bev_h = self.backbone.bev_h

        self.head = build_head(head_cfg)

    def multiscale_neck(self, bev_embedding):
...
@@ -97,7 +91,7 @@ class VectorMapNet(BaseMapper):
        if self.last_epoch is None:
            self.last_epoch = [batch, img, img_metas, valid_idx, points]

        if len(valid_idx) == 0:
            batch, img, img_metas, valid_idx, points = self.last_epoch
        else:
            del self.last_epoch
...
@@ -184,8 +178,8 @@ class VectorMapNet(BaseMapper):
            return None, None, None, valid_idx, None

        batch = {}
        batch['det'] = format_det(polys, device)
        batch['gen'] = format_gen(polys, device)

        return batch, imgs, img_metas, valid_idx, points
...
@@ -193,7 +187,7 @@ class VectorMapNet(BaseMapper):
        pad_points = pad_sequence(points, batch_first=True)
        points_mask = torch.zeros_like(pad_points[:, :, 0]).bool()
        for i in range(len(points)):
            valid_num = points[i].shape[0]
            points_mask[i][:valid_num] = True
...
@@ -202,15 +196,13 @@ class VectorMapNet(BaseMapper):
def format_det(polys, device):
    batch = {
        'class_label': [],
        'batch_idx': [],
        'bbox': [],
    }
    for batch_idx, poly in enumerate(polys):
        keypoint_label = torch.from_numpy(poly['det_label']).to(device)
        keypoint = torch.from_numpy(poly['keypoint']).to(device)
...
@@ -220,8 +212,7 @@ def format_det(polys, device):
    return batch


def format_gen(polys, device):
    line_cls = []
    polylines, polyline_masks, polyline_weights = [], [], []
    bbox, line_cls, line_bs_idx = [], [], []
...
@@ -230,13 +221,13 @@ def format_gen(polys, device):
        # convert to cuda tensor
        for k in poly.keys():
            if isinstance(poly[k], np.ndarray):
                poly[k] = torch.from_numpy(poly[k]).to(device)
            else:
                poly[k] = [torch.from_numpy(v).to(device) for v in poly[k]]

        line_cls += poly['gen_label']
        line_bs_idx += [batch_idx] * len(poly['gen_label'])
        # condition
        bbox += poly['qkeypoint']
...
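The `pad_sequence` + `points_mask` pattern above pads ragged per-sample point sets to a rectangle and then marks which entries are real versus padding (`points_mask[i][:valid_num] = True`). The same idea can be sketched without torch, using nested Python lists; `pad_with_mask` is a hypothetical helper name, not a function from the repo:

```python
# Tensor-free sketch of the padding-mask construction in the diff above.

def pad_with_mask(points, pad_value=0):
    max_len = max(len(p) for p in points)
    # Pad every row to the longest row, like pad_sequence(batch_first=True).
    padded = [p + [pad_value] * (max_len - len(p)) for p in points]
    # True where the entry is a real point, False where it is padding.
    mask = [[j < len(p) for j in range(max_len)] for p in points]
    return padded, mask

padded, mask = pad_with_mask([[1, 2, 3], [4]])
print(padded)  # → [[1, 2, 3], [4, 0, 0]]
print(mask)    # → [[True, True, True], [True, False, False]]
```

Carrying the mask alongside the padded tensor is what lets downstream losses and attention ignore the zero-filled entries rather than treating them as real points.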
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/__init__.py (view file @ 41b18fd8)

from .deformable_transformer import DeformableDetrTransformer_, DeformableDetrTransformerDecoder_
from .base_transformer import PlaceHolderEncoder
\ No newline at end of file
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/base_transformer.py (view file @ 41b18fd8)

import torch.nn as nn
from mmcv.cnn.bricks.registry import TRANSFORMER_LAYER_SEQUENCE


@TRANSFORMER_LAYER_SEQUENCE.register_module()
class PlaceHolderEncoder(nn.Module):
...
@@ -21,5 +10,4 @@ class PlaceHolderEncoder(nn.Module):
        self.embed_dims = embed_dims

    def forward(self, *args, query=None, **kwargs):
        return query
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/deformable_transformer.py
View file @
41b18fd8
...
@@ -4,18 +4,12 @@ import warnings
...
@@ -4,18 +4,12 @@ import warnings
import
torch
import
torch
import
torch.nn
as
nn
import
torch.nn
as
nn
from
mmcv.cnn
import
build_activation_layer
,
build_norm_layer
,
xavier_init
from
mmcv.cnn
import
xavier_init
from
mmcv.cnn.bricks.registry
import
(
TRANSFORMER_LAYER
,
from
mmcv.cnn.bricks.registry
import
TRANSFORMER_LAYER_SEQUENCE
TRANSFORMER_LAYER_SEQUENCE
)
from
mmcv.cnn.bricks.transformer
import
TransformerLayerSequence
from
mmcv.cnn.bricks.transformer
import
(
BaseTransformerLayer
,
TransformerLayerSequence
,
build_transformer_layer_sequence
)
from
mmcv.runner.base_module
import
BaseModule
from
torch.nn.init
import
normal_
from
mmdet.models.utils.builder
import
TRANSFORMER
from
mmdet.models.utils.builder
import
TRANSFORMER
from
mmdet.models.utils.transformer
import
Transformer
from
mmdet.models.utils.transformer
import
Transformer
from
torch.nn.init
import
normal_
try
:
try
:
from
mmcv.ops.multi_scale_deform_attn
import
MultiScaleDeformableAttention
from
mmcv.ops.multi_scale_deform_attn
import
MultiScaleDeformableAttention
...
@@ -27,6 +21,7 @@ except ImportError:
...
@@ -27,6 +21,7 @@ except ImportError:
from
.fp16_dattn
import
MultiScaleDeformableAttentionFp16
from
.fp16_dattn
import
MultiScaleDeformableAttentionFp16
def
inverse_sigmoid
(
x
,
eps
=
1e-5
):
def
inverse_sigmoid
(
x
,
eps
=
1e-5
):
"""Inverse function of sigmoid.
"""Inverse function of sigmoid.
Args:
Args:
...
@@ -44,6 +39,7 @@ def inverse_sigmoid(x, eps=1e-5):
...
@@ -44,6 +39,7 @@ def inverse_sigmoid(x, eps=1e-5):
x2
=
(
1
-
x
).
clamp
(
min
=
eps
)
x2
=
(
1
-
x
).
clamp
(
min
=
eps
)
return
torch
.
log
(
x1
/
x2
)
return
torch
.
log
(
x1
/
x2
)
@
TRANSFORMER_LAYER_SEQUENCE
.
register_module
()
@
TRANSFORMER_LAYER_SEQUENCE
.
register_module
()
class
DeformableDetrTransformerDecoder_
(
TransformerLayerSequence
):
class
DeformableDetrTransformerDecoder_
(
TransformerLayerSequence
):
"""Implements the decoder in DETR transformer.
"""Implements the decoder in DETR transformer.
...
@@ -94,13 +90,13 @@ class DeformableDetrTransformerDecoder_(TransformerLayerSequence):
...
@@ -94,13 +90,13 @@ class DeformableDetrTransformerDecoder_(TransformerLayerSequence):
for
lid
,
layer
in
enumerate
(
self
.
layers
):
for
lid
,
layer
in
enumerate
(
self
.
layers
):
reference_points_input
=
\
reference_points_input
=
\
reference_points
[:,
:,
None
,:
self
.
kp_coord_dim
]
*
\
reference_points
[:,
:,
None
,
:
self
.
kp_coord_dim
]
*
\
valid_ratios
[:,
None
,:,:
self
.
kp_coord_dim
]
valid_ratios
[:,
None
,
:,
:
self
.
kp_coord_dim
]
# if reference_points.shape[-1] == 3 and self.kp_coord_dim==2:
# if reference_points.shape[-1] == 3 and self.kp_coord_dim==2:
output
=
layer
(
output
=
layer
(
output
,
output
,
*
args
,
*
args
,
reference_points
=
reference_points_input
[...,:
self
.
kp_coord_dim
],
reference_points
=
reference_points_input
[...,
:
self
.
kp_coord_dim
],
**
kwargs
)
**
kwargs
)
output
=
output
.
permute
(
1
,
0
,
2
)
output
=
output
.
permute
(
1
,
0
,
2
)
            ...
@@ -108,11 +104,12 @@ class DeformableDetrTransformerDecoder_(TransformerLayerSequence):
                tmp = reg_branches[lid](output)
                new_reference_points = tmp
                new_reference_points[..., :self.kp_coord_dim] = tmp[
                    ..., :self.kp_coord_dim] + inverse_sigmoid(reference_points)
                new_reference_points = new_reference_points.sigmoid()
                if reference_points.shape[-1] == 3 and self.kp_coord_dim == 2:
                    reference_points[..., -1] = tmp[..., -1].sigmoid().detach()
                reference_points[..., :self.coord_dim] = new_reference_points.detach()
            output = output.permute(1, 0, 2)
            if self.return_intermediate:
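The refinement step above composes the predicted offset with the current reference point in logit space, so the regression branch can output an unbounded delta while the refined point stays inside (0, 1). A scalar sketch of that update rule (the numbers are hypothetical, not from the model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p, eps=1e-5):
    # inverse sigmoid, clamped away from 0 and 1 for stability
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

ref = 0.5    # current normalized reference coordinate
delta = 0.4  # unbounded offset from the regression branch (tmp)
new_ref = sigmoid(delta + logit(ref))
print(round(new_ref, 4))  # 0.5987 -> moved right, still inside (0, 1)
```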
        ...
@@ -174,7 +171,7 @@ class DeformableDetrTransformer_(Transformer):
        for m in self.modules():
            if isinstance(m, MultiScaleDeformableAttention):
                m.init_weights()
            elif isinstance(m, MultiScaleDeformableAttentionFp16):
                m.init_weights()
        if not self.as_two_stage:
            xavier_init(self.reference_points_embed, distribution='uniform', bias=0.)
        ...
@@ -231,7 +228,7 @@ class DeformableDetrTransformer_(Transformer):
        scale = 2 * math.pi
        dim_t = torch.arange(
            num_pos_feats, dtype=torch.float32, device=proposals.device)
        dim_t = temperature**(2 * (dim_t // 2) / num_pos_feats)
        # N, L, 4
        proposals = proposals.sigmoid() * scale
        # N, L, 4, 128
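The lines above build the standard sine/cosine positional encoding for each proposal coordinate. A plain-Python sketch of the same frequency schedule for a single scalar coordinate, assuming the usual Deformable DETR defaults of `num_pos_feats=128` and `temperature=10000`:

```python
import math

num_pos_feats, temperature = 128, 10000
scale = 2 * math.pi

p = 0.25  # one coordinate, already normalized to (0, 1) as after sigmoid()
# paired frequencies: dim_t = temperature ** (2 * (i // 2) / num_pos_feats)
dim_t = [temperature ** (2 * (i // 2) / num_pos_feats)
         for i in range(num_pos_feats)]
angles = [p * scale / t for t in dim_t]
# interleave sin on even indices and cos on odd indices
pos = [math.sin(a) if i % 2 == 0 else math.cos(a)
       for i, a in enumerate(angles)]
print(len(pos))  # 128
```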
        ...
@@ -317,7 +314,7 @@ class DeformableDetrTransformer_(Transformer):
        spatial_shapes = torch.as_tensor(
            spatial_shapes, dtype=torch.long, device=feat_flatten.device)
        level_start_index = torch.cat((spatial_shapes.new_zeros(
            (1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))
        valid_ratios = torch.stack(
            [self.get_valid_ratio(m) for m in mlvl_masks], 1)
        ...
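`level_start_index` above marks where each feature level begins in the flattened token sequence: it is the exclusive prefix sum of the per-level token counts. A plain-Python sketch with made-up shapes:

```python
# three feature levels with (H, W) spatial shapes
spatial_shapes = [(8, 8), (4, 4), (2, 2)]
sizes = [h * w for h, w in spatial_shapes]  # tokens per level: [64, 16, 4]

# exclusive prefix sum == cat([0], prod(1).cumsum(0)[:-1])
level_start_index, acc = [], 0
for s in sizes:
    level_start_index.append(acc)
    acc += s
print(level_start_index)  # [0, 64, 80]
```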
autonomous_driving/Online-HD-Map-Construction/src/models/transformer_utils/fp16_dattn.py
View file @ 41b18fd8

-from turtle import forward
import warnings

try:
    from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention
except ImportError:
    ...
@@ -7,12 +7,6 @@ except ImportError:
        '`MultiScaleDeformableAttention` in MMCV has been moved to '
        '`mmcv.ops.multi_scale_deform_attn`, please update your MMCV')
    from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention

-from mmcv.runner import force_fp32, auto_fp16
-from mmcv.cnn.bricks.registry import ATTENTION
-from mmcv.runner.base_module import BaseModule, ModuleList, Sequential
-from mmcv.cnn.bricks.transformer import build_attention
import math
import warnings
...
@@ -20,13 +14,15 @@ import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
-from torch.autograd.function import Function, once_differentiable
from mmcv import deprecated_api_warning
from mmcv.cnn import constant_init, xavier_init
from mmcv.cnn.bricks.registry import ATTENTION
-from mmcv.runner import BaseModule
+from mmcv.cnn.bricks.transformer import build_attention
+from mmcv.runner import force_fp32
+from mmcv.runner.base_module import BaseModule
from mmcv.utils import ext_loader
+from torch.autograd.function import Function, once_differentiable

ext_module = ext_loader.load_ext(
    '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward'])
from torch.cuda.amp import custom_bwd, custom_fwd
...
@@ -35,16 +31,15 @@ from torch.cuda.amp import custom_bwd, custom_fwd
@ATTENTION.register_module()
class MultiScaleDeformableAttentionFp16(BaseModule):
    def __init__(self, attn_cfg=None, init_cfg=None, **kwarg):
        super(MultiScaleDeformableAttentionFp16, self).__init__(init_cfg)
        # import ipdb; ipdb.set_trace()
        self.deformable_attention = build_attention(attn_cfg)
        self.deformable_attention.init_weights()
        self.fp16_enabled = False

    @force_fp32(apply_to=('query', 'key', 'value', 'query_pos',
                          'reference_points', 'identity'))
    def forward(self,
                query,
                key=None,
                value=None,
...
@@ -64,8 +59,7 @@ class MultiScaleDeformableAttentionFp16(BaseModule):
            key_padding_mask=key_padding_mask,
            reference_points=reference_points,
            spatial_shapes=spatial_shapes,
            level_start_index=level_start_index,
            **kwargs)


class MultiScaleDeformableAttnFunctionFp32(Function):
    ...
@@ -118,7 +112,7 @@ class MultiScaleDeformableAttnFunctionFp32(Function):
                Tuple[Tensor]: Gradient of input tensors in forward.
        """
        value, value_spatial_shapes, value_level_start_index, \
            sampling_locations, attention_weights = ctx.saved_tensors
        grad_value = torch.zeros_like(value)
        grad_sampling_loc = torch.zeros_like(sampling_locations)
    ...
@@ -161,7 +155,7 @@ def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes,
    """
    bs, _, num_heads, embed_dims = value.shape
    _, num_queries, num_heads, num_levels, num_points, _ = \
        sampling_locations.shape
    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
                             dim=1)
...
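The `value.split(...)` call above cuts the flattened multi-level value tensor back into one chunk per feature level, sized H * W each. The same bookkeeping on a plain list standing in for the token axis:

```python
# mirror value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)
value_spatial_shapes = [(4, 4), (2, 2)]
num_tokens = sum(h * w for h, w in value_spatial_shapes)  # 20
value = list(range(num_tokens))  # stand-in for the flattened token axis

chunks, start = [], 0
for h, w in value_spatial_shapes:
    chunks.append(value[start:start + h * w])
    start += h * w
print([len(c) for c in chunks])  # [16, 4]
```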
autonomous_driving/Online-HD-Map-Construction/tools/evaluate_submission.py
View file @ 41b18fd8

import os
import sys

sys.path.append(os.path.abspath('.'))

import argparse

from src.datasets.evaluation.vector_eval import VectorEvaluate


def parse_args():
    parser = argparse.ArgumentParser(
        description='Evaluate a submission file')
...
@@ -17,12 +20,14 @@ def parse_args():
    args = parser.parse_args()
    return args


def main(args):
    evaluator = VectorEvaluate(args.gt, n_workers=0)
    results = evaluator.evaluate(args.submission)
    print(results)


if __name__ == '__main__':
    args = parse_args()
    main(args)
autonomous_driving/Online-HD-Map-Construction/tools/mmdet_test.py
View file @ 41b18fd8
...
@@ -9,7 +9,6 @@ import torch
import torch.distributed as dist
from mmcv.image import tensor2imgs
from mmcv.runner import get_dist_info
from mmdet.core import encode_mask_results
...
@@ -120,7 +119,7 @@ def collect_results_cpu(result_part, size, tmpdir=None):
    if tmpdir is None:
        MAX_LEN = 512
        # 32 is whitespace
        dir_tensor = torch.full((MAX_LEN,),
                                32,
                                dtype=torch.uint8,
                                device='cuda')
...
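`collect_results_cpu` above broadcasts the temporary directory name between ranks as a fixed-length uint8 tensor padded with spaces (byte 32). The encode/decode round-trip can be sketched without torch or a distributed setup (the path here is hypothetical; the real one comes from `tempfile.mkdtemp`):

```python
MAX_LEN = 512
tmpdir = '/tmp/.dist_test'  # hypothetical path

# encode: fixed-size buffer filled with spaces (byte 32), name written at front
buf = bytearray(b' ' * MAX_LEN)
encoded = tmpdir.encode()
buf[:len(encoded)] = encoded

# decode on the receiving rank: strip the space padding
decoded = buf.decode().rstrip()
print(decoded == tmpdir)  # True
```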
autonomous_driving/Online-HD-Map-Construction/tools/mmdet_train.py
View file @ 41b18fd8
...
@@ -8,7 +8,6 @@ from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,
                         Fp16OptimizerHook, OptimizerHook, build_optimizer,
                         build_runner)
from mmcv.utils import build_from_cfg
from mmdet.core import DistEvalHook, EvalHook
from mmdet.datasets import (build_dataloader, build_dataset,
                            replace_ImageToTensor)
...
autonomous_driving/Online-HD-Map-Construction/tools/test.py
View file @ 41b18fd8

import argparse
import os
import os.path as osp
import warnings

import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.cnn import fuse_conv_bn
from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
                         wrap_fp16_model)
from mmdet3d.apis import single_gpu_test
from mmdet3d.datasets import build_dataloader, build_dataset
from mmdet3d.models import build_model
from mmdet.datasets import replace_ImageToTensor
from mmdet_test import multi_gpu_test
from mmdet_train import set_random_seed


def parse_args():
    ...
@@ -106,8 +104,8 @@ def main():
    plg_lib = importlib.import_module(_module_path)

    plugin_dirs = cfg.plugin_dir
    if not isinstance(plugin_dirs, list):
        plugin_dirs = [plugin_dirs, ]
    for plugin_dir in plugin_dirs:
        import_path(plugin_dir)
    ...
@@ -154,7 +152,7 @@ def main():
        osp.splitext(osp.basename(args.config))[0])
    cfg_data_dict.work_dir = cfg.work_dir
    print('work_dir: ', cfg.work_dir)
    dataset = build_dataset(cfg_data_dict)
    data_loader = build_dataloader(
        dataset,
...
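The tools in this commit all normalize `cfg.plugin_dir`, which may be a single path or a list of paths, before iterating. The pattern in isolation (the paths below are hypothetical):

```python
def as_plugin_list(plugin_dirs):
    # accept either 'path' or ['path1', 'path2'] and always return a list
    if not isinstance(plugin_dirs, list):
        plugin_dirs = [plugin_dirs, ]
    return plugin_dirs

print(as_plugin_list('src/models'))        # ['src/models']
print(as_plugin_list(['src', 'plugins']))  # ['src', 'plugins']
```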
autonomous_driving/Online-HD-Map-Construction/tools/train.py
View file @ 41b18fd8
...
@@ -2,26 +2,25 @@ from __future__ import division
import argparse
import copy
import os
import time
import warnings
from os import path as osp

import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.runner import get_dist_info, init_dist
from mmdet import __version__ as mmdet_version
from mmdet3d import __version__ as mmdet3d_version
from mmdet3d.apis import train_model
from mmdet3d.datasets import build_dataset
from mmdet3d.utils import collect_env, get_root_logger

# warper
from mmdet_train import set_random_seed

# from builder import build_model
from mmseg import __version__ as mmseg_version

from mmdet3d.models import build_model


def parse_args():
    ...
@@ -126,8 +125,8 @@ def main():
    plg_lib = importlib.import_module(_module_path)

    plugin_dirs = cfg.plugin_dir
    if not isinstance(plugin_dirs, list):
        plugin_dirs = [plugin_dirs, ]
    for plugin_dir in plugin_dirs:
        import_path(plugin_dir)
...
autonomous_driving/Online-HD-Map-Construction/tools/visualization/renderer.py
View file @ 41b18fd8

import copy
import os
import os.path as osp

import cv2
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from shapely.geometry import LineString


def remove_nan_values(uv):
    is_u_valid = np.logical_not(np.isnan(uv[:, 0]))
    is_v_valid = np.logical_not(np.isnan(uv[:, 1]))
    ...
@@ -15,6 +17,7 @@ def remove_nan_values(uv):
    uv_valid = uv[is_uv_valid]
    return uv_valid


def points_ego2img(pts_ego, extrinsics, intrinsics):
    pts_ego_4d = np.concatenate([pts_ego, np.ones([len(pts_ego), 1])], axis=-1)
    pts_cam_4d = extrinsics @ pts_ego_4d.T
    ...
@@ -26,6 +29,7 @@ def points_ego2img(pts_ego, extrinsics, intrinsics):
    return uv, depth
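`points_ego2img` projects ego-frame points into the image through homogeneous coordinates: lift to 4D, apply the extrinsics, then the intrinsics, and divide by depth. A self-contained sketch with made-up camera matrices (identity extrinsics, so the ego frame coincides with the camera frame):

```python
import numpy as np

pts_ego = np.array([[0.0, 0.0, 2.0]])  # one point 2 m along the camera axis
extrinsics = np.eye(4)                 # hypothetical: ego frame == camera frame
intrinsics = np.array([[500.0, 0.0, 320.0],
                       [0.0, 500.0, 240.0],
                       [0.0, 0.0, 1.0]])

# lift to homogeneous coordinates, as in points_ego2img
pts_4d = np.concatenate([pts_ego, np.ones([len(pts_ego), 1])], axis=-1)
pts_cam = (extrinsics @ pts_4d.T)[:3]  # (3, N) camera-frame coordinates
uvw = intrinsics @ pts_cam
depth = uvw[2]
uv = (uvw[:2] / depth).T
print(uv)  # [[320. 240.]] -> the principal point, as expected for an on-axis point
```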


def interp_fixed_dist(line, sample_dist):
    ''' Interpolate a line at fixed interval.
    ...
@@ -39,13 +43,14 @@ def interp_fixed_dist(line, sample_dist):
    distances = list(np.arange(sample_dist, line.length, sample_dist))
    # make sure to sample at least two points when sample_dist > line.length
    distances = [0, ] + distances + [line.length, ]
    sampled_points = np.array([list(line.interpolate(distance).coords)
                               for distance in distances]).squeeze()
    return sampled_points
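`interp_fixed_dist` always prepends 0 and appends the line length, so even when `sample_dist` exceeds the line length at least two points come back. The distance schedule in plain Python (no shapely needed):

```python
# mirror [0, ] + list(np.arange(sample_dist, length, sample_dist)) + [length, ]
def sample_distances(sample_dist, length):
    distances = [0.0]
    d = sample_dist
    while d < length:
        distances.append(d)
        d += sample_dist
    distances.append(length)  # the endpoint is always included
    return distances

print(sample_distances(2.0, 5.0))  # [0.0, 2.0, 4.0, 5.0]
print(sample_distances(9.0, 5.0))  # [0.0, 5.0] -> still two points
```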


def draw_polyline_ego_on_img(polyline_ego, img_bgr, extrinsics, intrinsics,
                             color_bgr, thickness):
    # if 2-dimension, assume z=0
    if polyline_ego.shape[1] == 2:
    ...
@@ -103,6 +108,7 @@ def draw_polyline_ego_on_img(polyline_ego, img_bgr, extrinsics, intrinsics, colo
    #     thickness_px=thickness,
    # )


def draw_visible_polyline_cv2(line, valid_pts_bool, image, color, thickness_px):
    """Draw a polyline onto an image using given line segments.
    Args:
    ...
@@ -147,8 +153,9 @@ COLOR_MAPS_PLT = {
}

CAM_NAMES_AV2 = ['ring_front_center', 'ring_front_right', 'ring_front_left',
                 'ring_rear_right', 'ring_rear_left', 'ring_side_right',
                 'ring_side_left',
                 ]


class Renderer(object):
    """Render map elements on image views.
    ...
@@ -171,8 +178,8 @@ class Renderer(object):
        map_path = os.path.join(out_dir, 'map.jpg')
        plt.figure(figsize=(self.roi_size[0], self.roi_size[1]))
        plt.xlim(-self.roi_size[0] / 2 - 1, self.roi_size[0] / 2 + 1)
        plt.ylim(-self.roi_size[1] / 2 - 1, self.roi_size[1] / 2 + 1)
        plt.axis('off')
        plt.imshow(car_img, extent=[-1.5, 1.5, -1.2, 1.2])
...
autonomous_driving/Online-HD-Map-Construction/tools/visualization/visualize.py
View file @ 41b18fd8

import argparse
import os

import mmcv
from mmcv import Config
from renderer import Renderer

CAT2ID = {
    ...
@@ -14,6 +14,7 @@ ID2CAT = {v: k for k, v in CAT2ID.items()}
ROI_SIZE = (60, 30)


def parse_args():
    parser = argparse.ArgumentParser(
        description='Visualize groundtruth and results')
    ...
@@ -37,6 +38,7 @@ def parse_args():
    return args


def import_plugin(cfg):
    '''
    import modules, registry will be updated
    ...
@@ -59,8 +61,8 @@ def import_plugin(cfg):
    plg_lib = importlib.import_module(_module_path)

    plugin_dirs = cfg.plugin_dir
    if not isinstance(plugin_dirs, list):
        plugin_dirs = [plugin_dirs, ]
    for plugin_dir in plugin_dirs:
        import_path(plugin_dir)
    ...
@@ -74,6 +76,7 @@ def import_plugin(cfg):
    print(f'importing {_module_path}/')
    plg_lib = importlib.import_module(_module_path)


def main(args):
    log_id = args.log_id
    ann = mmcv.load(args.ann_file)
...
autonomous_driving/README.md
View file @ 41b18fd8

<div id="top" align="center">

# InternImage for CVPR 2023 Workshop on End-to-End Autonomous Driving

</div>

## 1. InternImage-based Baseline for CVPR23 Occupancy Prediction Challenge

We achieve an improvement of 1.44 mIoU over the baseline by leveraging the InternImage-based model.

| model name | weight | mIoU | others | barrier | bicycle | bus | car | construction_vehicle | motorcycle | pedestrian | traffic_cone | trailer | truck | driveable_surface | other_flat | sidewalk | terrain | manmade | vegetation |
| ---------- | :----: | :--: | :----: | :-----: | :-----: | :-: | :-: | :------------------: | :--------: | :--------: | :----------: | :-----: | :---: | :---------------: | :--------: | :------: | :-----: | :-----: | :--------: |
| bevformer_intern-s_occ | [Google Drive](https://drive.google.com/file/d/1LV9K8hrskKf51xY1wbqTKzK7WZmVXEV_/view?usp=sharing) | 25.11 | 6.93 | 35.57 | 10.40 | 35.97 | 41.23 | 13.72 | 20.30 | 21.10 | 18.34 | 19.18 | 28.64 | 49.82 | 30.74 | 31.00 | 27.44 | 19.29 | 17.29 |
| bevformer_base_occ | [Google Drive](https://drive.google.com/file/d/1NyoiosafAmne1qiABeNOPXR-P-y0i7_I/view?usp=share_link) | 23.67 | 5.03 | 38.79 | 9.98 | 34.41 | 41.09 | 13.24 | 16.50 | 18.15 | 17.83 | 18.66 | 27.70 | 48.95 | 27.73 | 29.08 | 25.38 | 15.41 | 14.46 |

### Get Started

Please refer to [README.md](./occupancy_prediction/README.md).

## 2. InternImage-based Baseline for Online HD Map Construction Challenge For Autonomous Driving

By incorporating the InternImage-based model, we observe an improvement of 6.56 mAP over the baseline.

| model name | weight | $\mathrm{mAP}$ | $\mathrm{AP}_{pc}$ | $\mathrm{AP}_{div}$ | $\mathrm{AP}_{bound}$ |
| ---------- | :----: | :------------: | :----------------: | :-----------------: | :-------------------: |
| vectormapnet_intern | [Checkpoint](https://github.com/OpenGVLab/InternImage/releases/download/track_model/vectormapnet_internimage.pth) | 49.35 | 45.05 | 56.78 | 46.22 |
| vectormapnet_base | [Google Drive](https://drive.google.com/file/d/16D1CMinwA8PG1sd9PV9_WtHzcBohvO-D/view) | 42.79 | 37.22 | 50.47 | 40.68 |

### Get Started

Please refer to [README.md](Online-HD-Map-Construction/README.md).

## 3. InternImage-based Baseline for CVPR23 OpenLane-V2 Challenge

Through the implementation of the InternImage-based model, we achieve an improvement of 0.009 in F-score over the baseline.

| | OpenLane-V2 Score | DET<sub>l</sub> | DET<sub>t</sub> | TOP<sub>ll</sub> | TOP<sub>lt</sub> | F-Score |
| ----------- | :---------------: | :-------------: | :-------------: | :--------------: | :--------------: | :-----: |
| base r50 | 0.292 | 0.183 | 0.457 | 0.022 | 0.143 | 0.215 |
| InternImage | 0.325 | 0.194 | 0.537 | 0.02 | 0.17 | 0.224 |

### Get Started

Please refer to [README.md](./openlane-v2/README.md).
autonomous_driving/occupancy_prediction/CITATION.cff
View file @ 41b18fd8
autonomous_driving/occupancy_prediction/CODE_OF_CONDUCT.md
View file @
41b18fd8
...
@@ -2,127 +2,101 @@
...
@@ -2,127 +2,101 @@
## Our Pledge
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for
community a harassment-free experience for everyone, regardless of age, body
everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity
size, visible or invisible disability, ethnicity, sex characteristics, gender
and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion,
identity and expression, level of experience, education, socio-economic status,
or sexual identity and orientation.
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
diverse, inclusive, and healthy community.
## Our Standards
## Our Standards
Examples of behavior that contributes to a positive environment for our
Examples of behavior that contributes to a positive environment for our community include:
community include:
*
Demonstrating empathy and kindness toward other people
-
Demonstrating empathy and kindness toward other people
*
Being respectful of differing opinions, viewpoints, and experiences
-
Being respectful of differing opinions, viewpoints, and experiences
*
Giving and gracefully accepting constructive feedback
-
Giving and gracefully accepting constructive feedback
*
Accepting responsibility and apologizing to those affected by our mistakes,
-
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
and learning from the experience
-
Focusing on what is best not just for us as individuals, but for the overall community
*
Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
Examples of unacceptable behavior include:
*
The use of sexualized language or imagery, and sexual attention or
-
The use of sexualized language or imagery, and sexual attention or advances of any kind
advances of any kind
-
Trolling, insulting or derogatory comments, and personal or political attacks
*
Trolling, insulting or derogatory comments, and personal or political attacks
-
Public or private harassment
*
Public or private harassment
-
Publishing others' private information, such as a physical or email address, without their explicit permission
*
Publishing others' private information, such as a physical or email
-
Other conduct which could reasonably be considered inappropriate in a professional setting
address, without their explicit permission
*
Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at contact@opendrivelab.com. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.
## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.
## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

[homepage]: https://www.contributor-covenant.org