Unverified Commit 68ea4b4d authored by Jingwei Zhang, committed by GitHub

[Fix] Fix formula in the readthedocs (#2580)

* fix latex in readthedocs

* fix zh_cn latex

* fix zh_cn latex

* fix en latex
parent 583e9075
@@ -48,7 +48,7 @@ Despite the variety of datasets and equipment, by summarizing the line of works
left ------ 0 ------> x right
```
The definition of coordinate systems in this tutorial is actually **more than just defining the three axes**. For a box in the form of `` $$`(x, y, z, dx, dy, dz, r)`$$ ``, our coordinate systems also define how to interpret the box dimensions `` $$`(dx, dy, dz)`$$ `` and the yaw angle `` $$`r`$$ ``.
The definition of coordinate systems in this tutorial is actually **more than just defining the three axes**. For a box in the form of $(x, y, z, dx, dy, dz, r)$, our coordinate systems also define how to interpret the box dimensions $(dx, dy, dz)$ and the yaw angle $r$.
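As a rough, hedged illustration of this convention, the snippet below wraps one such 7-tuple in a box structure and reads its parts back; it assumes the `LiDARInstance3DBoxes` class exported from `mmdet3d.core.bbox` and its `gravity_center` / `dims` / `yaw` properties behave as described in this tutorial, so treat it as a sketch rather than an API reference.
```python
import torch
from mmdet3d.core.bbox import LiDARInstance3DBoxes  # assumed import path

# One LiDAR-coordinate box (x, y, z, dx, dy, dz, r): center, dimensions, yaw.
box = LiDARInstance3DBoxes(torch.tensor([[1.0, 2.0, -1.0, 4.5, 1.9, 1.6, 0.3]]))
print(box.gravity_center)  # (x, y, z) shifted to the box's gravity center
print(box.dims)            # (dx, dy, dz)
print(box.yaw)             # r
```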
The illustration of the three coordinate systems is shown below:
@@ -60,13 +60,13 @@ We will stick to the three coordinate systems defined in this tutorial in the fu
## Definition of the yaw angle
Please refer to [wikipedia](https://en.wikipedia.org/wiki/Euler_angles#Tait%E2%80%93Bryan_angles) for the standard definition of the yaw angle. In object detection, we choose an axis as the gravity axis, and a reference direction on the plane `` $$`\Pi`$$ `` perpendicular to the gravity axis, then the reference direction has a yaw angle of 0, and other directions on `` $$`\Pi`$$ `` have non-zero yaw angles depending on its angle with the reference direction.
Please refer to [wikipedia](https://en.wikipedia.org/wiki/Euler_angles#Tait%E2%80%93Bryan_angles) for the standard definition of the yaw angle. In object detection, we choose an axis as the gravity axis and a reference direction on the plane $\Pi$ perpendicular to the gravity axis; the reference direction then has a yaw angle of 0, and other directions on $\Pi$ have non-zero yaw angles depending on their angle with the reference direction.
Currently, for all supported datasets, annotations do not include pitch angle and roll angle, which means we need only consider the yaw angle when predicting boxes and calculating overlap between boxes.
In MMDetection3D, all three coordinate systems are right-handed coordinate systems, which means the ascending direction of the yaw angle is counter-clockwise if viewed from the negative direction of the gravity axis (the axis is pointing at one's eyes).
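As a minimal sketch of this convention (plain Python, with an illustrative helper name), the yaw of a direction on $\Pi$ can be measured counter-clockwise from the positive x-axis with `atan2`:
```python
import math

def yaw_of(vx: float, vy: float) -> float:
    """Yaw of direction (vx, vy) on the plane perpendicular to the gravity axis,
    measured counter-clockwise from the positive x-axis (reference direction)."""
    return math.atan2(vy, vx)

print(yaw_of(1.0, 0.0))   # 0.0            -> the reference direction (+x)
print(yaw_of(0.0, 1.0))   # 1.5707963...   -> +y has a yaw angle of pi / 2
print(yaw_of(-1.0, 0.0))  # 3.1415926...   -> -x has a yaw angle of pi
```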
The figure below shows that, in this right-handed coordinate system, if we set the positive direction of the x-axis as a reference direction, then the positive direction of the y-axis has a yaw angle of `` $$`\frac{\pi}{2}`$$ ``.
The figure below shows that, in this right-handed coordinate system, if we set the positive direction of the x-axis as a reference direction, then the positive direction of the y-axis has a yaw angle of $\frac{\pi}{2}$.
```
z up y front (yaw=0.5*pi)
@@ -97,9 +97,9 @@ __|____|____|____|______\ x right
## Definition of the box dimensions
The definition of the box dimensions cannot be disentangled with the definition of the yaw angle. In the previous section, we said that the direction of a box is defined to be parallel with the x-axis if its yaw angle is 0. Then naturally, the dimension of a box which corresponds to the x-axis should be `` $$`dx`$$ ``. However, this is not always the case in some datasets (we will address that later).
The definition of the box dimensions cannot be disentangled from the definition of the yaw angle. In the previous section, we said that the direction of a box is defined to be parallel with the x-axis if its yaw angle is 0. Then naturally, the dimension of a box which corresponds to the x-axis should be $dx$. However, this is not always the case in some datasets (we will address that later).
The following figures show the meaning of the correspondence between the x-axis and `` $$`dx`$$ ``, and between the y-axis and `` $$`dy`$$ ``.
The following figures show the meaning of the correspondence between the x-axis and $dx$, and between the y-axis and $dy$.
```
y front
@@ -116,7 +116,7 @@ __|____|____|____|______\ x right
| dy
```
Note that the box direction is always parallel with the edge `` $$`dx`$$ ``.
Note that the box direction is always parallel with the edge $dx$.
```
y front
@@ -143,12 +143,12 @@ In SECOND, the LiDAR coordinate system for a box is defined as follows (a bird's
![](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/kittibox.png)
For each box, the dimensions are `` $$`(w, l, h)`$$ ``, and the reference direction for the yaw angle is the positive direction of the y axis. For more details, refer to the [repo](https://github.com/traveller59/second.pytorch#concepts).
For each box, the dimensions are $(w, l, h)$, and the reference direction for the yaw angle is the positive direction of the y axis. For more details, refer to the [repo](https://github.com/traveller59/second.pytorch#concepts).
Our LiDAR coordinate system has two changes:
- The yaw angle is defined to be right-handed instead of left-handed for consistency;
- The box dimensions are `` $$`(l, w, h)`$$ `` instead of `` $$`(w, l, h)`$$ ``, since `` $$`w`$$ `` corresponds to `` $$`dy`$$ `` and `` $$`l`$$ `` corresponds to `` $$`dx`$$ `` in KITTI.
- The box dimensions are $(l, w, h)$ instead of $(w, l, h)$, since $w$ corresponds to $dy$ and $l$ corresponds to $dx$ in KITTI.
### Waymo
@@ -156,7 +156,7 @@ We use the KITTI-format data of Waymo dataset. Therefore, KITTI and Waymo also s
### NuScenes
NuScenes provides a toolkit for evaluation, in which each box is wrapped into a `Box` instance. The coordinate system of `Box` is different from our LiDAR coordinate system in that the first two elements of the box dimension correspond to `` $$`(dy, dx)`$$ ``, or `` $$`(w, l)`$$ ``, respectively, instead of the reverse. For more details, please refer to the NuScenes [tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md#notes).
NuScenes provides a toolkit for evaluation, in which each box is wrapped into a `Box` instance. The coordinate system of `Box` is different from our LiDAR coordinate system in that the first two elements of the box dimension correspond to $(dy, dx)$, or $(w, l)$, respectively, instead of the reverse. For more details, please refer to the NuScenes [tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md#notes).
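As a hedged sketch of that reordering (assuming, as stated above, that the devkit's `Box.wlh` stores the dimensions in $(w, l, h)$ order; the helper name is illustrative):
```python
import numpy as np

def nuscenes_wlh_to_mmdet3d_dims(wlh: np.ndarray) -> np.ndarray:
    """Reorder NuScenes (w, l, h) into our (dx, dy, dz) = (l, w, h)."""
    w, l, h = wlh
    return np.array([l, w, h])

# e.g. the `wlh` attribute of a devkit `Box` instance
print(nuscenes_wlh_to_mmdet3d_dims(np.array([1.9, 4.5, 1.6])))  # [4.5 1.9 1.6]
```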
Readers may refer to the [NuScenes development kit](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/eval/detection) for the definition of a [NuScenes box](https://github.com/nutonomy/nuscenes-devkit/blob/2c6a752319f23910d5f55cc995abc547a9e54142/python-sdk/nuscenes/utils/data_classes.py#L457) and implementation of [NuScenes evaluation](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/eval/detection/evaluate.py).
@@ -188,25 +188,25 @@ Take the conversion between our Camera coordinate system and LiDAR coordinate sy
First, for points and box centers, the coordinates before and after the conversion satisfy the following relationship:
- `` $$`x_{LiDAR}=z_{camera}`$$ ``
- `` $$`y_{LiDAR}=-x_{camera}`$$ ``
- `` $$`z_{LiDAR}=-y_{camera}`$$ ``
- $x\_{LiDAR}=z\_{camera}$
- $y\_{LiDAR}=-x\_{camera}$
- $z\_{LiDAR}=-y\_{camera}$
Then, the box dimensions before and after the conversion satisfy the following relationship:
- `` $$`dx_{LiDAR}=dx_{camera}`$$ ``
- `` $$`dy_{LiDAR}=dz_{camera}`$$ ``
- `` $$`dz_{LiDAR}=dy_{camera}`$$ ``
- $dx\_{LiDAR}=dx\_{camera}$
- $dy\_{LiDAR}=dz\_{camera}$
- $dz\_{LiDAR}=dy\_{camera}$
Finally, the yaw angle should also be converted:
- `` $$`r_{LiDAR}=-\frac{\pi}{2}-r_{camera}`$$ ``
- $r\_{LiDAR}=-\frac{\pi}{2}-r\_{camera}$
See the code [here](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/core/bbox/structures/box_3d_mode.py) for more details.
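Purely as a sketch of the relations listed above (not the actual implementation behind the link, and ignoring any sensor calibration), the conversion of a single box could look like this:
```python
import numpy as np

def cam_box_to_lidar_box(cam_box: np.ndarray) -> np.ndarray:
    """Convert one (x, y, z, dx, dy, dz, r) box from Camera to LiDAR coordinates."""
    x, y, z, dx, dy, dz, r = cam_box
    return np.array([
        z,               # x_LiDAR =  z_camera
        -x,              # y_LiDAR = -x_camera
        -y,              # z_LiDAR = -y_camera
        dx,              # dx_LiDAR = dx_camera
        dz,              # dy_LiDAR = dz_camera
        dy,              # dz_LiDAR = dy_camera
        -np.pi / 2 - r,  # r_LiDAR = -pi/2 - r_camera
    ])

print(cam_box_to_lidar_box(np.array([1.0, 1.5, 10.0, 4.5, 1.6, 1.9, 0.3])))
```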
### Bird's Eye View
The BEV of a camera coordinate system box is `` $$`(x, z, dx, dz, -r)`$$ `` if the 3D box is `` $$`(x, y, z, dx, dy, dz, r)`$$ ``. The inversion of the sign of the yaw angle is because the positive direction of the gravity axis of the Camera coordinate system points to the ground.
The BEV of a camera coordinate system box is $(x, z, dx, dz, -r)$ if the 3D box is $(x, y, z, dx, dy, dz, r)$. The inversion of the sign of the yaw angle is because the positive direction of the gravity axis of the Camera coordinate system points to the ground.
See the code [here](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/core/bbox/structures/cam_box3d.py) for more details.
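For illustration only (same assumptions as the sketch above), extracting that BEV amounts to dropping the gravity-axis components and flipping the sign of the yaw angle:
```python
import numpy as np

def cam_box_to_bev(cam_box: np.ndarray) -> np.ndarray:
    """BEV (x, z, dx, dz, -r) of a camera-coordinate box (x, y, z, dx, dy, dz, r)."""
    x, y, z, dx, dy, dz, r = cam_box
    return np.array([x, z, dx, dz, -r])

print(cam_box_to_bev(np.array([1.0, 1.5, 10.0, 4.5, 1.6, 1.9, 0.3])))
# [ 1.  10.   4.5  1.9 -0.3]
```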
@@ -228,18 +228,18 @@ For each box related op, we have marked the type of boxes to which we can apply
No. For example, in KITTI, we need a calibration matrix when converting from Camera coordinate system to LiDAR coordinate system.
#### Q3: How does a phase difference of `` $$`2\pi`$$ `` in the yaw angle of a box affect evaluation?
#### Q3: How does a phase difference of $2\pi$ in the yaw angle of a box affect evaluation?
For IoU calculation, a phase difference of `` $$`2\pi`$$ `` in the yaw angle will result in the same box, thus not affecting evaluation.
For IoU calculation, a phase difference of $2\pi$ in the yaw angle will result in the same box, thus not affecting evaluation.
For angle prediction evaluation such as the NDS metric in NuScenes and the AOS metric in KITTI, the angle of predicted boxes will be first standardized, so the phase difference of `` $$`2\pi`$$ `` will not change the result.
For angle prediction evaluation such as the NDS metric in NuScenes and the AOS metric in KITTI, the angle of predicted boxes will be first standardized, so the phase difference of $2\pi$ will not change the result.
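A hedged sketch of such standardization (a generic modulo wrap into $[-\pi, \pi)$, not necessarily the exact routine used by these metrics) shows why the $2\pi$ offset vanishes:
```python
import numpy as np

def standardize_angle(angle: float) -> float:
    """Wrap an angle into [-pi, pi) (illustrative standardization)."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

r = 0.3
print(standardize_angle(r))              # 0.3
print(standardize_angle(r + 2 * np.pi))  # 0.3 -> the 2*pi phase difference disappears
```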
#### Q4: How does a phase difference of `` $$`\pi`$$ `` in the yaw angle of a box affect evaluation?
#### Q4: How does a phase difference of $\pi$ in the yaw angle of a box affect evaluation?
For IoU calculation, a phase difference of `` $$`\pi`$$ `` in the yaw angle will result in the same box, thus not affecting evaluation.
For IoU calculation, a phase difference of $\pi$ in the yaw angle will result in the same box, thus not affecting evaluation.
However, for angle prediction evaluation, this will result in the exact opposite direction.
Just think about a car. The yaw angle is the angle between the direction of the car front and the positive direction of the x-axis. If we add `` $$`\pi`$$ `` to this angle, the car front will become the car rear.
Just think about a car. The yaw angle is the angle between the direction of the car front and the positive direction of the x-axis. If we add $\pi$ to this angle, the car front will become the car rear.
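In terms of the heading vector, this is simply $(\cos(r+\pi), \sin(r+\pi)) = (-\cos r, -\sin r)$: adding $\pi$ reverses the direction exactly.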
For categories such as barrier, the front and the rear have no difference, therefore a phase difference of `` $$`\pi`$$ `` will not affect the angle prediction score.
For categories such as barrier, the front and the rear have no difference, therefore a phase difference of $\pi$ will not affect the angle prediction score.
@@ -48,7 +48,7 @@ MMDetection3D uses 3 different coordinate systems. In the field of 3D object detection, different coordinate
left ------ 0 ------> x right
```
The definition of coordinate systems in this tutorial is actually **more than just defining the three axes**. For a box in the form of `` $$`(x, y, z, dx, dy, dz, r)`$$ ``, our coordinate systems also define how to interpret the box dimensions `` $$`(dx, dy, dz)`$$ `` and the yaw angle `` $$`r`$$ ``.
The definition of coordinate systems in this tutorial is actually **more than just defining the three axes**. For a box in the form of $(x, y, z, dx, dy, dz, r)$, our coordinate systems also define how to interpret the box dimensions $(dx, dy, dz)$ and the yaw angle $r$.
The illustration of the three coordinate systems is shown below:
@@ -60,13 +60,13 @@ MMDetection3D uses 3 different coordinate systems. In the field of 3D object detection, different coordinate
## Definition of the yaw angle
Please refer to [wikipedia](https://en.wikipedia.org/wiki/Euler_angles#Tait%E2%80%93Bryan_angles) for the standard definition of the yaw angle. In object detection, we choose an axis as the gravity axis and pick a reference direction on the plane `` $$`\Pi`$$ `` perpendicular to the gravity axis; the reference direction then has a yaw angle of 0, and other directions on `` $$`\Pi`$$ `` have non-zero yaw angles depending on their angle with the reference direction.
Please refer to [wikipedia](https://en.wikipedia.org/wiki/Euler_angles#Tait%E2%80%93Bryan_angles) for the standard definition of the yaw angle. In object detection, we choose an axis as the gravity axis and pick a reference direction on the plane $\Pi$ perpendicular to the gravity axis; the reference direction then has a yaw angle of 0, and other directions on $\Pi$ have non-zero yaw angles depending on their angle with the reference direction.
Currently, for all supported datasets, annotations do not include the pitch angle and roll angle, which means we only need to consider the yaw angle when predicting boxes and computing the overlap between boxes.
In MMDetection3D, all coordinate systems are right-handed, which means the yaw angle increases counter-clockwise when viewed from the negative direction of the gravity axis (with the positive direction of the axis pointing at one's eyes).
The figure below shows that, in this right-handed coordinate system, if we set the positive direction of the x-axis as the reference direction, then the positive direction of the y-axis has a yaw angle of `` $$`\frac{\pi}{2}`$$ ``.
The figure below shows that, in this right-handed coordinate system, if we set the positive direction of the x-axis as the reference direction, then the positive direction of the y-axis has a yaw angle of $\frac{\pi}{2}$.
```
z up    y front (yaw=0.5*pi)
@@ -97,9 +97,9 @@ __|____|____|____|______\ x right
## Definition of the box dimensions
The definition of the box dimensions cannot be disentangled from the definition of the yaw angle. In the previous section, we mentioned that if the yaw angle of a box is 0, its direction is defined to be parallel with the x-axis. Then naturally, the dimension of a box corresponding to the x-axis should be `` $$`dx`$$ ``. However, this is not always the case in some datasets (we will address that later).
The definition of the box dimensions cannot be disentangled from the definition of the yaw angle. In the previous section, we mentioned that if the yaw angle of a box is 0, its direction is defined to be parallel with the x-axis. Then naturally, the dimension of a box corresponding to the x-axis should be $dx$. However, this is not always the case in some datasets (we will address that later).
The following figures show the correspondence between the x-axis and `` $$`dx`$$ ``, and between the y-axis and `` $$`dy`$$ ``.
The following figures show the correspondence between the x-axis and $dx$, and between the y-axis and $dy$.
```
y front
@@ -116,7 +116,7 @@ __|____|____|____|______\ x right
| dy
```
Note that the box direction is always parallel with the edge `` $$`dx`$$ ``.
Note that the box direction is always parallel with the edge $dx$.
```
y front
@@ -143,12 +143,12 @@ The raw annotations of the KITTI dataset are under the camera coordinate system; see [get_label_an
![](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/kittibox.png)
For each box, the dimensions are `` $$`(w, l, h)`$$ ``, and the reference direction for the yaw angle is the positive direction of the y-axis. For more details, please refer to the [repo](https://github.com/traveller59/second.pytorch#concepts).
For each box, the dimensions are $(w, l, h)$, and the reference direction for the yaw angle is the positive direction of the y-axis. For more details, please refer to the [repo](https://github.com/traveller59/second.pytorch#concepts).
Our LiDAR coordinate system has two changes:
- The yaw angle is defined to be right-handed instead of left-handed for consistency;
- The box dimensions are `` $$`(l, w, h)`$$ `` instead of `` $$`(w, l, h)`$$ ``, since `` $$`w`$$ `` corresponds to `` $$`dy`$$ `` and `` $$`l`$$ `` corresponds to `` $$`dx`$$ `` in the KITTI dataset.
- The box dimensions are $(l, w, h)$ instead of $(w, l, h)$, since $w$ corresponds to $dy$ and $l$ corresponds to $dx$ in the KITTI dataset.
### Waymo
@@ -156,7 +156,7 @@ The raw annotations of the KITTI dataset are under the camera coordinate system; see [get_label_an
### NuScenes
NuScenes provides a toolkit for evaluation, in which each box is wrapped into a `Box` instance. The coordinate system of `Box` is different from our LiDAR coordinate system: in the `Box` coordinate system, the first two elements of the box dimension correspond to `` $$`(dy, dx)`$$ ``, or `` $$`(w, l)`$$ ``, which is the opposite of our representation. For more details, please refer to the NuScenes [tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/zh_cn/datasets/nuscenes_det.md#notes).
NuScenes provides a toolkit for evaluation, in which each box is wrapped into a `Box` instance. The coordinate system of `Box` is different from our LiDAR coordinate system: in the `Box` coordinate system, the first two elements of the box dimension correspond to $(dy, dx)$, or $(w, l)$, which is the opposite of our representation. For more details, please refer to the NuScenes [tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/zh_cn/datasets/nuscenes_det.md#notes).
Readers may refer to the [NuScenes development kit](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/eval/detection) for the definition of a [NuScenes box](https://github.com/nutonomy/nuscenes-devkit/blob/2c6a752319f23910d5f55cc995abc547a9e54142/python-sdk/nuscenes/utils/data_classes.py#L457) and the process of [NuScenes evaluation](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/eval/detection/evaluate.py).
@@ -188,25 +188,25 @@ The raw data of SUN RGB-D are not point clouds but RGB-D images. Via back projection, we
First, for points and box centers, the coordinates before and after the conversion satisfy the following relationship:
- `` $$`x_{LiDAR}=z_{camera}`$$ ``
- `` $$`y_{LiDAR}=-x_{camera}`$$ ``
- `` $$`z_{LiDAR}=-y_{camera}`$$ ``
- $x\_{LiDAR}=z\_{camera}$
- $y\_{LiDAR}=-x\_{camera}$
- $z\_{LiDAR}=-y\_{camera}$
Then, the box dimensions before and after the conversion satisfy the following relationship:
- `` $$`dx_{LiDAR}=dx_{camera}`$$ ``
- `` $$`dy_{LiDAR}=dz_{camera}`$$ ``
- `` $$`dz_{LiDAR}=dy_{camera}`$$ ``
- $dx\_{LiDAR}=dx\_{camera}$
- $dy\_{LiDAR}=dz\_{camera}$
- $dz\_{LiDAR}=dy\_{camera}$
Finally, the yaw angle should also be converted:
- `` $$`r_{LiDAR}=-\frac{\pi}{2}-r_{camera}`$$ ``
- $r\_{LiDAR}=-\frac{\pi}{2}-r\_{camera}$
See the code [here](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/core/bbox/structures/box_3d_mode.py) for more details.
### Bird's Eye View
If the 3D box is `` $$`(x, y, z, dx, dy, dz, r)`$$ ``, the BEV of the box under the camera coordinate system is `` $$`(x, z, dx, dz, -r)`$$ ``. The sign of the yaw angle is inverted because the positive direction of the gravity axis of the camera coordinate system points to the ground.
If the 3D box is $(x, y, z, dx, dy, dz, r)$, the BEV of the box under the camera coordinate system is $(x, z, dx, dz, -r)$. The sign of the yaw angle is inverted because the positive direction of the gravity axis of the camera coordinate system points to the ground.
See the code [here](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/core/bbox/structures/cam_box3d.py) for more details.
@@ -228,18 +228,18 @@ The raw data of SUN RGB-D are not point clouds but RGB-D images. Via back projection, we
No. For example, in KITTI, we need a calibration matrix when converting from the camera coordinate system to the LiDAR coordinate system.
#### Q3: How does a phase difference of `` $$`2\pi`$$ `` in the yaw angle of a box affect evaluation?
#### Q3: How does a phase difference of $2\pi$ in the yaw angle of a box affect evaluation?
For IoU calculation, two boxes whose yaw angles differ by a phase of `` $$`2\pi`$$ `` are identical, so evaluation is not affected.
For IoU calculation, two boxes whose yaw angles differ by a phase of $2\pi$ are identical, so evaluation is not affected.
For angle prediction evaluation, such as the NDS metric in NuScenes and the AOS metric in KITTI, the angles of predicted boxes are standardized first, so a phase difference of `` $$`2\pi`$$ `` will not change the result.
For angle prediction evaluation, such as the NDS metric in NuScenes and the AOS metric in KITTI, the angles of predicted boxes are standardized first, so a phase difference of $2\pi$ will not change the result.
#### Q4: How does a phase difference of `` $$`\pi`$$ `` in the yaw angle of a box affect evaluation?
#### Q4: How does a phase difference of $\pi$ in the yaw angle of a box affect evaluation?
For IoU calculation, two boxes whose yaw angles differ by a phase of `` $$`\pi`$$ `` are identical, so evaluation is not affected.
For IoU calculation, two boxes whose yaw angles differ by a phase of $\pi$ are identical, so evaluation is not affected.
However, for angle prediction evaluation, this will result in the exact opposite direction.
Consider a car: the yaw angle is the angle between the direction of the car front and the positive direction of the x-axis. If we add `` $$`\pi`$$ `` to this angle, the car front becomes the car rear.
Consider a car: the yaw angle is the angle between the direction of the car front and the positive direction of the x-axis. If we add $\pi$ to this angle, the car front becomes the car rear.
For some categories, such as barriers, the front and the rear have no difference, so a phase difference of `` $$`\pi`$$ `` will not affect the angle prediction score.
For some categories, such as barriers, the front and the rear have no difference, so a phase difference of $\pi$ will not affect the angle prediction score.