# YoloV7

## Model Introduction

YOLOv7 is a recent (2022) member of the YOLO family of object detection models, proposed in the paper [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696).

## Model Architecture

The YoloV7 network consists of three parts: input, backbone, and head. Unlike YOLOv5, the neck and head are collectively referred to as the head, although their functions are the same. Each part plays the same role as in YOLOv5: the backbone extracts features and the head makes predictions. YOLOv7 remains an anchor-based method. Architecturally, it adds E-ELAN layers and incorporates REP (re-parameterization) layers to simplify later deployment. During training, an auxiliary detection head (Aux_detect) is added alongside the main head to assist detection.
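The REP layer mentioned above merges its parallel convolution branches into a single 3x3 convolution at deploy time. A minimal NumPy sketch of the kernel-fusion idea (simplified: BatchNorm folding and the identity branch are omitted, and `fuse_rep_kernels` is an illustrative name, not part of this project):

```python
import numpy as np

def fuse_rep_kernels(k3x3, k1x1):
    """Fuse a 1x1 conv kernel into a 3x3 conv kernel by zero-padding
    the 1x1 weight to 3x3 and summing -- the core idea of
    re-parameterization: two parallel branches become one conv."""
    # k3x3: (out_ch, in_ch, 3, 3), k1x1: (out_ch, in_ch, 1, 1)
    padded = np.pad(k1x1, ((0, 0), (0, 0), (1, 1), (1, 1)))  # center the 1x1 weight
    return k3x3 + padded
```

Because convolution is linear in its kernel, applying the fused kernel is mathematically equivalent to summing the two branch outputs, so inference needs only one conv per REP block.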

## Build and Installation

An inference Docker image can be pulled from SourceFind (光源). The recommended image for the YoloV7 project is:

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:ort1.14.0_migraphx3.0.0-dtk22.10.1
```

### Install OpenCV Dependencies

```
cd <path_to_migraphx_samples>
sh ./3rdParty/InstallOpenCVDependences.sh
```

### Modify CMakeLists.txt

- If you are using Ubuntu, modify the dependency library path in CMakeLists.txt:
  change "${CMAKE_CURRENT_SOURCE_DIR}/depend/lib64/" to "${CMAKE_CURRENT_SOURCE_DIR}/depend/lib/"

- **MIGraphX 2.3.0 and later require C++17**


### Install OpenCV and Build the Project

```
rbuild build -d depend
```

### Set Environment Variables

Add the dependency libraries to the LD_LIBRARY_PATH environment variable by appending the following line to ~/.bashrc:

**CentOS**:

```
export LD_LIBRARY_PATH=<path_to_migraphx_samples>/depend/lib64/:$LD_LIBRARY_PATH
```

**Ubuntu**:

```
export LD_LIBRARY_PATH=<path_to_migraphx_samples>/depend/lib/:$LD_LIBRARY_PATH
```

Then run:

```
source ~/.bashrc
```

## Inference

### C++ Inference

After the YoloV7 project has been built successfully, run the sample from the build directory with:

```
./MIGraphX_Samples 0
```

When the program finishes, it writes the YoloV7 detection result image to the build directory.

<img src="./Resource/Images/Result.jpg" alt="Result" style="zoom:50%;" />

### Python Inference

The inference example for the YoloV7 model is YoloV7_infer_migraphx.py. Run it with the following commands:

```
# Enter the Python example directory
cd ./Python

# Install dependencies
pip install -r requirements.txt

# Run the program
python YoloV7_infer_migraphx.py \
	--imgpath <path to test image> \
	--modelpath <path to ONNX model> \
	--objectThreshold <objectness threshold, default 0.5> \
	--confThreshold <confidence threshold, default 0.25> \
	--nmsThreshold <NMS threshold, default 0.5>
```
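To illustrate how a confidence threshold and an NMS threshold are typically applied in YOLO post-processing, here is a minimal NumPy sketch of greedy non-maximum suppression. This is a generic illustration under assumed `[x1, y1, x2, y2]` box format, not the exact logic inside YoloV7_infer_migraphx.py:

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.25, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    keep_mask = scores >= conf_thresh          # drop low-confidence boxes first
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]             # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][iou <= iou_thresh]   # suppress heavily overlapping boxes
    return boxes[keep], scores[keep]
```

Raising `--nmsThreshold` keeps more overlapping boxes; lowering `--confThreshold` keeps more low-scoring detections.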

When the program finishes, it writes the YoloV7 detection result image to the current directory.

<img src="./Resource/Images/Result.jpg" alt="Result_2" style="zoom: 50%;" />

## Previous Versions

https://developer.hpccube.com/codes/modelzoo/yolov7_migraphx

## References

https://github.com/WongKinYiu/yolov7