If Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to .cache/torch/checkpoints/resnet50-19c8e357.pth fails with an error, download resnet50-19c8e357.pth ahead of time and copy it to .cache/torch/checkpoints/.
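If the training machine cannot reach download.pytorch.org, a minimal sketch of pre-fetching the checkpoint into the torch cache (assuming the default ~/.cache/torch/checkpoints location; adjust the path if your setup differs):

```python
# Pre-download the pretrained checkpoint so an offline run finds it in the cache.
import os
import urllib.request

url = "https://download.pytorch.org/models/resnet50-19c8e357.pth"
cache_dir = os.path.expanduser("~/.cache/torch/checkpoints")
os.makedirs(cache_dir, exist_ok=True)
dst = os.path.join(cache_dir, "resnet50-19c8e357.pth")
if not os.path.exists(dst):
    urllib.request.urlretrieve(url, dst)
```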
Train on the COCO 2017 train set; compute mAP on the COCO 2017 val set.
# 4. Model
### Publication/Attribution
Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector. In the Proceedings of the European Conference on Computer Vision (ECCV), 2016.
Backbone is ResNet34 pretrained on ILSVRC 2012 (from torchvision). Modifications to the backbone network: remove the conv_5x residual blocks, change the first 3x3 convolution of the conv_4x block from stride 2 to stride 1 (this increases the resolution of the feature map to which detector heads are attached), and attach all 6 detector heads to the output of the last conv_4x residual block. Thus detector heads are attached to 38x38, 19x19, 10x10, 5x5, 3x3, and 1x1 feature maps.
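A minimal sketch of this backbone surgery on torchvision's ResNet-34 (not the reference training code; layer names follow torchvision, where conv_4x is `layer3` and conv_5x is `layer4`):

```python
# Drop conv_5x (layer4) and change the stride-2 convolution at the start of
# conv_4x (layer3) to stride 1, keeping a 38x38 feature map for a 300x300 input.
import torch
import torch.nn as nn
import torchvision

resnet = torchvision.models.resnet34(pretrained=True)  # newer torchvision uses the `weights=` argument

# conv_4x is `layer3` in torchvision's naming; make its first block stride 1.
resnet.layer3[0].conv1.stride = (1, 1)
resnet.layer3[0].downsample[0].stride = (1, 1)

# Keep everything up to and including conv_4x; conv_5x (layer4) is removed.
backbone = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)

feat = backbone(torch.randn(1, 3, 300, 300))
print(feat.shape)  # expected: torch.Size([1, 256, 38, 38])
```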
<ahref="https://github.com/ultralytics/yolov5/actions"><imgsrc="https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg"alt="CI CPU testing"></a>
<ahref="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><imgsrc="https://colab.research.google.com/assets/colab-badge.svg"alt="Open In Colab"></a>
<ahref="https://www.kaggle.com/ultralytics/yolov5"><imgsrc="https://kaggle.com/static/images/open-in-kaggle.svg"alt="Open In Kaggle"></a>
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <ahref="https://ultralytics.com">Ultralytics</a>
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
|Automatically track and visualize all your YOLOv5 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) |
<!-- ## <div align="center">Compete and Win</div>
We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
***COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
***GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
***EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
***Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters.
***mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
***Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
***TTA**[Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
<ahref="https://github.com/ultralytics/yolov5/actions"><imgsrc="https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg"alt="CI CPU testing"></a>
<ahref="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><imgsrc="https://colab.research.google.com/assets/colab-badge.svg"alt="Open In Colab"></a>
<ahref="https://www.kaggle.com/ultralytics/yolov5"><imgsrc="https://kaggle.com/static/images/open-in-kaggle.svg"alt="Open In Kaggle"></a>
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <a href="https://ultralytics.com">Ultralytics</a>
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
- Automatically track and visualize all your YOLOv5 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)
- Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics)
* **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
* **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch size 32.
* **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters.
* **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
* **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">Contribute</div>
We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zzh8829/yolov3-tf2/blob/master/colab_gpu.ipynb)
This repo provides a clean implementation of YoloV3 in TensorFlow 2.0 using all the best practices.
## Key Features
- [x] TensorFlow 2.0
- [x] `yolov3` with pre-trained Weights
- [x] `yolov3-tiny` with pre-trained Weights
- [x] Inference example
- [x] Transfer learning example
- [x] Eager mode training with `tf.GradientTape`
- [x] Graph mode training with `model.fit`
- [x] Functional model with `tf.keras.layers`
- [x] Input pipeline using `tf.data`
- [x] Tensorflow Serving
- [x] Vectorized transformations
- [x] GPU accelerated
- [x] Fully integrated with `absl-py` from [abseil.io](https://abseil.io)
I have created a complete tutorial on how to train from scratch using the VOC2012 Dataset.
See the documentation here https://github.com/zzh8829/yolov3-tf2/blob/master/docs/training_voc.md
For customized training, you need to generate tfrecords following the TensorFlow Object Detection API.
For example you can use [Microsoft VoTT](https://github.com/Microsoft/VoTT) to generate such a dataset.
You can also use this [script](https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pascal_tf_record.py) to create the Pascal VOC dataset.
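As a concrete illustration of the tfrecord format mentioned above, here is a minimal sketch that writes one example using the TF Object Detection API style keys (file names, the class id, and the box values are placeholders; your pipeline may require additional keys such as image size or filename):

```python
import tensorflow as tf

def bytes_feature(v):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))

def float_list_feature(v):
    return tf.train.Feature(float_list=tf.train.FloatList(value=v))

def int64_list_feature(v):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

with open("dog.jpg", "rb") as f:          # placeholder image
    encoded_jpg = f.read()

# One image with a single "dog" box; coordinates are already normalized to [0, 1].
example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": bytes_feature(encoded_jpg),
    "image/format": bytes_feature(b"jpeg"),
    "image/object/bbox/xmin": float_list_feature([0.1]),
    "image/object/bbox/ymin": float_list_feature([0.2]),
    "image/object/bbox/xmax": float_list_feature([0.6]),
    "image/object/bbox/ymax": float_list_feature([0.8]),
    "image/object/class/text": bytes_feature(b"dog"),
    "image/object/class/label": int64_list_feature([12]),
}))

with tf.io.TFRecordWriter("custom_train.tfrecord") as writer:
    writer.write(example.SerializeToString())
```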
You can inspect the serving signature of the exported model with:
```
saved_model_cli show --dir serving/yolov3/1/ --tag_set serve --signature_def serving_default
```
The inputs are preprocessed images (see `dataset.transform_images`); the outputs are:
```
yolo_nms_0: bounding boxes
yolo_nms_1: scores
yolo_nms_2: classes
yolo_nms_3: numbers of valid detections
```
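A minimal sketch of calling the exported model directly from Python (the image path is a placeholder, preprocessing is approximated, and the output key names are taken from the listing above; verify the exact names with `saved_model_cli` if they differ):

```python
import tensorflow as tf

model = tf.saved_model.load("serving/yolov3/1/")
infer = model.signatures["serving_default"]
print(infer.structured_outputs)                          # should list yolo_nms_0 ... yolo_nms_3

raw = tf.io.read_file("street.jpg")                      # placeholder image path
img = tf.image.decode_jpeg(raw, channels=3)
img = tf.image.resize(img, (416, 416)) / 255.0           # roughly what transform_images does
outputs = infer(tf.expand_dims(img, 0))

boxes   = outputs["yolo_nms_0"]   # bounding boxes
scores  = outputs["yolo_nms_1"]   # scores
classes = outputs["yolo_nms_2"]   # classes
valid   = outputs["yolo_nms_3"]   # number of valid detections
```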
## Benchmark (No Training Yet)
Numbers are obtained with rough calculations from `detect_video.py`
### Macbook Pro 13 (2.7GHz i5)
| Detection | 416x416 | 320x320 | 608x608 |
|-------------|---------|---------|---------|
| YoloV3 | 1000ms | 500ms | 1546ms |
| YoloV3-Tiny | 100ms | 58ms | 208ms |
### Desktop PC (GTX 970)
| Detection | 416x416 | 320x320 | 608x608 |
|-------------|---------|---------|---------|
| YoloV3 | 74ms | 57ms | 129ms |
| YoloV3-Tiny | 18ms | 15ms | 28ms |
### AWS g3.4xlarge (Tesla M60)
| Detection | 416x416 | 320x320 | 608x608 |
|-------------|---------|---------|---------|
| YoloV3 | 66ms | 50ms | 123ms |
| YoloV3-Tiny | 15ms | 10ms | 24ms |
### RTX 2070 (credit to @AnaRhisT94)
| Detection | 416x416 |
|-------------|---------|
| YoloV3 predict_on_batch | 29-32ms |
| YoloV3 predict_on_batch + TensorRT | 22-28ms |
The Darknet version of YoloV3 at 416x416 takes 29ms on a Titan X. Considering that a Titan X benchmarks at roughly double a Tesla M60, this implementation is pretty comparable performance-wise.
## Implementation Details
### Eager execution
A great addition for existing TensorFlow experts, but not very easy to use without some intermediate understanding of TensorFlow graphs. It is annoying when you accidentally use incompatible features like `tensor.shape[0]` or some sort of Python control flow that works fine in eager mode but totally breaks down when you try to compile the model to a graph.
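For example, a small illustration of the pitfall (not repo code): `tensor.shape[0]` becomes a static `None` once the batch dimension is dynamic, while `tf.shape` keeps working in graph mode.

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
def describe(x):
    # x.shape[0] is a static Python value: None here, because the batch
    # dimension is unknown at trace time.
    tf.print("static batch dim :", str(x.shape[0]))
    # tf.shape(x)[0] is a tensor evaluated at runtime, so it is always correct.
    tf.print("dynamic batch dim:", tf.shape(x)[0])

describe(tf.zeros([4, 3]))   # static: None, dynamic: 4
describe(tf.zeros([7, 3]))   # static: None, dynamic: 7
```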
### model(x) vs. model.predict(x)
When calling model(x) directly, we are executing the graph in eager mode. For
`model.predict`, TF actually compiles the graph on the first run and then
executes in graph mode. So if you are only running the model once, `model(x)` is
faster since there is no compilation needed. Otherwise, `model.predict` or
using the exported SavedModel graph is much faster (by about 2x). For non real-time usage,
`model.predict_on_batch` is even faster, as tested by @AnaRhisT94.
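A quick way to see this yourself (a toy model, not the YoloV3 graph; absolute timings will vary):

```python
import time
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(64), tf.keras.layers.Dense(10)])
x = tf.random.normal([32, 128])

t0 = time.time(); model(x); print("eager call    :", time.time() - t0)
t0 = time.time(); model.predict(x, verbose=0); print("first predict :", time.time() - t0)  # includes tracing
t0 = time.time(); model.predict(x, verbose=0); print("second predict:", time.time() - t0)  # reuses the graph
```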
### GradientTape
Extremely useful for debugging purposes; you can set breakpoints anywhere.
You can also make all the Keras fitting functionality run eagerly, like a GradientTape loop, using the
`run_eagerly` argument in model.compile. From my limited testing, all training methods,
including GradientTape and keras.fit, eager or not, yield similar performance. But graph
mode is still preferred since it's a tiny bit more efficient.
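A minimal sketch (toy model and data, not this repo's training script) of the two options side by side:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x, y = tf.random.normal([64, 4]), tf.random.normal([64, 1])

# Option 1: keras.fit, forced into eager mode so breakpoints inside the train step work.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(x, y, epochs=1, verbose=0)

# Option 2: an explicit GradientTape step, fully debuggable.
opt = tf.keras.optimizers.Adam()
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
```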
### @tf.function
@tf.function is very cool. It's like an in-between version of eager and graph.
You can step through the function by disabling tf.function, and then gain
performance when you enable it in production. An important note: you should not
pass varying non-tensor parameters to a @tf.function, since each new value causes re-compilation.
I am not sure what the best way around this is other than using globals.
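A small demonstration of the retracing behaviour (illustrative only, not repo code):

```python
import tensorflow as tf

@tf.function
def scale(x, factor):
    print("tracing with factor =", factor)   # Python print runs only while tracing
    return x * factor

x = tf.constant([1.0, 2.0])
scale(x, 2)                 # traces
scale(x, 3)                 # traces again: new Python value, new graph
scale(x, tf.constant(2.0))  # traces once for a tensor argument
scale(x, tf.constant(3.0))  # reuses the tensor-argument graph, no new trace
```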
### absl.py (abseil)
Absolutely amazing. If you don't know it already, absl.py is officially used by
internal projects at Google. It standardizes the application interface for Python
and many other languages. After using it within Google, I was so excited
to hear abseil was going open source. It includes many decades of best practices
learned from building large, scalable applications. I literally have
nothing bad to say about it; I strongly recommend absl.py to everybody.
### Loading pre-trained Darknet weights
This is very hard with the pure functional API because the layer ordering is different in
tf.keras and Darknet. The clean solution here is creating sub-models in Keras.
Keras is not able to save nested models in h5 format properly, so TF Checkpoint is
recommended since it's officially supported by TensorFlow.
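A minimal sketch (toy nested model and hypothetical paths, not the repo's YoloV3 definition) of saving weights in the TF checkpoint format, which handles sub-models:

```python
import os
import tensorflow as tf

inner = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu")],
                            name="darknet_like_submodel")
inputs = tf.keras.Input([8])
outputs = tf.keras.layers.Dense(2)(inner(inputs))
model = tf.keras.Model(inputs, outputs)     # a model containing a sub-model

os.makedirs("checkpoints", exist_ok=True)
model.save_weights("./checkpoints/demo.tf")  # TF checkpoint format (.index + .data files)
model.load_weights("./checkpoints/demo.tf")
```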
### tf.keras.layers.BatchNormalization
It doesn't work very well for transfer learning. There are many articles and
GitHub issues all over the internet. I used a simple hack to make it behave better
for transfer learning with small batches.
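One common workaround, and a sketch of the kind of hack meant here (not necessarily the exact code in this repo): a BatchNormalization subclass that stays in inference mode whenever the layer is frozen, so the frozen statistics are not disturbed by tiny transfer-learning batches.

```python
import tensorflow as tf

class FrozenAwareBatchNorm(tf.keras.layers.BatchNormalization):
    def call(self, x, training=False):
        if training is None:
            training = tf.constant(False)
        # Only behave as "training" if the layer itself is trainable.
        training = tf.logical_and(training, tf.constant(self.trainable))
        return super().call(x, training=training)
```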
### What is the output of transform_targets ???
I know it's very confusing, but the output is a tuple of shapes
```
(
[N, 13, 13, 3, 6],
[N, 26, 26, 3, 6],
[N, 52, 52, 3, 6]
)
```
where N is the number of labels in the batch and the last dimension "6" represents
`[x, y, w, h, obj, class]` of the bounding boxes.
### IOU and Score Threshold
The default threshold is 0.5 for both IOU and score. You can adjust them
to your needs by setting the `--yolo_iou_threshold` and
`--yolo_score_threshold` flags.
### Maximum number of boxes
By default there can be at most 100 bounding boxes per image;
if for some reason you would like more boxes, you can use the `--yolo_max_boxes` flag.
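For reference, a sketch of how such thresholds are typically defined and overridden as absl flags (illustrative only; the real definitions live in the repo's code):

```python
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_float("yolo_iou_threshold", 0.5, "IOU threshold for NMS")
flags.DEFINE_float("yolo_score_threshold", 0.5, "score threshold for NMS")
flags.DEFINE_integer("yolo_max_boxes", 100, "maximum number of boxes per image")

def main(_argv):
    print("iou:", FLAGS.yolo_iou_threshold,
          "score:", FLAGS.yolo_score_threshold,
          "max boxes:", FLAGS.yolo_max_boxes)

if __name__ == "__main__":
    app.run(main)
# e.g. python thresholds_demo.py --yolo_iou_threshold 0.6 --yolo_max_boxes 200
```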
### NAN Loss / Training Failed / Doesn't Converge
Many people, including me, have succeeded in training, so the code definitely works.
@LongxingTan in https://github.com/zzh8829/yolov3-tf2/issues/128 provided some of his insights, summarized here:
1. For NaN loss, try making the learning rate smaller.
2. Double-check the format of your input data. Data labelled by VoTT and labelImg differs, so make sure the input boxes are right, and check carefully that the format is `x1/width,y1/height,x2/width,y2/height` and **NOT** `x1,y1,x2,y2` or `x,y,w,h` (see the sketch after this list).
Make sure to visualize your custom dataset using this tool.
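As a sanity check for point 2 above, a tiny helper (illustration only, not repo code) that converts absolute-pixel corner boxes into the expected normalized format:

```python
# Convert an absolute-pixel box (x1, y1, x2, y2) into
# x1/width, y1/height, x2/width, y2/height.
def normalize_box(x1, y1, x2, y2, img_width, img_height):
    return (x1 / img_width, y1 / img_height, x2 / img_width, y2 / img_height)

# Example: a 100x200 pixel box at (50, 40) inside a 416x416 image.
print(normalize_box(50, 40, 150, 240, 416, 416))
# -> (0.1201..., 0.0961..., 0.3605..., 0.5769...)
```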
Relevant training flags:
```
--transfer: <none|darknet|no_output|frozen|fine_tune>: none: Training from scratch, darknet: Transfer darknet, no_output: Transfer all but output, frozen: Transfer and freeze all,
    fine_tune: Transfer all and freeze darknet only
    (default: 'none')
--val_dataset: path to validation dataset
    (default: '')
--weights: path to weights file
    (default: './checkpoints/yolov3.tf')
```
## Change Log
#### October 1, 2019
- Updated to TensorFlow v2.0.0 release
## References
It is pretty much impossible to implement this from the yolov3 paper alone. I had to reference the official (very hard to understand) and many unofficial (many minor errors) repos to piece together the complete picture.