Commit cf008715 authored by dcuai

Merge branch 'master' into 'master'

update to dtk24.04.1

See merge request !2
parents 199584a7 75cd9ac8
[submodule "mmclassification-mmcv"] [submodule "mmpretrain-mmcv"]
path = mmclassification-mmcv path = mmpretrain-mmcv
url = http://developer.hpccube.com/codes/aicomponent/mmclassification-mmcv url = https://developer.hpccube.com/codes/OpenDAS/mmpretrain-mmcv
@@ -20,25 +20,39 @@ The overall structure of SEResNet50 consists of the base network and the Squeeze-and-Excitation (SE)
## Environment Setup
### Docker (Method 1)
Running in Docker is recommended. Pull the provided Docker image:
```shell
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
```
Create a container from the pulled image:
```shell
# Replace <your IMAGE ID or NAME> with the ID or name of the image pulled above
docker run -it --name=seresnet50_mmcv --network=host --ipc=host --shm-size=16g --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v /opt/hyhal:/opt/hyhal:ro <your IMAGE ID> bash
```
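Once inside the container, a quick sanity check can confirm that the runtime sees the DCU devices. This is an optional sketch; it assumes the image ships the DCU (DAS) build of PyTorch, which exposes the accelerators through torch's CUDA device interface.
```shell
# Run inside the container started above
python -c "import torch; print(torch.__version__)"
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"  # DCUs show up via the CUDA API in DAS builds
```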
Clone the repository and install it along with the required dependencies:
```shell
git clone --recursive http://developer.hpccube.com/codes/modelzoo/seresnet50_mmcv.git
cd seresnet50_mmcv/mmpretrain-mmcv
pip install -e .
pip install -r requirements.txt
```
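To confirm that the editable install worked, the package can be imported and the bundled SE-ResNet configs listed. A minimal sketch, assuming the package is importable as mmpretrain:
```shell
# Optional: verify the editable install and check that the SE-ResNet configs are present
python -c "import mmpretrain; print(mmpretrain.__version__)"
ls configs/seresnet/
```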
### Dockerfile (Method 2)
```bash
cd seresnet50_mmcv/docker
docker build --no-cache -t seresnet50_mmcv:latest .
docker run -it --name=seresnet50_mmcv --network=host --ipc=host --shm-size=16g --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v /opt/hyhal:/opt/hyhal:ro <your IMAGE ID> bash
pip install -e .
# If installing the environment through the Dockerfile takes too long, you can comment out the pip install inside it and install the Python packages after the container starts:
# pip install -r requirements.txt
```
### Anaconda (Method 3)
@@ -46,11 +60,11 @@ docker run --rm --shm-size 10g --network=host --name=seresnet50 --privileged --d
1. The special deep learning libraries required by the DCU cards for this project can be downloaded and installed from the Guanghe Developer Community: https://developer.hpccube.com/tool/
```plaintext
DTK driver: DTK-24.04.1
python==3.10
torch==2.1.0
torchvision==0.16.0+das1.1.git7d45932.abi1.dtk2404.torch2.1
mmcv==2.0.1+das1.1.gite58da25.abi1.dtk2404.torch2.1.0
Tips: the versions of the DTK driver, python, torch and the other DCU-related tools above must correspond strictly one-to-one
```
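Because the versions must match exactly, it is worth verifying what is actually installed after setting up the Anaconda environment. A sketch of the check, assuming the DAS wheels keep the standard package names:
```shell
# Compare the printed versions against the pinned list above
python -c "import torch, torchvision, mmcv; print(torch.__version__, torchvision.__version__, mmcv.__version__)"
```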
@@ -62,48 +76,117 @@ pip install -r requirements.txt
## Dataset
### ImageNet

The ImageNet dataset can be used with this project. Official ImageNet download page: https://image-net.org
It can also be downloaded quickly from SCNet: [imagenet-2012](http://113.200.138.88:18080/aidatasets/project-dependency/imagenet-2012). Download ILSVRC2012_img_train.tar and ILSVRC2012_img_val.tar from it and unpack them as follows:
```bash
cd mmpretrain-mmcv/data/imagenet
mkdir train && cd train
tar -xvf ILSVRC2012_img_train.tar
```
Unpacking yields 1000 tar files, one per class. Extract each tar into a folder of the same name, for example with the following shell script:
```bash
for tarfile in *.tar; do
    dirname="${tarfile%.tar}"
    mkdir "$dirname"
    tar -xvf "$tarfile" -C "$dirname"
done
```
The resulting directory structure is as follows:
```
data
└── imagenet
    ├── train
    │   ├── n01440764
    │   │   ├── n01440764_10026.JPEG
    │   │   ├── n01440764_10027.JPEG
    ├── val
    │   ├── n01440764
    │   │   ├── ILSVRC2012_val_00000293.JPEG
```
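The val images inside ILSVRC2012_img_val.tar are not grouped by class when unpacked, while the tree above assumes per-WNID folders. The sketch below shows one way to do the grouping; val_map.txt is a hypothetical mapping file with one `<image filename> <wnid>` pair per line and must be prepared separately (for example from the ILSVRC2012 devkit ground truth).
```shell
cd mmpretrain-mmcv/data/imagenet/val   # adjust to wherever ILSVRC2012_img_val.tar was extracted
# val_map.txt (hypothetical): "<image filename> <wnid>" per line
while read -r img wnid; do
    mkdir -p "$wnid"
    mv "$img" "$wnid/"
done < val_map.txt
```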
### Tiny-ImageNet-200
Since the full ImageNet dataset is large, [tiny-imagenet-200](http://cs231n.stanford.edu/tiny-imagenet-200.zip) can be used for testing instead; it can also be downloaded quickly from SCNet: [tiny-imagenet-200-scnet](http://113.200.138.88:18080/aidatasets/project-dependency/tiny-imagenet-200). In that case the configuration scripts need a few modifications:
- In the dataset config file (configs/\_\_base\_\_/datasets/{DATASET_CONFIG}.py), modify the following fields:
```python
# dataset settings
dataset_type = 'CustomDataset'  # change to CustomDataset
data_preprocessor = dict(
    num_classes=200,  # change the number of classes to 200
    ...
)
...
train_dataloader = dict(
    batch_size=32,
    num_workers=5,
    dataset=dict(
        type=dataset_type,
        data_root='data/imagenet',
        data_prefix='train',  # change to data_prefix='train'; do the same in val_dataloader
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
)
```
- In the model config file (configs/\_\_base\_\_/models/{MODEL_CONFIG}.py), the class-related values likewise need to be set to 200:
```python
# model settings
model = dict(
    type='ImageClassifier',
    ...
    head=dict(
        type='LinearClsHead',
        num_classes=200,  # change the number of classes to 200
        ...
    ))
```
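Before launching a run, the edited configuration can be loaded once to confirm the changes took effect. A sketch, assuming mmengine is installed (it is a dependency of mmpretrain) and using the seresnet50-test.py config referenced in the training section below:
```shell
python -c "from mmengine.config import Config; cfg = Config.fromfile('seresnet50-test.py'); print(cfg.model.head.num_classes, cfg.train_dataloader.dataset.type)"
```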
The mmpretrain-mmcv directory in this repository provides several configuration scripts for training on tiny-imagenet-200 that can be used as a reference.
## Training
Extract the training dataset and place it under mmpretrain-mmcv/data/. For tiny-imagenet, the directory structure is as follows:
```
data
└── imagenet
    ├── test/
    ├── train/
    ├── val/
    ├── wnids.txt
    └── words.txt
```
### Single-node 8-card training
- tiny-imagenet-200
```shell
bash tools/dist_train.sh seresnet50-test.py 8
```
- imagenet
```shell
bash tools/dist_train.sh configs/seresnet/seresnet50_8xb32_in1k.py 8
```
To train with a different number of cards, simply replace the 8 in the command with the desired number.
If the port is already in use, change the port in tools/dist_train.sh.
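After training, the resulting checkpoint can be evaluated with the matching test launcher. A sketch, assuming tools/dist_test.sh is present as in upstream mmpretrain; the checkpoint path under work_dirs/ is hypothetical and depends on your run:
```shell
bash tools/dist_test.sh configs/seresnet/seresnet50_8xb32_in1k.py work_dirs/seresnet50_8xb32_in1k/epoch_100.pth 8
```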
## Application Scenarios
@@ -113,7 +196,7 @@ data
### Key Industries
Manufacturing, energy, transportation, cybersecurity, security and surveillance
## Source Repository and Issue Feedback
@@ -121,4 +204,4 @@ https://developer.hpccube.com/codes/modelzoo/seresnet50_mmcv
## References
https://github.com/open-mmlab/mmpretrain
FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
ENV DEBIAN_FRONTEND=noninteractive
# Install pip dependencies
COPY requirements.txt requirements.txt
......
einops
importlib-metadata
mat4py
matplotlib
modelindex
numpy<2
rich
coverage
interrogate
pytest
albumentations>=0.3.2 --no-binary qudida,albumentations # For Albumentations data transform
grad-cam >= 1.3.7,<1.5.0 # For CAM visualization
requests # For torchserve
scikit-learn # For t-SNE visualization and unit tests.
mmpretrain-mmcv @ 12c02d09
Subproject commit 12c02d0917bcbfbac86f52b93f02ce87edb7835b