Commit c4cc2da4 authored by Rayyyyy's avatar Rayyyyy
Add icon and scnet.

parent 0a0fe244
# ViT
## Paper
`An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
## Model Structure
The Vision Transformer first splits the image into patches with a convolution to reduce computation, flattens each patch into a token, prepends a cls token and adds positional embeddings, feeds the resulting sequence through a stack of Transformer layers to extract features, and finally passes the cls token through an MLP (multi-layer perceptron) head for classification.
![img](https://developer.hpccube.com/codes/modelzoo/megatron-deepspeed-vit_pytorch/-/raw/main/doc/vit.png)
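The tokenization step above can be sketched with plain NumPy. This is an illustrative stand-in, not the repository's code: random arrays replace the learned convolutional projection and positional embeddings, and the ViT-B/16 sizes (224×224 input, 16×16 patches, embedding dim 768) are assumed.

```python
import numpy as np

# Minimal sketch of ViT-B/16 tokenization (illustrative; random weights
# stand in for the learned projection and positional embeddings).
img = np.random.rand(224, 224, 3)
P, D = 16, 768                                   # patch size, embedding dim

# Split 224x224 into 14x14 = 196 patches of 16x16x3, flatten each patch.
patches = img.reshape(224 // P, P, 224 // P, P, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * 3)         # (196, 768)

W = np.random.rand(P * P * 3, D)                 # stand-in for the conv projection
tokens = patches @ W                             # (196, 768)
cls = np.zeros((1, D))
tokens = np.concatenate([cls, tokens], axis=0)   # prepend cls token -> (197, 768)
tokens = tokens + np.random.rand(197, D)         # add positional embedding
print(tokens.shape)
```

The sequence length 197 = 196 patch tokens + 1 cls token is what the Transformer layers then consume.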
![img](https://developer.hpccube.com/codes/modelzoo/megatron-deepspeed-vit_pytorch/-/raw/main/doc/attention.png)
## Environment Setup
### Docker (Method 1)
```plaintext
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-22.10.1-py37-latest
# Replace <your IMAGE ID> with the image ID of the docker image pulled above
pip install -r requirements.txt
```
### Dockerfile (Method 2)
```plaintext
cd ViT-PyTorch/docker
docker build --no-cache -t ViT-PyTorch:latest .
docker run --rm --shm-size 10g --network=host --name=megatron --privileged --dev
```
### Anaconda (Method 3)
1. The DCU-specific deep learning libraries required by this project can be downloaded from the HPC developer community: https://developer.hpccube.com/tool/
```plaintext
pip install -r requirements.txt
```
## Dataset
cifar10
Link: https://pan.baidu.com/s/1ZFMQVBGQZI6UWZKJcTYPAQ?pwd=fq3l (extraction code: fq3l)
[cifar10](http://113.200.138.88:18080/aidatasets/project-dependency/cifar)
```
├── batches.meta
├── ...
```
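Each `data_batch_*` file in the listing above uses the standard CIFAR-10 python format: a pickled dict whose `b'data'` entry holds 10000 row-major pixel vectors of length 3072. A hedged sketch of reading one batch (a synthetic in-memory batch stands in for the real file so the snippet is self-contained):

```python
import pickle
import numpy as np

# Synthetic stand-in for a CIFAR-10 python batch file (same dict layout).
batch = {b"data": np.zeros((10000, 3072), dtype=np.uint8),
         b"labels": [0] * 10000}
blob = pickle.dumps(batch)          # on disk you would open() the batch file instead

loaded = pickle.loads(blob)
# Each 3072-vector unpacks channel-first into a 3x32x32 RGB image.
images = loaded[b"data"].reshape(-1, 3, 32, 32)
print(images.shape)
```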
## Training
Download the pretrained model and place it in the checkpoint directory:
```
wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz
```
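The downloaded `.npz` checkpoint is a NumPy archive of named weight arrays (key names such as `cls` and `embedding/kernel` follow the reference implementation and are assumptions here). A tiny synthetic archive stands in for the real download so this inspection sketch runs anywhere:

```python
import io
import numpy as np

# Synthetic stand-in for ViT-B_16.npz; the real file holds many more arrays.
buf = io.BytesIO()
np.savez(buf, **{"cls": np.zeros((1, 1, 768)),
                 "embedding/kernel": np.zeros((16, 16, 3, 768))})
buf.seek(0)

# np.load on an .npz returns an NpzFile; .files lists the stored array names.
ckpt = np.load(buf)
print(sorted(ckpt.files))
```

Listing `ckpt.files` on the real checkpoint is a quick way to verify the download before training.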
### Single machine, single card
```
export HIP_VISIBLE_DEVICES=0
python3 -m torch.distributed.launch --nproc_per_node=1 train.py --name cifar10-100_500 --dataset cifar10 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --train_batch_size 64 --num_steps 500
```
### Single machine, multiple cards
```
python3 -m torch.distributed.launch --nproc_per_node=8 train.py --name cifar10-100_500 --dataset cifar10 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --train_batch_size 64 --num_steps 500
```
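Since `torch.distributed.launch` starts one `train.py` process per card, and assuming `--train_batch_size` is the per-process batch (as the identical flag in the single-card command suggests), the effective global batch grows with the card count. A quick sanity check:

```python
# Assumption: --train_batch_size is per process; each launched process
# handles its own shard of the data.
nproc_per_node = 8        # cards, from --nproc_per_node
train_batch_size = 64     # per-process batch, from --train_batch_size

global_batch = nproc_per_node * train_batch_size
print(global_batch)
```

Keep this in mind when comparing accuracy across card counts: the learning-rate schedule may need adjusting as the global batch changes.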
## Result
![1695381570003](image/README/1695381570003.png)
## Accuracy
Testing used the cifar10 dataset on a DCU Z100L accelerator card.
| Cards | Accuracy |
| :------: | :------: |
| 1 | Best Accuracy = 0.3051 |
## Application Scenarios
### Algorithm Category
Image classification
### Hot Industries
Manufacturing, energy, transportation, cybersecurity
### Source Repository and Issue Feedback
- https://developer.hpccube.com/codes/modelzoo/vit-pytorch
### Reference
- https://github.com/jeonsworld/ViT-pytorch