# ViT
## Paper
`An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
## Model Architecture
Vision Transformer first splits the image into patches with a convolution to reduce computation, flattens each patch into a token sequence, adds positional encodings and a cls token, feeds the sequence through a stack of Transformer layers to extract features, and finally passes the cls token through an MLP (multi-layer perceptron) head for classification.
![img](./images/vit.png)
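The pipeline above can be sketched in PyTorch as follows. This is a minimal illustration, not the repository's implementation; the hyperparameters (patch size 16, embedding dim 768, 12 heads, 1000 classes) are assumptions taken from the standard ViT-Base configuration.

```python
import torch
import torch.nn as nn

class ViTSketch(nn.Module):
    """Minimal ViT sketch: patch embed -> cls token + pos embed -> encoder -> MLP head."""
    def __init__(self, img_size=224, patch_size=16, dim=768,
                 depth=2, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding as a strided convolution: one "word" per 16x16 patch.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)  # classification head

    def forward(self, x):
        x = self.patch_embed(x)                  # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)         # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                # classify from the cls token

logits = ViTSketch()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```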
## Algorithm
For images, ViT borrows the Encoder structure from the paper "Attention Is All You Need". The core idea of the Transformer is to extract features with the attention module:
![img](./images/attention.png)
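The attention module shown above computes scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A small sketch (shapes chosen only for illustration):

```python
import math
import torch

def attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (B, seq, seq)
    return torch.softmax(scores, dim=-1) @ v           # (B, seq, d_k)

q = k = v = torch.randn(2, 4, 8)   # (batch, seq_len, d_k)
out = attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 8])
```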
## Environment Setup
### Docker
`sbatch examples/vit_mpi.sh`
`Manufacturing, Environment, Healthcare, Meteorology`
### Framework
`pytorch`
## Source Repository & Issue Reporting
- https://developer.hpccube.com/codes/modelzoo/megatron-deepspeed-vit_pytorch
## References
- https://github.com/bigscience-workshop/Megatron-DeepSpeed