ModelZoo / Megatron-DeepSpeed-ViT_pytorch · Commits

Commit 67a0c14f, authored Aug 18, 2023 by chenzk
Commit message: v1.2
Parent: fcf05766
Pipeline #513: canceled with stage

Showing 1 changed file with 6 additions and 4 deletions.

README.md (+6, -4)
# ViT
## Paper
`An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
## Model Structure
The Vision Transformer first splits the image into patches with a convolution to reduce the amount of computation, flattens each patch into a token so the image becomes a sequence, adds positional encodings and a cls token to that sequence, feeds it through a stack of Transformer layers to extract features, and finally takes the cls token and passes it through an MLP (multi-layer perceptron) head for classification.
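For illustration, here is a minimal PyTorch sketch of the forward pass described above. It is not the code used in this repository (which builds ViT on Megatron-DeepSpeed); `TinyViT` and all sizes below are assumed toy values.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Toy ViT following the description above (assumed sizes, not this repo's config)."""
    def __init__(self, image_size=224, patch_size=16, dim=192, depth=4,
                 heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Strided convolution splits the image into patches and projects each
        # patch to a `dim`-dimensional token.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable cls token and positional embedding.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # Stack of Transformer encoder layers (attention + feed-forward).
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Classification head applied to the cls token.
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                       # x: (B, 3, H, W)
        x = self.patch_embed(x)                 # (B, dim, H/patch, W/patch)
        x = x.flatten(2).transpose(1, 2)        # flatten patches -> (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                     # Transformer feature extraction
        return self.head(x[:, 0])               # classify from the cls token

logits = TinyViT()(torch.randn(2, 3, 224, 224))  # -> shape (2, 1000)
```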

## Algorithm Principle
As described above, the Vision Transformer flattens image patches into a token sequence (with positional encodings and a cls token), processes the sequence with stacked Transformer layers, and classifies from the cls token through an MLP head. For images, this borrows the encoder structure from the paper `Attention Is All You Need`; the core idea of the Transformer is to extract features with the attention module:
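For reference, the scaled dot-product attention used by the Transformer encoder, as defined in `Attention Is All You Need`, is:

```math
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

where Q, K, and V are the query, key, and value projections of the token sequence and d_k is the per-head key dimension.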

## Environment Configuration
### Docker
...
...
@@ -143,6 +143,8 @@ sbatch examples/vit_mpi.sh
`Manufacturing, Environment, Healthcare, Meteorology`
### Algorithm Framework
`pytorch`
## Source Repository and Issue Feedback
- https://developer.hpccube.com/codes/modelzoo/megatron-deepspeed-vit_pytorch
## References
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
...
...