Commit 03716b4a, authored Mar 15, 2022 by Le Hou; committed by A. Unique TensorFlower, Mar 15, 2022

Minor documentation update

PiperOrigin-RevId: 434836886
Parent: f1bbc215
Showing 1 changed file with 17 additions and 1 deletion:

official/projects/token_dropping/README.md (+17 −1)
# Token Dropping for Efficient BERT Pretraining

This is the official implementation of the token dropping method
[Pang et al. Token Dropping for Efficient BERT Pretraining. ACL 2022](#reference).
Token dropping aims to accelerate the pretraining of transformer models such
as BERT without degrading their performance on downstream tasks. In
particular, we drop unimportant tokens starting from an intermediate layer in
the model, to make the model focus on important tokens more efficiently with
its
...
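The diff collapses the middle of the README here. As a rough illustration of
the mechanism the intro describes, below is a minimal TensorFlow sketch, not
the repository's implementation: the function names (`drop_tokens`,
`merge_tokens`) and the `keep_ratio` parameter are made up for this example,
and the per-token importance scores are taken as an input rather than derived
from the masked-LM loss as in the paper.

```python
import tensorflow as tf


def drop_tokens(hidden_states, importance_scores, keep_ratio=0.5):
  """Keeps only the highest-scoring tokens for the later encoder layers.

  hidden_states: [batch, seq_len, hidden] output of an intermediate layer.
  importance_scores: [batch, seq_len] per-token importance; the paper derives
    importance from the masked-LM loss, but here it is simply an input.
  """
  seq_len = tf.shape(hidden_states)[1]
  num_keep = tf.cast(tf.cast(seq_len, tf.float32) * keep_ratio, tf.int32)
  # Pick the top-scoring positions per example, then restore sequence order.
  _, keep_indices = tf.math.top_k(importance_scores, k=num_keep)
  keep_indices = tf.sort(keep_indices, axis=-1)
  kept_states = tf.gather(hidden_states, keep_indices, batch_dims=1)
  return kept_states, keep_indices


def merge_tokens(full_states, processed_states, keep_indices):
  """Scatters processed important-token states back into the full sequence,
  so that the final layer(s) again see every token position."""
  batch = tf.shape(full_states)[0]
  num_keep = tf.shape(keep_indices)[1]
  batch_idx = tf.broadcast_to(tf.range(batch)[:, None], [batch, num_keep])
  scatter_idx = tf.stack([batch_idx, keep_indices], axis=-1)
  return tf.tensor_scatter_nd_update(full_states, scatter_idx,
                                     processed_states)


# Example: drop half the tokens, run the expensive layers on the remainder.
states = tf.random.normal([8, 128, 768])
scores = tf.random.uniform([8, 128])
kept, idx = drop_tokens(states, scores)   # kept: [8, 64, 768]
merged = merge_tokens(states, kept, idx)  # merged: [8, 128, 768]
```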
@@ -86,3 +89,16 @@ modeling library:
* [train.py](https://github.com/tensorflow/models/blob/master/official/projects/token_dropping/train.py)
  is the program entry point.
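As a usage sketch, assuming this `train.py` follows the standard TensorFlow
Model Garden entry-point flags (`--experiment`, `--mode`, `--model_dir`): the
experiment name below is a hypothetical placeholder, so check the project's
experiment registry for the real one.

```shell
# Hypothetical launch command; the flags assume the common Model Garden
# entry point, and "token_dropping/bert_pretraining" is a placeholder name.
python3 official/projects/token_dropping/train.py \
  --experiment=token_dropping/bert_pretraining \
  --mode=train \
  --model_dir=/tmp/token_dropping
```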
## Reference

Please cite our paper:

```
@inproceedings{pang2022,
  title={Token Dropping for Efficient BERT Pretraining},
  author={Richard Yuanzhe Pang* and Le Hou* and Tianyi Zhou and Yuexin Wu and Xinying Song and Xiaodan Song and Denny Zhou},
  year={2022},
  organization={Association for Computational Linguistics}
}
```