Commit 5498e94a authored by suily

Initial commit

parent 14530156
# This workflow will install Python dependencies, run tests and lint.
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Build
on:
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10']
    steps:
      - name: Cancel previous
        uses: styfle/cancel-workflow-action@0.8.0
        with:
          access_token: ${{ github.token }}
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install .
          pip install .[test]
      - name: Run pytest
        run: |
          pytest vit_jax
[style]
based_on_style: yapf
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement (CLA). You (or your employer) retain the copyright to your
contribution; this simply gives us permission to use and redistribute your
contributions as part of the project. Head over to
<https://cla.developers.google.com/> to see your current agreements on file or
to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
## Community Guidelines
This project follows
[Google's Open Source Community Guidelines](https://opensource.google/conduct/).
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2020] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Vision Transformer and MLP-Mixer Architectures
In this repository we release models from the papers
- [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
- [MLP-Mixer: An all-MLP Architecture for Vision](https://arxiv.org/abs/2105.01601)
- [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
- [When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations](https://arxiv.org/abs/2106.01548)
- [LiT: Zero-Shot Transfer with Locked-image text Tuning](https://arxiv.org/abs/2111.07991)
- [Surrogate Gap Minimization Improves Sharpness-Aware Training](https://arxiv.org/abs/2203.08065)
The models were pre-trained on the [ImageNet](http://www.image-net.org/) and
[ImageNet-21k](http://www.image-net.org/) datasets. We provide the code for
fine-tuning the released models in
[JAX](https://jax.readthedocs.io)/[Flax](http://flax.readthedocs.io).
The models from this codebase were originally trained in
https://github.com/google-research/big_vision/
where you can find more advanced code (e.g. multi-host training), as well as
some of the original training scripts (e.g.
[configs/vit_i21k.py](https://github.com/google-research/big_vision/blob/main/big_vision/configs/vit_i21k.py)
for pre-training a ViT, or
[configs/transfer.py](https://github.com/google-research/big_vision/blob/main/big_vision/configs/transfer.py)
for transfering a model).
Table of contents:
- [Vision Transformer and MLP-Mixer Architectures](#vision-transformer-and-mlp-mixer-architectures)
- [Colab](#colab)
- [Installation](#installation)
- [Fine-tuning a model](#fine-tuning-a-model)
- [Vision Transformer](#vision-transformer)
- [Available ViT models](#available-vit-models)
- [Expected ViT results](#expected-vit-results)
- [MLP-Mixer](#mlp-mixer)
- [Available Mixer models](#available-mixer-models)
- [Expected Mixer results](#expected-mixer-results)
- [LiT models](#lit-models)
- [Running on cloud](#running-on-cloud)
- [Create a VM](#create-a-vm)
- [Setup VM](#setup-vm)
- [Bibtex](#bibtex)
- [Disclaimers](#disclaimers)
- [Changelog](#changelog)
## Colab
The Colabs below run with both GPUs and TPUs (8 cores, data parallelism).
The first Colab demonstrates the JAX code of Vision Transformers and MLP Mixers.
This Colab allows you to edit the files from the repository directly in the
Colab UI and has annotated Colab cells that walk you through the code step by
step, and lets you interact with the data.
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb
The second Colab allows you to explore the >50k Vision Transformer and hybrid
checkpoints that were used to generate the data of the third paper "How to train
your ViT? ...". The Colab includes code to explore and select checkpoints, and
to do inference both with the JAX code from this repo and with the popular
[`timm`] PyTorch library, which can directly load these checkpoints as well.
Note that a handful of models are also available directly from TF-Hub:
[sayakpaul/collections/vision_transformer] (external contribution by [Sayak
Paul]).
The second Colab also lets you fine-tune the checkpoints on any tfds dataset
and your own dataset with examples in individual JPEG files (optionally directly
reading from Google Drive).
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax_augreg.ipynb
**Note**: As of now (6/20/21) Google Colab only supports a single GPU (Nvidia
Tesla T4), and TPUs (currently TPUv2-8) are attached indirectly to the Colab VM
and communicate over a slow network, which leads to rather poor training speed. You
would usually want to set up a dedicated machine if you have a non-trivial
amount of data to fine-tune on. For details see the
[Running on cloud](#running-on-cloud) section.
[`timm`]: https://github.com/rwightman/pytorch-image-models
[sayakpaul/collections/vision_transformer]: https://tfhub.dev/sayakpaul/collections/vision_transformer
[Sayak Paul]: https://github.com/sayakpaul
## Installation
Make sure you have `Python>=3.10` installed on your machine.
Install JAX and Python dependencies by running:
```
# If using GPU:
pip install -r vit_jax/requirements.txt
# If using TPU:
pip install -r vit_jax/requirements-tpu.txt
```
For newer versions of [JAX](https://github.com/google/jax), follow the instructions
provided in the corresponding repository linked here. Note that installation
instructions for CPU, GPU and TPU differ slightly.
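As a purely illustrative example based on the commented pin in
`vit_jax/requirements.txt` (the CUDA extras and versions change over time, so
check the JAX repository for the current command), a GPU install might look
like:
```bash
pip install "jax[cuda11_cudnn86]>=0.4.2" \
    --find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```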
To install [Flaxformer](https://github.com/google/flaxformer), follow the
instructions provided in the corresponding repository linked here.
For more details refer to the section [Running on cloud](#running-on-cloud)
below.
## Fine-tuning a model
You can run fine-tuning of the downloaded model on your dataset of interest. All
models share the same command line interface.
For example for fine-tuning a ViT-B/16 (pre-trained on imagenet21k) on CIFAR10
(note how we specify `b16,cifar10` as arguments to the config, and how we
instruct the code to access the models directly from a GCS bucket instead of
first downloading them into the local directory):
```bash
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k'
```
In order to fine-tune a Mixer-B/16 (pre-trained on imagenet21k) on CIFAR10:
```bash
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/mixer_base16_cifar10.py \
--config.pretrained_dir='gs://mixer_models/imagenet21k'
```
The "How to train your ViT? ..." paper added >50k checkpoints that you can
fine-tune with the [`configs/augreg.py`] config. When you only specify the model
name (the `config.name` value from [`configs/model.py`]), then the best i21k
checkpoint by upstream validation accuracy ("recommended" checkpoint, see
section 4.5 of the paper) is chosen. To make up your mind which model you want
to use, have a look at Figure 3 in the paper. It's also possible to choose a
different checkpoint (see Colab [`vit_jax_augreg.ipynb`]) and then specify the
value from the `filename` or `adapt_filename` column, which correspond to the
filenames without `.npz` from the [`gs://vit_models/augreg`] directory.
```bash
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/augreg.py:R_Ti_16 \
--config.dataset=oxford_iiit_pet \
--config.base_lr=0.01
```
Currently, the code will automatically download the CIFAR-10 and CIFAR-100
datasets. Other public or custom datasets can be easily integrated using the
[tensorflow datasets library](https://github.com/tensorflow/datasets/). Note
that you will also need to update `vit_jax/input_pipeline.py` to specify some
parameters about any added dataset.
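As a purely illustrative sanity check (the actual integration goes through
`vit_jax/input_pipeline.py` as noted above), you can verify that a tfds dataset
is available and inspect its size and label space like this:
```python
# Hypothetical example: check that a tfds dataset (here oxford_iiit_pet, as used
# in the command above) is available and inspect its size and number of classes.
import tensorflow_datasets as tfds

builder = tfds.builder('oxford_iiit_pet')
builder.download_and_prepare()
print(builder.info.splits['train'].num_examples)   # number of training examples
print(builder.info.features['label'].num_classes)  # number of classes (37)
```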
Note that our code uses all available GPUs/TPUs for fine-tuning.
To see a detailed list of all available flags, run `python3 -m vit_jax.train
--help`.
Notes on memory:
- Different models require different amounts of memory. Available memory also
depends on the accelerator configuration (both type and count). If you
encounter an out-of-memory error you can increase the value of
`--config.accum_steps=8` -- alternatively, you could also decrease the
`--config.batch=512` (and decrease `--config.base_lr` accordingly); see the
combined example after this list.
- The host keeps a shuffle buffer in memory. If you encounter a host OOM (as
opposed to an accelerator OOM), you can decrease the default
`--config.shuffle_buffer=50000`.
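For example, a hypothetical invocation that combines these overrides with the
CIFAR-10 command from above (the specific values are placeholders, not
recommendations):
```bash
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k' \
    --config.accum_steps=16 \
    --config.batch=256 \
    --config.shuffle_buffer=25000
```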
## Vision Transformer
by Alexey Dosovitskiy\*†, Lucas Beyer\*, Alexander Kolesnikov\*, Dirk
Weissenborn\*, Xiaohua Zhai\*, Thomas Unterthiner, Mostafa Dehghani, Matthias
Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby\*†.
(\*) equal technical contribution, (†) equal advising.
![Figure 1 from paper](vit_figure.png)
Overview of the model: we split an image into fixed-size patches, linearly embed
each of them, add position embeddings, and feed the resulting sequence of
vectors to a standard Transformer encoder. In order to perform classification,
we use the standard approach of adding an extra learnable "classification token"
to the sequence.
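The following is a minimal, self-contained sketch of that input pipeline
(patchify, linearly embed, prepend a class token, add position embeddings).
The parameter names and initializers are made up for illustration; this is not
the repository's Flax implementation in `vit_jax.models`.
```python
# Minimal sketch of the ViT input pipeline described above (illustrative only).
import jax
import jax.numpy as jnp


def patchify_and_embed(images, params, patch_size=16):
  """images: [batch, h, w, c] -> token sequence [batch, 1 + num_patches, dim]."""
  b, h, w, c = images.shape
  gh, gw = h // patch_size, w // patch_size
  # Split the image into non-overlapping patches and flatten each patch.
  patches = images.reshape(b, gh, patch_size, gw, patch_size, c)
  patches = patches.transpose(0, 1, 3, 2, 4, 5).reshape(b, gh * gw, -1)
  # Linearly embed each flattened patch.
  tokens = patches @ params['embedding'] + params['bias']
  # Prepend the learnable classification token.
  cls = jnp.broadcast_to(params['cls'], (b, 1, tokens.shape[-1]))
  tokens = jnp.concatenate([cls, tokens], axis=1)
  # Add (learnable) position embeddings; a standard Transformer encoder follows.
  return tokens + params['pos_embedding']


# Toy usage with ViT-B/16-like sizes (224x224 image, 16x16 patches, width 768).
dim, patch, img = 768, 16, 224
num_patches = (img // patch) ** 2
key = jax.random.PRNGKey(0)
params = {
    'embedding': jax.random.normal(key, (patch * patch * 3, dim)) * 0.02,
    'bias': jnp.zeros((dim,)),
    'cls': jnp.zeros((1, 1, dim)),
    'pos_embedding': jnp.zeros((1, num_patches + 1, dim)),
}
x = jax.random.normal(key, (2, img, img, 3))
print(patchify_and_embed(x, params).shape)  # (2, 197, 768)
```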
### Available ViT models
We provide a variety of ViT models in different GCS buckets. The models can be
downloaded with e.g.:
```
wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz
```
The model filenames (without the `.npz` extension) correspond to the
`config.model_name` in [`vit_jax/configs/models.py`]:
- [`gs://vit_models/imagenet21k`] - Models pre-trained on ImageNet-21k.
- [`gs://vit_models/imagenet21k+imagenet2012`] - Models pre-trained on
ImageNet-21k and fine-tuned on ImageNet.
- [`gs://vit_models/augreg`] - Models pre-trained on ImageNet-21k,
applying varying amounts of [AugReg]. Improved performance.
- [`gs://vit_models/sam`] - Models pre-trained on ImageNet with [SAM].
- [`gs://vit_models/gsam`] - Models pre-trained on ImageNet with [GSAM].
We recommend using the following checkpoints, trained with [AugReg], which have
the best pre-training metrics:
| Model | Pre-trained checkpoint | Size | Fine-tuned checkpoint | Resolution | Img/sec | Imagenet accuracy |
| :------- | :----------------------------------------------------------------------------------------- | -------: | :--------------------------------------------------------------------------------------------------------------------------------- | ---------: | ------: | ----------------: |
| L/16 | `gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz` | 1243 MiB | `gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz` | 384 | 50 | 85.59% |
| B/16 | `gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz` | 391 MiB | `gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz` | 384 | 138 | 85.49% |
| S/16 | `gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz` | 115 MiB | `gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz` | 384 | 300 | 83.73% |
| R50+L/32 | `gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz` | 1337 MiB | `gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz` | 384 | 327 | 85.99% |
| R26+S/32 | `gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz` | 170 MiB | `gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz` | 384 | 560 | 83.85% |
| Ti/16 | `gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz` | 37 MiB | `gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz` | 384 | 610 | 78.22% |
| B/32 | `gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz` | 398 MiB | `gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz` | 384 | 955 | 83.59% |
| S/32 | `gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0.npz` | 118 MiB | `gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz` | 384 | 2154 | 79.58% |
| R+Ti/16 | `gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz` | 40 MiB | `gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz` | 384 | 2426 | 75.40% |
### Expected ViT results
The results from the original ViT paper (https://arxiv.org/abs/2010.11929) have
been replicated using the models from [`gs://vit_models/imagenet21k`]:
| model | dataset | dropout=0.0 | dropout=0.1 |
|:-------------|:-------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| R50+ViT-B_16 | cifar10 | 98.72%, 3.9h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/cifar10/do_0.0&_smoothingWeight=0) | 98.94%, 10.1h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/cifar10/do_0.1&_smoothingWeight=0) |
| R50+ViT-B_16 | cifar100 | 90.88%, 4.1h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/cifar100/do_0.0&_smoothingWeight=0) | 92.30%, 10.1h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/cifar100/do_0.1&_smoothingWeight=0) |
| R50+ViT-B_16 | imagenet2012 | 83.72%, 9.9h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/imagenet2012/do_0.0&_smoothingWeight=0) | 85.08%, 24.2h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5ER50.ViT-B_16/imagenet2012/do_0.1&_smoothingWeight=0) |
| ViT-B_16 | cifar10 | 99.02%, 2.2h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/cifar10/do_0.0&_smoothingWeight=0) | 98.76%, 7.8h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/cifar10/do_0.1&_smoothingWeight=0) |
| ViT-B_16 | cifar100 | 92.06%, 2.2h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/cifar100/do_0.0&_smoothingWeight=0) | 91.92%, 7.8h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/cifar100/do_0.1&_smoothingWeight=0) |
| ViT-B_16 | imagenet2012 | 84.53%, 6.5h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/imagenet2012/do_0.0&_smoothingWeight=0) | 84.12%, 19.3h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_16/imagenet2012/do_0.1&_smoothingWeight=0) |
| ViT-B_32 | cifar10 | 98.88%, 0.8h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/cifar10/do_0.0&_smoothingWeight=0) | 98.75%, 1.8h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/cifar10/do_0.1&_smoothingWeight=0) |
| ViT-B_32 | cifar100 | 92.31%, 0.8h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/cifar100/do_0.0&_smoothingWeight=0) | 92.05%, 1.8h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/cifar100/do_0.1&_smoothingWeight=0) |
| ViT-B_32 | imagenet2012 | 81.66%, 3.3h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/imagenet2012/do_0.0&_smoothingWeight=0) | 81.31%, 4.9h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-B_32/imagenet2012/do_0.1&_smoothingWeight=0) |
| ViT-L_16 | cifar10 | 99.13%, 6.9h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/cifar10/do_0.0&_smoothingWeight=0) | 99.14%, 24.7h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/cifar10/do_0.1&_smoothingWeight=0) |
| ViT-L_16 | cifar100 | 92.91%, 7.1h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/cifar100/do_0.0&_smoothingWeight=0) | 93.22%, 24.4h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/cifar100/do_0.1&_smoothingWeight=0) |
| ViT-L_16 | imagenet2012 | 84.47%, 16.8h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/imagenet2012/do_0.0&_smoothingWeight=0) | 85.05%, 59.7h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_16/imagenet2012/do_0.1&_smoothingWeight=0) |
| ViT-L_32 | cifar10 | 99.06%, 1.9h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/cifar10/do_0.0&_smoothingWeight=0) | 99.09%, 6.1h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/cifar10/do_0.1&_smoothingWeight=0) |
| ViT-L_32 | cifar100 | 93.29%, 1.9h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/cifar100/do_0.0&_smoothingWeight=0) | 93.34%, 6.2h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/cifar100/do_0.1&_smoothingWeight=0) |
| ViT-L_32 | imagenet2012 | 81.89%, 7.5h (A100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/imagenet2012/do_0.0&_smoothingWeight=0) | 81.13%, 15.0h (V100), [tb.dev](https://tensorboard.dev/experiment/nwXQNjudRJW3dtQzhPZwwA/#scalars&regexInput=%5EViT-L_32/imagenet2012/do_0.1&_smoothingWeight=0) |
We would also like to emphasize that high-quality results can be achieved with
shorter training schedules and encourage users of our code to play with
hyper-parameters to trade off accuracy and computational budget.
Some examples for the CIFAR-10/100 datasets are presented in the table below.
| upstream | model | dataset | total_steps / warmup_steps | accuracy | wall-clock time | link |
| ----------- | -------- | ------------ | --------------------------- | -------- | --------------- | ---------------------------------------------------------------------------- |
| imagenet21k | ViT-B_16 | cifar10 | 500 / 50 | 98.59% | 17m | [tensorboard.dev](https://tensorboard.dev/experiment/QgkpiW53RPmjkabe1ME31g/) |
| imagenet21k | ViT-B_16 | cifar10 | 1000 / 100 | 98.86% | 39m | [tensorboard.dev](https://tensorboard.dev/experiment/w8DQkDeJTOqJW5js80gOQg/) |
| imagenet21k | ViT-B_16 | cifar100 | 500 / 50 | 89.17% | 17m | [tensorboard.dev](https://tensorboard.dev/experiment/5hM4GrnAR0KEZg725Ewnqg/) |
| imagenet21k | ViT-B_16 | cifar100 | 1000 / 100 | 91.15% | 39m | [tensorboard.dev](https://tensorboard.dev/experiment/QLQTaaIoT9uEcAjtA0eRwg/) |
## MLP-Mixer
by Ilya Tolstikhin\*, Neil Houlsby\*, Alexander Kolesnikov\*, Lucas Beyer\*,
Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers,
Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy.
(\*) equal contribution.
![Figure 1 from paper](mixer_figure.png)
MLP-Mixer (*Mixer* for short) consists of per-patch linear embeddings, Mixer
layers, and a classifier head. Mixer layers contain one token-mixing MLP and one
channel-mixing MLP, each consisting of two fully-connected layers and a GELU
nonlinearity. Other components include: skip-connections, dropout, and linear
classifier head.
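Below is a minimal sketch of one Mixer layer as described above (token-mixing
MLP, then channel-mixing MLP, each with a GELU and a skip connection). The
simplified parameter-free layer norm and the parameter shapes are assumptions
for illustration; this is not the repository's Flax code in `vit_jax.models`.
```python
# Illustrative sketch of a single MLP-Mixer layer (not the repository's code).
import jax
import jax.numpy as jnp


def layer_norm(x, eps=1e-6):
  """Simplified, parameter-free layer norm over the channel dimension."""
  mean = x.mean(axis=-1, keepdims=True)
  var = x.var(axis=-1, keepdims=True)
  return (x - mean) / jnp.sqrt(var + eps)


def mlp(x, w1, b1, w2, b2):
  """Two fully-connected layers with a GELU nonlinearity in between."""
  return jax.nn.gelu(x @ w1 + b1) @ w2 + b2


def mixer_layer(x, p):
  """x: [batch, tokens, channels] -> same shape."""
  # Token mixing: transpose so the MLP acts across the token dimension.
  y = jnp.swapaxes(layer_norm(x), 1, 2)          # [batch, channels, tokens]
  y = mlp(y, p['tok_w1'], p['tok_b1'], p['tok_w2'], p['tok_b2'])
  x = x + jnp.swapaxes(y, 1, 2)                  # skip connection
  # Channel mixing: the MLP acts across the channel dimension.
  y = mlp(layer_norm(x), p['ch_w1'], p['ch_b1'], p['ch_w2'], p['ch_b2'])
  return x + y                                   # skip connection


# Toy usage: 196 tokens (14x14 patches), 768 channels, hypothetical MLP widths.
tokens, channels, dt, dc = 196, 768, 384, 3072
k = jax.random.PRNGKey(0)
p = {
    'tok_w1': jax.random.normal(k, (tokens, dt)) * 0.02, 'tok_b1': jnp.zeros(dt),
    'tok_w2': jax.random.normal(k, (dt, tokens)) * 0.02, 'tok_b2': jnp.zeros(tokens),
    'ch_w1': jax.random.normal(k, (channels, dc)) * 0.02, 'ch_b1': jnp.zeros(dc),
    'ch_w2': jax.random.normal(k, (dc, channels)) * 0.02, 'ch_b2': jnp.zeros(channels),
}
x = jax.random.normal(k, (2, tokens, channels))
print(mixer_layer(x, p).shape)  # (2, 196, 768)
```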
For installation follow [the same steps](#installation) as above.
### Available Mixer models
We provide the Mixer-B/16 and Mixer-L/16 models pre-trained on the ImageNet and
ImageNet-21k datasets. Details can be found in Table 3 of the Mixer paper. All
the models can be found at:
https://console.cloud.google.com/storage/mixer_models/
Note that these models are also available directly from TF-Hub:
[sayakpaul/collections/mlp-mixer] (external contribution by [Sayak
Paul]).
[sayakpaul/collections/mlp-mixer]: https://tfhub.dev/sayakpaul/collections/mlp-mixer
### Expected Mixer results
We ran the fine-tuning code on a Google Cloud machine with four V100 GPUs with
the default adaptation parameters from this repository. Here are the results:
upstream | model | dataset | accuracy | wall_clock_time | link
:----------- | :--------- | :------ | -------: | :-------------- | :---
ImageNet | Mixer-B/16 | cifar10 | 96.72% | 3.0h | [tensorboard.dev](https://tensorboard.dev/experiment/j9zCYt9yQVm93nqnsDZayA/)
ImageNet | Mixer-L/16 | cifar10 | 96.59% | 3.0h | [tensorboard.dev](https://tensorboard.dev/experiment/Q4feeErzRGGop5XzAvYj2g/)
ImageNet-21k | Mixer-B/16 | cifar10 | 96.82% | 9.6h | [tensorboard.dev](https://tensorboard.dev/experiment/mvP4McV2SEGFeIww20ie5Q/)
ImageNet-21k | Mixer-L/16 | cifar10 | 98.34% | 10.0h | [tensorboard.dev](https://tensorboard.dev/experiment/dolAJyQYTYmudytjalF6Jg/)
## LiT models
For details, refer to the Google AI blog post
[LiT: adding language understanding to image models](http://ai.googleblog.com/2022/04/locked-image-tuning-adding-language.html),
or read the CVPR paper "LiT: Zero-Shot Transfer with Locked-image text Tuning"
(https://arxiv.org/abs/2111.07991).
We published a Transformer B/16-base model with an ImageNet zeroshot accuracy of
72.1%, and an L/16-large model with an ImageNet zeroshot accuracy of 75.7%. For
more details about these models, please refer to the
[LiT model card](model_cards/lit.md).
We provide an in-browser demo with small text encoders for interactive use (the
smallest models should even run on a modern cell phone):
https://google-research.github.io/vision_transformer/lit/
And finally a Colab to use the JAX models with both image and text encoders:
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/lit.ipynb
Note that none of the above models support multi-lingual inputs yet, but we're
working on publishing such models and will update this repository once they
become available.
This repository only contains evaluation code for LiT models. You can find the
training code in the `big_vision` repository:
https://github.com/google-research/big_vision/tree/main/big_vision/configs/proj/image_text
Expected zeroshot results from [`model_cards/lit.md`] (note that the zeroshot
evaluation is slightly different from the simplified evaluation in the Colab):
| Model | B16B_2 | L16L |
| :--- | ---: | ---: |
| ImageNet zero-shot | 73.9% | 75.7% |
| ImageNet v2 zero-shot | 65.1% | 66.6% |
| CIFAR100 zero-shot | 79.0% | 80.5% |
| Pets37 zero-shot | 83.3% | 83.3% |
| Resisc45 zero-shot | 25.3% | 25.6% |
| MS-COCO Captions image-to-text retrieval | 51.6% | 48.5% |
| MS-COCO Captions text-to-image retrieval | 31.8% | 31.1% |
## Running on cloud
While the above [colabs](#colab) are useful for getting started, you would usually
want to train on a larger machine with more powerful accelerators.
### Create a VM
You can use the following commands to set up a VM with GPUs on Google Cloud:
```bash
# Set variables used by all commands below.
# Note that project must have accounting set up.
# For a list of zones with GPUs refer to
# https://cloud.google.com/compute/docs/gpus/gpu-regions-zones
PROJECT=my-awesome-gcp-project # Project must have billing enabled.
VM_NAME=vit-jax-vm-gpu
ZONE=europe-west4-b
# The settings below have been tested with this repository. You can choose other
# combinations of images & machines; for alternatives, refer to the corresponding gcloud commands:
# gcloud compute images list --project ml-images
# gcloud compute machine-types list
# etc.
gcloud compute instances create $VM_NAME \
--project=$PROJECT --zone=$ZONE \
--image=c1-deeplearning-tf-2-5-cu110-v20210527-debian-10 \
--image-project=ml-images --machine-type=n1-standard-96 \
--scopes=cloud-platform,storage-full --boot-disk-size=256GB \
--boot-disk-type=pd-ssd --metadata=install-nvidia-driver=True \
--maintenance-policy=TERMINATE \
--accelerator=type=nvidia-tesla-v100,count=8
# Connect to VM (after some minutes needed to setup & start the machine).
gcloud compute ssh --project $PROJECT --zone $ZONE $VM_NAME
# Stop the VM after use (only storage is billed for a stopped VM).
gcloud compute instances stop --project $PROJECT --zone $ZONE $VM_NAME
# Delete VM after use (this will also remove all data stored on VM).
gcloud compute instances delete --project $PROJECT --zone $ZONE $VM_NAME
```
Alternatively, you can use the following similar commands to set up a Cloud VM
with TPUs attached (the commands below were copied from the [TPU tutorial]):
[TPU tutorial]: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm
```bash
PROJECT=my-awesome-gcp-project # Project must have billing enabled.
VM_NAME=vit-jax-vm-tpu
ZONE=europe-west4-a
# Required to set up service identity initially.
gcloud beta services identity create --service tpu.googleapis.com
# Create a VM with TPUs directly attached to it.
gcloud alpha compute tpus tpu-vm create $VM_NAME \
--project=$PROJECT --zone=$ZONE \
--accelerator-type v3-8 \
--version tpu-vm-base
# Connect to VM (after some minutes needed to setup & start the machine).
gcloud alpha compute tpus tpu-vm ssh --project $PROJECT --zone $ZONE $VM_NAME
# Stop the VM after use (only storage is billed for a stopped VM).
gcloud alpha compute tpus tpu-vm stop --project $PROJECT --zone $ZONE $VM_NAME
# Delete VM after use (this will also remove all data stored on VM).
gcloud alpha compute tpus tpu-vm delete --project $PROJECT --zone $ZONE $VM_NAME
```
### Setup VM
Then fetch the repository and install the dependencies (including `jaxlib`
with TPU support) as usual:
```bash
git clone --depth=1 --branch=master https://github.com/google-research/vision_transformer
cd vision_transformer
# optional: install virtualenv
pip3 install virtualenv
python3 -m virtualenv env
. env/bin/activate
```
If you're connected to a VM with GPUs attached, install JAX and other dependencies with the following
command:
```bash
pip install -r vit_jax/requirements.txt
```
If you're connected to a VM with TPUs attached, install JAX and other dependencies with the following
command:
```bash
pip install -r vit_jax/requirements-tpu.txt
```
To install [Flaxformer](https://github.com/google/flaxformer), follow the
instructions provided in the corresponding repository linked here.
For both GPUs and TPUs, check that JAX can connect to the attached accelerators with the command:
```bash
python -c 'import jax; print(jax.devices())'
```
And finally execute one of the commands mentioned in the section
[fine-tuning a model](#fine-tuning-a-model).
## Bibtex
```
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
@article{tolstikhin2021mixer,
title={MLP-Mixer: An all-MLP Architecture for Vision},
author={Tolstikhin, Ilya and Houlsby, Neil and Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Unterthiner, Thomas and Yung, Jessica and Steiner, Andreas and Keysers, Daniel and Uszkoreit, Jakob and Lucic, Mario and Dosovitskiy, Alexey},
journal={arXiv preprint arXiv:2105.01601},
year={2021}
}
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
@article{chen2021outperform,
title={When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations},
author={Chen, Xiangning and Hsieh, Cho-Jui and Gong, Boqing},
journal={arXiv preprint arXiv:2106.01548},
year={2021},
}
@article{zhuang2022gsam,
title={Surrogate Gap Minimization Improves Sharpness-Aware Training},
author={Zhuang, Juntang and Gong, Boqing and Yuan, Liangzhe and Cui, Yin and Adam, Hartwig and Dvornek, Nicha and Tatikonda, Sekhar and Duncan, James and Liu, Ting},
journal={ICLR},
year={2022},
}
@article{zhai2022lit,
title={LiT: Zero-Shot Transfer with Locked-image Text Tuning},
author={Zhai, Xiaohua and Wang, Xiao and Mustafa, Basil and Steiner, Andreas and Keysers, Daniel and Kolesnikov, Alexander and Beyer, Lucas},
journal={CVPR},
year={2022}
}
```
## Changelog
In reverse chronological order:
- 2022-08-18: Added LiT-B16B_2 model that was trained for 60k steps
(LiT_B16B: 30k) without linear head on the image side (LiT_B16B: 768) and has
better performance.
- 2022-06-09: Added the ViT and Mixer models trained from scratch using
[GSAM] on ImageNet without strong data augmentations. The resultant ViTs
outperform those of similar sizes trained using AdamW optimizer or the
original [SAM] algorithm, or with strong data augmentations.
- 2022-04-14: Added models and Colab for [LiT models](#lit-models).
- 2021-07-29: Added ViT-B/8 AugReg models (3 upstream checkpoints and adaptations
with resolution=224).
- 2021-07-02: Added the "When Vision Transformers Outperform
ResNets..." paper.
- 2021-07-02: Added [SAM](https://arxiv.org/abs/2010.01412)
(Sharpness-Aware Minimization) optimized ViT and MLP-Mixer checkpoints.
- 2021-06-20: Added the "How to train your ViT? ..." paper, and a new
Colab to explore the >50k pre-trained and fine-tuned checkpoints mentioned in
the paper.
- 2021-06-18: This repository was rewritten to use Flax Linen API and
`ml_collections.ConfigDict` for configuration.
- 2021-05-19: With publication of the "How to train your ViT? ..."
paper, we added more than 50k ViT and hybrid models pre-trained on ImageNet and
ImageNet-21k with various degrees of data augmentation and model regularization,
and fine-tuned on ImageNet, Pets37, Kitti-distance, CIFAR-100, and Resisc45.
Check out [`vit_jax_augreg.ipynb`] to navigate this treasure trove of models!
For example, you can use that Colab to fetch the filenames of recommended
pre-trained and fine-tuned checkpoints from the `i21k_300` column of Table 3 in
the paper.
- 2020-12-01: Added the R50+ViT-B/16 hybrid model (ViT-B/16 on
top of a Resnet-50 backbone). When pretrained on imagenet21k, this model
achieves almost the performance of the L/16 model with less than half the
computational finetuning cost. Note that "R50" is somewhat modified for the
B/16 variant: The original ResNet-50 has [3,4,6,3] blocks, each reducing the
resolution of the image by a factor of two. In combination with the ResNet
stem this would result in a reduction of 32x so even with a patch size of
(1,1) the ViT-B/16 variant cannot be realized anymore. For this reason we
instead use [3,4,9] blocks for the R50+B/16 variant.
- 2020-11-09: Added the ViT-L/16 model.
- 2020-10-29: Added ViT-B/16 and ViT-L/16 models pretrained
on ImageNet-21k and then fine-tuned on ImageNet at 224x224 resolution (instead
of default 384x384). These models have the suffix "-224" in their name.
They are expected to achieve 81.2% and 82.7% top-1 accuracies respectively.
## Disclaimers
Open source release prepared by Andreas Steiner.
Note: This repository was forked and modified from
[google-research/big_transfer](https://github.com/google-research/big_transfer).
**This is not an official Google product.**
[GSAM]: https://arxiv.org/abs/2203.08065
[SAM]: https://arxiv.org/abs/2010.01412
[AugReg]: https://arxiv.org/abs/2106.10270
[`vit_jax/configs/models.py`]: https://github.com/google-research/vision_transformer/blob/main/vit_jax/configs/models.py
[`model_cards/lit.md`]: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md
[`configs/augreg.py`]: https://github.com/google-research/vision_transformer/blob/main/vit_jax/configs/augreg.py
[`configs/model.py`]: https://github.com/google-research/vision_transformer/blob/main/vit_jax/configs/models.py
[`vit_jax_augreg.ipynb`]: https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax_augreg.ipynb
[`vit_jax.ipynb`]: https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb
[`gs://vit_models/imagenet21k`]: https://console.cloud.google.com/storage/browser/vit_models/imagenet21k/
[`gs://vit_models/imagenet21k+imagenet2012`]: https://console.cloud.google.com/storage/browser/vit_models/imagenet21k+imagenet2012/
[`gs://vit_models/augreg`]: https://console.cloud.google.com/storage/browser/vit_models/augreg/
[`gs://vit_models/sam`]: https://console.cloud.google.com/storage/browser/vit_models/sam/
[`gs://mixer_models/sam`]: https://console.cloud.google.com/storage/mixer_models/sam/
[`gs://vit_models/gsam`]: https://console.cloud.google.com/storage/browser/vit_models/gsam/
[`gs://mixer_models/gsam`]: https://console.cloud.google.com/storage/mixer_models/gsam/
# Model Card: LiT (Locked image Tuning)
Last updated: 2022-06-19
Version: 1.0
- This doc: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md
- Model Page: https://github.com/google-research/vision_transformer#lit-models
- Other Links:
[LiT Blogpost](https://ai.googleblog.com/2022/04/locked-image-tuning-adding-language.html),
[LiT Paper],
[LiT Demo](https://google-research.github.io/vision_transformer/lit/)
A text/image input model that can be used to embed text/image individually,
and compute similarities between embeddings of text/image pairs. This enables
use cases like zero shot classification, or image/text retrieval.
Note that this model card refers to the models that have been released on
Github specifically (B16B_2, L16L). The [LiT Paper] also evaluates models that
have not been released and use different datasets for training. The Colab
[`lit.ipynb`] lists some more models (L16S, L16Ti) which are similar to L16L,
but with a smaller text tower.
[LiT Paper]: https://arxiv.org/abs/2111.07991
[`lit.ipynb`]: https://colab.research.google.com/github/google-research/vision_transformer/blob/main/lit.ipynb
## Model Summary
- Architecture: Multimodal model with transformer text encoder and transformer
image encoder.
- Inputs: Images presented as 224x224x3 input; text inputs are tokenized and
cropped to the first 16 tokens.
- Outputs: Image and text embeddings (of size 768 or 1024).
- Person of contact: Andreas Steiner (Google Brain)
- Model authors: Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner,
Daniel Keysers, Alexander Kolesnikov, Lucas Beyer (Google Brain)
Citation:
```bibtex
@article{zhai2022lit,
title={LiT: Zero-Shot Transfer with Locked-image Text Tuning},
author={Zhai, Xiaohua and Wang, Xiao and Mustafa, Basil and Steiner, Andreas and Keysers, Daniel and Kolesnikov, Alexander and Beyer, Lucas},
journal={CVPR},
year={2022}
}
```
## Model Data
Training data:
- [Pre-trained image-tower](http://arxiv.org/abs/2106.10270) (using the
recommended checkpoints from the paper, Section 4.2)
- [ImageNet-21k](https://www.image-net.org/static_files/papers/imagenet_cvpr09.pdf)
- [BERT](http://arxiv.org/abs/1810.04805) pre-trained text tower
- [BookCorpus](https://github.com/jackbandy/bookcorpus-datasheet)
- English wikipedia
- Multi-modal datasets
- [CC12M](https://arxiv.org/abs/2102.08981)
- [YFCC100M](https://arxiv.org/abs/1503.01817)
Evaluation data (see also section [Evaluation Results](#evaluation-results)
below):
- Zero-shot classification
- [ImageNet](https://www.image-net.org/static_files/papers/imagenet_cvpr09.pdf)
- [ImageNet v2](http://arxiv.org/abs/1902.10811)
- [CIFAR100](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
- [Pets37](https://ieeexplore.ieee.org/abstract/document/6248092)
- [Resisc45](http://arxiv.org/abs/1703.00121)
- Image-text retrieval
- [MS-COCO Captions](https://arxiv.org/abs/1504.00325)
## Model Creation & Maintenance
The model has been initialized from BERT & ViT checkpoints (see details above
"training dataset"), and then contrastively tuned on CC12M and YFCC100M.
All datasets have been released in previous publications independent from this
model. The datasets and model are not regularly updated.
The published B16B_2 and L16L models are medium-sized and can be used on a normal
computer, or on a single GPU/TPU.
| Model | B16B_2 | L16L |
| :--- | ---: | ---: |
| Size | 474 MB | 2.4 GB |
| Weights | 196M | 638M |
| Layers | 2x12 | 2x24 |
| Latency (single TPU core) | 1200/sec | 400/sec |
Software/hardware used for training:
- JAX 0.3.13, Flax 0.5.0
- 128 TPUv4 cores
Software/hardware used for deployment:
- JAX 0.3.13, Flax 0.5.0
- CPU/GPU/TPU
Compute requirements for training:
| Model | B16B_2 | L16L |
| :--- | ---: | ---: |
| Number of Chips | 64 | 64 |
| Training Time (days) | 0.3 | 1 |
| Total Computation (FLOPS) | 2.7E+19 | 9E+19 |
| Measured Performance (TFLOPS/s) | 1153 | 1614 |
| Energy Consumption (MWh) | 0.14 | 0.16 |
Compute requirements for inference:
| Model | B16B_2 | L16L |
| :--- | ---: | ---: |
| FLOPS/example | approx. 10 | approx. 30 |
## Evaluation Results
Benchmark information:
- Zero-shot classification (as explained in [CLIP Paper])
- We chose to evaluate a set of datasets that are commonly used, and provide
insights where the model works very well (such as ImageNet v2 or CIFAR100),
as well as where it is much more limited (such as Resisc45).
- Image-text retrieval (Appendix section I.3 in [LiT Paper])
[CLIP Paper]: https://arxiv.org/abs/2103.00020
Evaluation results:
| Model | B16B_2 | L16L |
| :--- | ---: | ---: |
| ImageNet zero-shot | 73.9% | 75.7% |
| ImageNet v2 zero-shot | 65.1% | 66.6% |
| CIFAR100 zero-shot | 79.0% | 80.5% |
| Pets37 zero-shot | 83.3% | 83.3% |
| Resisc45 zero-shot | 25.3% | 25.6% |
| MS-COCO Captions image-to-text retrieval | 51.6% | 48.5% |
| MS-COCO Captions text-to-image retrieval | 31.8% | 31.1% |
## Limitations
Known limitations:
- Any deployment of this model, both for commercial applications and
non-commercial applications, is currently out of scope.
- Before using the model in a constrained (i.e. not deployed) environment, users
should do in-depth testing for their specific use case (e.g. on a constrained
set of class labels of interest).
- These models have only been trained on English text and will fail for most
non-English inputs.
- These models have not been evaluated with respect to their biases and fairness
aspects. We suspect that biases found in the datasets used for training will
be replicated by model representations, and model predictions should a priori
be considered to replicate these biases, with consequences to various fairness
metrics.
Ethical considerations & risks:
- The publication is based on previous work ([CLIP Paper]) that has been shown
(Section 7) to replicate gender biases, perform variably for different groups
of people (by gender, skin color), and cause representational harm in varying
degree for different groups of people (by age, skin color). In the same
section, previous authors have shown that a discriminative image/text model
has the potential to be used in a surveillance context for coarse
classification (although not for fine-grained classification), potentially
lowering the barrier for such problematic use cases.
- These models have not been evaluated for the problems mentioned in previous
work, but until such an evaluation is performed, we expect similar risks.
## Model Usage
Sensitive use: The model has been trained on image datasets containing
pictures of people, both for the pre-training of the image encoder
(ImageNet-21k), and for the contrastive tuning (CC12M and YFCC100M).
The model is used exclusively in research for now:
- [Zero-Shot Text-Guided Object Generation with Dream Fields](https://arxiv.org/abs/2112.01455)
- [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230)
## Model Comparison
In comparison with "private data" model from [CLIP Paper]:
- As of 6/10/22, the best published CLIP model is the L/14-336px variant.
- Similar performance (e.g. ImageNet zero-shot classification accuracy:
76.2% CLIP vs. LiT L16L 75.7%)
- LiT is trained solely on publicly available datasets, while CLIP is trained on
a private undisclosed dataset.
- The LiT L16L model is considerably smaller: CLIP uses 576 tokens vs. LiT L16L
uses 196 tokens – since the runtime/memory complexity of attention scales with
the square of the number of tokens, this corresponds to a factor of 8.63x.
In comparison with "public data" model from [CLIP Paper]:
- The only model trained without the private data mentioned in the CLIP paper
(Section D), namely on YFCC100M.
- LiT has much better performance (e.g. ImageNet zero-shot classification
accuracy: 31.3% CLIP vs. LiT L16L 75.7%)
## System Dependencies
Can be used as a stand-alone model (e.g. for zero-shot classification or
retrieval), or as part of a more complex system (basically any system that uses
CLIP as a component can instead use a LiT model).
Pre-processing instructions can be found on Github:
[vit_jax/preprocess.py](https://github.com/google-research/vision_transformer/blob/main/vit_jax/preprocess.py).
The published models include a pre-processing configuration (specifying
tokenizer vocabulary and image pre-processing).
The model outputs image and text embeddings and a temperature. If similarities
are to be computed between image and text embeddings (e.g. for computing output
distributions), then the similarities between the embeddings should be computed
with the dot product, and these should then be multiplied by the temperature
before a softmax is applied.
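As a minimal sketch of that post-processing step (assuming already
L2-normalized embeddings; the function and variable names below are made up and
this is not the repository's API):
```python
# Illustrative only: dot-product similarities scaled by the temperature,
# followed by a softmax over the candidate texts.
import jax
import jax.numpy as jnp


def zero_shot_probs(image_embs, text_embs, temperature):
  """image_embs: [n_images, d], text_embs: [n_texts, d], both L2-normalized."""
  sims = image_embs @ text_embs.T               # dot-product similarities
  return jax.nn.softmax(sims * temperature, axis=-1)


# Toy usage with random embeddings of the published size (768).
img = jax.random.normal(jax.random.PRNGKey(0), (2, 768))
txt = jax.random.normal(jax.random.PRNGKey(1), (5, 768))
img = img / jnp.linalg.norm(img, axis=-1, keepdims=True)
txt = txt / jnp.linalg.norm(txt, axis=-1, keepdims=True)
print(zero_shot_probs(img, txt, temperature=100.0).shape)  # (2, 5)
```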
## Changelog
- 2022-08-16: Replaced model B16B with an updated version B16B_2 that was
trained for 60k steps (before: 30k) without linear head on the image side
(before: 768) and has better performance.
absl-py>=0.12.0
# aqtp!=0.1.1 # https://github.com/google/aqt/issues/196
chex>=0.0.7
clu>=0.0.3
einops>=0.3.0
flax>=0.6.4
git+https://github.com/google/flaxformer
# jax[cuda11_cudnn86]>=0.4.2
#--find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
ml-collections>=0.1.0
numpy>=1.19.5
pandas>=1.1.0
tensorflow-cpu>=2.13.0  # was tensorflow-cpu>=2.4.0; using tensorflow-cpu to have all GPU memory for JAX.
tensorflow-datasets>=4.0.1
tensorflow-probability>=0.11.1
# tensorflow-text>=2.9.0
# Compatibility pins
aqtp==0.1.0
tensorflow-text==2.13.0
scipy==1.12.0
orbax-checkpoint==0.4.1
gsutil
# tensorflow-2.13.1+das1.1.git56b06c8.abi1.dtk2404-cp310-cp310-manylinux_2_31_x86_64.whl
# jax-0.4.23+das1.1.git387bd43.abi1.dtk2404-py3-none-any.whl
# jaxlib-0.4.23+das1.1.git387bd43.abi1.dtk2404-cp310-cp310-manylinux_2_31_x86_64.whl
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""setup.py for vision_transformer repo, vit_jax package."""
import os
from setuptools import find_packages
from setuptools import setup
here = os.path.abspath(os.path.dirname(__file__))
try:
  README = open(os.path.join(here, 'README.md'), encoding='utf-8').read()
except IOError:
  README = ''

install_requires = [
    'absl-py',
    'aqtp!=0.1.1',  # https://github.com/google/aqt/issues/196
    'clu',
    'einops',
    'flax',
    'flaxformer @ git+https://github.com/google/flaxformer',
    'jax',
    'ml-collections',
    'numpy',
    'packaging',
    'pandas',
    'scipy',
    'tensorflow_datasets',
    'tensorflow_probability',
    'tensorflow',
    'tensorflow_text',
    'tqdm',
]

tests_require = [
    'pytest',
]

__version__ = None

with open(os.path.join(here, 'version.py')) as f:
  exec(f.read(), globals())  # pylint: disable=exec-used

setup(
    name='vit_jax',
    version=__version__,
    description='Original JAX implementation of Vision Transformer models.',
    long_description=README,
    long_description_content_type='text/markdown',
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'Intended Audience :: Science/Research',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python :: 3.7',
        'Topic :: Scientific/Engineering :: Artificial Intelligence',
    ],
    keywords='',
    author='Vision Transformer Authors',
    author_email='no-reply@google.com',
    url='https://github.com/google-research/vision_transformer',
    packages=find_packages(),
    zip_safe=False,
    install_requires=install_requires,
    tests_require=tests_require,
    extras_require=dict(test=tests_require),
)
from vit_jax import checkpoint
from vit_jax import input_pipeline
from vit_jax import utils
from vit_jax import models
from vit_jax import train
from vit_jax.configs import common as common_config
from vit_jax.configs import models as models_config
from absl import logging
import flax
import jax
from matplotlib import pyplot as plt
import numpy as np
import optax
import tqdm
import os
logging.set_verbosity(logging.INFO)
# import PIL
import tensorflow_datasets as tfds
import time
# import tensorflow as tf
'''Show the available devices and require a GPU backend.'''
from jax.lib import xla_bridge
jax_test = xla_bridge.get_backend().platform
print(jax_test, jax.local_devices())
if jax_test != 'gpu':
  exit()
model_name = 'ViT-B_16' #@param ["ViT-B_32", "Mixer-B_16"]
# assert os.path.exists(f'./test_result/{model_name}.npz')
'''Load the dataset'''
dataset = 'cifar100' # imagenet2012 cifar10 cifar100
batch_size = 512
config = common_config.with_dataset(common_config.get_config(), dataset)
# config.shuffle_buffer=1000
# config.accum_steps=64
config.batch = batch_size
config.pp.crop = 384
# Build the datasets.
ds_train = input_pipeline.get_data_from_tfds(config=config, mode='train')
ds_test = input_pipeline.get_data_from_tfds(config=config, mode='test')
num_classes = input_pipeline.get_dataset_info(dataset, 'train')['num_classes']
del config # Only needed to instantiate datasets.
# Fetch a batch of test images for illustration purposes.
batch = next(iter(ds_test.as_numpy_iterator()))
# Note the shape : [num_local_devices, local_batch_size, h, w, c]
print(batch['image'].shape)
exit()
# tf.config.set_visible_devices([], 'GPU')
# print(tf.config.get_visible_devices('GPU'))
'''Load the pre-trained model'''
model_config = models_config.MODEL_CONFIGS[model_name]
print(model_config)
# Load the model definition and initialize random parameters.
# This also compiles the model with XLA (takes a few minutes the first time).
if model_name.startswith('Mixer'):
model = models.MlpMixer(num_classes=num_classes, **model_config)
else:
model = models.VisionTransformer(num_classes=num_classes, **model_config)
variables = jax.jit(lambda: model.init(
jax.random.PRNGKey(0),
    # Drop the "num_local_devices" dimension of the batch used for initialization.
batch['image'][0, :1],
train=False,
), backend='cpu')()
# Load and convert the pre-trained checkpoint.
# This involves loading the actual pre-trained weights, but also modifying the
# parameters a bit, e.g. replacing the final layer and resizing the position
# embeddings. See the code and the paper's method section for details.
params = checkpoint.load_pretrained(
pretrained_path=f'./test_result/{model_name}.npz',
init_params=variables['params'],
model_config=model_config
)
'''Evaluation'''
params_repl = flax.jax_utils.replicate(params)
print('params.cls:', type(params['head']['bias']).__name__,
params['head']['bias'].shape)
print('params_repl.cls:', type(params_repl['head']['bias']).__name__,
params_repl['head']['bias'].shape)
# Then map the forward-pass call of our model over all available devices.
vit_apply_repl = jax.pmap(lambda params, inputs: model.apply(
dict(params=params), inputs, train=False))
def get_accuracy(params_repl):
"""返回对测试集求值的精度"""
good = total = 0
steps = input_pipeline.get_dataset_info(dataset, 'test')['num_examples'] // batch_size
for _, batch in zip(tqdm.trange(steps), ds_test.as_numpy_iterator()):
predicted = vit_apply_repl(params_repl, batch['image'])
is_same = predicted.argmax(axis=-1) == batch['label'].argmax(axis=-1)
good += is_same.sum()
total += len(is_same.flatten())
return good / total
# Random performance without fine-tuning.
print(get_accuracy(params_repl))
exit()
'''Fine-tuning'''
# 100 Steps take approximately 15 minutes in the TPU runtime.
total_steps = 50
warmup_steps = 5
decay_type = 'cosine'
grad_norm_clip = 1
# This controls how many forward passes the batch is split into. 8 works for a
# TPU runtime with 8 devices; 64 should work on a GPU. You can of course also
# adjust batch_size above, but then the learning rate needs to be adjusted accordingly.
accum_steps = 64  # TODO: may need adjusting.
base_lr = 0.03
# See train.make_update_fn.
lr_fn = utils.create_learning_rate_schedule(total_steps, base_lr, decay_type, warmup_steps)
# We use a momentum optimizer with half-precision state to save memory.
# It also implements gradient clipping.
tx = optax.chain(
optax.clip_by_global_norm(grad_norm_clip),
optax.sgd(
learning_rate=lr_fn,
momentum=0.9,
accumulator_dtype='bfloat16',
),
)
update_fn_repl = train.make_update_fn(
apply_fn=model.apply, accum_steps=accum_steps, tx=tx)
opt_state = tx.init(params)
opt_state_repl = flax.jax_utils.replicate(opt_state)
# Initialize PRNGs for dropout.
update_rng_repl = flax.jax_utils.replicate(jax.random.PRNGKey(0))
# Training updates.
losses = []
lrs = []
# Completes in ~20 min on the TPU runtime.
start = time.time()
for step, batch in zip(
tqdm.trange(1, total_steps + 1),
ds_train.as_numpy_iterator(),
):
params_repl, opt_state_repl, loss_repl, update_rng_repl = update_fn_repl(
params_repl, opt_state_repl, batch, update_rng_repl)
losses.append(loss_repl[0])
lrs.append(lr_fn(step))
end = time.time()
print(f"{model_name}_{dataset}_{total_steps}_{warmup_steps}微调时间为:",end-start)
print(get_accuracy(params_repl))
# Plot the loss and learning-rate curves and save them.
plt.plot(losses)
plt.savefig(f'./test_result/{model_name}_{dataset}/losses_plot.png')
plt.close()
plt.plot(lrs)
plt.savefig(f'./test_result/{model_name}_{dataset}/lrs_plot.png')
plt.close()
# On CIFAR10, Mixer-B/16 should reach ~96.7% and ViT-B/32 ~97.7% (both @224).
exit()
# exit()
'''Inference'''
# # Download a pre-trained model.
# model_name = 'ViT-L_16'
# model_config = models_config.MODEL_CONFIGS[model_name]
# print(model_config)
# model = models.VisionTransformer(num_classes=1000, **model_config)
# assert os.path.exists(f'./test_result/{model_name}_imagenet2012.npz')
# # Load and convert the pre-trained checkpoint.
# params = checkpoint.load(f'./test_result/{model_name}_imagenet2012.npz')
# params['pre_logits'] = {} # Need to restore empty leaf for Flax.
# # Get the ImageNet labels.
# # get_ipython().system('wget https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt')
# imagenet_labels = dict(enumerate(open('./test_result/ilsvrc2012_wordnet_lemmas.txt')))
# # Get a random image with the correct dimensions.
# # resolution = 224 if model_name.startswith('Mixer') else 384
# # get_ipython().system('wget https://picsum.photos/$resolution -O picsum.jpg')
# img = PIL.Image.open('./test_result/picsum.jpg')
# # Predict on a batch with a single item (note the very inefficient TPU usage for a single example...).
# logits, = model.apply(dict(params=params), (np.array(img) / 128 - 1)[None, ...], train=False)
# preds = np.array(jax.nn.softmax(logits))
# for idx in preds.argsort()[:-11:-1]:
# print(f'{preds[idx]:.5f} : {imagenet_labels[idx]}', end='')
export HIP_VISIBLE_DEVICES=4   # Select the GPU/DCU device index(es) for training (list several indices for multi-GPU/DCU).
export USE_MIOPEN_BATCHNORM=1  # Enable MIOpen batch normalization to speed up training(?).
# Flags used below, in order:
# --workdir                : where to store the model
# model/dataset pair       : which model and dataset to fine-tune on
# --config.pretrained_dir  : location of the pre-trained checkpoints
# --config.accum_steps     : gradient accumulation (tpu=8, gpu/dcu=64)
# --config.total_steps     : fine-tuning steps
# --config.warmup_steps    : learning-rate warmup steps
# --config.batch           : training batch size
# --config.pp.crop         : image crop resolution
# --config.optim_dtype     : optimizer precision
for model_datasets in 'b16,cifar10' 'b16,cifar100' 'l16,cifar10' 'l16,cifar100'
do
python -m vit_jax.main --workdir=$(pwd)/test_result/dcu/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:$model_datasets \
--config.pretrained_dir=$(pwd)/test_result \
--config.accum_steps=64 \
--config.total_steps=500 \
--config.warmup_steps=50 \
--config.batch=512 \
--config.pp.crop=384 \
--config.optim_dtype='bfloat16'
done
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Current vision_transformer version at head on Github."""
__version__ = "0.0.8"
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from collections import abc
import re
from absl import logging
import flax
from flax.training import checkpoints
import jax.numpy as jnp
import numpy as np
from packaging import version
import pandas as pd
import scipy.ndimage
from tensorflow.io import gfile # pylint: disable=import-error
import tqdm
def _flatten_dict(d, parent_key='', sep='/'):
"""Flattens a dictionary, keeping empty leaves."""
items = []
for k, v in d.items():
path = parent_key + sep + k if parent_key else k
if isinstance(v, abc.Mapping):
items.extend(_flatten_dict(v, path, sep=sep).items())
else:
items.append((path, v))
# Keeps the empty dict if it was set explicitly.
if parent_key and not d:
items.append((parent_key, {}))
return dict(items)
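# Example: _flatten_dict({'a': {'b': 1, 'c': {}}, 'd': 2}) returns
# {'a/b': 1, 'a/c': {}, 'd': 2} (note that the empty leaf is kept).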
def inspect_params(*,
params,
expected,
fail_if_extra=True,
fail_if_missing=True):
"""Inspects whether the params are consistent with the expected keys."""
params_flat = _flatten_dict(params)
expected_flat = _flatten_dict(expected)
missing_keys = expected_flat.keys() - params_flat.keys()
extra_keys = params_flat.keys() - expected_flat.keys()
# Adds back empty dict explicitly, to support layers without weights.
# Context: FLAX ignores empty dict during serialization.
empty_keys = set()
for k in missing_keys:
if isinstance(expected_flat[k], dict) and not expected_flat[k]:
params[k] = {}
empty_keys.add(k)
missing_keys -= empty_keys
if empty_keys:
logging.warning('Inspect recovered empty keys:\n%s', empty_keys)
if missing_keys:
logging.info('Inspect missing keys:\n%s', missing_keys)
if extra_keys:
logging.info('Inspect extra keys:\n%s', extra_keys)
if (missing_keys and fail_if_missing) or (extra_keys and fail_if_extra):
raise ValueError(f'Missing params from checkpoint: {missing_keys}.\n'
f'Extra params in checkpoint: {extra_keys}.\n'
f'Restored params from checkpoint: {params_flat.keys()}.\n'
f'Expected params from code: {expected_flat.keys()}.')
return params
def recover_tree(keys, values):
"""Recovers a tree as a nested dict from flat names and values.
  This function is useful for analyzing checkpoints without needing access to
  the exact source code of the experiment. In particular, it can be used to
  extract and reuse various subtrees of the checkpoint, e.g. the subtree of
  parameters.
Args:
keys: a list of keys, where '/' is used as separator between nodes.
values: a list of leaf values.
Returns:
A nested tree-like dict.
"""
tree = {}
sub_trees = collections.defaultdict(list)
for k, v in zip(keys, values):
if '/' not in k:
tree[k] = v
else:
k_left, k_right = k.split('/', 1)
sub_trees[k_left].append((k_right, v))
for k, kv_pairs in sub_trees.items():
k_subtree, v_subtree = zip(*kv_pairs)
tree[k] = recover_tree(k_subtree, v_subtree)
return tree
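# Example: recover_tree(['a/b', 'a/c', 'd'], [1, 2, 3]) returns
# {'a': {'b': 1, 'c': 2}, 'd': 3}.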
def copy(src, dst, progress=True, block_size=1024 * 1024 * 10):
"""Copies a file with progress bar.
Args:
src: Source file. Path must be readable by `tf.io.gfile`.
    dst: Destination file. Path must be writable by `tf.io.gfile`.
    progress: Whether to show a progress bar.
block_size: Size of individual blocks to be read/written.
"""
stats = gfile.stat(src)
n = int(np.ceil(stats.length / block_size))
range_or_trange = tqdm.trange if progress else range
with gfile.GFile(src, 'rb') as fin:
with gfile.GFile(dst, 'wb') as fout:
for _ in range_or_trange(n):
fout.write(fin.read(block_size))
def load(path):
"""Loads params from a checkpoint previously stored with `save()`."""
with gfile.GFile(path, 'rb') as f:
ckpt_dict = np.load(f, allow_pickle=False)
keys, values = zip(*list(ckpt_dict.items()))
params = checkpoints.convert_pre_linen(recover_tree(keys, values))
if isinstance(params, flax.core.FrozenDict):
params = params.unfreeze()
if version.parse(flax.__version__) >= version.parse('0.3.6'):
params = _fix_groupnorm(params)
return params
def _fix_groupnorm(params):
# See https://github.com/google/flax/issues/1721
regex = re.compile(r'gn(\d+|_root|_proj)$')
def fix_gn(args):
path, array = args
if len(path) > 1 and regex.match(
path[-2]) and path[-1] in ('bias', 'scale'):
array = array.squeeze()
return (path, array)
return flax.traverse_util.unflatten_dict(
dict(map(fix_gn,
flax.traverse_util.flatten_dict(params).items())))
def load_pretrained(*, pretrained_path, init_params, model_config):
"""加载/转换一个预训练的检查点进行微调
Args:
pretrained_path: File pointing to pretrained checkpoint.
init_params: Parameters from model. Will be used for the head of the model
and to verify that the model is compatible with the stored checkpoint.
model_config: Configuration of the model. Will be used to configure the head
and rescale the position embeddings.
Returns:
Parameters like `init_params`, but loaded with pretrained weights from
`pretrained_path` and adapted accordingly.
"""
restored_params = inspect_params(
params=load(pretrained_path),
expected=init_params,
fail_if_extra=False,
fail_if_missing=False)
# The following allows implementing fine-tuning head variants depending on the
# value of `representation_size` in the fine-tuning job:
# - `None` : drop the whole head and attach a nn.Linear.
  # - the same number as in pre-training: keep the head but reset the last
  #   layer (logits) for the new task.
if model_config.get('representation_size') is None:
if 'pre_logits' in restored_params:
logging.info('load_pretrained: drop-head variant')
restored_params['pre_logits'] = {}
restored_params['head']['kernel'] = init_params['head']['kernel']
restored_params['head']['bias'] = init_params['head']['bias']
if 'posembed_input' in restored_params.get('Transformer', {}):
# Rescale the grid of position embeddings. Param shape is (1,N,1024)
posemb = restored_params['Transformer']['posembed_input']['pos_embedding']
posemb_new = init_params['Transformer']['posembed_input']['pos_embedding']
if posemb.shape != posemb_new.shape:
logging.info('load_pretrained: resized variant: %s to %s', posemb.shape,
posemb_new.shape)
posemb = interpolate_posembed(
posemb, posemb_new.shape[1], model_config.classifier == 'token')
restored_params['Transformer']['posembed_input']['pos_embedding'] = posemb
if version.parse(flax.__version__) >= version.parse('0.3.6'):
restored_params = _fix_groupnorm(restored_params)
return flax.core.freeze(restored_params)
def interpolate_posembed(posemb, num_tokens: int, has_class_token: bool):
"""Interpolate given positional embedding parameters into a new shape.
Args:
posemb: positional embedding parameters.
num_tokens: desired number of tokens.
has_class_token: True if the positional embedding parameters contain a
class token.
Returns:
Positional embedding parameters interpolated into the new shape.
"""
assert posemb.shape[0] == 1
if has_class_token:
posemb_tok, posemb_grid = posemb[:, :1], posemb[0, 1:]
num_tokens -= 1
else:
posemb_tok, posemb_grid = posemb[:, :0], posemb[0, 0:]
gs_old = int(np.sqrt(len(posemb_grid)))
gs_new = int(np.sqrt(num_tokens))
logging.info('interpolate_posembed: grid-size from %s to %s', gs_old, gs_new)
assert gs_old ** 2 == len(posemb_grid), f'{gs_old ** 2} != {len(posemb_grid)}'
assert gs_new ** 2 == num_tokens, f'{gs_new ** 2} != {num_tokens}'
posemb_grid = posemb_grid.reshape(gs_old, gs_old, -1)
zoom = (gs_new / gs_old, gs_new / gs_old, 1)
posemb_grid = scipy.ndimage.zoom(posemb_grid, zoom, order=1)
posemb_grid = posemb_grid.reshape(1, gs_new * gs_new, -1)
return jnp.array(np.concatenate([posemb_tok, posemb_grid], axis=1))
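# Example: a (1, 197, 768) embedding (14x14 grid + class token, i.e. 224px crop
# with 16px patches) interpolated with num_tokens=577 and has_class_token=True
# yields shape (1, 577, 768) (24x24 grid + class token, i.e. 384px crop).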
def get_augreg_df(directory='gs://vit_models/augreg'):
"""Reads DataFrame describing AugReg models from GCS bucket.
This function returns a dataframe that describes the models that were
published as part of the paper "How to train your ViT? Data, Augmentation, and
Regularization in Vision Transformers" (https://arxiv.org/abs/TODO).
Note that every row in the dataset corresponds to a pre-training checkpoint
(column "filename"), and a fine-tuning checkpoint (column "adapt_filename").
Every pre-trained checkpoint is fine-tuned many times.
Args:
directory: Pathname of directory containing "index.csv"
Returns:
Dataframe with the following columns:
- name: Name of the model, as used in descriptions in paper (e.g. "B/16",
or "R26+S/32").
- ds: Dataset used for pre-training: "i1k" (300 epochs), "i21k" (300
epochs), and "i21k_30" (30 epochs).
- lr: Learning rate used for pre-training.
- aug: Data augmentation used for pre-training. Refer to paper for
details.
- wd: Weight decay used for pre-training.
- do: Dropout used for pre-training.
- sd: Stochastic depth used for pre-training.
    - best_val: Best accuracy on the validation set reached during
      pre-training. Note that "validation set" can refer to a minival (i.e. a
      split held out from the training set, as for example for the
      "imagenet2012" dataset).
    - final_val: Final validation set accuracy.
    - final_test: Final test set accuracy (in cases where there is no official
      test set, like for "imagenet2012", this refers to the validation set).
- adapt_ds: What dataset was used for fine-tuning.
- adapt_lr: Learning rate used for fine-tuning.
- adapt_steps: Number of steps used for fine-tuning (with a fixed batch
size of 512).
- adapt_resolution: Resolution that was used for fine-tuning.
- adapt_final_val: Final validation accuracy after fine-tuning.
- adapt_final_test: Final test accuracy after fine-tuning.
- params: Number of parameters.
    - infer_samples_per_sec: Number of samples per second during inference on
      a V100 GPU (measured with the `timm` implementation).
- filename: Name of the pre-training checkpoint. Can be found at
"gs://vit_models/augreg/{filename}.npz".
- adapt_filename: Name of the fine-tuning checkpoint.
"""
with gfile.GFile(f'{directory}/index.csv') as f:
return pd.read_csv(f)
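# Illustrative usage (requires access to the GCS bucket; column names as
# documented above, filter values are examples only):
#   df = get_augreg_df()
#   b16 = df[(df['name'] == 'B/16') & (df['adapt_ds'] == 'cifar100')]
#   best = b16.sort_values('adapt_final_test', ascending=False).iloc[0]
#   print(best['adapt_filename'])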
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tempfile
from absl.testing import absltest
import jax
import jax.numpy as jnp
from vit_jax import checkpoint
from vit_jax import models
from vit_jax import test_utils
from vit_jax.configs import models as config_lib
class CheckpointTest(absltest.TestCase):
def test_load_pretrained(self):
tempdir = tempfile.gettempdir()
model_config = config_lib.get_testing_config()
test_utils.create_checkpoint(model_config, f'{tempdir}/testing.npz')
model = models.VisionTransformer(num_classes=2, **model_config)
variables = model.init(
jax.random.PRNGKey(0),
inputs=jnp.ones([1, 32, 32, 3], jnp.float32),
train=False,
)
checkpoint.load_pretrained(
pretrained_path=f'{tempdir}/testing.npz',
init_params=variables['params'],
model_config=model_config)
if __name__ == '__main__':
absltest.main()
# Configs
This directory contains `ml_collections.ConfigDict` configurations. It is
structured in a way that factors out common configuration parameters into
`common.py` and model configurations into `models.py`.
To select one of these configurations you can specify it on the command line:
```sh
python -m vit_jax.main --config=$(pwd)/vit_jax/configs/vit.py:b32,cifar10
```
The above example specifies the additional parameter `b32,cifar10` that is
parsed in the file `vit.py` and parametrizes the configuration.
Note that you can override any configuration parameters at the command line by
specifying additional parameters like `--config.accum_steps=1`.
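For illustration, a parameterized config file follows roughly this pattern (a minimal sketch only, assuming the part after the colon is handed to `get_config` as a string; the actual `vit.py` defines many more fields):
```python
import ml_collections

def get_config(model_dataset: str) -> ml_collections.ConfigDict:
  """`model_dataset` is the string after the colon, e.g. 'b32,cifar10'."""
  model, dataset = model_dataset.split(',')
  config = ml_collections.ConfigDict()
  config.model_name = model
  config.dataset = dataset
  config.accum_steps = 8  # Overridable at the command line, e.g. --config.accum_steps=1.
  return config
```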
# Copyright 2024 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.