* `-`: Result unavailable because no checkpoints / architectures have been published.
* NASNet, ENAS, AmoebaNet, PNAS and DARTS share the same implementation and differ only in configuration.
* NasBench101 and NasBench201 will proceed directly to stage 3, as training them is cheaper than locating a checkpoint.
## Planned Spaces
We welcome suggestions and contributions.
- [AutoFormer](https://openaccess.thecvf.com/content/ICCV2021/html/Chen_AutoFormer_Searching_Transformers_for_Visual_Recognition_ICCV_2021_paper.html), [PR under review](https://github.com/microsoft/nni/pull/4551)
- [NAS-BERT](https://arxiv.org/abs/2105.14444)
- A speech-related space, such as [LightSpeech](https://arxiv.org/abs/2102.04040)
## Searched Model Zoo
Create a searched model with pretrained weights like the following:
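For instance, a minimal sketch (the module path, space class, and model name below are assumptions and may differ across NNI versions):

```python
import torch

# Module path is an assumption: newer NNI releases expose the hub as
# `nni.nas.hub.pytorch`, older ones as `nni.retiarii.hub.pytorch`.
from nni.nas.hub.pytorch import MobileNetV3Space

# Instantiate a concrete searched architecture from the space and load its
# pretrained weights ('mobilenetv3-small-100' is an assumed identifier).
model = MobileNetV3Space.load_searched_model(
    'mobilenetv3-small-100', pretrained=True, download=True)

# The result is a plain PyTorch module and can be evaluated directly.
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # e.g. torch.Size([1, 1000]) for an ImageNet model
```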
The metrics listed above are obtained by evaluating the checkpoints provided by the original authors and converted to the NNI NAS format with [these scripts](https://github.com/ultmaster/spacehub-conversion). Note that some metrics can be slightly higher or lower than the originally reported numbers, because of subtle differences in data preprocessing, operator implementations (e.g., a third-party hswish vs. ``nn.Hardswish``), or even the library versions we use. Most of these deviations are small (~0.1%). We will retrain these architectures in a reproducible and fair training setting, and update the results once that training is complete.
Latency / FLOPs numbers are still missing from the table; measuring them is a separate task.
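As a rough sketch of how such numbers could be collected (not an established measurement protocol; it assumes the third-party `fvcore` package and a `model` instantiated as in the example above):

```python
import time
import torch
from fvcore.nn import FlopCountAnalysis  # third-party FLOPs counter, assumed installed

model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

# FLOPs for a single forward pass (fvcore reports multiply-add counts).
flops = FlopCountAnalysis(model, dummy_input)
print(f"FLOPs: {flops.total() / 1e6:.1f} M")

# Crude CPU latency estimate: average wall-clock time over repeated forward passes.
with torch.no_grad():
    for _ in range(10):   # warm-up iterations
        model(dummy_input)
    start = time.time()
    for _ in range(100):
        model(dummy_input)
    print(f"Latency: {(time.time() - start) / 100 * 1000:.2f} ms/image")
```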