chenpangpang / transformers · Commit 640e1b6c

Remove tokenizers from the doc table (#24963)

Unverified commit, authored Jul 21, 2023 by Sylvain Gugger, committed via GitHub on Jul 21, 2023.
Parent: 0511369a
Showing 2 changed files with 202 additions and 212 deletions.
docs/source/en/index.md    +200 −200
utils/check_table.py       +2 −12
docs/source/en/index.md  (view file @ 640e1b6c)
@@ -278,205 +278,205 @@ Flax), PyTorch, and/or TensorFlow.
 <!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
 
-| Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
+| Model | PyTorch support | TensorFlow support | Flax Support |
-|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
+|:-----------------------------:|:---------------:|:------------------:|:------------:|
-| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
+| ALBERT | ✅ | ✅ | ✅ |
-| ALIGN | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ALIGN | ✅ | ❌ | ❌ |
-| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
+| AltCLIP | ✅ | ❌ | ❌ |
-| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Audio Spectrogram Transformer | ✅ | ❌ | ❌ |
-| Autoformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Autoformer | ✅ | ❌ | ❌ |
-| Bark | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Bark | ✅ | ❌ | ❌ |
-| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
+| BART | ✅ | ✅ | ✅ |
-| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
+| BEiT | ✅ | ❌ | ✅ |
-| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
+| BERT | ✅ | ✅ | ✅ |
-| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
+| Bert Generation | ✅ | ❌ | ❌ |
-| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
+| BigBird | ✅ | ❌ | ✅ |
-| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
+| BigBird-Pegasus | ✅ | ❌ | ❌ |
-| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
+| BioGpt | ✅ | ❌ | ❌ |
-| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| BiT | ✅ | ❌ | ❌ |
-| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Blenderbot | ✅ | ✅ | ✅ |
-| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
+| BlenderbotSmall | ✅ | ✅ | ✅ |
-| BLIP | ❌ | ❌ | ✅ | ✅ | ❌ |
+| BLIP | ✅ | ✅ | ❌ |
-| BLIP-2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| BLIP-2 | ✅ | ❌ | ❌ |
-| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
+| BLOOM | ✅ | ❌ | ❌ |
-| BridgeTower | ❌ | ❌ | ✅ | ❌ | ❌ |
+| BridgeTower | ✅ | ❌ | ❌ |
-| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| CamemBERT | ✅ | ✅ | ❌ |
-| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
+| CANINE | ✅ | ❌ | ❌ |
-| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Chinese-CLIP | ✅ | ❌ | ❌ |
-| CLAP | ❌ | ❌ | ✅ | ❌ | ❌ |
+| CLAP | ✅ | ❌ | ❌ |
-| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
+| CLIP | ✅ | ✅ | ✅ |
-| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
+| CLIPSeg | ✅ | ❌ | ❌ |
-| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
+| CodeGen | ✅ | ❌ | ❌ |
-| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Conditional DETR | ✅ | ❌ | ❌ |
-| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| ConvBERT | ✅ | ✅ | ❌ |
-| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| ConvNeXT | ✅ | ✅ | ❌ |
-| ConvNeXTV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ConvNeXTV2 | ✅ | ❌ | ❌ |
-| CPM-Ant | ✅ | ❌ | ✅ | ❌ | ❌ |
+| CPM-Ant | ✅ | ❌ | ❌ |
-| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
+| CTRL | ✅ | ✅ | ❌ |
-| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| CvT | ✅ | ✅ | ❌ |
-| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Data2VecAudio | ✅ | ❌ | ❌ |
-| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Data2VecText | ✅ | ❌ | ❌ |
-| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
+| Data2VecVision | ✅ | ✅ | ❌ |
-| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
+| DeBERTa | ✅ | ✅ | ❌ |
-| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
+| DeBERTa-v2 | ✅ | ✅ | ❌ |
-| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Decision Transformer | ✅ | ❌ | ❌ |
-| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Deformable DETR | ✅ | ❌ | ❌ |
-| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| DeiT | ✅ | ✅ | ❌ |
-| DETA | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DETA | ✅ | ❌ | ❌ |
-| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DETR | ✅ | ❌ | ❌ |
-| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DiNAT | ✅ | ❌ | ❌ |
-| DINOv2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DINOv2 | ✅ | ❌ | ❌ |
-| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
+| DistilBERT | ✅ | ✅ | ✅ |
-| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DonutSwin | ✅ | ❌ | ❌ |
-| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
+| DPR | ✅ | ✅ | ❌ |
-| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| DPT | ✅ | ❌ | ❌ |
-| EfficientFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
+| EfficientFormer | ✅ | ✅ | ❌ |
-| EfficientNet | ❌ | ❌ | ✅ | ❌ | ❌ |
+| EfficientNet | ✅ | ❌ | ❌ |
-| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
+| ELECTRA | ✅ | ✅ | ✅ |
-| EnCodec | ❌ | ❌ | ✅ | ❌ | ❌ |
+| EnCodec | ✅ | ❌ | ❌ |
-| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
+| Encoder decoder | ✅ | ✅ | ✅ |
-| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ERNIE | ✅ | ❌ | ❌ |
-| ErnieM | ✅ | ❌ | ✅ | ❌ | ❌ |
+| ErnieM | ✅ | ❌ | ❌ |
-| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
+| ESM | ✅ | ✅ | ❌ |
-| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
+| FairSeq Machine-Translation | ✅ | ❌ | ❌ |
-| Falcon | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Falcon | ✅ | ❌ | ❌ |
-| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
+| FlauBERT | ✅ | ✅ | ❌ |
-| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
+| FLAVA | ✅ | ❌ | ❌ |
-| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
+| FNet | ✅ | ❌ | ❌ |
-| FocalNet | ❌ | ❌ | ✅ | ❌ | ❌ |
+| FocalNet | ✅ | ❌ | ❌ |
-| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Funnel Transformer | ✅ | ✅ | ❌ |
-| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| GIT | ✅ | ❌ | ❌ |
-| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
+| GLPN | ✅ | ❌ | ❌ |
-| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
+| GPT Neo | ✅ | ❌ | ✅ |
-| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
+| GPT NeoX | ✅ | ❌ | ❌ |
-| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
+| GPT NeoX Japanese | ✅ | ❌ | ❌ |
-| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
+| GPT-J | ✅ | ✅ | ✅ |
-| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
+| GPT-Sw3 | ✅ | ✅ | ✅ |
-| GPTBigCode | ❌ | ❌ | ✅ | ❌ | ❌ |
+| GPTBigCode | ✅ | ❌ | ❌ |
-| GPTSAN-japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
+| GPTSAN-japanese | ✅ | ❌ | ❌ |
-| Graphormer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Graphormer | ✅ | ❌ | ❌ |
-| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| GroupViT | ✅ | ✅ | ❌ |
-| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
+| Hubert | ✅ | ✅ | ❌ |
-| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| I-BERT | ✅ | ❌ | ❌ |
-| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ImageGPT | ✅ | ❌ | ❌ |
-| Informer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Informer | ✅ | ❌ | ❌ |
-| InstructBLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
+| InstructBLIP | ✅ | ❌ | ❌ |
-| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
+| Jukebox | ✅ | ❌ | ❌ |
-| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
+| LayoutLM | ✅ | ✅ | ❌ |
-| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
+| LayoutLMv2 | ✅ | ❌ | ❌ |
-| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
+| LayoutLMv3 | ✅ | ✅ | ❌ |
-| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
+| LED | ✅ | ✅ | ❌ |
-| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| LeViT | ✅ | ❌ | ❌ |
-| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| LiLT | ✅ | ❌ | ❌ |
-| LLaMA | ✅ | ✅ | ✅ | ❌ | ❌ |
+| LLaMA | ✅ | ❌ | ❌ |
-| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Longformer | ✅ | ✅ | ❌ |
-| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
+| LongT5 | ✅ | ❌ | ✅ |
-| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
+| LUKE | ✅ | ❌ | ❌ |
-| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| LXMERT | ✅ | ✅ | ❌ |
-| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
+| M-CTC-T | ✅ | ❌ | ❌ |
-| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
+| M2M100 | ✅ | ❌ | ❌ |
-| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
+| Marian | ✅ | ✅ | ✅ |
-| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
+| MarkupLM | ✅ | ❌ | ❌ |
-| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Mask2Former | ✅ | ❌ | ❌ |
-| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MaskFormer | ✅ | ❌ | ❌ |
-| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
+| MaskFormerSwin | ❌ | ❌ | ❌ |
-| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
+| mBART | ✅ | ✅ | ✅ |
-| MEGA | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MEGA | ✅ | ❌ | ❌ |
-| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Megatron-BERT | ✅ | ❌ | ❌ |
-| MGP-STR | ✅ | ❌ | ✅ | ❌ | ❌ |
+| MGP-STR | ✅ | ❌ | ❌ |
-| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| MobileBERT | ✅ | ✅ | ❌ |
-| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MobileNetV1 | ✅ | ❌ | ❌ |
-| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MobileNetV2 | ✅ | ❌ | ❌ |
-| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
+| MobileViT | ✅ | ✅ | ❌ |
-| MobileViTV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MobileViTV2 | ✅ | ❌ | ❌ |
-| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
+| MPNet | ✅ | ✅ | ❌ |
-| MRA | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MRA | ✅ | ❌ | ❌ |
-| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
+| MT5 | ✅ | ✅ | ✅ |
-| MusicGen | ❌ | ❌ | ✅ | ❌ | ❌ |
+| MusicGen | ✅ | ❌ | ❌ |
-| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
+| MVP | ✅ | ❌ | ❌ |
-| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| NAT | ✅ | ❌ | ❌ |
-| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Nezha | ✅ | ❌ | ❌ |
-| NLLB-MOE | ❌ | ❌ | ✅ | ❌ | ❌ |
+| NLLB-MOE | ✅ | ❌ | ❌ |
-| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Nyströmformer | ✅ | ❌ | ❌ |
-| OneFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| OneFormer | ✅ | ❌ | ❌ |
-| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| OpenAI GPT | ✅ | ✅ | ❌ |
-| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
+| OpenAI GPT-2 | ✅ | ✅ | ✅ |
-| OpenLlama | ❌ | ❌ | ✅ | ❌ | ❌ |
+| OpenLlama | ✅ | ❌ | ❌ |
-| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
+| OPT | ✅ | ✅ | ✅ |
-| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| OWL-ViT | ✅ | ❌ | ❌ |
-| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Pegasus | ✅ | ✅ | ✅ |
-| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
+| PEGASUS-X | ✅ | ❌ | ❌ |
-| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
+| Perceiver | ✅ | ❌ | ❌ |
-| Pix2Struct | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Pix2Struct | ✅ | ❌ | ❌ |
-| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
+| PLBart | ✅ | ❌ | ❌ |
-| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| PoolFormer | ✅ | ❌ | ❌ |
-| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
+| ProphetNet | ✅ | ❌ | ❌ |
-| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
+| QDQBert | ✅ | ❌ | ❌ |
-| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
+| RAG | ✅ | ✅ | ❌ |
-| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
+| REALM | ✅ | ❌ | ❌ |
-| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
+| Reformer | ✅ | ❌ | ❌ |
-| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
+| RegNet | ✅ | ✅ | ✅ |
-| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
+| RemBERT | ✅ | ✅ | ❌ |
-| ResNet | ❌ | ❌ | ✅ | ✅ | ✅ |
+| ResNet | ✅ | ✅ | ✅ |
-| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
+| RetriBERT | ✅ | ❌ | ❌ |
-| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
+| RoBERTa | ✅ | ✅ | ✅ |
-| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
+| RoBERTa-PreLayerNorm | ✅ | ✅ | ✅ |
-| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
+| RoCBert | ✅ | ❌ | ❌ |
-| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
+| RoFormer | ✅ | ✅ | ✅ |
-| RWKV | ❌ | ❌ | ✅ | ❌ | ❌ |
+| RWKV | ✅ | ❌ | ❌ |
-| SAM | ❌ | ❌ | ✅ | ✅ | ❌ |
+| SAM | ✅ | ✅ | ❌ |
-| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
+| SegFormer | ✅ | ✅ | ❌ |
-| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
+| SEW | ✅ | ❌ | ❌ |
-| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
+| SEW-D | ✅ | ❌ | ❌ |
-| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
+| Speech Encoder decoder | ✅ | ❌ | ✅ |
-| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
+| Speech2Text | ✅ | ✅ | ❌ |
-| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
+| Speech2Text2 | ❌ | ❌ | ❌ |
-| SpeechT5 | ✅ | ❌ | ✅ | ❌ | ❌ |
+| SpeechT5 | ✅ | ❌ | ❌ |
-| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
+| Splinter | ✅ | ❌ | ❌ |
-| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
+| SqueezeBERT | ✅ | ❌ | ❌ |
-| SwiftFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| SwiftFormer | ✅ | ❌ | ❌ |
-| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
+| Swin Transformer | ✅ | ✅ | ❌ |
-| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Swin Transformer V2 | ✅ | ❌ | ❌ |
-| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Swin2SR | ✅ | ❌ | ❌ |
-| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
+| SwitchTransformers | ✅ | ❌ | ❌ |
-| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
+| T5 | ✅ | ✅ | ✅ |
-| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Table Transformer | ✅ | ❌ | ❌ |
-| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
+| TAPAS | ✅ | ✅ | ❌ |
-| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Time Series Transformer | ✅ | ❌ | ❌ |
-| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| TimeSformer | ✅ | ❌ | ❌ |
-| TimmBackbone | ❌ | ❌ | ❌ | ❌ | ❌ |
+| TimmBackbone | ❌ | ❌ | ❌ |
-| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Trajectory Transformer | ✅ | ❌ | ❌ |
-| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
+| Transformer-XL | ✅ | ✅ | ❌ |
-| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
+| TrOCR | ✅ | ❌ | ❌ |
-| TVLT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| TVLT | ✅ | ❌ | ❌ |
-| UMT5 | ❌ | ❌ | ✅ | ❌ | ❌ |
+| UMT5 | ✅ | ❌ | ❌ |
-| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
+| UniSpeech | ✅ | ❌ | ❌ |
-| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
+| UniSpeechSat | ✅ | ❌ | ❌ |
-| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
+| UPerNet | ✅ | ❌ | ❌ |
-| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
+| VAN | ✅ | ❌ | ❌ |
-| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
+| VideoMAE | ✅ | ❌ | ❌ |
-| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ViLT | ✅ | ❌ | ❌ |
-| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
+| Vision Encoder decoder | ✅ | ✅ | ✅ |
-| VisionTextDualEncoder | ❌ | ❌ | ✅ | ✅ | ✅ |
+| VisionTextDualEncoder | ✅ | ✅ | ✅ |
-| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| VisualBERT | ✅ | ❌ | ❌ |
-| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
+| ViT | ✅ | ✅ | ✅ |
-| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ViT Hybrid | ✅ | ❌ | ❌ |
-| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
+| ViTMAE | ✅ | ✅ | ❌ |
-| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ViTMSN | ✅ | ❌ | ❌ |
-| ViViT | ❌ | ❌ | ✅ | ❌ | ❌ |
+| ViViT | ✅ | ❌ | ❌ |
-| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
+| Wav2Vec2 | ✅ | ✅ | ✅ |
-| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Wav2Vec2-Conformer | ✅ | ❌ | ❌ |
-| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
+| WavLM | ✅ | ❌ | ❌ |
-| Whisper | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Whisper | ✅ | ✅ | ✅ |
-| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
+| X-CLIP | ✅ | ❌ | ❌ |
-| X-MOD | ❌ | ❌ | ✅ | ❌ | ❌ |
+| X-MOD | ✅ | ❌ | ❌ |
-| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
+| XGLM | ✅ | ✅ | ✅ |
-| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
+| XLM | ✅ | ✅ | ❌ |
-| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
+| XLM-ProphetNet | ✅ | ❌ | ❌ |
-| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
+| XLM-RoBERTa | ✅ | ✅ | ✅ |
-| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
+| XLM-RoBERTa-XL | ✅ | ❌ | ❌ |
-| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
+| XLNet | ✅ | ✅ | ❌ |
-| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
+| YOLOS | ✅ | ❌ | ❌ |
-| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |
+| YOSO | ✅ | ❌ | ❌ |
 <!-- End table-->
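The per-row change is purely mechanical: drop the two tokenizer cells and keep the framework ones. A throwaway sketch (a hypothetical helper, not part of the repo) that derives a new row from an old one:

```python
def drop_tokenizer_columns(row: str) -> str:
    """Project an old 6-column row (Model, Tokenizer slow, Tokenizer fast,
    PyTorch, TensorFlow, Flax) onto the new 4-column layout."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    kept = [cells[0]] + cells[3:]  # keep Model plus the three framework cells
    return "| " + " | ".join(kept) + " |"

assert drop_tokenizer_columns(
    "| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |"
) == "| ALBERT | ✅ | ✅ | ✅ |"
```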
utils/check_table.py  (view file @ 640e1b6c)
@@ -93,8 +93,6 @@ def get_model_table_from_auto_modules():
     model_name_to_prefix = {name: config.replace("Config", "") for name, config in model_name_to_config.items()}
 
     # Dictionaries flagging if each model prefix has a slow/fast tokenizer, backend in PT/TF/Flax.
-    slow_tokenizers = collections.defaultdict(bool)
-    fast_tokenizers = collections.defaultdict(bool)
     pt_models = collections.defaultdict(bool)
     tf_models = collections.defaultdict(bool)
     flax_models = collections.defaultdict(bool)
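A quick note on why these flags are `collections.defaultdict(bool)`: a prefix that never shows up while scanning the module reads back as `False`, so it renders as ❌ without any key checks. A minimal sketch with toy names (the ✅/❌ `check` mapping is assumed to mirror the script's):

```python
import collections

# Flags default to False, so a prefix that was never seen while scanning
# simply reads back as "no support".
tf_models = collections.defaultdict(bool)
tf_models["Bert"] = True  # pretend a TFBert* class was found

check = {True: "✅", False: "❌"}  # assumed to mirror the script's mapping
print(check[tf_models["Bert"]])   # ✅
print(check[tf_models["Bark"]])   # ❌ (never set, defaults to False)
```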
@@ -102,13 +100,7 @@ def get_model_table_from_auto_modules():
     # Let's lookup through all transformers object (once).
     for attr_name in dir(transformers_module):
         lookup_dict = None
-        if attr_name.endswith("Tokenizer"):
-            lookup_dict = slow_tokenizers
-            attr_name = attr_name[:-9]
-        elif attr_name.endswith("TokenizerFast"):
-            lookup_dict = fast_tokenizers
-            attr_name = attr_name[:-13]
-        elif _re_tf_models.match(attr_name) is not None:
+        if _re_tf_models.match(attr_name) is not None:
             lookup_dict = tf_models
             attr_name = _re_tf_models.match(attr_name).groups()[0]
         elif _re_flax_models.match(attr_name) is not None:
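After this change an exported name is classified by framework prefix alone; the tokenizer suffix checks above are gone. A self-contained sketch of that dispatch, where the exact `_re_tf_models` / `_re_flax_models` patterns are assumptions rather than copies of the ones defined at the top of the script:

```python
import re

# Assumed shapes for the framework regexes; the real patterns live at the
# top of utils/check_table.py.
_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
_re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")

for attr_name in ["TFBertModel", "FlaxBertModel", "BertTokenizerFast"]:
    if _re_tf_models.match(attr_name) is not None:
        print(attr_name, "-> TF, prefix", _re_tf_models.match(attr_name).groups()[0])
    elif _re_flax_models.match(attr_name) is not None:
        print(attr_name, "-> Flax, prefix", _re_flax_models.match(attr_name).groups()[0])
    else:
        print(attr_name, "-> not a framework class; ignored after this commit")
```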
@@ -129,7 +121,7 @@ def get_model_table_from_auto_modules():
     # Let's build that table!
     model_names = list(model_name_to_config.keys())
     model_names.sort(key=str.lower)
-    columns = ["Model", "Tokenizer slow", "Tokenizer fast", "PyTorch support", "TensorFlow support", "Flax Support"]
+    columns = ["Model", "PyTorch support", "TensorFlow support", "Flax Support"]
     # We'll need widths to properly display everything in the center (+2 is to leave one extra space on each side).
     widths = [len(c) + 2 for c in columns]
     widths[0] = max([len(name) for name in model_names]) + 2
@@ -144,8 +136,6 @@ def get_model_table_from_auto_modules():
         prefix = model_name_to_prefix[name]
         line = [
             name,
-            check[slow_tokenizers[prefix]],
-            check[fast_tokenizers[prefix]],
             check[pt_models[prefix]],
             check[tf_models[prefix]],
             check[flax_models[prefix]],
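Taken together, the table build that remains is: sort model names case-insensitively, size each column (`len(c) + 2` leaves one space of padding per side, with the first column widened to the longest model name), and center one ✅/❌ cell per framework. A runnable sketch with toy data standing in for the scanned auto modules:

```python
# Toy stand-ins for the coverage dictionaries filled in by the module scan.
check = {True: "✅", False: "❌"}  # assumed to mirror the script's mapping
pt_models = {"ALBERT": True, "TimmBackbone": False}
tf_models = {"ALBERT": True, "TimmBackbone": False}
flax_models = {"ALBERT": True, "TimmBackbone": False}

model_names = sorted(pt_models, key=str.lower)
columns = ["Model", "PyTorch support", "TensorFlow support", "Flax Support"]
widths = [len(c) + 2 for c in columns]
widths[0] = max(len(name) for name in model_names) + 2

# Header, markdown alignment row, then one centered line per model.
table = "|" + "|".join(c.center(w) for c, w in zip(columns, widths)) + "|\n"
table += "|" + "|".join(":" + "-" * (w - 2) + ":" for w in widths) + "|\n"
for name in model_names:
    cells = [name, check[pt_models[name]], check[tf_models[name]], check[flax_models[name]]]
    table += "|" + "|".join(c.center(w) for c, w in zip(cells, widths)) + "|\n"
print(table)
```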