Unverified commit cebb96f5 authored by Patrick von Platen, committed by GitHub

Add more subsections to main doc (#11758)

* add headers to main doc

* Apply suggestions from code review

* update

* upload
parent da7e73b7
@@ -256,7 +256,7 @@ Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[XLSR-Wav2Vec2](https://huggingface.co/transformers/model_doc/xlsr_wav2vec2.html)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
- To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#bigtable).
+ To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#supported-frameworks).
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
...
@@ -84,7 +84,10 @@ The documentation is organized in five parts:
- **INTERNAL HELPERS** for the classes and functions we use internally.
The library currently contains Jax, PyTorch and Tensorflow implementations, pretrained model weights, usage scripts and
- conversion utilities for the following models:
+ conversion utilities for the following models.
+
+ Supported models
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
..
    This list is updated automatically from the README with `make fix-copies`. Do not update manually!
@@ -267,7 +270,8 @@ conversion utilities for the following models:
Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
- .. _bigtable:
+ Supported frameworks
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The table below represents the current support in the library for each of those models, whether they have a Python
tokenizer (called "slow"), a "fast" tokenizer backed by the 🤗 Tokenizers library, whether they have support in Jax (via
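The renamed section matters for the links above because Sphinx derives a section's URL fragment from its title, so an explicit `.. _bigtable:` target is no longer needed. A rough illustration (this is not Sphinx's actual slugification code, which also handles punctuation, duplicates, and non-ASCII titles) of why the new heading is reachable as `#supported-frameworks`:

```python
# Rough illustration only: a Sphinx-style section title becomes a URL
# anchor by lowercasing and hyphen-joining its words, which is why the
# "Supported frameworks" heading replaces the old explicit #bigtable target.
def heading_to_anchor(title):
    # lowercase the title and join its words with hyphens
    return "-".join(title.strip().lower().split())

print(heading_to_anchor("Supported frameworks"))  # supported-frameworks
```

This is why every hard-coded `#bigtable` link in the READMEs and example scripts has to be updated in the same commit.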
...
@@ -20,7 +20,7 @@ Based on the script [`run_qa.py`](https://github.com/huggingface/transformers/bl
**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
- [this table](https://huggingface.co/transformers/index.html#bigtable), if it doesn't you can still use the old version
+ [this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version
of the script.
The old version of this script can be found [here](https://github.com/huggingface/transformers/tree/master/examples/legacy/question-answering).
...
@@ -304,7 +304,7 @@ def main():
    if not isinstance(tokenizer, PreTrainedTokenizerFast):
        raise ValueError(
            "This example script only works for models that have a fast tokenizer. Checkout the big table of models "
-           "at https://huggingface.co/transformers/index.html#bigtable to find the model types that meet this "
+           "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this "
            "requirement"
        )
...
@@ -52,7 +52,7 @@ python run_ner.py \
**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
- [this table](https://huggingface.co/transformers/index.html#bigtable), if it doesn't you can still use the old version
+ [this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version
of the script.
## Old version of the script
...
@@ -306,7 +306,7 @@ def main():
    if not isinstance(tokenizer, PreTrainedTokenizerFast):
        raise ValueError(
            "This example script only works for models that have a fast tokenizer. Checkout the big table of models "
-           "at https://huggingface.co/transformers/index.html#bigtable to find the model types that meet this "
+           "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this "
            "requirement"
        )
...
@@ -302,7 +302,7 @@ def check_model_list_copy(overwrite=False, max_per_line=119):
    rst_list, start_index, end_index, lines = _find_text_in_file(
        filename=os.path.join(PATH_TO_DOCS, "index.rst"),
        start_prompt=" This list is updated automatically from the README",
-       end_prompt=".. _bigtable:",
+       end_prompt="Supported frameworks",
    )
    md_list = get_model_list()
    converted_list = convert_to_rst(md_list, max_per_line=max_per_line)
...
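The `end_prompt` change above is needed because `_find_text_in_file` scans for the first line starting with the prompt; once `.. _bigtable:` is removed from `index.rst`, the new "Supported frameworks" heading becomes the marker that closes the auto-generated model list. A minimal sketch of that kind of helper (hypothetical and simplified; the real utility in `utils/check_copies.py` also reads the file from disk and returns its raw lines):

```python
# Hypothetical, simplified sketch of a _find_text_in_file-style helper:
# locate the block of lines between a start prompt and an end prompt,
# returning the block plus the indices needed to splice in a replacement.
def find_text_in_file(lines, start_prompt, end_prompt):
    start_index = 0
    while not lines[start_index].startswith(start_prompt):
        start_index += 1
    start_index += 1  # the extracted block begins just after the start prompt
    end_index = start_index
    while not lines[end_index].startswith(end_prompt):
        end_index += 1
    return "".join(lines[start_index:end_index]), start_index, end_index

# Toy document mirroring the structure of docs/source/index.rst
lines = [
    "intro\n",
    " This list is updated automatically from the README\n",
    "1. Model A\n",
    "1. Model B\n",
    "Supported frameworks\n",
]
block, start, end = find_text_in_file(lines, " This list", "Supported frameworks")
```

With this shape, `make fix-copies` can overwrite `lines[start:end]` with the list regenerated from the README while leaving the heading itself untouched.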