Commit c99fe038 authored by Julien Chaumond

[doc] Fix broken links + remove crazy big notebook

parent 66113bd6
@@ -40,7 +40,7 @@ python run_language_modeling.py \
 ## Model in action / Example of usage ✒
-You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py)
+You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py)
 ```bash
 python run_generation.py \
...
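For context on the relocated generation example, here is a minimal, hedged sketch of doing the same thing from Python with the pipeline API rather than the run_generation.py script (not part of this commit; the checkpoint, prompt, and generation arguments below are placeholders, assuming a recent transformers release):

```python
from transformers import pipeline

# Placeholder checkpoint: substitute the model id from the card above.
generator = pipeline("text-generation", model="gpt2")

# Generate one continuation of a short prompt, capped at 30 tokens.
print(generator("My name is Julien and I like to", max_length=30, num_return_sequences=1))
```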
@@ -37,7 +37,7 @@ python run_language_modeling.py \
 ## Model in action / Example of usage: ✒
-You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py)
+You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py)
 ```bash
 python run_generation.py \
...
@@ -19,7 +19,7 @@ I preprocessed the dataset and splitted it as train / dev (80/20)
 | Dev | 2.2 K |
-- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py)
+- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
 - Labels covered:
...
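For context on the relocated run_ner.py script, a minimal sketch of running inference with a token-classification pipeline once fine-tuning is done (not part of this commit; the default checkpoint loaded below is a stand-in for the card's own fine-tuned model, and the snippet assumes a recent transformers release):

```python
from transformers import pipeline

# No model id given, so the pipeline falls back to its default English NER
# checkpoint; swap in the fine-tuned model from the card above instead.
ner = pipeline("ner")

# Each returned dict carries the token, its predicted entity label and a score.
print(ner("Hugging Face is based in New York City."))
```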
@@ -29,7 +29,7 @@ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following com
 ```bash
 export SQUAD_DIR=path/to/nl_squad
-python transformers/examples/run_squad.py \
+python transformers/examples/question-answering/run_squad.py \
 --model_type bert \
 --model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
 --do_train \
...
@@ -29,7 +29,7 @@ The smaller BERT models are intended for environments with restricted computatio
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -29,7 +29,7 @@ The smaller BERT models are intended for environments with restricted computatio
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -29,7 +29,7 @@ The smaller BERT models are intended for environments with restricted computatio
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -11,7 +11,7 @@ thumbnail:
 - Dataset: [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) 📚
-- [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py) 🏋️‍♂️
+- [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py) 🏋️‍♂️
 ## Metrics on test set 📋
...
@@ -19,7 +19,7 @@ I preprocessed the dataset and splitted it as train / dev (80/20)
 | Dev | 2.2 K |
-- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py)
+- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
 - Labels covered:
...
@@ -11,7 +11,7 @@ This model is a fine-tuned version of the Spanish BERT [(BETO)](https://github.c
 - [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora)
-#### [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py)
+#### [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
 #### 21 Syntax annotations (Labels) covered:
...
@@ -19,7 +19,7 @@ I preprocessed the dataset and splitted it as train / dev (80/20)
 | Dev | 50 K |
-- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py)
+- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
 - **60** Labels covered:
...
@@ -29,7 +29,7 @@ The smaller BERT models are intended for environments with restricted computatio
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -11,7 +11,7 @@ thumbnail:
 - Dataset: [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) 📚 for 15 languages
-- [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py) 🏋️‍♂️
+- [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py) 🏋️‍♂️
 ## Metrics on test set 📋
...
@@ -31,7 +31,7 @@ The model was fine-tuned on a Tesla P100 GPU and 25GB of RAM.
 The script is the following:
 ```python
-python transformers/examples/run_squad.py \
+python transformers/examples/question-answering/run_squad.py \
 --model_type distilbert \
 --model_name_or_path distilbert-base-multilingual-cased \
 --do_train \
...
@@ -26,7 +26,7 @@ thumbnail:
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -23,7 +23,7 @@ thumbnail:
 ## Model training
 The model was trained on a Tesla P100 GPU and 25GB of RAM.
-The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py)
+The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
 ## Results:
...
@@ -932,7 +932,7 @@ class AlbertForQuestionAnswering(AlbertPreTrainedModel):
 Examples::
 # The checkpoint albert-base-v2 is not fine-tuned for question answering. Please see the
-# examples/run_squad.py example to see how to fine-tune a model to a question answering task.
+# examples/question-answering/run_squad.py example to see how to fine-tune a model to a question answering task.
 from transformers import AlbertTokenizer, AlbertForQuestionAnswering
 import torch
...
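The truncated docstring above only shows the imports; a minimal sketch of the full extractive-QA call pattern looks roughly like this (not part of this commit; the question and context are made up, the call style assumes a recent transformers release, and albert-base-v2 is not fine-tuned for QA, so the extracted span is meaningless until the fine-tuning script has been run):

```python
from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2")

question = "Where does the fine-tuning script live?"
context = "The fine-tuning script lives in examples/question-answering."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits)) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```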
@@ -643,7 +643,7 @@ class RobertaForQuestionAnswering(BertPreTrainedModel):
 Examples::
 # The checkpoint roberta-large is not fine-tuned for question answering. Please see the
-# examples/run_squad.py example to see how to fine-tune a model to a question answering task.
+# examples/question-answering/run_squad.py example to see how to fine-tune a model to a question answering task.
 from transformers import RobertaTokenizer, RobertaForQuestionAnswering
 import torch
...
@@ -865,7 +865,7 @@ class TFAlbertForQuestionAnswering(TFAlbertPreTrainedModel):
 Examples::
 # The checkpoint albert-base-v2 is not fine-tuned for question answering. Please see the
-# examples/run_squad.py example to see how to fine-tune a model to a question answering task.
+# examples/question-answering/run_squad.py example to see how to fine-tune a model to a question answering task.
 import tensorflow as tf
 from transformers import AlbertTokenizer, TFAlbertForQuestionAnswering
...
@@ -481,7 +481,7 @@ class TFRobertaForQuestionAnswering(TFRobertaPreTrainedModel):
 Examples::
 # The checkpoint roberta-base is not fine-tuned for question answering. Please see the
-# examples/run_squad.py example to see how to fine-tune a model to a question answering task.
+# examples/question-answering/run_squad.py example to see how to fine-tune a model to a question answering task.
 import tensorflow as tf
 from transformers import RobertaTokenizer, TFRobertaForQuestionAnswering
...
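And the TensorFlow flavour of the same pattern, again as a hedged sketch rather than the docstring's elided example (placeholder question and context, a recent transformers release assumed, and roberta-base is likewise not fine-tuned for QA):

```python
import tensorflow as tf
from transformers import RobertaTokenizer, TFRobertaForQuestionAnswering

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaForQuestionAnswering.from_pretrained("roberta-base")

question = "Where does the fine-tuning script live?"
context = "The fine-tuning script lives in examples/question-answering."
inputs = tokenizer(question, context, return_tensors="tf")

outputs = model(inputs)

# Most likely start/end positions for the single example in the batch.
start = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```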