transformers: Commit 010e0460
Updated/added model cards (#3435)
Authored by Travis McGuire on Mar 25, 2020; committed by GitHub on Mar 25, 2020
Parent: ffa17fe3
Showing 4 changed files with 127 additions and 34 deletions
model_cards/twmkn9/albert-base-v2-squad2/README.md (+19, -17)
model_cards/twmkn9/bert-base-uncased-squad2/README.md (+19, -17)
model_cards/twmkn9/distilbert-base-uncased-squad2/README.md (+45, -0)
model_cards/twmkn9/distilroberta-base-squad2/README.md (+44, -0)
model_cards/twmkn9/albert-base-v2-squad2/README.md
Removed:

This model is ALBERT base v2 trained on SQuAD v2 as:

```
python run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/albert_base_fine/
```

Added:

This model is [ALBERT base v2](https://huggingface.co/albert-base-v2) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/albert_fine/
```
Performance on a dev subset is close to the original paper:
...
...
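For inference, a minimal sketch with the `transformers` question-answering pipeline; the Hub id `twmkn9/albert-base-v2-squad2` is assumed from this model card's path (a local checkpoint directory such as `./tmp/albert_fine/` from the command above should work too), and the question and context are illustrative only:

```
from transformers import pipeline

# Assumed model id, inferred from the model card path; a local fine-tuned
# checkpoint directory can be passed instead.
qa = pipeline(
    "question-answering",
    model="twmkn9/albert-base-v2-squad2",
    tokenizer="twmkn9/albert-base-v2-squad2",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This ALBERT base v2 checkpoint was fine-tuned on SQuAD v2 for three epochs.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```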
model_cards/twmkn9/bert-base-uncased-squad2/README.md
Removed:

This model is BERT base uncased trained on SQuAD v2 as:

```
python run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/bert_base_fine/
```

Added:

This model is [BERT base uncased](https://huggingface.co/bert-base-uncased) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/bert_fine_tuned/
```
Performance on a dev subset is close to the original paper:
...
...
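A sketch of loading this fine-tuned checkpoint directly and decoding the answer span from the start/end logits; the Hub id `twmkn9/bert-base-uncased-squad2` is assumed from the model card path, and the question and context are illustrative only:

```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumed model id; a local --output_dir such as ./tmp/bert_fine_tuned/ also works.
model_name = "twmkn9/bert-base-uncased-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "How many epochs was the model trained for?"
context = "The model was fine-tuned for 3 epochs with a learning rate of 3e-5."

inputs = tokenizer.encode_plus(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start_logits, end_logits = outputs[0], outputs[1]

# Greedy answer span: argmax of the start and end logits. A SQuAD v2 model
# may point both to the [CLS] token (index 0), which signals "no answer".
start = int(torch.argmax(start_logits))
end = int(torch.argmax(end_logits)) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end].tolist())
print(answer)
```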
model_cards/twmkn9/distilbert-base-uncased-squad2/README.md
0 → 100644
This model is [DistilBERT base uncased](https://huggingface.co/distilbert-base-uncased) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type distilbert \
    --model_name_or_path distilbert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/distilbert_fine_tuned/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 64.88976637051661,
'f1': 68.1776176526635,
'total': 6078,
'HasAns_exact': 69.7594501718213,
'HasAns_f1': 76.62665295288285,
'HasAns_total': 2910,
'NoAns_exact': 60.416666666666664,
'NoAns_f1': 60.416666666666664,
'NoAns_total': 3168,
'best_exact': 64.88976637051661,
'best_exact_thresh': 0.0,
'best_f1': 68.17761765266337,
'best_f1_thresh': 0.0
}
```
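As a sanity check, the overall 'exact' score above is just the example-weighted average of the HasAns and NoAns splits; a small Python sketch of the arithmetic:

```
# Weighted average of the answerable / unanswerable splits reproduces the
# aggregate exact-match score reported above.
has_ans_exact, has_ans_total = 69.7594501718213, 2910
no_ans_exact, no_ans_total = 60.416666666666664, 3168

total = has_ans_total + no_ans_total  # 6078
exact = (has_ans_exact * has_ans_total + no_ans_exact * no_ans_total) / total
print(round(exact, 5))  # 64.88977, matching 'exact' above
```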
We are hopeful this might save you time, energy, and compute. Cheers!
model_cards/twmkn9/distilroberta-base-squad2/README.md
0 → 100644
This model is [DistilRoBERTa base](https://huggingface.co/distilroberta-base) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type roberta \
    --model_name_or_path distilroberta-base \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/distilroberta_fine_tuned/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 70.9279368213228,
'f1': 74.60439802429168,
'total': 6078,
'HasAns_exact': 67.62886597938144,
'HasAns_f1': 75.30774267754136,
'HasAns_total': 2910,
'NoAns_exact': 73.95833333333333,
'NoAns_f1': 73.95833333333333,
'NoAns_total': 3168,
'best_exact': 70.94438960184272,
'best_exact_thresh': 0.0,
'best_f1': 74.62085080481161,
'best_f1_thresh': 0.0
}
```
We are hopeful this might save you time, energy, and compute. Cheers!
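For inference, a minimal sketch with the question-answering pipeline; the Hub id `twmkn9/distilroberta-base-squad2` is assumed from the model card path, and because this is a SQuAD v2 model the pipeline's `handle_impossible_answer` option (available in recent `transformers` releases) can be used to let it return an empty answer when "no answer" scores highest:

```
from transformers import pipeline

# Assumed model id, inferred from the model card path; a local checkpoint
# directory such as ./tmp/distilroberta_fine_tuned/ can be used instead.
qa = pipeline(
    "question-answering",
    model="twmkn9/distilroberta-base-squad2",
    tokenizer="twmkn9/distilroberta-base-squad2",
)

context = "DistilRoBERTa base was fine-tuned on SQuAD v2 for three epochs."

# Answerable question.
print(qa(question="What was the model fine-tuned on?", context=context))

# Likely unanswerable given this context; allow an empty answer to win.
print(
    qa(
        question="Who wrote the original RoBERTa paper?",
        context=context,
        handle_impossible_answer=True,
    )
)
```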