Unverified commit edbaad2c authored by Stas Bekman, committed by GitHub

[model cards] fix metadata - 3rd attempt (#7218)

parent 999a1c95
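Every card touched by this commit changes in the same way: the YAML front matter drops the markdown links from `datasets:` and `metrics:` and keeps only plain identifiers (the hub metadata expects dataset and metric names such as `wmt16` and `bleu` rather than links), while the removed URLs are preserved in a new `## Data Sources` section at the end of each card. A minimal sketch of the pattern, using the wmt16 values that appear verbatim in the diffs below (all other front-matter fields omitted):

```yaml
# Before: link-style entries, which the metadata parser cannot treat as identifiers
datasets:
- [main source](http://www.statmt.org/wmt16/)
- [test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
metrics:
- http://www.statmt.org/wmt16/metrics-task.html

# After: plain identifiers; the URLs move into a "## Data Sources" section in the card body
datasets:
- wmt16
metrics:
- bleu
```

The wmt19 cards and the three card-generation scripts at the end of the diff (whose templates embed the same front matter) receive the identical change with their corresponding URLs.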
@@ -10,11 +10,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt16/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
+- wmt16
 metrics:
-- http://www.statmt.org/wmt16/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -95,3 +93,7 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt16/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
@@ -10,11 +10,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt16/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
+- wmt16
 metrics:
-- http://www.statmt.org/wmt16/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -95,3 +93,7 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt16/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
@@ -10,11 +10,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt16/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
+- wmt16
 metrics:
-- http://www.statmt.org/wmt16/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -95,3 +93,7 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-6-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt16/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
@@ -11,10 +11,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -92,3 +91,7 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-base $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
@@ -11,10 +11,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -92,3 +91,7 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-big $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
@@ -10,10 +10,9 @@ tags:
 - facebook
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -88,6 +87,10 @@ PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/w
 ```
 note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 ## TODO
......
@@ -10,10 +10,9 @@ tags:
 - facebook
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -88,6 +87,10 @@ PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/w
 ```
 note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 ## TODO
......
@@ -10,10 +10,9 @@ tags:
 - facebook
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -88,6 +87,10 @@ PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/w
 ```
 note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 ## TODO
......
@@ -10,10 +10,9 @@ tags:
 - facebook
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -88,6 +87,10 @@ PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/w
 ```
 note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 ## TODO
......
@@ -35,11 +35,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt16/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
+- wmt16
 metrics:
-- http://www.statmt.org/wmt16/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -120,6 +118,10 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/{model_name} $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt16/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
 """
 model_card_dir.mkdir(parents=True, exist_ok=True)
 path = os.path.join(model_card_dir, "README.md")
......
@@ -35,10 +35,9 @@ tags:
 - allenai
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -116,6 +115,10 @@ echo $PAIR
 PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/{model_name} $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
 ```
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 """
 model_card_dir.mkdir(parents=True, exist_ok=True)
 path = os.path.join(model_card_dir, "README.md")
......
@@ -36,10 +36,9 @@ tags:
 - facebook
 license: Apache 2.0
 datasets:
-- [main source](http://www.statmt.org/wmt19/)
-- [test-set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
+- wmt19
 metrics:
-- http://www.statmt.org/wmt19/metrics-task.html
+- bleu
 ---
 # FSMT
@@ -114,6 +113,10 @@ PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/w
 ```
 note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
+## Data Sources
+- [training, etc.](http://www.statmt.org/wmt19/)
+- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
 ## TODO
......