Unverified commit 3552d0e0, authored by Julien Chaumond and committed via GitHub

[model_cards] Migrate cards from this repo to model repos on huggingface.co (#9013)



* rm all model cards

* Update the .rst

@sgugger it is still not super crystal clear/streamlined so let me know if any ideas to make it simpler

* Add a rootlevel README.md with simple instructions/context

* Update docs/source/model_sharing.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* rm all model cards
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
parent 29e45979
---
language:
- en
- de
license: apache-2.0
datasets:
- wmt14
tags:
- translation
---
# bert2bert_L-24_wmt_de_en EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn, and first released in [this repository](https://tfhub.dev/google/bertseq2seq/bert24_de_en/1).
It is an encoder-decoder model whose encoder and decoder were both initialized from a `bert-large` checkpoint and then fine-tuned on German-to-English translation on the WMT 2014 dataset listed above.

Disclaimer: this model card was written by the Hugging Face team.
## How to use
You can use this model for translation, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The checkpoint needs explicit seq2seq special tokens on top of the BERT tokenizer.
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_de_en", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_de_en")

sentence = "Willst du einen Kaffee trinken gehen mit mir?"

# add_special_tokens=False: no BERT special tokens are added to the input.
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Want to drink a kaffee go with me? .
```
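By default, `model.generate` performs greedy decoding: starting from the decoder start token, it repeatedly appends the highest-scoring next token until the end-of-sequence token is produced. The following is a self-contained toy sketch of that loop, with a hypothetical `next_token_scores` function standing in for the real decoder forward pass (token ids and scores here are invented for illustration, not the actual `transformers` internals):

```python
# Toy greedy-decoding loop illustrating what model.generate does by default.

BOS, EOS = 0, 1  # assumed special-token ids for this sketch

def next_token_scores(prefix):
    # Hypothetical stand-in for a decoder forward pass: it scores
    # tokens 2, 3, 4 highest in successive steps, then EOS.
    vocab_size = 5
    step = len(prefix)                # prefix starts with BOS
    scores = [0.0] * vocab_size
    target = 2 + (step - 1)
    scores[target if target < vocab_size else EOS] = 1.0
    return scores

def greedy_decode(max_len=10):
    prefix = [BOS]
    while len(prefix) < max_len:
        scores = next_token_scores(prefix)
        # Greedy step: pick the single highest-scoring token.
        next_id = max(range(len(scores)), key=scores.__getitem__)
        prefix.append(next_id)
        if next_id == EOS:
            break
    return prefix

print(greedy_decode())  # [0, 2, 3, 4, 1]
```

The real model does the same thing, except that the scores come from a large transformer decoder conditioned on the encoded source sentence.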
---
language:
- en
- de
license: apache-2.0
datasets:
- wmt14
tags:
- translation
---
# bert2bert_L-24_wmt_en_de EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn, and first released in [this repository](https://tfhub.dev/google/bertseq2seq/bert24_en_de/1).
It is an encoder-decoder model whose encoder and decoder were both initialized from a `bert-large` checkpoint and then fine-tuned on English-to-German translation on the WMT 2014 dataset listed above.

Disclaimer: this model card was written by the Hugging Face team.
## How to use
You can use this model for translation, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The checkpoint needs explicit seq2seq special tokens on top of the BERT tokenizer.
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_en_de", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_en_de")

sentence = "Would you like to grab a coffee with me this week?"

# add_special_tokens=False: no BERT special tokens are added to the input.
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Möchten Sie diese Woche einen Kaffee mit mir schnappen?
```
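`generate` also supports beam search via its `num_beams` argument: instead of committing to one token per step, the k highest-scoring partial translations are kept and extended in parallel. A self-contained toy sketch of that idea follows; the per-step log-probability table is invented for illustration, whereas in the real model these scores come from the decoder:

```python
# Toy beam search over a fixed, hypothetical score table.
# Each row holds log-probabilities over a 4-token vocabulary for one step.
STEP_LOGPROBS = [
    [-2.0, -3.0, -0.5, -1.0],  # step 1
    [-1.5, -0.2, -2.5, -0.7],  # step 2
]

def beam_search(num_beams=2):
    # Each hypothesis is (tokens, cumulative log-probability).
    beams = [([], 0.0)]
    for logprobs in STEP_LOGPROBS:
        candidates = []
        for tokens, score in beams:
            for tok, lp in enumerate(logprobs):
                candidates.append((tokens + [tok], score + lp))
        # Keep only the num_beams highest-scoring hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]
    return beams[0][0]

print(beam_search())  # [2, 1]
```

Note how the winning sequence `[2, 1]` starts with the best step-1 token but is chosen by total score, not step by step; that is the advantage beam search has over greedy decoding. (EOS handling and length penalties are omitted here for brevity.)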