# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn, and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model whose encoder and decoder were both initialized from the `roberta-large` checkpoint and then fine-tuned for summarization on the CNN / DailyMail dataset.
Disclaimer: This model card has been written by the Hugging Face team.
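
The checkpoint can be run for summarization with the Hugging Face Transformers `EncoderDecoderModel` class. The sketch below assumes the checkpoint is published on the Hugging Face Hub as `google/roberta2roberta_L-24_cnn_daily_mail` (a Hub id inferred from the model name above, not stated in this card); the sample article is purely illustrative.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Assumed Hub id; adjust if the checkpoint lives under a different name.
ckpt = "google/roberta2roberta_L-24_cnn_daily_mail"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

# Illustrative input; in practice this would be a full news article.
article = (
    "The Eiffel Tower is 324 metres tall, about the same height as an "
    "81-storey building, and is the tallest structure in Paris."
)

# Encode the article (RoBERTa was pre-trained with a 512-token context)
# and generate a summary with the model's default decoding settings.
input_ids = tokenizer(
    article, return_tensors="pt", truncation=True, max_length=512
).input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because both halves of the model were initialized from RoBERTa, the same RoBERTa tokenizer handles both the encoder input and the decoding of the generated summary.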