In contrast to BART, Pegasus's pretraining task is intentionally close to summarization: important sentences are masked and must be generated together as a single output sequence from the remaining sentences, much like an extractive summary.
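The pretraining objective (gap-sentence generation) can be illustrated with a toy sketch. The real objective scores sentence importance with ROUGE against the rest of the document; here importance is approximated with simple word overlap, and the function name and mask token are illustrative, not taken from the Pegasus codebase.

```python
# Toy sketch of Pegasus-style gap-sentence generation (GSG).
# Importance is approximated by word overlap with the rest of the document;
# the real objective uses ROUGE. All names here are illustrative.

def gsg_split(sentences, n_masked=1, mask_token="<mask>"):
    """Mask the most 'important' sentences; return (input_text, target_text)."""
    def importance(i):
        words = set(sentences[i].lower().split())
        rest = {w for j, s in enumerate(sentences) if j != i
                for w in s.lower().split()}
        return len(words & rest) / max(len(words), 1)

    ranked = sorted(range(len(sentences)), key=importance, reverse=True)
    masked = set(ranked[:n_masked])
    # Input: document with important sentences replaced by a mask token.
    input_text = " ".join(mask_token if i in masked else s
                          for i, s in enumerate(sentences))
    # Target: the masked sentences, generated together as one sequence.
    target_text = " ".join(sentences[i] for i in sorted(masked))
    return input_text, target_text

doc = [
    "The cat sat on the mat.",
    "Cats often sit on mats in the sun.",
    "The weather was unrelated to anything.",
]
inp, tgt = gsg_split(doc, n_masked=1)
print(inp)   # remaining sentences, with one sentence masked out
print(tgt)   # the masked 'important' sentence, used as the generation target
```

During pretraining, the model reads the masked input and learns to generate the target, which mirrors the input/output shape of summarization.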
The library provides a version of this model for conditional generation, which should be used for summarization.