To evaluate models that are loaded via `AutoModelForSeq2SeqLM` in Hugging Face (encoder-decoder models such as T5), use `--model hf-seq2seq` instead. Support for this model type is currently pending.
> **Warning**: Choosing the wrong model type may produce erroneous outputs without raising an error.
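As a sketch of what an invocation might look like once support lands, assuming the harness's `main.py` entrypoint with its `--model_args` and `--tasks` flags, and `t5-small` as a placeholder checkpoint:

```shell
# Hypothetical invocation; hf-seq2seq support is still pending,
# so this will not run against current releases.
python main.py \
    --model hf-seq2seq \
    --model_args pretrained=t5-small \
    --tasks lambada
```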
However, if your model *is too large to fit on a single GPU*, we provide an alternative method for running these large models.