Unverified commit 525b8f5d authored by Hailey Schoelkopf, committed by GitHub

Update docs on LM.loglikelihood_rolling abstract method (#1532)

parent faee1adf
@@ -54,7 +54,7 @@ class LM(abc.ABC):
         pass
 
     @abc.abstractmethod
-    def loglikelihood_rolling(self, requests) -> List[Tuple[float, bool]]:
+    def loglikelihood_rolling(self, requests) -> List[Tuple[float]]:
         """Compute full log-likelihood of a string, with no truncation, for perplexity computation
         - We will use the full max context length of the model.
         - For inputs that exceed the max context length, we divide the tokenized string into chunks of up to
@@ -83,15 +83,13 @@ class LM(abc.ABC):
         2. For the last pair, we provide the full context, but only score the last two tokens
 
         :param requests: list[Instance]
-            A list of Instance objects with property `args` which returns a tuple (context, continuation).
+            A list of Instance objects with property `args` which returns a tuple (context,).
             string: str
-                String for which we are computing per-token loglikelihood
-        :return: list[tuple[float, bool]]
-            A list of pairs (logprob, isgreedy)
+                String for which we are computing overall loglikelihood
+        :return: list[tuple[float]]
+            A list of tuples (logprob,)
             logprob: float
-                The log probability of `continuation`
-            isgreedy:
-                Whether `continuation` would be generated by greedy sampling from `context`
+                The log probability of `context` conditioned on the EOT token.
         """
         pass
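
The signature change is small but easy to misread, so here is a minimal, self-contained sketch of what the updated contract implies for implementers: each request's `args` is a one-element tuple `(context,)`, and each result is a one-element tuple `(logprob,)` with no `isgreedy` flag. Everything below (`FakeInstance`, `fake_token_logprob`, the fixed-size chunking) is an illustrative placeholder, not the harness's real classes or windowing logic.

```python
from typing import List, Tuple


class FakeInstance:
    """Illustrative stand-in for the harness's Instance; only `args` matters here."""

    def __init__(self, context: str):
        # Per the updated docstring, `args` is a one-element tuple (context,).
        self.args: Tuple[str] = (context,)


def fake_token_logprob(token: str) -> float:
    # Placeholder for a real model's per-token log probability.
    return -0.5 * len(token)


def loglikelihood_rolling(requests: List[FakeInstance]) -> List[Tuple[float]]:
    """Toy illustration of the updated contract: one (logprob,) tuple per request."""
    results: List[Tuple[float]] = []
    max_len = 4  # toy stand-in for the model's max context length

    for req in requests:
        (string,) = req.args  # unpack the one-element args tuple
        tokens = string.split()  # toy "tokenization"

        # Score every token exactly once, walking the sequence in chunks of up to
        # `max_len` tokens (a simplification of the rolling-window scheme the
        # docstring describes, which reuses as much preceding context as fits).
        total_logprob = 0.0
        for start in range(0, len(tokens), max_len):
            chunk = tokens[start : start + max_len]
            total_logprob += sum(fake_token_logprob(tok) for tok in chunk)

        # No `isgreedy` flag anymore: just a one-element tuple (logprob,).
        results.append((total_logprob,))

    return results


if __name__ == "__main__":
    reqs = [FakeInstance("the quick brown fox jumps over the lazy dog")]
    print(loglikelihood_rolling(reqs))  # [(-17.5,)]
```

A real backend would replace the toy scorer with actual model forward passes and use overlapping context windows, so that every scored token is conditioned on as much preceding text as the model's context length allows.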