This will perform *data-parallel evaluation*: that is, placing a **single full copy** of your model onto each available GPU and *splitting batches across GPUs* to evaluate on K GPUs K times faster than on one.
If your model is *too large to be run on a single one of your GPUs*, you can instead use `accelerate` with Fully Sharded Data Parallel (FSDP), which shards the model weights across your data-parallel ranks. To enable this, select `YES` when asked `Do you want to use FullyShardedDataParallel?` while running `accelerate config`. To enable memory-efficient loading, also select `YES` when asked `Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start?`. With this option, only the rank 0 process loads the model and then broadcasts the parameters to the other ranks, rather than every rank loading all parameters at once, which can otherwise cause large RAM usage spikes around the start of the script and lead to errors.
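With FSDP enabled in your `accelerate` config, evaluation is then launched with `accelerate launch` as usual. A minimal sketch (the model name and task here are illustrative placeholders, not recommendations):

```
accelerate launch main.py \
    --model hf \
    --model_args pretrained=your-org/your-large-model \
    --tasks lambada_openai \
    --batch_size 16
```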
We also provide a second method for running these large models: use of the `parallelize` argument.
```
python main.py \
    --model hf \
    ...
```
To pass even more advanced keyword arguments to `accelerate`, we allow for the following arguments as well:
- `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM.
- `offload_folder`: a folder where model weights will be offloaded to disk if needed.
Using this setting helps for massive models like BLOOM that cannot fit on a single GPU, or to avoid exceeding your total system RAM (by default, with `accelerate launch`, one copy of the model per GPU is initialized in RAM before being moved to GPU, resulting in large RAM usage spikes around the start of the script that may cause errors such as `Killed`). However, this method naively splits models across GPUs, so only a single GPU performs work at any point in time; it is therefore much slower than launching with `accelerate launch`, possibly by a factor of the total number of GPUs.
**Note that this option requires launching evaluation via `python main.py` rather than `accelerate launch main.py`.**
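For instance, a sketch of a `parallelize`-based run (the model name and task are illustrative placeholders) might look like:

```
python main.py \
    --model hf \
    --model_args pretrained=your-org/your-large-model,parallelize=True \
    --tasks lambada_openai \
    --batch_size 4
```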
Welcome to the docs for the LM Evaluation Harness!
## Table of Contents
* To learn about the public interface of the library, as well as how to evaluate via the command line or as integrated into an external library, see the [Interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/user_guide.md) documentation.
* To learn how to add a new library, API, or model type to the library, as well as a quick explainer on the types of ways to evaluate an LM, see the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/model_guide.md).
* For a crash course on adding new tasks to the library, see our [New Task Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/new_task_guide.md).
* To learn more about pushing the limits of task configuration that the Eval Harness supports, see the [Advanced Task Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/advanced_task_guide.md).
This document details the interface exposed by `lm-eval` and describes the flags available to users.
## Command-line Interface
A majority of users run the library by cloning it from GitHub and running the `main.py` script.
Equivalently, the library can be run via the `lm-eval` entrypoint at the command line.
This mode supports a number of command-line arguments, the details of which can also be seen by running with `-h` or `--help` (an example command combining several of these flags is shown after the list):
* `--model` : Selects which model type or provider is evaluated. Must be a string corresponding to the name of the model type/provider being used. See [the main README](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor#commercial-apis) for a full list of enabled model names and supported libraries or APIs.
* `--model_args` : Controls parameters passed to the model constructor. Accepts a string containing comma-separated keyword arguments to the model class of the format `"arg1=val1,arg2=val2,..."`, for example `--model_args pretrained=EleutherAI/pythia-160m,dtype=float32`. For a full list of supported keyword arguments, see the initialization of the `lm_eval.api.model.LM` subclass, e.g. [`HFLM`](https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/models/huggingface.py#L66).
* `--tasks` : Determines which tasks or task groups are evaluated. Accepts a comma-separated list of task names or task group names. Must be solely comprised of valid tasks/groups.
* `--num_fewshot` : Sets the number of few-shot examples to place in context. Must be an integer.
* `--batch_size` : Sets the batch size used for evaluation. Can be a positive integer or `"auto"` to automatically select the largest batch size that will fit in memory, speeding up evaluation. One can pass `--batch_size auto:N` to re-select the maximum batch size `N` times during evaluation. This can help accelerate evaluation further, since `lm-eval` sorts documents in descending order of context length.
* `--max_batch_size` : Sets the maximum batch size to try to fit in memory, if `--batch_size auto` is passed.
* `--device` : Sets which device to place the model onto. Must be a string, for example, `"cuda", "cuda:0", "cpu", "mps"`. Defaults to "cuda", and can be ignored if running multi-GPU or running a non-local model type.
* `--output_path` : A string of the form `dir/file.jsonl` or `dir/`. Provides a path where high-level results will be saved, either into the file named or into the directory named. If `--log_samples` is passed as well, then per-document outputs and metrics will be saved into the directory as well.
* `--log_samples` : If this flag is passed, then the model's outputs, and the text fed into the model, will be saved at per-document granularity. Must be used with `--output_path`.
* `--limit` : Accepts an integer, or a float between 0.0 and 1.0. If passed, will limit the number of documents to evaluate per task to the first X documents (if an integer) or the first X% of documents (if a float). Useful for debugging, especially on costly API models.
* `--use_cache` : Should be a path where a sqlite db file can be written to. Takes a string of format `/path/to/sqlite_cache_` in order to create a cache db at `/path/to/sqlite_cache_rank{i}.db` for each process (0-NUM_GPUS). This allows results of prior runs to be cached, so that a given (model, task) pair does not need to be re-run in order to re-score or re-use its results.
* `--decontamination_ngrams_path` : Deprecated; see [this commit](https://github.com/EleutherAI/lm-evaluation-harness/commit/00209e10f6e27edf5d766145afaf894079b5fe10) or older for a working decontamination-checker tool.
* `--check_integrity` : If this flag is used, the library tests for each task selected are run to confirm task integrity.
* `--write_out` : Used for diagnostic purposes to observe the format of task documents passed to a model. If this flag is used, then prints the prompt and gold target string for the first document of each task.
* `--show_config` : If used, prints the full `lm_eval.api.task.TaskConfig` contents (the non-default settings in the task YAML file) for each task that was run, at the completion of an evaluation. Useful when one is modifying a task's configuration YAML locally, to transmit the exact configuration used for debugging or reproducibility purposes.
* `--include_path` : Accepts a path to a folder. If passed, then all YAML files in that folder containing `lm-eval`-compatible task configurations will be added to the task registry as available tasks. Used when one is writing config files for their own task in a folder other than `lm_eval/tasks/`.
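As an illustrative sketch combining several of these flags (the model, tasks, and output path here are placeholders, not recommendations):

```
python main.py \
    --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,dtype=float32 \
    --tasks lambada_openai,arc_easy \
    --num_fewshot 0 \
    --batch_size auto \
    --output_path ./results/ \
    --log_samples
```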
## External Library Usage
We also support using the library's external API for use within model training loops or other scripts.
`lm_eval` supplies two functions for external import and use: `lm_eval.evaluate()` and `lm_eval.simple_evaluate()`.
`simple_evaluate()` can be used by simply creating an `lm_eval.api.model.LM` subclass that implements the methods described in the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor/docs/model_guide.md), and wrapping your custom model in that class as follows:
```python
import lm_eval
...
my_model = initialize_my_model()  # create your model (could be running finetuning with some custom modeling code)
...
lm_obj = Your_LM(model=my_model, batch_size=16)  # instantiate an LM subclass that takes your initialized model and can run `loglikelihood()`, `loglikelihood_rolling()`, `greedy_until()`

# keyword arguments to simple_evaluate() mirror the CLI flags described above; the task name is a placeholder
results = lm_eval.simple_evaluate(model=lm_obj, tasks=["your_task_name"], num_fewshot=0)
```
See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L35 for a full description of all arguments available. All keyword arguments to simple_evaluate share the same role as the command-line flags described previously.
Additionally, the `evaluate()` function offers the core evaluation functionality provided by the library, but without some of the special handling and simplification + abstraction provided by `simple_evaluate()`.
See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L173 for more details.
As a brief example usage of `evaluate()`:
```python
import lm_eval
from my_tasks import MyTask1  # suppose you've defined a custom lm_eval.api.Task subclass in your own external codebase
...
my_model = initialize_my_model()  # create your model (could be running finetuning with some custom modeling code)
...
lm_obj = Your_LM(model=my_model, batch_size=16)  # instantiate an LM subclass that takes your initialized model and can run `loglikelihood()`, `loglikelihood_rolling()`, `greedy_until()`

task_dict = lm_eval.tasks.get_task_dict(["your_task_name"])  # build the task_dict expected by evaluate(); the task name is a placeholder
results = lm_eval.evaluate(lm=lm_obj, task_dict=task_dict)
```
"""Instantiate and evaluate a model on a list of tasks.
"""Instantiate and evaluate a model on a list of tasks.
...
@@ -117,10 +117,11 @@ def simple_evaluate(
...
@@ -117,10 +117,11 @@ def simple_evaluate(
task_dict=lm_eval.tasks.get_task_dict(tasks)
task_dict=lm_eval.tasks.get_task_dict(tasks)
fortask_nameintask_dict.keys():
fortask_nameintask_dict.keys():
task_obj=task_dict[task_name]
task_obj=task_dict[task_name]
iftype(task_obj)==tuple:
iftype(task_obj)==tuple:
group,task_obj=task_obj
group,task_obj=task_obj
iftask_objisNone:
continue
config=task_obj._config
config=task_obj._config
ifnum_fewshotisnotNone:
ifnum_fewshotisnotNone:
...
@@ -175,17 +176,17 @@ def evaluate(
     lm,
     task_dict,
     limit=None,
-    bootstrap_iters=100000,
+    bootstrap_iters: int = 100000,
     decontamination_ngrams_path=None,
-    write_out=False,
-    log_samples=True,
+    write_out: bool = False,
+    log_samples: bool = True,
 ):
     """Instantiate and evaluate a model on a list of tasks.
 
     :param lm: obj
         Language Model
     :param task_dict: dict[str, Task]
-        Dictionary of tasks. Tasks will be taken to have name task.EVAL_HARNESS_NAME if defined and type(task).__name__ otherwise.
+        Dictionary of tasks. Tasks will be taken to have name type(task).config.task .
     :param limit: int, optional
         Limit the number of examples per task (only use this for testing)
     :param bootstrap_iters:
...
@@ -210,24 +211,30 @@ def evaluate(
     samples = collections.defaultdict(list)
     # tracks all Instances/requests a model must generate output on.
     requests = collections.defaultdict(list)
-    # Stores task scores based on task grouping.
-    aggregate = collections.defaultdict(dict)
-    # tracks if a task was chosen via user selecting a group containing it
-    task_groups = collections.defaultdict(dict)
+    # Aggregated task scores presented with groups
+    results_agg = collections.defaultdict(dict)
+    # Aggregated groups scores only
+    groups_agg = collections.defaultdict(dict)
     # stores the amount to pad out reqs per req. type so that
     # number of fwd passes per distributed rank is equal
     padding_requests = collections.defaultdict(int)
+    # store the hierarchy to do proper ordering
+    task_hierarchy = collections.defaultdict(list)
+    # Stores group related keys and values for group-aggregation
+    task_groups = collections.defaultdict(dict)
+    # store the ordering of tasks and groups
+    task_order = collections.defaultdict(int)
+    # store the aggregation for aggregating across tasks in the same group
+    sample_agg_fn = collections.defaultdict(dict)
 
     # get lists of each type of request
     for task_name, task in task_dict.items():
         if type(task) == tuple:
-            group, task = task
-            task_groups[task_name] = group
-            aggregate[task_name] = {}
+            group_name, task = task
+            task_hierarchy[group_name].append(task_name)
+        else:
+            task_hierarchy[task_name] = []
+        if task is None:
+            continue
         versions[task_name] = task.VERSION
         configs[task_name] = dict(task.dump_config())
...
@@ -252,7 +259,8 @@ def evaluate(
         # print the prompt for the first few documents
         if inst.doc_id < 1:
             eval_logger.info(
-                f"Task: {task_name}; document {inst.doc_id}; context prompt (starting on next line):\n{inst.args[0]}\n(end of prompt on previous line)"
+                f"Task: {task_name}; document {inst.doc_id}; context prompt (starting on next line):\
+\n{inst.args[0]}\n(end of prompt on previous line)\ntarget string or answer choice index (starting on next line):\n{task.doc_to_target(inst.doc)}\n(end of target on previous line)"
             )
             eval_logger.info(f"Request: {str(inst)}")
...
@@ -302,6 +310,8 @@ def evaluate(
     for task_name, task in task_dict.items():
         if type(task) == tuple:
             group, task = task
+            if task is None:
+                continue
         task.apply_filters()
 
     ### Collect values of metrics on all datapoints ###
...
@@ -311,6 +321,8 @@ def evaluate(
     for task_name, task in task_dict.items():
         if type(task) == tuple:
             group, task = task
+            if task is None:
+                continue
 
         # TODO: make it possible to use a different metric per filter