Unverified Commit 969b48bf authored by Hailey Schoelkopf's avatar Hailey Schoelkopf Committed by GitHub

Don't use `get_task_dict()` in task registration / initialization (#1331)



* don't use get_task_dict() as a helper, it will download the dataset!

* pre-commit

* Update README.md

---------
Co-authored-by: lintangsutawika <lintang@eleuther.ai>
parent 45a8f709
@@ -192,7 +192,7 @@ Note that for externally hosted models, configs such as `--device` and `--batch_
 | Your local inference server! | :heavy_check_mark: | `local-completions` or `local-chat-completions` (using `openai-chat-completions` model type) | Any server address that accepts GET requests using HF models and mirrors OpenAI's ChatCompletions interface | `generate_until` | | ... |
 | | | `local-completions` (using `openai-completions` model type) | Any server address that accepts GET requests using HF models and mirrors OpenAI's Completions interface | `generate_until` | | ... |
-Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while models that are local or APIs that supply logprobs/logits can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
+Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
 For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).
@@ -67,12 +67,12 @@ def register_configurable_group(config: Dict[str, str], yaml_path: str = None) -
         if "task" in task_config:
             task_name = task_config["task"]
             if task_name in ALL_TASKS:
-                task_obj = get_task_dict(task_name)[task_name]
+                task_obj = TASK_REGISTRY[task_name]
                 if type(task_obj) == tuple:
                     _, task_obj = task_obj
                 if task_obj is not None:
-                    base_config = task_obj._config.to_dict(keep_callable=True)
+                    base_config = task_obj.CONFIG.to_dict(keep_callable=True)
                 task_name_config["task"] = f"{group}_{task_name}"
             task_config = utils.load_yaml_config(yaml_path, task_config)
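The motivation for the change above ("it will download the dataset!") can be sketched with a toy example. The class and function names below are hypothetical stand-ins, not the harness's real implementation: an eager helper that materializes a task pulls its dataset as a side effect, while a plain registry lookup does not.

```python
# Toy sketch (hypothetical names, not the harness's actual code) contrasting
# an eager task-building helper with a direct registry lookup.

TASK_REGISTRY = {}


class Task:
    """Stands in for a benchmark task; `downloaded` flags a dataset fetch."""

    def __init__(self, name):
        self.name = name
        self.downloaded = False

    def download(self):
        # In the real harness this step would fetch the dataset remotely.
        self.downloaded = True


def register(name):
    TASK_REGISTRY[name] = Task(name)


def get_task_dict(name):
    # Eager helper: building the task dict triggers the download side effect.
    task = TASK_REGISTRY[name]
    task.download()
    return {name: task}


register("task_a")
register("task_b")

eager = get_task_dict("task_a")["task_a"]  # downloads as a side effect
lazy = TASK_REGISTRY["task_b"]             # plain lookup, no download

print(eager.downloaded, lazy.downloaded)   # True False
```

During registration and initialization only the task's configuration is needed, so the side-effect-free lookup is the right tool; the download can wait until the task is actually evaluated.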