    Fixes scrolls task bug with few_shot examples (#2003) · 801322e0
    Steven Basart authored
    Bug:
    
    ```
    python -m scripts.write_out --task scrolls_quality --output_base_path ~/workspace/
    Traceback (most recent call last):
      File "<frozen runpy>", line 198, in _run_module_as_main
      File "<frozen runpy>", line 88, in _run_code
      File "/lm-evaluation-harness/scripts/write_out.py", line 92, in <module>
        main()
      File "/lm-evaluation-harness/scripts/write_out.py", line 51, in main
        task_dict = tasks.get_task_dict(task_names, task_manager)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/lm-evaluation-harness/lm_eval/tasks/__init__.py", line 423, in get_task_dict
        task_name_from_string_dict = task_manager.load_task_or_group(
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/lm-evaluation-harness/lm_eval/tasks/__init__.py", line 271, in load_task_or_group
        collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
      File "/lm-evaluation-harness/lm_eval/tasks/__init__.py", line 162, in _load_individual_task_or_group
        return load_task(task_config, task=name_or_config, group=parent_name)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/lm-evaluation-harness/lm_eval/tasks/__init__.py", line 148, in load_task
        task_object = config["class"]()
                      ^^^^^^^^^^^^^^^^^
      File "/lm-evaluation-harness/lm_eval/tasks/scrolls/task.py", line 120, in __init__
        super().__init__()
      File "/lm-evaluation-harness/lm_eval/api/task.py", line 703, in __init__
        self._config = TaskConfig(**config)
                       ^^^^^^^^^^^^^^^^^^^^
    TypeError: lm_eval.api.task.TaskConfig() argument after ** must be a mapping, not NoneType
    ```
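The failure mode is easy to reproduce in isolation: unpacking `None` with `**` raises exactly this `TypeError`, so the fix amounts to falling back to an empty mapping (or a proper default config) when no config is passed. A minimal sketch — the helper below is illustrative, not the harness's actual `TaskConfig` API:

```python
def make_task_config(**kwargs):
    # Stand-in for a constructor called as TaskConfig(**config).
    return kwargs


config = None  # what the scrolls task ended up passing

# Reproduces the crash from the traceback:
try:
    make_task_config(**config)
except TypeError as e:
    print(e)  # ... argument after ** must be a mapping, not NoneType

# Guarding against None avoids the crash:
safe = make_task_config(**(config or {}))
print(safe)  # {}
```

The `config or {}` guard (or an explicit `if config is None: config = {}`) is the usual defensive pattern for keyword-unpacking an optional mapping.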