"vscode:/vscode.git/clone" did not exist on "192746cf93769516cdb44da0db690c5c417807fa"
Commit be3dfa50 authored by jerrrrry's avatar jerrrrry
Browse files

Initial commit

parents
Pipeline #2876 failed with stages
in 0 seconds
# NPHardEval
## Introduction
The following introduction is taken from the abstract of [NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes](https://arxiv.org/abs/2312.14890)
```
This benchmark is designed to evaluate the reasoning abilities of LLMs across a broad spectrum of 900 algorithmic questions, extending up to the NP-Hard complexity class. These questions are meticulously chosen to represent a wide range of complexity classes below the NP-hard complexity class, offering a rigorous measure of the reasoning ability of LLMs.
```
## Official link
### Paper
[NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes](https://arxiv.org/abs/2312.14890)
### Repository
[NPHardEval](https://github.com/casmlab/NPHardEval)
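## Use cases
In evaluation scripts, the NPHardEval datasets can be added like other datasets via `read_base()`. The module path below follows the usual OpenCompass config layout and is an assumption; adjust it to the actual file name in your checkout.
```
from .datasets.NPHardEval.NPHardEval_gen import NPHardEval_datasets
```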
## Examples
Input example I:
```
The Graph Coloring Problem (Decision Version, GCP-D) involves determining if it is possible to color the vertices of a graph using a given number of colors, ensuring no two adjacent vertices have the same color.
Find out if the vertices of a graph with 6 vertices can be colored using only 3 colors, such that no adjacent vertices share the same color.
Provide a yes or no answer, along with a concise explanation of your reasoning. Keep your explanation focused and brief.
Enclose your output in <root></root> tags. Include your reasoning in <reasoning></reasoning> tags and the final yes/no answer in <final_answer></final_answer> tags, like <final_answer>{'Feasible': 'YES_OR_NO'}</final_answer>.
The graph is below:
Vertex 2 is connected to vertex 4.
Vertex 2 is connected to vertex 5.
Vertex 3 is connected to vertex 5.
Vertex 4 is connected to vertex 5.
Vertex 5 is connected to vertex 6.
```
Output example I (from GPT-4):
```
<root>
<final_answer>{'Feasible': 'YES'}</final_answer>
<reasoning>Yes, it is possible to color the vertices of the given graph with 6 vertices using only 3 colors, such that no adjacent vertices share the same color. The graph does not contain any triangles or cycles of length greater than 3, which means it is a planar graph. By the Four Color Theorem, any planar graph can be colored using at most 4 colors. Since we have only 3 colors available, it is feasible to color the vertices without any adjacent vertices sharing the same color.</reasoning>
</root>
```
Input example II:
```
The Traveling Salesman Problem (Decision Version, TSP-D) focuses on determining if a salesman can complete a route, visiting each city at least once, with the total travel distance being less than a specified value.
Check if it's possible for a salesman to visit each of the 10 cities at least once and return to the starting city with the total distance less than 3481.5. The distances between each pair of cities are given.
Provide a yes or no answer, with a succinct explanation of your decision process. Focus on clarity and brevity in your response.
Enclose your output in <root></root> tags. Present your reasoning in <reasoning></reasoning> tags and the final yes/no answer in <final_answer></final_answer> tags, like <final_answer>{'Feasible': 'YES_OR_NO'}</final_answer>.
The distances between cities are below:
The distance between City 0 and City 1 is 58.0.
The distance between City 0 and City 2 is 171.0.
The distance between City 0 and City 3 is 122.0.
The distance between City 0 and City 4 is 109.0.
The distance between City 0 and City 5 is 93.0.
The distance between City 0 and City 6 is 106.0.
The distance between City 0 and City 7 is 52.0.
The distance between City 0 and City 8 is 115.0.
The distance between City 0 and City 9 is 148.0.
The distance between City 1 and City 2 is 145.0.
The distance between City 1 and City 3 is 71.0.
The distance between City 1 and City 4 is 114.0.
The distance between City 1 and City 5 is 69.0.
The distance between City 1 and City 6 is 163.0.
The distance between City 1 and City 7 is 132.0.
The distance between City 1 and City 8 is 183.0.
The distance between City 1 and City 9 is 115.0.
The distance between City 2 and City 3 is 107.0.
The distance between City 2 and City 4 is 163.0.
The distance between City 2 and City 5 is 63.0.
The distance between City 2 and City 6 is 91.0.
The distance between City 2 and City 7 is 108.0.
The distance between City 2 and City 8 is 109.0.
The distance between City 2 and City 9 is 77.0.
The distance between City 3 and City 4 is 119.0.
The distance between City 3 and City 5 is 131.0.
The distance between City 3 and City 6 is 79.0.
The distance between City 3 and City 7 is 159.0.
The distance between City 3 and City 8 is 27.0.
The distance between City 3 and City 9 is 64.0.
The distance between City 4 and City 5 is 85.0.
The distance between City 4 and City 6 is 110.0.
The distance between City 4 and City 7 is 71.0.
The distance between City 4 and City 8 is 61.0.
The distance between City 4 and City 9 is 61.0.
The distance between City 5 and City 6 is 73.0.
The distance between City 5 and City 7 is 132.0.
The distance between City 5 and City 8 is 82.0.
The distance between City 5 and City 9 is 96.0.
The distance between City 6 and City 7 is 156.0.
The distance between City 6 and City 8 is 103.0.
The distance between City 6 and City 9 is 69.0.
The distance between City 7 and City 8 is 108.0.
The distance between City 7 and City 9 is 97.0.
The distance between City 8 and City 9 is 75.0.
```
Output example II (from GPT-4):
```
<root> <reasoning> The Traveling Salesman Problem is a well-known problem in the field of computer science and operations research. It is a combinatorial optimization problem, which means it involves finding the best solution from a finite set of possible solutions. The problem is NP-hard, which means there is no known algorithm that can solve it in polynomial time. Therefore, to solve this problem, we would need to use an algorithm that can find the optimal solution, such as a brute force algorithm, a genetic algorithm, or a simulated annealing algorithm. However, these algorithms can be computationally expensive, especially for large problem sizes. In this case, we have 10 cities, which means there are 10! = 3,628,800 possible routes. Therefore, without the use of a computer, it would be impractical to manually check all possible routes to find the one with the total distance less than 3481.5. </reasoning> <final_answer>{'Feasible': 'UNKNOWN'}</final_answer> </root>
```
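The answers are wrapped in simple XML-style tags, so they are easy to pull out programmatically. The snippet below is a minimal sketch of such post-processing (illustrative only, not the benchmark's official parser):
```
import ast
import re

def extract_final_answer(response):
    """Return the dict inside <final_answer>...</final_answer>, or None if absent."""
    match = re.search(r'<final_answer>(.*?)</final_answer>', response, re.DOTALL)
    if match is None:
        return None
    return ast.literal_eval(match.group(1).strip())

# e.g. extract_final_answer(output_example_I) -> {'Feasible': 'YES'}
```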
## Evaluation results
```
dataset version metric mode internlm2-chat-7b-hf
--------- --------- ----------------- ------ ----------------------
hard_GCP 144a59 Weighted Accuracy gen 1.64
hard_TSP 144a59 Weighted Accuracy gen 0
hard_MSP 144a59 Weighted Accuracy gen 0
cmp_GCP_D 144a59 Weighted Accuracy gen 43.82
cmp_TSP_D 144a59 Weighted Accuracy gen 40.18
cmp_KSP 144a59 Weighted Accuracy gen 0
p_BSP 144a59 Weighted Accuracy gen 40.36
p_EDP 144a59 Weighted Accuracy gen 0
p_SPP 144a59 Weighted Accuracy gen 0
```
## Reference
```
@article{fan2023nphardeval,
title={NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes},
author={Fan, Lizhou and Hua, Wenyue and Li, Lingyao and Ling, Haoyang and Zhang, Yongfeng and Hemphill, Libby},
journal={arXiv preprint arXiv:2312.14890},
year={2023}
}
```
from mmengine.config import read_base
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import OlympiadBenchDataset, OlympiadBenchEvaluator, olympiadbench_postprocess_v2
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.evaluator import GenericLLMEvaluator
from opencompass.datasets import generic_llmjudge_postprocess
with read_base():
from .OlympiadBench_categories import math_categories as categories
# Create prompter instance for problems
olympiadbench_prompter_cfg = dict(
type='OlympiadBenchPrompter'
)
olympiadbench_reader_cfg = dict(
input_columns=[
'problem', 'language', 'subject', 'question_type',
'answer_type', 'is_multiple_answer', 'unit', 'questions'
],
output_column='solution'
)
GRADER_TEMPLATE = """
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
Here are some evaluation criteria:
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
Just return the letters "A" or "B", with no text around it.
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
<Original Question Begin>: \n{problem}\n<Original Question End>\n\n
<Gold Target Begin>: \n{solution}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
Judging the correctness of candidates' answers:
""".strip()
olympiadbenchMath_datasets = []
for _name in categories:
olympiadbench_infer_cfg = dict(
prompt_template=dict(
type='OlympiadBenchTemplate'
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
# Evaluation configuration
olympiadbench_eval_cfg = dict(
evaluator=dict(
type=GenericLLMEvaluator,
prompt_template=dict(
type=PromptTemplate,
template=dict(
begin=[
dict(
role='SYSTEM',
fallback_role='HUMAN',
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
],
round=[
dict(
role='HUMAN',
prompt=GRADER_TEMPLATE
),
]),
),
dataset_cfg=dict(
type=OlympiadBenchDataset,
path='opencompass/OlympiadBench',
name=_name,
reader_cfg=olympiadbench_reader_cfg,
),
judge_cfg=dict(),
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
),
pred_role='BOT',
)
olympiadbenchMath_datasets.append(
dict(
type=OlympiadBenchDataset,
abbr=f'OlympiadBench_{_name}',
path='opencompass/OlympiadBench',
name=_name,
reader_cfg=olympiadbench_reader_cfg,
infer_cfg=olympiadbench_infer_cfg,
eval_cfg=olympiadbench_eval_cfg,
)
)
del _name
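# Note (illustrative): `judge_cfg` above is intentionally left empty, so the judge
# model used by GenericLLMEvaluator is expected to be filled in elsewhere, e.g. in a
# top-level config. A minimal sketch, assuming an OpenAI-compatible judge (the model
# name and key below are placeholders, not part of this config):
#
#     from opencompass.models import OpenAISDK
#     judge_cfg = dict(type=OpenAISDK, path='my-judge-model', key='YOUR_API_KEY')
#     for _d in olympiadbenchMath_datasets:
#         _d['eval_cfg']['evaluator']['judge_cfg'] = judge_cfg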
from mmengine.config import read_base
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import OlympiadBenchDataset, OlympiadBenchEvaluator, olympiadbench_postprocess_v2
with read_base():
from .OlympiadBench_categories import categories
# Create prompter instance for problems
olympiadbench_prompter_cfg = dict(
type='OlympiadBenchPrompter'
)
olympiadbench_reader_cfg = dict(
input_columns=[
'problem', 'language', 'subject', 'question_type',
'answer_type', 'is_multiple_answer', 'unit', 'questions'
],
output_column='solution'
)
olympiadbench_datasets = []
for _name in categories:
olympiadbench_infer_cfg = dict(
prompt_template=dict(
type='OlympiadBenchTemplate'
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
olympiadbench_eval_cfg = dict(
evaluator=dict(type=OlympiadBenchEvaluator, version='v2'),
pred_postprocessor=dict(type=olympiadbench_postprocess_v2),
)
olympiadbench_datasets.append(
dict(
type=OlympiadBenchDataset,
abbr=f'OlympiadBench_{_name}',
path='opencompass/OlympiadBench',
name=_name,
reader_cfg=olympiadbench_reader_cfg,
infer_cfg=olympiadbench_infer_cfg,
eval_cfg=olympiadbench_eval_cfg,
)
)
del _name
from mmengine.config import read_base
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import OlympiadBenchDataset, OlympiadBenchEvaluator, olympiadbench_postprocess_v2
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.evaluator import GenericLLMEvaluator
from opencompass.datasets import generic_llmjudge_postprocess
with read_base():
from .OlympiadBench_categories import categories
# Create prompter instance for problems
olympiadbench_prompter_cfg = dict(
type='OlympiadBenchPrompter'
)
olympiadbench_reader_cfg = dict(
input_columns=[
'problem', 'language', 'subject', 'question_type',
'answer_type', 'is_multiple_answer', 'unit', 'questions'
],
output_column='solution'
)
GRADER_TEMPLATE = """
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
Here are some evaluation criteria:
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
Just return the letters "A" or "B", with no text around it.
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
<Original Question Begin>: \n{problem}\n<Original Question End>\n\n
<Gold Target Begin>: \n{solution}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
Judging the correctness of candidates' answers:
""".strip()
olympiadbench_datasets = []
for _name in categories:
olympiadbench_infer_cfg = dict(
prompt_template=dict(
type='OlympiadBenchTemplate'
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
# olympiadbench_eval_cfg = dict(
# evaluator=dict(type=OlympiadBenchEvaluator, version='v2'),
# pred_postprocessor=dict(type=olympiadbench_postprocess_v2),
# )
# Evaluation configuration
olympiadbench_eval_cfg = dict(
evaluator=dict(
type=GenericLLMEvaluator,
prompt_template=dict(
type=PromptTemplate,
template=dict(
begin=[
dict(
role='SYSTEM',
fallback_role='HUMAN',
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
],
round=[
dict(
role='HUMAN',
prompt=GRADER_TEMPLATE
),
]),
),
dataset_cfg=dict(
type=OlympiadBenchDataset,
path='opencompass/OlympiadBench',
name=_name,
reader_cfg=olympiadbench_reader_cfg,
),
judge_cfg=dict(),
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
),
pred_role='BOT',
)
olympiadbench_datasets.append(
dict(
type=OlympiadBenchDataset,
abbr=f'OlympiadBench_{_name}',
path='opencompass/OlympiadBench',
name=_name,
reader_cfg=olympiadbench_reader_cfg,
infer_cfg=olympiadbench_infer_cfg,
eval_cfg=olympiadbench_eval_cfg,
)
)
del _name
categories = [
'OE_TO_maths_en_COMP', # OpenEnded - TextOnly - maths - COMP
'OE_TO_maths_zh_COMP', # OpenEnded - TextOnly - maths - COMP
'OE_TO_maths_zh_CEE', # OpenEnded - TextOnly - maths - CEE
'OE_TO_physics_en_COMP', # OpenEnded - TextOnly - physics - COMP
'OE_TO_physics_zh_CEE' # OpenEnded - TextOnly - physics - CEE
]
math_categories = [
'OE_TO_maths_en_COMP', # OpenEnded - TextOnly - maths - COMP
'OE_TO_maths_zh_COMP', # OpenEnded - TextOnly - maths - COMP
'OE_TO_maths_zh_CEE', # OpenEnded - TextOnly - maths - CEE
]
physics_categories = [
'OE_TO_physics_en_COMP', # OpenEnded - TextOnly - physics - COMP
'OE_TO_physics_zh_CEE' # OpenEnded - TextOnly - physics - CEE
]
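# Note (illustrative): `categories` above is simply the union of `math_categories`
# and `physics_categories`, i.e.
#     assert set(categories) == set(math_categories) | set(physics_categories)
# holds; the two subset lists let the maths-only and physics-only configs iterate
# over just their own portion.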
from mmengine.config import read_base
with read_base():
from .OpenFinData_gen_46dedb import OpenFinData_datasets # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets.OpenFinData import OpenFinDataDataset, OpenFinDataKWEvaluator
from opencompass.utils.text_postprocessors import last_capital_postprocess
OpenFinData_datasets = []
OpenFinData_3choices_list = ['emotion_identification', 'entity_disambiguation', 'financial_facts']
OpenFinData_4choices_list = ['data_inspection', 'financial_terminology', 'metric_calculation', 'value_extraction']
OpenFinData_5choices_list = ['intent_understanding']
OpenFinData_keyword_list = ['entity_recognition']
OpenFinData_all_list = OpenFinData_3choices_list + OpenFinData_4choices_list + OpenFinData_5choices_list + OpenFinData_keyword_list
OpenFinData_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=last_capital_postprocess))
OpenFinData_KW_eval_cfg = dict(evaluator=dict(type=OpenFinDataKWEvaluator))
for _name in OpenFinData_all_list:
if _name in OpenFinData_3choices_list:
OpenFinData_infer_cfg = dict(
ice_template=dict(type=PromptTemplate, template=dict(begin='</E>', round=[
dict(role='HUMAN', prompt=f'{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\n答案: '),
dict(role='BOT', prompt='{answer}')]),
ice_token='</E>'), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
OpenFinData_datasets.append(
dict(
type=OpenFinDataDataset,
path='./data/openfindata_release',
name=_name,
abbr='OpenFinData-' + _name,
reader_cfg=dict(
input_columns=['question', 'A', 'B', 'C'],
output_column='answer'),
infer_cfg=OpenFinData_infer_cfg,
eval_cfg=OpenFinData_eval_cfg,
))
if _name in OpenFinData_4choices_list:
OpenFinData_infer_cfg = dict(
ice_template=dict(type=PromptTemplate, template=dict(begin='</E>', round=[
dict(role='HUMAN', prompt=f'{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案: '),
dict(role='BOT', prompt='{answer}')]),
ice_token='</E>'), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
OpenFinData_datasets.append(
dict(
type=OpenFinDataDataset,
path='./data/openfindata_release',
name=_name,
abbr='OpenFinData-' + _name,
reader_cfg=dict(
input_columns=['question', 'A', 'B', 'C', 'D'],
output_column='answer'),
infer_cfg=OpenFinData_infer_cfg,
eval_cfg=OpenFinData_eval_cfg,
))
if _name in OpenFinData_5choices_list:
OpenFinData_infer_cfg = dict(
ice_template=dict(type=PromptTemplate, template=dict(begin='</E>', round=[
dict(role='HUMAN', prompt=f'{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\nE. {{E}}\n答案: '),
dict(role='BOT', prompt='{answer}')]),
ice_token='</E>'), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
OpenFinData_datasets.append(
dict(
type=OpenFinDataDataset,
path='./data/openfindata_release',
name=_name,
abbr='OpenFinData-' + _name,
reader_cfg=dict(
input_columns=['question', 'A', 'B', 'C', 'D', 'E'],
output_column='answer'),
infer_cfg=OpenFinData_infer_cfg,
eval_cfg=OpenFinData_eval_cfg,
))
if _name in OpenFinData_keyword_list:
OpenFinData_infer_cfg = dict(
ice_template=dict(type=PromptTemplate, template=dict(begin='</E>', round=[
dict(role='HUMAN', prompt=f'{{question}}\n答案: '),
dict(role='BOT', prompt='{answer}')]),
ice_token='</E>'), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
OpenFinData_datasets.append(
dict(
type=OpenFinDataDataset,
path='./data/openfindata_release',
name=_name,
abbr='OpenFinData-' + _name,
reader_cfg=dict(
input_columns=['question'],
output_column='answer'),
infer_cfg=OpenFinData_infer_cfg,
eval_cfg=OpenFinData_KW_eval_cfg,
))
del _name
# OpenFinData
## Introduction
The following introduction is taken from the project introduction of [OpenFinData](https://github.com/open-compass/OpenFinData)
```
OpenFinData is an open-source financial evaluation dataset jointly released by Oriental Fortune (东方财富) and the Shanghai Artificial Intelligence Laboratory. It captures real-world industry needs and is currently the financial evaluation dataset with the broadest scenario coverage and deepest domain specialization. Built on the diverse, real scenarios of Oriental Fortune's financial business, it aims to provide a high-quality data resource for researchers and developers in financial technology.
```
## Official link
### Repository
[OpenFinData](https://github.com/open-compass/OpenFinData)
## Use cases
In evaluation scripts, include the OpenFinData datasets like any other dataset by adding:
```
from .datasets.OpenFinData.OpenFinData_gen import OpenFinData_datasets
```
## Examples
Input example I:
```
你是一个数据审核小助手。表格内给出了2023年11月10日文一科技(600520)的最新数据,请指出其中哪个数据有误。请给出正确选项。
| 代码 | 名称 | 最新 | 涨幅% | 涨跌 | 成交量(股) | 成交额(元) | 流通市值 | 总市值 | 所属行业 |
|-------:|:-----|------:|------:|-----:|---------:|-----------:|-----------:|-----------:|:-------|
| 600520 | 文一科技 | 34.01 | 9.99 | 3.09 | 74227945 | 2472820896 | 5388200000 | 5388204300 | 通用设备 |
A. 2023年11月10日文一科技最新价34.01
B. 2023年11月10日文一科技成交额为2472820896
C. 文一科技的流通市值和总市值可能有误,因为流通市值5388200000元大于总市值5388204300元
D. 无明显错误数据
答案:
```
Output example I (from QWen-14B-Chat):
```
C. 文一科技的流通市值和总市值可能有误,因为流通市值5388200000元大于总市值5388204300元。
```
Input example II:
```
你是一个实体识别助手。请列出以下内容中提及的公司。
一度扬帆顺风的光伏产业,在过去几年中,面对潜在的高利润诱惑,吸引了众多非光伏行业的上市公司跨界转战,试图分得一杯羹。然而,今年下半年以来,出现了一个显著的趋势:一些跨界公司开始放弃或削减其光伏项目,包括皇氏集团(002329.SZ)、乐通股份(002319.SZ)、奥维通信(002231.SZ)等近十家公司。此外,还有一些光伏龙头放缓投资计划,如大全能源(688303.SH)、通威股份(600438.SZ)。业内人士表示,诸多因素导致了这股热潮的退却,包括市场变化、技术门槛、政策调整等等。光伏产业经历了从快速扩张到现在的理性回调,行业的自我调整和生态平衡正在逐步展现。从财务状况来看,较多选择退出的跨界企业都面临着经营压力。不过,皇氏集团、乐通股份等公司并未“全身而退”,仍在保持对光伏市场的关注,寻求进一步开拓的可能性。
答案:
```
Output example II (from InternLM2-7B-Chat):
```
皇氏集团(002329.SZ)、乐通股份(002319.SZ)、奥维通信(002231.SZ)、大全能源(688303.SH)、通威股份(600438.SZ)
```
## Evaluation results
```
dataset version metric mode qwen-14b-chat-hf internlm2-chat-7b-hf
---------------------------------- --------- -------- ------ ------------------ ----------------------
OpenFinData-emotion_identification b64193 accuracy gen 85.33 78.67
OpenFinData-entity_disambiguation b64193 accuracy gen 52 68
OpenFinData-financial_facts b64193 accuracy gen 70.67 46.67
OpenFinData-data_inspection a846b7 accuracy gen 53.33 51.67
OpenFinData-financial_terminology a846b7 accuracy gen 84 73.33
OpenFinData-metric_calculation a846b7 accuracy gen 55.71 68.57
OpenFinData-value_extraction a846b7 accuracy gen 84.29 71.43
OpenFinData-intent_understanding f0bd9e accuracy gen 88 86.67
OpenFinData-entity_recognition 81aeeb accuracy gen 68 84
```
from mmengine.config import read_base
with read_base():
from .PJExam_gen_8cd97c import PJExam_datasets # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import PJExamDataset, PJExamEvaluator
PJExam_datasets = []
for _name in [
'gk-2022-v1', 'gk-2022-v1-math', 'gk-2023-v1', 'gk-2023-v1-math',
'gk-2023-v2', 'gk-2023-v2-math', 'zk-2022-v1'
]:
_hint = '请你做一道</major>选择题\n请你一步一步思考并将思考过程写在【解析】和<eoe>之间。你将从A,B,C,D中选出正确的答案,并写在【答案】和<eoa>之间。\n例如:【答案】A<eoa>\n完整的题目回答的格式如下:\n【解析】...<eoe>\n【答案】...<eoa>\n请你严格按照上述格式作答。\n题目如下:\n'
_reader_cfg = {
'input_columns': ['question'],
'output_column': 'std_ans',
}
_infer_cfg = {
'ice_template': {
'type': PromptTemplate,
'template': {
'round': [{
'role': 'HUMAN',
'prompt': _hint + '{question}',
}]
},
'ice_token': '</E>'
},
'retriever': {
'type': ZeroRetriever
},
'inferencer': {
'type': GenInferencer,
'max_out_len': 1024,
}
}
_eval_cfg = {
'evaluator': {
'type': PJExamEvaluator
},
'pred_role': 'BOT',
'ds_column': 'eval_infos'
}
_dataset = {
'type': PJExamDataset,
'abbr': 'PJExamDataset-' + _name,
'path': './data/PJExam',
'name': _name,
'reader_cfg': _reader_cfg,
'infer_cfg': _infer_cfg,
'eval_cfg': _eval_cfg,
}
PJExam_datasets.append(_dataset)
del _name, _hint, _reader_cfg, _infer_cfg, _eval_cfg, _dataset
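# Illustrative only (not PJExamEvaluator's actual implementation): the hint above
# asks the model to place its choice between 【答案】 and <eoa>, so a prediction
# could be parsed with a pattern along the lines of
#
#     import re
#     match = re.search(r'【答案】\s*([A-D])', prediction)
#     answer = match.group(1) if match else ''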
from mmengine.config import read_base
with read_base():
from .flores_gen_2697d7 import PMMEval_flores_datasets
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets.PMMEval import PMMEvalFloresDataset, PMMEvalFloresEvaluator, pmmeval_flores_postprocess
NATURAL_LANGUAGE_FULLNAMES_FLORES = ['Chinese', 'Arabic', 'Spanish', 'French', 'Japanese', 'Korean', 'Portuguese', 'Thai', 'Vietnamese']
PROMPT = {
'Chinese': '将这个句子从英语翻译成中文。\n\n{src}',
'Arabic': 'ترجم هذه الجملة من الإنجليزية إلى العربية.\n\n{src}',
'Spanish': 'Traduce esta oración del inglés al español.\n\n{src}',
'Japanese': 'この文を英語から日本語に翻訳してください。\n\n{src}',
'Korean': '이 문장을 영어에서 한국어로 번역하세요.\n\n{src}',
'Thai': 'แปลประโยคนี้จากภาษาอังกฤษเป็นภาษาไทย.\n\n{src}',
'French': "Traduisez cette phrase de l'anglais en français.\n\n{src}",
'Portuguese': 'Traduza esta frase do inglês para o português.\n\n{src}',
'Vietnamese': 'Dịch câu này từ tiếng Anh sang tiếng Việt.\n\n{src}'
}
PMMEval_flores_datasets = list()
# Add flores_200
PMMEval_flores_reader_cfg = dict(
input_columns=['src'],
output_column='tgt',
test_split='test'
)
for lang_fullname in NATURAL_LANGUAGE_FULLNAMES_FLORES:
PMMEval_flores_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=PROMPT[lang_fullname]
)
]
)
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
PMMEval_flores_eval_cfg = dict(
evaluator=dict(type=PMMEvalFloresEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type=pmmeval_flores_postprocess, lang_fullname=lang_fullname)
)
PMMEval_flores_datasets.append(
dict(
abbr=f'flores-{lang_fullname}',
type=PMMEvalFloresDataset,
path='P-MMEval',
lang_fullname=lang_fullname,
reader_cfg=PMMEval_flores_reader_cfg,
infer_cfg=PMMEval_flores_infer_cfg,
eval_cfg=PMMEval_flores_eval_cfg)
)
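# Illustrative: each prompt above carries a `{src}` placeholder that the prompt
# template fills with the English source sentence, e.g.
#
#     PROMPT['French'].format(src='The weather is nice today.')
#     # -> "Traduisez cette phrase de l'anglais en français.\n\nThe weather is nice today."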
from mmengine.config import read_base
with read_base():
from .humanevalxl_gen_4dfef4 import PMMEval_HumanEvalXL_datasets
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets.PMMEval import PMMEvalHumanEvalXLDataset, PMMEvalHumanEvalXLEvaluator
NATURAL_LANGUAGE_FULLNAMES = ['English', 'Chinese', 'Arabic', 'Spanish', 'French', 'Japanese', 'Korean', 'Portuguese', 'Thai', 'Vietnamese']
PMMEval_HumanEvalXL_datasets = list()
PMMEval_HumanEvalXL_reader_cfg = dict(
input_columns=['task_id', 'prompt', 'entry_point', 'test', 'language', 'description', 'natural_language'],
output_column='declaration',
test_split='test'
)
PMMEval_HumanEvalXL_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template='{prompt}'),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
for lang_fullname in NATURAL_LANGUAGE_FULLNAMES:
for program_lang in ['python', 'java', 'javascript']:
PMMEval_HumanEvalXL_eval_cfg = dict(
evaluator=dict(
type=PMMEvalHumanEvalXLEvaluator,
language=program_lang,
text_language=lang_fullname,
ip_address='localhost',
port=5001),
pred_role='BOT')
PMMEval_HumanEvalXL_datasets.append(
dict(
abbr=f'humanevalxl-{program_lang}-{lang_fullname}',
type=PMMEvalHumanEvalXLDataset,
path='P-MMEval',
lang=lang_fullname,
program_lang=program_lang,
reader_cfg=PMMEval_HumanEvalXL_reader_cfg,
infer_cfg=PMMEval_HumanEvalXL_infer_cfg,
eval_cfg=PMMEval_HumanEvalXL_eval_cfg)
)
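# Note: the evaluator above sends generated code to an execution service addressed by
# `ip_address` / `port` ('localhost', 5001 here); that service presumably has to be
# running before evaluation starts, otherwise scoring requests will fail.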
from mmengine.config import read_base
with read_base():
from .mgsm_gen_679720 import PMMEval_MGSM_datasets
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets.PMMEval import PMMEvalMGSMDataset, PMMEvalMGSMEvaluator
NATURAL_LANGUAGE_CODES = ['en', 'zh', 'ar', 'es', 'fr', 'ja', 'ko', 'pt', 'th', 'vi']
LANG_TO_INSTRUCTIONS = {
'en': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"The answer is \". Do not add anything other than the integer answer after \"The answer is \".\n\n{question}",
'es': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"La respuesta es \". Do not add anything other than the integer answer after \"La respuesta es \".\n\n{question}",
'fr': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"La réponse est \". Do not add anything other than the integer answer after \"La réponse est \".\n\n{question}",
'zh': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"答案是 \". Do not add anything other than the integer answer after \"答案是 \".\n\n{question}",
'ja': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"答えは \". Do not add anything other than the integer answer after \"答えは \".\n\n{question}",
'th': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"คำตอบคือ \". Do not add anything other than the integer answer after \"คำตอบคือ \".\n\n{question}",
'ko': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"답변은 \". Do not add anything other than the integer answer after \"답변은 \".\n\n{question}",
'pt': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"A resposta é \". Do not add anything other than the integer answer after \"A resposta é \".\n\n{question}",
'vi': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"Câu trả lời là \". Do not add anything other than the integer answer after \"Câu trả lời là \".\n\n{question}",
'ar': "Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of \"الجواب هو \". Do not add anything other than the integer answer after \"الجواب هو \".\n\n{question}"
}
PMMEval_MGSM_datasets = list()
# Add MGSM
PMMEval_MGSM_reader_cfg = dict(
input_columns=['question'],
output_column='answer',
test_split='test'
)
PMMEval_MGSM_eval_cfg = dict(
evaluator=dict(type=PMMEvalMGSMEvaluator),
pred_role='BOT')
for lang_code in NATURAL_LANGUAGE_CODES:
PMMEval_MGSM_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=LANG_TO_INSTRUCTIONS[lang_code]
)
]
)
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
PMMEval_MGSM_datasets.append(
dict(
abbr=f'mgsm-{lang_code}',
type=PMMEvalMGSMDataset,
path='P-MMEval',
lang=lang_code,
reader_cfg=PMMEval_MGSM_reader_cfg,
infer_cfg=PMMEval_MGSM_infer_cfg,
eval_cfg=PMMEval_MGSM_eval_cfg)
)
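# Illustrative only (not PMMEvalMGSMEvaluator's actual logic): the instructions above
# require the last line to be the localized answer marker followed by an integer, so a
# prediction could be parsed roughly as
#
#     import re
#     nums = re.findall(r'-?\d+', prediction.strip().splitlines()[-1])
#     answer = int(nums[-1]) if nums else None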
from mmengine.config import read_base
with read_base():
from .mhellaswag_gen_1a6b73 import PMMEval_MHellaswag_datasets
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets.PMMEval import PMMEvalMHellaswagDataset, PMMEvalMHellaswagEvaluator, pmmeval_mhellaswag_postprocess
NATURAL_LANGUAGE_CODES = ['en', 'zh', 'ar', 'es', 'fr', 'ja', 'ko', 'pt', 'th', 'vi']
PMMEVAL_MHELLASWAG_TEMPLATE = "Input: {ctx}\nOptions: \nA. {option_1}\nB. {option_2}\nC. {option_3}\nD. {option_4}\nPick the correct ending for the sentence from A, B, C, and D, and return it in the following JSON format:\n{\"answer\": \"[choice]\"}\nwhere [choice] must be one of A, B, C or D."
PMMEval_MHellaswag_datasets = list()
PMMEval_MHellaswag_reader_cfg = dict(
input_columns=['ctx', 'option_1', 'option_2', 'option_3', 'option_4'],
output_column='label',
test_split='test'
)
PMMEval_MHellaswag_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=PMMEVAL_MHELLASWAG_TEMPLATE
)
]
)
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
for lang_code in NATURAL_LANGUAGE_CODES:
PMMEval_MHellaswag_eval_cfg = dict(
evaluator=dict(type=PMMEvalMHellaswagEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type=pmmeval_mhellaswag_postprocess, lang_code=lang_code)
)
PMMEval_MHellaswag_datasets.append(
dict(
abbr=f'mhellaswag-{lang_code}',
type=PMMEvalMHellaswagDataset,
path='P-MMEval',
lang=lang_code,
reader_cfg=PMMEval_MHellaswag_reader_cfg,
infer_cfg=PMMEval_MHellaswag_infer_cfg,
eval_cfg=PMMEval_MHellaswag_eval_cfg)
)
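# Illustrative only (not pmmeval_mhellaswag_postprocess itself): the template above
# asks for a JSON object such as {"answer": "A"}, so a prediction could be parsed
# roughly as
#
#     import json, re
#     m = re.search(r'\{.*?\}', prediction, re.DOTALL)
#     choice = json.loads(m.group(0)).get('answer') if m else None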
from mmengine.config import read_base
with read_base():
from .mifeval_gen_79f8fb import PMMEval_MIFEval_datasets
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets.PMMEval import PMMEvalMIFEvalDataset, PMMEvalMIFEvalEvaluator, pmmeval_mifeval_postprocess
NATURAL_LANGUAGE_CODES = ['en', 'zh', 'ar', 'es', 'fr', 'ja', 'ko', 'pt', 'th', 'vi']
PMMEVAL_MIFEVAL_TEMPLATE = '{prompt}'
PMMEval_MIFEval_datasets = list()
PMMEval_MIFEval_reader_cfg = dict(
input_columns=['prompt', 'instruction_id_list', 'kwargs'],
output_column=None,
test_split='test'
)
PMMEval_MIFEval_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=PMMEVAL_MIFEVAL_TEMPLATE
)
]
)
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
for lang_code in NATURAL_LANGUAGE_CODES:
PMMEval_MIFEval_eval_cfg = dict(
evaluator=dict(type=PMMEvalMIFEvalEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type=pmmeval_mifeval_postprocess, lang_code=lang_code)
)
PMMEval_MIFEval_datasets.append(
dict(
abbr=f'mifeval-{lang_code}',
type=PMMEvalMIFEvalDataset,
path='P-MMEval',
lang=lang_code,
reader_cfg=PMMEval_MIFEval_reader_cfg,
infer_cfg=PMMEval_MIFEval_infer_cfg,
eval_cfg=PMMEval_MIFEval_eval_cfg)
)
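# Note: `output_column=None` above reflects that MIFEval items (IFEval-style) are
# scored by rule-based checks driven by `instruction_id_list` and `kwargs`, rather
# than by comparison against a gold reference answer.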