Commit be3dfa50 authored by jerrrrry

Initial commit
examples = [
(
'In a 10 Gigabit Ethernet network, the average size of a frame is 1500 bytes. If a burst of noise lasting 1ms interrupts the network, how many frames are lost?',
'First, calculate the data rate in bytes/s:\n\n10 Gigabit/s * (1 Byte / 8 bits) = 1.25 * 10^9 Bytes/s\n\nNext, calculate the data loss in bytes due to the noise:\n\n1 ms * 1.25 * 10^9 Bytes/s = 1.25 * 10^6 Bytes\n\nFinally, divide the data loss by the average frame size to get the number of frames lost:\n\n1.25 * 10^6 Bytes / 1500 Bytes/frame ≈ 833.33 frames\nThe answer is 833.33'
),
(
'Given x = 0.157, what is the value of x \\times \\frac{\\prod_{n=1}^\\infty (1 - \\frac{x^2}{n^2 \\pi^2})}{\\sin(x)}?',
"To evaluate the expression $x \\times \\frac{\\prod_{n=1}^{\\infty} (1 - \\frac{x^2}{n^2 \\pi^2})}{\\sin(x)}$ given x = 0.157, we first recognize that the product in the numerator is related to the sine function through the Euler's reflection formula for the sine function, which can be expressed as:\n\n$$\\sin(x) = x \\prod_{n=1}^{\\infty} \\left(1 - \\frac{x^2}{n^2 \\pi^2}\\right)$$\n\nTherefore, the given expression simplifies to: $x \\times \\frac{\\sin(x)}{\\sin(x)}$\n\nBecause sin(x) in the numerator and denominator cancels out, the expression simplifies further to just x.\n\nSo, given x = 0.157, the value of the expression is 0.157. This result is derived from the properties of the sine function and does not require computational evaluation.\nThe answer is 0.157"
),
(
'Consider the basis C of \\mathbb{R}^2 consisting of vectors u_1 = [2, 4] and u_2 = [1, -1]. If y = [8, 12], find the C-coordinate vector of y.',
"The goal is to express y as a linear combination of the basis vectors of C, i.e., $y = a\\cdot u_1 + b\\cdot u_2$, where a and b are the scalar coefficients that we want to find. These coefficients will form the C-coordinate vector of y, which we'll denote as $[a, b]_C$.\n\nGiven:\n- $u_1 = [2, 4]$,\n- $u_2 = [1, -1]$,\n- $y = [8, 12]$.\n\nWe need to solve the system of linear equations:\n2a + 1b = 8\n4a - 1b = 12\n\nLet's solve this system of equations to find a and b.\n\nThe solution to the system of equations is $a = \\frac{10}{3} and b = \\frac{4}{3}$. Therefore, the C-coordinate vector of y in the basis consisting of vectors u_1 = [2, 4] and u_2 = [1, -1] is $\\left[\\frac{10}{3}, \\frac{4}{3}\\right]_C$. \nLet's calculate the numerical value of $\\left[\x0crac{10}{3}, \x0crac{4}{3}\right]_C$ as [3.33, 1.33].\nThe answer is [3.33, 1.33]"
),
(
'One can draw a simple, connected planar graph with 200 vertices and 397 edges. Is this statement True or False?',
"To determine the answer, we can use Euler's formula for planar graphs, which states that for any finite, connected, planar graph, $V - E + F = 2$, where V is the number of vertices, E is the number of edges, and F is the number of faces.\n\nGiven the modified question, we have V = 200 vertices and E = 397 edges. We want to find if we can have a graph that satisfies these conditions, adhering to Euler's formula.\n\nFirst, let's rearrange Euler's formula to solve for F: F = E - V + 2\n\nSubstituting the given values: F = 397 - 200 + 2, F = 199\n\nThis means a graph with 200 vertices and 397 edges would have 199 faces. However, to determine the truth of this possibility, we should check if this graph doesn't violate any other planar graph constraints, particularly regarding the number of edges.\n\nFor a simple, connected planar graph, there's also a relationship between vertices, edges, and faces given by the inequality: $E \\leq 3V - 6$\n\nSubstituting V = 200 gives: $E \\leq 3*200 - 6 = 594$\n\nWith E = 397, the condition $E \\leq 594$ is satisfied, meaning it's theoretically possible in terms of the edge condition for a planar graph.\n\nTherefore, one can draw a simple, connected planar graph with 200 vertices and 397 edges, resulting in 199 faces, without violating the conditions for it to be planar according to both Euler's formula and the constraint on the maximum number of edges.\nThe answer is True"
),
(
'Given a finite group G, and a collection of permutations H on a set. Then (a) there always exists H such that G is isomorphic to H; (b) for any H, G is isomorphic to H; (c) G can never be isomorphic to H; (d) none of the above. Which option is correct?',
"This is based on Cayley's theorem, which states that every group G is isomorphic to a subgroup of the symmetric group acting on G. \nIn other words, for every finite group G, there exists a collection of permutations H (which in this context, can be thought of as the set of permutations representing the action of G on itself) such that G is isomorphic to H.\n\nTherefore, there always exists H such that G is isomorphic to H.\nThe answer is (a)"
)
]
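# Sanity check (illustrative addition, not part of the original config): the
# arithmetic quoted in the few-shot answers above can be reproduced in a few
# lines of plain Python; the helper name below is arbitrary.
def _check_fewshot_arithmetic():
    # Ethernet example: frames lost during a 1 ms noise burst on a 10 Gb/s link.
    bytes_per_second = 10e9 / 8               # 1.25e9 bytes/s
    lost_bytes = 1e-3 * bytes_per_second      # 1.25e6 bytes
    assert round(lost_bytes / 1500, 2) == 833.33
    # Coordinate-vector example: solve 2a + b = 8 and 4a - b = 12 by adding them.
    a = (8 + 12) / 6                          # 6a = 20  ->  a = 10/3
    b = 8 - 2 * a                             # b = 4/3
    assert (round(a, 2), round(b, 2)) == (3.33, 1.33)
    # Planar-graph example: the bound E <= 3V - 6 gives 397 <= 594 for V = 200.
    assert 397 <= 3 * 200 - 6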
from mmengine.config import read_base
with read_base():
    from .TheoremQA_5shot_gen_6f0af8 import TheoremQA_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import (
TheoremQADataset,
TheoremQA_postprocess_v3,
TheoremQA_postprocess_v4,
TheoremQAEvaluatorV3,
)
TheoremQA_reader_cfg = dict(
input_columns=['Question', 'Answer_type'],
output_column='Answer',
train_split='test',
)
TheoremQA_prompt1 = """You are a mathematician, you are supposed to answer the given question. You need to output the answer in your final sentence like "Therefore, the answer is ...". The answer can only be one of the following forms:
1. a numerical value like 0.1, no symbol and no unit at all.
2. a list of number like [2, 3, 4].
3. True/False.
4. an option like (a), (b), (c), (d)
"""
TheoremQA_prompt2 = "Question: {Question}\nLet's think step by step."
TheoremQA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=TheoremQA_prompt1 + TheoremQA_prompt2,
),
]
),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
# A proper evaluator would rely on an LLM to extract the answer; this evaluation logic also produces a fairly large number of false negatives (FN).
TheoremQA_eval_cfg = dict(
evaluator=dict(type=TheoremQAEvaluatorV3),
pred_postprocessor=dict(type=TheoremQA_postprocess_v4),
)
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
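# Illustration (simplified assumption about rendering, not the exact OpenCompass
# mechanics): the single HUMAN turn above is TheoremQA_prompt1 + TheoremQA_prompt2,
# and the {Question} placeholder is filled from the reader's `Question` column.
# The hypothetical `_preview` below only shows what the final text looks like.
_preview = (TheoremQA_prompt1 + TheoremQA_prompt2).format(Question='Compute 2 + 2.')
# _preview ends with: "Question: Compute 2 + 2.\nLet's think step by step."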
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import TheoremQADataset, TheoremQA_postprocess
TheoremQA_reader_cfg = dict(input_columns=['Question', 'Answer_type'], output_column='Answer', train_split='test')
TheoremQA_prompt1 = (
'Please read a math problem, and then think step by step to derive the answer. The answer is decided by Answer Type. '
'If the Answer type in [bool], the answer needs to be True or False. '
'Else if the Answer type in [integer, float] , The answer needs to be in numerical form. '
'Else if the Answer type in [list of integer, list of float] , the answer needs to be a list of number like [2, 3, 4]. '
'Else if the Answer type in [option], the answer needs to be an option like (a), (b), (c), (d).'
"You need to output the answer in your final sentence like 'Therefore, the answer is ...'."
)
TheoremQA_prompt2 = (
f'Below is an instruction that describes a task, paired with an input that provides further context. '
f'Write a response that appropriately completes the request.\n\n### Instruction:\n{TheoremQA_prompt1}\n\n### Input:\n{{Question}}\nAnswer_type:{{Answer_type}}\n### Response:\n'
)
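# Note (illustration only): because TheoremQA_prompt2 is an f-string, {TheoremQA_prompt1}
# is substituted immediately, while the doubled braces {{Question}} and {{Answer_type}}
# survive as literal {Question} / {Answer_type} placeholders for the prompt template to
# fill per example. A rough preview with made-up values:
_filled_example = TheoremQA_prompt2.format(Question='What is 2 + 2?', Answer_type='integer')
# _filled_example is an Alpaca-style prompt that ends with "### Response:\n".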
TheoremQA_infer_cfg = dict(
prompt_template=dict(type=PromptTemplate, template=TheoremQA_prompt2),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer, max_out_len=512),
)
TheoremQA_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=TheoremQA_postprocess))
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import TheoremQADataset, TheoremQA_postprocess
TheoremQA_reader_cfg = dict(input_columns=['Question', 'Answer_type'], output_column='Answer', train_split='test')
TheoremQA_prompt1 = """You are a mathematician, you are supposed to answer the given question. You need to output the answer in your final sentence like "Therefore, the answer is ...". The answer can only be one of the following forms:
1. a numerical value like 0.1, no symbol and no unit at all.
2. a list of number like [2, 3, 4].
3. True/False.
4. an option like (a), (b), (c), (d)
"""
TheoremQA_prompt2 = "Question: {Question}\nLet's think step by step."
TheoremQA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
begin=[
dict(role='SYSTEM', fallback_role='HUMAN', prompt=TheoremQA_prompt1),
],
round=[
dict(role='HUMAN', prompt=TheoremQA_prompt2),
],
),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer, max_out_len=512),
)
TheoremQA_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=TheoremQA_postprocess))
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
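# Note (assumed rendering, for illustration only): in the chat-style template above,
# the `begin` entry sends TheoremQA_prompt1 as a SYSTEM message and each round adds
# TheoremQA_prompt2 as the HUMAN turn, roughly equivalent to:
#   [{'role': 'system', 'content': TheoremQA_prompt1},
#    {'role': 'user',   'content': "Question: ...\nLet's think step by step."}]
# fallback_role='HUMAN' folds the system text into the user turn for models that
# expose no system slot.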
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import TheoremQADataset, TheoremQA_postprocess
TheoremQA_reader_cfg = dict(input_columns=['Question', 'Answer_type'], output_column='Answer', train_split='test')
TheoremQA_prompt1 = """You are a mathematician, you are supposed to answer the given question. You need to output the answer in your final sentence like "Therefore, the answer is ...". The answer can only be one of the following forms:
1. a numerical value like 0.1, no symbol and no unit at all.
2. a list of number like [2, 3, 4].
3. True/False.
4. an option like (a), (b), (c), (d)
"""
TheoremQA_prompt2 = "Question: {Question}\nLet's think step by step."
TheoremQA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=TheoremQA_prompt1 + TheoremQA_prompt2,
),
]
),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer, max_out_len=512),
)
TheoremQA_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=TheoremQA_postprocess))
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import TheoremQADataset, TheoremQA_postprocess_v2
TheoremQA_reader_cfg = dict(input_columns=['Question', 'Answer_type'], output_column='Answer', train_split='test')
TheoremQA_prompt1 = """You are a mathematician, you are supposed to answer the given question. You need to output the answer in your final sentence like "Therefore, the answer is ...". The answer can only be one of the following forms:
1. a numerical value like 0.1, no symbol and no unit at all.
2. a list of number like [2, 3, 4].
3. True/False.
4. an option like (a), (b), (c), (d)
"""
TheoremQA_prompt2 = "Question: {Question}\nLet's think step by step."
TheoremQA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=TheoremQA_prompt1 + TheoremQA_prompt2,
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer, max_out_len=512),
)
# A proper evaluator would rely on an LLM to extract the answer; this evaluation logic also produces a fairly large number of false negatives (FN).
TheoremQA_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=TheoremQA_postprocess_v2))
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import TheoremQADataset, TheoremQA_postprocess_v2
TheoremQA_reader_cfg = dict(input_columns=['Question', 'Answer_type'], output_column='Answer', train_split='test')
TheoremQA_prompt1 = """You are a mathematician, you are supposed to answer the given question. You need to output the answer in your final sentence like "Therefore, the answer is ...". The answer can only be one of the following forms:
1. a numerical value like 0.1, no symbol and no unit at all.
2. a list of number like [2, 3, 4].
3. True/False.
4. an option like (a), (b), (c), (d)
"""
TheoremQA_prompt2 = "Question: {Question}\nLet's think step by step."
TheoremQA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(
round=[
dict(
role='HUMAN',
prompt=TheoremQA_prompt1 + TheoremQA_prompt2,
),
]
),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer, max_out_len=512),
)
# A proper evaluator would rely on an LLM to extract the answer; this evaluation logic also produces a fairly large number of false negatives (FN).
TheoremQA_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=TheoremQA_postprocess_v2))
TheoremQA_datasets = [
dict(
abbr='TheoremQA',
type=TheoremQADataset,
path='./data/TheoremQA/test.csv',
reader_cfg=TheoremQA_reader_cfg,
infer_cfg=TheoremQA_infer_cfg,
eval_cfg=TheoremQA_eval_cfg,
)
]
from mmengine.config import read_base
with read_base():
    from .XCOPA_ppl_54058d import XCOPA_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import PPLInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import XCOPADataset
XCOPA_reader_cfg = dict(
input_columns=['question', 'premise', 'choice1', 'choice2'],
output_column='label',
test_split='train')
XCOPA_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template={
0: 'Premise:{premise}。\nQuestion:{question}。\nAnswer: {choice1}.',
1: 'Passage:{premise}。\nQuestion:{question}。\nAnswer: {choice2}.',
}),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=PPLInferencer))
XCOPA_eval_cfg = dict(evaluator=dict(type=AccEvaluator))
XCOPA_datasets = [
dict(
type=XCOPADataset,
path='xcopa',
reader_cfg=XCOPA_reader_cfg,
infer_cfg=XCOPA_infer_cfg,
eval_cfg=XCOPA_eval_cfg)
]
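# Sketch (assumption, not the actual OpenCompass implementation): with a label ->
# template mapping like the one above, PPLInferencer renders every candidate answer
# into a full prompt and keeps the label whose text the model scores as least
# perplexing. The stand-in `score_ppl` callable below is hypothetical.
def _pick_label_by_ppl(score_ppl, rendered):
    """rendered: dict mapping label (0/1) to the fully rendered candidate text."""
    return min(rendered, key=lambda label: score_ppl(rendered[label]))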
from mmengine.config import read_base
with read_base():
    from .XLSum_gen_2bb71c import XLSum_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import RougeEvaluator
from opencompass.datasets import XLSUMDataset, Xsum_postprocess
XLSum_reader_cfg = dict(input_columns=['text'], output_column='summary')
XLSum_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template='Document:{text}\n'
'Based on the previous text, provide a brief single summary:'),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer))
XLSum_eval_cfg = dict(
evaluator=dict(type=RougeEvaluator),
pred_postprocessor=dict(type=Xsum_postprocess),
)
XLSum_datasets = [
dict(
type=XLSUMDataset,
path='csebuetnlp/xlsum',
reader_cfg=XLSum_reader_cfg,
infer_cfg=XLSum_infer_cfg,
eval_cfg=XLSum_eval_cfg)
]
from mmengine.config import read_base
with read_base():
    from .Xsum_gen_31397e import Xsum_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import RougeEvaluator
from opencompass.datasets import XsumDataset
Xsum_reader_cfg = dict(input_columns=['dialogue'], output_column='summary')
Xsum_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(round=[
dict(
role='HUMAN',
prompt=
'Document:{dialogue}\nBased on the previous text, provide a brief single summary:'
),
]),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
Xsum_eval_cfg = dict(
evaluator=dict(type=RougeEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type='Xsum'),
)
Xsum_datasets = [
dict(
type=XsumDataset,
abbr='Xsum',
path='opencompass/xsum',
reader_cfg=Xsum_reader_cfg,
infer_cfg=Xsum_infer_cfg,
eval_cfg=Xsum_eval_cfg,
)
]
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import RougeEvaluator
from opencompass.datasets import XsumDataset, Xsum_postprocess
Xsum_reader_cfg = dict(input_columns=['dialogue'], output_column='summary')
Xsum_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template='Document:{dialogue}\n'
'Based on the previous text, provide a brief single summary:'),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer))
Xsum_eval_cfg = dict(
evaluator=dict(type=RougeEvaluator),
pred_postprocessor=dict(type=Xsum_postprocess),
)
Xsum_datasets = [
dict(
type=XsumDataset,
abbr='Xsum',
path='opencompass/xsum',
reader_cfg=Xsum_reader_cfg,
infer_cfg=Xsum_infer_cfg,
eval_cfg=Xsum_eval_cfg)
]
from mmengine.config import read_base
with read_base():
    from .adv_glue_sst2.adv_glue_sst2_gen import adv_sst2_datasets
    from .adv_glue_qqp.adv_glue_qqp_gen import adv_qqp_datasets
    from .adv_glue_rte.adv_glue_rte_gen import adv_rte_datasets
    from .adv_glue_qnli.adv_glue_qnli_gen import adv_qnli_datasets
    from .adv_glue_mnli.adv_glue_mnli_gen import adv_mnli_datasets
    from .adv_glue_mnli_mm.adv_glue_mnli_mm_gen import adv_mnli_mm_datasets
datasets = sum((v for k, v in locals().items() if k.endswith('_datasets')), [])
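# Note (illustration): the sum() above starts from the empty list [] and concatenates
# every list in the local namespace whose name ends with '_datasets', i.e. it is
# equivalent to:
#   datasets = []
#   for _name, _value in locals().items():
#       if _name.endswith('_datasets'):
#           datasets += _value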
from mmengine.config import read_base
with read_base():
    from .adv_glue_mnli_gen_bd8ef0 import adv_mnli_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import AdvMnliDataset, AccDropEvaluator
from opencompass.utils.text_postprocessors import first_option_postprocess
adv_mnli_reader_cfg = dict(
input_columns=['premise', 'hypothesis'], output_column='label_option')
adv_mnli_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(round=[
dict(
role='HUMAN',
prompt=
"""Please identify whether the premise entails the hypothesis. The answer should be exactly 'A. yes', 'B. maybe' or 'C. no'.
premise: {premise}
hypothesis: {hypothesis}
Answer:"""),
]),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
adv_mnli_eval_cfg = dict(
evaluator=dict(type=AccDropEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type=first_option_postprocess, options='ABC'),
)
adv_mnli_datasets = [
dict(
abbr='adv_mnli',
type=AdvMnliDataset,
path='opencompass/advglue-dev',
reader_cfg=adv_mnli_reader_cfg,
infer_cfg=adv_mnli_infer_cfg,
eval_cfg=adv_mnli_eval_cfg,
)
]
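# Sketch (assumption, not the actual opencompass helper): first_option_postprocess
# reduces a free-form generation to the first option letter found among `options`,
# so the evaluator can compare it against `label_option`. A minimal stand-in:
import re

def _first_option(text, options='ABC'):
    match = re.search(rf'[{options}]', text)
    return match.group(0) if match else ''

# e.g. _first_option('The answer should be B. maybe') -> 'B'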
from mmengine.config import read_base
with read_base():
    from .adv_glue_mnli_mm_gen_bd8ef0 import adv_mnli_mm_datasets  # noqa: F401, F403
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import AdvMnliMMDataset, AccDropEvaluator
from opencompass.utils.text_postprocessors import first_option_postprocess
adv_mnli_mm_reader_cfg = dict(
input_columns=['premise', 'hypothesis'], output_column='label_option')
adv_mnli_mm_infer_cfg = dict(
prompt_template=dict(
type=PromptTemplate,
template=dict(round=[
dict(
role='HUMAN',
prompt=
"""Please identify whether the premise entails the hypothesis. The answer should be exactly 'A. yes', 'B. maybe' or 'C. no'.
premise: {premise}
hypothesis: {hypothesis}
Answer:"""),
]),
),
retriever=dict(type=ZeroRetriever),
inferencer=dict(type=GenInferencer),
)
adv_mnli_mm_eval_cfg = dict(
evaluator=dict(type=AccDropEvaluator),
pred_role='BOT',
pred_postprocessor=dict(type=first_option_postprocess, options='ABC'),
)
adv_mnli_mm_datasets = [
dict(
abbr='adv_mnli_mm',
type=AdvMnliMMDataset,
path='opencompass/advglue-dev',
reader_cfg=adv_mnli_mm_reader_cfg,
infer_cfg=adv_mnli_mm_infer_cfg,
eval_cfg=adv_mnli_mm_eval_cfg,
)
]