OpenDAS / opencompass · commit 2c15a0c0

Unverified commit 2c15a0c0, authored Sep 18, 2023 by Hubert; committed by GitHub on Sep 18, 2023.
[Feat] refine docs and codes for more user guides (#409)
Parent: a11cb45c

Showing 3 changed files with 22 additions and 10 deletions (+22 −10):
configs/datasets/truthfulqa/truthfulqa_gen_1e7d8d.py (+5 −2)
configs/datasets/truthfulqa/truthfulqa_gen_5ddc62.py (+5 −2)
opencompass/datasets/truthfulqa.py (+12 −6)
configs/datasets/truthfulqa/truthfulqa_gen_1e7d8d.py

@@ -18,8 +18,11 @@ truthfulqa_infer_cfg = dict(
 # Metrics such as 'truth' and 'info' needs
 # OPENAI_API_KEY with finetuned models in it.
 # Please use your own finetuned openai model with keys and refers to
-# the source code for more details
-# Metrics such as 'bleurt', 'rouge', 'bleu' are free to test
+# the source code of `TruthfulQAEvaluator` for more details.
+#
+# If you cannot provide available models for 'truth' and 'info',
+# and want to perform basic metric eval, please set
+# `metrics=('bleurt', 'rouge', 'bleu')`
 # When key is set to "ENV", the key will be fetched from the environment
 # variable $OPENAI_API_KEY. Otherwise, set key in here directly.
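The comments added above describe two ways to configure TruthfulQA evaluation: API-based metrics backed by finetuned OpenAI models, or free local metrics. A minimal sketch of what each choice might look like as config fragments; the `api_eval`/`basic_eval` names and the placeholder model names are assumptions for illustration, not taken from the diff:

```python
# Hypothetical config fragments illustrating the two options described
# in the comments above. Only `metrics`, `truth_model`, `info_model`,
# and `key` come from the diff; everything else is illustrative.

# Option 1: API-based metrics -- requires your own finetuned models.
# key='ENV' means the key is read from the $OPENAI_API_KEY variable.
api_eval = dict(
    metrics=('truth', 'info'),
    truth_model='your-finetuned-truth-model',  # placeholder name
    info_model='your-finetuned-info-model',    # placeholder name
    key='ENV',
)

# Option 2: basic metrics only -- no API key or finetuned models needed.
basic_eval = dict(
    metrics=('bleurt', 'rouge', 'bleu'),
)

print(basic_eval['metrics'])
```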
configs/datasets/truthfulqa/truthfulqa_gen_5ddc62.py

@@ -20,8 +20,11 @@ truthfulqa_infer_cfg = dict(
 # Metrics such as 'truth' and 'info' needs
 # OPENAI_API_KEY with finetuned models in it.
 # Please use your own finetuned openai model with keys and refers to
-# the source code for more details
-# Metrics such as 'bleurt', 'rouge', 'bleu' are free to test
+# the source code of `TruthfulQAEvaluator` for more details.
+#
+# If you cannot provide available models for 'truth' and 'info',
+# and want to perform basic metric eval, please set
+# `metrics=('bleurt', 'rouge', 'bleu')`
 # When key is set to "ENV", the key will be fetched from the environment
 # variable $OPENAI_API_KEY. Otherwise, set key in here directly.
opencompass/datasets/truthfulqa.py

@@ -39,7 +39,9 @@ class TruthfulQAEvaluator(BaseEvaluator):
     Args:
         truth_model (str): Truth model name. See "notes" for details.
+            Defaults to ''.
         info_model (str): Informativeness model name. See "notes" for details.
+            Defaults to ''.
         metrics (tuple): Computing needed metrics for truthfulqa dataset.
             Supported metrics are `bleurt`, `rouge`, `bleu`, `truth`, `info`.
         key (str): Corresponding API key. If set to `ENV`, find it in
@@ -67,12 +69,11 @@ class TruthfulQAEvaluator(BaseEvaluator):
         'bleu': 'bleu',
     }
-
     def __init__(
         self,
-        truth_model: str = '',
-        info_model: str = '',
+        truth_model: str = '',  # noqa
+        info_model: str = '',  # noqa
         metrics=('bleurt', 'rouge', 'bleu', 'truth', 'info'),
         key='ENV'):
         self.API_MODEL = {
             'truth': truth_model,
             'info': info_model,
@@ -85,6 +86,11 @@ class TruthfulQAEvaluator(BaseEvaluator):
             if metric in self.SCORE_KEY.keys():
                 self.metrics.append(metric)
             if metric in self.API_MODEL.keys():
+                assert self.API_MODEL.get(metric), \
+                    f'`{metric}_model` should be set to perform API eval.' \
+                    'If you want to perform basic metric eval, ' \
+                    f'please refer to the docstring of {__file__} ' \
+                    'for more details.'
                 self.api_metrics.append(metric)
         if self.api_metrics:
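The assertion added in the last hunk is the user-facing point of this commit: requesting an API metric without supplying its model name now fails fast with a pointer to the docstring, while the basic metrics keep working with no models at all. A self-contained sketch of that guard logic; `EvaluatorSketch` is a simplified stand-in for illustration, not the real `TruthfulQAEvaluator`:

```python
class EvaluatorSketch:
    """Simplified stand-in mirroring the metric-dispatch logic in the diff."""

    # Metrics that need no external model (grouped here for the sketch).
    BASIC_METRICS = ('bleurt', 'rouge', 'bleu')

    def __init__(self,
                 truth_model: str = '',
                 info_model: str = '',
                 metrics=('bleurt', 'rouge', 'bleu', 'truth', 'info')):
        self.API_MODEL = {'truth': truth_model, 'info': info_model}
        self.metrics, self.api_metrics = [], []
        for metric in metrics:
            if metric in self.BASIC_METRICS:
                self.metrics.append(metric)
            if metric in self.API_MODEL:
                # The guard added by this commit: an API metric whose model
                # name is still the empty default fails immediately.
                assert self.API_MODEL.get(metric), \
                    f'`{metric}_model` should be set to perform API eval.'
                self.api_metrics.append(metric)


# Basic metrics need no model names:
basic = EvaluatorSketch(metrics=('bleurt', 'rouge', 'bleu'))

# Requesting 'truth' without truth_model now raises at construction time:
try:
    EvaluatorSketch(metrics=('truth',))
    raised = False
except AssertionError:
    raised = True
```

This moves the failure from evaluation time (an opaque API error) to construction time with an actionable message, which is why the config comments in the first two files now tell users to drop 'truth' and 'info' from `metrics` if they have no finetuned models.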