gaoqiong / lm-evaluation-harness

Commit 489fbc21 (unverified), authored Jul 18, 2025 by Idan Tene; committed by GitHub, Jul 18, 2025

Fix medical benchmarks import (#3151)

* Update utils.py

Parent: c2be7211
Showing 7 changed files with 21 additions and 7 deletions.
lm_eval/tasks/meddialog/utils.py      +3 -1
lm_eval/tasks/mediqa_qa2019/utils.py  +3 -1
lm_eval/tasks/medtext/utils.py        +3 -1
lm_eval/tasks/meqsum/utils.py         +3 -1
lm_eval/tasks/mimic_repsum/utils.py   +3 -1
lm_eval/tasks/mts_dialog/utils.py     +3 -1
lm_eval/tasks/olaph/utils.py          +3 -1
lm_eval/tasks/meddialog/utils.py

@@ -11,7 +11,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
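The new message in these hunks is assembled from adjacent string literals, which Python concatenates at compile time; the trailing space on each fragment is what keeps the package names separated in the final string. A minimal, self-contained sketch of how the message resolves (the `INSTALL_HINT` and `missing_metrics_error` names are illustrative assumptions, not from the diff):

```python
# The three adjacent literals from the hunk above; Python joins them
# at compile time into a single install-hint string.
INSTALL_HINT = (
    "Please install evaluation metrics via pip install evaluate bert-score "
    "rouge_score>=0.1.2 nltk absl-py "
    "git+https://github.com/google-research/bleurt.git"
)


def missing_metrics_error() -> ModuleNotFoundError:
    """Build the error raised when the optional metric packages are absent."""
    return ModuleNotFoundError(INSTALL_HINT)


print(INSTALL_HINT)
```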
lm_eval/tasks/mediqa_qa2019/utils.py

@@ -11,7 +11,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
lm_eval/tasks/medtext/utils.py

@@ -11,7 +11,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
lm_eval/tasks/meqsum/utils.py

@@ -11,7 +11,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
lm_eval/tasks/mimic_repsum/utils.py

@@ -15,7 +15,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py radgraph"
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
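One detail worth noting in the mimic_repsum hunk above: its middle fragment ends in `radgraph` with no trailing space, and Python joins adjacent literals with no separator, so the last two fragments run together in the resulting message. A quick check, reproducing those two literals verbatim:

```python
# The last two added literals from the mimic_repsum hunk, verbatim.
# Adjacent string literals concatenate with no separator, so the
# missing trailing space makes "radgraph" run into the git URL.
tail = (
    "rouge_score>=0.1.2 nltk absl-py radgraph"
    "git+https://github.com/google-research/bleurt.git"
)
print(tail)
```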
lm_eval/tasks/mts_dialog/utils.py

@@ -11,7 +11,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(
lm_eval/tasks/olaph/utils.py

@@ -12,7 +12,9 @@ try:
 except (ModuleNotFoundError, ImportError):
     raise ModuleNotFoundError(
-        "Please install evaluation metrics via pip install evaluate and pip install bert-score",
+        "Please install evaluation metrics via pip install evaluate bert-score "
+        "rouge_score>=0.1.2 nltk absl-py "
+        "git+https://github.com/google-research/bleurt.git"
     )
 except Exception as e:
     raise RuntimeError(