gaoqiong / lm-evaluation-harness · Commits

Commit 0007b74a, authored Jan 16, 2025 by Baber
Commit message: nit
Parent commit: 14eee946
Showing 1 changed file with 4 additions and 4 deletions:

lm_eval/tasks/mathvista/utils.py (+4, −4)
@@ -7,9 +7,9 @@ import requests
 from Levenshtein import distance
 
-API_KEY = "your_openai_api_key"
+API_KEY = "API KEY"
 API_URL = "https://api.openai.com/v1/chat/completions"
-MODEL = "gpt-4"
+MODEL = "gpt-4"  # required for external LM call
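The first hunk swaps one hardcoded placeholder for another, so a real key still has to be pasted into the file before use. A common alternative, not part of this commit, is to read the key from an environment variable; the `OPENAI_API_KEY` name and the fallback value below are assumptions for illustration:

```python
import os

# Sketch only: fall back to the commit's placeholder when the (assumed)
# OPENAI_API_KEY environment variable is unset, so no secret lives in source.
API_KEY = os.environ.get("OPENAI_API_KEY", "API KEY")
API_URL = "https://api.openai.com/v1/chat/completions"
MODEL = "gpt-4"
```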
@@ -61,7 +61,7 @@ def send_request(prompt: str):
         "Content-Type": "application/json",
     }
     data = {
-        "model": "gpt-4",
+        "model": MODEL,
         "messages": [
             {"role": "user", "content": prompt},
         ],
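Putting the module constants and the second hunk together, the request helper presumably looks something like the sketch below. The `Authorization` header and the response parsing are assumptions based on the standard OpenAI chat-completions payload visible in the diff; only the `data` dict and the `Content-Type` header are taken from the source:

```python
API_KEY = "API KEY"  # placeholder, as in the commit
API_URL = "https://api.openai.com/v1/chat/completions"
MODEL = "gpt-4"


def build_payload(prompt: str) -> dict:
    # Mirrors the `data` dict in the diff; "model" now comes from the
    # MODEL constant instead of a hardcoded "gpt-4" string.
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_request(prompt: str) -> str:
    # Deferred import so the sketch stays importable without requests installed.
    import requests

    headers = {
        "Authorization": f"Bearer {API_KEY}",  # auth header is an assumption
        "Content-Type": "application/json",
    }
    resp = requests.post(API_URL, headers=headers, json=build_payload(prompt))
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

With the constant in place, switching the grader to another model is a one-line edit instead of a search for every literal `"gpt-4"`.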
@@ -248,7 +248,7 @@ def process_results(doc: dict, results: list[str]):
     answer = doc["answer"]
     # step 1: extract the answer from the model response
     # extracted_answer = extract_answer(response, doc)
-    extracted_answer = response[0]
+    extracted_answer = response
     if verify_extraction(extracted_answer):
         normalized_extraction = normalize_extracted_answer(
             extracted_answer, choices, question_type, answer_type, precision
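The third hunk is the behavioral fix: by this point `response` is presumably already a single string taken from the `results` list, so indexing it again with `[0]` keeps only its first character rather than the first element. A minimal illustration with a hypothetical model answer:

```python
# Hypothetical values showing why `response[0]` was a bug once `response`
# is a string, not the list[str] that process_results receives.
results = ["(B) 42"]      # the harness passes model outputs as list[str]
response = results[0]     # -> "(B) 42", already unpacked to a string

buggy_extraction = response[0]  # -> "(" : first *character* only
fixed_extraction = response     # -> "(B) 42" : the full model answer
```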