- 02 Aug, 2023 2 commits
  - Lintang Sutawika authored
  - lintangsutawika authored
- 01 Aug, 2023 10 commits
  - Hailey Schoelkopf authored
  - Lintang Sutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - haileyschoelkopf authored
- 31 Jul, 2023 1 commit
  - baberabb authored
- 28 Jul, 2023 4 commits
- 25 Jul, 2023 3 commits
  - Lintang Sutawika authored
  - Lintang Sutawika authored
  - Lintang Sutawika authored
- 24 Jul, 2023 5 commits
  - Lintang Sutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - lintangsutawika authored
  - ZZR0 authored
I discovered that the accuracy of all models (e.g., llama7b, llama13b, starcoder) on the 'gsm8k-cot' task was 0%. After a thorough investigation, I found that the generated text for each question was stopping early, preventing the 'regex_pattern' from matching any answers. The cause was an incorrect assignment of the 'primary_until' variable in the 'greedy_until' function: 'primary_until' should be a list of strings rather than a single string, because the 'stop_sequences' parameter of the 'stop_sequences_criteria' function requires a List[str]. Once I assigned 'primary_until = [until[0]]', the accuracy of llama7b on 'gsm8k-cot' increased to 1.67%.
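The failure mode described in this commit message can be illustrated with a short sketch. The `stops_generation` helper below is hypothetical (the harness's actual check lives in `stop_sequences_criteria`); it only shows why a bare string is dangerous where a `List[str]` is expected: iterating over a string yields individual characters, so each character is treated as its own stop sequence.

```python
# Illustrative sketch only -- not the harness's actual implementation.
# It demonstrates why 'primary_until' must be a list of strings.

def stops_generation(generated: str, stop_sequences) -> bool:
    """Return True if any stop sequence appears in the generated text."""
    # If stop_sequences is a bare string, `for seq in stop_sequences`
    # iterates over its characters, not over whole stop strings.
    return any(seq in generated for seq in stop_sequences)

until = ["\n\n"]  # a typical stop sequence for chain-of-thought tasks

# Buggy: passing the string "\n\n" makes the "sequences" two single
# "\n" characters, so any newline in the reasoning halts generation.
buggy = stops_generation("Step 1: add 2 + 3\n", until[0])

# Fixed: wrapping in a list keeps "\n\n" as one two-character sequence.
fixed = stops_generation("Step 1: add 2 + 3\n", [until[0]])

print(buggy, fixed)  # True False
```

With the bare string, a single newline inside the model's step-by-step answer triggers the stop, which is consistent with the early-stopping behavior (and the resulting 0% accuracy) reported above.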
- 21 Jul, 2023 4 commits
  - Eddy Yeo authored
  - Hailey Schoelkopf authored
  - haileyschoelkopf authored
  - haileyschoelkopf authored
- 20 Jul, 2023 2 commits
  - Lintang Sutawika authored
  - lintangsutawika authored
- 19 Jul, 2023 2 commits
  - lintangsutawika authored
  - lintangsutawika authored
- 18 Jul, 2023 7 commits
  - Eddy Yeo authored
  - haileyschoelkopf authored
  - haileyschoelkopf authored
  - haileyschoelkopf authored
  - haileyschoelkopf authored
  - Eddy Yeo authored
  - lintangsutawika authored