ModelZoo / DeepSeekV2_pytorch · Commits

Commit 74df9bea, authored Sep 02, 2024 by zhaoying1

    added deepseekv2

Pipeline #1652 failed with stages in 0 seconds.
Showing 20 changed files with 80 additions and 0 deletions (+80, -0).
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/minute_mysteries_qa.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/misconceptions.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/misconceptions_russian.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mnist_ascii.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/modified_arithmetic.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/moral_permissibility.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/movie_dialog_same_or_different.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/movie_recommendation.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/multiemo.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/natural_instructions.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/navigate.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/nonsense_words_grammar.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/novel_concepts.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/object_counting.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/odd_one_out.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/operators.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_qa.yaml  (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_reading_comprehension.yaml  (+4, -0)
Too many changes to show. To preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/minute_mysteries_qa.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: minute_mysteries_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_minute_mysteries_qa_generate_until
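Each of the files below follows this same four-line pattern and, per the header comment, is emitted by a generator script. The generator itself is not part of this diff, so the following is only a minimal sketch of how such a utils.py could produce these per-subtask configs; the SUBTASKS list is taken from the files in this commit, while the function name and output directory are illustrative assumptions.

# Hypothetical sketch of a generator like the utils.py referenced in the
# "# Generated by utils.py" header; the real script in the harness may differ.
import os

# A few BIG-bench subtask names taken from the files added in this commit.
SUBTASKS = [
    "minute_mysteries_qa",
    "misconceptions",
    "mnist_ascii",
]

OUTPUT_DIR = "generate_until"  # assumed output directory

def write_task_yaml(subtask: str) -> None:
    """Write one per-subtask YAML that points back at the shared template."""
    path = os.path.join(OUTPUT_DIR, f"{subtask}.yaml")
    with open(path, "w") as f:
        f.write("# Generated by utils.py\n")
        f.write(f"dataset_name: {subtask}_zero_shot\n")
        f.write("include: ../generate_until_template_yaml\n")
        f.write(f"task: bigbench_{subtask}_generate_until\n")

if __name__ == "__main__":
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    for name in SUBTASKS:
        write_task_yaml(name)

Keeping the shared prompt and generation settings in ../generate_until_template_yaml and generating only the per-subtask overrides keeps the 20 files identical except for the dataset and task names.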
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/misconceptions.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: misconceptions_zero_shot
include: ../generate_until_template_yaml
task: bigbench_misconceptions_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/misconceptions_russian.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: misconceptions_russian_zero_shot
include: ../generate_until_template_yaml
task: bigbench_misconceptions_russian_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mnist_ascii.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: mnist_ascii_zero_shot
include: ../generate_until_template_yaml
task: bigbench_mnist_ascii_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/modified_arithmetic.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: modified_arithmetic_zero_shot
include: ../generate_until_template_yaml
task: bigbench_modified_arithmetic_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/moral_permissibility.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: moral_permissibility_zero_shot
include: ../generate_until_template_yaml
task: bigbench_moral_permissibility_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/movie_dialog_same_or_different.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: movie_dialog_same_or_different_zero_shot
include: ../generate_until_template_yaml
task: bigbench_movie_dialog_same_or_different_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/movie_recommendation.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: movie_recommendation_zero_shot
include: ../generate_until_template_yaml
task: bigbench_movie_recommendation_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: mult_data_wrangling_zero_shot
include: ../generate_until_template_yaml
task: bigbench_mult_data_wrangling_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/multiemo.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: multiemo_zero_shot
include: ../generate_until_template_yaml
task: bigbench_multiemo_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/natural_instructions.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: natural_instructions_zero_shot
include: ../generate_until_template_yaml
task: bigbench_natural_instructions_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/navigate.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: navigate_zero_shot
include: ../generate_until_template_yaml
task: bigbench_navigate_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/nonsense_words_grammar.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: nonsense_words_grammar_zero_shot
include: ../generate_until_template_yaml
task: bigbench_nonsense_words_grammar_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/novel_concepts.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: novel_concepts_zero_shot
include: ../generate_until_template_yaml
task: bigbench_novel_concepts_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/object_counting.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: object_counting_zero_shot
include: ../generate_until_template_yaml
task: bigbench_object_counting_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/odd_one_out.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: odd_one_out_zero_shot
include: ../generate_until_template_yaml
task: bigbench_odd_one_out_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/operators.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: operators_zero_shot
include: ../generate_until_template_yaml
task: bigbench_operators_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: paragraph_segmentation_zero_shot
include: ../generate_until_template_yaml
task: bigbench_paragraph_segmentation_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_qa.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: parsinlu_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_parsinlu_qa_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_reading_comprehension.yaml (new file, 0 → 100644)

# Generated by utils.py
dataset_name: parsinlu_reading_comprehension_zero_shot
include: ../generate_until_template_yaml
task: bigbench_parsinlu_reading_comprehension_generate_until
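Once registered, each task name in these YAML files can be passed to the harness by name. Below is a hedged usage sketch only: it assumes the simple_evaluate entry point of lm-evaluation-harness v0.4.x (consistent with the 240310 snapshot), and the model checkpoint path is a placeholder rather than anything defined in this commit.

# Hedged usage sketch: running one task added in this commit through the
# harness's evaluator API. The checkpoint path is a placeholder.
from lm_eval.evaluator import simple_evaluate

results = simple_evaluate(
    model="hf",                                    # HuggingFace backend
    model_args="pretrained=/path/to/deepseek-v2",  # placeholder checkpoint path
    tasks=["bigbench_misconceptions_generate_until"],
    batch_size=1,
)
print(results["results"])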