05 Apr, 2024
      TMMLU+ implementation (#1394) · 9ae96cdf
      ZoneTwelve authored
      
      
      * implementation of TMMLU+
      
      * implemented: TMMLU+
      
      **TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding**
      
      - 4 categories
          - STEM
          - Social Science
          - Humanities
          - Other
      
      The TMMLU+ dataset, encompassing over 67 subjects and 20160 tasks, is six times larger and more balanced than its predecessor, TMMLU. It also includes benchmark results from both closed-source models and 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. However, Traditional Chinese variants continue to underperform compared to major Simplified Chinese models.
      
      ```markdown
      Total number of tasks in the 'test' sets: 20160
      Total number of tasks in the 'validation' sets: 2247
      Total number of tasks in the 'train' sets: 335
      ```
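
      A minimal sketch (not part of the PR) of how these per-split totals could be reproduced; the Hugging Face dataset path "ikala/tmmluplus" and the one-config-per-subject layout are assumptions:

      ```python
      # Hypothetical reproduction of the split counts above; the dataset path
      # "ikala/tmmluplus" and the per-subject config layout are assumptions.
      from collections import Counter

      from datasets import get_dataset_config_names, load_dataset

      totals = Counter()
      for subject in get_dataset_config_names("ikala/tmmluplus"):  # one config per subject
          splits = load_dataset("ikala/tmmluplus", subject)
          for split_name, split_data in splits.items():
              totals[split_name] += len(split_data)

      for split_name, count in totals.items():
          print(f"Total number of tasks in the '{split_name}' sets: {count}")
      ```

      Once merged, the new tasks would presumably be run through the harness CLI (e.g. `lm_eval --tasks tmmluplus ...`), assuming the group name registered by these configs is `tmmluplus`.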
      
      * Remove print from __init__.py
      
      I had forgotten to remove a debug print from the code.
      
      * update: move TMMLU+ config generation program into default
      
      * fix: we should use the training set as the few-shot examples (see the sketch below)
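
      The two commits above hint at the shape of the task configs: one YAML file per subject, generated by a script, with few-shot examples drawn from the training split. A rough sketch of such a generator, assuming a `tmmluplus_<subject>` naming scheme, a shared `_default_template_yaml` base config, and the harness's `fewshot_split` key; none of these file names are confirmed by the PR itself:

      ```python
      # Hypothetical per-subject config generator; the file names and the
      # "_default_template_yaml" include are assumptions, not the PR's actual files.
      import yaml
      from datasets import get_dataset_config_names

      for subject in get_dataset_config_names("ikala/tmmluplus"):
          config = {
              "include": "_default_template_yaml",  # shared prompt/metric settings
              "task": f"tmmluplus_{subject}",
              "dataset_name": subject,
              "fewshot_split": "train",  # the fix above: take few-shot examples from 'train'
          }
          with open(f"tmmluplus_{subject}.yaml", "w", encoding="utf-8") as f:
              yaml.dump(config, f, default_flow_style=False, sort_keys=False)
      ```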
      
      * update: README for TMMLU+
      
      * update: small changes to the TMMLU+ README file
      
      * pre-commit run-through
      
      * Add README for TMMLU+ dataset
      
      * run precommit
      
      * trigger precommit again
      
      * trigger precommit again
      
      * isort is fussy
      
      * isort is fussy
      
      * format, again
      
      * oops
      
      * oops
      
      ---------
      Co-authored-by: lintang <lintang@eleuther.ai>
      Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>