1. 02 Jul, 2024 1 commit
  2. 16 Apr, 2024 1 commit
  3. 26 Mar, 2024 1 commit
    • Integration of NeMo models into LM Evaluation Harness library (#1598) · e9d429e1
      Sergio Perez authored
      * Integration of NeMo models into LM Evaluation Harness library
      
      * rename nemo model as nemo_lm
      
      * move nemo section in readme after hf section
      
      * use self.eot_token_id in get_until()
      
      * improve progress bar showing loglikelihood requests
      
      * data replication or tensor/pipeline replication working fine within one node
      
      * run pre-commit on modified files
      
      * check whether dependencies are installed
      
      * clarify usage of torchrun in README
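      The commit above registers the NeMo backend under the model name nemo_lm and documents a torchrun-based launch in the README. As a minimal sketch (not part of the commit), a single-GPU run through the harness's Python API might look like the following; the path= checkpoint argument and the task choice are assumptions, so consult the README section added by this PR for the exact arguments and the multi-GPU torchrun invocation.

          # Hedged sketch: evaluate a NeMo checkpoint via the registered "nemo_lm" model.
          # The "path=" key and the .nemo checkpoint location below are assumptions,
          # not taken verbatim from the commit.
          from lm_eval import simple_evaluate

          results = simple_evaluate(
              model="nemo_lm",
              model_args="path=/checkpoints/megatron_gpt.nemo",  # hypothetical checkpoint
              tasks=["hellaswag"],
              batch_size=8,
          )
          print(results["results"])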
  4. 26 Feb, 2024 1 commit
  5. 18 Feb, 2024 1 commit
  6. 06 Feb, 2024 1 commit
  7. 05 Feb, 2024 1 commit
  8. 26 Jan, 2024 1 commit
    • Add causalLM OpenVino models (#1290) · 97a67d27
      NoushNabi authored
      * added intel optimum
      
      * added intel optimum in readme
      
      * modified intel optimum
      
      * modified intel optimum
      
      * modified intel optimum
      
      * modified install optimum
      
      * modified path of IR file
      
      * added openvino_device
      
      * added openvino_device2
      
      * changed optimum-causal to openvino-causal
      
      * Update README.md
      
      * Update README.md
      
      * remove `lm_eval.base` import
      
      * update openvino-causal -> openvino ; pass device through super().__init__()
      
      * Update README.md
      
      * Add optimum to tests dependencies
      
      * apply pre-commit
      
      * fix so tests pass
      
      ---------
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
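      The commit trail above ends with the backend registered as openvino (renamed from openvino-causal) and the device passed through super().__init__(). A minimal sketch of using it, assuming a locally exported OpenVINO IR directory and the pretrained= argument (both assumptions, not taken from the PR):

          # Hedged sketch: evaluate an OpenVINO-exported causal LM through the
          # "openvino" model added in this PR. The IR directory path is hypothetical.
          from lm_eval import simple_evaluate

          results = simple_evaluate(
              model="openvino",
              model_args="pretrained=./ov_model_dir",  # hypothetical exported IR directory
              tasks=["lambada_openai"],
              device="cpu",
          )
          print(results["results"])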
  9. 22 Dec, 2023 1 commit
    • Upstream Mamba Support (`mamba_ssm`) (#1110) · 5503b274
      Hailey Schoelkopf authored
      * modularize HFLM code
      
      * pass through extra kwargs to AutoModel.from_pretrained call
      
      * remove explicit model_kwargs
      
      * rename gptq -> autogptq
      
      * fix tokenizer pad token errors
      
      * ensure model always respects device_map and autogptq's selected devices
      
      * add a _get_config helper fn
      
      * add mambaLMWrapper
      
      * add mamba extra
      
      * add mamba extra
      
      * fix conditional import
      
      * Fix botched merge commit
      
      * Remove beginning-of-file comment for consistency
      
      * Add docstring for mambaLM re: supported kwargs
      
      * Alphabetize extras
      
      * Update extras table
      
      * appease precommit
      
      * run precommit on mamba_lm
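      The commit above adds a mambaLMWrapper plus a mamba extra for the optional dependencies. A minimal sketch of exercising it, assuming that extra is installed and a CUDA device is available; the pretrained= value is illustrative, not taken from the commit:

          # Hedged sketch: evaluate a Mamba checkpoint through the "mamba_ssm" model
          # added in this PR (requires the optional mamba dependencies and a GPU).
          from lm_eval import simple_evaluate

          results = simple_evaluate(
              model="mamba_ssm",
              model_args="pretrained=state-spaces/mamba-130m",  # illustrative checkpoint
              tasks=["lambada_openai"],
              batch_size=64,
              device="cuda:0",
          )
          print(results["results"])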
  10. 27 Nov, 2023 1 commit
  11. 22 Nov, 2023 1 commit
  12. 21 Nov, 2023 1 commit
  13. 03 Nov, 2023 1 commit
  14. 04 Aug, 2023 3 commits
  15. 02 Aug, 2023 2 commits
  16. 27 Jun, 2023 1 commit
  17. 22 Jun, 2023 3 commits
  18. 21 Jun, 2023 1 commit
  19. 20 Jun, 2023 1 commit
  20. 12 Jun, 2023 1 commit
  21. 08 Jun, 2023 2 commits
  22. 07 Jun, 2023 1 commit
  23. 08 May, 2023 1 commit
  24. 24 Apr, 2023 2 commits
  25. 23 Apr, 2023 1 commit
  26. 19 Apr, 2023 1 commit
  27. 17 Jan, 2023 1 commit
  28. 04 Nov, 2022 1 commit
  29. 29 Apr, 2022 1 commit
  30. 27 Apr, 2022 1 commit
  31. 26 Apr, 2022 1 commit
  32. 11 Oct, 2021 1 commit
  33. 04 Feb, 2021 1 commit
    • Massive refactor · 778e0f91
      Leo Gao authored
      - Extract evaluator (still needs work to clean up)
      - Add tests for evaluator
      - Fix all the things that break on the new tests
      - Misc cleanup