"lm_eval/tasks/careqa/careqa_open_perplexity.yaml" did not exist on "79b972d6d67cf6f4647debe9d22d9c2735fffda8"
  01 Apr, 2024 (1 commit)
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by
      allowing us to isolate llama.cpp in a separate process, shut it down
      when idle, and restart it gracefully if it runs into problems. It also
      serves as a first step toward running multiple copies to support
      multiple models concurrently.
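
      For context, here is a minimal sketch in Go (ollama's implementation
      language) of the subprocess pattern the commit message describes: spawn
      the llama.cpp server as a child process, kill it after an idle timeout
      so the OS reclaims any leaked memory, and respawn it on the next request
      if it exited. The binary name, flags, and timeouts below are hypothetical
      illustrations, not ollama's actual code.

```go
package main

import (
	"log"
	"os/exec"
	"sync"
	"time"
)

type llamaRunner struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	running  bool
	lastUsed time.Time
}

// ensureRunning starts the server subprocess if it is not running,
// which also covers restarting after a crash.
func (r *llamaRunner) ensureRunning() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.running {
		return nil
	}
	// Hypothetical binary and flags, for illustration only.
	cmd := exec.Command("./llama-server", "--port", "11435")
	if err := cmd.Start(); err != nil {
		return err
	}
	r.cmd = cmd
	r.running = true
	r.lastUsed = time.Now()
	// Reap the process when it exits; the next request restarts it.
	go func() {
		err := cmd.Wait()
		r.mu.Lock()
		r.running = false
		r.mu.Unlock()
		if err != nil {
			log.Printf("llama subprocess exited: %v", err)
		}
	}()
	return nil
}

// touch marks the runner as recently used so the idle reaper skips it.
func (r *llamaRunner) touch() {
	r.mu.Lock()
	r.lastUsed = time.Now()
	r.mu.Unlock()
}

// reapWhenIdle kills the subprocess once it has been idle long enough;
// the OS then reclaims any memory llama.cpp leaked while it ran.
func (r *llamaRunner) reapWhenIdle(idle time.Duration) {
	for range time.Tick(30 * time.Second) {
		r.mu.Lock()
		if r.running && time.Since(r.lastUsed) > idle {
			log.Print("idle timeout, stopping llama subprocess")
			_ = r.cmd.Process.Kill()
		}
		r.mu.Unlock()
	}
}

func main() {
	r := &llamaRunner{}
	go r.reapWhenIdle(5 * time.Minute)
	// A request handler would call ensureRunning then touch before
	// forwarding work to the subprocess.
	if err := r.ensureRunning(); err != nil {
		log.Fatal(err)
	}
	r.touch()
	select {} // a real server would serve requests here
}
```

      Keeping the runner state behind a mutex lets the idle reaper and the
      request path share one supervisor safely: a crash flips running to
      false, so the next ensureRunning call respawns the server, which is
      the graceful-restart behavior the commit describes.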