1. 05 May, 2025 2 commits
  2. 21 Sep, 2024 1 commit
  3. 12 Sep, 2024 1 commit
      Optimize container images for startup (#6547) · cd5c8f64
      Daniel Hiltgen authored
      * Optimize container images for startup
      
      This change adjusts how runner payloads are handled to support
      container builds where we keep them extracted in the filesystem.
      This makes it easier to optimize the cpu/cuda and cpu/rocm images
      for size, and should result in faster startup times for container images.
      
      * Refactor payload logic and add buildx support for faster builds
      
      * Move payloads around
      
      * Review comments
      
      * Converge to buildx based helper scripts
      
      * Use docker buildx action for release
  4. 22 Jul, 2024 1 commit
      Enable windows error dialog for subprocess startup · e12fff88
      Daniel Hiltgen authored
      Make sure that if something goes wrong spawning the process, the user gets
      enough information to try to self-correct, or at least file a bug
      with details so we can fix it. Once the process starts, we immediately
      change back to the recommended setting so the blocking dialog cannot appear.
      This ensures that if the model fails to load (OOM, unsupported model type,
      etc.) the process exits quickly and we can scan the stdout/stderr
      of the subprocess for the reason to report via the API.
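Toggling the Windows error mode is platform-specific, but the second half of the commit message, scanning the subprocess output for a failure reason to report, can be sketched portably. This is a rough illustration, not the actual runner code; the helper name `launchAndScan` is invented:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// launchAndScan runs a subprocess and, if it exits with an error,
// returns the last non-empty line of its combined stdout/stderr as
// the likely failure reason to surface via the API.
// (Hypothetical helper; the real implementation differs.)
func launchAndScan(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return "", nil
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	for i := len(lines) - 1; i >= 0; i-- {
		if l := strings.TrimSpace(lines[i]); l != "" {
			return l, err
		}
	}
	// Nothing useful was printed; fall back to the exec error itself.
	return err.Error(), err
}

func main() {
	// Simulate a runner that dies quickly with a diagnostic on stderr.
	reason, err := launchAndScan("sh", "-c", "echo 'error: out of memory' >&2; exit 1")
	fmt.Println(reason, err != nil)
}
```

Taking only the last non-empty line is a heuristic; a real scanner might match known error patterns (OOM, unsupported architecture) instead.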
  5. 23 Apr, 2024 1 commit
      Move nested payloads to installer and zip file on windows · 058f6cd2
      Daniel Hiltgen authored
      Now that the llm runner is an executable and not just a dll, more users are
      hitting security policy configurations on Windows that prevent writing to a
      directory and then executing binaries from that same location.
      This change removes the payloads from the main executable on Windows and instead
      packages them in the installer, discovering them relative to the executable's location.
      It also adds a new zip file for people who want to "roll their own" installation.
  6. 01 Apr, 2024 1 commit
      Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory leak and stability defects by
      isolating llama.cpp in a separate process that we can shut down when idle
      and gracefully restart if it has problems. It also serves as a first step
      toward running multiple copies to support multiple models concurrently.