- 19 Dec, 2023 1 commit
  - Daniel Hiltgen authored: Run the server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions.
- 15 Dec, 2023 1 commit
  - Patrick Devine authored
- 05 Dec, 2023 1 commit
  - Michael Yang authored
- 14 Nov, 2023 1 commit
  - Michael Yang authored
- 01 Nov, 2023 1 commit
  - Michael Yang authored
- 25 Oct, 2023 2 commits
  - Patrick Devine authored
  - Ajay Kemparaj authored
- 16 Oct, 2023 1 commit
  - Bruce MacDonald authored
- 06 Oct, 2023 1 commit
  - Michael Yang authored
- 22 Sep, 2023 1 commit
  - Patrick Devine authored
- 05 Sep, 2023 1 commit
  - Michael Yang authored
- 30 Aug, 2023 1 commit
  - Bruce MacDonald authored:
    * remove c code
    * pack llama.cpp
    * use request context for llama_cpp
    * let llama_cpp decide the number of threads to use
    * stop llama runner when app stops
    * remove sample count and duration metrics
    * use go generate to get libraries
    * tmp dir for running llm
- 10 Aug, 2023 1 commit
  - Michael Yang authored
- 08 Aug, 2023 1 commit
  - Bruce MacDonald authored
- 01 Aug, 2023 2 commits
  - Bruce MacDonald authored: read runner options from map to see what was specified explicitly and overwrite zero values
  - Bruce MacDonald authored
- 22 Jul, 2023 1 commit
  - Michael Yang authored
- 20 Jul, 2023 1 commit
  - Patrick Devine authored
- 19 Jul, 2023 1 commit
  - Michael Yang authored
- 18 Jul, 2023 1 commit
  - Patrick Devine authored
- 17 Jul, 2023 1 commit
  - Michael Yang authored
- 11 Jul, 2023 2 commits
  - Michael Yang authored
  - Michael Yang authored
- 07 Jul, 2023 2 commits
  - Michael Yang authored
  - Michael Yang authored
- 06 Jul, 2023 5 commits
  - Bruce MacDonald authored
  - Michael Yang authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored (Co-authored-by: Patrick Devine <pdevine@sonic.net>)