- 31 Aug, 2023 (1 commit)
  - Jeffrey Morgan authored
- 30 Aug, 2023 (1 commit)
  - Bruce MacDonald authored:
    * remove c code
    * pack llama.cpp
    * use request context for llama_cpp
    * let llama_cpp decide the number of threads to use
    * stop llama runner when app stops
    * remove sample count and duration metrics
    * use go generate to get libraries
    * tmp dir for running llm
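This commit reworks how the Go server drives llama.cpp: generation is tied to the request context, llama.cpp picks its own thread count, the runner stops when the app stops, and `go generate` fetches the native libraries. The sketch below is not the actual ollama code; it is a minimal, assumed illustration of the "use request context for llama_cpp" item, with made-up names (`llamaRunner`, `Predict`) showing how cancelling the HTTP request's context stops token generation.

```go
// Minimal sketch, not the actual ollama implementation. It illustrates passing
// the HTTP request's context into the runner so generation stops when the
// client disconnects or the server shuts down. llamaRunner and Predict are
// made-up names for whatever drives the llama.cpp process or bindings.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

type llamaRunner struct{}

// Predict emits tokens on out until it finishes or ctx is cancelled.
func (r *llamaRunner) Predict(ctx context.Context, prompt string, out chan<- string) error {
	defer close(out)
	for i := 0; i < 32; i++ { // placeholder for the real decode loop
		select {
		case <-ctx.Done():
			// Client went away or the app is stopping: stop the runner
			// instead of letting it keep generating in the background.
			return ctx.Err()
		case out <- fmt.Sprintf("token-%d ", i):
			time.Sleep(50 * time.Millisecond) // simulate decode latency
		}
	}
	return nil
}

func main() {
	runner := &llamaRunner{}
	http.HandleFunc("/generate", func(w http.ResponseWriter, req *http.Request) {
		tokens := make(chan string)
		// req.Context() is cancelled on client disconnect, so cancellation
		// propagates straight into the runner.
		go func() {
			if err := runner.Predict(req.Context(), "hello", tokens); err != nil {
				log.Printf("predict stopped: %v", err)
			}
		}()
		for tok := range tokens {
			fmt.Fprint(w, tok)
			if f, ok := w.(http.Flusher); ok {
				f.Flush()
			}
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Tying long-running generation to the request context is the idiomatic Go way to stop work when the caller is gone, and it composes with server shutdown for the "stop llama runner when app stops" item.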
- 18 Aug, 2023 (1 commit)
  - Jeffrey Morgan authored
- 10 Aug, 2023 (2 commits)
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 02 Aug, 2023 (1 commit)
  - Jeffrey Morgan authored
- 28 Jul, 2023 (3 commits)
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 27 Jul, 2023 (2 commits)
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 26 Jul, 2023 (3 commits)
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 25 Jul, 2023 (1 commit)
  - Jeffrey Morgan authored
- 21 Jul, 2023 (7 commits)
  - Eva Ho authored
  - Eva Ho authored
  - Eva Ho authored
  - hoyyeva authored
    Co-authored-by: Jeffrey Morgan <251292+jmorganca@users.noreply.github.com>
  - Eva Ho authored
  - Eva Ho authored
  - Eva Ho authored
- 19 Jul, 2023 (2 commits)
  - Eva Ho authored
  - Jeffrey Morgan authored
- 18 Jul, 2023 (3 commits)
  - Jeffrey Morgan authored
  - Eva Ho authored
  - Eva Ho authored
- 17 Jul, 2023 (10 commits)
  - Eva Ho authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 14 Jul, 2023 (2 commits)
  - Jeffrey Morgan authored
  - hoyyeva authored
- 11 Jul, 2023 (1 commit)
  - Jeffrey Morgan authored