subprocess llama.cpp server (#401)
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
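The notes above describe replacing in-process C bindings with a llama.cpp server run as a child process. A minimal sketch of that pattern, assuming a hypothetical `server` binary, model path, and flags (the actual runner code in this PR lives in llm/ggml_llama.go):

```go
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Cancel the context on interrupt or termination, so the
	// llama runner stops when the app stops.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	// Run the llm out of a temporary directory, per the commit notes.
	dir, err := os.MkdirTemp("", "llama")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// CommandContext ties the subprocess lifetime to ctx: when the
	// context is cancelled, the server process is killed.
	// The binary name, model path, and flags are illustrative only.
	cmd := exec.CommandContext(ctx, "./server", "--model", "model.bin", "--port", "8080")
	cmd.Dir = dir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Wait returns once the server exits, e.g. after ctx is cancelled.
	if err := cmd.Wait(); err != nil {
		log.Printf("runner exited: %v", err)
	}
}
```

The same mechanism plausibly backs the "use request context" item: an HTTP handler can pass its request's context into the call that talks to the runner, so a dropped client connection aborts generation.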
Showing 9 changed files
llm/ggml_llama.go          new file (0 → 100644)
llm/k_quants.c             deleted (100644 → 0)
llm/k_quants.h             deleted (100644 → 0)
llm/llama-util.h           deleted (100644 → 0)
llm/llama.cpp              deleted (100644 → 0)
llm/llama.cpp/generate.go  new file (0 → 100644); see the go:generate sketch after this list
llm/llama.go               deleted (100644 → 0)
llm/llama.h                deleted (100644 → 0)
llm/llama_darwin.go        deleted (100644 → 0)
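The new llm/llama.cpp/generate.go suggests the native libraries are now fetched and built via `go generate` rather than vendored as C sources. A hedged sketch of what such a file might contain; the package name, repository pin, cmake flags, and build target are all assumptions, not the PR's actual contents:

```go
// Package llama holds go:generate directives that fetch and build
// the llama.cpp server binary. Run via `go generate ./...`.
package llama

// The commands below are illustrative: llama.cpp does ship a cmake
// build with a server target, but the exact invocation here is assumed.

//go:generate git clone https://github.com/ggerganov/llama.cpp llama.cpp
//go:generate cmake -S llama.cpp -B llama.cpp/build
//go:generate cmake --build llama.cpp/build --target server
```

Running `go generate ./...` before `go build` would then produce the server binary that the Go code launches as a subprocess.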