Add cgo implementation for llama.cpp
Run server.cpp directly inside the Go runtime via cgo while retaining the existing Go LLM abstractions.
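A minimal sketch of what this approach looks like from the Go side, assuming C-linkage wrapper functions (llama_server_init, llama_server_completion, llama_server_free) exported from server.cpp; these names and the cgo flags are illustrative assumptions, not the binding surface actually introduced by this PR:

```go
// Package llm sketches how server.cpp could be driven in-process via cgo
// while keeping a Go-side LLM interface. The C symbols below are
// hypothetical wrappers that would need to be exported from server.cpp.
package llm

/*
#cgo CXXFLAGS: -std=c++11
#cgo LDFLAGS: -lstdc++
#include <stdlib.h>

// Hypothetical C-linkage wrappers around server.cpp internals.
extern int   llama_server_init(const char *model_path);
extern char *llama_server_completion(const char *prompt);
extern void  llama_server_free(char *result);
*/
import "C"

import (
	"errors"
	"unsafe"
)

// LLM is the Go-side abstraction retained on top of the embedded server.
type LLM interface {
	Predict(prompt string) (string, error)
}

// llamaCpp implements LLM by calling into the cgo wrappers.
type llamaCpp struct{}

// New loads a model through the embedded server and returns an LLM handle.
func New(modelPath string) (LLM, error) {
	cPath := C.CString(modelPath)
	defer C.free(unsafe.Pointer(cPath))
	if C.llama_server_init(cPath) != 0 {
		return nil, errors.New("failed to initialize llama.cpp server")
	}
	return &llamaCpp{}, nil
}

// Predict runs a completion entirely in-process: no child process,
// no HTTP hop to a separate server binary.
func (l *llamaCpp) Predict(prompt string) (string, error) {
	cPrompt := C.CString(prompt)
	defer C.free(unsafe.Pointer(cPrompt))

	cResult := C.llama_server_completion(cPrompt)
	if cResult == nil {
		return "", errors.New("completion failed")
	}
	defer C.llama_server_free(cResult)

	return C.GoString(cResult), nil
}
```

The design point is that the Go abstraction (the `LLM` interface) stays unchanged for callers; only the backing implementation moves from an external process to in-process cgo calls.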
New files in this change:
- server/llm_test.go (new file, mode 100644)
- server/llm_utils_test.go (new file, mode 100644)