# llama.cpp/example/main-cmake-pkg

This program builds [llama-cli](../main) using a relocatable CMake package. It serves as an example of using the `find_package()` CMake command to conveniently include [llama.cpp](https://github.com/ggerganov/llama.cpp) in projects which live outside of the source tree.
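
For orientation, a minimal consumer `CMakeLists.txt` might look like the sketch below. This is only a sketch, not the exact file shipped with this example: the `MyLlamaApp` project name and `my-llama-app.cpp` source are placeholders, and it assumes the installed package exports an imported `llama` target for `find_package(Llama)` to provide.

```cmake
cmake_minimum_required(VERSION 3.12)
project(MyLlamaApp CXX)                         # placeholder project name

# Locate the installed llama.cpp package; see the CMAKE_PREFIX_PATH hint
# used in the build commands later in this README.
find_package(Llama REQUIRED)

add_executable(my-llama-app my-llama-app.cpp)   # placeholder source file

# Link against the imported target provided by the package (assumed name).
target_link_libraries(my-llama-app PRIVATE llama)
```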

## Building

Because this example is "outside of the source tree", it is important to first build/install llama.cpp using CMake. An example is provided below, but please see the [llama.cpp build instructions](../..) for full details.

### Considerations

When hardware acceleration libraries are used (CUDA, Metal, etc.), CMake must be able to locate the associated CMake package.
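
As a rough illustration (not part of this example's actual files), one way to help CMake find such a package is to extend `CMAKE_PREFIX_PATH` before calling `find_package()`; the CUDA toolkit path below is hypothetical, and the same hint can instead be passed on the configure command line with `-DCMAKE_PREFIX_PATH`.

```cmake
# Hypothetical sketch: if llama.cpp was built with CUDA, the consuming project's
# configure step may also need to locate the CUDA toolkit. The path below is an
# example only; adjust it to your installation or pass it via -DCMAKE_PREFIX_PATH.
list(APPEND CMAKE_PREFIX_PATH "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4")

find_package(Llama REQUIRED)
```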

### Build llama.cpp and install to C:\LlamaCPP directory

```cmd
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
cmake --install build --prefix C:/LlamaCPP
```

### Build llama-cli-cmake-pkg

```cmd
cd examples\main-cmake-pkg
cmake -B build -DBUILD_SHARED_LIBS=OFF -DCMAKE_PREFIX_PATH="C:/LlamaCPP/lib/cmake/Llama" -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
cmake --install build --prefix C:/MyLlamaApp
```