OpenDAS / ollama · commit 85aeb428 (unverified)

Merge pull request #270 from jmorganca/update-llama-cpp

update llama.cpp

Authored by Michael Yang on Aug 03, 2023; committed by GitHub on Aug 03, 2023.
Parents: f0b365a4, c5bcf328
Showing 19 changed files, with 623 additions and 298 deletions (+623 / -298).
llama/ggml-alloc.c       +1    -1
llama/ggml-alloc.h       +1    -1
llama/ggml-cuda.cu       +606  -276
llama/ggml-cuda.h        +1    -1
llama/ggml-metal.h       +1    -1
llama/ggml-metal.m       +1    -1
llama/ggml-metal.metal   +1    -1
llama/ggml-mpi.c         +1    -1
llama/ggml-mpi.h         +1    -1
llama/ggml-opencl.cpp    +1    -1
llama/ggml-opencl.h      +1    -1
llama/ggml.c             +1    -1
llama/ggml.h             +1    -1
llama/k_quants.c         +1    -1
llama/k_quants.h         +1    -1
llama/llama-util.h       +1    -1
llama/llama.cpp          +1    -1
llama/llama.go           +0    -5
llama/llama.h            +1    -1
llama/ggml-alloc.c
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-alloc.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-cuda.cu
(This diff is collapsed; +606 / -276.)
llama/ggml-cuda.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-metal.h
 //go:build darwin
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-metal.m
 //go:build darwin
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-metal.metal
 //go:build darwin
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-mpi.c
 //go:build mpi
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-mpi.h
 //go:build mpi
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-opencl.cpp
 //go:build opencl
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml-opencl.h
 //go:build opencl
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml.c
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/ggml.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/k_quants.c
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/k_quants.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/llama-util.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/llama.cpp
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...
llama/llama.go
...
@@ -128,11 +128,6 @@ func New(model string, opts api.Options) (*LLM, error) {
 	C.llama_backend_init(C.bool(llm.UseNUMA))

-	// TODO: GQA == 8 suggests 70B model which doesn't support metal
-	if llm.NumGQA == 8 {
-		llm.NumGPU = 0
-	}
-
 	params := C.llama_context_default_params()
 	params.seed = C.uint(llm.Seed)
 	params.n_ctx = C.int(llm.NumCtx)
...
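The hunk above removes a temporary workaround: a grouped-query-attention (GQA) factor of 8 was taken to indicate a 70B-class model, which at the time was assumed not to run under Metal, so GPU offload was forced off. A minimal standalone sketch of that removed behavior is shown below; the llmOptions type and applyLegacyMetalGuard function are illustrative assumptions, not types from this repository.

package main

import "fmt"

// llmOptions is an illustrative stand-in for the fields the removed guard
// touched; the real fields live on ollama's LLM struct and are populated
// from api.Options.
type llmOptions struct {
	NumGQA int // grouped-query-attention factor reported for the model
	NumGPU int // number of layers to offload to the GPU (Metal on macOS)
}

// applyLegacyMetalGuard reproduces the removed workaround: NumGQA == 8 was
// treated as a 70B-class model, and GPU offload was disabled for it.
func applyLegacyMetalGuard(o *llmOptions) {
	if o.NumGQA == 8 {
		o.NumGPU = 0
	}
}

func main() {
	opts := llmOptions{NumGQA: 8, NumGPU: 1}
	applyLegacyMetalGuard(&opts)
	fmt.Println(opts.NumGPU) // prints 0: the old guard disabled GPU offload
}

With this commit, that guard is gone from New(), so GPU offload is no longer zeroed based on the GQA factor alone.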
llama/llama.h
 /**
- * llama.cpp - git c574bddb368424b5996cbee2ec45ec050967d404
+ * llama.cpp - git 8183159cf3def112f6d1fe94815fce70e1bffa12
  *
  * MIT License
  *
...