<div align="center">
 <img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models.

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
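
A typical CPU-only setup looks like this (a minimal sketch following the image's instructions on Docker Hub; see that page for GPU and other options):

```
# start the server in the background, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# run a model inside the running container
docker exec -it ollama ollama run llama3.2
```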

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

## Quickstart

To run and chat with [Llama 3.2](https://ollama.com/library/llama3.2):

```
ollama run llama3.2
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library').

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3.2          | 3B         | 2.0GB | `ollama run llama3.2`          |
| Llama 3.2          | 1B         | 1.3GB | `ollama run llama3.2:1b`       |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`          |
| Llama 3.1          | 70B        | 40GB  | `ollama run llama3.1:70b`      |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`     |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`         |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`            |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`        |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:

```
ollama pull llama3.2
```

Create a `Modelfile`:

```
FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama3.2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama3.2
```

### Copy a model

```
ollama cp llama3.2 my-model
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass the prompt as an argument

```
$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### Show model information

```
ollama show llama3.2
```

### List models on your computer

```
ollama list
```

### List which models are currently loaded

```
ollama ps
```

### Stop a model which is currently running

```
ollama stop llama3.2
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
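
For example, start the server from a terminal:

```
ollama serve
```

Then, in a separate terminal, you can check that it is reachable (the examples in this README assume the default address `http://localhost:11434`):

```
curl http://localhost:11434
```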

## Building

See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/development.md) for instructions on building from source.

### Running local builds
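
First, build the binary (a rough sketch assuming a working Go and C/C++ toolchain; the authoritative steps are in the developer guide and may differ by platform or version):

```
# generate the native backend code, then build the ollama binary
# (hypothetical minimal flow; see docs/development.md for the exact steps)
go generate ./...
go build .
```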

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama3.2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'
```
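
The response is streamed back as a series of JSON objects by default; to receive a single reply object instead, set the `stream` field to `false` (see the API documentation linked below for details):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```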

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [Open WebUI](https://github.com/open-webui/open-webui)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [Hollama](https://github.com/fmaclen/hollama)
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
- [LibreChat](https://github.com/danny-avila/LibreChat)
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Saddle](https://github.com/jikkuatwork/saddle)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [Dify.AI](https://github.com/langgenius/dify)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)
- [Msty](https://msty.app)
- [Chatbox](https://github.com/Bin-Huang/Chatbox)
- [WinForm Ollama Copilot](https://github.com/tgraupmann/WinForm_Ollama_Copilot)
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) with [Get Started Doc](https://docs.nextchat.dev/models/ollama)
- [Alpaca WebUI](https://github.com/mmo80/alpaca-webui)
- [OllamaGUI](https://github.com/enoch1118/ollamaGUI)
- [OpenAOE](https://github.com/InternLM/OpenAOE)
- [Odin Runes](https://github.com/leonid20000/OdinRunes)
- [LLM-X](https://github.com/mrdjohnson/llm-x) (Progressive Web App)
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Chat with Code Repository)
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- [StreamDeploy](https://github.com/StreamDeploy-DevRel/streamdeploy-llm-app-scaffold) (LLM Application Scaffold)
- [chat](https://github.com/swuecho/chat) (chat web app for teams)
- [Lobe Chat](https://github.com/lobehub/lobe-chat) with [Integrating Doc](https://lobehub.com/docs/self-hosting/examples/ollama)
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in Discord)
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
- [Sidellama](https://github.com/gyopak/sidellama) (browser-based LLM client)
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
- [Claude Dev](https://github.com/saoudrizwan/claude-dev) - VSCode extension for multi-file/whole-repo coding
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
- [Hexabot](https://github.com/hexastack/hexabot) (A conversational AI builder)
- [Reddit Rate](https://github.com/rapidarchitect/reddit_analyzer) (Search and Rate Reddit topics with a weighted summation)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [ooo](https://github.com/npahlfer/ooo)
- [shell-pilot](https://github.com/reid41/shell-pilot)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
- [ShellOracle](https://github.com/djcopley/ShellOracle)
- [tlm](https://github.com/yusufcanb/tlm)
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
- [gollama](https://github.com/sammcj/gollama)
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
- [Ollama Mixture of Experts (MOE) in 50 lines of code](https://github.com/rapidarchitect/ollama_moe)
- [vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor

### Apple Vision Pro
- [Enchanted](https://github.com/AugustDev/enchanted)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
- [Nix package](https://search.nixos.org/packages?channel=24.05&show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
- [Flox](https://flox.dev/blog/ollama-part-one)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
- [crewAI](https://github.com/crewAIInc/crewAI)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/ollama/) and [LlamaIndexTS](https://ts.llamaindex.ai/modules/llms/available_llms/ollama)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama-hpp for C++](https://github.com/jmont-dev/ollama-hpp)
- [Ollama4j for Java](https://github.com/ollama4j/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama for R - ollama-r](https://github.com/hauselin/ollama-r)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)
- [Ollama Connector for SAP ABAP](https://github.com/b-tocs/abap_btocs_ollama)
- [Testcontainers](https://testcontainers.com/modules/ollama/)
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
- [LlamaScript](https://github.com/Project-Llama/llamascript)
- [Gollm](https://docs.gollm.co/examples/ollama-example)
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)
- [Agents-Flex for Java](https://github.com/agents-flex/agents-flex) with [example](https://github.com/agents-flex/agents-flex/tree/main/agents-flex-llm/agents-flex-llm-ollama/src/test/java/com/agentsflex/llm/ollama)
- [Ollama for Swift](https://github.com/mattt/ollama-swift)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Cliobot](https://github.com/herval/cliobot) (Telegram bot with Ollama support)
- [Copilot for Obsidian plugin](https://github.com/logancyang/obsidian-copilot)
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
- [Plasmoid Ollama Control](https://github.com/imoize/plasmoid-ollamacontrol) (KDE Plasma extension that allows you to quickly manage/control Ollama model)
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in backend)
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) Chat/moderation bot written in Python. Uses Ollama to create personalities.
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install the Ollama client & models on any OS for apps that depend on the Ollama server)
- [vnc-lm](https://github.com/jk011ru/vnc-lm) (A containerized Discord bot with support for attachments and web links)
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
- [QodeAssist](https://github.com/Palm1r/QodeAssist) (AI-powered coding assistant plugin for Qt Creator)
- [Obsidian Quiz Generator plugin](https://github.com/ECuiDev/obsidian-quiz-generator)
- [TextCraft](https://github.com/suncloudsmoon/TextCraft) (Copilot in Word alternative using Ollama)

### Supported backends

- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.