<div align="center">
 <img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models.

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows preview

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
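
For example, a minimal CPU-only setup can look like the sketch below (the volume and container names are arbitrary; GPU support requires additional flags described on the Docker Hub page):

```
# start the server in a container, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# run a model inside the running container
docker exec -it ollama ollama run llama3.1
```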

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)
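
Both clients are published to the standard package registries; installation is typically a one-liner (package names as published by these projects):

```
pip install ollama   # Python library
npm install ollama   # JavaScript library
```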

## Quickstart

To run and chat with [Llama 3.1](https://ollama.com/library/llama3.1):

```
ollama run llama3.1
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library')

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`          |
| Llama 3.1          | 70B        | 40GB  | `ollama run llama3.1:70b`      |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`     |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`         |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`            |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`        |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.1` model:

```
ollama pull llama3.1
```

Create a `Modelfile`:

```
FROM llama3.1

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama3.1
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama3.1
```

### Copy a model

```
ollama cp llama3.1 my-model
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass the prompt as an argument

```
$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### Show model information

```
ollama show llama3.1
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
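
For example (a minimal sketch; by default the server listens on port 11434, the same port used by the REST API examples below):

```
# start the server in one terminal
ollama serve

# then use the regular CLI (or the REST API) from another terminal
ollama list
```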

## Building

See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/development.md)

### Running local builds
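
Building generally involves a `go generate` step (which compiles the bundled llama.cpp code) followed by a regular Go build; the prerequisites and platform-specific details are in the developer guide, so treat the commands below as a rough sketch rather than the authoritative steps:

```
# from the repository root (see docs/development.md for the authoritative steps)
go generate ./...
go build .
```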

After building, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama3.1
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```
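
By default the response is streamed back as a series of JSON objects; to get a single reply instead, pass `"stream": false` (the `stream` option is covered in the API documentation linked below):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```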

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [Open WebUI](https://github.com/open-webui/open-webui)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [Hollama](https://github.com/fmaclen/hollama)
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
- [LibreChat](https://github.com/danny-avila/LibreChat)
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Saddle](https://github.com/jikkuatwork/saddle)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [Dify.AI](https://github.com/langgenius/dify)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)
- [Msty](https://msty.app)
- [Chatbox](https://github.com/Bin-Huang/Chatbox)
- [WinForm Ollama Copilot](https://github.com/tgraupmann/WinForm_Ollama_Copilot)
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) with [Get Started Doc](https://docs.nextchat.dev/models/ollama)
- [Alpaca WebUI](https://github.com/mmo80/alpaca-webui)
- [OllamaGUI](https://github.com/enoch1118/ollamaGUI)
- [OpenAOE](https://github.com/InternLM/OpenAOE)
- [Odin Runes](https://github.com/leonid20000/OdinRunes)
- [LLM-X](https://github.com/mrdjohnson/llm-x) (Progressive Web App)
- [AnythingLLM (Docker + macOS/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Chat with Code Repository)
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- [StreamDeploy](https://github.com/StreamDeploy-DevRel/streamdeploy-llm-app-scaffold) (LLM Application Scaffold)
- [chat](https://github.com/swuecho/chat) (chat web app for teams)
- [Lobe Chat](https://github.com/lobehub/lobe-chat) with [Integrating Doc](https://lobehub.com/docs/self-hosting/examples/ollama)
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
- [Sidellama](https://github.com/gyopak/sidellama) (browser-based LLM client)
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [ooo](https://github.com/npahlfer/ooo)
- [shell-pilot](https://github.com/reid41/shell-pilot)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
- [ShellOracle](https://github.com/djcopley/ShellOracle)
- [tlm](https://github.com/yusufcanb/tlm)
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
- [gollama](https://github.com/sammcj/gollama)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama-hpp for C++](https://github.com/jmont-dev/ollama-hpp)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama for R - ollama-r](https://github.com/hauselin/ollama-r)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)
- [Ollama Connector for SAP ABAP](https://github.com/b-tocs/abap_btocs_ollama)
- [Testcontainers](https://testcontainers.com/modules/ollama/)
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
- [LlamaScript](https://github.com/Project-Llama/llamascript)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Cliobot](https://github.com/herval/cliobot) (Telegram bot with Ollama support)
- [Copilot for Obsidian plugin](https://github.com/logancyang/obsidian-copilot)
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in backend)
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) (Chat/moderation bot written in Python; uses Ollama to create personalities)
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install the Ollama client and models on any OS for apps that depend on the Ollama server)

### Supported backends

- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.