# Open WebUI (Formerly Ollama WebUI) 👋

![GitHub stars](https://img.shields.io/github/stars/open-webui/open-webui?style=social)
![GitHub forks](https://img.shields.io/github/forks/open-webui/open-webui?style=social)
![GitHub watchers](https://img.shields.io/github/watchers/open-webui/open-webui?style=social)
![GitHub repo size](https://img.shields.io/github/repo-size/open-webui/open-webui)
![GitHub language count](https://img.shields.io/github/languages/count/open-webui/open-webui)
![GitHub top language](https://img.shields.io/github/languages/top/open-webui/open-webui)
![GitHub last commit](https://img.shields.io/github/last-commit/open-webui/open-webui?color=red)
![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2Follama-webui%2Follama-wbui&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)
[![Discord](https://img.shields.io/badge/Discord-Open_WebUI-blue?logo=discord&logoColor=white)](https://discord.gg/5rJgQTnV4s)
[![](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/tjbck)

Open WebUI is a self-hosted WebUI that puts powerful, AI-driven conversations at your fingertips. With its modular architecture, extensibility, and user-friendly interface, it is an ideal platform for anyone looking to unlock the full potential of language models. It can operate entirely offline and works with various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our [Open WebUI Documentation](https://docs.openwebui.com/).

![Open WebUI Demo](./demo.gif)

## Key Features ⭐

- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, then access them effortlessly using the `#` command in the prompt. The feature is still in its alpha phase, so occasional issues may arise as we actively refine it for optimal performance and reliability. **Revolutionize chat interactions with RAG support**.

- 🔍 **RAG Embedding Support**: Change the RAG embedding model directly in document settings, enhancing document processing. This feature supports Ollama and OpenAI models. **Take control of your document interactions**.

- 🌐 **Web Browsing Capability**: Seamlessly integrate websites into your chat experience using the `#` command followed by the URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions. **Surf the web within your chat**.

- 🤖 **Multiple Model Support**: Seamlessly switch between different chat models for diverse interactions. **Explore multiple perspectives in a single chat**.

- 🧩 **Model Builder**: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through [Open WebUI Community](https://openwebui.com/) integration. **Design your ideal chat model**.

- 👥 **'@' Model Integration**: Harness the collective intelligence of multiple models in a single chat by seamlessly switching to any accessible local or external model mid-conversation; use the `@` command to specify the model by name. **Unlock the power of multiple models**.

- 🎨 **Image Generation Integration**: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API (local), ComfyUI (local), and DALL-E, enriching your chat experience with dynamic visual content. **Bring your chats to life with images**.

- 🤝 **OpenAI API Integration**: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the API Base URL to link with **LMStudio, Mistral, OpenRouter, and more**. **Tap into the power of OpenAI**.

- 🔄 **Multi-Modal Support**: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLaVA). **Experience the future of chat interactions**.

- ⚙️ **Fine-Tuned Control with Advanced Parameters**: Gain a deeper level of control by adjusting parameters such as temperature and defining your system prompts to tailor the conversation to your specific preferences and needs. **Tailor your conversations to your needs**.

- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors! **Chat in your native tongue**.

- ↕️ **Bi-Directional Chat Support**: Easily switch between left-to-right and right-to-left chat directions to accommodate various language preferences. **Accommodate diverse language preferences**.

- 🌟 **Continuous Updates**: We are committed to improving Open WebUI with regular updates and new features. **Enjoy the latest innovations in chat technology**.

<details>
  <summary>...and many more features! ⚡️</summary>

<details>
  <summary>🌈 User Experience</summary>

- 🖥️ **Intuitive Interface**: Our chat interface takes inspiration from ChatGPT, ensuring a user-friendly experience.

- 📱 **Responsive Design**: Enjoy a seamless experience on both desktop and mobile devices.

- ⚡ **Swift Responsiveness**: Enjoy fast and responsive performance.

- 🚀 **Effortless Setup**: Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience.

- 🌈 **Theme Customization**: Choose from a variety of themes to personalize your Open WebUI experience.

- 💻 **Code Syntax Highlighting**: Enjoy enhanced code readability with our syntax highlighting feature.

- ✒️🔢 **Full Markdown and LaTeX Support**: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

</details>

<details>
  <summary>💬 Conversations</summary>

- 📜 **Prompt Preset Support**: Instantly access preset prompts using the `/` command in the chat input. Load predefined conversation starters effortlessly and expedite your interactions. Effortlessly import prompts through [Open WebUI Community](https://openwebui.com/) integration.

- 👍👎 **RLHF Annotation**: Empower your messages by rating them with thumbs up and thumbs down, followed by the option to provide textual feedback, facilitating the creation of datasets for Reinforcement Learning from Human Feedback (RLHF). Utilize your messages to train or fine-tune models, all while ensuring the confidentiality of locally saved data.

- 🏷️ **Conversation Tagging**: Effortlessly categorize and locate specific chats for quick reference and streamlined data collection.

- ⬆️ **GGUF File Model Creation**: Effortlessly create Ollama models by uploading GGUF files directly from the web UI, with options to upload from your machine or download GGUF files from Hugging Face.

- ⚙️ **Many Models Conversations**: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

- 🧠 **Experimental Memory Feature**: Manually input personal information you want LLMs to remember via Settings > Personalization > Memory.

- 📜 **Citations in RAG Feature**: Easily track the context fed to the LLM with added citations in the RAG feature.

- 📹 **YouTube RAG Pipeline**: Dedicated RAG pipeline for YouTube videos, enabling direct interaction with video transcripts.

</details>

<details>
  <summary>💻 Model Management</summary>

- 📥🗑️ **Download/Delete Models**: Easily download or remove models directly from the web UI.

- 🔄 **Update All Ollama Models**: Easily update locally installed models all at once with a convenient button, streamlining model management.

</details>

<details>
  <summary>👥 Collaboration</summary>

- 🗨️ **Local Chat Sharing**: Generate and share chat links seamlessly between users, enhancing collaboration and communication.

</details>

<details>
  <summary>📚 History and Archive</summary>

- 🔄 **Regeneration History Access**: Easily revisit and explore your entire regeneration history.

- 📜 **Chat History**: Effortlessly access and manage your conversation history.

- 📬 **Archive Chats**: Effortlessly store away completed conversations for future reference, keeping your chat interface tidy and clutter-free while still allowing easy retrieval.

- 📤📥 **Import/Export Chat History**: Seamlessly move your chat data in and out of the platform.

</details>

<details>
  <summary>🎙️ Accessibility</summary>

- 🗣️ **Voice Input Support**: Engage with your model through voice interactions; enjoy the convenience of talking to your model directly. Additionally, explore the option for sending voice input automatically after 3 seconds of silence for a streamlined experience.

- 🔊 **Configurable Text-to-Speech Endpoint**: Customize your Text-to-Speech experience with configurable OpenAI endpoints.

</details>

<details>
  <summary>🐍 Code Execution</summary>

- 🐍 **Python Code Execution**: Execute Python code locally in the browser with libraries like 'requests', 'beautifulsoup4', 'numpy', 'pandas', 'seaborn', 'matplotlib', 'scikit-learn', 'scipy', 'regex'.

- 🚀 **Flexible, UI-Agnostic OpenAI-Compatible Pipelines (WIP)**: Seamlessly integrate and customize pipelines for efficient data processing and model training, ensuring ultimate flexibility and scalability.

</details>

<details>
  <summary>🔓 Integration and Security</summary>

- **Multiple OpenAI-Compatible API Support**: Seamlessly integrate and customize various OpenAI-compatible APIs, enhancing the versatility of your chat interactions.

- 🔑 **Simplified API Key Management**: Easily generate and manage secret keys to leverage Open WebUI with OpenAI libraries, streamlining integration and development.

- 🌐🔗 **External Ollama Server Connectivity**: Seamlessly link to an external Ollama server hosted on a different address by configuring the `OLLAMA_BASE_URL` environment variable.

- 🔀 **Multiple Ollama Instance Load Balancing**: Effortlessly distribute chat requests across multiple Ollama instances for enhanced performance and reliability.
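
  A minimal sketch of how this might look, assuming load balancing is configured through an `OLLAMA_BASE_URLS` environment variable holding semicolon-separated server URLs (the variable name and separator are assumptions; confirm them in the [Open WebUI Documentation](https://docs.openwebui.com/)):

  ```bash
  # Hypothetical example: one Open WebUI container balancing across two Ollama servers.
  # OLLAMA_BASE_URLS and the ';' separator are assumptions; verify against the docs.
  docker run -d -p 3000:8080 \
    -e OLLAMA_BASE_URLS="http://ollama-one:11434;http://ollama-two:11434" \
    -v open-webui:/app/backend/data --name open-webui --restart always \
    ghcr.io/open-webui/open-webui:main
  ```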

</details>

<details>
  <summary>👑 Administration</summary>

- 👑 **Super Admin Assignment**: The first account to sign up is automatically assigned the super admin role, which cannot be changed or modified by other admins.

- 🛡️ **Granular User Permissions**: Restrict user actions and access with customizable role-based permissions, ensuring that only authorized individuals can perform specific tasks.

- 👥 **Multi-User Management**: Seamlessly manage multiple users through our intuitive admin panel, streamlining user administration and simplifying user lifecycle management.

- 🔧 **Admin Panel**: Streamlined user management with options to add users directly or in bulk via CSV import, making user onboarding and management efficient.

- 🔗 **Webhook Integration**: Subscribe to new user sign-up events via webhook (compatible with Discord, Google Chat and Microsoft Teams), providing real-time notifications and automation capabilities.

- 🛡️ **Model Whitelisting**: Enhance security and access control by allowing admins to whitelist models for users with the `user` role, ensuring that only authorized models can be accessed.

- 📧 **Trusted Email Authentication**: Authenticate using a trusted email header, adding an extra layer of security and authentication to protect your WebUI.

- 🔐 **Role-Based Access Control (RBAC)**: Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

- 🔒 **Backend Reverse Proxy Support**: Bolster security through direct communication between the Open WebUI backend and Ollama. This key feature eliminates the need to expose Ollama over the LAN. Requests made to the `/ollama/api` route from the web UI are seamlessly redirected to Ollama from the backend, enhancing overall system security.

- 🔓 **Optional Authentication**: Enjoy the flexibility to disable authentication by setting `WEBUI_AUTH` to `False`, ideal for fresh installations without existing users.
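
  For example, a fresh single-user installation with authentication turned off might look like this (a minimal sketch; the run flags mirror the Quick Start commands below, and `WEBUI_AUTH` is the variable named above):

  ```bash
  # Disable login entirely; only sensible for a fresh, private installation.
  docker run -d -p 3000:8080 -e WEBUI_AUTH=False \
    -v open-webui:/app/backend/data --name open-webui --restart always \
    ghcr.io/open-webui/open-webui:main
  ```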

</details>

</details>

## 🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, [Open WebUI Community](https://openwebui.com/), where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

## How to Install 🚀

> [!NOTE]  
> Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on [Open WebUI Documentation](https://docs.openwebui.com/) is ready to assist you.

### Quick Start with Docker 🐳

> [!WARNING]
> When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` flag in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.

> [!TIP]  
> If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your Linux/WSL system.

### Installation with Default Configuration

- **If Ollama is on your computer**, use this command:

  ```bash
  docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```

- **If Ollama is on a Different Server**, use this command:

  To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

  ```bash
  docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```

- **To run Open WebUI with Nvidia GPU support**, use this command:

  ```bash
  docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
  ```

### Installation for OpenAI API Usage Only

- **If you're only using the OpenAI API**, use this command:

  ```bash
  docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```
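
  To point Open WebUI at another OpenAI-compatible provider instead (e.g. LMStudio, Mistral, or OpenRouter, as mentioned under Key Features), here is a hedged sketch, assuming the base URL is configured via an `OPENAI_API_BASE_URL` environment variable (the variable name is an assumption; confirm it in the [Open WebUI Documentation](https://docs.openwebui.com/)):

  ```bash
  # Hypothetical example: OPENAI_API_BASE_URL is assumed; the URL below is a placeholder.
  docker run -d -p 3000:8080 \
    -e OPENAI_API_BASE_URL=https://openrouter.ai/api/v1 \
    -e OPENAI_API_KEY=your_secret_key \
    -v open-webui:/app/backend/data --name open-webui --restart always \
    ghcr.io/open-webui/open-webui:main
  ```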

### Installing Open WebUI with Bundled Ollama Support

This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

- **With GPU Support**:
  Utilize GPU resources by running the following command:

  ```bash
  docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
  ```

- **For CPU Only**:
  If you're not using a GPU, use this command instead:

  ```bash
  docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
  ```

Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄

### Other Installation Methods

We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
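
For example, a Docker Compose setup can be as simple as the following sketch, assuming you have cloned the repository and it ships a `docker-compose.yaml` at its root (check the repository and documentation for the current layout):

```bash
# Clone the repository and start the stack defined in its docker-compose.yaml.
git clone https://github.com/open-webui/open-webui.git
cd open-webui
docker compose up -d
```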

### Troubleshooting

Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).

#### Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the `--network=host` flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, so the link becomes `http://localhost:8080`.

**Example Docker Command**:

```bash
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

### Keeping Your Docker Installation Up-to-Date

If you want to update your local Docker installation to the latest version, you can do so with [Watchtower](https://containrrr.dev/watchtower/):

```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```

In the last part of the command, replace `open-webui` with your container name if it is different.
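
Alternatively, you can update manually by pulling the newer image and recreating the container with the same flags you used to start it (a sketch based on the default Quick Start command above):

```bash
# Pull the latest image, remove the old container, and recreate it.
# The named volume open-webui preserves your data across the recreation.
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```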

### Moving from Ollama WebUI to Open WebUI

Check our Migration Guide available in our [Open WebUI Documentation](https://docs.openwebui.com/migration/).

## What's Next? 🌟

Discover upcoming features on our roadmap in the [Open WebUI Documentation](https://docs.openwebui.com/roadmap/).

## Supporters ✨

A big shoutout to our amazing supporters who are helping to make this project possible! 🙏

### Platinum Sponsors 🤍

- We're looking for Sponsors!

### Acknowledgments

Special thanks to [Prof. Lawrence Kim](https://www.lhkim.com/) and [Prof. Nick Vincent](https://www.nickmvincent.com/) for their invaluable support and guidance in shaping this project into a research endeavor. We're grateful for your mentorship throughout the journey! 🙌

## License 📜

This project is licensed under the [MIT License](LICENSE) - see the [LICENSE](LICENSE) file for details. 📄

## Support 💬

If you have any questions, suggestions, or need assistance, please open an issue or join our
[Open WebUI Discord community](https://discord.gg/5rJgQTnV4s) to connect with us! 🤝

## Star History

<a href="https://star-history.com/#open-webui/open-webui&Date">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=open-webui/open-webui&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=open-webui/open-webui&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=open-webui/open-webui&type=Date" />
  </picture>
</a>

---

Created by [Timothy J. Baek](https://github.com/tjbck) - Let's make Open WebUI even more amazing together! 💪