> Please note that for certain Docker environments, additional configuration might be needed. If you encounter any connection issues, our detailed guide in the [Open WebUI Documentation](https://docs.openwebui.com/) is ready to assist you.
### Quick Start with Docker 🐳
> [!IMPORTANT]
> When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` flag in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
1. **If Ollama is on your computer**, use this command:
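A typical invocation, assuming the standard `ghcr.io/open-webui/open-webui:main` image and the default port mapping (verify against the documentation linked above):

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

The `--add-host` flag lets the container reach the Ollama instance running on your host machine.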
After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
2. **If you want to customize your build with additional build args**, use these commands:
> [!NOTE]
> If you only want to use Open WebUI with Ollama included or with CUDA acceleration, it's recommended to use our official images with the `:ollama` or `:cuda` tags.
> If you want a combination of both, or more customisation options such as a different embedding model and/or CUDA version, you need to build the image yourself following the instructions below.
**For the build:**
```bash
docker build -t open-webui .
```
**Optional build args (use them in the `docker build` command above if needed):**
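For the embedding-model option described next, a sketch of the build arg, assuming the Dockerfile exposes a `USE_EMBEDDING_MODEL` ARG (the ARG name is an assumption, not confirmed in this section):

```bash
--build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
```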
For the "intfloat/multilingual-e5-large" custom embedding model (the default is all-MiniLM-L6-v2). This only works with [sentence transformer models](https://huggingface.co/models?library=sentence-transformers); see the current [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.
```bash
--build-arg="USE_OLLAMA=true"
```
For including Ollama in the image.
```bash
--build-arg="USE_CUDA=true"
```
To use CUDA acceleration for the embedding and Whisper models.
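Putting these together, a sketch of a full build command combining the args above (pick only the ones you need):

```bash
docker build \
  --build-arg="USE_OLLAMA=true" \
  --build-arg="USE_CUDA=true" \
  -t open-webui .
```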
> [!NOTE]
> You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine to enable CUDA in Docker. This only works on Linux (use WSL on Windows!).