Commit 7313e424 authored by Timothy J. Baek's avatar Timothy J. Baek

fix: readme.md formatting

parent 87177e5b
### Installing Both Ollama and Ollama Web UI Using Kustomize
For a CPU-only pod:
```bash
kubectl apply -f ./kubernetes/manifest/base
```
For a GPU-enabled pod:
```bash
kubectl apply -k ./kubernetes/manifest
```
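Once the manifests are applied, you can confirm that the pods have come up. This is only a generic check; the namespace and resource names are defined by the manifests, so adjust the filter if yours differ:
```bash
# List pods in all namespaces and filter for the Ollama / Web UI workloads.
kubectl get pods --all-namespaces | grep -i ollama
```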
### Installing Both Ollama and Ollama Web UI Using Helm
Package the Helm chart first:
```bash
helm package ./kubernetes/helm/
```
For a CPU-only pod:
```bash
helm install ollama-webui ./ollama-webui-*.tgz
```
For a GPU-enabled pod:
```bash
helm install ollama-webui ./ollama-webui-*.tgz --set ollama.resources.limits.nvidia.com/gpu="1"
```
Check the `kubernetes/helm/values.yaml` file to see which parameters are available for customization.
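For example, you can print the chart's default values and install with your own overrides. `helm show values` and the `-f` flag are standard Helm features; `custom-values.yaml` is a hypothetical file containing only the keys you want to change:
```bash
# Print the chart's default values to see which keys can be overridden.
helm show values ./ollama-webui-*.tgz

# Install with your own overrides from a custom values file (hypothetical name).
helm install ollama-webui ./ollama-webui-*.tgz -f custom-values.yaml
```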
@@ -79,113 +79,113 @@ Don't forget to explore our sibling project, [OllamaHub](https://ollamahub.com/)
- **Privacy and Data Security:** We prioritize your privacy and data security above all. Please be reassured that all data entered into the Ollama Web UI is stored locally on your device. Our system is designed to be privacy-first, ensuring that no external requests are made, and your data does not leave your local environment. We are committed to maintaining the highest standards of data privacy and security, ensuring that your information remains confidential and under your control.
### Installing Ollama Web UI Only
#### Prerequisites
Make sure you have the latest version of Ollama installed before proceeding with the installation. You can find the latest version of Ollama at [https://ollama.ai/](https://ollama.ai/).
##### Checking Ollama
After installing Ollama, verify that Ollama is running by accessing the following link in your web browser: [http://127.0.0.1:11434/](http://127.0.0.1:11434/). Note that the port number may differ based on your system configuration.
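If you prefer the command line, you can also check from a terminal. This is a minimal sketch; adjust the port if your Ollama instance listens elsewhere:
```bash
# A running Ollama instance answers on its root endpoint.
curl http://127.0.0.1:11434/
```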
#### Using Docker 🐳
**Important:** When using Docker to install Ollama Web UI, make sure to include the `-v ollama-webui:/app/backend/data` flag in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
If Ollama is hosted on your local machine and accessible at [http://127.0.0.1:11434/](http://127.0.0.1:11434/), run the following command:
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
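The `-v ollama-webui:/app/backend/data` flag creates a named Docker volume on first run. You can confirm it exists with a standard volume command:
```bash
# Show details of the named volume that holds the Web UI data.
docker volume inspect ollama-webui
```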
Alternatively, if you prefer to build the container yourself, use the following command:
```bash
docker build -t ollama-webui .
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ollama-webui
```
Your Ollama Web UI should now be hosted at [http://localhost:3000](http://localhost:3000) and accessible over LAN (or Network). Enjoy! 😄
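If the page doesn't load, the container's status and logs are the first things to check. These are standard Docker commands; the container name comes from the `--name ollama-webui` flag used above:
```bash
# Confirm the container is running and see which ports are published.
docker ps --filter name=ollama-webui

# Follow the container logs to spot startup errors.
docker logs -f ollama-webui
```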
#### Accessing External Ollama on a Different Server
Change the `OLLAMA_API_BASE_URL` environment variable to match the external Ollama server URL:
```bash
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
Alternatively, if you prefer to build the container yourself, use the following command:
```bash
docker build -t ollama-webui .
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api -v ollama-webui:/app/backend/data --name ollama-webui --restart always ollama-webui
```
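Before pointing the Web UI at a remote server, it helps to confirm the Ollama API is reachable from the machine that will run the container. `https://example.com` here is just the placeholder from the commands above; replace it with your actual server address:
```bash
# A reachable Ollama server answers on its root endpoint.
curl https://example.com/
```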
### Installing Both Ollama and Ollama Web UI
#### Using Docker Compose
If you don't have Ollama installed yet, you can use the provided Docker Compose file for a hassle-free installation. Simply run the following command:
```bash
docker compose up -d --build
```
This command will install both Ollama and Ollama Web UI on your system.
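After the stack is up, the bundled Ollama service still needs at least one model before you can chat. A sketch of one way to pull a model: the compose files define an `ollama` service, and `llama2` is only an example model name:
```bash
# Pull a model inside the ollama service container.
docker compose exec ollama ollama pull llama2
```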
##### Enable GPU
Use the additional Docker Compose file designed to enable GPU support by running the following command:
```bash
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
```
##### Expose Ollama API outside the container stack
Deploy the service with an additional Docker Compose file designed for API exposure:
```bash
docker compose -f docker-compose.yaml -f docker-compose.api.yaml up -d --build
```
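The override files can also be combined in a single command, and the exposed API port can be changed through the `OLLAMA_WEBAPI_PORT` variable that appears in the API compose override. A sketch, assuming the GPU prerequisites described below are already in place:
```bash
# Enable GPU support and expose the Ollama API on port 11435 in one go.
OLLAMA_WEBAPI_PORT=11435 docker compose \
  -f docker-compose.yaml \
  -f docker-compose.gpu.yaml \
  -f docker-compose.api.yaml \
  up -d --build
```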
#### Using the Provided `run-compose.sh` Script (Linux)
This method is also available on Windows under any Docker-enabled WSL2 Linux distro (you have to enable WSL integration from Docker Desktop).
First, grant execute permission to the script:
```bash
chmod +x run-compose.sh
```
##### For a CPU-only container
```bash
./run-compose.sh
```
##### Enable GPU
For a GPU-enabled container, your GPU driver must be set up for Docker. This mostly works with NVIDIA GPUs, and the official install guide is [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
Warning: a GPU-enabled installation has only been tested on Linux with an NVIDIA GPU; full functionality is not guaranteed on Windows, macOS, or with a different GPU.
```bash
./run-compose.sh --enable-gpu
```
Note that both of the above commands use the latest production Docker image from the repository. To build the latest local version instead, append the `--build` flag, for example:
```bash
./run-compose.sh --enable-gpu --build
```
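To confirm that Docker can actually see your GPU before running the GPU-enabled install, a common sanity check is to run `nvidia-smi` in a throwaway CUDA container. This assumes the NVIDIA container toolkit is installed; the CUDA image tag is only an example:
```bash
# If this prints your GPU, the Docker GPU runtime is configured correctly.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```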
#### Using Alternative Methods (Kustomize or Helm)
See [INSTALLATION.md](/INSTALLATION.md) for information on how to install using these methods, and/or join our [Ollama Web UI Discord community](https://discord.gg/5rJgQTnV4s).
## How to Install Without Docker
While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.
@@ -2,5 +2,6 @@ version: '3.8'
```yaml
services:
  ollama:
    # Expose Ollama API outside the container stack
    ports:
      - ${OLLAMA_WEBAPI_PORT-11434}:11434
```
```yaml
version: '3.6'
services:
  ollama:
    # Expose Ollama API outside the container stack
    ports:
      - 11434:11434
```
@@ -2,6 +2,7 @@ version: '3.8'
```yaml
services:
  ollama:
    # GPU support
    deploy:
      resources:
        reservations:
```
```yaml
version: '3.6'
services:
  ollama:
    # GPU support
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
```
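Once the stack is running with GPU support, you can check that the `ollama` service actually received the device. This is a sketch: `nvidia-smi` is normally injected into the container by the NVIDIA runtime, and the exact log wording varies between Ollama versions:
```bash
# If GPU passthrough works, nvidia-smi inside the ollama service lists the device.
docker compose exec ollama nvidia-smi

# Alternatively, look for GPU-related lines in the Ollama startup logs.
docker compose logs ollama | grep -i gpu
```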