# Commit daa5bb44 (OpenDAS / ollama)

Merge pull request #907 from jmorganca/mxyng/linux: update linux.md

Authored Oct 25, 2023 by Michael Yang; committed by GitHub (unverified signature). Parents: 53b0ba8d, 92119de9.
Showing 2 changed files with 53 additions and 40 deletions:

- docs/linux.md (+53, -39)
- scripts/install.sh (+0, -1)
## docs/linux.md
````diff
-# Installing Ollama on Linux
+# Ollama on Linux
 
-> Note: A one line installer for Ollama is available by running:
->
-> ```bash
-> curl https://ollama.ai/install.sh | sh
-> ```
-
-## Download the `ollama` binary
-
-Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:
-
-```bash
-sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
-sudo chmod +x /usr/bin/ollama
-```
-
-## Start Ollama
-
-Start Ollama by running `ollama serve`:
-
-```bash
-ollama serve
-```
-
-Once Ollama is running, run a model in another terminal session:
-
-```bash
-ollama run llama2
-```
+## Install
+
+Install Ollama running this one-liner:
+>
+> ```bash
+> curl https://ollama.ai/install.sh | sh
+> ```
+
+## Manual install
+
+### Download the `ollama` binary
+
+Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:
+
+```bash
+sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo chmod +x /usr/bin/ollama
+```
 
-## Install CUDA drivers (optional – for Nvidia GPUs)
-
-[Download and install](https://developer.nvidia.com/cuda-downloads) CUDA.
-
-Verify that the drivers are installed by running the following command, which should print details about your GPU:
-
-```bash
-nvidia-smi
-```
-
-## Adding Ollama as a startup service (optional)
+### Adding Ollama as a startup service (recommended)
 
 Create a user for Ollama:
````
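The manual-install hunk hard-codes the amd64 build URL. As a side note, the artifact name could be derived from `uname -m`; only the amd64 URL appears in this commit, so the arm64 file name in this sketch is an assumption, not something the docs state:

```shell
#!/bin/sh
# Sketch only: map `uname -m` output to a download URL in the style of the
# manual-install step. The amd64 URL is the one shown in linux.md; the
# arm64 name is an assumption, not taken from this commit.
ollama_download_url() {
  case "$1" in
    x86_64)  echo "https://ollama.ai/download/ollama-linux-amd64" ;;
    aarch64) echo "https://ollama.ai/download/ollama-linux-arm64" ;;  # assumed name
    *)       return 1 ;;
  esac
}

# Example: print the docs' download command for the current machine.
if url=$(ollama_download_url "$(uname -m)"); then
  echo "sudo curl -L $url -o /usr/bin/ollama"
fi
```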
````diff
@@ -60,7 +40,6 @@ User=ollama
 Group=ollama
 Restart=always
 RestartSec=3
-Environment="HOME=/usr/share/ollama"
 
 [Install]
 WantedBy=default.target
````
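This hunk drops the `Environment="HOME=/usr/share/ollama"` line from the example service unit. A small helper (ours, not the docs') can check whether an existing unit file still carries the stale line:

```shell
#!/bin/sh
# Sketch: detect the Environment="HOME=..." line that this commit removes
# from the example ollama.service unit. Returns 0 if the line is present.
unit_has_stale_home() {
  grep -q '^Environment="HOME=' "$1"
}

# Usage against a real install (unit path as given in the docs):
#   unit_has_stale_home /etc/systemd/system/ollama.service \
#     && echo "stale HOME line present"
```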
````diff
@@ -73,7 +52,40 @@ sudo systemctl daemon-reload
 sudo systemctl enable ollama
 ```
 
-### Viewing logs
+### Install CUDA drivers (optional – for Nvidia GPUs)
+
+[Download and install](https://developer.nvidia.com/cuda-downloads) CUDA.
+
+Verify that the drivers are installed by running the following command, which should print details about your GPU:
+
+```bash
+nvidia-smi
+```
+
+### Start Ollama
+
+Start Ollama using `systemd`:
+
+```bash
+sudo systemctl start ollama
+```
+
+## Update
+
+Update ollama by running the install script again:
+
+```bash
+curl https://ollama.ai/install.sh | sh
+```
+
+Or by downloading the ollama binary:
+
+```bash
+sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo chmod +x /usr/bin/ollama
+```
+
+## Viewing logs
 
 To view logs of Ollama running as a startup service, run:
````
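The new Update section offers two equivalent paths: re-run the installer, or re-download the binary. As a sketch (the wrapper function and `DRY_RUN` flag are ours, not part of the docs), both paths can sit behind a dry-run switch so the commands are printed for review before anything touches the system:

```shell
#!/bin/sh
# Sketch only: the two update paths from the new "Update" section.
# With DRY_RUN=1 each command is printed instead of executed.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else sh -c "$*"; fi
}

update_ollama() {
  case "$1" in
    script)
      run "curl https://ollama.ai/install.sh | sh" ;;
    binary)
      run "sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama"
      run "sudo chmod +x /usr/bin/ollama" ;;
    *) return 1 ;;
  esac
}

DRY_RUN=1 update_ollama binary   # prints the two commands without running them
```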
````diff
@@ -84,19 +96,21 @@ journalctl -u ollama
 ## Uninstall
 
 Remove the ollama service:
 
 ```bash
-systemctl stop ollama
-systemctl disable ollama
-rm /etc/systemd/system/ollama.service
+sudo systemctl stop ollama
+sudo systemctl disable ollama
+sudo rm /etc/systemd/system/ollama.service
 ```
 
 Remove the ollama binary from your bin directory (either `/usr/local/bin`, `/usr/bin`, or `/bin`):
 
 ```bash
-rm /usr/local/bin/ollama
+sudo rm $(which ollama)
 ```
 
 Remove the downloaded models and Ollama service user:
 ```bash
-rm -r /usr/share/ollama
-userdel ollama
+sudo rm -r /usr/share/ollama
+sudo userdel ollama
 ```
````
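The uninstall hunk swaps the hard-coded `rm /usr/local/bin/ollama` for `sudo rm $(which ollama)`, which covers all three bin directories the text mentions. One portability note: `which` is not specified by POSIX, so a sketch using `command -v` instead (the helper name is ours, not the docs'):

```shell
#!/bin/sh
# Sketch: locate the installed binary the way `sudo rm $(which ollama)`
# does, but via POSIX `command -v`, which behaves consistently across
# shells. Prints nothing when the binary is not on PATH.
find_binary() {
  command -v "$1" 2>/dev/null || true
}

path=$(find_binary ollama)
if [ -n "$path" ]; then
  echo "would run: sudo rm $path"
else
  echo "ollama not on PATH"
fi
```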
## scripts/install.sh
````diff
@@ -89,7 +89,6 @@ User=ollama
 Group=ollama
 Restart=always
 RestartSec=3
-Environment="HOME=/usr/share/ollama"
 Environment="PATH=$PATH"
 
 [Install]
````
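The same `Environment="HOME=/usr/share/ollama"` line is dropped from the unit that install.sh generates. Note the unquoted `$PATH` in the surviving `Environment="PATH=$PATH"` line: assuming install.sh emits the unit through an unescaped heredoc (an assumption about the script, since only this hunk is visible), the installer's PATH is expanded and baked into the unit at install time. A minimal illustration of that expansion:

```shell
#!/bin/sh
# Sketch only: show how an unescaped heredoc expands $PATH when a unit
# file is generated. This mimics, not quotes, what install.sh may do.
write_env_lines() {
  cat <<EOF
Environment="PATH=$PATH"
EOF
}

write_env_lines   # prints the line with the current shell's PATH substituted
```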