"graphbolt/src/fused_csc_sampling_graph.cc" did not exist on "e7ff22f7be98b0a59f3ade4e1d6a6231aee82520"
docker.md 1.92 KB
Newer Older
mashun1's avatar
v1  
mashun1 committed
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
# Ollama Docker image

### CPU only

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
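
Once the container is up, you can sanity-check that the API is reachable on the published port (an optional quick check, assuming the default port mapping above):

```shell
# Should print a small JSON document with the server version
# (requires the container started above to be running)
curl http://localhost:11434/api/version
```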

### Nvidia GPU

Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation).

#### Install with Apt
1.  Configure the repository
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
    | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
    | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
    | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
```
2.  Install the NVIDIA Container Toolkit packages
```bash
sudo apt-get install -y nvidia-container-toolkit
```

#### Install with Yum or Dnf
1.  Configure the repository
```bash
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
    | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
```
2. Install the NVIDIA Container Toolkit packages
```bash
sudo yum install -y nvidia-container-toolkit
```

#### Configure Docker to use Nvidia driver
```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
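
If the configuration step succeeded, the NVIDIA runtime should now be registered with Docker (an optional check, not part of the original instructions):

```shell
# "nvidia" should appear among the configured runtimes
docker info | grep -i runtimes
```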

#### Start the container

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
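
To confirm the GPU is actually visible inside the container, you can run `nvidia-smi` through `docker exec` (an optional check, assuming the container name used above):

```shell
# Lists the GPUs the container can see; fails if GPU passthrough is not working
docker exec -it ollama nvidia-smi
```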

### AMD GPU

To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:

```bash
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
```

### Run model locally

Now you can run a model:

```bash
docker exec -it ollama ollama run llama3.1
```

### Try different models

More models can be found on the [Ollama library](https://ollama.com/library).
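
For example, to try a different model from the library (here `mistral`, purely as an illustration), pull it inside the running container and start an interactive session:

```shell
# Download the model into the ollama volume, then chat with it
docker exec -it ollama ollama pull mistral
docker exec -it ollama ollama run mistral
```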