docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
For GPU support, add the `--gpus=all` flag; see the Docker [image](https://hub.docker.com/r/ollama/ollama) on Docker Hub for more information.
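For example, a GPU-enabled container can be started with the same command plus that flag (this assumes the NVIDIA Container Toolkit is installed on the host):
```
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```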
## Quickstart
...
## Customize your own model
### Import from GGUF
Ollama supports importing GGUF models in the Modelfile:
1. Create a file named `Modelfile` with a `FROM` instruction that points to the local filepath of the model you want to import.
```
FROM ./vicuna-33b.Q4_0.gguf
```
2. Create the model in Ollama
```
ollama create example -f Modelfile
```
3. Run the model
```
ollama run example
```
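Running the model opens an interactive chat session. A prompt can also be passed as a command-line argument for a one-shot response (a usage sketch; the question is just an illustration):
```
ollama run example "Why is the sky blue?"
```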
### Import from PyTorch or Safetensors
See the [guide](docs/import.md) on importing models for more information.
### Customize a prompt
Models from the Ollama library can be customized with a prompt. The example