# OpenAI compatibility

> **Note:** OpenAI compatibility is experimental and is subject to major adjustments including breaking changes. For fully-featured access to the Ollama API, see the Ollama [Python library](https://github.com/ollama/ollama-python), [JavaScript library](https://github.com/ollama/ollama-js) and [REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).

Ollama provides experimental compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama.

## Usage

### OpenAI Python library

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',

    # required but ignored
    api_key='ollama',
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Say this is a test',
        }
    ],
    model='llama3',
)
```

### OpenAI JavaScript library

```javascript
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/',

  // required but ignored
  apiKey: 'ollama',
})

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama3',
})
```

### `curl`

```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
```

## Endpoints

### `/v1/chat/completions`

#### Supported features

- [x] Chat completions
- [x] Streaming
- [x] JSON mode
- [x] Reproducible outputs
- [ ] Vision
- [x] Tools
- [ ] Logprobs
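
With streaming enabled, responses arrive in the OpenAI wire format: server-sent-event lines of the form `data: {...}`, terminated by `data: [DONE]`. A minimal stdlib sketch of extracting the incremental text from such lines (the sample lines below are illustrative, not captured output):

```python
import json

def delta_text(sse_line: str) -> str:
    """Extract the incremental content from one streaming SSE line."""
    payload = sse_line.removeprefix('data: ').strip()
    if not payload or payload == '[DONE]':
        return ''
    chunk = json.loads(payload)
    # each streamed chunk carries choices[0].delta with optional content
    return chunk['choices'][0]['delta'].get('content') or ''

lines = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print(''.join(delta_text(line) for line in lines))  # prints "Hello"
```

The OpenAI client libraries shown above handle this parsing for you when `stream=True` is passed to `chat.completions.create`.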

#### Supported request fields

- [x] `model`
- [x] `messages`
  - [x] Text `content`
  - [ ] Array of `content` parts
- [x] `frequency_penalty`
- [x] `presence_penalty`
- [x] `response_format`
- [x] `seed`
- [x] `stop`
- [x] `stream`
- [x] `temperature`
- [x] `top_p`
- [x] `max_tokens`
- [x] `tools`
- [ ] `tool_choice`
- [ ] `logit_bias`
- [ ] `user`
- [ ] `n`
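
A single request can combine several of the supported fields. A hypothetical sketch of such a request body (the `get_weather` tool and all values here are made up for illustration; only the field names come from the list above):

```python
import json

# illustrative request body; every value here is arbitrary
body = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
    # a fixed seed plus temperature 0 targets reproducible outputs
    "seed": 101,
    "temperature": 0,
    "max_tokens": 200,
    "stop": ["\n\n"],
    # tool definitions follow the OpenAI function-calling schema;
    # get_weather is a hypothetical tool, not part of Ollama
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(body, indent=2))
```

This JSON can be sent to `/v1/chat/completions` as in the `curl` example above, or passed as keyword arguments to the client libraries.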

## Models

Before using a model, pull it locally with `ollama pull`:

```shell
ollama pull llama3
```

### Default model names

For tooling that relies on default OpenAI model names such as `gpt-3.5-turbo`, use `ollama cp` to copy an existing model name to a temporary name:

```shell
ollama cp llama3 gpt-3.5-turbo
```

Afterwards, this new model name can be specified in the `model` field:

```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
```