OpenDAS / ollama · Commit e1388938

reorganize `README.md` files

Authored Jun 28, 2023 by Jeffrey Morgan (parent 9934ad77)
Showing 2 changed files with 64 additions and 28 deletions:

- README.md (+47 −25)
- desktop/README.md (+17 −3)
README.md
# Ollama

Run ai models locally.
_Note: this project is a work in progress. The features below are still in development_

**Features**

- Run models locally on macOS (Windows, Linux and other platforms coming soon)
- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, Core ML and other loaders coming soon)
- Import models from local files
- Find and download models on Hugging Face and other sources (coming soon)
- Support for running and switching between multiple models at a time (coming soon)
- Native desktop experience (coming soon)
- Built-in memory (coming soon)
## Install

```
pip install ollama
```
## Quickstart
```python
import ollama
ollama.generate("./llama-7b-ggml.bin", "hi")
```
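The quickstart above passes a local GGML model path straight to `ollama.generate`. A small guard before the call gives a clearer error when the file is missing. This is a sketch, not part of the SDK; the generate function is passed in (e.g. `ollama.generate`) so the guard itself can be exercised without a real model file.

```python
from pathlib import Path

def generate_checked(generate, model_path, prompt):
    """Verify the model file exists, then run `generate(model_path, prompt)`.

    `generate` is any callable with the `generate(model, message)` shape
    documented in this README, e.g. `ollama.generate`.
    """
    if not Path(model_path).is_file():
        raise FileNotFoundError(f"model not found: {model_path}")
    return generate(model_path, prompt)
```

With the real SDK this would be called as `generate_checked(ollama.generate, "./llama-7b-ggml.bin", "hi")`.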
## Python SDK

### `ollama.generate(model, message)`

Generate a completion

```python
import ollama
ollama.generate("./llama-7b-ggml.bin", "hi")
```
### `ollama.models()`

...

@@ -58,6 +73,22 @@ Add a model by importing from a file

```python
ollama.add("./path/to/model")
```
### `ollama.load(model)`

Manually load a model for generation

```python
ollama.load("model")
```
### `ollama.unload(model)`

Unload a model

```python
ollama.unload("model")
```
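Since `ollama.load` and `ollama.unload` are a pair, a context manager guarantees the model is unloaded even if generation fails. This wrapper is a sketch, not part of the SDK; it takes the SDK as a parameter (the `ollama` module itself, or any object exposing the `load`/`generate`/`unload` functions documented above), so the pattern can be tried without a real model.

```python
from contextlib import contextmanager

@contextmanager
def loaded_model(sdk, model):
    """Load `model` via `sdk.load`, yield it, and always call `sdk.unload`."""
    sdk.load(model)
    try:
        yield model
    finally:
        sdk.unload(model)

# Stand-in SDK that only records calls, so the wrapper runs without a model.
class FakeSDK:
    def __init__(self):
        self.calls = []
    def load(self, model):
        self.calls.append(("load", model))
    def generate(self, model, message):
        self.calls.append(("generate", model, message))
        return "ok"
    def unload(self, model):
        self.calls.append(("unload", model))
```

With the real package this would read `with loaded_model(ollama, "./llama-7b-ggml.bin") as m: ollama.generate(m, "hi")`.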
## Coming Soon

### `ollama.pull(model)`
...

@@ -76,15 +107,6 @@ Search for compatible models that Ollama can run

```python
ollama.search("llama-7b")
```
## Future CLI
In the future, there will be an `ollama` CLI for running models on servers, in containers, or for local development environments.
```
ollama generate huggingface.co/thebloke/llama-7b-ggml "hi"
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
```
## Documentation

- [Development](docs/development.md)
desktop/README.md
# Desktop

The Ollama desktop experience. This is an experimental, easy-to-use app for running models with [`ollama`](https://github.com/jmorganca/ollama).
## Download

- [macOS](https://ollama.ai/download/darwin_arm64) (Apple Silicon)
- macOS (Intel – Coming soon)
- Windows (Coming soon)
- Linux (Coming soon)
## Running

In the background, run the `ollama.py` [development](../docs/development.md) server:
```
python ../ollama.py serve --port 7734
```
Then run the desktop app with `npm start`:
```
npm install
npm start
```
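The app expects the server started above to be reachable before it is launched. A minimal sketch of a reachability check (not part of the project; the port 7734 comes from the `serve` command above):

```python
import socket

def server_listening(host="127.0.0.1", port=7734, timeout=1.0):
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `server_listening()` before `npm start` confirms `ollama.py serve` is actually up.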
## Coming soon

- Browse the latest available models on Hugging Face and other sources
- Keep track of previous conversations with models
- Switch between models
- Connect to remote Ollama servers to run models