Commit 20cdd9fe authored by Jeffrey Morgan's avatar Jeffrey Morgan

update `README.md`

parent 11614b6d
@@ -74,10 +74,10 @@ ollama.search("llama-7b")
## Future CLI
In the future, there will be an `ollama` CLI for running models on servers, in containers, or in local development environments.
```
ollama generate huggingface.co/thebloke/llama-7b-ggml
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
```
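As an illustration of the download output shown above, a progress line like the CLI's could be rendered as follows. This is a sketch only: the bar width and field layout are inferred from the example output, not taken from ollama's code, and `progress_line` is a hypothetical helper.

```python
def progress_line(done, total, rate_mb_s, width=18):
    """Render a download progress line in the style shown above.

    done/total are completed vs. total chunks; rate_mb_s is the
    transfer rate. The 18-character bar width is an assumption.
    """
    frac = done / total
    filled = int(frac * width)
    # Filled portion, an arrow head, then padding to the full bar width.
    bar = "=" * filled + ">" + " " * (width - filled - 1)
    return f"> Downloading [{bar}] {frac * 100:.2f}% ({done}/{total}) {rate_mb_s}MB/s"
```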
...
# Desktop
The Ollama desktop app
## Running
In the background run the `ollama.py` [development](../docs/development.md) server:
```
python ../ollama.py serve --port 5001
```
Then run the desktop app:
```
npm install
npm start
```
@@ -14,14 +14,6 @@ Put your model in `models/` and run:
python3 ollama.py serve
```
## Building
If using Apple silicon, you need a Python version that supports arm64:
...