# App
Get started with the App API
# Chat App
```python
POST /api/v2/chat/completions
```
### Examples
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
### Stream Chat App
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY=dbgpt
APP_ID={YOUR_APP_ID}
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"Hello\",\"model\":\"gpt-4o\", \"chat_mode\": \"chat_app\", \"chat_param\": \"$APP_ID\"}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
APP_ID="{YOUR_APP_ID}"
client = Client(api_key=DBGPT_API_KEY)
async for data in client.chat_stream(
messages="Introduce AWEL",
model="gpt-4o",
chat_mode="chat_app",
chat_param=APP_ID
):
print(data)
```
</TabItem>
</Tabs>
### Chat Completion Stream Response
```commandline
data: {"id": "109bfc28-fe87-452c-8e1f-d4fe43283b7d", "created": 1710919480, "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "```agent-plans\n[{\"name\": \"Introduce Awel\", \"num\": 2, \"status\": \"complete\", \"agent\": \"Human\", \"markdown\": \"```agent-messages\\n[{\\\"sender\\\": \\\"Summarizer\\\", \\\"receiver\\\": \\\"Human\\\", \\\"model\\\": \\\"gpt-4o\\\", \\\"markdown\\\": \\\"Agentic Workflow Expression Language (AWEL) is a specialized language designed for developing large model applications with intelligent agent workflows. It offers flexibility and functionality, allowing developers to focus on business logic for LLMs applications without getting bogged down in model and environment details. AWEL uses a layered API design architecture, making it easier to work with. You can find examples and source code to get started with AWEL, and it supports various operators and environments. AWEL is a powerful tool for building native data applications through workflows and agents.\"}]\n```"}}]}
data: [DONE]
```
### Get App
```python
GET /api/v2/serve/apps/{app_id}
```
<Tabs
defaultValue="curl_get_app"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_get_app'},
{label: 'Python', value: 'python_get_app'},
]
}>
<TabItem value="curl_get_app">
```shell
DBGPT_API_KEY=dbgpt
APP_ID={YOUR_APP_ID}
curl -X GET "http://localhost:5670/api/v2/serve/apps/$APP_ID" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_get_app">
```python
from dbgpt_client import Client
from dbgpt_client.app import get_app
DBGPT_API_KEY = "dbgpt"
app_id = "{your_app_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await get_app(client=client, app_id=app_id)
```
</TabItem>
</Tabs>
#### Query Parameters
________
<b>app_id</b> <font color="gray"> string </font> <font color="red"> Required </font>
app id
________
#### Response body
Return <a href="#the-app-model">App Model</a>
### List App
```python
GET /api/v2/serve/apps
```
<Tabs
defaultValue="curl_list_app"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_list_app'},
{label: 'Python', value: 'python_list_app'},
]
}>
<TabItem value="curl_list_app">
```shell
DBGPT_API_KEY=dbgpt
curl -X GET 'http://localhost:5670/api/v2/serve/apps' -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_list_app">
```python
from dbgpt_client import Client
from dbgpt_client.app import list_app
DBGPT_API_KEY = "dbgpt"
app_id = "{your_app_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await list_app(client=client)
```
</TabItem>
</Tabs>
#### Response body
Return <a href="#the-app-model">App Model</a> List
### The App Model
________
<b>app_code</b> <font color="gray"> string </font>
The unique app id.
________
<b>app_name</b> <font color="gray"> string </font>
The app name.
________
<b>app_describe</b> <font color="gray"> string </font>
The app description.
________
<b>team_mode</b> <font color="gray"> string </font>
The team mode of the app.
________
<b>language</b> <font color="gray"> string </font>
The language of the app.
________
<b>team_context</b> <font color="gray"> string </font>
The team context of the app.
________
<b>user_code</b> <font color="gray"> string </font>
The user code.
________
<b>sys_code</b> <font color="gray"> string </font>
The system code.
________
<b>is_collected</b> <font color="gray"> string </font>
Whether the app has been collected by the current user.
________
<b>icon</b> <font color="gray"> string </font>
The app icon.
________
<b>created_at</b> <font color="gray"> string </font>
The created time of the app.
________
<b>updated_at</b> <font color="gray"> string </font>
The updated time of the app.
________
<b>details</b> <font color="gray"> array </font>
The app details, a list of <a href="#the-app-detail-model">App Detail Model</a> objects.
________
### The App Detail Model
________
<b>app_code</b> <font color="gray"> string </font>
The app code.
________
<b>app_name</b> <font color="gray"> string </font>
The app name.
________
<b>agent_name</b> <font color="gray"> string </font>
The agent name.
________
<b>node_id</b> <font color="gray"> string </font>
The node id.
________
<b>resources</b> <font color="gray"> string </font>
The resources of the agent.
________
<b>prompt_template</b> <font color="gray"> string </font>
The prompt template.
________
<b>llm_strategy</b> <font color="gray"> string </font>
The LLM strategy.
________
<b>llm_strategy_value</b> <font color="gray"> string </font>
The LLM strategy value.
________
<b>created_at</b> <font color="gray"> string </font>
The created time.
________
<b>updated_at</b> <font color="gray"> string </font>
The updated time.
________
# Chat
Given a list of messages comprising a conversation, the model will return a response.
# Create Chat Completion
```python
POST /api/v2/chat/completions
```
### Examples
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
### Stream Chat Completion
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY="dbgpt"
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"Hello\",\"model\":\"gpt-4o\", \"stream\": true}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
async for data in client.chat_stream(
model="gpt-4o",
messages="hello",
):
print(data)
```
</TabItem>
</Tabs>
### Chat Completion Stream Response
```commandline
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hello"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "!"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " How"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " can"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " I"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " assist"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " you"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " today"}}]}
data: {"id": "chatcmpl-ba6fb52e-e5b2-11ee-b031-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "?"}}]}
data: [DONE]
```
### Chat Completion
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY="dbgpt"
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"Hello\",\"model\":\"gpt-4o\", \"stream\": false}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
response = await client.chat(model="gpt-4o", messages="hello")
```
</TabItem>
</Tabs>
### Chat Completion Response
```json
{
"id": "a8321543-52e9-47a5-a0b6-3d997463f6a3",
"object": "chat.completion",
"created": 1710826792,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"finish_reason": null
}
],
"usage": {
"prompt_tokens": 0,
"total_tokens": 0,
"completion_tokens": 0
}
}
```
### Request body
________
<b>messages</b> <font color="gray"> string </font> <font color="red"> Required </font>
The messages comprising the conversation so far. The DB-GPT v2 API accepts a plain string here, as in the examples above.
________
<b>model</b> <font color="gray"> string </font> <font color="red"> Required </font>
ID of the model to use, e.g. `gpt-4o` or `chatgpt_proxyllm`.
________
<b>chat_mode</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The DB-GPT chat mode, which can be one of the following: `chat_normal`, `chat_app`, `chat_knowledge`, `chat_flow`, default is `chat_normal`.
________
<b>chat_param</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The chat param value for the chosen chat mode: `{app_id}` for `chat_app`, `{space_id}` for `chat_knowledge`, `{flow_id}` for `chat_flow`. Default is `None`.
________
<b>max_new_tokens</b> <font color="gray"> integer </font> <font color="red"> Optional </font>
The maximum number of tokens that can be generated in the chat completion.
The total length of input tokens and generated tokens is limited by the model's context length.
________
<b>stream</b> <font color="gray"> boolean </font> <font color="red"> Optional </font>
If set, partial message deltas will be sent.
Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.
________
<b>temperature</b> <font color="gray"> number </font> <font color="red"> Optional </font>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
________
<b>conv_uid</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The conversation id of the model inference, default is `None`
________
<b>span_id</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The span id of the model inference, default is `None`
________
<b>sys_code</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The system code, default is `None`
________
<b>user_name</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The web server user name, default is `None`
________
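For illustration, here is a minimal sketch combining several of the optional parameters above with the Python client. Whether the client forwards all of these request-body fields as keyword arguments may depend on your client version, so treat this as an assumption rather than a guaranteed signature:

```python
from dbgpt_client import Client

DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)

# Sketch: temperature, max_new_tokens and conv_uid mirror the request-body
# fields documented above; passing them as kwargs is an assumption.
async for data in client.chat_stream(
    model="gpt-4o",
    messages="Summarize AWEL in one sentence",
    temperature=0.2,       # lower temperature: more deterministic output
    max_new_tokens=256,    # cap on generated tokens
    conv_uid="my-conv-id", # reuse the same id to continue a conversation
):
    print(data)
```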
### Chat Stream Response Body
________
<b>id</b> <font color="gray"> string </font>
The conv_uid of the conversation.
________
<b>model</b> <font color="gray"> string </font>
The model used for the chat completion.
________
<b>created</b> <font color="gray"> integer </font>
The Unix timestamp (in seconds) of when the chat completion was created.
________
<b>choices</b> <font color="gray"> array </font>
A list of chat completion choices. Can be more than one if n is greater than 1.
- <b>index</b> <font color="gray"> integer </font>
The index of the choice in the list of choices.
- <b>delta</b> <font color="gray"> object </font>
The chat completion delta.
- <b>role</b> <font color="gray"> string </font>
The role of the speaker. Can be `user` or `assistant`.
- <b>content</b> <font color="gray"> string </font>
The content of the message.
- <b>finish_reason</b> <font color="gray"> string </font>
The reason the chat completion finished. Can be `max_tokens` or `stop`.
________
### Chat Response Body
________
<b>id</b> <font color="gray"> string </font>
The conv_uid of the conversation.
________
<b>model</b> <font color="gray"> string </font>
The model used for the chat completion.
________
<b>created</b> <font color="gray"> integer </font>
The Unix timestamp (in seconds) of when the chat completion was created.
________
<b>object</b> <font color="gray"> string </font>
The object type of the chat completion.
________
<b>choices</b> <font color="gray"> array </font>
A list of chat completion choices. Can be more than one if n is greater than 1.
- <b>index</b> <font color="gray"> integer </font>
The index of the choice in the list of choices.
- <b>message</b> <font color="gray"> object </font>
The chat completion message.
- <b>role</b> <font color="gray"> string </font>
The role of the speaker. Can be `user` or `assistant`.
- <b>content</b> <font color="gray"> string </font>
The content of the message.
- <b>finish_reason</b> <font color="gray"> string </font>
The reason the chat completion finished. Can be `max_tokens` or `stop`.
________
<b>usage</b> <font color="gray"> object </font>
The usage statistics for the chat completion.
- <b>prompt_tokens</b> <font color="gray"> integer </font>
The number of tokens in the prompt.
- <b>total_tokens</b> <font color="gray"> integer </font>
The total number of tokens in the chat completion.
- <b>completion_tokens</b> <font color="gray"> integer </font>
The number of tokens in the chat completion.
# Datasource
Get started with the Datasource API
# Chat Datasource
```python
POST /api/v2/chat/completions
```
### Examples
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
### Chat Datasource
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY=dbgpt
DB_NAME="{your_db_name}"
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"show space datas limit 5\",\"model\":\"gpt-4o\", \"chat_mode\": \"chat_data\", \"chat_param\": \"$DB_NAME\"}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
DB_NAME="{your_db_name}"
client = Client(api_key=DBGPT_API_KEY)
res = await client.chat(
messages="show space datas limit 5",
model="gpt-4o",
chat_mode="chat_data",
chat_param=DB_NAME
)
```
</TabItem>
</Tabs>
#### Chat Completion Response
```json
{
"id": "2bb80fdd-e47e-4083-8bc9-7ca66ee0931b",
"object": "chat.completion",
"created": 1711509733,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The user wants to display information about knowledge spaces with a limit of 5 results.\\n<chart-view content=\"{\"type\": \"response_table\", \"sql\": \"SELECT * FROM knowledge_space LIMIT 5\", \"data\": [{\"id\": 5, \"name\": \"frfrw\", \"vector_type\": \"Chroma\", \"desc\": \"eee\", \"owner\": \"eee\", \"context\": null, \"gmt_created\": \"2024-01-02T13:29:52\", \"gmt_modified\": \"2024-01-02T13:29:52\", \"description\": null}, {\"id\": 7, \"name\": \"acc\", \"vector_type\": \"Chroma\", \"desc\": \"dede\", \"owner\": \"dede\", \"context\": null, \"gmt_created\": \"2024-01-02T13:47:01\", \"gmt_modified\": \"2024-01-02T13:47:01\", \"description\": null}, {\"id\": 8, \"name\": \"bcc\", \"vector_type\": \"Chroma\", \"desc\": \"dede\", \"owner\": \"dede\", \"context\": null, \"gmt_created\": \"2024-01-02T14:22:02\", \"gmt_modified\": \"2024-01-02T14:22:02\", \"description\": null}, {\"id\": 9, \"name\": \"dede\", \"vector_type\": \"Chroma\", \"desc\": \"dede\", \"owner\": \"dede\", \"context\": null, \"gmt_created\": \"2024-01-02T14:36:18\", \"gmt_modified\": \"2024-01-02T14:36:18\", \"description\": null}, {\"id\": 10, \"name\": \"qqq\", \"vector_type\": \"Chroma\", \"desc\": \"dede\", \"owner\": \"dede\", \"context\": null, \"gmt_created\": \"2024-01-02T14:40:56\", \"gmt_modified\": \"2024-01-02T14:40:56\", \"description\": null}]}\" />"
},
"finish_reason": null
}
],
"usage": {
"prompt_tokens": 0,
"total_tokens": 0,
"completion_tokens": 0
}
}
```
### Create Datasource
```python
POST /api/v2/serve/datasources
```
#### Request body
Request <a href="#the-datasource-object">Datasource Object</a>
#### Response body
Return <a href="#the-datasource-object">Datasource Object</a>
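There is no tabbed example for this endpoint; the following is a minimal Python sketch, assuming a `create_datasource` helper and a `DatasourceModel` schema analogous to the `delete_datasource`/`get_datasource` helpers and the `SpaceModel` schema used elsewhere on this page:

```python
from dbgpt_client import Client
from dbgpt_client.datasource import create_datasource  # assumed helper
from dbgpt_client.schema import DatasourceModel        # assumed schema

DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
# Field names follow "The Datasource Object" below; the values are placeholders.
res = await create_datasource(client, DatasourceModel(
    db_name="test_db",
    db_type="mysql",
    db_host="127.0.0.1",
    db_port=3306,
    db_user="root",
    db_pwd="{your_db_password}",
    comment="a test datasource",
))
```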
### Update Datasource
```python
PUT /api/v2/serve/datasources
```
#### Request body
Request <a href="#the-datasource-object">Datasource Object</a>
#### Response body
Return <a href="#the-datasource-object">Datasource Object</a>
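Updates send the same Datasource Object shape. A fetch-modify-update sketch, assuming an `update_datasource` helper symmetrical to the others and that `get_datasource` returns the datasource model directly:

```python
from dbgpt_client import Client
from dbgpt_client.datasource import get_datasource, update_datasource  # update_datasource assumed

DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
# Fetch the existing datasource, change a field, then send it back.
ds = await get_datasource(client=client, datasource_id="{your_datasource_id}")
ds.comment = "updated comment"  # attribute access is an assumption
res = await update_datasource(client, ds)
```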
### Delete Datasource
```python
DELETE /api/v2/serve/datasources
```
<Tabs
defaultValue="curl_update_datasource"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_update_datasource'},
{label: 'Python', value: 'python_update_datasource'},
]
}>
<TabItem value="curl_update_datasource">
```shell
DBGPT_API_KEY=dbgpt
DATASOURCE_ID={YOUR_DATASOURCE_ID}
curl -X DELETE "http://localhost:5670/api/v2/serve/datasources/$DATASOURCE_ID" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
```
</TabItem>
<TabItem value="python_update_datasource">
```python
from dbgpt_client import Client
from dbgpt_client.datasource import delete_datasource
DBGPT_API_KEY = "dbgpt"
datasource_id = "{your_datasource_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await delete_datasource(client=client, datasource_id=datasource_id)
```
</TabItem>
</Tabs>
#### Delete Parameters
________
<b>datasource_id</b> <font color="gray"> string </font> <font color="red"> Required </font>
datasource id
________
#### Response body
Return <a href="#the-datasource-object">Datasource Object</a>
### Get Datasource
```python
GET /api/v2/serve/datasources/{datasource_id}
```
<Tabs
defaultValue="curl_get_datasource"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_get_datasource'},
{label: 'Python', value: 'python_get_datasource'},
]
}>
<TabItem value="curl_get_datasource">
```shell
DBGPT_API_KEY=dbgpt
DATASOURCE_ID={YOUR_DATASOURCE_ID}
curl -X GET "http://localhost:5670/api/v2/serve/datasources/$DATASOURCE_ID" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_get_datasource">
```python
from dbgpt_client import Client
from dbgpt_client.datasource import get_datasource
DBGPT_API_KEY = "dbgpt"
datasource_id = "{your_datasource_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await get_datasource(client=client, datasource_id=datasource_id)
```
</TabItem>
</Tabs>
#### Query Parameters
________
<b>datasource_id</b> <font color="gray"> string </font> <font color="red"> Required </font>
datasource id
________
#### Response body
Return <a href="#the-datasource-object">Datasource Object</a>
### List Datasource
```python
GET /api/v2/serve/datasources
```
<Tabs
defaultValue="curl_list_datasource"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_list_datasource'},
{label: 'Python', value: 'python_list_datasource'},
]
}>
<TabItem value="curl_list_datasource">
```shell
DBGPT_API_KEY=dbgpt
curl -X GET "http://localhost:5670/api/v2/serve/datasources" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_list_datasource">
```python
from dbgpt_client import Client
from dbgpt_client.datasource import list_datasource
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
res = await list_datasource(client=client)
```
</TabItem>
</Tabs>
#### Response body
Return <a href="#the-datasource-object">Datasource Object</a> List
### The Datasource Object
________
<b>id</b> <font color="gray">string</font>
The unique id for the datasource.
________
<b>db_name</b> <font color="gray">string</font>
The Database name
________
<b>db_type</b> <font color="gray">string</font>
Database type, e.g. sqlite, mysql, etc.
________
<b>db_path</b> <font color="gray">string</font>
File path for file-based database.
________
<b>db_host</b> <font color="gray">string</font>
Database host.
________
<b>db_port</b> <font color="gray">integer</font>
Database port.
________
<b>db_user</b> <font color="gray">string</font>
Database user.
________
<b>db_pwd</b> <font color="gray">string</font>
Database password.
________
<b>comment</b> <font color="gray">string</font>
Comment for the database.
________
# Evaluation
Get started with the Evaluation API
### Create Evaluation
```python
POST /api/v2/serve/evaluate/evaluation
```
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="curl_evaluation"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_evaluation'},
{label: 'Python', value: 'python_evaluation'},
]
}>
<TabItem value="curl_evaluation">
```shell
DBGPT_API_KEY=dbgpt
SPACE_ID={YOUR_SPACE_ID}
curl -X POST "http://localhost:5670/api/v2/serve/evaluate/evaluation" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"scene_key": "recall",
"scene_value":147,
"context":{"top_k":5},
"sys_code":"xx",
"evaluate_metrics":["RetrieverHitRateMetric","RetrieverMRRMetric","RetrieverSimilarityMetric"],
"datasets": [{
"query": "what awel talked about",
"doc_name":"awel.md"
}]
}'
```
</TabItem>
<TabItem value="python_evaluation">
```python
from dbgpt_client import Client
from dbgpt_client.evaluation import run_evaluation
from dbgpt.serve.evaluate.api.schemas import EvaluateServeRequest
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
request = EvaluateServeRequest(
# The scene type of the evaluation, e.g. support app, recall
scene_key="recall",
# e.g. app id(when scene_key is app), space id(when scene_key is recall)
scene_value="147",
context={"top_k": 5},
evaluate_metrics=[
"RetrieverHitRateMetric",
"RetrieverMRRMetric",
"RetrieverSimilarityMetric",
],
datasets=[
{
"query": "what awel talked about",
"doc_name": "awel.md",
}
],
)
data = await run_evaluation(client, request=request)
```
</TabItem>
</Tabs>
#### Request body
Request <a href="#the-evaluation-request-object">Evaluation Request Object</a>
When `scene_key` is `app`, the request body looks like this:
```json
{
"scene_key": "app",
"scene_value":"2c76eea2-83b6-11ef-b482-acde48001122",
"context":{"top_k":5, "prompt":"942acd7e33b54ce28565f89f9b278044","model":"zhipu_proxyllm"},
"sys_code":"xx",
"evaluate_metrics":["AnswerRelevancyMetric"],
"datasets": [{
"query": "what awel talked about",
"doc_name":"awel.md"
}]
}
```
When `scene_key` is `recall`, the request body looks like this:
```json
{
"scene_key": "recall",
"scene_value":"2c76eea2-83b6-11ef-b482-acde48001122",
"context":{"top_k":5, "prompt":"942acd7e33b54ce28565f89f9b278044","model":"zhipu_proxyllm"},
"evaluate_metrics":["RetrieverHitRateMetric", "RetrieverMRRMetric", "RetrieverSimilarityMetric"],
"datasets": [{
"query": "what awel talked about",
"doc_name":"awel.md"
}]
}
```
#### Response body
Return <a href="#the-evaluation-result">Evaluation Result</a> List
### The Evaluation Request Object
________
<b>scene_key</b> <font color="gray"> string </font> <font color="red"> Required </font>
The scene type of the evaluation, e.g. `app`, `recall`.
________
<b>scene_value</b> <font color="gray"> string </font> <font color="red"> Required </font>
The scene value of the evaluation, e.g. the app id (when scene_key is `app`) or the space id (when scene_key is `recall`).
________
<b>context</b> <font color="gray"> object </font> <font color="red"> Required </font>
The context of the evaluation.
- <b>top_k</b> <font color="gray"> int </font> <font color="red"> Required </font>
- <b>prompt</b> <font color="gray"> string </font> the prompt code
- <b>model</b> <font color="gray"> string </font> the LLM model name
________
<b>evaluate_metrics</b> <font color="gray"> array </font> <font color="red"> Required </font>
The evaluation metrics to run, e.g.
- <b>AnswerRelevancyMetric</b>: the answer relevancy metric (when scene_key is `app`)
- <b>RetrieverHitRateMetric</b>: hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it is about how often the system gets it right within the top few guesses. (when scene_key is `recall`)
- <b>RetrieverMRRMetric</b>: for each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it is the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it is second, the reciprocal rank is 1/2, and so on. (when scene_key is `recall`)
- <b>RetrieverSimilarityMetric</b>: the embedding similarity metric (when scene_key is `recall`)
________
<b>datasets</b> <font color="gray"> array </font> <font color="red"> Required </font>
The datasets of the evaluation.
________
### The Evaluation Result
________
<b>prediction</b> <font color="gray">string</font>
The prediction result
________
<b>contexts</b> <font color="gray">string</font>
The context chunks retrieved by RAG.
________
<b>score</b> <font color="gray">float</font>
The score of the prediction
________
<b>passing</b> <font color="gray">bool</font>
Whether the prediction passed the evaluation.
________
<b>metric_name</b> <font color="gray">string</font>
The metric name of the evaluation
________
<b>prediction_cost</b> <font color="gray">int</font>
The prediction cost of the evaluation
________
<b>query</b> <font color="gray">string</font>
The query of the evaluation
________
<b>raw_dataset</b> <font color="gray">object</font>
The raw dataset of the evaluation
________
<b>feedback</b> <font color="gray">string</font>
The feedback from the LLM evaluation.
________
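To inspect these fields from the `run_evaluation` example above, a short sketch (assuming each result row exposes the fields as attributes; the exact return structure may differ):

```python
# `data` is the return value of the run_evaluation example above.
for result in data:
    print(f"{result.metric_name}: score={result.score}, passing={result.passing}")
```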
# Flow
Get started with the Flow API
# Chat Flow
```python
POST /api/v2/chat/completions
```
### Examples
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
### Stream Chat Flow
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY=dbgpt
FLOW_ID={YOUR_FLOW_ID}
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"Hello\",\"model\":\"chatgpt_proxyllm\", \"chat_mode\": \"chat_flow\", \"chat_param\": \"$FLOW_ID\"}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
FLOW_ID="{YOUR_FLOW_ID}"
client = Client(api_key=DBGPT_API_KEY)
async for data in client.chat_stream(
messages="Introduce AWEL",
model="chatgpt_proxyllm",
chat_mode="chat_flow",
chat_param=FLOW_ID
):
print(data)
```
</TabItem>
</Tabs>
#### Chat Completion Stream Response
```commandline
data: {"id": "579f8862-fc4b-481e-af02-a127e6d036c8", "created": 1710918094, "model": "chatgpt_proxyllm", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\n\n"}}]}
```
### Create Flow
```python
POST /api/v2/serve/awel/flows
```
#### Request body
Request <a href="#the-flow-object">Flow Object</a>
#### Response body
Return <a href="#the-flow-object">Flow Object</a>
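No tabbed example is provided for this endpoint; a minimal sketch, assuming a `create_flow` helper symmetrical to the `delete_flow`/`get_flow`/`list_flow` helpers below, and a flow definition exported from the DB-GPT web flow editor:

```python
import json

from dbgpt_client import Client
from dbgpt_client.flow import create_flow  # assumed helper

DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
# Hypothetical file: a Flow Object (see "The Flow Object" below) exported
# from the web flow editor, including its flow_data graph.
with open("my_flow.json") as f:
    flow = json.load(f)
res = await create_flow(client=client, flow=flow)
```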
### Update Flow
```python
PUT /api/v2/serve/awel/flows
```
#### Request body
Request <a href="#the-flow-object">Flow Object</a>
#### Response body
Return <a href="#the-flow-object">Flow Object</a>
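Updating follows the same fetch-modify-update pattern (a sketch, assuming an `update_flow` helper and that `get_flow` returns the flow model directly):

```python
from dbgpt_client import Client
from dbgpt_client.flow import get_flow, update_flow  # update_flow assumed

DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
flow = await get_flow(client=client, flow_id="{your_flow_id}")
flow.label = "My renamed flow"  # change any editable field
res = await update_flow(client=client, flow=flow)
```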
### Delete Flow
```python
DELETE /api/v2/serve/awel/flows
```
<Tabs
defaultValue="curl_update_flow"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_update_flow'},
{label: 'Python', value: 'python_update_flow'},
]
}>
<TabItem value="curl_update_flow">
```shell
DBGPT_API_KEY=dbgpt
FLOW_ID={YOUR_FLOW_ID}
curl -X DELETE "http://localhost:5670/api/v2/serve/awel/flows/$FLOW_ID" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
```
</TabItem>
<TabItem value="python_update_flow">
```python
from dbgpt_client import Client
from dbgpt_client.flow import delete_flow
DBGPT_API_KEY = "dbgpt"
flow_id = "{your_flow_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await delete_flow(client=client, flow_id=flow_id)
```
</TabItem>
</Tabs>
#### Delete Parameters
________
<b>uid</b> <font color="gray"> string </font> <font color="red"> Required </font>
flow id
________
#### Response body
Return <a href="#the-flow-object">Flow Object</a>
### Get Flow
```python
GET /api/v2/serve/awel/flows/{flow_id}
```
<Tabs
defaultValue="curl_get_flow"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_get_flow'},
{label: 'Python', value: 'python_get_flow'},
]
}>
<TabItem value="curl_get_flow">
```shell
DBGPT_API_KEY=dbgpt
FLOW_ID={YOUR_FLOW_ID}
curl -X GET "http://localhost:5670/api/v2/serve/awel/flows/$FLOW_ID" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_get_flow">
```python
from dbgpt_client import Client
from dbgpt_client.flow import get_flow
DBGPT_API_KEY = "dbgpt"
flow_id = "{your_flow_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await get_flow(client=client, flow_id=flow_id)
```
</TabItem>
</Tabs>
#### Query Parameters
________
<b>uid</b> <font color="gray"> string </font> <font color="red"> Required </font>
flow id
________
#### Response body
Return <a href="#the-flow-object">Flow Object</a>
### List Flow
```python
GET /api/v2/serve/awel/flows
```
<Tabs
defaultValue="curl_list_flow"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_list_flow'},
{label: 'Python', value: 'python_list_flow'},
]
}>
<TabItem value="curl_list_flow">
```shell
DBGPT_API_KEY=dbgpt
curl -X GET "http://localhost:5670/api/v2/serve/awel/flows" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_list_flow">
```python
from dbgpt_client import Client
from dbgpt_client.flow import list_flow
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
res = await list_flow(client=client)
```
</TabItem>
</Tabs>
#### Response body
Return <a href="#the-flow-object">Flow Object</a> List
### The Flow Object
________
<b>uid</b> <font color="gray">string</font>
The unique id for the flow.
________
<b>name</b> <font color="gray">string</font>
The name of the flow.
________
<b>description</b> <font color="gray">string</font>
The description of the flow.
________
<b>label</b> <font color="gray">string</font>
The label of the flow.
________
<b>flow_category</b> <font color="gray">string</font>
The category of the flow. Default is FlowCategory.COMMON.
________
<b>flow_data</b> <font color="gray">object</font>
The flow data.
________
<b>state</b> <font color="gray">string</font>
The state of the flow. Default is INITIALIZING.
________
<b>error_message</b> <font color="gray">string</font>
The error message of the flow.
________
<b>source</b> <font color="gray">string</font>
The source of the flow. Default is DBGPT-WEB.
________
<b>source_url</b> <font color="gray">string</font>
The source url of the flow.
________
<b>version</b> <font color="gray">string</font>
The version of the flow. Default is 0.1.0.
________
<b>editable</b> <font color="gray">boolean</font>
Whether the flow is editable. Default is True.
________
<b>user_name</b> <font color="gray">string</font>
The user name of the flow.
________
<b>sys_code</b> <font color="gray">string</font>
The system code of the flow.
________
<b>dag_id</b> <font color="gray">string</font>
The dag id of the flow.
________
<b>gmt_created</b> <font color="gray">string</font>
The created time of the flow.
________
<b>gmt_modified</b> <font color="gray">string</font>
The modified time of the flow.
________
# Introduction
This is the introduction to the DB-GPT API documentation. You can interact with the API through HTTP requests from any language, via our official Python Client bindings.
# Authentication
The DB-GPT API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.
Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.
All API requests should include your API key in an Authorization HTTP header as follows:
```http
Authorization: Bearer DBGPT_API_KEY
```
Example with the DB-GPT API curl command:
```bash
curl "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
```
Example with the DB-GPT Client Python package:
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
```
Set the API key in the `.env` file as follows:
:::info note
API_KEYS - The list of API keys that are allowed to access the API. Multiple keys can be configured, separated by commas.
:::
```bash
API_KEYS=dbgpt
```
## Installation
If you use Python, you should install the official DB-GPT Client package from PyPI:
```bash
pip install "dbgpt[client]>=0.5.2"
```
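The Python examples in this documentation use bare `await`, which only works inside an async context. A minimal runnable check after installation (assuming a local DB-GPT server on the default port):

```python
import asyncio

from dbgpt_client import Client

async def main():
    client = Client(api_key="dbgpt")  # must match API_KEYS in your .env
    res = await client.chat(model="gpt-4o", messages="ping")
    print(res)

asyncio.run(main())
```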
# Knowledge
Get started with the Knowledge API
# Chat Knowledge Space
```python
POST /api/v2/chat/completions
```
### Examples
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
### Chat Knowledge
<Tabs
defaultValue="python"
groupId="chat"
values={[
{label: 'Curl', value: 'curl'},
{label: 'Python', value: 'python'},
]
}>
<TabItem value="curl">
```shell
DBGPT_API_KEY=dbgpt
SPACE_NAME={YOUR_SPACE_NAME}
curl -X POST "http://localhost:5670/api/v2/chat/completions" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"messages\":\"Hello\",\"model\":\"gpt-4o\", \"chat_mode\": \"chat_knowledge\", \"chat_param\": \"$SPACE_NAME\"}"
```
</TabItem>
<TabItem value="python">
```python
from dbgpt_client import Client
DBGPT_API_KEY = "dbgpt"
SPACE_NAME="{YOUR_SPACE_NAME}"
client = Client(api_key=DBGPT_API_KEY)
async for data in client.chat_stream(
messages="Introduce AWEL",
model="gpt-4o",
chat_mode="chat_knowledge",
chat_param=SPACE_NAME
):
print(data)
```
</TabItem>
</Tabs>
#### Chat Completion Response
```json
{
"id": "acb050ab-eb2c-4754-97e4-6f3b94b7dac2",
"object": "chat.completion",
"created": 1710917272,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Agentic Workflow Expression Language (AWEL) is a specialized language designed for developing large model applications with intelligent agent workflows. It offers flexibility and functionality, allowing developers to focus on business logic for LLMs applications without getting bogged down in model and environment details. AWEL uses a layered API design architecture, making it easier to work with. You can find examples and source code to get started with AWEL, and it supports various operators and environments. AWEL is a powerful tool for building native data applications through workflows and agents."
},
"finish_reason": null
}
],
"usage": {
"prompt_tokens": 0,
"total_tokens": 0,
"completion_tokens": 0
}
}
```
#### Chat Completion Stream Response
```commandline
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "AW"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "EL"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ","}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " which"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " stands"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " for"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Ag"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "entic"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Workflow"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Expression"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Language"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ","}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " is"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " a"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " powerful"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " tool"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " designed"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " for"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " developing"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " large"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " model"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " applications"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "."}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " It"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " simpl"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "ifies"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " the"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " process"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " by"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " allowing"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " developers"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " to"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " focus"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " on"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " business"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " logic"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " without"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " getting"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " bog"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "ged"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " down"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " in"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " complex"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " model"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " and"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " environment"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " details"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "."}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " AW"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "EL"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " offers"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " great"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " functionality"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " and"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " flexibility"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " through"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " its"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " layered"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " API"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " design"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " architecture"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "."}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " It"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " provides"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " a"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " set"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " of"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " intelligent"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " agent"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " workflow"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " expression"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " language"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " that"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " enhances"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " efficiency"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " in"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " application"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " development"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "."}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " If"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " you"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " want"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " to"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " learn"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " more"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " about"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " AW"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "EL"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ","}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " you"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " can"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " check"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " out"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " the"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " built"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "-in"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " examples"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " and"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " resources"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " available"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " on"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " platforms"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " like"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Github"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ","}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Docker"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "hub"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ","}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " and"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " more"}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "."}}]}
data: {"id": "chatcmpl-86f60a0c-e686-11ee-9322-acde48001122", "model": "gpt-4o", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\n\n<references title=\"References\" references=\"[{&quot;name&quot;: &quot;AWEL_URL&quot;, &quot;chunks&quot;: [{&quot;id&quot;: 2526, &quot;content&quot;: &quot;Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model applicationdevelopment. It provides great functionality and flexibility. Through the AWEL API, you can focus on the development of business logic for LLMs applicationswithout paying attention to cumbersome model and environment details.AWEL adopts a layered API design. AWEL's layered API design architecture is shown in the figure below.AWEL Design&quot;, &quot;meta_info&quot;: &quot;{'source': 'https://docs.dbgpt.site/docs/awel/', 'title': 'AWEL(Agentic Workflow Expression Language) | DB-GPT', 'description': 'Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model application', 'language': 'en-US'}&quot;, &quot;recall_score&quot;: 0.6579902643967029}, {&quot;id&quot;: 2531, &quot;content&quot;: &quot;ExamplesThe preliminary version of AWEL has alse been released, and we have provided some built-in usage examples.OperatorsExample of API-RAGYou can find source code from examples/awel/simple_rag_example.py&quot;, &quot;meta_info&quot;: &quot;{'source': 'https://docs.dbgpt.site/docs/awel/', 'title': 'AWEL(Agentic Workflow Expression Language) | DB-GPT', 'description': 'Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model application', 'language': 'en-US'}&quot;, &quot;recall_score&quot;: 0.5997033286385491}, {&quot;id&quot;: 2538, &quot;content&quot;: &quot;Stand-alone environmentRay environmentPreviousWhy use AWEL?NextReleased V0.5.0 | Develop native data applications through workflows and agentsAWEL DesignExamplesOperatorsExample of API-RAGAgentFream ExampleDSL ExampleCurrently supported operatorsExecutable environmentCommunityDiscordDockerhubGithubGithubHuggingFaceMoreHacker NewsTwitterCopyright © 2024 DB-GPT&quot;, &quot;meta_info&quot;: &quot;{'source': 'https://docs.dbgpt.site/docs/awel/', 'title': 'AWEL(Agentic Workflow Expression Language) | DB-GPT', 'description': 'Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model application', 'language': 'en-US'}&quot;, &quot;recall_score&quot;: 0.5980204530753225}]}]\" />"}}]}
data: [DONE]
```
### Create Knowledge Space
```python
POST /api/v2/serve/knowledge/spaces
```
<Tabs
defaultValue="curl_knowledge"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_knowledge'},
{label: 'Python', value: 'python_knowledge'},
]
}>
<TabItem value="curl_knowledge">
```shell
DBGPT_API_KEY="dbgpt"
curl --location --request POST 'http://localhost:5670/api/v2/serve/knowledge/spaces' \
--header "Authorization: Bearer $DBGPT_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{"desc": "for client space desc", "name": "test_space_2", "owner": "dbgpt", "vector_type": "Chroma"}'
```
</TabItem>
<TabItem value="python_knowledge">
```python
from dbgpt_client import Client
from dbgpt_client.knowledge import create_space
from dbgpt_client.schema import SpaceModel
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
res = await create_space(client, SpaceModel(
name="test_space",
vector_type="Chroma",
desc="for client space",
owner="dbgpt"
))
```
</TabItem>
</Tabs>
#### Request body
________
<b>name</b> <font color="gray"> string </font> <font color="red"> Required </font>
knowledge space name
________
<b>vector_type</b> <font color="gray"> string </font> <font color="red"> Required </font>
The vector db type: `Chroma` or `Milvus`; default is `Chroma`.
________
<b>desc</b> <font color="gray"> string </font> <font color="red"> Optional </font>
description of the knowledge space
________
<b>owner</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The owner of the knowledge space
________
<b>context</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The context arguments of the knowledge space
________
#### Response body
Return <a href="#the-space-object">Space Object</a>
### Update Knowledge Space
```python
PUT /api/v2/serve/knowledge/spaces
```
<Tabs
defaultValue="curl_update_knowledge"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_update_knowledge'},
{label: 'Python', value: 'python_update_knowledge'},
]
}>
<TabItem value="curl_update_knowledge">
```shell
DBGPT_API_KEY="dbgpt"
curl --location --request PUT 'http://localhost:5670/api/v2/serve/knowledge/spaces' \
--header "Authorization: Bearer $DBGPT_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{"desc": "for client space desc v2", "id": "49", "name": "test_space_2", "owner": "dbgpt", "vector_type": "Chroma"}'
```
</TabItem>
<TabItem value="python_update_knowledge">
```python
from dbgpt_client import Client
from dbgpt_client.knowledge import update_space
from dbgpt_client.schema import SpaceModel
DBGPT_API_KEY = "dbgpt"
client = Client(api_key=DBGPT_API_KEY)
res = await update_space(client, SpaceModel(
name="test_space",
vector_type="Chroma",
desc="for client space update",
owner="dbgpt"
))
```
</TabItem>
</Tabs>
#### Request body
________
<b>id</b> <font color="gray"> string </font> <font color="red"> Required </font>
knowledge space id
________
<b>name</b> <font color="gray"> string </font> <font color="red"> Required </font>
knowledge space name
________
<b>vector_type</b> <font color="gray"> string </font> <font color="red"> Optional </font>
vector db type: `Chroma` or `Milvus`; default is `Chroma`
________
<b>desc</b> <font color="gray"> string </font> <font color="red"> Optional </font>
description of the knowledge space
________
<b>owner</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The owner of the knowledge space
________
<b>context</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The context arguments of the knowledge space
________
#### Response body
Return <a href="#the-space-object">Space Object</a>
### Delete Knowledge Space
```python
DELETE /api/v2/serve/knowledge/spaces/{space_id}
```
<Tabs
defaultValue="curl_update_knowledge"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_delete_knowledge'},
{label: 'Python', value: 'python_delete_knowledge'},
]
}>
<TabItem value="curl_update_knowledge">
```shell
DBGPT_API_KEY=dbgpt
SPACE_ID={YOUR_SPACE_ID}
curl -X DELETE "http://localhost:5670/api/v2/serve/knowledge/spaces/$SPACE_ID" \
-H "Authorization: Bearer $DBGPT_API_KEY" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
```
</TabItem>
<TabItem value="python_update_knowledge">
```python
from dbgpt_client import Client
from dbgpt_client.knowledge import delete_space
DBGPT_API_KEY = "dbgpt"
space_id = "{your_space_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await delete_space(client=client, space_id=space_id)
```
</TabItem>
</Tabs>
#### Query Parameters
________
<b>id</b> <font color="gray"> string </font> <font color="red"> Required </font>
knowledge space id
________
#### Response body
Return <a href="#the-space-object">Space Object</a>
### Get Knowledge Space
```python
GET /api/v2/serve/knowledge/spaces/{space_id}
```
<Tabs
defaultValue="curl_get_knowledge"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_get_knowledge'},
{label: 'Python', value: 'python_get_knowledge'},
]
}>
<TabItem value="curl_get_knowledge">
```shell
DBGPT_API_KEY=dbgpt
SPACE_ID={YOUR_SPACE_ID}
curl -X GET "http://localhost:5670/api/v2/serve/knowledge/spaces/$SPACE_ID" -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_get_knowledge">
```python
from dbgpt_client import Client
from dbgpt_client.knowledge import get_space
DBGPT_API_KEY = "dbgpt"
space_id = "{your_space_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await get_space(client=client, space_id=space_id)
```
</TabItem>
</Tabs>
#### Query Parameters
________
<b>id</b> <font color="gray"> string </font> <font color="red"> Required </font>
knowledge space id
________
#### Response body
Return <a href="#the-space-object">Space Object</a>
### List Knowledge Space
```python
GET /api/v2/serve/knowledge/spaces
```
<Tabs
defaultValue="curl_list_knowledge"
groupId="chat1"
values={[
{label: 'Curl', value: 'curl_list_knowledge'},
{label: 'Python', value: 'python_list_knowledge'},
]
}>
<TabItem value="curl_list_knowledge">
```shell
DBGPT_API_KEY=dbgpt
curl -X GET 'http://localhost:5670/api/v2/serve/knowledge/spaces' -H "Authorization: Bearer $DBGPT_API_KEY"
```
</TabItem>
<TabItem value="python_list_knowledge">
```python
from dbgpt_client import Client
from dbgpt_client.knowledge import list_space
DBGPT_API_KEY = "dbgpt"
space_id = "{your_space_id}"
client = Client(api_key=DBGPT_API_KEY)
res = await list_space(client=client)
```
</TabItem>
</Tabs>
#### Response body
Return <a href="#the-space-object">Space Object</a> List
### The Space Object
________
<b>id</b> <font color="gray"> string </font>
space id
________
<b>name</b> <font color="gray"> string </font>
knowledge space name
________
<b>vector_type</b> <font color="gray"> string </font>
vector db type: `Chroma` or `Milvus`; default is `Chroma`
________
<b>desc</b> <font color="gray"> string </font> <font color="red"> Optional </font>
description of the knowledge space
________
<b>owner</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The owner of the knowledge space
________
<b>context</b> <font color="gray"> string </font> <font color="red"> Optional </font>
The context arguments of the knowledge space
________
# API Interface Usage
The DB-GPT project currently also provides various APIs. They fall into two main categories:
1. Model API
2. Application service layer API

The Model API means that DB-GPT adapts various models and uniformly wraps them as model services compatible with the OpenAI SDK. The service layer API refers to the API exposed by the DB-GPT service layer. The following is a brief introduction to both.
## Model API
In the DB-GPT project, we defined a service-oriented multi-model management framework (SMMF). Through the capabilities of SMMF, we can deploy multiple models and expose them as services. To allow clients to switch between models seamlessly, we uniformly support the OpenAI SDK standard.
- Detail usage tutorial: [OpenAI SDK calls local multi-model ](../../installation/advanced_usage/OpenAI_SDK_call.md)
**Example:** The following is an example of calling the local model service through the OpenAI SDK (the legacy `openai<1.0` interface):
```python
import openai
openai.api_key = "EMPTY"
openai.api_base = "http://127.0.0.1:8100/api/v1"
model = "vicuna-13b-v1.5"
completion = openai.ChatCompletion.create(
model=model,
messages=[{"role": "user", "content": "hello"}]
)
# print the completion
print(completion.choices[0].message.content)
```
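If you are using `openai>=1.0`, the module-level configuration above no longer applies; the equivalent call goes through a client object instead. A minimal sketch against the same local endpoint:

```python
from openai import OpenAI

# Same local SMMF endpoint as above, via the openai>=1.0 client interface
client = OpenAI(api_key="EMPTY", base_url="http://127.0.0.1:8100/api/v1")
completion = client.chat.completions.create(
    model="vicuna-13b-v1.5",
    messages=[{"role": "user", "content": "hello"}],
)
print(completion.choices[0].message.content)
```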
## Application service layer API
The service layer API refers to the API exposed on port 5670 after starting the webserver; it mainly focuses on the application layer. By category, it can be divided into the following parts:
- Chat API
- Editor API
- LLM Manage API
- Agent API
- AWEL API
- Model API
:::info
Note: After starting the webserver, open http://127.0.0.1:5670/docs to view the details.
For the service layer API, our early strategy has been to follow the principle of minimal availability and openness. APIs that are stably exposed to the outside world carry version information, such as:
- /api/v1/
- /api/v2/
Because the whole field is developing rapidly, different API versions are not guaranteed to be fully compatible. For APIs that become incompatible in subsequent versions, we will provide notes in the documentation.
:::
## API Description
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="chatapi"
values={[
{label: 'Chat API', value: 'chatapi'},
{label: 'Editor API', value: 'editorapi'},
{label: 'Model API', value: 'modelapi'},
{label: 'LLM Manage API', value: 'llmanageapi'},
{label: 'Agent API', value: 'agentapi'},
{label: 'AWEL API', value: 'awelapi'},
]}>
<TabItem value="chatapi">
Chat API Lists
```python
api/v1/chat/db/list
api/v1/chat/db/add
api/v1/chat/db/edit
api/v1/chat/db/delete
api/v1/chat/db/test/connect
api/v1/chat/db/summary
api/v1/chat/db/support/type
api/v1/chat/dialogue/list
api/v1/chat/dialogue/scenes
api/v1/chat/dialogue/new
api/v1/chat/mode/params/list
api/v1/chat/mode/params/file/load
api/v1/chat/dialogue/delete
api/v1/chat/dialogue/messages
api/v1/chat/prepare
api/v1/chat/completions
```
</TabItem>
<TabItem value="editorapi">
Editor API Lists
```python
api/v1/editor/db/tables
api/v1/editor/sql/rounds
api/v1/editor/sql
api/v1/editor/sql/run
api/v1/sql/editor/submit
api/v1/editor/chart/list
api/v1/editor/chart/info
api/v1/editor/chart/run
api/v1/chart/editor/submit
```
</TabItem>
<TabItem value="modelapi">
Model API Lists
```python
api/v1/model/types
api/v1/model/supports
```
</TabItem>
<TabItem value="llmanageapi">
LLM Manage API Lists
```python
api/v1/worker/model/params
api/v1/worker/model/list
api/v1/worker/model/stop
api/v1/worker/model/start
api/worker/generate_stream
api/worker/generate
api/worker/embeddings
api/worker/apply
api/worker/parameter/descriptions
api/worker/models/supports
api/worker/models/startup
api/worker/models/shutdown
api/controller/models
api/controller/heartbeat
```
</TabItem>
<TabItem value="agentapi">
Agent API Lists
```python
api/v1/agent/hub/update
api/v1/agent/query
api/v1/agent/my
api/v1/agent/install
api/v1/agent/uninstall
api/v1/personal/agent/upload
```
</TabItem>
<TabItem value="awelapi">
AWEL API Lists
```python
api/v1/awel/trigger/examples/simple_rag
api/v1/awel/trigger/examples/simple_chat
api/v1/awel/trigger/examples/hello
```
</TabItem>
</Tabs>
:::info note
⚠️ Knowledge and Prompt API
Currently, because the Knowledge and Prompt interfaces change frequently, the relevant APIs are still in the testing stage and will be opened up gradually.
:::
More detailed interface parameters can be viewed at `http://127.0.0.1:5670/docs`
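The interactive docs page is generated from a machine-readable spec, so you can also enumerate the exposed endpoints programmatically. A small sketch with `requests`, assuming the standard FastAPI `/openapi.json` route behind the docs page:

```python
import requests

# Fetch the OpenAPI spec that backs the interactive docs page
spec = requests.get("http://127.0.0.1:5670/openapi.json").json()
for path in sorted(spec["paths"]):
    print(path)
```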
# Command Line Usage
In addition to interface usage, this project also provides a rich set of command line tools. They cover model deployment, starting and stopping services, knowledge base operations (viewing, deleting, and loading documents), debugging, problem locating, and more.
The following is a systematic introduction to these command line tools.
## Preparation
Before using the dbgpt command, you first need to complete the installation of the project. For detailed installation tutorial, please refer to: [Source code installation](../../installation/sourcecode.md)
## Usage
The command line provides a variety of capabilities, which we can view through the following command.
As shown below, the `dbgpt` command list includes `install`, `knowledge`, `model`, `start`, `stop` and `trace`:
```python
~ dbgpt --help
Already connect 'dbgpt'
Usage: dbgpt [OPTIONS] COMMAND [ARGS]...
Options:
--log-level TEXT Log level
--version Show the version and exit.
--help Show this message and exit.
Commands:
install Install dependencies, plugins, etc.
knowledge Knowledge command line tool
model Clients that manage model serving
start Start specific server.
stop Stop specific server.
trace Analyze and visualize trace spans.
```
## Installation
The `install` command provides installation of various dependency packages and plugins.
:::info
The agents module is currently being refactored, and the related functions will be available in the next version.
:::
## Knowledge Command
The `dbgpt knowledge` command mainly provides operations related to the knowledge base. The current main commands are `delete`, `list`, and `load`
```python
~ dbgpt knowledge --help
Already connect 'dbgpt'
Usage: dbgpt knowledge [OPTIONS] COMMAND [ARGS]...
Knowledge command line tool
Options:
--address TEXT Address of the Api server(If not set, try to read from
environment variable: API_ADDRESS). [default:
http://127.0.0.1:5670]
--help Show this message and exit.
Commands:
delete Delete your knowledge space or document in space
list List knowledge space
load Load your local documents to DB-GPT
```
#### Load command
`dbgpt knowledge load` loads local documents into the knowledge base. You can load documents in batches through this command.
```python
~ dbgpt knowledge load --help
Already connect 'dbgpt'
Usage: dbgpt knowledge load [OPTIONS]
Load your local documents to DB-GPT
Options:
--space_name TEXT Your knowledge space name [default: default]
--vector_store_type TEXT Vector store type. [default: Chroma]
--local_doc_path TEXT Your document directory or document file path.
[default: /Users/magic/workspace/github/eosphoros-
ai/DB-GPT/pilot/datasets]
--skip_wrong_doc Skip wrong document.
--overwrite Overwrite existing document(they has same name).
--max_workers INTEGER The maximum number of threads that can be used to
upload document.
--pre_separator TEXT Preseparator, this separator is used for pre-
splitting before the document is actually split by
the text splitter. Preseparator are not included
in the vectorized text.
--separator TEXT This is the document separator. Currently, only
one separator is supported.
--chunk_size INTEGER Maximum size of chunks to split.
--chunk_overlap INTEGER Overlap in characters between chunks.
--help Show this message and exit.
```
<p align="left">
<img src={'/img/cli/kbqa.gif'} width="720px"/>
</p>
#### List command
The `dbgpt knowledge list` command mainly displays information related to the knowledge base, such as knowledge spaces, document content, and chunk content.
```python
~ dbgpt knowledge list --help
Already connect 'dbgpt'
Usage: dbgpt knowledge list [OPTIONS]
List knowledge space
Options:
--space_name TEXT Your knowledge space name. If None, list all
spaces
--doc_id INTEGER Your document id in knowledge space. If Not
None, list all chunks in current document
--page INTEGER The page for every query [default: 1]
--page_size INTEGER The page size for every query [default: 20]
--show_content Query the document content of chunks
--output [text|html|csv|latex|json]
The output format
--help Show this message and exit.
```
#### Delete command
The `delete` command supports deleting knowledge spaces and documents. You can view the related command details through `dbgpt knowledge delete --help`.
```python
~ dbgpt knowledge delete --help
Already connect 'dbgpt'
Usage: dbgpt knowledge delete [OPTIONS]
Delete your knowledge space or document in space
Options:
--space_name TEXT Your knowledge space name [default: default]
--doc_name TEXT The document name you want to delete. If doc_name is
None, this command will delete the whole space.
-y Confirm your choice
--help Show this message and exit.
```
<p align="left">
<img src={'/img/cli/kd_new.gif'} width="720px"/>
</p>
## Model command
Model related commands are mainly used when deploying multiple models. For model cluster deployment, you can view the [cluster deployment mode](../../installation/model_service/cluster.md).
```python
~ dbgpt model --help
Already connect 'dbgpt'
Usage: dbgpt model [OPTIONS] COMMAND [ARGS]...
Clients that manage model serving
Options:
--address TEXT Address of the Model Controller to connect to. Just support
light deploy model, If the environment variable
CONTROLLER_ADDRESS is configured, read from the environment
variable
--help Show this message and exit.
Commands:
chat Interact with your bot from the command line
list List model instances
restart Restart model instances
start Start model instances
stop Stop model instances
```
#### Chat command
You can use the `dbgpt model chat` command to communicate with the model in the command line terminal
```python
~ dbgpt model chat --help
Already connect 'dbgpt'
Usage: dbgpt model chat [OPTIONS]
Interact with your bot from the command line
Options:
--model_name TEXT The name of model [required]
--system TEXT System prompt
--help Show this message and exit.
```
#### List Command
```python
~ dbgpt model list --help
Already connect 'dbgpt'
Usage: dbgpt model list [OPTIONS]
List model instances
Options:
--model_name TEXT The name of model
--model_type TEXT The type of model
--help Show this message and exit.
```
#### Restart command
The model can be restarted through the `dbgpt model restart` command
```python
~ dbgpt model restart --help
Already connect 'dbgpt'
Usage: dbgpt model restart [OPTIONS]
Restart model instances
Options:
--model_name TEXT The name of model [required]
--model_type TEXT The type of model
--help Show this message and exit.
```
#### Start command
The model can be started through the `dbgpt model start` command.
```python
~ dbgpt model start --help
Already connect 'dbgpt'
Usage: dbgpt model start [OPTIONS]
Start model instances
Options:
--model_name TEXT The model name to deploy [required]
--model_path TEXT The model path to deploy
--host TEXT The remote host to deploy model [default:
30.183.153.197]
--port INTEGER The remote port to deploy model [default: 5000]
--worker_type TEXT Worker type [default: llm]
--device TEXT Device to run model. If None, the device is
automatically determined
--model_type TEXT Model type: huggingface, llama.cpp, proxy and
vllm [default: huggingface]
--prompt_template TEXT Prompt template. If None, the prompt template is
automatically determined from model path,
supported template: zero_shot,vicuna_v1.1,llama-
2,codellama,alpaca,baichuan-chat,internlm-chat
--max_context_size INTEGER Maximum context size [default: 4096]
--num_gpus INTEGER The number of gpus you expect to use, if it is
empty, use all of them as much as possible
--max_gpu_memory TEXT The maximum memory limit of each GPU, only valid
in multi-GPU configuration
--cpu_offloading CPU offloading
--load_8bit 8-bit quantization
--load_4bit 4-bit quantization
--quant_type TEXT Quantization datatypes, `fp4` (four bit float)
and `nf4` (normal four bit float), only valid
when load_4bit=True [default: nf4]
--use_double_quant Nested quantization, only valid when
load_4bit=True [default: True]
--compute_dtype TEXT Model compute type
--trust_remote_code Trust remote code [default: True]
--verbose Show verbose output.
--help Show this message and exit.
```
#### Stop command
The `dbgpt model stop` command is mainly responsible for stopping the model.
```python
~ dbgpt model stop --help
Already connect 'dbgpt'
Usage: dbgpt model stop [OPTIONS]
Stop model instances
Options:
--model_name TEXT The name of model [required]
--model_type TEXT The type of model
--host TEXT The remote host to stop model [required]
--port INTEGER The remote port to stop model [required]
--help Show this message and exit.
```
<p align="left">
<img src={'/img/cli/cli_m.gif'} width="720px"/>
</p>
## Start/Stop Command
The `dbgpt start` and `dbgpt stop` commands are a set of interfaces related to service registration and discovery. They cover `apiserver`, `controller`, `worker` and `webserver` respectively.
```python
~ dbgpt start --help
Already connect 'dbgpt'
Usage: dbgpt start [OPTIONS] COMMAND [ARGS]...
Start specific server.
Options:
--help Show this message and exit.
Commands:
apiserver Start apiserver
controller Start model controller
webserver Start webserver(dbgpt_server.py)
worker Start model worker
```
#### Apiserver
You can start the model's API service through `dbgpt start apiserver`. The default startup port is 8100.
```python
~ dbgpt start apiserver --help
Already connect 'dbgpt'
Usage: dbgpt start apiserver [OPTIONS]
Start apiserver
Options:
--host TEXT Model API server deploy host [default: 0.0.0.0]
--port INTEGER Model API server deploy port [default: 8100]
--daemon Run Model API server in background
--controller_addr TEXT The Model controller address to connect
[default: http://127.0.0.1:8000]
--api_keys TEXT Optional list of comma separated API keys
--log_level TEXT Logging level
--log_file TEXT The filename to store log [default:
dbgpt_model_apiserver.log]
--tracer_file TEXT The filename to store tracer span records
[default: dbgpt_model_apiserver_tracer.jsonl]
--tracer_storage_cls TEXT The storage class to storage tracer span records
--help Show this message and exit.
```
Start the apiserver:
```python
~ dbgpt start apiserver
Already connect 'dbgpt'
2023-12-07 14:35:21 B-4TMH9N3X-2120.local pilot.component[95201] INFO Register component with name dbgpt_model_registry and instance: <pilot.model.cluster.controller.controller.ModelRegistryClient object at 0x28f4e0c70>
2023-12-07 14:35:21 B-4TMH9N3X-2120.local pilot.component[95201] INFO Register component with name dbgpt_worker_manager_factory and instance: <pilot.model.cluster.worker.manager._DefaultWorkerManagerFactory object at 0x28f4e2110>
2023-12-07 14:35:21 B-4TMH9N3X-2120.local pilot.component[95201] INFO Register component with name dbgpt_model_api_server and instance: <pilot.model.cluster.apiserver.api.APIServer object at 0x28f4e2170>
INFO: Started server process [95201]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8100 (Press CTRL+C to quit)
INFO: 127.0.0.1:56638 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:56665 - "GET /openapi.json HTTP/1.1" 200 OK
^CINFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [95201]
```
#### Controller command
The management and control service can be started through `dbgpt start controller`. The default startup port is 8000.
```python
~ dbgpt start --help
Already connect 'dbgpt'
Usage: dbgpt start [OPTIONS] COMMAND [ARGS]...
Start specific server.
Options:
--help Show this message and exit.
Commands:
apiserver Start apiserver
controller Start model controller
webserver Start webserver(dbgpt_server.py)
worker Start model worker
(dbgpt_env) magic@B-4TMH9N3X-2120 ~ % dbgpt start controller
Already connect 'dbgpt'
INFO: Started server process [96797]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
#### Webserver command
The front-end service can be started through `dbgpt start webserver`. The default port is 5670, and it can be accessed at `http://127.0.0.1:5670`.
```python
~ dbgpt start webserver --help
Already connect 'dbgpt'
Usage: dbgpt start webserver [OPTIONS]
Start webserver(dbgpt_server.py)
Options:
--host TEXT Webserver deploy host [default: 0.0.0.0]
--port INTEGER Webserver deploy port [default: 5000]
--daemon Run Webserver in background
--controller_addr TEXT The Model controller address to connect. If None,
read model controller address from environment
key `MODEL_SERVER`.
--model_name TEXT The default model name to use. If None, read
model name from environment key `LLM_MODEL`.
--share Whether to create a publicly shareable link for
the interface. Creates an SSH tunnel to make your
UI accessible from anywhere.
--remote_embedding Whether to enable remote embedding models. If it
is True, you need to start a embedding model
through `dbgpt start worker --worker_type
text2vec --model_name xxx --model_path xxx`
--log_level TEXT Logging level
--light enable light mode
--log_file TEXT The filename to store log [default:
dbgpt_webserver.log]
--tracer_file TEXT The filename to store tracer span records
[default: dbgpt_webserver_tracer.jsonl]
--tracer_storage_cls TEXT The storage class to storage tracer span records
--disable_alembic_upgrade Whether to disable alembic to initialize and
upgrade database metadata
--help Show this message and exit.
```
<p align="left">
<img src={'/img/cli/start_cli_new.gif'} width="720px"/>
</p>
#### Worker command
`dbgpt start worker` is mainly used to start a model worker. For detailed usage, see [cluster deployment](../../installation/model_service/cluster.md).
## Debugging
The dbgpt project provides a wealth of debug commands. For detailed usage, see [debugging](./debugging.md).
# Debugging
DB-GPT provides a series of tools to help developers troubleshoot and solve some problems they may encounter.
## View Trace Logs With Command
DB-GPT writes some key system runtime information to trace logs. By default, they are located in `logs/dbgpt*.jsonl`.
DB-GPT also provides a command line tool `dbgpt trace` to help analyze these trace logs. You can check the specific usage through the following command:
```python
dbgpt trace --help
```
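The trace logs are plain JSON Lines files, so besides `dbgpt trace` you can inspect them with a few lines of Python. A minimal sketch, assuming each line is one span record with fields such as `trace_id` and `operation_name` (the fields shown in the `dbgpt trace list` output below):

```python
import glob
import json

# Scan every span record in the default trace log location
for path in glob.glob("logs/dbgpt*.jsonl"):
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            span = json.loads(line)
            # Field names assumed from the `dbgpt trace list` output
            print(span.get("trace_id"), span.get("operation_name"))
```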
## View chat details
You can view chat details through the `dbgpt trace chat` command. By default, the latest conversation information is displayed.
## View service runtime information
```python
dbgpt trace chat --hide_conv
```
The output is as follows:
```python
+------------------------+--------------------------+-----------------------------+------------------------------------+
| Config Key (Webserver) | Config Value (Webserver) | Config Key (EmbeddingModel) | Config Value (EmbeddingModel) |
+------------------------+--------------------------+-----------------------------+------------------------------------+
| host | 0.0.0.0 | model_name | text2vec |
| port | 5000 | model_path | /app/models/text2vec-large-chinese |
| daemon | False | device | cuda |
| share | False | normalize_embeddings | None |
| remote_embedding | False | | |
| log_level | None | | |
| light | False | | |
+------------------------+--------------------------+-----------------------------+------------------------------------+
+--------------------------+-----------------------------+----------------------------+------------------------------+
| Config Key (ModelWorker) | Config Value (ModelWorker) | Config Key (WorkerManager) | Config Value (WorkerManager) |
+--------------------------+-----------------------------+----------------------------+------------------------------+
| model_name | vicuna-13b-v1.5 | model_name | vicuna-13b-v1.5 |
| model_path | /app/models/vicuna-13b-v1.5 | model_path | /app/models/vicuna-13b-v1.5 |
| device | cuda | worker_type | None |
| model_type | huggingface | worker_class | None |
| prompt_template | None | model_type | huggingface |
| max_context_size | 4096 | host | 0.0.0.0 |
| num_gpus | None | port | 5000 |
| max_gpu_memory | None | daemon | False |
| cpu_offloading | False | limit_model_concurrency | 5 |
| load_8bit | False | standalone | True |
| load_4bit | False | register | True |
| quant_type | nf4 | worker_register_host | None |
| use_double_quant | True | controller_addr | http://127.0.0.1:5000 |
| compute_dtype | None | send_heartbeat | True |
| trust_remote_code | True | heartbeat_interval | 20 |
| verbose | False | log_level | None |
+--------------------------+-----------------------------+----------------------------+------------------------------+
```
## View latest conversation information
```python
dbgpt trace chat --hide_run_params
```
The output is as follows:
```python
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key | Value Value |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | 5d1900c3-5aad-4159-9946-fbb600666530 |
| span_id | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 |
| conv_uid | 5e456272-68ac-11ee-9fba-0242ac150003 |
| user_input | Who are you? |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Who are you? ASSISTANT: |
| Model output | You can call me Vicuna, and I was trained by Large Model Systems Organization (LMSYS) researchers as a language model. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
```
## View chat details and call chain
```python
dbgpt trace chat --hide_run_params --tree
```
The output is as follows:
```python
Invoke Trace Tree:
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:06:43.180, End: None)
Operation: get_chat_instance (Start: 2023-10-12 03:06:43.258, End: None)
Operation: get_chat_instance (Start: 2023-10-12 03:06:43.258, End: 2023-10-12 03:06:43.424)
Operation: stream_generator (Start: 2023-10-12 03:06:43.425, End: None)
Operation: BaseChat.stream_call (Start: 2023-10-12 03:06:43.426, End: None)
Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:06:43.426, End: None)
Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:06:43.428, End: None)
Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:06:43.430, End: None)
Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:06:43.430, End: 2023-10-12 03:06:48.518)
Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:06:43.428, End: 2023-10-12 03:06:48.518)
Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:06:43.426, End: 2023-10-12 03:06:48.518)
Operation: BaseChat.stream_call (Start: 2023-10-12 03:06:43.426, End: 2023-10-12 03:06:48.519)
Operation: stream_generator (Start: 2023-10-12 03:06:43.425, End: 2023-10-12 03:06:48.519)
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:06:43.180, End: 2023-10-12 03:06:43.257)
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key | Value Value |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | 5d1900c3-5aad-4159-9946-fbb600666530 |
| span_id | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 |
| conv_uid | 5e456272-68ac-11ee-9fba-0242ac150003 |
| user_input | Who are you? |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Who are you? ASSISTANT: |
| Model output | You can call me Vicuna, and I was trained by Large Model Systems Organization (LMSYS) researchers as a language model. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
```
## View chat details based on trace_id
```python
dbgpt trace chat --hide_run_params --trace_id ec30d733-7b35-4d61-b02e-2832fd2e29ff
```
The output is as follows:
```python
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key | Value Value |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | ec30d733-7b35-4d61-b02e-2832fd2e29ff |
| span_id | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd |
| conv_uid | 87a722de-68ae-11ee-9fba-0242ac150003 |
| user_input | Hello |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Hello ASSISTANT: |
| Model output | Hello! How can I help you today? Is there something specific you want to know or talk about? I'm here to answer any ques |
| | tions you might have, to the best of my ability. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
```
## More chat command usage
```python
# command
dbgpt trace chat --help
# output
Usage: dbgpt trace chat [OPTIONS] [FILES]...
Show conversation details
Options:
--trace_id TEXT Specify the trace ID to analyze. If None,
show latest conversation details
--tree Display trace spans as a tree
--hide_conv Hide your conversation details
--hide_run_params Hide run params
--output [text|html|csv|latex|json]
The output format
--help Show this message and exit.
```
## View the call tree based on trace_id
```python
dbgpt trace chat --hide_conv --hide_run_params --tree --trace_id ec30d733-7b35-4d61-b02e-2832fd2e29ff
```
The output is as follows:
```python
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:22:10.592, End: None)
Operation: get_chat_instance (Start: 2023-10-12 03:22:10.594, End: None)
Operation: get_chat_instance (Start: 2023-10-12 03:22:10.594, End: 2023-10-12 03:22:10.658)
Operation: stream_generator (Start: 2023-10-12 03:22:10.659, End: None)
Operation: BaseChat.stream_call (Start: 2023-10-12 03:22:10.659, End: None)
Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:22:10.660, End: None)
Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:22:10.675, End: None)
Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:22:10.676, End: None)
Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:22:10.676, End: 2023-10-12 03:22:16.130)
Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:22:10.675, End: 2023-10-12 03:22:16.130)
Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:22:10.660, End: 2023-10-12 03:22:16.130)
Operation: BaseChat.stream_call (Start: 2023-10-12 03:22:10.659, End: 2023-10-12 03:22:16.130)
Operation: stream_generator (Start: 2023-10-12 03:22:10.659, End: 2023-10-12 03:22:16.130)
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:22:10.592, End: 2023-10-12 03:22:10.673)
```
## List trace information
List all trace information:
```python
dbgpt trace list
```
The output is as follows:
```python
+--------------------------------------+---------------------------------------------------------------------------+-----------------------------------+------------------+
| Trace ID | Span ID | Operation Name | Conversation UID |
+--------------------------------------+---------------------------------------------------------------------------+-----------------------------------+------------------+
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:f650065f-f761-4790-99f7-8109c15f756a | run_webserver | None |
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:b2ff279e-0557-4b2d-8959-85e25dcfe94e | EmbeddingLoader.load | None |
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:b2ff279e-0557-4b2d-8959-85e25dcfe94e | EmbeddingLoader.load | None |
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:3e8b1b9d-5ef2-4382-af62-6b2b21cc04fd | WorkerManager._start_local_worker | None |
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:3e8b1b9d-5ef2-4382-af62-6b2b21cc04fd | WorkerManager._start_local_worker | None |
| eaf4830f-976f-45a4-9a50-244f3ab6f9e1 | eaf4830f-976f-45a4-9a50-244f3ab6f9e1:4c280ec9-0fd6-4ee8-b79f-1afcab0f9901 | DefaultModelWorker.start | None |
+--------------------------------------+---------------------------------------------------------------------------+-----------------------------------+------------------+
```
## View trace information based on trace type
```python
dbgpt trace list --span_type chat
+--------------------------------------+---------------------------------------------------------------------------+-------------------+--------------------------------------+
| Trace ID | Span ID | Operation Name | Conversation UID |
+--------------------------------------+---------------------------------------------------------------------------+-------------------+--------------------------------------+
| 5d1900c3-5aad-4159-9946-fbb600666530 | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 | get_chat_instance | 5e456272-68ac-11ee-9fba-0242ac150003 |
| 5d1900c3-5aad-4159-9946-fbb600666530 | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 | get_chat_instance | 5e456272-68ac-11ee-9fba-0242ac150003 |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd | get_chat_instance | 87a722de-68ae-11ee-9fba-0242ac150003 |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd | get_chat_instance | 87a722de-68ae-11ee-9fba-0242ac150003 |
+--------------------------------------+---------------------------------------------------------------------------+-------------------+--------------------------------------+
```
## Search trace information
```python
dbgpt trace list --search Hello
+--------------------------------------+---------------------------------------------------------------------------+----------------------------------------------+--------------------------------------+
| Trace ID | Span ID | Operation Name | Conversation UID |
+--------------------------------------+---------------------------------------------------------------------------+----------------------------------------------+--------------------------------------+
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd | get_chat_instance | 87a722de-68ae-11ee-9fba-0242ac150003 |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd | get_chat_instance | 87a722de-68ae-11ee-9fba-0242ac150003 |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:03de6c87-34d6-426a-85e8-7d46d475411e | BaseChat.stream_call | None |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:03de6c87-34d6-426a-85e8-7d46d475411e | BaseChat.stream_call | None |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:19593596-b4c7-4d15-a3c1-0924d86098dd | DefaultModelWorker_call.generate_stream_func | None |
| ec30d733-7b35-4d61-b02e-2832fd2e29ff | ec30d733-7b35-4d61-b02e-2832fd2e29ff:19593596-b4c7-4d15-a3c1-0924d86098dd | DefaultModelWorker_call.generate_stream_func | None |
+--------------------------------------+---------------------------------------------------------------------------+----------------------------------------------+--------------------------------------+
```
## More list related command usage
```python
dbgpt trace list --help
Usage: dbgpt trace list [OPTIONS] [FILES]...
List your trace spans
Options:
--trace_id TEXT Specify the trace ID to list
--span_id TEXT Specify the Span ID to list.
--span_type TEXT Specify the Span Type to list.
--parent_span_id TEXT Specify the Parent Span ID to list.
--search TEXT Search trace_id, span_id, parent_span_id,
operation_name or content in metadata.
-l, --limit INTEGER Limit the number of recent span displayed.
--start_time TEXT Filter by start time. Format: "YYYY-MM-DD
HH:MM:SS.mmm"
--end_time TEXT Filter by end time. Format: "YYYY-MM-DD
HH:MM:SS.mmm"
--desc Whether to use reverse sorting. By default,
sorting is based on start time.
--output [text|html|csv|latex|json]
The output format
--help Show this message and exit.
```
# Observability
**Observability** is a measure of how well internal states of a system can be inferred from
knowledge of its external outputs. In the context of a software system, observability
is the ability to understand the internal state of the system by examining its outputs.
This is important for debugging, monitoring, and maintaining the system.
## Observability In DB-GPT
DB-GPT provides observability through the following mechanisms:
- **Logging**: DB-GPT logs various events and metrics to help you understand the internal state of the system.
- **Tracing**: DB-GPT provides tracing capabilities to help you understand the flow of requests through the system.
## Logging
You can configure the logging level and storage location for DB-GPT logs. By default,
logs are stored in the `logs` directory in the DB-GPT root directory. You can change
the log level and storage location by setting the `DBGPT_LOG_LEVEL` and `DBGPT_LOG_DIR` environment variables.
## Tracing
DB-GPT has built-in tracing capabilities that allow you to trace the flow of requests
through the system.
## Trace Storage
### Local Storage
DB-GPT stores traces in the DB-GPT logs directory; by default, they are located in `logs/dbgpt*.jsonl`.
If you want to know more about the local storage of traces and how to use them, you
can refer to the [Debugging](./debugging) documentation.
### OpenTelemetry Support
DB-GPT also supports [OpenTelemetry](https://opentelemetry.io/) for distributed tracing.
Now, you can export traces to OpenTelemetry-compatible backends like Jaeger, Zipkin,
and others using the OpenTelemetry Protocol (OTLP).
To enable OpenTelemetry support, you need to install the following packages:
```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```
Then, modify your `.env` file to enable OpenTelemetry tracing:
```bash
## Whether to enable DB-GPT to send traces to OpenTelemetry
TRACER_TO_OPEN_TELEMETRY=True
## More details see https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317
```
In the above configuration, you can change `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` to
point to your own OTLP collector or backend; the gRPC endpoint is used by default.
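Before starting DB-GPT, you can verify that the OTLP endpoint is reachable by sending a test span with the OpenTelemetry SDK installed above. A minimal sketch using the same gRPC endpoint:

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Export spans over gRPC to the endpoint configured in `.env`
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

# An empty test span is enough to confirm the collector receives data
with trace.get_tracer("otlp-smoke-test").start_as_current_span("connectivity-check"):
    pass

provider.shutdown()  # Flush pending spans before exiting
```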
Here, we use Jaeger as an example to show how to use OpenTelemetry to trace DB-GPT.
### Jaeger Support
Here is an example of how to use Jaeger to trace DB-GPT with docker:
Run the Jaeger all-in-one image:
```bash
docker run --rm --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-p 9411:9411 \
jaegertracing/all-in-one:1.58
```
Then, modify your `.env` file to enable OpenTelemetry tracing as shown above.
```bash
TRACER_TO_OPEN_TELEMETRY=True
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317
```
Start the DB-GPT server:
```bash
dbgpt start webserver
```
Now, you can access the Jaeger UI at `http://localhost:16686` to view the traces.
Here are some example screenshots of the Jaeger UI:
**Search Traces Page**
<p align="left">
<img src={'/img/application/advanced_tutorial/observability_img1.png'} width="720px"/>
</p>
**Show Normal Conversation Trace**
<p align="left">
<img src={'/img/application/advanced_tutorial/observability_img2.png'} width="720px"/>
</p>
**Show Conversation Detail Tags**
<p align="left">
<img src={'/img/application/advanced_tutorial/observability_img3.png'} width="720px"/>
</p>
**Show Agent Conversation Trace**
<p align="left">
<img src={'/img/application/advanced_tutorial/observability_img4.png'} width="720px"/>
</p>
**Show Trace In Cluster**
### Jaeger Support With Docker Compose
If you want to use docker-compose to start DB-GPT and Jaeger, you can use the following
`docker-compose.yml` file:
```yaml
# An example of using docker-compose to start a cluster with observability enabled.
version: '3.10'
services:
jaeger:
image: jaegertracing/all-in-one:1.58
restart: unless-stopped
networks:
- dbgptnet
ports:
# serve frontend
- "16686:16686"
# accept jaeger.thrift over Thrift-compact protocol (used by most SDKs)
- "6831:6831"
# accept OpenTelemetry Protocol (OTLP) over HTTP
- "4318:4318"
# accept OpenTelemetry Protocol (OTLP) over gRPC
- "4317:4317"
- "14268:14268"
environment:
- LOG_LEVEL=debug
- SPAN_STORAGE_TYPE=badger
- BADGER_EPHEMERAL=false
- BADGER_DIRECTORY_VALUE=/badger/data
- BADGER_DIRECTORY_KEY=/badger/key
volumes:
- jaeger-badger:/badger
user: root
controller:
image: eosphorosai/dbgpt:latest
command: dbgpt start controller
restart: unless-stopped
environment:
- TRACER_TO_OPEN_TELEMETRY=True
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4317
- DBGPT_LOG_LEVEL=DEBUG
networks:
- dbgptnet
llm-worker:
image: eosphorosai/dbgpt:latest
command: dbgpt start worker --model_type proxy --model_name chatgpt_proxyllm --model_path chatgpt_proxyllm --proxy_server_url ${OPENAI_API_BASE}/chat/completions --proxy_api_key ${OPENAI_API_KEY} --controller_addr http://controller:8000
environment:
# Your real openai model name, e.g. gpt-3.5-turbo, gpt-4o
- PROXYLLM_BACKEND=gpt-3.5-turbo
- TRACER_TO_OPEN_TELEMETRY=True
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4317
- DBGPT_LOG_LEVEL=DEBUG
depends_on:
- controller
restart: unless-stopped
networks:
- dbgptnet
ipc: host
embedding-worker:
image: eosphorosai/dbgpt:latest
command: dbgpt start worker --worker_type text2vec --model_name proxy_http_openapi --model_path proxy_http_openapi --proxy_server_url ${OPENAI_API_BASE}/embeddings --proxy_api_key ${OPENAI_API_KEY} --controller_addr http://controller:8000
environment:
- proxy_http_openapi_proxy_backend=text-embedding-3-small
- TRACER_TO_OPEN_TELEMETRY=True
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4317
- DBGPT_LOG_LEVEL=DEBUG
depends_on:
- controller
restart: unless-stopped
networks:
- dbgptnet
ipc: host
webserver:
image: eosphorosai/dbgpt:latest
command: dbgpt start webserver --light --remote_embedding --controller_addr http://controller:8000
environment:
- LLM_MODEL=chatgpt_proxyllm
- EMBEDDING_MODEL=proxy_http_openapi
- TRACER_TO_OPEN_TELEMETRY=True
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4317
depends_on:
- controller
- llm-worker
- embedding-worker
volumes:
- dbgpt-data:/app/pilot/data
- dbgpt-message:/app/pilot/message
ports:
- 5670:5670/tcp
restart: unless-stopped
networks:
- dbgptnet
volumes:
dbgpt-data:
dbgpt-message:
jaeger-badger:
networks:
dbgptnet:
driver: bridge
name: dbgptnet
```
You can start the cluster with the following command:
```bash
OPENAI_API_KEY="{your api key}" OPENAI_API_BASE="https://api.openai.com/v1" docker compose up -d
```
Please replace `{your api key}` with your real OpenAI API key and `https://api.openai.com/v1`
with your real OpenAI API base URL.
You can see the full file at `docker/compose_examples/observability/docker-compose.yml` in the DB-GPT repository.
After the cluster is started, you can access the Jaeger UI at `http://localhost:16686` to view the traces.
**Show RAG Conversation Trace**
<p align="left">
<img src={'/img/application/advanced_tutorial/observability_img5.png'} width="720px"/>
</p>
In the above screenshot, you can see the trace of cross-service communication between the DB-GPT controller, LLM worker, and webserver.
# RAG Parameter Adjustment
Each knowledge space supports argument customization, including the relevant arguments for vector retrieval and the arguments for knowledge question-answering prompts.
As shown in the figure below, clicking "Knowledge" triggers a pop-up dialog box. Click the "Arguments" button to enter the parameter tuning interface.
![image](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/f02039ea-01d7-493a-acd9-027020d54267)
<Tabs
defaultValue="Embedding"
values={[
{label: 'Embedding Argument', value: 'Embedding'},
{label: 'Prompt Argument', value: 'Prompt'},
{label: 'Summary Argument', value: 'Summary'},
]}>
<TabItem value="Embedding" label="Embedding Argument">
![image](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/8a69aba0-3b28-449d-8fd8-ce5bf8dbf7fc)
:::tip Embedding Arguments
* topk: the top k vectors based on similarity score.
* recall_score: a similarity threshold for retrieving similar vectors, between 0 and 1; default 0.3.
* recall_type: the recall type; currently only top-k by vector similarity is supported.
* model: the model used to create vector representations of text or other data.
* chunk_size: the size of the data chunks used in processing; default 500.
* chunk_overlap: the amount of overlap between adjacent data chunks; default 50. See the sketch below.
:::
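To make `chunk_size` and `chunk_overlap` concrete, here is a naive character-based splitting sketch (for illustration only; this is not DB-GPT's actual text splitter):

```python
def split_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50):
    # Each chunk starts chunk_size - chunk_overlap characters after the
    # previous one, so adjacent chunks share chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("x" * 1200)
print([len(c) for c in chunks])  # [500, 500, 300]
```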
</TabItem>
<TabItem value="Prompt" label="Prompt Argument">
![image](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/00f12903-8d70-4bfb-9f58-26f03a6a4773)
:::tip Prompt Arguments
* scene: a contextual parameter used to define the setting or environment in which the prompt is used.
* template: a pre-defined structure or format for the prompt, which helps ensure that the AI system generates responses consistent with the desired style or tone.
* max_token: the maximum number of tokens or words allowed in a prompt.
:::
</TabItem>
<TabItem value="Summary" label="Summary Argument">
![image](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/96782ba2-e9a2-4173-a003-49d44bf874cc)
:::tip summary arguments
* max_iteration: the maximum number of LLM calls per document summary; default 5. A larger value generally produces a better summary but takes longer.
* concurrency_limit: the number of concurrent LLM calls for summarization; default 3.
:::
</TabItem>
</Tabs>
# Knowledge Query Rewrite
Set ``KNOWLEDGE_SEARCH_REWRITE=True`` in the ``.env`` file, then restart the server.
```shell
# Whether to enable Chat Knowledge Search Rewrite Mode
KNOWLEDGE_SEARCH_REWRITE=True
```
# Change Vector Database
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="Chroma"
values={[
{label: 'Chroma', value: 'Chroma'},
{label: 'Milvus', value: 'Milvus'},
{label: 'Weaviate', value: 'Weaviate'},
{label: 'OceanBase', value: 'OceanBase'},
]}>
<TabItem value="Chroma" label="Chroma">
Set ``VECTOR_STORE_TYPE`` in the ``.env`` file.
```shell
### Chroma vector db config
VECTOR_STORE_TYPE=Chroma
#CHROMA_PERSIST_PATH=/root/DB-GPT/pilot/data
```
</TabItem>
<TabItem value="Milvus" label="Milvus">
Set ``VECTOR_STORE_TYPE`` in the ``.env`` file.
```shell
### Milvus vector db config
VECTOR_STORE_TYPE=Milvus
MILVUS_URL=127.0.0.1
MILVUS_PORT=19530
#MILVUS_USERNAME
#MILVUS_PASSWORD
#MILVUS_SECURE=
```
</TabItem>
<TabItem value="Weaviate" label="Weaviate">
Set ``VECTOR_STORE_TYPE`` in the ``.env`` file.
```shell
### Weaviate vector db config
VECTOR_STORE_TYPE=Weaviate
#WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```
</TabItem>
<TabItem value="OceanBase" label="OceanBase">
Set ``VECTOR_STORE_TYPE`` in the ``.env`` file.
```shell
### OceanBase vector db config
VECTOR_STORE_TYPE=OceanBase
OB_HOST=127.0.0.1
OB_PORT=2881
OB_USER=root@test
OB_DATABASE=test
## Optional
# OB_PASSWORD=
## Optional: If {OB_ENABLE_NORMALIZE_VECTOR} is set, the vector stored in OceanBase is normalized.
# OB_ENABLE_NORMALIZE_VECTOR=True
```
</TabItem>
</Tabs>
# SMMF
The DB-GPT project provides service-oriented multi-model management capabilities. Developers who are interested in the related capabilities can read the [SMMF](../../modules/smmf.md) module documentation. Here we focus on how to use multiple LLMs.
We mainly introduce usage through the web interface; developers interested in the command line can refer to the [cluster deployment](../../installation/model_service/cluster.md) mode. Open the DB-GPT-Web frontend service and click `Model Management` to enter the multi-model management interface.
## List Models
Open the model management interface to see the list of currently deployed models:
<p align="left">
<img src={'/img/module/model_list.png'} width="720px"/>
</p>
## Use Models
Once the models are deployed, you can switch and use the corresponding model on the multi-model interface.
<p align="left">
<img src={'/img/module/model_use.png'} width="720px"/>
</p>
## Stop Models
As shown in the figure below, click Model Management to enter the model list interface. Select a specific model and click the red `Stop Model` button to stop the model.
<p align="left">
<img src={'/img/module/model_stop.png'} width="720px"/>
</p>
After the model is stopped, the display in the upper right corner will change.
<p align="left">
<img src={'/img/module/model_stopped.png'} width="720px"/>
</p>
## Model Deployment
1. Open the web page, click the `model management` button on the left to enter the model list page, click `Create Model` in the upper left corner, and then select the name of the model you want to deploy in the pop-up dialog box. Here we choose `vicuna-7b-v1.5`, as shown in the figure.
<p align="left">
<img src={'/img/module/model_vicuna-7b-1.5.png'} width="720px"/>
</p>
2. Select the appropriate parameters according to the actual deployed model (if you are not sure, the defaults are enough), then click the `Submit` button at the bottom left of the dialog box and wait until the model is deployed successfully.
3. After the new model is deployed, you can see the newly deployed model on the model page, as shown in the figure
<p align="left">
<img src={'/img/module/model_vicuna_deployed.png'} width="720px"/>
</p>
# Operations and Observability
Operations and observability are important components of a production system. In terms of operational capabilities, DB-GPT provides a command-line tool called `dbgpt` for operations and management, in addition to the common management functionalities available on the web interface. The `dbgpt` command-line tool offers the following functionalities:
- Starting and stopping various services
- Knowledge base management (batch import, custom import, viewing, and deleting knowledge base documents)
- Model management (viewing, starting, stopping models, and conducting dialogues for debugging)
- Observability tools (viewing and analyzing observability logs)
We won't go into detail about the usage of the command-line tool here. You can use the `dbgpt --help` command to obtain specific usage documentation. Additionally, you can check the documentation for individual subcommands. For example, you can use `dbgpt start --help` to view the documentation for starting a service. For more information, please refer to the document provided below.
- [Debugging](../advanced_tutorial/debugging.md)
# App Chat
The online Chat interface provides the main conversation capabilities, showing the historical conversation records and the application currently in conversation. As shown in the figure below, clicking any smart application will also jump to this interface.
<p align="center">
<img src={'/img/app/app_chat_v0.6.jpg'} width="800px" />
</p>
The dialogue interface supports a series of operations such as refreshing and pausing the conversation; the specific operation buttons are in the edit box at the bottom right. The dialog box also provides a variety of parameter selections, such as model selection, temperature adjustment, and file upload.
<p align="center">
<img src={'/img/app/app_chat_op_v0.6.jpg'} width="800" />
</p>
If you find new problems or have good ideas during use, you can also give feedback directly through a GitHub [issue](https://github.com/eosphoros-ai/DB-GPT/issues).
# App Explore
In the new version, DB-GPT V0.6.0, application management has been comprehensively upgraded. The explore square module is mainly used to discover various interesting, fun and useful data applications. In addition to searching for apps by keyword, it also provides popular recommendations, comprehensive apps, my favorites, and more.
After the default installation, the previous six application scenarios are retained.
- [Chat Excel](chat_excel.md)
- Chat Normal
- [Chat DB](chat_db.md)
- [Chat DashBoard](chat_dashboard.md)
- [Chat Data](chat_data.md)
- [Chat Knowledge Base](chat_knowledge.md)
<p align="center">
<img src={'/img/app/app_explore_v0.6.jpg'} width="800px" />
</p>
# App Manage
The application management panel provides many capabilities. Here we mainly introduce the management of the data intelligence application life cycle, including application creation, editing, deletion, and use.
<p align="center">
<img src={'/img/app/app_manage_v0.6.jpg'} width="800px" />
</p>
The figure above shows the application management interface. First, let's look at creating an application. DB-GPT provides four application creation modes:
- Multi-agent automatic planning mode
- Task flow orchestration mode
- Single Agent Mode
- Native application mode
<p align="center">
<img src={'/img/app/app_manage_mode_v0.6.jpg'} width="800px" />
</p>
Next, we will explain how to create an application in each mode, starting with the native application mode. Early versions of DB-GPT provided six types of native application scenarios: `Chat DB`, `Chat Data`, `Chat Dashboard`, `Chat Knowledge Base`, `Chat Normal`, and `Chat Excel`.
By creating a data intelligence application in native application mode, you can quickly build a similar application based on your own database, knowledge base, and other parameters. Click the **Create Application** button in the upper-right corner, select **Native application mode**, enter the application name and description, and click **Confirm**.
<p align="center">
<img src={'/img/app/app_manage_chat_data_v0.6.jpg'} width="800px" />
</p>
After confirmation, you enter the parameter selection panel. As shown in the figure below, there are selection boxes for application type, model, temperature, recommended questions, and so on.
<p align="center">
<img src={'/img/app/app_manage_chat_data_editor_v0.6.jpg'} width="800px" />
</p>
Here we select the **Chat Data** application type and fill in the parameters as required. Note that a data dialogue application requires a data source in its parameter column; if you do not have one, follow the [Data Source Tutorial](../datasources.md) to add it.
After completing the parameters, click **Save**; the new application then appears in the application panel.
<p align="center">
<img src={'/img/app/app_manage_app_v0.6.jpg'} width="800px" />
</p>
Please note that after creating an application, there is a **Publish Application** button. Only after the application is published can it be discovered and used by other users.
<p align="center">
<img src={'/img/app/app_manage_app_publish_v0.6.jpg'} width="800px" />
</p>
Finally, click the **Start a conversation** button to start a conversation with the application you just created.
<p align="center">
<img src={'/img/app/app_manage_chat_v0.6.jpg'} width="800px" />
</p>
In addition, you can edit and delete applications from the same interface.
# Chat Dashboard
Report analysis corresponds to the `Chat Dashboard` scenario in DB-GPT, and intelligent report generation and analysis can be performed through natural language. It is one of the basic capabilities of generative BI (GBI). Let's take a look at how to use the report analysis capabilities.
## Steps
The following are the steps for using report analysis:
1. Data preparation
2. Add a data source
3. Select the Chat Dashboard app
4. Start chat
### Data preparation
To better experience the report analysis capabilities, some test data is built into the repository. To use this test data, we first need to create a test database:
```sql
CREATE DATABASE IF NOT EXISTS dbgpt_test CHARACTER SET utf8;
```
After the test database is created, you can initialize the test data in one step with the provided script:
```shell
python docker/examples/dashboard/test_case_mysql_data.py
```
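Optionally, you can verify that the test data was written. The snippet below is a minimal check assuming a local MySQL instance; the host, port, and user are assumptions, so adjust them to your environment:

```shell
# List the tables created by the init script in the dbgpt_test database
mysql -h 127.0.0.1 -P 3306 -u root -p -e "SHOW TABLES FROM dbgpt_test;"
```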
### Add data source
The steps to add a data source are the same as in [Chat Data](./chat_data.md): select the corresponding database type in the data source management tab, then fill in the required information to complete the creation.
### Select Chat Dashboard
After the data source is added, select `Chat Dashboard` on the home page to perform report analysis.
<p align="center">
<img src={'/img/app/chat_dashboard_v0.6.jpg'} width="800px" />
</p>
### Start chat
Enter specific questions in the dialog box on the right to start a data conversation.
:::info note
⚠️ Data dialogue places relatively high demands on model capability; `ChatGPT/GPT-4` has a high success rate. Among open-source models, you can try `qwen2`.
:::
<p align="center">
<img src={'/img/app/chat_dashboard_display_v0.6.jpg'} width="800px" />
</p>
# Chat Data
Chat Data lets you talk to your data through natural language. It currently focuses on structured and semi-structured data and can assist with data analysis and insight.
:::info note
Before starting a data conversation, you first need to add a data source.
:::
## Steps
To start a data conversation, you need to go through the following steps:
1. Add a data source
2. Select the Chat Data app
3. Select the corresponding database
4. Start a conversation
### Add data source
First, open [data source](../datasources.md) management on the left and add a database. DB-GPT currently supports multiple database types; just select the corresponding type to add it. Here we choose MySQL as a demonstration. For the demonstration's test data, see the [test samples](https://github.com/eosphoros-ai/DB-GPT/tree/main/docker/examples/sqls).
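For example, the sample SQL files can be loaded with the standard MySQL client. This is a sketch assuming a local MySQL instance; the file name is a placeholder, so substitute one of the scripts from the linked directory (some scripts may expect a target database to exist):

```shell
# Import one of the DB-GPT example SQL scripts into a local MySQL instance
mysql -h 127.0.0.1 -P 3306 -u root -p < docker/examples/sqls/{your_sample}.sql
```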
### Choose ChatData App
<p align="center">
<img src={'/img/app/chat_data_v0.6.jpg'} width="800px" />
</p>
### Start a conversation
<p align="center">
<img src={'/img/app/chat_data_display_v0.6.jpg'} width="800px" />
</p>
# Chat DB
The purpose of `Chat DB` is to create a professional database expert, positioned as LLM-as-DBA, that can perform database performance analysis, optimization, and other tasks through dialogue with the database. Currently, `Chat DB` has only basic capabilities, which will be gradually enhanced as the community iterates.
## Steps
The Chat DB usage process mainly includes the following steps:
1. Select Chat DB
2. Add a data source (the data to talk to)
3. Select the base model and database
4. Start chat
### Select Chat DB
<p align="left">
<img src={'/img/chat_db/choose_chat_db.png'} width="720px" />
</p>
### Select DataBase
<p align="left">
<img src={'/img/chat_db/choose_db.png'} width="720px" />
</p>
### Start Chat
:::tip
Single table query
:::
<p align="left">
<img src={'/img/chat_db/single_table.png'} width="720px" />
</p>
:::tip
Multi-table query
:::
<p align="left">
<img src={'/img/chat_db/multi_table.png'} width="720px" />
</p>
:::tip
Index optimization suggestions
:::
<p align="left">
<img src={'/img/chat_db/index.png'} width="720px" />
</p>
:::tip
Database problem diagnosis
:::
<p align="left">
<img src={'/img/chat_db/problem_help.png'} width="720px" />
</p>
:::tip
Troubleshoot slow queries
:::
<p align="left">
<img src={'/img/chat_db/slow_query.png'} width="720px" />
</p>
:::danger Note
⚠️ The examples above are for demonstration only. The model outputs come from open-source models and ChatGPT agents that have not been fine-tuned or specifically optimized; they are for reference only and are not guaranteed to be correct.
:::
# Chat Excel
Chat Excel lets you interpret and analyze Excel data through natural-language dialogue.
<p align="left">
<img src={'/img/chat_excel/excel.png'} width="720px" />
</p>
## Steps
The steps to use Chat Excel are relatively simple and are mainly divided into the following steps:
1. Select the Chat Excel dialogue app
2. Upload an Excel document
3. Start chat
### Select `Chat Excel`
<p align="center">
<img src={'/img/app/chat_excel_v0.6.jpg'} width="800px" />
</p>
### Upload Excel document
<p align="center">
<img src={'/img/app/chat_excel_upload_succ_v0.6.jpg'} width="800px" />
</p>
:::info note
⚠️ The uploaded Excel file is converted to `.csv` format.
:::
After a successful upload, the file content is summarized by default and some suggested question strategies are recommended.
### Start chat
You can then start a conversation based on the uploaded file.
<p align="center">
<img src={'/img/app/chat_excel_upload_v0.6.jpg'} width="800px" />
</p>