# YOLOv13 FastAPI REST API

**What is this?**  
A REST API server that detects objects in images using YOLOv13 AI models. Upload an image, get back detection results with bounding boxes and confidence scores.

**Key Benefits:**
- Real-time detection (~6.9 FPS with YOLOv13n)
- Multiple YOLO model support (YOLOv13, YOLOv8)
- Simple REST API interface
- Production-ready with error handling

## Quick Start
Before starting the server, make sure this extra dependency is installed:

```bash
pip install huggingface-hub
```

Then, start the server:

```bash
# Install dependencies
pip install -r requirements.txt

# Start the server
python yolov13_fastapi_api.py
```

Server runs at: http://localhost:8000  
API docs: http://localhost:8000/docs

## Usage

### Basic Detection

```bash
curl -X POST "http://localhost:8000/detect" \
     -F "image=@your_image.jpg" \
     -F "model=yolov13n"
```

### With Custom Settings

```bash
curl -X POST "http://localhost:8000/detect" \
     -F "image=@your_image.jpg" \
     -F "model=yolov13n" \
     -F "conf=0.25" \
     -F "iou=0.45"
```

### Get Available Models

```bash
curl http://localhost:8000/models
```
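
If you are calling the API from Python rather than curl, a minimal client sketch using the `requests` library looks like this. The form fields mirror the curl examples above, and the response keys follow the Response Format section below:

```python
# Minimal Python client sketch, assuming the server runs on localhost:8000.
import requests

BASE_URL = "http://localhost:8000"

# List the models the server currently exposes.
print("Available models:", requests.get(f"{BASE_URL}/models").json())

# Run detection on a local image with custom thresholds.
with open("your_image.jpg", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/detect",
        files={"image": f},
        data={"model": "yolov13n", "conf": 0.25, "iou": 0.45},
    )
resp.raise_for_status()

result = resp.json()
for det in result["detections"]:
    print(det["class_name"], round(det["confidence"], 2), det["bbox"])
```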

## Available Models

- **YOLOv13**: yolov13n, yolov13s, yolov13m, yolov13l, yolov13x
- **YOLOv8**: yolov8n, yolov8s, yolov8m, yolov8l, yolov8x

**Recommended for real-time**: yolov13n (fastest)

## Response Format

```json
{
  "success": true,
  "model_used": "yolov13n",
  "inference_time": 0.146,
  "detections": [
    {
      "bbox": [x1, y1, x2, y2],
      "confidence": 0.85,
      "class_id": 0,
      "class_name": "person"
    }
  ],
  "num_detections": 1,
  "image_info": {
    "width": 640,
    "height": 480,
    "channels": 3
  }
}
```
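
Since each detection carries its class, score, and box, visualizing the results takes only a few lines. A sketch using OpenCV (an extra dependency, not required by the API itself), assuming `bbox` holds absolute pixel coordinates as the `image_info` fields suggest:

```python
# Sketch: draw the detections from a /detect response onto the source image.
# Requires opencv-python (pip install opencv-python).
import cv2

def draw_detections(image_path: str, result: dict, output_path: str = "annotated.jpg") -> None:
    """result is the parsed JSON from /detect; bbox is assumed to be [x1, y1, x2, y2] in pixels."""
    image = cv2.imread(image_path)
    for det in result["detections"]:
        x1, y1, x2, y2 = map(int, det["bbox"])
        label = f"{det['class_name']} {det['confidence']:.2f}"
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, label, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(output_path, image)
```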

## Deployment

### Docker Deployment

```bash
# Build image
docker build -t yolov13-api .

# Run container
docker run -p 8000:8000 yolov13-api
```

### Docker Compose

```yaml
version: '3.8'
services:
  yolov13-api:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./models:/app/models  # Optional: for custom models
```

### Production Deployment

```bash
# Install production server
pip install gunicorn

# Run with gunicorn (module:app must point at the FastAPI instance;
# this assumes it is named `app` inside yolov13_fastapi_api.py)
gunicorn -w 4 -k uvicorn.workers.UvicornWorker yolov13_fastapi_api:app --bind 0.0.0.0:8000
```

### Environment Variables

```bash
export MODEL_PATH=/path/to/custom/model.pt  # Optional
export API_HOST=0.0.0.0
export API_PORT=8000
```
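
How these variables are consumed depends on the implementation in yolov13_fastapi_api.py; the names below come from the list above, but the code itself is only a sketch of a typical FastAPI entrypoint, not a description of the actual server:

```python
# Sketch of how these variables are typically read in a FastAPI entrypoint.
# The actual names, defaults, and wiring in yolov13_fastapi_api.py may differ.
import os
import uvicorn

MODEL_PATH = os.getenv("MODEL_PATH")          # optional custom weights, handed to the model loader
API_HOST = os.getenv("API_HOST", "0.0.0.0")
API_PORT = int(os.getenv("API_PORT", "8000"))

if __name__ == "__main__":
    # "yolov13_fastapi_api:app" assumes the FastAPI instance is named `app`.
    uvicorn.run("yolov13_fastapi_api:app", host=API_HOST, port=API_PORT)
```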

## Performance

- **YOLOv13n**: ~0.146s inference (~6.9 FPS)
- **YOLOv8n**: ~0.169s inference (~5.9 FPS)

In these measurements, YOLOv13n's inference time is about **13.5% lower** than YOLOv8n's, with identical accuracy.