API Overview
Overview of the Mira Network API
Base URL
All API requests should be made to:
https://console.mira.network

The URL structure follows:

- /v1/* - Console Service (authentication, apps, billing)
- /verify/v1/* - Verify Service (fact verification)
- /inference/v1/* - Inference Service (LLM chat completions)
Authentication
All endpoints (except the health checks noted below) require authentication via an API key in the Authorization header:
Authorization: Bearer mk_verify_YOUR_API_KEY

See Authentication for details.
Request Format
- All request bodies should be JSON
- Include a Content-Type: application/json header
curl -X POST https://console.mira.network/verify/v1/stream \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
  -d '{"fact": "Your fact here"}'

Response Format
Streaming Responses (SSE)
The Verify API uses Server-Sent Events for real-time streaming. Events are formatted as:
event: event_name
data: {"json": "payload"}

Each event is separated by a blank line.
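A minimal parsing sketch, splitting the stream on blank lines as described above (the event name and payload are the illustrative values from the example, not real Verify API events):

```python
import json

def parse_sse(raw: str) -> list[tuple[str, dict]]:
    """Parse a raw SSE stream into (event_name, payload) pairs."""
    events = []
    for block in raw.split("\n\n"):  # blank line separates events
        name, data = None, None
        for line in block.splitlines():
            if line.startswith("event: "):
                name = line[len("event: "):]
            elif line.startswith("data: "):
                data = json.loads(line[len("data: "):])
        if name is not None:
            events.append((name, data))
    return events

raw = 'event: event_name\ndata: {"json": "payload"}\n\n'
print(parse_sse(raw))  # [('event_name', {'json': 'payload'})]
```

In practice you would feed this incrementally from the response body rather than buffering the whole stream.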
Error Responses
Errors return JSON with this structure:
{
"error": "Error Type",
"message": "Human-readable description"
}

Rate Limits
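A sketch of turning that shape into a readable message; the field names match the structure above, and the sample body is illustrative:

```python
import json

def describe_error(body: str) -> str:
    """Format an API error response body for logging or display."""
    payload = json.loads(body)
    return f'{payload["error"]}: {payload["message"]}'

body = '{"error": "Error Type", "message": "Human-readable description"}'
print(describe_error(body))  # Error Type: Human-readable description
```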
All endpoints have rate limits to ensure fair usage:
| Limit Type | Value |
|---|---|
| Requests per minute | 60 |
| Requests per day | 1,000 |
Rate limit headers are included in all responses:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1699999999

See Rate Limits for handling rate limit errors.
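A sketch of reading these headers to decide how long to back off, assuming X-RateLimit-Reset is a Unix timestamp as the example value suggests:

```python
import time

def seconds_until_reset(headers: dict) -> float:
    """Return how long to wait before retrying, or 0.0 if requests remain."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0
    reset_at = int(headers.get("X-RateLimit-Reset", "0"))  # Unix timestamp
    return max(0.0, reset_at - time.time())

headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "59",
    "X-RateLimit-Reset": "1699999999",
}
print(seconds_until_reset(headers))  # 0.0 — requests remain in this window
```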
Available Endpoints
Verify Service
| Method | Endpoint | Description |
|---|---|---|
| POST | /verify/v1/stream | Verify a fact with streaming response |
| GET | /verify/v1/health | Health check (no auth required) |
Inference Service
| Method | Endpoint | Description |
|---|---|---|
| POST | /inference/v1/chat/completions | Chat completions (OpenAI-compatible) |
| GET | /inference/v1/health | Health check (no auth required) |
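Since the chat completions endpoint is OpenAI-compatible, a request body can be built in the standard chat-completions shape. A sketch, where the model name and API key are placeholders (check the Inference Service for available models):

```python
import json
import urllib.request

payload = {
    "model": "your-model-name",  # placeholder; see available models
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

req = urllib.request.Request(
    "https://console.mira.network/inference/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.full_url)
```

Because the shape is OpenAI-compatible, existing OpenAI client libraries pointed at this base URL should also work, though that is an assumption to verify against the Inference Service docs.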
SDKs & Libraries
Currently, we provide documentation for direct HTTP API usage. SDKs are coming soon.
For Hono middleware integration, see the @mira.network/gatekeeper middleware package.