# Middleware Reference

Detailed API reference for Gatekeeper middleware.
## init(getOptions)

Creates middleware that initializes the Mira context with service identification, a unique trace ID, and a pre-configured OpenAI client.
```ts
init((c) => ({
  serviceId: 'verify', // Required: 'verify' or 'inference'
  aiGateway: {
    accountId: 'your-cf-account-id',
    gatewayId: 'your-gateway-id',
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
}))
```

### What it does
- Validates `serviceId` (must be `'verify'` or `'inference'`)
- Generates a unique `traceId` (UUID) for request tracing
- Creates an OpenAI client configured for Cloudflare AI Gateway
- Attaches `traceId` and `serviceId` to all AI Gateway requests via the `cf-aig-metadata` header
- Sets the mira context on the request
### Options

| Property | Type | Description |
|---|---|---|
| `serviceId` | `'verify' \| 'inference'` | Service identifier for this worker |
| `aiGateway.accountId` | `string` | Your Cloudflare account ID |
| `aiGateway.gatewayId` | `string` | Your AI Gateway ID |
| `aiGateway.token` | `string` | AI Gateway authentication token |
### Mira Context

After `init()` runs, access the context via `c.get('mira')`:

```ts
interface MiraContext {
  traceId: string;         // Unique request ID (UUID)
  serviceId: string;       // Service identifier ('verify' | 'inference')
  openai: OpenAI;          // Pre-configured OpenAI client
  _aiGatewayConfig: {...}; // Internal config (don't use directly)
}
```

## auth()
Creates middleware that validates API keys and checks account balance. Requires `init()` to be called first.

```ts
auth() // No options needed - gets serviceId from mira context
```

### What it does
- Gets `serviceId` from the mira context (set by `init`)
- Extracts the Bearer token from the `Authorization` header
- Validates that the token prefix matches the service (e.g., `mk_verify_...`)
- Calls Console Service to validate the full key
- Checks if the app has sufficient balance
- Sets the auth context on the request
- Auto-enriches the mira context with auth metadata for AI Gateway logging
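The token-extraction and prefix-check steps can be sketched as small helpers. `expectedPrefix`, `extractBearerToken`, and `prefixMatches` are illustrative names, not exports of the package:

```ts
// Illustrative helpers, not part of @mira.network/gatekeeper.
function expectedPrefix(serviceId: 'verify' | 'inference'): string {
  // Keys are prefixed per service, e.g. mk_verify_...
  return `mk_${serviceId}_`;
}

function extractBearerToken(authorization: string | undefined): string | null {
  if (!authorization || !authorization.startsWith('Bearer ')) return null;
  return authorization.slice('Bearer '.length);
}

// The local prefix check runs before the full key is validated
// remotely by the Console Service.
function prefixMatches(token: string, serviceId: 'verify' | 'inference'): boolean {
  return token.startsWith(expectedPrefix(serviceId));
}
```

This cheap local check lets the middleware reject obviously wrong keys without a round trip to the Console Service.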
### Auth Context

After `auth()` runs, access the context via `c.get('auth')`:

```ts
interface AuthContext {
  keyId: string;  // API key identifier
  appId: string;  // App identifier
  userId: string; // User identifier
}
```

### AI Gateway Metadata
After auth runs, all OpenAI client requests automatically include:

```json
{
  "traceId": "uuid",
  "serviceId": "verify",
  "keyId": "key-id",
  "appId": "app-id",
  "userId": "user-id"
}
```

### Errors Thrown
- `AuthError` - When authentication fails (401)
- `InsufficientBalanceError` - When the app has insufficient balance (402)
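For matching in an error handler, the thrown errors expose the fields read in the Full Example below (`balance` on `InsufficientBalanceError`). A minimal sketch of compatible shapes, assuming these are ordinary `Error` subclasses (the real classes live in `@mira.network/gatekeeper`):

```ts
// Minimal sketch (assumption): shapes compatible with the fields an
// onError handler reads; not the package's actual implementation.
class AuthError extends Error {
  readonly status = 401;
}

class InsufficientBalanceError extends Error {
  readonly status = 402;
  constructor(message: string, public readonly balance: number) {
    super(message);
    this.name = 'InsufficientBalanceError';
  }
}
```

Because both extend `Error`, `instanceof` checks in `app.onError` distinguish them reliably.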
## rateLimit(options)

Creates middleware that enforces rate limits per API key.

```ts
rateLimit({
  rpm: 60,   // Requests per minute
  rpd: 1000, // Requests per day
})
```

### What it does
- Reads rate limit state from KV storage
- Checks against per-minute and per-day limits
- Increments counters and saves state
- Sets rate limit headers on response
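The check-and-increment step can be sketched as a fixed-window counter. This is an assumption about the approach (the real middleware may use a different algorithm), and `RateLimitState` and `checkAndIncrement` are illustrative names:

```ts
// Illustrative fixed-window state of the kind rateLimit() might
// keep in KV, keyed per API key. Not the package's actual schema.
interface RateLimitState {
  minuteWindow: number; // epoch minute of the current window
  minuteCount: number;
  dayWindow: number;    // epoch day of the current window
  dayCount: number;
}

function checkAndIncrement(
  state: RateLimitState,
  now: number, // ms since epoch
  rpm: number,
  rpd: number
): { allowed: boolean; state: RateLimitState } {
  const minute = Math.floor(now / 60_000);
  const day = Math.floor(now / 86_400_000);
  // Counters reset when a new window starts
  const minuteCount = minute === state.minuteWindow ? state.minuteCount : 0;
  const dayCount = day === state.dayWindow ? state.dayCount : 0;
  if (minuteCount >= rpm || dayCount >= rpd) {
    return { allowed: false, state };
  }
  return {
    allowed: true,
    state: {
      minuteWindow: minute,
      minuteCount: minuteCount + 1,
      dayWindow: day,
      dayCount: dayCount + 1,
    },
  };
}
```

The remaining-request counts that feed the `X-RateLimit-Remaining-*` headers fall out of this state as `rpm - minuteCount` and `rpd - dayCount`.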
### Options

| Property | Type | Description |
|---|---|---|
| `rpm` | `number` | Maximum requests per minute |
| `rpd` | `number` | Maximum requests per day |
### Response Headers

```
X-RateLimit-Limit-Minute: 60
X-RateLimit-Remaining-Minute: 45
X-RateLimit-Limit-Day: 1000
X-RateLimit-Remaining-Day: 955
```

### Errors Thrown
- `RateLimitError` - When the rate limit is exceeded (429)
## Full Example
```ts
import { Hono } from 'hono';
import {
  init,
  auth,
  rateLimit,
  AuthError,
  RateLimitError,
  InsufficientBalanceError,
  type AuthContext,
  type MiraContext,
} from '@mira.network/gatekeeper';

type Env = {
  CF_ACCOUNT_ID: string;
  CF_GATEWAY_ID: string;
  CF_AI_GATEWAY_TOKEN: string;
  CONSOLE_SERVICE: Service;
  RATE_LIMIT_KV: KVNamespace;
};

const app = new Hono<{
  Bindings: Env;
  Variables: { auth: AuthContext; mira: MiraContext };
}>();

// Initialize Mira context (serviceId + traceId + OpenAI client)
app.use('/*', init((c) => ({
  serviceId: 'verify',
  aiGateway: {
    accountId: c.env.CF_ACCOUNT_ID,
    gatewayId: c.env.CF_GATEWAY_ID,
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
})));

// Global error handler
app.onError((err, c) => {
  if (err instanceof AuthError) {
    return c.json({ error: err.message }, 401);
  }
  if (err instanceof RateLimitError) {
    return c.json({ error: err.message, resetAt: err.resetAt }, 429);
  }
  if (err instanceof InsufficientBalanceError) {
    return c.json({ error: 'Insufficient balance', balance: err.balance }, 402);
  }
  console.error(err);
  return c.json({ error: 'Internal error' }, 500);
});

// Health check (no auth required)
app.get('/health', (c) => c.json({ status: 'ok' }));

// Protected routes - auth + rate limit
app.use('/v1/*', auth());
app.use('/v1/*', rateLimit({ rpm: 60, rpd: 1000 }));

app.post('/v1/chat', async (c) => {
  const { traceId, serviceId, openai } = c.get('mira');
  const { keyId, appId } = c.get('auth');

  // Use the pre-configured OpenAI client.
  // All requests automatically include traceId, serviceId + auth metadata.
  const completion = await openai.chat.completions.create({
    model: 'openai/gpt-4o-mini', // AI Gateway unified model format
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  // Report usage back to Console Service
  await c.env.CONSOLE_SERVICE.reportUsage(
    appId,
    serviceId,
    [{
      model: 'openai/gpt-4o-mini',
      promptTokens: completion.usage?.prompt_tokens ?? 0,
      completionTokens: completion.usage?.completion_tokens ?? 0,
    }],
    keyId,
    `Chat: ${traceId}`
  );

  return c.json({ result: completion.choices[0].message.content });
});

export default app;
```