Middleware Reference

Detailed API reference for Gatekeeper middleware

init(getOptions)

Creates middleware that initializes the Mira context with service identification, a unique trace ID, and a pre-configured OpenAI client.

init((c) => ({
  serviceId: 'verify',  // Required: 'verify' or 'inference'
  aiGateway: {
    accountId: 'your-cf-account-id',
    gatewayId: 'your-gateway-id',
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
}))

What it does

  1. Validates serviceId (must be 'verify' or 'inference')
  2. Generates a unique traceId (UUID) for request tracing
  3. Creates an OpenAI client configured for Cloudflare AI Gateway
  4. Attaches traceId and serviceId to all AI Gateway requests via the cf-aig-metadata header (see the sketch below)
  5. Sets mira context on the request
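
Steps 3 and 4 amount to constructing an OpenAI client whose base URL points at your AI Gateway and whose default headers carry the trace metadata. The sketch below is illustrative only: the createGatewayClient helper and the exact endpoint path are assumptions, not the library's actual internals.

import OpenAI from 'openai';

// Hypothetical helper mirroring what init() sets up internally.
// The gateway URL shape and header handling are assumptions for illustration.
function createGatewayClient(opts: {
  accountId: string;
  gatewayId: string;
  token: string;
  traceId: string;
  serviceId: 'verify' | 'inference';
}): OpenAI {
  return new OpenAI({
    // OpenAI-compatible AI Gateway endpoint (assumed path)
    baseURL: `https://gateway.ai.cloudflare.com/v1/${opts.accountId}/${opts.gatewayId}/compat`,
    apiKey: opts.token,
    defaultHeaders: {
      // Trace metadata attached to every request (step 4)
      'cf-aig-metadata': JSON.stringify({
        traceId: opts.traceId,
        serviceId: opts.serviceId,
      }),
    },
  });
}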

Options

Property              Type                      Description
serviceId             'verify' | 'inference'    Service identifier for this worker
aiGateway.accountId   string                    Your Cloudflare account ID
aiGateway.gatewayId   string                    Your AI Gateway ID
aiGateway.token       string                    AI Gateway authentication token

Mira Context

After init() runs, access the context via c.get('mira'):

interface MiraContext {
  traceId: string;           // Unique request ID (UUID)
  serviceId: string;         // Service identifier ('verify' | 'inference')
  openai: OpenAI;            // Pre-configured OpenAI client
  _aiGatewayConfig: {...};   // Internal config (don't use directly)
}
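
For example, the trace ID can be surfaced for debugging. The route and the X-Trace-Id response header below are illustrative choices, not part of the library; the snippet assumes the app setup shown in the Full Example.

// Example: expose the trace ID for debugging
app.get('/v1/whoami', (c) => {
  const { traceId, serviceId } = c.get('mira');
  c.header('X-Trace-Id', traceId);  // illustrative header name
  return c.json({ serviceId, traceId });
});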

auth()

Creates middleware that validates API keys and checks account balance. Requires init() to be called first.

auth()  // No options needed - gets serviceId from mira context

What it does

  1. Gets serviceId from mira context (set by init)
  2. Extracts Bearer token from Authorization header
  3. Validates that the token prefix matches the service (e.g., mk_verify_...), as sketched below
  4. Calls Console Service to validate the full key
  5. Checks if the app has sufficient balance
  6. Sets auth context on the request
  7. Auto-enriches mira context with auth metadata for AI Gateway logging
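
Step 3 is a cheap local check performed before the Console Service round-trip. Conceptually it is equivalent to the following; the hasValidPrefix helper is illustrative only, the real check is internal to auth().

// Keys are expected to look like mk_verify_... or mk_inference_...
function hasValidPrefix(token: string, serviceId: 'verify' | 'inference'): boolean {
  return token.startsWith(`mk_${serviceId}_`);
}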

Auth Context

After auth() runs, access the context via c.get('auth'):

interface AuthContext {
  keyId: string;   // API key identifier
  appId: string;   // App identifier
  userId: string;  // User identifier
}
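
Handlers can read these values directly, for example to scope data or logs per app. The route below is illustrative and assumes the app setup from the Full Example.

// Example: echo the caller's identity from the auth context
app.get('/v1/me', (c) => {
  const { keyId, appId, userId } = c.get('auth');
  return c.json({ keyId, appId, userId });
});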

AI Gateway Metadata

After auth() runs, all OpenAI client requests automatically include:

{
  "traceId": "uuid",
  "serviceId": "verify",
  "keyId": "key-id",
  "appId": "app-id",
  "userId": "user-id"
}
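
Conceptually, auth() rebuilds the cf-aig-metadata value that init() attached, now including the auth fields. The helper below only illustrates the resulting header value; the middleware maintains this header internally and you never set it yourself.

// Illustrative only: shape of the cf-aig-metadata value after auth() runs
function buildMetadataHeader(mira: MiraContext, auth: AuthContext): string {
  return JSON.stringify({
    traceId: mira.traceId,
    serviceId: mira.serviceId,
    keyId: auth.keyId,
    appId: auth.appId,
    userId: auth.userId,
  });
}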

Errors Thrown

  • AuthError - When authentication fails (401)
  • InsufficientBalanceError - When app has insufficient balance (402)

rateLimit(options)

Creates middleware that enforces rate limits per API key.

rateLimit({
  rpm: 60,    // Requests per minute
  rpd: 1000,  // Requests per day
})

What it does

  1. Reads rate limit state from KV storage (see the state sketch below)
  2. Checks against per-minute and per-day limits
  3. Increments counters and saves state
  4. Sets rate limit headers on response
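
The per-key state kept in KV might look roughly like the window counters below. This is an illustrative shape, not the library's actual storage format.

// Illustrative per-key counters; the real storage format is internal to rateLimit()
interface RateLimitState {
  minuteWindowStart: number;  // epoch ms when the current minute window began
  minuteCount: number;        // requests counted in that window
  dayWindowStart: number;     // epoch ms when the current day window began
  dayCount: number;           // requests counted in that window
}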

Options

Property   Type      Description
rpm        number    Maximum requests per minute
rpd        number    Maximum requests per day

Response Headers

X-RateLimit-Limit-Minute: 60
X-RateLimit-Remaining-Minute: 45
X-RateLimit-Limit-Day: 1000
X-RateLimit-Remaining-Day: 955
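
Clients can watch these headers to pace themselves. The endpoint URL, request payload, and API key below are placeholders for illustration.

// Client-side example: read remaining quota from the response headers
const apiKey = 'mk_verify_xxxx';  // placeholder key
const res = await fetch('https://your-worker.example.com/v1/chat', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'Hello!' }] }),
});
const remainingMinute = Number(res.headers.get('X-RateLimit-Remaining-Minute'));
if (remainingMinute < 5) {
  console.warn('Approaching the per-minute rate limit');
}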

Errors Thrown

  • RateLimitError - When rate limit is exceeded (429); see the client-side backoff sketch below
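
On a 429, a client can back off until the limit resets. The sketch below assumes the error handler from the Full Example and treats resetAt as an epoch timestamp in milliseconds; that format is an assumption, so check your handler's actual payload.

// Client-side backoff sketch; resetAt format is assumed for illustration
async function requestWithBackoff(input: RequestInfo, init?: RequestInit): Promise<Response> {
  const res = await fetch(input, init);
  if (res.status !== 429) return res;
  const body = (await res.json()) as { error: string; resetAt?: number };
  const waitMs = body.resetAt ? Math.max(0, body.resetAt - Date.now()) : 60_000;
  await new Promise((resolve) => setTimeout(resolve, waitMs));
  return fetch(input, init);  // single retry after the window resets
}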

Full Example

import { Hono } from 'hono';
import {
  init,
  auth,
  rateLimit,
  AuthError,
  RateLimitError,
  InsufficientBalanceError,
  type AuthContext,
  type MiraContext,
} from '@mira.network/gatekeeper';

type Env = {
  CF_ACCOUNT_ID: string;
  CF_GATEWAY_ID: string;
  CF_AI_GATEWAY_TOKEN: string;
  CONSOLE_SERVICE: Service;
  RATE_LIMIT_KV: KVNamespace;
};

const app = new Hono<{
  Bindings: Env;
  Variables: { auth: AuthContext; mira: MiraContext };
}>();

// Initialize Mira context (serviceId + traceId + OpenAI client)
app.use('/*', init((c) => ({
  serviceId: 'verify',
  aiGateway: {
    accountId: c.env.CF_ACCOUNT_ID,
    gatewayId: c.env.CF_GATEWAY_ID,
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
})));

// Global error handler
app.onError((err, c) => {
  if (err instanceof AuthError) {
    return c.json({ error: err.message }, 401);
  }
  if (err instanceof RateLimitError) {
    return c.json({ error: err.message, resetAt: err.resetAt }, 429);
  }
  if (err instanceof InsufficientBalanceError) {
    return c.json({ error: 'Insufficient balance', balance: err.balance }, 402);
  }
  console.error(err);
  return c.json({ error: 'Internal error' }, 500);
});

// Health check (no auth required)
app.get('/health', (c) => c.json({ status: 'ok' }));

// Protected routes - auth + rate limit
app.use('/v1/*', auth());
app.use('/v1/*', rateLimit({ rpm: 60, rpd: 1000 }));

app.post('/v1/chat', async (c) => {
  const { traceId, serviceId, openai } = c.get('mira');
  const { keyId, appId } = c.get('auth');

  // Use pre-configured OpenAI client
  // All requests automatically include traceId, serviceId, and auth metadata
  const completion = await openai.chat.completions.create({
    model: 'openai/gpt-4o-mini',  // AI Gateway unified model format
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  // Report usage back to Console Service
  await c.env.CONSOLE_SERVICE.reportUsage(
    appId,
    serviceId,
    [{
      model: 'openai/gpt-4o-mini',
      promptTokens: completion.usage?.prompt_tokens ?? 0,
      completionTokens: completion.usage?.completion_tokens ?? 0,
    }],
    keyId,
    `Chat: ${traceId}`
  );

  return c.json({ result: completion.choices[0].message.content });
});

export default app;