Observability
zzz ships with a suite of middleware for production observability: structured logging, Prometheus-compatible metrics, lifecycle telemetry hooks, request ID propagation, and health check endpoints. All are configured at comptime and compose naturally with the middleware pipeline.
Quick setup
A typical production stack combines several observability middleware. Order matters — place them early in the pipeline so they wrap all downstream handlers.
```zig
const App = Router.define(.{
    .middleware = &.{
        health(.{ .path = "/healthz" }),
        requestId(.{}),
        metrics(.{ .path = "/metrics" }),
        structuredLogger(.{ .format = .json }),
    },
    .routes = &.{
        Router.get("/", indexHandler),
        Router.get("/api/users", usersHandler),
    },
});
```

Structured logging
The structuredLogger middleware logs every request with method, path, status code, response duration, and (when available) the request ID. It supports two output formats and configurable minimum log levels.
Configuration
```zig
pub const StructuredLoggerConfig = struct {
    level: LogLevel = .info,
    format: LogFormat = .text,
};
```

| Option | Type | Default | Description |
|---|---|---|---|
| level | LogLevel | .info | Minimum severity to emit. Requests producing 5xx responses log at err, 4xx at warn, and everything else at info. |
| format | LogFormat | .text | Output format: .text for human-readable, .json for structured JSON. |
Log levels
The middleware automatically assigns a log level based on the response status code:
| Status range | Assigned level |
|---|---|
| 5xx | err |
| 4xx | warn |
| 1xx–3xx | info |
Only messages at or above the configured minimum level are emitted.
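The mapping above is simple enough to sketch. The names `LogLevel` and `levelForStatus` below are illustrative, not zzz's internals:

```zig
const LogLevel = enum { info, warn, err };

// Hypothetical sketch of the status-to-level mapping described above.
fn levelForStatus(status: u16) LogLevel {
    if (status >= 500) return .err;
    if (status >= 400) return .warn;
    return .info; // 1xx-3xx
}
```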
Output formats
```zig
structuredLogger(.{ .format = .text })
```

Output:
```
[info] GET /api/users -> 200 (142us)
[info] GET /api/users -> 200 (142us) [zzz-a1b2c3-0]
```

When a request ID is present (from the requestId middleware), it is appended in brackets.
```zig
structuredLogger(.{ .format = .json })
```

Output:
```json
{"level":"info","method":"GET","path":"/api/users","status":200,"duration_us":142,"request_id":"zzz-a1b2c3-0"}
```

JSON format is recommended for production, where logs are typically ingested by a log aggregation system.
Simple logger
For development, zzz also provides a simpler logger middleware that does not require configuration:
```zig
const App = Router.define(.{
    .middleware = &.{logger},
    .routes = &.{ ... },
});
```

The simple logger automatically formats durations in the most readable unit (microseconds, milliseconds, or seconds).
Request IDs
The requestId middleware either propagates an existing request ID from an incoming header or generates a new one. The ID is made available to downstream middleware (including the structured logger) via context assigns and is echoed back in the response headers.
Configuration
```zig
pub const RequestIdConfig = struct {
    header_name: []const u8 = "X-Request-Id",
    assign_key: []const u8 = "request_id",
};
```

| Option | Default | Description |
|---|---|---|
| header_name | "X-Request-Id" | Request/response header name to read and write. |
| assign_key | "request_id" | Key used in ctx.assigns so other middleware can access the ID. |
How it works
1. The middleware checks the incoming request for the configured header (e.g., X-Request-Id).
2. If the header exists, its value is used as-is. This lets load balancers or API gateways set the ID upstream.
3. If no header is present, a new ID is generated in the format zzz-{timestamp_hex}-{counter_hex}, using a monotonic clock and an atomic counter to guarantee uniqueness.
4. The ID is stored in ctx.assigns under the configured key and added to the response headers.
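The generation step can be sketched in a few lines. This is an illustrative reconstruction, not zzz's implementation; it uses the wall clock (std.time.milliTimestamp) for brevity where zzz uses a monotonic clock:

```zig
const std = @import("std");

// An atomic counter shared across threads guarantees uniqueness
// even when two requests arrive in the same clock tick.
var id_counter = std.atomic.Value(u64).init(0);

// Sketch of the zzz-{timestamp_hex}-{counter_hex} format described above.
fn generateRequestId(buf: []u8) ![]const u8 {
    const ts: u64 = @intCast(std.time.milliTimestamp());
    const n = id_counter.fetchAdd(1, .monotonic);
    return std.fmt.bufPrint(buf, "zzz-{x}-{x}", .{ ts, n });
}
```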
Accessing the request ID in handlers
```zig
fn myHandler(ctx: *Context) !void {
    const req_id = ctx.getAssign("request_id") orelse "unknown";
    // Use req_id for correlation in logs, database records, etc.
    ctx.text(.ok, "ok");
}
```

Metrics
The metrics middleware collects request counts, status code distributions, method breakdowns, and latency statistics. It serves the data in Prometheus exposition format at a configurable endpoint.
Configuration
```zig
pub const MetricsConfig = struct {
    path: []const u8 = "/metrics",
};
```

| Option | Default | Description |
|---|---|---|
| path | "/metrics" | The URL path where Prometheus metrics are served. Only GET requests to this path trigger the metrics response. |
Exposed metrics
| Metric | Type | Description |
|---|---|---|
| zzz_requests_total | counter | Total number of HTTP requests processed. |
| zzz_requests_by_status{class="2xx"} | counter | Request count grouped by status class (1xx, 2xx, 3xx, 4xx, 5xx, other). |
| zzz_requests_by_method{method="GET"} | counter | Request count grouped by HTTP method (GET, HEAD, POST, PUT, DELETE, PATCH, OPTIONS, CONNECT, TRACE). |
| zzz_request_duration_us_sum | counter | Cumulative request duration in microseconds. |
| zzz_request_duration_us_count | counter | Number of timed requests (use with _sum to compute average latency). |
All counters use atomic operations for thread safety.
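A minimal sketch of what such lock-free counters look like in Zig; the struct and field names here are illustrative, not zzz's internals:

```zig
const std = @import("std");

// Thread-safe request counters updated without a mutex.
const Counters = struct {
    total: std.atomic.Value(u64) = std.atomic.Value(u64).init(0),
    status_5xx: std.atomic.Value(u64) = std.atomic.Value(u64).init(0),
    duration_us_sum: std.atomic.Value(u64) = std.atomic.Value(u64).init(0),

    fn record(self: *Counters, status: u16, duration_us: u64) void {
        // fetchAdd is an atomic read-modify-write, safe under concurrency.
        _ = self.total.fetchAdd(1, .monotonic);
        if (status >= 500) _ = self.status_5xx.fetchAdd(1, .monotonic);
        _ = self.duration_us_sum.fetchAdd(duration_us, .monotonic);
    }
};
```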
Prometheus integration
Point your Prometheus scrape config at the metrics endpoint:
```yaml
scrape_configs:
  - job_name: 'zzz-app'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8888']
    metrics_path: '/metrics'
```

Computing average latency
With the sum and count metrics, you can compute average request duration in Prometheus:
```promql
rate(zzz_request_duration_us_sum[5m]) / rate(zzz_request_duration_us_count[5m])
```

Telemetry
Section titled “Telemetry”The telemetry middleware fires structured lifecycle events at request start and end. This is useful for custom tracing integrations, APM tools, or audit logging.
Configuration
```zig
pub const TelemetryConfig = struct {
    on_event: *const fn (TelemetryEvent) void,
};
```

You provide a single callback function that receives a TelemetryEvent tagged union:

```zig
pub const TelemetryEvent = union(enum) {
    request_start: RequestStart,
    request_end: RequestEnd,

    pub const RequestStart = struct {
        method: Method,
        path: []const u8,
        timestamp_ns: i128,
    };

    pub const RequestEnd = struct {
        method: Method,
        path: []const u8,
        status: u16,
        duration_ns: i128,
        timestamp_ns: i128,
    };
};
```

Example: custom telemetry sink
```zig
fn onTelemetryEvent(event: TelemetryEvent) void {
    switch (event) {
        .request_start => |e| {
            std.log.debug("START {s} {s}", .{ @tagName(e.method), e.path });
        },
        .request_end => |e| {
            const duration_us: u64 = @intCast(@divTrunc(e.duration_ns, 1000));
            std.log.debug("END {s} {s} -> {d} ({d}us)", .{
                @tagName(e.method), e.path, e.status, duration_us,
            });
        },
    }
}

const App = Router.define(.{
    .middleware = &.{
        telemetry(.{ .on_event = &onTelemetryEvent }),
    },
    .routes = &.{ ... },
});
```

Because the callback is a comptime-known function pointer, the compiler can inline the event dispatch path.
Health checks
The health middleware responds to GET requests at a configurable path with a JSON health status. All other requests pass through to the next handler.
Configuration
```zig
pub const HealthConfig = struct {
    path: []const u8 = "/health",
};
```

| Option | Default | Description |
|---|---|---|
| path | "/health" | The URL path for the health endpoint. Common alternatives: "/healthz", "/ready". |
Response format
A GET to the health path returns:
```json
{"status":"ok"}
```

with a 200 OK status and application/json content type.
Kubernetes probes
Use different paths for liveness and readiness probes:
```zig
const App = Router.define(.{
    .middleware = &.{
        health(.{ .path = "/healthz" }), // liveness probe
        health(.{ .path = "/readyz" }),  // readiness probe
    },
    .routes = &.{ ... },
});
```

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8888
readinessProbe:
  httpGet:
    path: /readyz
    port: 8888
```

Recommended middleware order
The order in which observability middleware is stacked affects what data is captured:
```zig
.middleware = &.{
    health(.{ .path = "/healthz" }),         // 1. Short-circuit health checks early
    requestId(.{}),                          // 2. Generate/propagate ID before logging
    metrics(.{ .path = "/metrics" }),        // 3. Collect metrics (wraps everything below)
    structuredLogger(.{ .format = .json }),  // 4. Log with request ID and timing
    telemetry(.{ .on_event = &myCallback }), // 5. Fire lifecycle events
    // ... application middleware and routes
},
```

- Health checks run first so they are fast and do not inflate metrics or logs.
- Request ID runs early so the ID is available to the logger and any downstream handlers.
- Metrics wraps downstream handlers to measure their latency accurately.
- Structured logger runs after the request ID is assigned so it can include it in log output.
- Telemetry fires events around the remaining handler chain.
Next steps
- Performance tuning — optimize your server for production workloads
- Server backends — choose and configure the I/O backend
- Middleware — learn how the middleware pipeline works
- Deployment — production deployment strategies