Observability

zzz ships with a suite of middleware for production observability: structured logging, Prometheus-compatible metrics, lifecycle telemetry hooks, request ID propagation, and health check endpoints. All are configured at comptime and compose naturally with the middleware pipeline.

A typical production stack combines several observability middleware. Order matters — place them early in the pipeline so they wrap all downstream handlers.

const App = Router.define(.{
    .middleware = &.{
        health(.{ .path = "/healthz" }),
        requestId(.{}),
        metrics(.{ .path = "/metrics" }),
        structuredLogger(.{ .format = .json }),
    },
    .routes = &.{
        Router.get("/", indexHandler),
        Router.get("/api/users", usersHandler),
    },
});

The structuredLogger middleware logs every request with method, path, status code, response duration, and (when available) the request ID. It supports two output formats and configurable minimum log levels.

pub const StructuredLoggerConfig = struct {
    level: LogLevel = .info,
    format: LogFormat = .text,
};

Option | Type | Default | Description
------ | ---- | ------- | -----------
level | LogLevel | .info | Minimum severity to emit. Requests producing 5xx responses log at err, 4xx at warn, and everything else at info.
format | LogFormat | .text | Output format: .text for human-readable, .json for structured JSON.

The middleware automatically assigns a log level based on the response status code:

Status range | Assigned level
------------ | --------------
5xx | err
4xx | warn
1xx–3xx | info

Only messages at or above the configured minimum level are emitted.
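For example, to emit only warnings and errors in production, raise the minimum level (a configuration sketch using the options above):

```zig
// 2xx/3xx requests log at .info, which is below the .warn threshold,
// so only 4xx and 5xx responses produce output.
structuredLogger(.{ .level = .warn, .format = .json })
```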

structuredLogger(.{ .format = .text })

Output:

[info] GET /api/users -> 200 (142us)
[info] GET /api/users -> 200 (142us) [zzz-a1b2c3-0]

When a request ID is present (from the requestId middleware), it is appended in brackets.

For development, zzz also provides a simpler logger middleware that does not require configuration:

const App = Router.define(.{
    .middleware = &.{logger},
    .routes = &.{ ... },
});

The simple logger automatically formats durations in the most readable unit (microseconds, milliseconds, or seconds).
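The unit selection might look roughly like this (an illustrative sketch, not the library's actual implementation):

```zig
const std = @import("std");

// Illustrative only: pick the most readable unit for a duration
// given in microseconds, as the simple logger does.
fn formatDuration(writer: anytype, duration_us: u64) !void {
    if (duration_us < 1_000) {
        try writer.print("{d}us", .{duration_us});
    } else if (duration_us < 1_000_000) {
        try writer.print("{d}ms", .{duration_us / 1_000});
    } else {
        try writer.print("{d}s", .{duration_us / 1_000_000});
    }
}
```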

The requestId middleware either propagates an existing request ID from an incoming header or generates a new one. The ID is made available to downstream middleware (including the structured logger) via context assigns and is echoed back in the response headers.

pub const RequestIdConfig = struct {
    header_name: []const u8 = "X-Request-Id",
    assign_key: []const u8 = "request_id",
};

Option | Default | Description
------ | ------- | -----------
header_name | "X-Request-Id" | Request/response header name to read and write.
assign_key | "request_id" | Key used in ctx.assigns so other middleware can access the ID.
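Both options can be overridden, for example to match a gateway that sets a different correlation header (the header and key names here are hypothetical):

```zig
// Read/write the gateway's correlation header instead of the default,
// and expose it to downstream middleware under a matching assign key.
requestId(.{ .header_name = "X-Correlation-Id", .assign_key = "correlation_id" })
```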
  1. The middleware checks the incoming request for the configured header (e.g., X-Request-Id).

  2. If the header exists, its value is used as-is. This lets load balancers or API gateways set the ID upstream.

  3. If no header is present, a new ID is generated in the format zzz-{timestamp_hex}-{counter_hex}, using a monotonic clock and an atomic counter to guarantee uniqueness.

  4. The ID is stored in ctx.assigns under the configured key and added to the response headers.
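Generation step 3 can be sketched as follows (illustrative only; a wall-clock timestamp is used here for simplicity where zzz uses a monotonic clock, and the field widths are not zzz's exact internals):

```zig
const std = @import("std");

// Sketch of step 3: zzz-{timestamp_hex}-{counter_hex}. The atomic
// counter keeps IDs unique even when two requests share a timestamp.
var id_counter = std.atomic.Value(u64).init(0);

fn generateRequestId(buf: []u8) ![]const u8 {
    const ts: u64 = @intCast(std.time.milliTimestamp());
    const count = id_counter.fetchAdd(1, .monotonic);
    return std.fmt.bufPrint(buf, "zzz-{x}-{x}", .{ ts, count });
}
```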

fn myHandler(ctx: *Context) !void {
    const req_id = ctx.getAssign("request_id") orelse "unknown";
    // Use req_id for correlation in logs, database records, etc.
    try ctx.text(.ok, "ok");
}

The metrics middleware collects request counts, status code distributions, method breakdowns, and latency statistics. It serves the data in Prometheus exposition format at a configurable endpoint.

pub const MetricsConfig = struct {
    path: []const u8 = "/metrics",
};

Option | Default | Description
------ | ------- | -----------
path | "/metrics" | The URL path where Prometheus metrics are served. Only GET requests to this path trigger the metrics response.
Metric | Type | Description
------ | ---- | -----------
zzz_requests_total | counter | Total number of HTTP requests processed.
zzz_requests_by_status{class="2xx"} | counter | Request count grouped by status class (1xx, 2xx, 3xx, 4xx, 5xx, other).
zzz_requests_by_method{method="GET"} | counter | Request count grouped by HTTP method (GET, HEAD, POST, PUT, DELETE, PATCH, OPTIONS, CONNECT, TRACE).
zzz_request_duration_us_sum | counter | Cumulative request duration in microseconds.
zzz_request_duration_us_count | counter | Number of timed requests (use with _sum to compute average latency).
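A scrape of the metrics endpoint returns Prometheus exposition text along these lines (the counter values, and whether TYPE comments are emitted, are illustrative):

```
# TYPE zzz_requests_total counter
zzz_requests_total 1024
# TYPE zzz_requests_by_status counter
zzz_requests_by_status{class="2xx"} 1010
zzz_requests_by_status{class="5xx"} 3
# TYPE zzz_request_duration_us_sum counter
zzz_request_duration_us_sum 145920
# TYPE zzz_request_duration_us_count counter
zzz_request_duration_us_count 1024
```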

All counters use atomic operations for thread safety.
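The same pattern in user code might look like this (a sketch of lock-free counting, not zzz's internals):

```zig
const std = @import("std");

// fetchAdd is lock-free, so many handler threads can record
// requests concurrently without taking a mutex.
var requests_total = std.atomic.Value(u64).init(0);

fn recordRequest() void {
    _ = requests_total.fetchAdd(1, .monotonic);
}
```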

Point your Prometheus scrape config at the metrics endpoint:

scrape_configs:
  - job_name: 'zzz-app'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8888']
    metrics_path: '/metrics'

With the sum and count metrics, you can compute average request duration in Prometheus:

rate(zzz_request_duration_us_sum[5m]) / rate(zzz_request_duration_us_count[5m])
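The status-class counter supports similar queries; for example, the 5xx error ratio over the same window, using the metric names listed above:

```
rate(zzz_requests_by_status{class="5xx"}[5m]) / rate(zzz_requests_total[5m])
```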

The telemetry middleware fires structured lifecycle events at request start and end. This is useful for custom tracing integrations, APM tools, or audit logging.

pub const TelemetryConfig = struct {
    on_event: *const fn (TelemetryEvent) void,
};

You provide a single callback function that receives a TelemetryEvent tagged union:

pub const TelemetryEvent = union(enum) {
    request_start: RequestStart,
    request_end: RequestEnd,

    pub const RequestStart = struct {
        method: Method,
        path: []const u8,
        timestamp_ns: i128,
    };

    pub const RequestEnd = struct {
        method: Method,
        path: []const u8,
        status: u16,
        duration_ns: i128,
        timestamp_ns: i128,
    };
};

fn onTelemetryEvent(event: TelemetryEvent) void {
    switch (event) {
        .request_start => |e| {
            std.log.debug("START {s} {s}", .{ @tagName(e.method), e.path });
        },
        .request_end => |e| {
            const duration_us: u64 = @intCast(@divTrunc(e.duration_ns, 1000));
            std.log.debug("END {s} {s} -> {d} ({d}us)", .{
                @tagName(e.method), e.path, e.status, duration_us,
            });
        },
    }
}

const App = Router.define(.{
    .middleware = &.{
        telemetry(.{ .on_event = &onTelemetryEvent }),
    },
    .routes = &.{ ... },
});

Because the callback is a comptime-known function pointer, the compiler can inline the event dispatch path.

The health middleware responds to GET requests at a configurable path with a JSON health status. All other requests pass through to the next handler.

pub const HealthConfig = struct {
    path: []const u8 = "/health",
};

Option | Default | Description
------ | ------- | -----------
path | "/health" | The URL path for the health endpoint. Common alternatives: "/healthz", "/ready".

A GET to the health path returns:

{"status":"ok"}

with a 200 OK status and application/json content type.

Use different paths for liveness and readiness probes:

const App = Router.define(.{
    .middleware = &.{
        health(.{ .path = "/healthz" }), // liveness probe
        health(.{ .path = "/readyz" }),  // readiness probe
    },
    .routes = &.{ ... },
});

The matching Kubernetes probe configuration:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8888
readinessProbe:
  httpGet:
    path: /readyz
    port: 8888

The order in which observability middleware is stacked affects what data is captured:

.middleware = &.{
    health(.{ .path = "/healthz" }),         // 1. Short-circuit health checks early
    requestId(.{}),                          // 2. Generate/propagate ID before logging
    metrics(.{ .path = "/metrics" }),        // 3. Collect metrics (wraps everything below)
    structuredLogger(.{ .format = .json }),  // 4. Log with request ID and timing
    telemetry(.{ .on_event = &myCallback }), // 5. Fire lifecycle events
    // ... application middleware and routes
},
  1. Health checks run first so they are fast and do not inflate metrics or logs.

  2. Request ID runs early so the ID is available to the logger and any downstream handlers.

  3. Metrics wraps downstream handlers to measure their latency accurately.

  4. Structured logger runs after the request ID is assigned so it can include it in log output.

  5. Telemetry fires events around the remaining handler chain.