
Server Backends

zzz uses a pluggable backend architecture that lets you swap out the underlying I/O strategy at build time. The two available backends — zzz (native, the default) and libhv (event-loop) — each have different strengths depending on your target platform and workload.

The backend selection happens entirely at compile time. The file src/core/backend.zig inspects a build option and resolves to one of two implementations:

```zig
pub const SelectedBackend = blk: {
    if (std.mem.eql(u8, backend_name, "libhv")) {
        break :blk @import("backends/libhv.zig");
    } else {
        break :blk @import("backends/zzz.zig");
    }
};
```
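On the build side, a `-Dbackend` string option wired through an options module would produce the `backend_name` that the comptime block above reads. A minimal sketch (zzz's actual build.zig wiring may differ):

```zig
// build.zig (sketch): declare -Dbackend and expose the choice to source
// code as a comptime-known string.
const std = @import("std");

pub fn build(b: *std.Build) void {
    // -Dbackend=zzz|libhv, defaulting to the native backend.
    const backend = b.option([]const u8, "backend", "I/O backend: zzz or libhv") orelse "zzz";

    // backend.zig can then read @import("build_options").backend_name.
    const options = b.addOptions();
    options.addOption([]const u8, "backend_name", backend);
    // ... create the executable and attach `options` as the "build_options" module ...
}
```

Because the string is comptime-known, the unselected backend is never analyzed or compiled into the binary.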

Both backends expose the same public interface — a listen(server, io) function — so the Server struct delegates to whichever backend is active without any runtime branching.

The native backend uses Zig 0.16’s std.Io networking layer with a thread-pool + bounded queue architecture:

  • A single acceptor thread calls accept() in a loop and pushes new connections onto a bounded queue.
  • A pool of worker threads pops connections from the queue and handles them.
  • Back-pressure is built in: when the queue is full, the acceptor blocks until a worker frees a slot.
  • The bounded queue uses POSIX pthread_mutex_t and pthread_cond_t for efficient blocking.
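The acceptor/worker handoff above can be sketched with Zig's standard synchronization primitives. zzz itself uses POSIX `pthread_mutex_t`/`pthread_cond_t` directly; this generic queue is an illustration of the same blocking behavior, not zzz's actual type:

```zig
const std = @import("std");

/// Illustrative bounded MPMC queue. push() blocks when full (back-pressure
/// on the acceptor); pop() blocks when empty (idle workers sleep).
fn BoundedQueue(comptime T: type, comptime capacity: usize) type {
    return struct {
        mutex: std.Thread.Mutex = .{},
        not_full: std.Thread.Condition = .{},
        not_empty: std.Thread.Condition = .{},
        items: [capacity]T = undefined,
        head: usize = 0,
        len: usize = 0,

        const Self = @This();

        pub fn push(self: *Self, item: T) void {
            self.mutex.lock();
            defer self.mutex.unlock();
            // Acceptor blocks here until a worker frees a slot.
            while (self.len == capacity) self.not_full.wait(&self.mutex);
            self.items[(self.head + self.len) % capacity] = item;
            self.len += 1;
            self.not_empty.signal();
        }

        pub fn pop(self: *Self) T {
            self.mutex.lock();
            defer self.mutex.unlock();
            // Worker blocks here until the acceptor enqueues a connection.
            while (self.len == 0) self.not_empty.wait(&self.mutex);
            const item = self.items[self.head];
            self.head = (self.head + 1) % capacity;
            self.len -= 1;
            self.not_full.signal();
            return item;
        }
    };
}
```

The `while` loops around each wait guard against spurious wakeups, mirroring the standard condition-variable pattern the pthread primitives require.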

This backend works on all platforms supported by Zig’s std.Io (Linux, macOS, BSDs, Windows).

The libhv backend wraps libhv, a C event-loop library that uses the most efficient I/O multiplexer available on the current platform:

| Platform | I/O multiplexer |
| --- | --- |
| Linux | epoll |
| macOS / BSD | kqueue |
| Windows | IOCP (overlapped I/O) |

libhv uses a single-threaded event loop driven by callbacks (onAccept, onRead, onClose). Per-connection state is attached to each hio_t handle. This backend also provides built-in timer APIs and WebSocket heartbeat (ping) support through libhv’s event loop.
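Stripped to its core, that callback wiring looks roughly like the following. The extern signatures are transcribed from libhv's `hloop.h`, but the Zig binding shape shown here is an assumption, not zzz's actual code:

```zig
// Sketch of the libhv event-loop wiring through extern bindings.
const hloop_t = opaque {};
const hio_t = opaque {};

extern fn hloop_new(flags: c_int) ?*hloop_t;
extern fn hloop_run(loop: *hloop_t) c_int;
extern fn hloop_create_tcp_server(
    loop: *hloop_t,
    host: [*:0]const u8,
    port: c_int,
    accept_cb: *const fn (io: *hio_t) callconv(.c) void,
) ?*hio_t;
extern fn hio_setcb_read(
    io: *hio_t,
    read_cb: *const fn (io: *hio_t, buf: ?*anyopaque, len: c_int) callconv(.c) void,
) void;
extern fn hio_read(io: *hio_t) c_int;

fn onAccept(io: *hio_t) callconv(.c) void {
    // Attach per-connection state to the hio_t handle here, then start reading.
    hio_setcb_read(io, onRead);
    _ = hio_read(io);
}

fn onRead(io: *hio_t, buf: ?*anyopaque, len: c_int) callconv(.c) void {
    _ = buf;
    _ = len;
    _ = io; // parse request bytes and write the response here
}

pub fn run() void {
    const loop = hloop_new(0) orelse return;
    _ = hloop_create_tcp_server(loop, "127.0.0.1", 8888, onAccept);
    _ = hloop_run(loop); // blocks, dispatching callbacks until hloop_stop()
}
```

Because everything runs on one thread inside `hloop_run`, callbacks must never block; long-running work has to be handed off or the whole loop stalls.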

The backend is chosen at build time via the -Dbackend option.

```sh
zig build run
# or explicitly:
zig build run -Dbackend=zzz
```

The build system compiles the necessary C sources for libhv only when backend=libhv is specified, including platform-specific files (epoll.c on Linux, kqueue.c on macOS/BSD).

Both backends share the same Config struct defined in src/core/server.zig:

```zig
pub const Config = struct {
    host: []const u8 = "127.0.0.1",
    port: u16 = 8888,
    max_body_size: usize = 1024 * 1024, // 1 MB
    max_header_size: usize = 16384, // 16 KB
    read_timeout_ms: u32 = 30_000, // 30 s
    write_timeout_ms: u32 = 30_000, // 30 s
    keepalive_timeout_ms: u32 = 65_000, // 65 s
    worker_threads: u16 = 4, // 0 = single-threaded (zzz backend)
    max_connections: u32 = 1024,
    max_requests_per_connection: u32 = 100,
    kernel_backlog: u31 = 128,
    tls: ?TlsConfig = null,
};
```
| Option | Description | Default |
| --- | --- | --- |
| `host` | Bind address | `"127.0.0.1"` |
| `port` | Bind port | 8888 |
| `max_body_size` | Maximum request body in bytes | 1 MB |
| `max_header_size` | Maximum header block in bytes | 16 KB |
| `read_timeout_ms` | Read timeout per request | 30 s |
| `write_timeout_ms` | Write (send) timeout | 30 s |
| `keepalive_timeout_ms` | Keep-alive idle timeout | 65 s |
| `worker_threads` | Worker thread count (zzz backend; 0 = single-threaded) | 4 |
| `max_connections` | Bounded queue capacity (zzz backend) | 1024 |
| `max_requests_per_connection` | Max pipelined requests per connection | 100 |
| `kernel_backlog` | TCP `listen()` backlog | 128 |
| `tls` | Optional TLS certificate/key paths for HTTPS | `null` |

Each backend also defines a BackendConfig struct for fine-tuning:

```zig
pub const BackendConfig = struct {
    pool_size: u16 = 0, // 0 = auto-detect CPU count
    queue_capacity: u32 = 1024, // bounded queue size
};
```

Both backends support TLS. Pass certificate and key file paths through the tls field:

```zig
const config: Server.Config = .{
    .port = 443,
    .tls = .{
        .cert_file = "/etc/ssl/certs/server.crt",
        .key_file = "/etc/ssl/private/server.key",
    },
};
```

The native zzz backend initializes an OpenSSL SSL_CTX and wraps each connection in TLS readers/writers. The libhv backend uses libhv’s built-in SSL support (hloop_create_ssl_server).

Both backends respond to SIGINT and SIGTERM signals:

  1. The signal handler sets a shutdown_flag (atomic bool) on the server.
  2. The accept loop stops accepting new connections.
  3. Active connections are given up to 10 seconds to drain via drainConnections().
  4. Worker threads (zzz) or the event loop (libhv) are shut down cleanly.

The libhv backend additionally calls hloop_stop() to break out of the event loop.
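The flag-based handoff in steps 1 and 2 can be sketched as follows. The names are illustrative (the real flag lives on zzz's server struct), but the pattern is the standard one: the signal handler does nothing except flip an atomic, and normal threads observe it:

```zig
const std = @import("std");

// Illustrative shutdown flag; in zzz this is a field on the server.
var shutdown_flag = std.atomic.Value(bool).init(false);

// The only async-signal-safe work in a handler is storing the flag;
// draining and joining happen later on ordinary threads.
fn requestShutdown() void {
    shutdown_flag.store(true, .release);
}

// The acceptor checks this between accept() calls, so step 2
// (stop accepting new connections) needs no extra coordination.
fn acceptLoopShouldContinue() bool {
    return !shutdown_flag.load(.acquire);
}
```

Keeping the handler down to a single atomic store is what makes the shutdown path safe to trigger from any thread at any point, including mid-`accept()`.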

| Consideration | zzz (native) | libhv |
| --- | --- | --- |
| Architecture | Thread pool + bounded queue | Single-threaded event loop |
| Concurrency model | OS threads | Callbacks (non-blocking) |
| Platform I/O | std.Io (portable) | epoll / kqueue / IOCP |
| WebSocket heartbeat | Manual | Built-in via `hio_set_heartbeat` |
| Timer API | Not included | `addTimer` / `removeTimer` / `resetTimer` |
| Dependencies | None (pure Zig) | Vendored libhv C library |
| Best for | Simple deployments, CPU-bound handlers | High connection counts, I/O-heavy workloads |

For most applications, the default zzz backend is a solid choice. Switch to libhv when you need platform-native I/O multiplexing or built-in timers, or when you're handling a large number of concurrent WebSocket connections.

  • Performance tuning — optimize buffer sizes, compression, and connection settings
  • Observability — add structured logging, metrics, and health checks
  • Deployment — production deployment strategies
  • Docker — containerize your zzz application