Learn Zig Series (#14) - Generics with Comptime Parameters

What will I learn
- You will learn how Zig implements generics using comptime parameters instead of template syntax;
- writing functions that accept comptime T: type for type-generic code;
- returning types from functions to build generic data structures;
- the @This() builtin for self-referential generic types;
- comptime type constraints and validation with @typeInfo;
- comptime duck typing with @hasDecl and @hasField;
- how std.ArrayList, std.HashMap, and std.BoundedArray use this pattern;
- monomorphization: each comptime instantiation generates specialized machine code;
- when generics add value vs when concrete types are simpler.
Requirements
- A working modern computer running macOS, Windows or Ubuntu;
- An installed Zig 0.14+ distribution (download from ziglang.org);
- The ambition to learn Zig programming.
Difficulty
- Intermediate
Curriculum (of the Learn Zig Series):
- Zig Programming Tutorial - ep001 - Intro
- Learn Zig Series (#2) - Hello Zig, Variables and Types
- Learn Zig Series (#3) - Functions and Control Flow
- Learn Zig Series (#4) - Error Handling (Zig's Best Feature)
- Learn Zig Series (#5) - Arrays, Slices, and Strings
- Learn Zig Series (#6) - Structs, Enums, and Tagged Unions
- Learn Zig Series (#7) - Memory Management and Allocators
- Learn Zig Series (#8) - Pointers and Memory Layout
- Learn Zig Series (#9) - Comptime (Zig's Superpower)
- Learn Zig Series (#10) - Project Structure, Modules, and File I/O
- Learn Zig Series (#11) - Mini Project: Building a Step Sequencer
- Learn Zig Series (#12) - Testing and Test-Driven Development
- Learn Zig Series (#13) - Interfaces via Type Erasure
- Learn Zig Series (#14) - Generics with Comptime Parameters (this post)
Learn Zig Series (#14) - Generics with Comptime Parameters
Welcome back! In episode #13 we explored Zig's approach to runtime polymorphism -- type erasure with *anyopaque and function pointers (or vtable structs). We built a Writer interface, a Logger with multiple backends, saw how std.mem.Allocator uses the exact same pattern under the hood, and discussed when runtime dispatch is the right tool. At the end I mentioned that Zig gives you a second way to write generic code -- one that happens entirely at compile time with zero runtime cost.
That's what we're covering now. And honestly, if you understood comptime from ep009, you already understand 90% of Zig's generics system. Because Zig doesn't HAVE a generics system. It has comptime. And comptime IS generics.
Most languages bolt on generics as a separate feature with its own syntax: Java has <T> angle brackets with type erasure at runtime, C++ has template<typename T> with complicated SFINAE rules for constraining types, Rust has <T: Trait> with trait bounds. Zig has... functions that take comptime parameters. That's the whole thing. No angle brackets. No where clauses. No special template syntax. Just the same comptime keyword you already know, applied to function parameters.
A function that takes comptime T: type is generic. A function that returns a type is a generic type constructor. Together these two patterns cover everything that generics do in other languages -- and they're simpler, more explicit, and more powerful because you have the full Zig language available at compile time to express constraints, generate code, and validate types.
Here we go!
Solutions to Episode 13 Exercises
Before we get into generics, here are the solutions to last episode's exercises. If you wrote these yourself, compare your approach:
Exercise 1 -- Reader interface:
const std = @import("std");
const testing = std.testing;
const Reader = struct {
ptr: *anyopaque,
readFn: *const fn (*anyopaque, []u8) anyerror!usize,
fn read(self: Reader, buffer: []u8) anyerror!usize {
return self.readFn(self.ptr, buffer);
}
};
const SliceReader = struct {
data: []const u8,
pos: usize = 0,
fn readImpl(ctx: *anyopaque, buffer: []u8) anyerror!usize {
const self: *SliceReader = @ptrCast(@alignCast(ctx));
const remaining = self.data[self.pos..];
const n = @min(buffer.len, remaining.len);
@memcpy(buffer[0..n], remaining[0..n]);
self.pos += n;
return n;
}
fn reader(self: *SliceReader) Reader {
return .{ .ptr = @ptrCast(self), .readFn = &readImpl };
}
};
const ZeroReader = struct {
fn readImpl(_: *anyopaque, buffer: []u8) anyerror!usize {
@memset(buffer, 0);
return buffer.len;
}
fn reader(self: *ZeroReader) Reader {
return .{ .ptr = @ptrCast(self), .readFn = &readImpl };
}
};
test "SliceReader reads sequentially" {
var sr = SliceReader{ .data = "hello world" };
var r = sr.reader();
var buf: [5]u8 = undefined;
const n1 = try r.read(&buf);
try testing.expectEqual(@as(usize, 5), n1);
try testing.expectEqualStrings("hello", &buf);
const n2 = try r.read(&buf);
try testing.expectEqual(@as(usize, 5), n2);
try testing.expectEqualStrings(" worl", &buf);
}
test "ZeroReader fills with zeroes" {
var zr = ZeroReader{};
var r = zr.reader();
var buf: [4]u8 = .{ 0xFF, 0xFF, 0xFF, 0xFF };
_ = try r.read(&buf);
try testing.expectEqual([4]u8{ 0, 0, 0, 0 }, buf);
}
Same pattern as Writer from the episode, reversed. The vtable dispatch through *anyopaque erases the concrete type -- SliceReader and ZeroReader are both just Reader from the caller's perspective. Runtime polymorphism without inheritance.
Exercise 2 -- Logger with levels:
const Level = enum {
info,
warn,
err,
fn prefix(self: Level) []const u8 {
return switch (self) {
.info => "[INFO] ",
.warn => "[WARN] ",
.err => "[ERR] ",
};
}
};
const Logger = struct {
ptr: *anyopaque,
logFn: *const fn (*anyopaque, Level, []const u8) void,
fn log(self: Logger, level: Level, msg: []const u8) void {
self.logFn(self.ptr, level, msg);
}
};
const BufferLogger = struct {
buf: [1024]u8 = undefined,
len: usize = 0,
fn logImpl(ctx: *anyopaque, level: Level, msg: []const u8) void {
const self: *BufferLogger = @ptrCast(@alignCast(ctx));
const pfx = level.prefix();
if (self.len + pfx.len + msg.len + 1 <= self.buf.len) {
@memcpy(self.buf[self.len..][0..pfx.len], pfx);
self.len += pfx.len;
@memcpy(self.buf[self.len..][0..msg.len], msg);
self.len += msg.len;
self.buf[self.len] = '\n';
self.len += 1;
}
}
fn logger(self: *BufferLogger) Logger {
return .{ .ptr = @ptrCast(self), .logFn = &logImpl };
}
fn contents(self: *const BufferLogger) []const u8 {
return self.buf[0..self.len];
}
};
test "logger with levels" {
var bl = BufferLogger{};
const lg = bl.logger();
lg.log(.info, "started");
lg.log(.warn, "low memory");
lg.log(.err, "disk full");
const out = bl.contents();
try testing.expect(std.mem.indexOf(u8, out, "[INFO] started") != null);
try testing.expect(std.mem.indexOf(u8, out, "[WARN] low memory") != null);
try testing.expect(std.mem.indexOf(u8, out, "[ERR] disk full") != null);
}
The BufferLogger.logImpl prepends the level prefix before the message. Testing is easy -- log at different levels, check the buffer contains the right prefixes.
Exercise 3 -- Hasher interface:
const Hasher = struct {
ptr: *anyopaque,
updateFn: *const fn (*anyopaque, []const u8) void,
finalFn: *const fn (*anyopaque) u64,
fn update(self: Hasher, bytes: []const u8) void {
self.updateFn(self.ptr, bytes);
}
fn final_(self: Hasher) u64 {
return self.finalFn(self.ptr);
}
};
const Djb2Hasher = struct {
hash: u64 = 5381,
fn updateImpl(ctx: *anyopaque, bytes: []const u8) void {
const self: *Djb2Hasher = @ptrCast(@alignCast(ctx));
for (bytes) |byte| {
self.hash = self.hash *% 33 +% byte;
}
}
fn finalImpl(ctx: *anyopaque) u64 {
const self: *Djb2Hasher = @ptrCast(@alignCast(ctx));
return self.hash;
}
fn hasher(self: *Djb2Hasher) Hasher {
return .{ .ptr = @ptrCast(self), .updateFn = &updateImpl, .finalFn = &finalImpl };
}
};
const Fnv1aHasher = struct {
hash: u64 = 0xcbf29ce484222325,
fn updateImpl(ctx: *anyopaque, bytes: []const u8) void {
const self: *Fnv1aHasher = @ptrCast(@alignCast(ctx));
for (bytes) |byte| {
self.hash ^= byte;
self.hash *%= 0x100000001b3;
}
}
fn finalImpl(ctx: *anyopaque) u64 {
const self: *Fnv1aHasher = @ptrCast(@alignCast(ctx));
return self.hash;
}
fn hasher(self: *Fnv1aHasher) Hasher {
return .{ .ptr = @ptrCast(self), .updateFn = &updateImpl, .finalFn = &finalImpl };
}
};
test "hashers produce consistent output" {
var d1 = Djb2Hasher{};
var d2 = Djb2Hasher{};
d1.hasher().update("hello");
d2.hasher().update("hello");
try testing.expectEqual(d1.hash, d2.hash); // same input = same output
}
test "different algorithms produce different hashes" {
var d = Djb2Hasher{};
var f = Fnv1aHasher{};
d.hasher().update("hello");
f.hasher().update("hello");
try testing.expect(d.hash != f.hash); // different algorithms
}
The *% and +% operators are wrapping arithmetic -- they overflow silently, which is exactly what hash functions need. Both hashers implement the same Hasher interface. Tests verify consistency (same input = same output) and distinctness (different algorithms = different hashes).
Exercise 4 -- tracing allocator.alloc through the vtable:
// The call chain:
//
// 1. allocator.alloc(u8, 100)
// -- calls self.vtable.alloc(self.ptr, 100, alignment, @returnAddress())
//
// 2. self.vtable is a *const std.mem.Allocator.VTable
// -- the GPA fills this in at comptime:
// .alloc = GeneralPurposeAllocator.alloc,
//
// 3. GeneralPurposeAllocator.alloc(gpa_ptr, 100, alignment, ret_addr)
// -- tracks the allocation in its internal metadata table
// -- calls the backing allocator (usually page_allocator)
// -- records address, size, stack trace
// -- returns []u8 slice
//
// 4. On free: same chain in reverse
// allocator.free(buf) -> vtable.free -> GPA.free
// -- looks up metadata, verifies size matches, returns pages
//
// 5. On deinit: GPA scans metadata for un-freed allocations
// -- if any found: reports leak with stack trace
//
// The vtable is a const pointer -- one function pointer call,
// zero overhead beyond the indirection.
This isn't a program to run -- it's a source-reading exercise. Open lib/std/mem.zig and lib/std/heap/general_purpose_allocator.zig in the Zig standard library. Trace the call from alloc through the vtable to the GPA's implementation. Understanding this call chain is understanding how ALL Zig interfaces work -- the standard library uses the same ptr + vtable pattern everywhere.
Exercise 5 -- refactor storage.zig to use Writer interface:
// Before (direct file I/O):
pub fn save(seq: *const Sequencer, path: []const u8) !void {
const file = try std.fs.cwd().createFile(path, .{});
defer file.close();
const writer = file.writer();
// ... write to writer ...
}
// After (Writer interface):
const Writer = struct {
ptr: *anyopaque,
writeFn: *const fn (*anyopaque, []const u8) anyerror!void,
fn write(self: Writer, data: []const u8) anyerror!void {
return self.writeFn(self.ptr, data);
}
};
pub fn save(seq: *const Sequencer, writer: Writer) !void {
for (0..NUM_TRACKS) |t| {
for (0..NUM_STEPS) |s| {
try writer.write(if (seq.grid[t][s]) "X" else ".");
}
try writer.write("\n");
}
}
// Production: pass a FileWriter
// Tests: pass a BufferWriter and inspect the bytes -- no temp files
The function no longer knows about files. In production you pass a FileWriter, in tests you pass a BufferWriter and check the output bytes. No filesystem, no temp files, no cleanup. This is dependency inversion -- the same principle that makes the Logger testable applies to storage.
Exercise 6 -- Middleware pattern (decorator):
const TimestampLogger = struct {
inner: Logger,
fn logImpl(ctx: *anyopaque, level: Level, msg: []const u8) void {
const self: *TimestampLogger = @ptrCast(@alignCast(ctx));
// In a real implementation you'd format the actual time from
// std.time.timestamp(); a fixed string keeps the test deterministic:
const ts = "[2026-04-08T12:00:00] ";
// Prepend the timestamp, then forward to the inner logger.
var buf: [256]u8 = undefined;
const combined = std.fmt.bufPrint(&buf, "{s}{s}", .{ ts, msg }) catch msg;
self.inner.log(level, combined);
}
fn logger(self: *TimestampLogger) Logger {
return .{ .ptr = @ptrCast(self), .logFn = &logImpl };
}
};
test "middleware chaining" {
var buf_logger = BufferLogger{};
var ts_logger = TimestampLogger{ .inner = buf_logger.logger() };
const lg = ts_logger.logger();
lg.log(.info, "server started");
lg.log(.warn, "high load");
const out = buf_logger.contents();
try testing.expect(std.mem.indexOf(u8, out, "server started") != null);
try testing.expect(std.mem.indexOf(u8, out, "high load") != null);
}
Same interface in, same interface out, behavior added in between. TimestampLogger wraps any Logger and forwards to it after prepending a timestamp. Chain multiple middlewares: BufferLogger wrapped by TimestampLogger wrapped by RateLimitLogger -- each one is transparent. This is the decorator pattern, and it falls out naturally from Zig's vtable interfaces.
Now -- generics!
Generic functions with comptime parameters
The simplest form of generic code in Zig is a function that takes comptime T: type:
const std = @import("std");
const testing = std.testing;
fn max(comptime T: type, a: T, b: T) T {
return if (a > b) a else b;
}
test "generic max with integers" {
try testing.expectEqual(@as(i32, 5), max(i32, 3, 5));
try testing.expectEqual(@as(u8, 255), max(u8, 100, 255));
}
test "generic max with floats" {
try testing.expectApproxEqAbs(@as(f64, 3.14), max(f64, 2.71, 3.14), 0.001);
}
comptime T: type means "this parameter is a type, and it must be known at compile time". The compiler generates a separate specialized version of max for each type you call it with. max(i32, 3, 5) and max(f64, 2.71, 3.14) are two completely different functions in the binary -- the i32 version uses integer comparison, the f64 version uses floating-point comparison. Zero runtime overhead. The type parameter is resolved entirely at compile time and doesn't exist in the generated machine code.
This is monomorphization -- the same approach C++ templates and Rust generics use. Each unique set of comptime arguments produces a new specialization. The difference is that Zig makes this explicit: you can see the comptime keyword and know exactly what's happening. No implicit template instantiation rules, no hidden code generation.
Here's a slightly more interesting example -- a generic swap:
fn swap(comptime T: type, a: *T, b: *T) void {
const tmp = a.*;
a.* = b.*;
b.* = tmp;
}
test "swap integers" {
var x: i32 = 10;
var y: i32 = 20;
swap(i32, &x, &y);
try testing.expectEqual(@as(i32, 20), x);
try testing.expectEqual(@as(i32, 10), y);
}
test "swap strings" {
var a: []const u8 = "hello";
var b: []const u8 = "world";
swap([]const u8, &a, &b);
try testing.expectEqualStrings("world", a);
try testing.expectEqualStrings("hello", b);
}
The swap function works with any type -- integers, floats, slices, structs, pointers. The compiler generates a version for each type used. The i32 version copies 4 bytes. The []const u8 version copies a slice (pointer + length, 16 bytes on 64-bit). Each specialization is optimally compiled for its specific type.
Returning types: generic data structures
This is where it gets powerful. In Zig, a function can return a type. A function that takes comptime T: type and returns a type is a generic type constructor -- the equivalent of ArrayList<T> in Java or Vec<T> in Rust:
const std = @import("std");
const testing = std.testing;
fn Stack(comptime T: type) type {
return struct {
items: std.ArrayList(T),
const Self = @This();
pub fn init(allocator: std.mem.Allocator) Self {
return .{ .items = std.ArrayList(T).init(allocator) };
}
pub fn deinit(self: *Self) void {
self.items.deinit();
}
pub fn push(self: *Self, val: T) !void {
try self.items.append(val);
}
pub fn pop(self: *Self) !T {
// In Zig 0.14, ArrayList.pop returns ?T; map "empty" to an error.
return self.items.pop() orelse error.Empty;
}
pub fn peek(self: Self) !T {
if (self.items.items.len == 0) return error.Empty;
return self.items.items[self.items.items.len - 1];
}
pub fn size(self: Self) usize {
return self.items.items.len;
}
};
}
test "Stack(i32) push and pop" {
var stack = Stack(i32).init(testing.allocator);
defer stack.deinit();
try stack.push(10);
try stack.push(20);
try stack.push(30);
try testing.expectEqual(@as(i32, 30), try stack.pop());
try testing.expectEqual(@as(i32, 20), try stack.pop());
try testing.expectEqual(@as(usize, 1), stack.size());
}
test "Stack([]const u8) works with strings" {
var stack = Stack([]const u8).init(testing.allocator);
defer stack.deinit();
try stack.push("hello");
try stack.push("world");
try testing.expectEqualStrings("world", try stack.pop());
try testing.expectEqualStrings("hello", try stack.pop());
}
Stack(i32) and Stack([]const u8) are different types. Each is a fully specialized struct with no runtime type information. Stack(i32) wraps an ArrayList(i32). Stack([]const u8) wraps an ArrayList([]const u8). The compiler generates completely separate code for each. No type erasure, no boxing, no runtime dispatch.
@This() is the builtin that gives you a reference to the struct being defined. Inside the anonymous struct returned by Stack(T), @This() evaluates to that specific struct type. We bind it to const Self by convention so we can use it in method signatures. Without @This(), you'd have no way to refer to the struct from inside itself -- because it's anonymous (it has no name, it's just whatever Stack(T) returns).
If you remember the Stack we built during the TDD exercise in ep012 -- this is the same code. Now you understand what's happening under the hood. Stack(i32) calls the Stack function at compile time, which returns a new struct type specialized for i32. The comptime T: type parameter is substituted throughout the returned struct definition.
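One consequence worth seeing directly: comptime function calls are memoized, so calling a type constructor twice with the same argument gives back the *same* type, while different arguments give distinct, incompatible types. A minimal sketch (Box is a hypothetical stand-in for Stack, kept tiny so the identity check is the whole point):

```zig
const std = @import("std");

// Hypothetical minimal type constructor, standing in for Stack(T).
fn Box(comptime T: type) type {
    return struct { value: T };
}

test "comptime instantiation is memoized per argument" {
    // Same comptime argument: the memoized instantiation comes back.
    try std.testing.expect(Box(i32) == Box(i32));
    // Different comptime arguments: distinct types.
    try std.testing.expect(Box(i32) != Box(u8));
}
```

This memoization is why Stack(i32) values from different files in your program are assignable to each other: every call site resolves to the one cached instantiation.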
How the standard library uses this
This pattern isn't some advanced trick -- it's how the entire standard library is built. Look at std.ArrayList:
// Simplified from the actual source
pub fn ArrayList(comptime T: type) type {
return ArrayListAligned(T, null);
}
pub fn ArrayListAligned(comptime T: type, comptime alignment: ?u29) type {
return struct {
items: Slice,
capacity: usize,
allocator: std.mem.Allocator,
const Self = @This();
const Slice = if (alignment) |a| ([]align(a) T) else []T;
pub fn init(allocator: std.mem.Allocator) Self {
return .{
.items = &[_]T{},
.capacity = 0,
.allocator = allocator,
};
}
pub fn append(self: *Self, item: T) !void {
// ... grow if needed, then store item
}
// ... (more methods)
};
}
It's a function that returns a struct. The struct is specialized for the element type T and an optional alignment. Every ArrayList(u8) in your program shares the same generated code. Every ArrayList(MyStruct) gets its own specialized version. The ArrayListAligned variant adds a second comptime parameter for custom alignment -- showing that you can have multiple comptime parameters, each constraining a different aspect of the generated type.
std.HashMap follows the same pattern with more parameters:
pub fn HashMap(
comptime K: type,
comptime V: type,
comptime Context: type,
comptime max_load_percentage: u64,
) type {
return struct {
// ... specialized for K, V, Context, max_load_percentage
};
}
Four comptime parameters: key type, value type, hash context type, and maximum load factor. Each unique combination generates a distinct struct type. This is why when you write std.StringHashMap(i32) (a convenience wrapper that fills in a string-hashing Context and the default load factor), you get a hash map that's fully specialized for string keys and integer values -- the hash function, equality check, and bucket layout are all baked in at compile time.
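As a quick sketch of what a fully specialized instantiation feels like in use -- assuming std.StringHashMap, the standard-library convenience wrapper that supplies a string-hashing context:

```zig
const std = @import("std");

test "StringHashMap: one specialized instantiation in use" {
    // The wrapper fills in the Context and load-factor parameters;
    // the resulting type is specialized for []const u8 keys and i32 values.
    var map = std.StringHashMap(i32).init(std.testing.allocator);
    defer map.deinit();

    try map.put("one", 1);
    try map.put("two", 2);

    try std.testing.expectEqual(@as(i32, 2), map.get("two").?);
    try std.testing.expect(map.get("three") == null);
}
```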
Type constraints with @typeInfo
Generic functions should reject types they can't handle -- and Zig gives you compile-time type inspection to do this. Unlike Rust's trait bounds or C++'s concepts (which are separate constraint systems), Zig uses the same comptime logic you already know:
fn sum(comptime T: type, items: []const T) T {
switch (@typeInfo(T)) {
.int, .float => {},
else => @compileError(
"sum requires a numeric type, got " ++ @typeName(T)
),
}
var total: T = 0;
for (items) |item| {
total += item;
}
return total;
}
test "sum integers" {
const nums = [_]i32{ 1, 2, 3, 4, 5 };
try testing.expectEqual(@as(i32, 15), sum(i32, &nums));
}
test "sum floats" {
const vals = [_]f64{ 1.5, 2.5, 3.0 };
try testing.expectApproxEqAbs(@as(f64, 7.0), sum(f64, &vals), 0.001);
}
// This would fail at compile time with a clear error:
// sum([]const u8, ...)
// -> "sum requires a numeric type, got []const u8"
@typeInfo(T) returns a tagged union describing the type's structure (we saw this briefly in ep009). Switching on it lets you accept only the types you can handle and reject everything else with a descriptive compile error. No trait bounds syntax to learn. No where clauses. Just a switch statement.
The error message is custom and clear -- "sum requires a numeric type, got []const u8" -- not some cryptic template substitution failure like you'd get in C++. This is one of Zig's biggest wins over C++ templates: when something goes wrong, the error tells you WHAT went wrong and WHY, in your own words.
You can get much more specific with type inspection:
fn safeDiv(comptime T: type, a: T, b: T) !T {
const info = @typeInfo(T);
switch (info) {
.int => {
if (b == 0) return error.DivisionByZero;
if (info.int.signedness == .signed) {
// Check for overflow: MIN_INT / -1
const min_val = std.math.minInt(T);
if (a == min_val and b == -1) return error.Overflow;
}
return @divTrunc(a, b);
},
.float => {
if (b == 0) return error.DivisionByZero;
return a / b;
},
else => @compileError("safeDiv requires numeric type"),
}
}
test "safeDiv handles integer edge cases" {
try testing.expectEqual(@as(i32, 5), try safeDiv(i32, 10, 2));
try testing.expectError(error.DivisionByZero, safeDiv(i32, 10, 0));
try testing.expectError(
error.Overflow,
safeDiv(i8, std.math.minInt(i8), -1),
);
}
test "safeDiv handles floats" {
const result = try safeDiv(f64, 10.0, 3.0);
try testing.expectApproxEqAbs(@as(f64, 3.333), result, 0.01);
}
The info.int.signedness field tells you whether the integer type is signed or unsigned. For signed integers, dividing MIN_INT by -1 would overflow (because the result would be MAX_INT + 1), so we check for that specific case. For unsigned integers that check isn't needed. The compiler sees the if (info.int.signedness == .signed) branch is dead code for unsigned types and eliminates it entirely. Zero runtime cost for the constraint check.
Comptime duck typing with @hasDecl and @hasField
Sometimes you don't want to restrict by type category -- you want to check if a type has a specific method or field. This is compile-time duck typing: "if it has a serialize method, call it."
fn serialize(comptime T: type, value: T, writer: anytype) !void {
if (@hasDecl(T, "serialize")) {
try value.serialize(writer);
} else if (@hasDecl(T, "format")) {
try writer.print("{}", .{value});
} else {
// Fallback: write raw bytes
const bytes = std.mem.asBytes(&value);
try writer.writeAll(bytes);
}
}
const Point = struct {
x: f32,
y: f32,
pub fn serialize(self: Point, writer: anytype) !void {
try writer.print("({d:.2},{d:.2})", .{ self.x, self.y });
}
};
const Color = struct {
r: u8,
g: u8,
b: u8,
// No serialize method -- uses raw bytes fallback
};
test "serialize dispatches by capability" {
var buf: [128]u8 = undefined;
var fbs = std.io.fixedBufferStream(&buf);
const writer = fbs.writer();
const p = Point{ .x = 1.5, .y = 2.7 };
try serialize(Point, p, writer);
try testing.expectEqualStrings("(1.50,2.70)", fbs.getWritten());
}
@hasDecl(T, "serialize") returns true if T has a declaration named "serialize". This is evaluated at compile time, so the branch that doesn't apply gets eliminated. If Point has a serialize method, the compiler generates code that calls it. If Color doesn't, it falls through to the raw bytes fallback. No runtime check needed.
@hasField(T, "name") does the same for struct fields. Between @hasDecl, @hasField, and @typeInfo, you can express virtually any type constraint you'd want -- and the error messages are in your control.
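Here's a small sketch of @hasField in the same spirit -- the idOf helper and its field name are made up for illustration:

```zig
const std = @import("std");

// Hypothetical helper: read an `id` field when the type has one,
// otherwise fall back to a default. The @hasField check is
// comptime-known, so the untaken branch is never analyzed for
// types without an `id` field.
fn idOf(value: anytype, default: u32) u32 {
    if (@hasField(@TypeOf(value), "id")) {
        return value.id;
    }
    return default;
}

const User = struct { id: u32, name: []const u8 };
const Guest = struct { name: []const u8 };

test "@hasField dispatches at compile time" {
    try std.testing.expectEqual(@as(u32, 7), idOf(User{ .id = 7, .name = "ada" }, 0));
    try std.testing.expectEqual(@as(u32, 0), idOf(Guest{ .name = "anon" }, 0));
}
```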
Having said that, there's a design philosophy point here. Zig's official style prefers explicit interfaces (the type erasure pattern from ep013) when you need runtime polymorphism, and comptime generics when the types are known at compile time. Comptime duck typing with @hasDecl is powerful but can make the API contract unclear -- the caller has to know which declarations are expected. Use it for fallback behavior (like serialization), not as a primary API design tool.
Multi-parameter generics
Functions can take multiple comptime parameters, each constraining a different dimension of the generalization:
fn Pair(comptime A: type, comptime B: type) type {
return struct {
first: A,
second: B,
const Self = @This();
pub fn init(a: A, b: B) Self {
return .{ .first = a, .second = b };
}
pub fn map_first(self: Self, comptime R: type, f: *const fn (A) R) Pair(R, B) {
return Pair(R, B).init(f(self.first), self.second);
}
};
}
test "Pair with different types" {
const p = Pair(i32, []const u8).init(42, "hello");
try testing.expectEqual(@as(i32, 42), p.first);
try testing.expectEqualStrings("hello", p.second);
}
Pair(i32, []const u8) is a type. Pair(f64, bool) is a different type. Each unique combination of A and B produces a new struct. The map_first method is itself generic -- it takes a comptime type R and returns a Pair(R, B). Generics composing with generics, all resolved at compile time.
Comptime values beyond types
The comptime keyword isn't limited to types. You can use it with any value that's known at compile time -- integers, booleans, enum values, even functions:
fn FixedRing(comptime T: type, comptime capacity: usize) type {
if (capacity == 0) @compileError("FixedRing capacity must be > 0");
if (capacity & (capacity - 1) != 0)
@compileError("FixedRing capacity must be a power of 2");
return struct {
buffer: [capacity]T = undefined,
head: usize = 0,
tail: usize = 0,
count: usize = 0,
const Self = @This();
const mask = capacity - 1; // Works because capacity is power of 2
pub fn push(self: *Self, val: T) void {
self.buffer[self.tail & mask] = val;
self.tail +%= 1;
if (self.count < capacity) {
self.count += 1;
} else {
self.head +%= 1;
}
}
pub fn pop(self: *Self) ?T {
if (self.count == 0) return null;
const val = self.buffer[self.head & mask];
self.head +%= 1;
self.count -= 1;
return val;
}
pub fn isEmpty(self: Self) bool {
return self.count == 0;
}
pub fn isFull(self: Self) bool {
return self.count == capacity;
}
pub fn len(self: Self) usize {
return self.count;
}
};
}
test "FixedRing basic operations" {
var ring = FixedRing(i32, 4){};
ring.push(10);
ring.push(20);
ring.push(30);
try testing.expectEqual(@as(usize, 3), ring.len());
try testing.expectEqual(@as(i32, 10), ring.pop().?);
try testing.expectEqual(@as(i32, 20), ring.pop().?);
}
test "FixedRing wraps when full" {
var ring = FixedRing(u8, 2){};
ring.push(1);
ring.push(2);
ring.push(3); // Overwrites 1
try testing.expect(ring.isFull());
try testing.expectEqual(@as(u8, 2), ring.pop().?);
try testing.expectEqual(@as(u8, 3), ring.pop().?);
}
Three things to notice. First, the capacity constraint: we reject zero and non-power-of-two at compile time with clear error messages. If someone writes FixedRing(i32, 5), they get "FixedRing capacity must be a power of 2" -- not a mysterious runtime bug where the mask arithmetic produces wrong indices.
Second, the mask trick: because capacity is a comptime-known power of 2, capacity - 1 produces a bitmask. index & mask is equivalent to index % capacity but faster (bitwise AND vs division). This optimization is only safe because we validated the power-of-2 constraint at compile time.
Third, this ring buffer uses zero heap allocation. The buffer: [capacity]T is a fixed-size array that lives on the stack (or inside whatever struct contains the ring). No allocator needed. Compare that to ArrayList, which allocates on the heap and can grow -- FixedRing is for when you know the size upfront and want maximum performance with no allocation overhead. This is exactly the tradeoff the standard library's std.BoundedArray makes: fixed capacity on the stack vs growable capacity on the heap.
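If the mask trick looks suspicious, the equivalence is easy to check exhaustively for a small capacity:

```zig
const std = @import("std");

test "index & mask equals index % capacity for a power-of-2 capacity" {
    const capacity: usize = 8;
    const mask = capacity - 1; // 0b0111
    var i: usize = 0;
    while (i < 1000) : (i += 1) {
        // Bitwise AND with the mask keeps only the low bits --
        // exactly what modulo by a power of 2 does.
        try std.testing.expectEqual(i % capacity, i & mask);
    }
}
```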
anytype: letting the compiler infer
Sometimes you don't even need to name the type parameter. The anytype keyword tells the compiler "figure out the type from the argument":
fn debugPrint(value: anytype) void {
const T = @TypeOf(value);
switch (@typeInfo(T)) {
.int => std.debug.print("int: {d}\n", .{value}),
.float => std.debug.print("float: {d:.4}\n", .{value}),
.pointer => |ptr| {
if (ptr.size == .slice and ptr.child == u8) {
std.debug.print("string: {s}\n", .{value});
} else {
std.debug.print("pointer: {*}\n", .{value});
}
},
.@"struct" => std.debug.print("struct: {any}\n", .{value}),
else => std.debug.print("other: {any}\n", .{value}),
}
}
fn contains(haystack: anytype, needle: anytype) bool {
for (haystack) |item| {
if (item == needle) return true;
}
return false;
}
test "contains works with different types" {
const ints = [_]i32{ 1, 2, 3, 4, 5 };
try testing.expect(contains(&ints, @as(i32, 3)));
try testing.expect(!contains(&ints, @as(i32, 99)));
const bytes = [_]u8{ 'h', 'e', 'l', 'l', 'o' };
try testing.expect(contains(&bytes, @as(u8, 'l')));
}
anytype is shorthand for "accept any type and monomorphize". The compiler infers the concrete type from the call site and generates specialized code. contains(&ints, 3) generates an i32 version. contains(&bytes, 'l') generates a u8 version. Same function, different specializations.
The standard library uses anytype extensively for writers and readers. When you see fn print(writer: anytype, ...) in the standard library, the anytype means "any type that has the right methods". The compiler checks at instantiation time whether the type actually has those methods -- if it doesn't, you get a compile error pointing at the exact method that's missing.
This is the connection to ep013: anytype gives you compile-time polymorphism (monomorphized, zero overhead), while the *anyopaque + vtable pattern gives you runtime polymorphism (single copy, one pointer indirection per call). The standard library uses both: std.io.Writer (the interface type from ep013) for runtime dispatch, and writer: anytype in many function signatures for compile-time dispatch.
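To make the contrast concrete, here's the same one-line operation written both ways -- a sketch assuming Zig 0.14's std.io.AnyWriter and the .any() adapter on the stream's writer:

```zig
const std = @import("std");

// Compile-time dispatch: monomorphized once per concrete writer type.
fn greetComptime(writer: anytype) !void {
    try writer.writeAll("hi\n");
}

// Runtime dispatch: one compiled copy, called through AnyWriter's
// type-erased function pointer (the ep013 pattern).
fn greetRuntime(writer: std.io.AnyWriter) !void {
    try writer.writeAll("hi\n");
}

test "both dispatch styles write the same bytes" {
    var buf: [16]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&buf);
    try greetComptime(fbs.writer());
    try greetRuntime(fbs.writer().any());
    try std.testing.expectEqualStrings("hi\nhi\n", fbs.getWritten());
}
```

Same behavior, different cost model: the anytype version is duplicated and inlined per writer type, while the AnyWriter version exists once and pays one pointer indirection per call.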
Type erasure vs comptime generics: the full picture
Now that we've covered both patterns (ep013 + this episode), here's when to use which:
| Situation | Pattern | Why |
|---|---|---|
| Type known at compile time | comptime T: type or anytype | Zero overhead, maximum optimization |
| Need to store mixed types in one collection | Type erasure (*anyopaque + vtable) | Can't monomorphize when types vary at runtime |
| Plugin / callback systems | Type erasure | Concrete type not known until runtime |
| Hot inner loop, maximum performance | Comptime generics | No indirection, everything inlined |
| Standard library allocators, writers | Type erasure | One std.mem.Allocator type for all allocators |
| Standard library sort, print, format | anytype | Monomorphized for each concrete type |
In practice, most application code uses comptime generics and anytype. Type erasure is for framework-level abstractions where you genuinely need runtime polymorphism -- allocators, I/O streams, plugin systems. If you're not sure which to use, default to comptime. It's simpler, faster, and the compiler gives better error messages. Switch to type erasure only when you need to store or pass heterogeneous types at runtime.
Exercises
1. Build a generic `MinHeap(comptime T: type)` using an `ArrayList(T)` internally. Implement `insert(value: T)`, `extractMin() !T`, and `peek() !T`. The parent of index `i` is at `(i - 1) / 2`, left child at `2 * i + 1`, right child at `2 * i + 2`. After insert, bubble up. After extract, bubble down. Add a comptime constraint: reject types that don't support `<` comparison (check with `@typeInfo` for `.int`, `.float`, or `.@"enum"`). Write tests using the testing allocator.
2. Create a generic `map` function: `fn map(comptime T: type, comptime R: type, items: []const T, f: *const fn (T) R, allocator: Allocator) ![]R`. It should allocate a new slice, apply `f` to each element, and return the result. Test: map `i32` to `f64` by converting, map `u8` to `bool` by checking if > 128. Verify the testing allocator reports no leaks (the caller must free the returned slice).
3. Write a `Matrix(comptime T: type, comptime rows: usize, comptime cols: usize)` that stores data in a `[rows][cols]T` array (stack-allocated, no allocator). Implement `get(r, c) T`, `set(r, c, val) void`, and `transpose() Matrix(T, cols, rows)` -- notice how transpose swaps the row/col comptime parameters. Add a comptime constraint that rejects non-numeric types. Test with both integer and float matrices.
4. Extend `FixedRing` from this episode with an `iterator() Iterator` method that returns a struct with a `next() ?T` method. The iterator should yield elements from head to tail without modifying the ring. Use `@This()` for the iterator's self-reference. Test: push 4 elements, iterate and collect them into an array, then verify the order matches FIFO and that the ring still contains all elements after iteration.
5. Read the source of `std.BoundedArray` in the Zig standard library. Compare it to `ArrayList` -- what comptime parameters does it take? Where does it store its data? What happens when you exceed its capacity? Write a paragraph describing when you'd choose `BoundedArray` over `ArrayList` and vice versa.
6. Build a `Result(comptime T: type, comptime E: type)` that holds either a success value of type `T` or an error value of type `E`. Implement `ok(value: T) Result(T, E)`, `err(e: E) Result(T, E)`, `isOk() bool`, `unwrap() T` (panics on error), and `unwrapOr(default: T) T`. This is Rust's `Result<T, E>` recreated in Zig. Test all paths, including the `unwrap` panic case (use `std.testing.expectEqual` on `unwrapOr` for the error case, since you can't easily test panics).
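If you want a nudge on exercise 1 without the solution, here's a possible starting skeleton (the helper names `parentIndex`, `leftChild`, and `rightChild` are mine): the index helpers mirror the formulas from the prompt, and the comptime `switch` shows one way to express the ordering constraint.

```zig
const std = @import("std");

// Sketch of a starting point for the MinHeap exercise -- not a full solution.
fn MinHeap(comptime T: type) type {
    // Comptime constraint: only types with a meaningful `<` ordering.
    switch (@typeInfo(T)) {
        .int, .float, .@"enum" => {},
        else => @compileError("MinHeap requires an orderable type, got " ++ @typeName(T)),
    }
    return struct {
        const Self = @This();
        items: std.ArrayList(T),

        // Index math from the exercise prompt.
        fn parentIndex(i: usize) usize {
            return (i - 1) / 2;
        }
        fn leftChild(i: usize) usize {
            return 2 * i + 1;
        }
        fn rightChild(i: usize) usize {
            return 2 * i + 2;
        }

        // insert / extractMin / peek (bubble up, bubble down) are yours
        // to implement.
    };
}

test "heap index math" {
    const H = MinHeap(i32);
    try std.testing.expectEqual(@as(usize, 0), H.parentIndex(1));
    try std.testing.expectEqual(@as(usize, 3), H.leftChild(1));
    try std.testing.expectEqual(@as(usize, 4), H.rightChild(1));
}
```

Trying `MinHeap([]const u8)` should now fail at compile time with your custom message instead of a confusing error deep inside a comparison.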
What we learned
- Zig generics = `comptime` parameters. No special syntax, no angle brackets, no `where` clauses. The same comptime mechanism from ep009, applied to function parameters.
- Functions returning `type` are generic type constructors -- `Stack(i32)` calls a function that returns a specialized struct type. This is how the entire standard library is built.
- `@This()` gives self-referential access inside anonymous struct types returned by generic functions. Bind it to `const Self` by convention.
- `@typeInfo` enables compile-time type constraints with custom error messages. Reject bad types with `@compileError` -- the message is yours to write, not some cryptic template error.
- `@hasDecl` and `@hasField` provide compile-time duck typing for checking whether a type has specific methods or fields.
- Comptime parameters aren't limited to types -- integers, booleans, and other values work too. `FixedRing(i32, 16)` uses both a type and a size parameter.
- `anytype` is shorthand for "accept any type and monomorphize". The compiler infers the type and generates specialized code. Used extensively in the standard library.
- Each unique combination of comptime arguments produces separate specialized code (monomorphization). Zero runtime overhead, but increased binary size if you instantiate with many different types.
- Comptime generics for compile-time dispatch (zero cost, maximum optimization). Type erasure (ep013) for runtime dispatch (one indirection, works with types unknown at compile time). Both are idiomatic Zig -- use the right tool for the situation.
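The first three bullets can be compressed into one tiny recap sketch (the type name `Pair` is mine, for illustration): a function returning `type`, with `@This()` bound to `Self` inside the returned struct.

```zig
const std = @import("std");

// A function returning `type` is a generic type constructor.
fn Pair(comptime T: type) type {
    return struct {
        // @This() names the anonymous struct we are inside of.
        const Self = @This();
        first: T,
        second: T,

        fn swapped(self: Self) Self {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

test "Pair(i32) is a concrete, specialized type" {
    const p = Pair(i32){ .first = 1, .second = 2 };
    const q = p.swapped();
    try std.testing.expectEqual(@as(i32, 2), q.first);
    try std.testing.expectEqual(@as(i32, 1), q.second);
}
```

`Pair(i32)` and `Pair(f64)` are two distinct, fully specialized struct types -- that's monomorphization in one screenful.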
Next time we're looking at something every non-trivial Zig project depends on but that most tutorials skip over -- how `build.zig` actually works, what the build system can do beyond just compiling your code, and how to set up a project that builds dependencies, runs tests, generates artifacts, and integrates with C libraries. If you've been copy-pasting `build.zig` files without fully understanding them, that's about to change ;-)
Thanks for your contribution to the STEMsocial community. Feel free to join us on discord to get to know the rest of us!
Please consider delegating to the @stemsocial account (85% of the curation rewards are returned).
Consider setting @stemsocial as a beneficiary of this post's rewards if you would like to support the community and contribute to its mission of promoting science and education on Hive.