Learn Zig Series (#32) - Compile-Time Reflection with @typeInfo

What will I learn

  • You will learn how to write solutions for the Episode 31 exercises;
  • You will learn the @typeInfo builtin and how it exposes type metadata at compile time;
  • You will learn how to inspect struct fields, enum variants, and function signatures through reflection;
  • You will learn how to build a generic toString for any struct using compile-time field iteration;
  • You will learn inline for over @typeInfo field lists -- how and why it works;
  • You will learn @typeName and @tagName for human-readable debug output;
  • You will learn how to generate serialization logic automatically from type metadata;
  • You will learn a practical example: building a generic debug printer that works for any type.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • An installed Zig 0.14+ distribution (download from ziglang.org);
  • The ambition to learn Zig programming.

Difficulty

  • Intermediate

Curriculum (of the Learn Zig Series):

Learn Zig Series (#32) - Compile-Time Reflection with @typeInfo

Solutions to Episode 31 Exercises

Exercise 1 - Mmap two files, compare byte-by-byte, report first 10 differences:

const std = @import("std");
const posix = std.posix;

fn mapFile(path: []const u8) !struct { data: []align(std.heap.page_size_min) u8, size: u64 } {
    const file = try std.fs.cwd().openFile(path, .{});
    defer file.close();
    const stat = try file.stat();
    if (stat.size == 0) return .{ .data = &.{}, .size = 0 };
    const mapped = try posix.mmap(
        null, stat.size, posix.PROT.READ,
        .{ .TYPE = .SHARED }, file.handle, 0,
    );
    return .{ .data = mapped, .size = stat.size };
}

pub fn main() !void {
    const args = std.os.argv;
    if (args.len < 3) {
        std.debug.print("Usage: diff <file1> <file2>\n", .{});
        return;
    }
    const path_a = std.mem.span(args[1]);
    const path_b = std.mem.span(args[2]);

    const a = try mapFile(path_a);
    defer if (a.size > 0) posix.munmap(a.data);
    const b = try mapFile(path_b);
    defer if (b.size > 0) posix.munmap(b.data);

    const common = @min(a.size, b.size);
    var diffs: usize = 0;

    for (0..common) |i| {
        if (a.data[i] != b.data[i]) {
            std.debug.print("Offset 0x{X:0>8}: A=0x{X:0>2} B=0x{X:0>2}\n", .{
                i, a.data[i], b.data[i],
            });
            diffs += 1;
            if (diffs >= 10) break;
        }
    }

    if (a.size != b.size) {
        if (a.size > b.size) {
            std.debug.print("File A has {d} extra bytes\n", .{a.size - b.size});
        } else {
            std.debug.print("File B has {d} extra bytes\n", .{b.size - a.size});
        }
    }
    if (diffs == 0 and a.size == b.size) {
        std.debug.print("Files are identical.\n", .{});
    }
}

The tricky part is handling zero-length files (mmap rejects length 0) and then comparing only up to the common length before reporting the size difference. The mapFile helper returns both the mapped slice and the original file size so we can compare sizes independently of the mapping.

Exercise 2 - File patcher via writable shared mmap:

const std = @import("std");
const posix = std.posix;

pub fn main() !void {
    const args = std.os.argv;
    if (args.len < 4) {
        std.debug.print("Usage: patch <file> <hex-offset> <hex-byte>\n", .{});
        return;
    }

    const path = std.mem.span(args[1]);
    const offset = std.fmt.parseInt(usize, std.mem.span(args[2]), 16) catch {
        std.debug.print("Invalid hex offset.\n", .{});
        return;
    };
    const byte_val = std.fmt.parseInt(u8, std.mem.span(args[3]), 16) catch {
        std.debug.print("Invalid hex byte (must be 00-FF).\n", .{});
        return;
    };

    const file = try std.fs.cwd().openFile(path, .{ .mode = .read_write });
    defer file.close();
    const stat = try file.stat();

    if (offset >= stat.size) {
        std.debug.print("Offset 0x{X} is beyond file size ({d} bytes).\n", .{
            offset, stat.size,
        });
        return;
    }

    const mapped = try posix.mmap(
        null, stat.size,
        posix.PROT.READ | posix.PROT.WRITE,
        .{ .TYPE = .SHARED }, file.handle, 0,
    );
    defer posix.munmap(mapped);

    std.debug.print("Before: offset 0x{X:0>8} = 0x{X:0>2}\n", .{ offset, mapped[offset] });
    mapped[offset] = byte_val;
    std.debug.print("After:  offset 0x{X:0>8} = 0x{X:0>2}\n", .{ offset, mapped[offset] });

    // Verify by reading the file separately
    const verify = try std.fs.cwd().openFile(path, .{});
    defer verify.close();
    try verify.seekTo(offset);
    var buf: [1]u8 = undefined;
    _ = try verify.read(&buf);
    std.debug.print("Verify: offset 0x{X:0>8} = 0x{X:0>2}\n", .{ offset, buf[0] });
}

The key detail is opening the file with .read_write mode -- if you only open it for reading, the PROT.WRITE flag in mmap will fail with a permission error. The verification step re-opens the file and does a regular seek+read to confirm the byte actually made it to disk. Because the mapping is SHARED, the write goes to the page cache immediately and the regular read sees it.

Exercise 3 - Anonymous mapping + mprotect to trigger segfault:

const std = @import("std");
const posix = std.posix;

pub fn main() !void {
    const size = 64 * 1024 * 1024; // 64 MB
    const mem = try posix.mmap(
        null, size,
        posix.PROT.READ | posix.PROT.WRITE,
        .{ .TYPE = .PRIVATE, .ANONYMOUS = true },
        -1, 0,
    );
    defer posix.munmap(mem);

    // Fill with a pattern
    @memset(mem[0..1024], 0xAB);
    std.debug.print("Wrote pattern. First byte: 0x{X:0>2}\n", .{mem[0]});
    std.debug.print("Read back OK. Buffer is {d} MB.\n", .{size / 1024 / 1024});

    // Mark the entire region read-only
    try posix.mprotect(mem, posix.PROT.READ);

    std.debug.print("Region is now read-only.\n", .{});
    std.debug.print("Reading still works: 0x{X:0>2}\n", .{mem[0]});

    // This next write will crash with SIGSEGV.
    // The hardware MMU enforces the read-only protection --
    // no amount of Zig error handling can catch this because
    // it is a CPU-level fault, not a software error.
    std.debug.print("Attempting write to read-only region...\n", .{});
    mem[0] = 0xFF; // SIGSEGV here
    // This line will never execute
    std.debug.print("If you see this, something is very wrong.\n", .{});
}

The mprotect call changes the page table entries from read-write to read-only. After that, any write to those pages triggers a segmentation fault at the CPU level -- the MMU sees the write, checks the protection bits, and raises a hardware exception that the OS converts to SIGSEGV. This is the same mechanism that catches null pointer dereferences and stack overflows. It is NOT catchable through Zig's error system because it happens below the language level entirely.

OK so with the mmap exercises wrapped up, today we're going in a very different direction. We touched on comptime back in episode 9 -- the ability to execute code at compile time. And we used comptime parameters for generics in episode 14. But we never dug into what is arguably the MOST powerful thing you can do at compile time: inspecting types themselves. That's what @typeInfo does -- it gives you a structured description of any type, at compile time, that you can branch on, iterate over, and use to generate code. This is reflection, but without the runtime cost. No hash maps of field names, no string parsing, no dynamic dispatch. The compiler resolves everything and spits out exactly the machine code you'd write by hand ;-)

The @typeInfo builtin

The @typeInfo builtin takes a type and returns a std.builtin.Type -- a tagged union describing everything the compiler knows about that type. Fields, their types, their default values, alignment, whether they're packed -- all of it.

const std = @import("std");

const Point = struct {
    x: f32,
    y: f32,
    z: f32 = 0.0,
};

pub fn main() void {
    const info = @typeInfo(Point);

    // info is std.builtin.Type, which is a tagged union.
    // For a struct, the active tag is .@"struct"
    switch (info) {
        .@"struct" => |s| {
            std.debug.print("Point is a struct with {d} fields:\n", .{s.fields.len});
            inline for (s.fields) |field| {
                std.debug.print("  - {s}: {s}", .{
                    field.name,
                    @typeName(field.type),
                });
                if (field.default_value_ptr) |_| {
                    std.debug.print(" (has default)", .{});
                }
                std.debug.print("\n", .{});
            }
        },
        else => std.debug.print("Not a struct.\n", .{}),
    }
}

The output:

Point is a struct with 3 fields:
  - x: f32
  - y: f32
  - z: f32 (has default)

A few things to note. First, the switch on @typeInfo happens at compile time when the type is known at compile time (which it always is for @typeInfo since it takes a type argument). Second, .@"struct" uses the quoted identifier syntax because struct is a keyword in Zig -- you can't write .struct directly. Third, the fields array is a comptime-known slice of std.builtin.Type.StructField structs, each containing the field's name, type, default value pointer, and alignment info.
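Because std.builtin.Type is a tagged union, you can also compare it directly against a tag instead of writing out a full switch. A minimal sketch (isStruct is my own helper name here, not a stdlib function):

```zig
const std = @import("std");

// Comparing a tagged union value against an enum literal
// checks the active tag -- handy for quick comptime type tests.
fn isStruct(comptime T: type) bool {
    return @typeInfo(T) == .@"struct";
}

pub fn main() void {
    std.debug.print("{} {}\n", .{
        isStruct(struct { a: u8 }),
        isStruct(u32),
    });
}
```

We'll use this exact shorthand later in the episode, e.g. @typeInfo(field.type) == .bool.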

Inspecting struct fields, enum variants, and function params

The std.builtin.Type union has cases for every kind of Zig type. Here's how you inspect a few common ones:

const std = @import("std");

const Color = enum { red, green, blue, alpha };

const Config = struct {
    width: u32,
    height: u32,
    title: []const u8,
    fullscreen: bool = false,
};

fn add(a: i32, b: i32) i32 {
    return a + b;
}

pub fn main() void {
    // Enum variants
    const enum_info = @typeInfo(Color).@"enum";
    std.debug.print("Color has {d} variants:\n", .{enum_info.fields.len});
    inline for (enum_info.fields) |field| {
        std.debug.print("  .{s} = {d}\n", .{ field.name, field.value });
    }

    // Struct fields
    std.debug.print("\nConfig fields:\n", .{});
    const struct_info = @typeInfo(Config).@"struct";
    inline for (struct_info.fields) |field| {
        std.debug.print("  {s}: {s} (align: {d})\n", .{
            field.name,
            @typeName(field.type),
            field.alignment,
        });
    }

    // Function parameters
    std.debug.print("\nadd() signature:\n", .{});
    const fn_info = @typeInfo(@TypeOf(add)).@"fn";
    std.debug.print("  params: {d}, return: {s}\n", .{
        fn_info.params.len,
        @typeName(fn_info.return_type.?),
    });
    inline for (fn_info.params, 0..) |param, i| {
        if (param.type) |t| {
            std.debug.print("  param[{d}]: {s}\n", .{ i, @typeName(t) });
        }
    }
}

Notice how for functions we need @TypeOf(add) first because add is a value (a function), not a type. @typeInfo only takes types, so we take the type of the function value and then inspect that. Also note the inline for in all three loops: the field and parameter descriptions contain comptime-only data (type and comptime_int), so a plain runtime for over them would not compile.

The enum fields give you both the name and the integer value of each variant. For a plain enum like Color where you haven't assigned explicit values, they're 0, 1, 2, 3. But if you had const Status = enum(u8) { ok = 200, not_found = 404 }, the values would be 200 and 404 respectively.
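To see that concretely, here's a small sketch using that hypothetical Status enum (note the inline for again, since EnumField.value is a comptime_int):

```zig
const std = @import("std");

const Status = enum(u8) { ok = 200, not_found = 404 };

pub fn main() void {
    // EnumField.value is comptime_int, so the loop must be unrolled.
    inline for (@typeInfo(Status).@"enum".fields) |field| {
        std.debug.print(".{s} = {d}\n", .{ field.name, field.value });
    }
}
```

This prints .ok = 200 and .not_found = 404 -- the explicit backing values, not positional indices.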

Building a generic toString with comptime field iteration

Here's where things get really interesting. We can write a function that takes ANY struct and produces a human-readable string representation -- without knowing the struct type in advance:

const std = @import("std");

fn structToString(value: anytype, buf: []u8) []const u8 {
    const T = @TypeOf(value);
    const info = @typeInfo(T).@"struct";

    var stream = std.io.fixedBufferStream(buf);
    const writer = stream.writer();

    writer.print("{s}{{ ", .{@typeName(T)}) catch return "<error>";

    inline for (info.fields, 0..) |field, i| {
        if (i > 0) writer.writeAll(", ") catch return "<error>";
        const field_val = @field(value, field.name);

        if (comptime isStringType(field.type)) {
            writer.print(".{s} = \"{s}\"", .{ field.name, field_val }) catch return "<error>";
        } else if (@typeInfo(field.type) == .bool) {
            writer.print(".{s} = {}", .{ field.name, field_val }) catch return "<error>";
        } else {
            writer.print(".{s} = {any}", .{ field.name, field_val }) catch return "<error>";
        }
    }

    writer.writeAll(" }") catch return "<error>";
    return stream.getWritten();
}

fn isStringType(comptime T: type) bool {
    return T == []const u8 or T == [:0]const u8;
}

const Config = struct {
    width: u32,
    height: u32,
    title: []const u8,
    vsync: bool = true,
};

pub fn main() void {
    const cfg = Config{
        .width = 1920,
        .height = 1080,
        .title = "My Window",
    };

    var buf: [512]u8 = undefined;
    const result = structToString(cfg, &buf);
    std.debug.print("{s}\n", .{result});
}

Output:

Config{ .width = 1920, .height = 1080, .title = "My Window", .vsync = true }

The magic ingredient here is inline for. When you write inline for (info.fields, 0..) |field, i|, the compiler unrolls this loop at compile time. For each field, it generates a separate branch of code with the correct field name and field type. The @field(value, field.name) builtin accesses a struct field by its compile-time known name -- it is NOT a string lookup at runtime. The compiler sees @field(value, "width") and turns it into a direct field access, same as writing value.width.

Without inline for, you can't do this. A regular for loop at runtime can't call @field because the field name would need to be runtime-known, and @field requires a comptime-known string. This is the fundamental pattern of Zig reflection: use @typeInfo to get the field list, inline for to iterate it at compile time, and @field to access each field. The result is zero-overhead generated code -- the compiler produces exactly what you'd write by hand if you knew the struct layout in advance.
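As the smallest possible demonstration of that pattern, here is a sketch that re-implements Zig's built-in @hasField by hand (hasFieldNamed is a made-up name -- in real code just use the builtin):

```zig
const std = @import("std");

// Re-implements the @hasField builtin from first principles, purely
// to show the @typeInfo + inline for pattern in its smallest form.
fn hasFieldNamed(comptime T: type, comptime name: []const u8) bool {
    inline for (@typeInfo(T).@"struct".fields) |field| {
        if (comptime std.mem.eql(u8, field.name, name)) return true;
    }
    return false;
}

const Config = struct { width: u32, height: u32 };

pub fn main() void {
    std.debug.print("{} {}\n", .{
        hasFieldNamed(Config, "width"),
        hasFieldNamed(Config, "depth"),
    });
}
```

The comptime on the std.mem.eql call forces the string comparison itself to happen at compile time, so the generated function body is just `return true` or `return false`.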

@typeName and @tagName for debug output

We already used @typeName above -- it returns a string representation of any type. But @tagName is the complement for tagged unions and enums: it gives you the name of the active variant.

const std = @import("std");

const Shape = union(enum) {
    circle: f32,
    rect: struct { w: f32, h: f32 },
    point,
};

fn describeShape(shape: Shape) void {
    std.debug.print("Shape variant: {s}\n", .{@tagName(shape)});

    switch (shape) {
        .circle => |r| std.debug.print("  Circle with radius {d:.2}\n", .{r}),
        .rect => |r| std.debug.print("  Rect {d:.2} x {d:.2}\n", .{ r.w, r.h }),
        .point => std.debug.print("  Point (no data)\n", .{}),
    }
}

pub fn main() void {
    std.debug.print("Type name: {s}\n\n", .{@typeName(Shape)});

    const shapes = [_]Shape{
        .{ .circle = 5.0 },
        .{ .rect = .{ .w = 10.0, .h = 20.0 } },
        .point,
    };

    for (shapes) |s| describeShape(s);
}

@tagName is particularly useful in logging and error messages. Instead of writing a switch that maps each variant to a string, you just call @tagName and it gives you the enum field name as a [:0]const u8. Combined with @typeName, you can produce very informative debug output without maintaining any string tables by hand.

One thing to watch out for: @tagName only works on tagged unions (declared with union(enum)) and enums. If you try it on a bare union (no tag), you'll get a compile error because there's no tag to inspect at runtime. We covered the difference between bare unions and tagged unions back in episode 6.
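@tagName also pairs nicely with std.meta.stringToEnum, which goes the other way (string to enum) -- useful when parsing config files or CLI flags. A quick sketch:

```zig
const std = @import("std");

const Level = enum { debug, info, warn, err };

pub fn main() void {
    // enum -> string via @tagName, string -> enum via stringToEnum.
    const name = @tagName(Level.warn); // "warn"
    if (std.meta.stringToEnum(Level, name)) |parsed| {
        std.debug.print("round-trip ok: {}\n", .{parsed == .warn});
    }
}
```

stringToEnum returns an optional, so unknown strings come back as null instead of crashing -- exactly what you want for untrusted input.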

Generating serialization logic from type metadata

A real-world use case for reflection is generating serialization code. Instead of hand-writing a toJson function for every struct, you can write ONE generic function that works for any type:

const std = @import("std");

fn writeJsonValue(writer: anytype, comptime T: type, value: T) !void {
    switch (@typeInfo(T)) {
        .int, .comptime_int => try writer.print("{d}", .{value}),
        .float, .comptime_float => try writer.print("{d:.6}", .{value}),
        .bool => try writer.writeAll(if (value) "true" else "false"),
        .pointer => |ptr| {
            if (ptr.size == .slice and ptr.child == u8) {
                try writer.print("\"{s}\"", .{value});
            } else {
                try writer.writeAll("\"<pointer>\"");
            }
        },
        .optional => {
            if (value) |v| {
                try writeJsonValue(writer, @TypeOf(v), v);
            } else {
                try writer.writeAll("null");
            }
        },
        .@"struct" => |info| {
            try writer.writeAll("{ ");
            inline for (info.fields, 0..) |field, i| {
                if (i > 0) try writer.writeAll(", ");
                try writer.print("\"{s}\": ", .{field.name});
                try writeJsonValue(writer, field.type, @field(value, field.name));
            }
            try writer.writeAll(" }");
        },
        .@"enum" => try writer.print("\"{s}\"", .{@tagName(value)}),
        else => try writer.writeAll("\"<unsupported>\""),
    }
}

fn toJson(value: anytype, buf: []u8) []const u8 {
    var stream = std.io.fixedBufferStream(buf);
    writeJsonValue(stream.writer(), @TypeOf(value), value) catch return "<error>";
    return stream.getWritten();
}

const Priority = enum { low, medium, high, critical };

const Task = struct {
    id: u32,
    name: []const u8,
    priority: Priority,
    done: bool,
    score: ?f32,
};

pub fn main() void {
    const task = Task{
        .id = 42,
        .name = "Write reflection tutorial",
        .priority = .high,
        .done = false,
        .score = 0.87,
    };

    var buf: [1024]u8 = undefined;
    std.debug.print("{s}\n", .{toJson(task, &buf)});

    // Also works with optionals set to null
    const task2 = Task{
        .id = 99,
        .name = "Review PR",
        .priority = .low,
        .done = true,
        .score = null,
    };
    std.debug.print("{s}\n", .{toJson(task2, &buf)});
}

Output:

{ "id": 42, "name": "Write reflection tutorial", "priority": "high", "done": false, "score": 0.870000 }
{ "id": 99, "name": "Review PR", "priority": "low", "done": true, "score": null }

The recursive structure is the key here. writeJsonValue switches on the type info and handles each case: integers, floats, bools, strings (detected as []const u8 slices), optionals (recurse into the payload or emit null), structs (iterate fields with inline for and recurse), and enums (use @tagName). The compiler evaluates all the switch branches at compile time for each concrete type and only generates code for the matching branch. For a Task, the struct branch gets compiled, and within it each field's type gets its own specialized writeJsonValue call. The end result is code as efficient as if you'd hand-written writer.print("{{ \"id\": {d}, ...") for that exact struct. No virtual dispatch, no type maps, no runtime string comparison.

Now, the real std.json in Zig's standard library uses a similar approach but handles far more edge cases -- nested structs, arrays, slices of structs, unicode escaping, pretty-printing. But the fundamental technique is the same: @typeInfo + inline for + recursion.
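For comparison, here is roughly how you'd call the stdlib version (signature as of Zig 0.14 -- std.json.stringify(value, options, writer); check your version's docs if it differs):

```zig
const std = @import("std");

const Task = struct {
    id: u32,
    name: []const u8,
    done: bool,
};

pub fn main() !void {
    var buf: [256]u8 = undefined;
    var stream = std.io.fixedBufferStream(&buf);
    // std.json.stringify drives the same @typeInfo machinery internally.
    try std.json.stringify(
        Task{ .id = 7, .name = "ship it", .done = true },
        .{},
        stream.writer(),
    );
    std.debug.print("{s}\n", .{stream.getWritten()});
}
```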

Practical example: generic debug printer

Let's build something you'll actually use in your own projects. A generic debug printer that handles structs, enums, tagged unions, arrays, optionals, and pointers -- basically a std.debug.print("{any}", ...) on steroids but one you control completely:

const std = @import("std");

fn debugPrint(writer: anytype, value: anytype, depth: usize) !void {
    const T = @TypeOf(value);
    const indent = "                                "[0 .. depth * 2];

    switch (@typeInfo(T)) {
        .@"struct" => |info| {
            try writer.print("{s}{{\n", .{@typeName(T)});
            inline for (info.fields) |field| {
                try writer.print("{s}  .{s} = ", .{ indent, field.name });
                try debugPrint(writer, @field(value, field.name), depth + 1);
                try writer.writeAll(",\n");
            }
            try writer.print("{s}}}", .{indent});
        },
        .@"enum" => try writer.print(".{s}", .{@tagName(value)}),
        .@"union" => |info| {
            if (info.tag_type) |_| {
                try writer.print("{s}.{s}", .{ @typeName(T), @tagName(value) });
                // Could also print the payload by switching on the tag
            } else {
                try writer.print("{s}{{...}}", .{@typeName(T)});
            }
        },
        .optional => {
            if (value) |v| {
                try debugPrint(writer, v, depth);
            } else {
                try writer.writeAll("null");
            }
        },
        .pointer => |ptr| {
            if (ptr.size == .slice and ptr.child == u8) {
                try writer.print("\"{s}\"", .{value});
            } else if (ptr.size == .slice) {
                try writer.print("[{d}]{s}[\n", .{ value.len, @typeName(ptr.child) });
                for (value, 0..) |item, i| {
                    try writer.print("{s}  [{d}] = ", .{ indent, i });
                    try debugPrint(writer, item, depth + 1);
                    try writer.writeAll(",\n");
                }
                try writer.print("{s}]", .{indent});
            } else {
                try writer.print("*{s}@{*}", .{ @typeName(ptr.child), value });
            }
        },
        .array => |arr| {
            try writer.print("[{d}]{s}[\n", .{ arr.len, @typeName(arr.child) });
            for (value, 0..) |item, i| {
                try writer.print("{s}  [{d}] = ", .{ indent, i });
                try debugPrint(writer, item, depth + 1);
                try writer.writeAll(",\n");
            }
            try writer.print("{s}]", .{indent});
        },
        .bool => try writer.print("{}", .{value}),
        .int, .comptime_int => try writer.print("{d}", .{value}),
        .float, .comptime_float => try writer.print("{d:.4}", .{value}),
        else => try writer.print("{any}", .{value}),
    }
}

const Level = enum { debug, info, warn, err };

const LogEntry = struct {
    timestamp: u64,
    level: Level,
    message: []const u8,
    context: ?[]const u8,
};

pub fn main() !void {
    const stderr = std.io.getStdErr().writer();

    const entries = [_]LogEntry{
        .{ .timestamp = 1714300000, .level = .info, .message = "Server started", .context = "main" },
        .{ .timestamp = 1714300005, .level = .warn, .message = "High memory", .context = null },
    };

    for (entries) |entry| {
        try debugPrint(stderr, entry, 0);
        try stderr.writeAll("\n\n");
    }
}

This gives you nicely indented, recursive output for any struct you throw at it. The beauty of the approach is that adding support for a new type category (like arrays, or tagged union payloads) is just adding another branch to the switch. And it all compiles down to direct print calls -- the compiler sees through the entire dispatch tree because every branch decision is based on comptime-known type info.

One practical note: the depth parameter and indentation string use a fixed maximum depth (" ..."[0 .. depth * 2]). If you're printing deeply nested structures you'd want to either increase that buffer or switch to repeated writer.writeAll(" ") calls in a loop. For most real-world debugging, 16 levels of nesting (32 chars of indent) is plenty.
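If you do want unbounded nesting, a loop-based indent helper is a small change. A sketch (writeIndent is a hypothetical helper; writeByteNTimes is part of the standard writer interface):

```zig
const std = @import("std");

// Emits depth * 2 spaces without any fixed-size buffer limit.
fn writeIndent(writer: anytype, depth: usize) !void {
    try writer.writeByteNTimes(' ', depth * 2);
}

pub fn main() !void {
    const stderr = std.io.getStdErr().writer();
    try writeIndent(stderr, 3); // six spaces of indent
    try stderr.writeAll(".field = 42\n");
}
```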

Combining reflection with comptime to generate comparison functions

One more pattern that comes up all the time: generic equality. You want to compare two structs for equality, but == only works on primitive types in Zig. For structs, you need to compare field by field. Reflection makes this generic:

const std = @import("std");

fn structEql(a: anytype, b: @TypeOf(a)) bool {
    const T = @TypeOf(a);
    const info = @typeInfo(T).@"struct";

    inline for (info.fields) |field| {
        const val_a = @field(a, field.name);
        const val_b = @field(b, field.name);

        if (comptime isSliceType(field.type)) {
            if (!std.mem.eql(getSliceChild(field.type), val_a, val_b))
                return false;
        } else if (@typeInfo(field.type) == .@"struct") {
            if (!structEql(val_a, val_b))
                return false;
        } else {
            if (val_a != val_b)
                return false;
        }
    }
    return true;
}

fn isSliceType(comptime T: type) bool {
    return switch (@typeInfo(T)) {
        .pointer => |p| p.size == .slice,
        else => false,
    };
}

fn getSliceChild(comptime T: type) type {
    return @typeInfo(T).pointer.child;
}

const Vec3 = struct {
    x: f32,
    y: f32,
    z: f32,
};

const Entity = struct {
    id: u32,
    name: []const u8,
    pos: Vec3,
    active: bool,
};

pub fn main() void {
    const a = Entity{
        .id = 1,
        .name = "player",
        .pos = .{ .x = 1.0, .y = 2.0, .z = 3.0 },
        .active = true,
    };
    const b = Entity{
        .id = 1,
        .name = "player",
        .pos = .{ .x = 1.0, .y = 2.0, .z = 3.0 },
        .active = true,
    };
    const c = Entity{
        .id = 1,
        .name = "player",
        .pos = .{ .x = 1.0, .y = 9.0, .z = 3.0 },
        .active = true,
    };

    std.debug.print("a == b: {}\n", .{structEql(a, b)});
    std.debug.print("a == c: {}\n", .{structEql(a, c)});
}

The structEql function handles three cases per field: slices (use std.mem.eql), nested structs (recurse), and everything else (direct != comparison). This handles nested structs recursively -- the Vec3 inside Entity gets compared field by field automatically. And because everything is inline for with comptime type checks, the compiler generates a flat sequence of comparisons with no loops or branches beyond what's strictly necessary.

Having said that, be careful with floats. The != comparison for floating point means NaN != NaN returns true, which might not be what you want. For production use you'd want an approxEql branch for float fields. But the point here is the reflection pattern, not a bulletproof math library.
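A hedged sketch of what such a float branch could look like, using std.math.approxEqAbs (floatEql and the tolerance choice are illustrative, not a recommendation):

```zig
const std = @import("std");

// Tolerance-based comparison you could use instead of != for float
// fields. The tolerance (a few epsilons) is a placeholder -- pick one
// that matches your domain.
fn floatEql(comptime T: type, a: T, b: T) bool {
    return std.math.approxEqAbs(T, a, b, std.math.floatEps(T) * 8);
}

pub fn main() void {
    const x: f32 = 0.1;
    const y: f32 = 0.2;
    std.debug.print("{}\n", .{floatEql(f32, x + y, 0.3)});
}
```

In structEql you would add a branch like `else if (@typeInfo(field.type) == .float)` that calls this instead of the raw != comparison.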

What we learned

  • @typeInfo(T) returns a std.builtin.Type tagged union describing everything the compiler knows about type T -- fields, their types, default values, alignment, everything.
  • Struct fields are accessed via @typeInfo(T).@"struct".fields, giving you an array of StructField with .name, .type, .default_value_ptr, and .alignment.
  • inline for unrolls a comptime-known loop so that each iteration produces separate compiled code. This is what makes @field(value, field.name) work -- the field name must be comptime-known.
  • @field(value, "name") accesses a struct field by compile-time name. Combined with inline for over field info, this lets you iterate all fields of any struct generically.
  • @typeName(T) gives you a human-readable string of any type. @tagName(value) gives you the active variant name of an enum or tagged union.
  • Serialization, debug printing, equality comparison, and many other utilities can be written ONCE and work for ALL types automatically -- zero runtime overhead because the compiler resolves everything.
  • The fundamental pattern is always: @typeInfo to get metadata, inline for to iterate fields at comptime, @field to access each field, and switch on nested type info for recursive handling.

This kind of compile-time reflection is what makes Zig's comptime so much more than just "constexpr". In languages like C you'd use code generation or macros. In languages like Java or Python you'd use runtime reflection with hash maps and string lookups. Zig gives you the expressiveness of runtime reflection with the performance of hand-written code -- and the compiler catches any type mismatches before your program even runs. Some interesting patterns emerge when you combine this reflection with tagged unions to build finite state machines ;-)

Exercises

  1. Write a generic structFromEnv function that takes a struct type as a comptime parameter and populates it from environment variables. For each field in the struct, look up an environment variable with the same name (uppercased). Support []const u8 (direct string), u32 / i32 (parse with std.fmt.parseInt), and bool (check for "true"/"1"). Return an error if a required field is missing (fields with defaults can be skipped). Test it with a Config struct that has fields like host, port, and debug.

  2. Build a generic structDiff function that takes two instances of the same struct type and prints which fields differ between them. For each field, compare the values (handle slices with std.mem.eql, nested structs recursively, primitives with !=) and print the field name plus both values when they don't match. Test with a struct that has at least 5 fields of mixed types.

  3. Write a validateStruct function that uses @typeInfo to check that all []const u8 fields in a struct are non-empty and all integer fields are positive (> 0). Return a comptime-generated error set with one error per field (e.g. error.name_empty, error.age_invalid). This forces you to combine @typeInfo with comptime error set generation -- look at @Type (the inverse of @typeInfo) and how to build an error set programmatically.

Greetings!

@scipio



1 comment

Thanks for these detailed tutorials. The rigor you bring to every lesson is what makes this series indispensable to the community.