Learn Zig Series (#26) - Writing a Custom Allocator

What will I learn

  • You will learn the std.mem.Allocator interface from the implementation side;
  • You will learn how to build a simple bump/arena allocator from scratch;
  • You will learn how to implement alloc, resize, and free for your allocator;
  • You will learn about alignment requirements and how to satisfy them;
  • You will learn the @alignCast and std.mem.alignForward utilities;
  • You will learn about fixed-buffer allocators for stack-allocated memory pools;
  • You will learn allocator debugging: tracking allocations and detecting double-frees;
  • You will learn when custom allocators improve performance versus the GPA.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • An installed Zig 0.14+ distribution (download from ziglang.org);
  • The ambition to learn Zig programming.

Difficulty

  • Intermediate


Welcome back! In episode 25 we built an HTTP status checker -- a complete CLI tool that combined networking, string parsing, error handling, formatting, and concurrency from the previous 24 episodes. That was our second mini project and a good checkpoint for where we are with the language fundamentals.

Now we're crossing into lower-level territory. Back in episode 7 we learned how to use allocators -- GeneralPurposeAllocator, page_allocator, FixedBufferAllocator, and the testing allocator. We passed them around, called alloc() and free(), and accepted that allocators are first-class citizens in Zig. But we never looked at what happens inside an allocator. How does alloc() actually find memory? What does free() do with it? How do you build your own?

Today we're going to the other side. We'll implement the std.mem.Allocator interface from scratch, build a bump allocator (also called an arena allocator), handle alignment correctly, and then add debugging features to catch common memory bugs. This is the kind of thing that separates "I use Zig" from "I understand Zig" ;-)

The std.mem.Allocator interface

Every allocator in Zig -- GeneralPurposeAllocator, page_allocator, FixedBufferAllocator, the testing allocator -- implements the same interface: std.mem.Allocator. When you call allocator.alloc(u8, 1024), you're going through this interface. Let's look at what it actually requires.

The std.mem.Allocator is a struct with two fields:

const std = @import("std");

// This is what std.mem.Allocator looks like (simplified, Zig 0.14):
// pub const Allocator = struct {
//     ptr: *anyopaque,
//     vtable: *const VTable,
//
//     pub const VTable = struct {
//         alloc: *const fn (*anyopaque, len: usize, alignment: Alignment, ret_addr: usize) ?[*]u8,
//         resize: *const fn (*anyopaque, memory: []u8, alignment: Alignment, new_len: usize, ret_addr: usize) bool,
//         remap: *const fn (*anyopaque, memory: []u8, alignment: Alignment, new_len: usize, ret_addr: usize) ?[*]u8,
//         free: *const fn (*anyopaque, memory: []u8, alignment: Alignment, ret_addr: usize) void,
//     };
// };

// To create an allocator you provide:
// 1. A pointer to your allocator state (ptr)
// 2. A vtable with function pointers for alloc/resize/remap/free

It's a vtable-based interface -- exactly the type erasure pattern we covered in episode 13. The ptr field is an opaque pointer to whatever state your allocator needs (a buffer, a free list, a page table). The vtable field holds function pointers for the allocation operations: alloc, resize, remap, and free.

This design means any code that takes a std.mem.Allocator parameter doesn't know (or care) what allocator it's using. It could be backed by the OS page allocator, a fixed buffer on the stack, a custom arena, or anything else. Same interface, different implementations. That's the whole point of Zig's allocator design -- the caller decides the allocation strategy, not the library.
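To make this concrete, here's a small sketch. The helper name repeatChar is made up for illustration; the point is that the same function runs unchanged against the OS page allocator and a stack-backed FixedBufferAllocator:

```zig
const std = @import("std");

// A function that takes std.mem.Allocator doesn't know which strategy backs it.
fn repeatChar(a: std.mem.Allocator, c: u8, n: usize) ![]u8 {
    const out = try a.alloc(u8, n);
    @memset(out, c);
    return out;
}

pub fn main() !void {
    // Strategy 1: OS pages
    const heap_copy = try repeatChar(std.heap.page_allocator, 'x', 8);
    defer std.heap.page_allocator.free(heap_copy);

    // Strategy 2: a fixed stack buffer -- same call, zero heap
    var buf: [64]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const stack_copy = try repeatChar(fba.allocator(), 'y', 8);

    std.debug.print("{s} {s}\n", .{ heap_copy, stack_copy });
}
```

The caller picked the strategy both times; repeatChar never changed.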

Building a bump allocator from scratch

A bump allocator (or arena allocator) is the simplest useful allocator you can build. The idea: you have a chunk of memory, and you maintain a single pointer that starts at the beginning and moves forward ("bumps") with each allocation. When someone asks for N bytes, you give them the memory starting at the current bump pointer and advance it by N. Individual free() calls are no-ops -- you free everything at once by resetting the pointer back to the beginning.

This sounds almost too simple, but bump allocators are used everywhere in performance-critical code: game engines, compilers, parsers, web servers. Any time you have a batch of allocations that all share the same lifetime (like "everything we need to process this HTTP request"), a bump allocator is ideal.

const std = @import("std");

const BumpAllocator = struct {
    buffer: []u8,
    offset: usize,
    allocations: usize,

    pub fn init(buffer: []u8) BumpAllocator {
        return .{
            .buffer = buffer,
            .offset = 0,
            .allocations = 0,
        };
    }

    pub fn reset(self: *BumpAllocator) void {
        self.offset = 0;
        self.allocations = 0;
    }

    pub fn allocator(self: *BumpAllocator) std.mem.Allocator {
        return .{
            .ptr = self,
            .vtable = &.{
                .alloc = alloc,
                .resize = resize,
                .remap = remap,
                .free = free,
            },
        };
    }

    fn alloc(ctx: *anyopaque, len: usize, alignment: std.mem.Alignment, _: usize) ?[*]u8 {
        const self: *BumpAllocator = @ptrCast(@alignCast(ctx));

        // Align the current offset forward (Alignment is stored as a log2
        // value; toByteUnits() converts it back to a byte count)
        const aligned_offset = std.mem.alignForward(usize, self.offset, alignment.toByteUnits());

        if (aligned_offset + len > self.buffer.len) {
            return null; // out of memory
        }

        const result = self.buffer.ptr + aligned_offset;
        self.offset = aligned_offset + len;
        self.allocations += 1;

        return result;
    }

    fn resize(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize, _: usize) bool {
        // Bump allocator can't resize in general
        return false;
    }

    fn remap(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize, _: usize) ?[*]u8 {
        // Can't relocate in place either -- callers fall back to alloc + copy + free
        return null;
    }

    fn free(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize) void {
        // Individual free is a no-op for bump allocators
        // Everything gets freed at once via reset()
    }
};

pub fn main() !void {
    // Back the allocator with a stack buffer
    var buffer: [4096]u8 = undefined;
    var bump = BumpAllocator.init(&buffer);
    const alloc = bump.allocator();

    // Allocate some things
    const nums = try alloc.alloc(u32, 10);
    for (nums, 0..) |*n, i| {
        n.* = @intCast(i * 10);
    }

    const message = try alloc.alloc(u8, 32);
    @memcpy(message[0..12], "Hello, bump!");

    std.debug.print("Numbers: ", .{});
    for (nums) |n| std.debug.print("{d} ", .{n});
    std.debug.print("\nMessage: {s}\n", .{message[0..12]});
    std.debug.print("Used: {d}/{d} bytes, {d} allocations\n", .{
        bump.offset, bump.buffer.len, bump.allocations,
    });

    // Reset frees everything at once
    bump.reset();
    std.debug.print("After reset: {d} bytes used\n", .{bump.offset});
}

Notice how we get the std.mem.Allocator interface by calling bump.allocator(), which packages up a pointer to our BumpAllocator and a vtable of our alloc/resize/free functions. From that point on, anything that takes a std.mem.Allocator parameter can use our bump allocator without knowing it's a bump allocator. The ArrayList from the standard library, the JSON parser, std.fmt.allocPrint -- all of them just work.

The alloc function does three things: align the offset, check bounds, and bump the pointer. The free function literally does nothing. The resize function returns false and remap returns null (we can't grow an allocation in place because another allocation might sit right after it). That's the entire implementation.

Alignment: why it matters and how to get it right

Alignment is one of those things that "just works" when you use the standard allocators, but if you build your own you have to handle it explicitly. When the CPU reads a 4-byte u32 from memory, it expects that value to start at an address divisible by 4. When it reads an 8-byte u64, it expects an address divisible by 8. If the address isn't aligned correctly, you get either a performance penalty (the CPU does two reads instead of one) or a hard crash (on some architectures, misaligned access is a fault).

Zig enforces alignment at the type system level. Every pointer type has an alignment: *u32 is 4-byte aligned, *u64 is 8-byte aligned, *u8 is 1-byte aligned. When you allocate memory, the allocator interface receives the required alignment as a parameter (the std.mem.Alignment argument -- stored as a log2 value, so alignment 4 is encoded as 2 and alignment 8 as 3).
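You can inspect these natural alignments yourself with @alignOf (the exact values are target-dependent; the comments assume a typical 64-bit platform):

```zig
const std = @import("std");

pub fn main() void {
    // Natural alignment of each type, as the type system sees it
    std.debug.print("u8:  align {d}\n", .{@alignOf(u8)}); // 1
    std.debug.print("u32: align {d}\n", .{@alignOf(u32)}); // 4
    std.debug.print("u64: align {d}\n", .{@alignOf(u64)}); // 8 on 64-bit targets

    // The log2 encoding the allocator interface uses:
    // align 1 -> 0, align 2 -> 1, align 4 -> 2, align 8 -> 3
    std.debug.print("log2(8) = {d}\n", .{std.math.log2_int(usize, 8)}); // 3
}
```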

The key function for handling alignment is std.mem.alignForward:

const std = @import("std");

pub fn main() !void {
    // alignForward rounds UP to the next multiple of alignment
    std.debug.print("alignForward(0, 4)  = {d}\n", .{std.mem.alignForward(usize, 0, 4)});
    std.debug.print("alignForward(1, 4)  = {d}\n", .{std.mem.alignForward(usize, 1, 4)});
    std.debug.print("alignForward(3, 4)  = {d}\n", .{std.mem.alignForward(usize, 3, 4)});
    std.debug.print("alignForward(4, 4)  = {d}\n", .{std.mem.alignForward(usize, 4, 4)});
    std.debug.print("alignForward(5, 4)  = {d}\n", .{std.mem.alignForward(usize, 5, 4)});
    std.debug.print("alignForward(7, 8)  = {d}\n", .{std.mem.alignForward(usize, 7, 8)});
    std.debug.print("alignForward(8, 8)  = {d}\n", .{std.mem.alignForward(usize, 8, 8)});
    std.debug.print("alignForward(9, 8)  = {d}\n", .{std.mem.alignForward(usize, 9, 8)});

    // Output:
    // alignForward(0, 4)  = 0
    // alignForward(1, 4)  = 4
    // alignForward(3, 4)  = 4
    // alignForward(4, 4)  = 4
    // alignForward(5, 4)  = 8
    // alignForward(7, 8)  = 8
    // alignForward(8, 8)  = 8
    // alignForward(9, 8)  = 16
}

The math behind alignForward is beautifully simple: (value + alignment - 1) & ~(alignment - 1). It rounds up to the next multiple of the alignment. Since alignments are always powers of two, the bitwise AND with the inverted mask zeroes out the lower bits. No division, no modulo -- just bit manipulation (we covered this in episode 17).

In our bump allocator, alignment means we sometimes waste a few bytes between allocations. If the current offset is 5 and someone requests a u32 (alignment 4), we skip bytes 5-7 and start the allocation at offset 8. Those three bytes are "padding" -- wasted. A well-designed allocator minimizes padding by either sorting allocations by alignment (largest first) or using separate pools for different alignments.
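To see the bit trick in isolation, here's the same computation written out by hand (alignForwardManual is an illustrative name, not a standard library function):

```zig
const std = @import("std");

// Round value up to the next multiple of alignment.
// Only valid when alignment is a power of two.
fn alignForwardManual(value: usize, alignment: usize) usize {
    return (value + alignment - 1) & ~(alignment - 1);
}

pub fn main() void {
    std.debug.print("{d}\n", .{alignForwardManual(5, 4)}); // 8
    std.debug.print("{d}\n", .{alignForwardManual(8, 8)}); // 8

    // Agrees with the standard library version
    std.debug.print("{}\n", .{alignForwardManual(9, 8) == std.mem.alignForward(usize, 9, 8)}); // true
}
```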

@alignCast and pointer casting

When you receive memory from an allocator as [*]u8, you often need to cast it to a more specific pointer type. Zig's @alignCast verifies (at runtime in Debug/ReleaseSafe builds) that the pointer actually has the required alignment:

const std = @import("std");

pub fn main() !void {
    // A 16-byte aligned stack buffer, like one you'd hand to a bump allocator
    var buffer: [256]u8 align(16) = undefined;

    // Raw byte pointer -- the shape an allocator's vtable works with
    const raw_ptr: [*]u8 = buffer[0..].ptr;

    // Safe: buffer is 16-byte aligned, so casting to *u32 (align 4) is fine
    const u32_ptr: *u32 = @ptrCast(@alignCast(raw_ptr));
    u32_ptr.* = 42;
    std.debug.print("u32 value: {d}\n", .{u32_ptr.*});

    // The allocator handles this for you when you use alloc(u32, N)
    // But understanding what happens underneath is important
}

In practice, you rarely need @alignCast when using allocators through the standard interface because alloc(T, n) returns a properly typed []T slice. But when building the allocator itself or doing low-level memory tricks, you'll encounter it. The @alignCast is Zig's way of saying "I promise this pointer is properly aligned" while giving the runtime a chance to verify that promise.

Fixed-buffer allocator for stack memory

Our bump allocator already works with a stack-allocated buffer, but let's look at how Zig's standard library FixedBufferAllocator does it. Understanding this is useful because it shows a slightly more capable version of what we built -- one that actually supports free() for the most recent allocation (a LIFO pattern):

const std = @import("std");

pub fn main() !void {
    // Stack-allocated buffer -- no heap, no OS calls
    var buf: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const alloc = fba.allocator();

    // Use it exactly like any other allocator
    var list = std.ArrayList(u32).init(alloc);
    defer list.deinit();

    for (0..20) |i| {
        try list.append(@intCast(i * 3));
    }

    std.debug.print("List has {d} items, capacity {d}\n", .{ list.items.len, list.capacity });
    std.debug.print("Buffer used: {d}/{d} bytes\n", .{ fba.end_index, buf.len });

    // Reset and reuse the same buffer
    fba.reset();
    std.debug.print("After reset: {d} bytes used\n", .{fba.end_index});

    // Now we can allocate again from the same buffer
    const msg = try alloc.alloc(u8, 64);
    @memcpy(msg[0..12], "Fresh start!");
    std.debug.print("New allocation: {s}\n", .{msg[0..12]});
}

The pattern is clean: allocate a buffer on the stack (or embed it in a struct), wrap it in a FixedBufferAllocator, and hand out the allocator interface. No heap allocation, no OS calls, completely deterministic performance. This is why Zig is popular for embedded and real-time systems -- you can control exactly where every byte comes from.

I use this pattern a lot when I know the maximum size of something in advance. Parsing a fixed-format config file? 4KB buffer is plenty. Building a small HTTP response? 16KB buffer. Formatting a log line? 256 bytes. Why involve the heap when you know the upper bound?
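As a sketch of that last case: std.fmt.bufPrint formats straight into a stack buffer, so a bounded log line never needs to touch an allocator at all:

```zig
const std = @import("std");

pub fn main() !void {
    // 256 bytes is plenty for a log line with a known upper bound
    var buf: [256]u8 = undefined;

    // bufPrint returns the slice of buf that was actually written
    const line = try std.fmt.bufPrint(&buf, "[{s}] request {d} took {d}ms", .{ "INFO", 42, 7 });
    std.debug.print("{s}\n", .{line});
}
```

If the formatted output would overflow the buffer, bufPrint returns error.NoSpaceLeft instead of corrupting memory.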

A debugging allocator: tracking and detecting bugs

The GeneralPurposeAllocator in Zig already has excellent debugging features -- it detects double-frees, use-after-free, and memory leaks. But building your own debugging wrapper teaches you how those features work. Let's extend our bump allocator with tracking:

const std = @import("std");

const DebugBumpAllocator = struct {
    buffer: []u8,
    offset: usize,

    // Tracking metadata
    active_allocations: usize,
    total_allocated: usize,
    total_freed: usize,
    peak_usage: usize,

    // Simple allocation log (fixed size for simplicity)
    log: [256]AllocationEntry,
    log_count: usize,

    const AllocationEntry = struct {
        ptr: usize,      // address
        size: usize,
        freed: bool,
    };

    pub fn init(buffer: []u8) DebugBumpAllocator {
        return .{
            .buffer = buffer,
            .offset = 0,
            .active_allocations = 0,
            .total_allocated = 0,
            .total_freed = 0,
            .peak_usage = 0,
            .log = undefined,
            .log_count = 0,
        };
    }

    pub fn allocator(self: *DebugBumpAllocator) std.mem.Allocator {
        return .{
            .ptr = self,
            .vtable = &.{
                .alloc = alloc,
                .resize = resize,
                .remap = remap,
                .free = free,
            },
        };
    }

    fn alloc(ctx: *anyopaque, len: usize, alignment: std.mem.Alignment, _: usize) ?[*]u8 {
        const self: *DebugBumpAllocator = @ptrCast(@alignCast(ctx));
        const aligned_offset = std.mem.alignForward(usize, self.offset, alignment.toByteUnits());

        if (aligned_offset + len > self.buffer.len) {
            std.debug.print("[DEBUG-ALLOC] OUT OF MEMORY: requested {d} bytes, " ++
                "only {d} available\n", .{ len, self.buffer.len - self.offset });
            return null;
        }

        const result = self.buffer.ptr + aligned_offset;
        self.offset = aligned_offset + len;
        self.active_allocations += 1;
        self.total_allocated += len;
        if (self.offset > self.peak_usage) {
            self.peak_usage = self.offset;
        }

        // Log the allocation
        if (self.log_count < self.log.len) {
            self.log[self.log_count] = .{
                .ptr = @intFromPtr(result),
                .size = len,
                .freed = false,
            };
            self.log_count += 1;
        }

        return result;
    }

    fn resize(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize, _: usize) bool {
        return false;
    }

    fn remap(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize, _: usize) ?[*]u8 {
        return null;
    }

    fn free(ctx: *anyopaque, memory: []u8, _: std.mem.Alignment, _: usize) void {
        const self: *DebugBumpAllocator = @ptrCast(@alignCast(ctx));
        const addr = @intFromPtr(memory.ptr);

        // Check for double-free
        for (self.log[0..self.log_count]) |*entry| {
            if (entry.ptr == addr) {
                if (entry.freed) {
                    std.debug.print("[DEBUG-ALLOC] DOUBLE FREE detected at " ++
                        "0x{x}, size {d}!\n", .{ addr, entry.size });
                    return;
                }
                entry.freed = true;
                self.active_allocations -= 1;
                self.total_freed += memory.len;
                return;
            }
        }

        std.debug.print("[DEBUG-ALLOC] FREE of unknown pointer 0x{x}!\n", .{addr});
    }

    pub fn dumpStats(self: *DebugBumpAllocator) void {
        std.debug.print("\n--- Allocator Stats ---\n", .{});
        std.debug.print("Active allocations: {d}\n", .{self.active_allocations});
        std.debug.print("Total allocated:    {d} bytes\n", .{self.total_allocated});
        std.debug.print("Total freed:        {d} bytes\n", .{self.total_freed});
        std.debug.print("Peak usage:         {d} bytes\n", .{self.peak_usage});
        std.debug.print("Current offset:     {d}/{d}\n", .{ self.offset, self.buffer.len });

        // Report leaks
        var leaks: usize = 0;
        for (self.log[0..self.log_count]) |entry| {
            if (!entry.freed) {
                leaks += 1;
            }
        }
        if (leaks > 0) {
            std.debug.print("\nWARNING: {d} allocation(s) not freed:\n", .{leaks});
            for (self.log[0..self.log_count]) |entry| {
                if (!entry.freed) {
                    std.debug.print("  - 0x{x}: {d} bytes\n", .{ entry.ptr, entry.size });
                }
            }
        } else {
            std.debug.print("\nNo leaks detected.\n", .{});
        }
        std.debug.print("-----------------------\n", .{});
    }
};

pub fn main() !void {
    var buffer: [8192]u8 = undefined;
    var debug_alloc = DebugBumpAllocator.init(&buffer);
    const alloc = debug_alloc.allocator();

    // Normal usage
    const data1 = try alloc.alloc(u8, 100);
    const data2 = try alloc.alloc(u32, 50);
    const data3 = try alloc.alloc(u8, 200);

    // Free some (but not all -- deliberate leak)
    alloc.free(data1);
    alloc.free(data2);
    // data3 is intentionally NOT freed

    debug_alloc.dumpStats();

    // Try a double free (our allocator catches it)
    alloc.free(data1);
}

The output shows you exactly what's happening: how many allocations are active, peak memory usage, which allocations were leaked, and whether anyone tried to double-free. This is the exact kind of information that the standard GeneralPurposeAllocator provides when you call gpa.deinit() in debug mode -- and now you understand how it works under the hood.

In real projects, I use a debug allocator like this during development and then switch to a production allocator (bare bump or GPA) when shipping. The debugging overhead (tracking every allocation, checking for double-frees) costs performance, but catching memory bugs early is worth it. Zig's philosophy of "debug build is slow but catches everything, release build is fast but trusts you" maps perfectly onto this pattern.
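One way to sketch that split is to pick the allocator from the build mode at compile time. This is just one pattern, under the assumption that your release build is happy with page_allocator; many projects simply keep GPA everywhere:

```zig
const std = @import("std");
const builtin = @import("builtin");

pub fn main() !void {
    // Debug builds get the checking allocator; release builds skip the bookkeeping
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    const alloc = if (builtin.mode == .Debug)
        gpa.allocator()
    else
        std.heap.page_allocator;

    const data = try alloc.alloc(u8, 64);
    defer alloc.free(data);

    std.debug.print("build mode: {s}\n", .{@tagName(builtin.mode)});
}
```

Because builtin.mode is comptime-known, the unused branch compiles away entirely.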

When to use a custom allocator

The GeneralPurposeAllocator is good enough for most programs. It's general-purpose (hence the name), handles fragmentation well, and has excellent debugging in debug mode. So when does a custom allocator actually help?

Arena/bump allocator -- when you have many allocations with the same lifetime. Processing a web request, parsing a file, running a compiler pass. You allocate everything into the arena, process the data, then reset the arena in one shot. No individual frees, no fragmentation, and allocation is just a pointer bump -- way faster than a general-purpose allocator that has to search free lists and merge blocks.

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // ArenaAllocator wraps any allocator and provides reset()
    var arena = std.heap.ArenaAllocator.init(gpa.allocator());
    defer arena.deinit();
    const alloc = arena.allocator();

    // Simulate processing multiple "requests"
    for (0..5) |request_id| {
        // Each request allocates various things
        const header = try alloc.alloc(u8, 256);
        _ = header;
        const body = try alloc.alloc(u8, 4096);
        _ = body;
        var items = std.ArrayList(u32).init(alloc);
        for (0..100) |j| {
            try items.append(@intCast(j));
        }

        std.debug.print("Request {d}: processed {d} items\n", .{ request_id, items.items.len });

        // Reset: frees EVERYTHING allocated since last reset
        // No individual deinit/free needed!
        _ = arena.reset(.retain_capacity);
    }
}

The standard library's ArenaAllocator is exactly this pattern -- a bump allocator backed by another allocator (it requests big chunks from the backing allocator and bumps through them). The reset(.retain_capacity) call keeps the underlying pages allocated so the next round doesn't need to ask the OS for memory again. This is the single most impactful allocator optimization for request-processing workloads.

Fixed-buffer allocator -- when you know the exact maximum size and want zero heap involvement. Embedded systems, hot loops, stack-local computations. We saw this earlier with FixedBufferAllocator.

Pool allocator -- when you allocate and free many objects of the same size (game entities, network packets, AST nodes). A pool pre-allocates a big array and hands out fixed-size slots. Allocation is O(1), free is O(1), zero fragmentation. Zig's standard library includes std.heap.MemoryPool for this:

const std = @import("std");

const Entity = struct {
    id: u32,
    x: f32,
    y: f32,
    health: i32,
    active: bool,
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var pool = std.heap.MemoryPool(Entity).init(gpa.allocator());
    defer pool.deinit();

    // Create some entities
    const e1 = try pool.create();
    e1.* = .{ .id = 1, .x = 10.0, .y = 20.0, .health = 100, .active = true };

    const e2 = try pool.create();
    e2.* = .{ .id = 2, .x = 30.0, .y = 40.0, .health = 80, .active = true };

    const e3 = try pool.create();
    e3.* = .{ .id = 3, .x = 50.0, .y = 60.0, .health = 60, .active = true };

    std.debug.print("Entity 1: id={d}, pos=({d:.1},{d:.1}), hp={d}\n", .{
        e1.id, e1.x, e1.y, e1.health,
    });

    // Destroy e2 -- the slot goes back to the pool
    pool.destroy(e2);

    // Next create() reuses e2's slot (no new allocation)
    const e4 = try pool.create();
    e4.* = .{ .id = 4, .x = 70.0, .y = 80.0, .health = 100, .active = true };

    std.debug.print("Entity 4 (reused slot): id={d}, pos=({d:.1},{d:.1})\n", .{
        e4.id, e4.x, e4.y,
    });

    pool.destroy(e1);
    pool.destroy(e3);
    pool.destroy(e4);
}

Pool allocators are what game engines use for things like particles, bullets, and enemies -- objects that spawn and despawn rapidly, always the same size, and need allocation to be essentially free (no searching, no fragmentation).

Choosing the right allocator

Here's a quick decision guide:

Situation                        Allocator                Why
General purpose, debugging       GeneralPurposeAllocator  Catches leaks, double-free, use-after-free
Batch processing, same lifetime  ArenaAllocator           Fast alloc, bulk free, zero fragmentation
Known max size, no heap          FixedBufferAllocator     Stack-only, deterministic
Many same-size objects           MemoryPool               O(1) alloc/free, zero fragmentation
Testing                          std.testing.allocator    Fails test on leak
Custom needs                     Build your own           Full control

The advice I'd give: start with GeneralPurposeAllocator everywhere. Profile. If allocation is a bottleneck (and you'll be surprised how rarely it is), identify the allocation pattern and pick the matching specialised allocator. Most programs never need anything beyond GPA + arena. Only go custom when you have measured data showing it matters.

Sooooo, what have we learned?

  • The std.mem.Allocator interface uses a vtable pattern (a state pointer plus function pointers for alloc/resize/remap/free) -- exactly the type erasure technique from episode 13. Any allocator can be swapped in transparently.
  • A bump/arena allocator is the simplest useful allocator: bump a pointer forward on alloc, do nothing on free, reset everything at once. Used in compilers, game engines, web servers -- anywhere allocations share a lifetime.
  • Alignment matters at the hardware level. std.mem.alignForward rounds up to the next properly aligned address. Allocators receive the required alignment as a std.mem.Alignment (a log2-encoded value) and must respect it to avoid crashes or performance penalties.
  • @alignCast tells the compiler to cast a pointer to a higher alignment. In debug builds, it verifies the alignment at runtime -- catching misaligned pointers before they cause subtle bugs.
  • Fixed-buffer allocators back memory from a stack buffer or embedded array -- zero heap, zero OS calls, completely deterministic. Ideal for embedded systems and hot paths with known bounds.
  • Debugging allocators track allocations and detect double-frees, use-after-free, and leaks by logging every alloc/free operation. This is how GeneralPurposeAllocator catches memory bugs in debug mode.
  • ArenaAllocator (std library) and MemoryPool cover the two most common specialization patterns. Arena for same-lifetime batches, pool for same-size rapid alloc/free cycles.
  • When to go custom: start with GPA, measure, specialize only where profiling shows allocation is the bottleneck. Most programs never need anything beyond GPA + arena.

Custom allocators are one of the features that make Zig genuinely different from most languages. In Python or JavaScript, you get one allocator (the runtime's GC) and that's that. In C, you get malloc/free and everything else is DIY. Zig gives you a clean interface, powerful standard library implementations, and the ability to plug in your own when you need to. The allocator is a first-class citizen because memory management IS the program for systems-level work. Understanding it at this level will pay off as we start exploring C interop, where you'll need to think about who owns which memory across language boundaries ;-)

Exercises

  1. Extend the BumpAllocator from this episode to support resize for the most recent allocation (the one at the end of the buffer). If the requested resize is for the last allocation, just adjust the offset. If it's for any other allocation, return false. Test it by creating an ArrayList(u32) backed by your bump allocator and appending 50 items -- the ArrayList's internal resize calls should succeed as long as it's the only active allocation.

  2. Build a CountingAllocator that wraps any std.mem.Allocator and transparently forwards all alloc/resize/free calls to the inner allocator, while counting how many allocations are active, how many bytes are currently allocated, and what the peak byte usage was. Provide a printStats() method. Test it by wrapping a GeneralPurposeAllocator, running some code that allocates and frees, and printing the stats at the end.

  3. Create a StackAllocator that works like a bump allocator but also supports LIFO (last-in-first-out) frees. Keep a small "stack" of allocation sizes. When free is called, check that the pointer matches the most recent allocation -- if it does, rewind the offset. If it doesn't (free out of order), print a warning and do nothing. Demonstrate correct LIFO usage with 3 allocations freed in reverse order, and show what happens when someone tries to free out of order.

Thanks, and see you next time!

@scipio


