
Learn Zig Series (#15) - The Build System (build.zig)

What will I learn
- You will learn how build.zig replaces Makefiles, CMake, and other build configuration;
- the std.Build API for declaring executables, libraries, and test targets;
- build options: debug vs release, target triples, optimization levels;
- adding dependencies and linking C libraries from the build script;
- custom build steps for code generation and asset processing;
- the zig fetch and build.zig.zon package management system;
- conditional compilation and build-time configuration via options;
- organizing multi-binary projects with shared libraries.
Requirements
- A working modern computer running macOS, Windows or Ubuntu;
- An installed Zig 0.14+ distribution (download from ziglang.org);
- The ambition to learn Zig programming.
Difficulty
- Intermediate
Curriculum (of the Learn Zig Series):
- Zig Programming Tutorial - ep001 - Intro
- Learn Zig Series (#2) - Hello Zig, Variables and Types
- Learn Zig Series (#3) - Functions and Control Flow
- Learn Zig Series (#4) - Error Handling (Zig's Best Feature)
- Learn Zig Series (#5) - Arrays, Slices, and Strings
- Learn Zig Series (#6) - Structs, Enums, and Tagged Unions
- Learn Zig Series (#7) - Memory Management and Allocators
- Learn Zig Series (#8) - Pointers and Memory Layout
- Learn Zig Series (#9) - Comptime (Zig's Superpower)
- Learn Zig Series (#10) - Project Structure, Modules, and File I/O
- Learn Zig Series (#11) - Mini Project: Building a Step Sequencer
- Learn Zig Series (#12) - Testing and Test-Driven Development
- Learn Zig Series (#13) - Interfaces via Type Erasure
- Learn Zig Series (#14) - Generics with Comptime Parameters
- Learn Zig Series (#15) - The Build System (build.zig) (this post)
Learn Zig Series (#15) - The Build System (build.zig)
Welcome back! In episode #14 we explored Zig's generics system -- which is really just comptime parameters in disguise. We built generic functions with comptime T: type, returned types from functions to create generic data structures like Stack(i32), inspected types at compile time with @typeInfo and @hasDecl, combined multiple comptime parameters in FixedRing(T, capacity), and saw how anytype provides lightweight polymorphism. That covered the compile-time side of Zig's abstraction story. Combined with the runtime type erasure from ep013, you now have both tools for writing polymorphic code.
But there's one piece of the puzzle we've been ignoring this entire series. Every single program we've compiled used zig build or zig test -- and those commands read a file called build.zig that we never wrote ourselves. We either ran zig init and used whatever it generated, or compiled single files with zig build-exe. Well, that changes now. Because once your project has more than one source file, once you need to link a C library, once you want to produce both a debug and release binary, or once you pull in a third-party package -- you need to understand build.zig. And the good news is that build.zig is not a config file in some DSL. It's Zig code. Real Zig code, using the same language you already know, with conditionals and loops and error handling and all the rest.
Here we go!
Solutions to Episode 14 Exercises
Before we get into the build system, here are the solutions to last episode's exercises on generics. All code is complete and compilable -- copy, paste, zig test, done.
Exercise 1 -- Generic MinHeap(T):
const std = @import("std");
const testing = std.testing;
fn MinHeap(comptime T: type) type {
switch (@typeInfo(T)) {
.int, .float, .@"enum" => {},
else => @compileError("MinHeap requires a comparable type"),
}
return struct {
items: std.ArrayList(T),
const Self = @This();
pub fn init(allocator: std.mem.Allocator) Self {
return .{ .items = std.ArrayList(T).init(allocator) };
}
pub fn deinit(self: *Self) void {
self.items.deinit();
}
pub fn insert(self: *Self, value: T) !void {
try self.items.append(value);
var idx = self.items.items.len - 1;
while (idx > 0) {
const parent = (idx - 1) / 2;
if (self.items.items[idx] < self.items.items[parent]) {
const tmp = self.items.items[idx];
self.items.items[idx] = self.items.items[parent];
self.items.items[parent] = tmp;
idx = parent;
} else break;
}
}
pub fn extractMin(self: *Self) !T {
if (self.items.items.len == 0) return error.Empty;
const min_val = self.items.items[0];
const last = self.items.pop().?; // pop() returns ?T in Zig 0.14; never null here (length checked above)
if (self.items.items.len > 0) {
self.items.items[0] = last;
self.bubbleDown(0);
}
return min_val;
}
pub fn peek(self: Self) !T {
if (self.items.items.len == 0) return error.Empty;
return self.items.items[0];
}
fn bubbleDown(self: *Self, start: usize) void {
var idx = start;
const len = self.items.items.len;
while (true) {
var smallest = idx;
const left = 2 * idx + 1;
const right = 2 * idx + 2;
if (left < len and self.items.items[left] < self.items.items[smallest])
smallest = left;
if (right < len and self.items.items[right] < self.items.items[smallest])
smallest = right;
if (smallest == idx) break;
const tmp = self.items.items[idx];
self.items.items[idx] = self.items.items[smallest];
self.items.items[smallest] = tmp;
idx = smallest;
}
}
};
}
test "MinHeap extracts in sorted order" {
var heap = MinHeap(i32).init(testing.allocator);
defer heap.deinit();
try heap.insert(5);
try heap.insert(2);
try heap.insert(8);
try heap.insert(1);
try testing.expectEqual(@as(i32, 1), try heap.extractMin());
try testing.expectEqual(@as(i32, 2), try heap.extractMin());
try testing.expectEqual(@as(i32, 5), try heap.extractMin());
try testing.expectEqual(@as(i32, 8), try heap.extractMin());
}
The key insight: bubble-up after insert (swap with parent while smaller), bubble-down after extract (swap with smallest child while larger). The comptime constraint uses the same @typeInfo switch pattern from the episode.
Exercise 2 -- Generic map function:
const std = @import("std");
const testing = std.testing;
fn map(
comptime T: type,
comptime R: type,
items: []const T,
f: *const fn (T) R,
allocator: std.mem.Allocator,
) ![]R {
const result = try allocator.alloc(R, items.len);
for (items, 0..) |item, i| {
result[i] = f(item);
}
return result;
}
fn intToFloat(x: i32) f64 {
return @floatFromInt(x);
}
fn isHigh(x: u8) bool {
return x > 128;
}
test "map i32 to f64" {
const input = [_]i32{ 1, 2, 3 };
const result = try map(i32, f64, &input, &intToFloat, testing.allocator);
defer testing.allocator.free(result);
try testing.expectApproxEqAbs(@as(f64, 1.0), result[0], 0.001);
try testing.expectApproxEqAbs(@as(f64, 2.0), result[1], 0.001);
try testing.expectApproxEqAbs(@as(f64, 3.0), result[2], 0.001);
}
test "map u8 to bool" {
const input = [_]u8{ 50, 200, 100, 255 };
const result = try map(u8, bool, &input, &isHigh, testing.allocator);
defer testing.allocator.free(result);
try testing.expectEqual(false, result[0]);
try testing.expectEqual(true, result[1]);
try testing.expectEqual(false, result[2]);
try testing.expectEqual(true, result[3]);
}
The caller owns the returned slice and must free it. The testing allocator verifies no leaks.
Exercise 3 -- Matrix(T, rows, cols) with transpose():
const std = @import("std");
const testing = std.testing;
fn Matrix(comptime T: type, comptime rows: usize, comptime cols: usize) type {
switch (@typeInfo(T)) {
.int, .float => {},
else => @compileError("Matrix requires a numeric type"),
}
return struct {
data: [rows][cols]T = std.mem.zeroes([rows][cols]T),
const Self = @This();
pub fn get(self: Self, r: usize, c: usize) T {
return self.data[r][c];
}
pub fn set(self: *Self, r: usize, c: usize, val: T) void {
self.data[r][c] = val;
}
pub fn transpose(self: Self) Matrix(T, cols, rows) {
var result = Matrix(T, cols, rows){};
for (0..rows) |r| {
for (0..cols) |c| {
result.data[c][r] = self.data[r][c];
}
}
return result;
}
};
}
test "Matrix transpose swaps dimensions" {
var m = Matrix(i32, 2, 3){};
m.set(0, 0, 1);
m.set(0, 1, 2);
m.set(0, 2, 3);
m.set(1, 0, 4);
m.set(1, 1, 5);
m.set(1, 2, 6);
const t = m.transpose();
try testing.expectEqual(@as(i32, 1), t.get(0, 0));
try testing.expectEqual(@as(i32, 4), t.get(0, 1));
try testing.expectEqual(@as(i32, 2), t.get(1, 0));
try testing.expectEqual(@as(i32, 5), t.get(1, 1));
try testing.expectEqual(@as(i32, 3), t.get(2, 0));
try testing.expectEqual(@as(i32, 6), t.get(2, 1));
}
Notice how transpose() returns Matrix(T, cols, rows) -- the comptime parameters are swapped. A 2x3 matrix transposes to a 3x2 matrix. The compiler enforces this at the type level.
Exercise 4 -- FixedRing iterator:
const std = @import("std");
const testing = std.testing;
fn FixedRing(comptime T: type, comptime capacity: usize) type {
if (capacity == 0) @compileError("capacity must be > 0");
if (capacity & (capacity - 1) != 0)
@compileError("capacity must be a power of 2");
return struct {
buffer: [capacity]T = undefined,
head: usize = 0,
tail: usize = 0,
count: usize = 0,
const Self = @This();
const mask = capacity - 1;
pub fn push(self: *Self, val: T) void {
self.buffer[self.tail & mask] = val;
self.tail +%= 1;
if (self.count < capacity) {
self.count += 1;
} else {
self.head +%= 1;
}
}
pub fn pop(self: *Self) ?T {
if (self.count == 0) return null;
const val = self.buffer[self.head & mask];
self.head +%= 1;
self.count -= 1;
return val;
}
pub const Iterator = struct {
ring: *const Self,
pos: usize,
remaining: usize,
pub fn next(self: *Iterator) ?T {
if (self.remaining == 0) return null;
const val = self.ring.buffer[self.pos & mask];
self.pos +%= 1;
self.remaining -= 1;
return val;
}
};
pub fn iterator(self: *const Self) Iterator {
return .{
.ring = self,
.pos = self.head,
.remaining = self.count,
};
}
};
}
test "FixedRing iterator yields FIFO order" {
var ring = FixedRing(i32, 4){};
ring.push(10);
ring.push(20);
ring.push(30);
ring.push(40);
var it = ring.iterator();
var collected: [4]i32 = undefined;
var i: usize = 0;
while (it.next()) |val| {
collected[i] = val;
i += 1;
}
try testing.expectEqual(@as(i32, 10), collected[0]);
try testing.expectEqual(@as(i32, 20), collected[1]);
try testing.expectEqual(@as(i32, 30), collected[2]);
try testing.expectEqual(@as(i32, 40), collected[3]);
// Ring still has all elements
try testing.expectEqual(@as(usize, 4), ring.count);
}
The iterator reads without modifying the ring. It tracks its own pos and remaining count, leaving the ring's head and tail untouched.
Exercise 5 -- std.BoundedArray analysis: BoundedArray(T, capacity) takes two comptime parameters -- an element type and a maximum capacity. It stores data in a [capacity]T array on the stack (no allocator needed, same as FixedRing). When you exceed the capacity, it returns error.Overflow instead of reallocating. Choose BoundedArray when you know the maximum size at compile time and want zero heap allocation -- embedded systems, hot loops, or anywhere allocation latency matters. Choose ArrayList when the size is unbounded or unpredictable and you're willing to pay for heap allocation and occasional resizing.
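To make the comparison concrete, here's a minimal sketch of the trade-off in action, using the std.BoundedArray API as it exists in Zig 0.14 (the values are illustrative):

```zig
const std = @import("std");
const testing = std.testing;

test "BoundedArray stores on the stack and overflows explicitly" {
    // Capacity is a comptime parameter; no allocator anywhere.
    var buf = try std.BoundedArray(u8, 4).init(0);
    try buf.append(10);
    try buf.append(20);
    try buf.append(30);
    try buf.append(40);
    // A fifth append exceeds the comptime capacity -- no reallocation,
    // just an explicit error, unlike ArrayList which would grow.
    try testing.expectError(error.Overflow, buf.append(50));
    try testing.expectEqual(@as(usize, 4), buf.len);
    try testing.expectEqual(@as(u8, 10), buf.get(0));
}
```

Run it with zig test; the whole structure lives in the test's stack frame, which is exactly why it suits embedded targets and hot loops.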
Exercise 6 -- Result(T, E):
const std = @import("std");
const testing = std.testing;
fn Result(comptime T: type, comptime E: type) type {
return union(enum) {
ok_val: T,
err_val: E,
const Self = @This();
pub fn ok(value: T) Self {
return .{ .ok_val = value };
}
pub fn err(e: E) Self {
return .{ .err_val = e };
}
pub fn isOk(self: Self) bool {
return self == .ok_val;
}
pub fn unwrap(self: Self) T {
return switch (self) {
.ok_val => |v| v,
.err_val => @panic("called unwrap on an error Result"),
};
}
pub fn unwrapOr(self: Self, default: T) T {
return switch (self) {
.ok_val => |v| v,
.err_val => default,
};
}
};
}
test "Result ok path" {
const r = Result(i32, []const u8).ok(42);
try testing.expect(r.isOk());
try testing.expectEqual(@as(i32, 42), r.unwrap());
try testing.expectEqual(@as(i32, 42), r.unwrapOr(0));
}
test "Result err path" {
const r = Result(i32, []const u8).err("something broke");
try testing.expect(!r.isOk());
try testing.expectEqual(@as(i32, -1), r.unwrapOr(-1));
}
This uses a tagged union rather than a struct with an is_ok bool -- more idiomatic Zig. Calling unwrap on an error panics, consistent with Rust's behavior.
Right, let's talk about build systems ;-)
What zig init gives you
Run zig init in an empty directory and you get two files that matter: src/main.zig and build.zig. The src/main.zig is your hello-world entry point. The build.zig is the build configuration. Open it up:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const exe = b.addExecutable(.{
.name = "my-project",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
b.installArtifact(exe);
const run_cmd = b.addRunArtifact(exe);
run_cmd.step.dependOn(b.getInstallStep());
if (b.args) |args| {
run_cmd.addArgs(args);
}
const run_step = b.step("run", "Run the application");
run_step.dependOn(&run_cmd.step);
const unit_tests = b.addTest(.{
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
const run_unit_tests = b.addRunArtifact(unit_tests);
const test_step = b.step("test", "Run unit tests");
test_step.dependOn(&run_unit_tests.step);
}
This is a Zig function. Not YAML. Not TOML. Not a Makefile. A regular pub fn build(b: *std.Build) void that receives a *std.Build handle and uses it to describe what to compile. The b parameter is your entire build system API -- targets, steps, options, artifacts, dependencies, all of it hangs off this one pointer.
Let me break down what's happening:
b.standardTargetOptions(.{}) and b.standardOptimizeOption(.{}) set up two command-line options that every Zig project gets for free. When someone runs zig build -Dtarget=x86_64-linux -Doptimize=ReleaseFast, these functions parse those -D flags and return the corresponding values. Without these flags, you get the native target (your current machine) and Debug mode.
b.addExecutable(...) declares "I want to compile an executable". You give it a name, a root source file (the entry point), the target platform, and optimization level. This returns a *Compile step -- a node in the build graph. It doesn't compile anything yet. It just registers the intent.
b.installArtifact(exe) says "when the default install step runs, copy this executable to the output directory" (zig-out/bin/ by default). This is what makes zig build produce an actual binary you can find.
The run_cmd and run_step block creates the zig build run subcommand. b.addRunArtifact(exe) creates a step that executes the compiled binary. run_cmd.step.dependOn(b.getInstallStep()) says "before you can run it, you need to install it" (which means compiling it first). This dependency chaining is how the build graph works -- steps depend on other steps, and the build system figures out the right order.
The test section does the same for zig build test -- compile the test binary (which includes all test blocks from the root source file and its imports), then run it.
Build modes and optimization
Zig has four build modes that you select with -Doptimize=:
const std = @import("std");
pub fn build(b: *std.Build) void {
const optimize = b.standardOptimizeOption(.{});
const exe = b.addExecutable(.{
.name = "perf-demo",
.root_source_file = b.path("src/main.zig"),
.target = b.standardTargetOptions(.{}),
.optimize = optimize,
});
b.installArtifact(exe);
// You can also hardcode optimization for specific targets:
const always_fast = b.addExecutable(.{
.name = "fast-tool",
.root_source_file = b.path("src/tool.zig"),
.target = b.standardTargetOptions(.{}),
.optimize = .ReleaseFast, // Always optimized
});
b.installArtifact(always_fast);
}
Debug (default): no optimization, safety checks enabled (bounds checking, null pointer detection, integer overflow detection), debug info included. Fast compilation, large binary, slow execution. Use during development.
ReleaseSafe: optimized for speed, but safety checks remain enabled. This is the "production" mode for most applications -- you get good performance while still catching bugs that slip through testing. If an out-of-bounds access happens in production, you get a stack trace instead of silent memory corruption.
ReleaseFast: maximum optimization, safety checks disabled. The compiler assumes your code is correct and optimizes accordingly. An out-of-bounds access is undefined behavior -- it might crash, it might silently corrupt memory, or it might appear to "work" and then fail mysteriously later. Use only for performance-critical inner loops where you've already validated correctness with ReleaseSafe.
ReleaseSmall: optimized for binary size, safety checks disabled. Useful for embedded targets, WebAssembly, or anywhere binary size matters more than raw speed. The compiler trades execution speed for smaller output.
The practical difference between these modes is significant. I've seen ReleaseFast binaries run 5-10x faster than Debug on computation-heavy workloads. But Debug catches bugs that ReleaseFast turns into silent corruption. My workflow: develop in Debug, test in ReleaseSafe, profile in ReleaseFast, ship in ReleaseSafe unless profiling shows the safety checks are a bottleneck (they rarely are).
Linking C libraries
One of Zig's biggest selling points is seamless C interop, and the build system is where that happens. Want to use zlib for compression? sqlite3 for a database? SDL2 for graphics? It's a few lines in build.zig:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const exe = b.addExecutable(.{
.name = "db-tool",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
// Link the C standard library (needed for most C deps)
exe.linkLibC();
// Link system libraries by name
exe.linkSystemLibrary("sqlite3");
exe.linkSystemLibrary("z"); // zlib
// Add include paths if headers aren't in the default location
exe.addSystemIncludePath(.{ .cwd_relative = "/usr/local/include" });
b.installArtifact(exe);
}
exe.linkLibC() links the C standard library. Most C libraries expect libc to be available, so you'll call this almost anytime you use C code. exe.linkSystemLibrary("sqlite3") tells the build system to find and link libsqlite3 on the system (using pkg-config or known system paths). On Ubuntu that means apt install libsqlite3-dev and you're set.
In your Zig source, you'd import the C headers:
const c = @cImport({
@cInclude("sqlite3.h");
@cInclude("zlib.h");
});
// Now you can use c.sqlite3_open(), c.compress(), etc.
This is the @cImport mechanism we touched on briefly in earlier episodes. The Zig compiler reads the C header files at compile time and generates Zig bindings automatically. No FFI boilerplate. No manually declaring function signatures. Just @cInclude the header and the functions are available with their original names under the c namespace.
Having said that, linkSystemLibrary only works for libraries installed on your system. For portable builds (where you can't assume the dependency is installed), you'll want to bundle the C source and compile it as part of your build -- which is what build.zig.zon and the package system handle.
The package system: build.zig.zon
Zig's package manager uses two files: build.zig.zon for declaring dependencies and build.zig for consuming them. The .zon format (Zig Object Notation) is like JSON but uses Zig syntax:
// build.zig.zon
.{
.name = "my-project",
.version = "0.1.0",
.dependencies = .{
.zap = .{
.url = "https://github.com/zigzap/zap/archive/refs/tags/v0.2.0.tar.gz",
.hash = "1220aabababababababababababababababababababababababababababababababababab",
},
.clap = .{
.url = "https://github.com/Hejsil/zig-clap/archive/refs/tags/0.9.1.tar.gz",
.hash = "1220cdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcd",
},
},
.paths = .{
"build.zig",
"build.zig.zon",
"src",
},
}
To add a dependency, you use zig fetch --save which downloads the package, computes the content hash, and adds the entry to your .zon file:
zig fetch --save https://github.com/zigzap/zap/archive/refs/tags/v0.2.0.tar.gz
The hash is a content hash of the entire package -- if the upstream tarball changes (even by one byte), the hash won't match and Zig will refuse to use it. This gives you reproducible builds without a lock file. The hash IS the lock.
Then in build.zig, you consume the dependency:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Fetch the dependency declared in build.zig.zon
const zap_dep = b.dependency("zap", .{
.target = target,
.optimize = optimize,
});
const exe = b.addExecutable(.{
.name = "my-server",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
// Make the dependency's module available to our code
exe.root_module.addImport("zap", zap_dep.module("zap"));
b.installArtifact(exe);
}
b.dependency("zap", ...) fetches the dependency (using the URL and hash from .zon) and builds it with the given target and optimize options. exe.root_module.addImport("zap", ...) makes it importable in your source code as const zap = @import("zap");. That's the entire workflow. No npm install. No cargo add. Just zig fetch --save URL and a few lines in build.zig.
The package system is still evolving (Zig 0.14 made several improvements), but the basic flow is stable: declare in .zon, fetch with zig fetch, consume with b.dependency(), import in source.
Custom build steps
Here's where build.zig being real Zig code truly shines. You can define custom build steps that run arbitrary logic -- generating source files, processing assets, running code generators, anything you can express in Zig:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Custom step: generate a version string at build time
const wf = b.addWriteFiles();
const version_file = wf.add("version.zig",
\\pub const version = "1.2.3";
\\pub const build_time = "2026-04-10T12:00:00Z";
\\pub const git_hash = "abc123";
);
const exe = b.addExecutable(.{
.name = "versioned-app",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
// Make the generated file available as an import
exe.root_module.addAnonymousImport("version", .{
.root_source_file = version_file,
});
b.installArtifact(exe);
}
Then in src/main.zig:
const version = @import("version");
const std = @import("std");
pub fn main() void {
std.debug.print("App version: {s}\n", .{version.version});
std.debug.print("Built at: {s}\n", .{version.build_time});
}
This is a simple example but the pattern extends to anything: generating Zig code from a schema file, embedding binary assets as comptime-known byte arrays, running a pre-build linter, etc. Because build steps form a dependency graph (step B depends on step A), the build system automatically figures out the correct execution order and parallelizes independent steps.
A more practical example -- running a tool that generates source code before compilation:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// First: build our code generator tool
const codegen_tool = b.addExecutable(.{
.name = "codegen",
.root_source_file = b.path("tools/codegen.zig"),
.target = b.graph.host, // Always build for the host machine
.optimize = .ReleaseFast,
});
// Second: run the codegen tool to produce a source file
const codegen_run = b.addRunArtifact(codegen_tool);
codegen_run.addArg("--output");
const gen_file = codegen_run.addOutputFileArg("generated.zig");
// Third: build the main app, importing the generated file
const exe = b.addExecutable(.{
.name = "my-app",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
exe.root_module.addAnonymousImport("generated", .{
.root_source_file = gen_file,
});
b.installArtifact(exe);
}
The build graph here is: compile codegen.zig -> run codegen to produce generated.zig -> compile main.zig with the generated file as an import. The build system resolves this ordering automatically. If codegen.zig hasn't changed since the last build, it skips recompilation. If main.zig hasn't changed but the generated file has, it recompiles only main.zig. This is incremental building -- a feature you get for free from the dependency graph.
Build options: configuration at build time
Custom build options let the user pass configuration via -D flags. This is how you implement feature flags, conditional compilation, or build-variant selection:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Define custom build options
const enable_logging = b.option(
bool,
"enable-logging",
"Enable verbose logging (default: true in Debug)",
) orelse (optimize == .Debug);
const max_connections = b.option(
u32,
"max-connections",
"Maximum simultaneous connections",
) orelse 100;
const exe = b.addExecutable(.{
.name = "server",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
// Pass options to the source code as comptime-known values
const options = b.addOptions();
options.addOption(bool, "enable_logging", enable_logging);
options.addOption(u32, "max_connections", max_connections);
exe.root_module.addOptions("config", options);
b.installArtifact(exe);
}
In your source code, you consume these options as a comptime import:
const config = @import("config");
const std = @import("std");
pub fn main() void {
if (config.enable_logging) {
std.debug.print("Logging enabled\n", .{});
}
std.debug.print("Max connections: {d}\n", .{config.max_connections});
}
Build with zig build -Dmax-connections=500 -Denable-logging=false. Because these values are comptime-known, the compiler can eliminate dead branches entirely. If enable_logging is false, the entire logging code path disappears from the binary -- not just "if false, skip" at runtime, but literally removed from the compiled output. This is conditional compilation without preprocessor macros. Just Zig.
Cross-compilation
Cross-compilation in Zig is a first-class feature, not an afterthought. You don't need a separate cross-compilation toolchain. The Zig compiler IS the cross-compiler:
const std = @import("std");
pub fn build(b: *std.Build) void {
const optimize = b.standardOptimizeOption(.{});
// Build for the native platform (default)
const native_exe = b.addExecutable(.{
.name = "app-native",
.root_source_file = b.path("src/main.zig"),
.target = b.standardTargetOptions(.{}),
.optimize = optimize,
});
b.installArtifact(native_exe);
// Also build for Linux x86_64
const linux_exe = b.addExecutable(.{
.name = "app-linux",
.root_source_file = b.path("src/main.zig"),
.target = b.resolveTargetQuery(.{
.cpu_arch = .x86_64,
.os_tag = .linux,
.abi = .gnu,
}),
.optimize = optimize,
});
b.installArtifact(linux_exe);
// And for ARM64 (Apple Silicon, Raspberry Pi 4, etc.)
const arm_exe = b.addExecutable(.{
.name = "app-arm64",
.root_source_file = b.path("src/main.zig"),
.target = b.resolveTargetQuery(.{
.cpu_arch = .aarch64,
.os_tag = .linux,
.abi = .gnu,
}),
.optimize = optimize,
});
b.installArtifact(arm_exe);
}
Run zig build and you get three binaries in zig-out/bin/: one for your machine, one for Linux x86_64, and one for ARM64. Built from the same source, on the same machine, in one command. No Docker. No VM. No cross-compiler installation.
This works because Zig ships its own linker and a bundled C standard library for every supported target. When you write exe.linkLibC() for a Linux ARM64 target, Zig uses its own musl (or glibc, depending on the ABI) rather than requiring you to install ARM64 glibc headers. The result is that cross-compilation Just Works for pure Zig code and for C code that only depends on libc.
From the command line, you can also specify the target directly: zig build -Dtarget=aarch64-linux-gnu -Doptimize=ReleaseSafe. This compiles for ARM64 Linux with safety checks. Try that with CMake ;-)
Multi-binary projects
Real projects often produce more than one binary -- a server and a client, a main application and a utilities tool, a library and a test harness. The build script handles this naturally because each addExecutable or addStaticLibrary call is just another node in the build graph:
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Shared module used by multiple binaries
const core_module = b.addModule("core", .{
.root_source_file = b.path("src/core/lib.zig"),
.target = target,
.optimize = optimize,
});
// Binary 1: the server
const server = b.addExecutable(.{
.name = "server",
.root_source_file = b.path("src/server/main.zig"),
.target = target,
.optimize = optimize,
});
server.root_module.addImport("core", core_module);
b.installArtifact(server);
// Binary 2: the CLI client
const client = b.addExecutable(.{
.name = "client",
.root_source_file = b.path("src/client/main.zig"),
.target = target,
.optimize = optimize,
});
client.root_module.addImport("core", core_module);
b.installArtifact(client);
// Binary 3: an admin tool
const admin = b.addExecutable(.{
.name = "admin-tool",
.root_source_file = b.path("src/admin/main.zig"),
.target = target,
.optimize = optimize,
});
admin.root_module.addImport("core", core_module);
b.installArtifact(admin);
// Test step runs tests for all binaries
const test_step = b.step("test", "Run all tests");
const core_tests = b.addTest(.{
.root_source_file = b.path("src/core/lib.zig"),
.target = target,
.optimize = optimize,
});
test_step.dependOn(&b.addRunArtifact(core_tests).step);
const server_tests = b.addTest(.{
.root_source_file = b.path("src/server/main.zig"),
.target = target,
.optimize = optimize,
});
server_tests.root_module.addImport("core", core_module);
test_step.dependOn(&b.addRunArtifact(server_tests).step);
}
The core_module is shared between all three binaries. Change something in src/core/lib.zig and all three get recompiled. Change something in src/server/main.zig and only the server is recompiled. The build system tracks dependencies at the file level, so incremental builds are efficient even in large multi-binary projects.
This is also where the project structure from ep010 (Project Structure, Modules, and File I/O) comes together. Modules (@import with file paths) organize your source code. The build script declares how those modules are assembled into executables and libraries. The two systems work hand-in-hand: modules define code boundaries, the build system defines compilation boundaries.
The build graph mental model
Here's the key insight that ties everything together: build.zig doesn't DO anything directly. It describes a directed acyclic graph (DAG) of steps. When you run zig build, the build system:
- Calls your pub fn build(b: *std.Build) function
- Identifies the target step (default: the install step, or whatever you pass as zig build <step-name>)
- Walks the dependency graph backwards from that target
- Executes steps in topological order, parallelizing independent steps
This is why the order of calls in build.zig doesn't matter much. You can declare the test step before the executable, or the run step after the install. The dependency graph determines execution order, not the source order. Every step.dependOn(&other_step.step) call adds an edge in this graph.
The zig build command with no arguments runs the default install step. zig build run runs the "run" step (which depends on install, which depends on compilation). zig build test runs the "test" step. You can define as many named steps as you need -- zig build lint, zig build docs, zig build deploy, whatever makes sense for your project.
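To make the named-step idea concrete, here is a minimal sketch of adding a custom step. The "lint" name and the scripts/lint.sh path are hypothetical, chosen just for illustration; b.step() registers the name so zig build lint can target it, and dependOn() adds the edge that causes the command to run:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    b.installArtifact(exe);

    // Hypothetical lint step: shells out to an external script.
    const lint_cmd = b.addSystemCommand(&.{ "sh", "scripts/lint.sh" });

    // Register the named step; `zig build lint` now walks this node's edges.
    const lint_step = b.step("lint", "Run the lint script");
    lint_step.dependOn(&lint_cmd.step);
}
```

Because the lint step shares no edges with the install step, a plain zig build never runs it; only zig build lint does.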
Exercises
1. Create a build.zig that builds two executables from the same src/ directory: app (from src/main.zig) and tool (from src/tool.zig). Both should share a common module from src/common.zig. Add a run-app step and a run-tool step so you can do zig build run-app and zig build run-tool separately. Also add a test step that runs tests from all three files.
2. Add a custom build option -Dlog-level= that accepts an enum value (info, warn, error). Pass it to the source code via addOptions() so that @import("config").log_level is a comptime-known enum value. In main.zig, use a switch on this value to conditionally compile different logging behavior -- the unreachable branches should be eliminated by the compiler.
3. Set up a build.zig.zon and build.zig that pulls in zig-clap (a command-line argument parser) as a dependency. Wire it through b.dependency() and addImport(), then write a main.zig that uses clap to parse --name and --count arguments. Verify that zig build run -- --name hello --count 3 works end to end. (You'll need an internet connection for zig fetch.)
Soooooo, what have we learned?
- build.zig is a Zig program, not a config file. Your build configuration uses the same language features you already know -- conditionals, loops, functions, error handling.
- zig init generates a starter build.zig with standard target/optimize options, an executable step, a run step, and a test step. Understand these default pieces and you can extend the build to do anything.
- Four build modes control the safety/performance tradeoff: Debug (safe + slow), ReleaseSafe (safe + fast), ReleaseFast (unsafe + fastest), ReleaseSmall (unsafe + smallest). Develop in Debug, ship ReleaseSafe unless profiling says otherwise.
- C interop goes through the build system: exe.linkLibC(), exe.linkSystemLibrary("name"), and @cImport in source code. The build system finds and links C libraries; @cImport generates Zig bindings from C headers.
- Package management: declare dependencies in build.zig.zon (content-hashed for reproducibility), fetch with zig fetch --save, consume with b.dependency() and addImport(). No lock file needed -- the content hash IS the lock.
- Custom build steps (code generation, asset processing, pre-build tools) integrate naturally because build steps form a dependency graph that the build system resolves automatically.
- Build options (b.option() + addOptions()) pass comptime-known configuration from the command line to the source code. Dead branches are eliminated entirely -- this is conditional compilation without preprocessor macros.
- Cross-compilation is built in: b.resolveTargetQuery(.{ .cpu_arch = .aarch64, .os_tag = .linux }) builds for ARM64 Linux from any host. No extra toolchains. Zig ships its own linker and bundled C standard library for every target.
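The build-option and cross-compilation points combine in just a few lines. A minimal sketch, assuming a made-up boolean option named enable-tls (passed on the command line as -Denable-tls=true):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Hypothetical option; b.option returns null when not passed, so we default it.
    const enable_tls = b.option(bool, "enable-tls", "Compile with TLS support") orelse false;

    // Cross-compile for ARM64 Linux regardless of host.
    const target = b.resolveTargetQuery(.{ .cpu_arch = .aarch64, .os_tag = .linux });

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = .ReleaseSafe,
    });

    // Expose the option to source code as @import("config").enable_tls,
    // a comptime-known value that lets the compiler drop dead branches.
    const options = b.addOptions();
    options.addOption(bool, "enable_tls", enable_tls);
    exe.root_module.addOptions("config", options);

    b.installArtifact(exe);
}
```

In main.zig, if (@import("config").enable_tls) then compiles like an #ifdef, except it is ordinary Zig code that the compiler type-checks in both branches.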
We've now covered the full compilation pipeline. Next time we're looking at something lower-level -- how Zig handles sentinel-terminated types and C strings. If you've been wondering why [*:0]const u8 has that colon-zero in it, or how Zig's string handling differs from C's null-terminated convention, that's what's coming ;-)