Erik Explores

What Makes the Zig Programming Language Unique?

Zig lets you run code during compilation. What are the implications?

Erik Engheim
Oct 06, 2022
Zig mascot Zero the Ziguana

Compile-time computing was pioneered by the Lisp programming language in the 1960s. It means that the code you compile isn't only what you wrote down; it also includes code that was "written" by your code. Code generating code. While a common feature in dynamically typed languages such as Lisp and Julia, it has been rare in statically typed systems programming languages such as C. C++ has gradually evolved the ability, but often in a limited or clunky fashion.

Zig flips the script and makes compile-time computing one of the most central and well-supported features of the language. That is why Zig is a great language to learn the concept from. In the Zig world, we call compile-time computing comptime, after the keyword used to mark code that must run at compile time or variables that must be known at compile time.
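For a first taste, here is a small example of my own (the fibonacci function is hypothetical, not from the article): prefixing a call with the comptime keyword forces it to be evaluated while compiling, so the result is baked into the binary as a constant.

```zig
const std = @import("std");

// A plain recursive function; nothing about it is comptime-specific.
fn fibonacci(n: u64) u64 {
    if (n < 2) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

pub fn main() !void {
    // Forcing evaluation at compile time: the compiler computes 55
    // here and embeds the constant directly in the executable.
    const fib10 = comptime fibonacci(10);

    const stdout = std.io.getStdOut().writer();
    try stdout.print("fib(10) = {}\n", .{fib10});
}
```

The same function works unchanged at runtime; comptime only changes when it runs.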

The ability to run Zig code during compilation allows Zig developers to write generic code and do metaprogramming without any explicit support for generics or templates.


Let me walk you through some code examples to better explain what the whole idea is and why it matters. Consider the following simple function to find the maximum of two values a and b. Without generics or comptime code, we would need to hardwire such a function to operate on a specific type, such as 32-bit integers, referred to as i32 in Zig.

fn maximum(a: i32, b: i32) i32 {
    var result: i32 = undefined;

    if (a > b) {
        result = a;
    } else {
        result = b;
    }

    return result;
}

Usually, an executable program in Zig will have a main function, just like a C/C++ program. From there we can invoke our maximum function. In the next code example, don't pay too much attention to how we grab stdout or why we need to prefix our print call with the try keyword. The latter has to do with Zig error handling, which we will not cover in this story.

const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();

    const a = 10;
    const b = 5;

    const biggest = maximum(a, b);

    try stdout.print("Max of {} and {} is {}\n", .{ a, b, biggest });
}

Obviously, the solution given is quite limiting. maximum only operates on 32-bit integers. C programmers will be very familiar with this issue; in the C world, preprocessor macros come to the rescue. However, Andrew Kelley designed Zig specifically to avoid relying on C-style macros. In fact, the whole reason Zig exists is that Andrew just wanted to program in C, but without the bad parts, such as macros. comptime sprang into existence precisely to replace C macros.

Let us look at a Zig solution to this problem. We will define a generic maximum function in Zig. The i32 type annotations will be replaced by anytype and @TypeOf(a). At each call site, anytype assumes the type of the argument provided. Keep in mind that we are not dealing with a dynamic programming language. Instead, Zig compiles a different variant of maximum for each distinct set of argument types it is called with. The types of a and b are still determined at compilation time rather than at runtime.

While it is possible to determine the type of an input argument at compile time, doing so for a variable or return type is trickier. You cannot state that the return type is anytype, because the specific type cannot be determined at the call site. Instead, we use a compiler intrinsic, @TypeOf, which runs at compile time to produce the return type. @TypeOf(a) evaluates to the type of the a parameter at compilation time. We use the same trick to specify the type of the result variable.

fn maximum(a: anytype, b: anytype) @TypeOf(a) {
    var result: @TypeOf(a) = undefined;

    if (a > b) {
        result = a;
    } else {
        result = b;
    }

    return result;
}

While this solution is an improvement, it has a number of issues:

  1. Nothing stops you from calling maximum with values which are not numbers.

  2. If b is the larger value, it could contain a value which requires more bits than type @TypeOf(a) can hold.

To check whether a and b are of the right type, we can create a function, run at compile time, which checks if a type is a number. Let us define a function assertNumber with a single parameter T representing a type rather than a value. The parameter definition is preceded by the keyword comptime, telling the compiler that the argument must be known at compile time.

Also notice the switch-case statement. In Zig, switch-case can return values. We switch on the type argument T. If T matches a number type, the switch-case statement returns true, which gets assigned to the is_num variable. Otherwise, we use the else branch to return false by default.

fn assertNumber(comptime T: type) void {
    const is_num = switch (T) {
        i8, i16, i32, i64 => true,
        u8, u16, u32, u64 => true,
        comptime_int, comptime_float => true,
        f16, f32, f64 => true,
        else => false,
    };

    if (!is_num) {
        @compileError("Inputs must be numbers");
    }
}

// testing function
pub fn main() !void {
    assertNumber(bool);
}

Of particular interest in this function definition is the compiler intrinsic @compileError. It is used to report compiler errors to the user. In this code example, I am providing a non-number type as argument to assertNumber, bool to be specific. Should you try to compile this program, you will get the following error message:

assert-number.zig:11:9: error: Inputs must be numbers
        @compileError("Inputs must be numbers");
        ^
assert-number.zig:17:17: note: called from here
    assertNumber(bool);
                ^
assert-number.zig:16:21: note: called from here
pub fn main() !void {

In other words, we can write code in such a way that we can give a helpful error message to the user when they try to compile invalid code.

We can use assertNumber to check the inputs to our maximum function. To make sure the return type is large enough, we will require that both inputs are of the same type.

fn maximum(a: anytype, b: anytype) @TypeOf(a) {
    const A = @TypeOf(a);
    const B = @TypeOf(b);

    assertNumber(A);
    assertNumber(B);

    var result: @TypeOf(a) = undefined;

    if (A != B) {
        @compileError("Inputs must be of the same type");
    }

    if (a > b) {
        result = a;
    } else {
        result = b;
    }

    return result;
}

When maximum is invoked at runtime, all the compile-time code has already run and been replaced with its results.
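To see the check in action, here is a hypothetical usage sketch (assertNumber is omitted for brevity): calling maximum with matching types compiles fine, while mixing types trips the @compileError.

```zig
const std = @import("std");

// maximum as above, reduced to the same-type check.
fn maximum(a: anytype, b: anytype) @TypeOf(a) {
    if (@TypeOf(a) != @TypeOf(b)) {
        @compileError("Inputs must be of the same type");
    }
    return if (a > b) a else b;
}

pub fn main() !void {
    const x: i32 = 10;
    const y: i32 = 5;

    // Compiles: both arguments are i32.
    std.debug.print("{}\n", .{maximum(x, y)});

    // Uncommenting the next line fails at compile time with
    // "Inputs must be of the same type":
    // _ = maximum(@as(i8, 1), @as(i32, 2));
}
```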

The current solution doesn't solve all problems with our initial naive solution. We are forced to make a and b parameters of the same type. What if we want to allow both a signed 8-bit and signed 32-bit integer argument? In Zig, that would be arguments of type i8 and i32. In this case we must ensure that the return type is i32. Our current solution doesn't do that. What we need is a function that runs at compile time comparing the types of a and b and which returns the type with the highest bit-length.

To achieve this, we will make a number of functions:

  • nbits function to figure out the number of bits in a type T

  • largestType function to pick largest of two types A and B

Notice in the next code example how we mark the type arguments with comptime to tell Zig that these inputs must be known at compilation time. We use the @typeInfo compiler intrinsic, which at compile time returns a composite object, info, describing a type: Is the type signed or unsigned? How many bits are used to represent it?

fn nbits(comptime T: type) i8 {
    return switch (@typeInfo(T)) {
        .Float => |info| info.bits,
        .Int => |info| info.bits,
        else => 64,
    };
}

fn largestType(comptime A: type, comptime B: type) type {
    if (nbits(A) > nbits(B)) {
        return A;
    } else {
        return B;
    }
}

fn maximum(a: anytype, b: anytype) largestType(@TypeOf(a),
                                               @TypeOf(b)) {
    // The result must use the widest of the two types. With
    // @TypeOf(a) this would fail to compile whenever b has the
    // wider type, since Zig won't silently narrow b to fit.
    var result: largestType(@TypeOf(a), @TypeOf(b)) = undefined;

    if (a > b) {
        result = a;
    } else {
        result = b;
    }

    return result;
}

The switch-statement in the code example above may not be entirely obvious, so let me clarify. The value returned from @typeInfo(T) is of type std.builtin.TypeInfo, which is a union type. Union types are a bit like structs: they have multiple fields, but these fields share memory. Hence, we need to figure out which field is actually in use. The switch-case lets us determine whether the .Int or .Float field is currently active. The |info| syntax is how Zig unwraps values; in this case we are unwrapping the struct describing the type.

The info object will be of type either TypeInfo.Int or TypeInfo.Float; however, both struct types have a bits field.

In our revised maximum function, we don't spell out the return type explicitly. Instead, we call the largestType function, which returns the type we want as the return type of maximum. I know this sounds really funky, but it works because the Zig compiler can determine that the largestType call relies only on information known at compile time. The compiler will create multiple variants of maximum, one for each distinct way it is invoked; each variant is compiled with its own input and output types.
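A self-contained sketch showing a mixed-width call (the concrete values are my own; the code follows the article's pre-0.14 Zig style with capitalized .Int and .Float tags):

```zig
const std = @import("std");

fn nbits(comptime T: type) i8 {
    return switch (@typeInfo(T)) {
        .Float => |info| info.bits,
        .Int => |info| info.bits,
        else => 64,
    };
}

fn largestType(comptime A: type, comptime B: type) type {
    return if (nbits(A) > nbits(B)) A else B;
}

fn maximum(a: anytype, b: anytype) largestType(@TypeOf(a), @TypeOf(b)) {
    var result: largestType(@TypeOf(a), @TypeOf(b)) = undefined;
    if (a > b) {
        result = a;
    } else {
        result = b;
    }
    return result;
}

pub fn main() !void {
    const small: i8 = 100;
    const big: i32 = 100_000;

    // largestType(i8, i32) resolves to i32 at compile time,
    // so biggest is an i32 even though a was an i8.
    const biggest = maximum(small, big);
    std.debug.print("max = {}\n", .{biggest});
}
```

The comparison a > b works across the two integer types because Zig widens the narrower operand, and assigning the i8 argument into an i32 result is likewise a safe, implicit widening.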

Using Compile Time Code to Implement Generics

To demonstrate how powerful Zig comptime is, I will show you how it can be used to implement generics. Here we implement a minimum function which looks more familiar to developers accustomed to generics or template-based programming. A key difference is that the type argument T is provided as a regular argument. C++, Java and C# developers would invoke this function by writing something like minimum<i8>(x, y), while Zig developers write minimum(i8, x, y).

fn minimum(comptime T: type, a: T, b: T) T {
    assertNumber(T);

    var result: T = undefined;
    if (a < b) {
        result = a;
    } else {
        result = b;
    }

    return result;
} 

In languages such as C++, Java, C# and Swift, the compiler can usually infer the type parameter from the input arguments. With Zig, such type inference isn't possible here, because the parameter T is supplied as a regular argument and thus cannot get special treatment. While this limitation is a downside of comptime compared to generics, the benefit is that comptime is more flexible in how you use it.
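Here is what calling minimum looks like in practice, in a hypothetical standalone sketch (assertNumber is omitted for brevity):

```zig
const std = @import("std");

// minimum as above, with the type passed as an ordinary argument.
fn minimum(comptime T: type, a: T, b: T) T {
    return if (a < b) a else b;
}

pub fn main() !void {
    // The type parameter is spelled out explicitly at every call site.
    const m = minimum(i8, 3, 7);
    std.debug.print("{}\n", .{m});

    // Floats work the same way; only the first argument changes.
    const f = minimum(f32, 2.5, 1.5);
    std.debug.print("{d}\n", .{f});
}
```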

We can use comptime code to define generic types. I will demonstrate with a simple 2D vector type used to represent things like force, velocity, or position.
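As a rough sketch of the idea (the names Vec2 and add here are my own, not necessarily what the full article uses): a function that takes a comptime type parameter can return a struct type, and that is how Zig expresses generic types.

```zig
const std = @import("std");

// A function that returns a type. Vec2(f32) and Vec2(f64) are
// distinct structs, each compiled on demand at the call site.
fn Vec2(comptime T: type) type {
    return struct {
        x: T,
        y: T,

        const Self = @This();

        fn add(self: Self, other: Self) Self {
            return .{ .x = self.x + other.x, .y = self.y + other.y };
        }
    };
}

pub fn main() !void {
    const velocity = Vec2(f32){ .x = 1.5, .y = 2.0 };
    const wind = Vec2(f32){ .x = 0.5, .y = -1.0 };

    const total = velocity.add(wind);
    std.debug.print("({d}, {d})\n", .{ total.x, total.y });
}
```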
