I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
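To make that concrete, here is a minimal sketch (the names greet and Dog are made up) of reflection standing in for an interface: a compile-time check that a type provides what the function needs.

const std = @import("std");

fn greet(thing: anytype) void {
    const T = @TypeOf(thing);
    // Reflection in place of an interface: reject types without a
    // `name` declaration at compile time, with a readable message.
    if (comptime !@hasDecl(T, "name")) {
        @compileError(@typeName(T) ++ " has no 'name' declaration");
    }
    std.debug.print("hello, {s}\n", .{thing.name()});
}

const Dog = struct {
    pub fn name(_: Dog) []const u8 {
        return "Rex";
    }
};

pub fn main() void {
    greet(Dog{});
}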
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction they need for smooth program evolution. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
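A minimal sketch of the pattern (std.heap.ArenaAllocator is the real type; the workload here is made up): allocate freely during a phase of work, then free everything in one call.

const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // frees every allocation made below, at once

    const allocator = arena.allocator();
    const nums = try allocator.alloc(u32, 100);
    for (nums, 0..) |*n, i| n.* = @intCast(i);
    std.debug.print("last = {d}\n", .{nums[nums.len - 1]});
}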
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
I like languages that dare to try to do more with less. Zig's comptime, especially the way it supplants generics, is pretty darn awesome.
I was having a similar feeling with Elixir the other day, when I realized that I could build every single standard IPC mechanism that you might find in something like Python's threading module (Queue, Lock, RLock, Condition, Barrier, etc.) with the Erlang/BEAM process mailbox.
IMHO "clearly better" might be a matter of perspective; my impression is that this is one of those things where the different approaches buy you different tradeoffs. For example, by my understanding Rust's generics allows generic functions to be completely typechecked in isolation at the definition site, whereas Zig's comptime is more like C++ templates in that type checking can only be completed upon instantiation. I believe the capabilities of Rust's macros aren't quite the same as those for Zig's comptime - Rust's macros operate on syntax, so they can pull off transformations (e.g., #[derive], completely different syntax, etc.) that Zig's comptime can't (though that's not to say that Zig doesn't have its own solutions).
Of course, different people can and will disagree on which tradeoff is more worth it. There's certainly appeal on both sides here.
It's possible that something similar might be the right path for metaprogramming. Rust's generics are simple and weaker than Zig's comptime, while proc macros are complicated and stronger than Zig's comptime.
So I think the jury's still out on whether Rust's metaprogramming is "better" than Zig's.
Every language at scale needs a preprocessor (look at the “use server” and “use gpu” silliness happening in TS) - why is it not the same as the language you use?
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as Python or JavaScript.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
#!/usr/bin/env rdmd
[D code]
and run it as if it were an executable. (The compilation is cached so it runs just as fast on subsequent runs.)
What does this mean?
For example (you can pick another example if you want), how is C++'s std::vector less abstract than Java's ArrayList?
I've described this in the past as languages being "too general purpose" or too "multi-paradigm". Languages like Scala that try to be Haskell and Java in one.
> I have no idea or prediction on whether Zig will become popular
I think LLMs may be able to assist in moving large C codebases to Zig in the next decade. Once zig cc compiles the C Linux codebase, it can be ported to Zig bit by bit (LLM-assisted). This is not soon, but I think it will be its killer feature.
I don't mind if Linux becomes Rust+Zig codebase in, say, 10y from now. :)
pip install ziglang
Which means you don't even have to install it separately to try it out via uvx. If you have uv installed already, try this:

cd /tmp
echo '#include <stdio.h>
int main() {
    printf("Hello, World!");
    return 0;
}' > hello.c
uvx --from ziglang python-zig cc /tmp/hello.c
./a.out

You could go further like in this case, and use wheels + PyPI for something unrelated to Python.
Or I should say it was useful as a distribution method, because most people had Python already available. Since most distros now don't allow you to install stuff outside a venv, you need uv to install things (via `uv tool install`), and we're not yet at the point where most people already have uv installed.
I know some of it has already happened with rust, but perhaps there’s a broader reckoning that needs to occur here wrt standards around how language-specific build and packaging systems handle cross-language projects… which could well point to phasing those out in favour of nix or pixi, which are designed from the get-go to support this use case.
Usually arbitrary binaries stuffed in Python wheels are mostly self contained single binaries and such, with as little dynamic linking nonsense as possible, so they don't break all the time, or have dependency conflicts.
It seems to consistently work really well for binaries, although it would be nice to have first class support for integrating npm packages.
This is what I've started doing for every library I use. I go to their Github, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
During the last year I have been observing how MCP, tools, and agents have reduced the amount of language-specific code we used to write.
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
https://dlang.org/spec/function.html#interpretation
It doesn't need a keyword to trigger it. Any expression that is a const-expression in the grammar triggers it.
The parent comment acknowledges that compile time execution is not new. There is little in Zig that is, broad strokes, entirely new. It is in the specifics of the design that I find Zig's ergonomics to be differentiated. It is my understanding that D's compile time function execution is significantly different from Zig's comptime.
Mostly, this is in what Zig doesn't have as a specific feature, but uses comptime for. For generics, D has templates, Zig has functions which take types and return types. D has conditional compilation (version keyword), while Zig just has if statements. D has template mixins, Zig trusts comptime to have 90% of the power for 10% of the headache. The power of comptime is commonly demonstrated, but I find the limitations to be just as important.
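A small sketch of that last point, using the standard builtin module; the untaken branch of an ordinary if on a comptime-known value is simply discarded:

const std = @import("std");
const builtin = @import("builtin");

pub fn main() void {
    // builtin.os.tag is comptime-known, so this plays the role of
    // D's version blocks or C's #ifdef.
    if (builtin.os.tag == .windows) {
        std.debug.print("windows build\n", .{});
    } else {
        std.debug.print("non-windows build\n", .{});
    }
}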
A difference I am uncertain about is whether there's any D equivalent for Zig having types as expressions. You can, for example, calculate what the return type should be given the type of an argument.
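Something like this is ordinary Zig (a minimal sketch; Widened and widen are made-up names):

const std = @import("std");

// The return type is an expression computed from the argument's type:
// widen maps u8 -> u16, u16 -> u32, and so on.
fn Widened(comptime T: type) type {
    return std.meta.Int(.unsigned, @bitSizeOf(T) * 2);
}

fn widen(x: anytype) Widened(@TypeOf(x)) {
    return x;
}

pub fn main() void {
    const y = widen(@as(u8, 200));
    std.debug.print("{s} = {d}\n", .{ @typeName(@TypeOf(y)), y });
}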
Is this a fair assessment?
This is done in D using templates. For example, to turn a type T into a type T*:
template toPtr(T) { alias toPtr = T*; } // define template
toPtr!int p; // instantiate template
pragma(msg, "the type of p is: ", typeof(p));
The compiler will deduce the correct return type for a function by specifying auto as the return type:

auto toPtr(int i) { return cast(float)i; } // returns float
For conditional compilation at compile time, D has static if:

enum x = square(3); // evaluated at compile time
static if (x == 4)
int j;
else
double j;
auto k = j;
Note that the static if does not introduce a new scope, so conditional declarations will work. The version construct is similar, but is intended for module-wide versions, such as:
version (OSX)
{ stuff for OSX }
else version (Win64)
{ stuff for Windows 64 }
else
static assert(0, "unsupported OS");
Compile time execution is triggered wherever a const-expression is required. A keyword would be redundant.

D's mixins are for generating code, which is D's answer to general-purpose text macros. Running code at compile time enables those strings to be generated. The mixins and compile time execution are not the same feature. For a trivial example:
string cat(string x, string y) { return x ~ "," ~ y; }
string s = mixin(cat("hello", "betty")); // runs cat at compile time
writeln(s); // prints: hello,betty
I'll be happy to answer any further questions.

For example, Zig has a function ArrayHashMapWithAllocator which returns, well, a hash table type in a fairly modern style, no separate chaining and so on.
Not an instance of that type: it returns the type itself. The type didn't exist; we called the function; now it does exist, at compile time (because clearly we can't go around making new types at runtime in this sort of language).
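The shape of such a function, as a minimal made-up sketch (Pair standing in for the much bigger real hash map):

const std = @import("std");

// Pair(u32) does not exist until this call is evaluated at compile
// time; afterwards it is an ordinary type like any other.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn sum(self: @This()) T {
            return self.first + self.second;
        }
    };
}

pub fn main() void {
    const p = Pair(u32){ .first = 1, .second = 2 };
    std.debug.print("{d}\n", .{p.sum()});
}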
The issue with mixins is that using string concatenation to build types on the fly isn't the greatest debugging experience, as there is only printf debugging available for them.
Pointing out that other languages have used partial evaluation, sometimes even in ways that somewhat overlap with Zig's use, completely misses the point. It's at least as misplaced as saying that there was nothing new or special about iPhone's no-buttons design because touch screens had existed since the sixties.
If you think Zig's comptime is just about running some computations at compile time, you should take a closer look.
> But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.
MrWhite wanted to know an example of Zig's comptime that is not merely a "macro", but rather its usage as a replacement for other features (I guess something more complex...)
PS: just interested in Zig, I'd like some pointers to these cool features :)
In addition, there's the classic example of implementing a parameterised print (think printf) in Zig. This is a very basic use of comptime, and it isn't used here in lieu of generics or of interfaces, but while there may be some language that can do that without any kind of explicit code generation (e.g. macros), there certainly aren't many such examples: https://ziglang.org/documentation/0.15.2/#Case-Study-print-i...
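To give a flavor of the mechanism (a toy sketch, not the stdlib's implementation): inline for unrolls over a comptime-known tuple, so the body is specialized for each argument's type.

const std = @import("std");

fn printAll(args: anytype) void {
    // args is a tuple; the loop is unrolled at compile time, and each
    // iteration is type-checked against that argument's actual type.
    inline for (args) |arg| {
        std.debug.print("{any} ", .{arg});
    }
    std.debug.print("\n", .{});
}

pub fn main() void {
    printAll(.{ 1, "abc", 4.0, 'c' });
}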
But the main point is that the unprecedented use of partial evaluation is in having a single unified mechanism that replaces generics, interfaces, and macros. If a language has any one of them as a distinct feature, then it is not using partial evaluation as Zig does. To continue my analogy to the novel use of a touchscreen in the iPhone, the simplest test was: if your phone had a physical keypad or keyboard, then it did not use a touchscreen the way the iPhone did.
write(1,2,"abc",4.0,'c');
write is declared as:

void write(S...)(S args) { ... }
where `S...` means an arbitrary sequence of types represented by `S`. The implementation loops over the sequence, handling each type in its own individual fashion. User-defined types work as well.

It's like saying the iPhone design wasn't novel except for the fact that prior art all had a keypad. But the design was novel in that it was intended to eliminate the keypad. Zig's comptime feature is novel in that it exists to eliminate interfaces, generics, and macros, and you're bringing up a language that eliminates none of them.
So D clearly isn't an example, but perhaps there's some other language I haven't heard of. Just out of curiosity, can a printf in D not only check types at compile time but also generate formatting code while still allowing for runtime variables and without (!!!) the use of string mixins? Like I said, it's possible there's precedent for that (even though it isn't the distinguishing feature), and I wonder if D is that. I'm asking because examples I've seen in D either do use string mixins or do not actually do what the Zig implementation does.
It's like how the novelty of the iPhone's touchscreen design was in not having a keypad, or that the novelty of the spork wasn't in inventing the functionality of either the spoon or the fork, but in having a single utensil that performs both. The more important aspect isn't the functionality but the design. I'm not saying you need to like any of these designs, but they are novel.
Saying that you could have a similar functionality by other means misses the point as much as saying that there's nothing special about a spork because if you have a spoon and a fork, then you have the same functionality. But you still don't have a spork.
You could, then, ask what the point of the novel design is. Well, in some languages you have generics and interfaces and compile-time expressions, but because none of these is general and powerful enough, you also have macros. Macros are very powerful - perhaps too powerful - but they are difficult to understand, so they're used sparingly even if they can subsume other functionality.
Zig has shown that you can do almost anything you would reasonably want to do with macros with partial evaluation that has access to reflection. That wasn't obvious at all. And because that feature was not only powerful enough to subsume other features and make them redundant, but also very simple and easy to understand, it ended up with a design that is both minimal and easy to read (which is important for code reviews) but also highly expressive. Again, you don't have to like this rather minimalistic design, but it is novel.
Please show an example of Zig partial evaluation.
Again, the novel use of partial evaluation in Zig is that it eliminates generics, interfaces, and macros. Any language that has one or more of these features does not have this novel design.
I mean a simple example. Just to illustrate the concept. Like the examples I provided here:
Mostly what I think is that the syntax is more complex, with less utility, than the equivalent D syntax. For example, the use of the 'comptime' keyword is not necessary. For another, the import declaration is overly complex.
I don't know enough about Zig to make informed suggestions on evolving it. D has borrowed stuff from many languages, but I don't recall suggestions in the D forums of a Zig feature that should be added to D, though I might have missed it.
Constant-folding just got watered down by the many dynamic evangelists in the decades after, so much so that even C or C++ didn't enforce it properly. In perl5 it was watered down on add (+) by some hilariously wrong argumentation back then. So you could precompute const multiplication expressions, but not addition.
The same is true for templates, or macros—all of which are distinguished by being computed in a single pass (you don’t have to think about them later, or worry about their execution being interleaved with the rest of the program), before runtime start (meaning that certain language capabilities like IO aren’t available, simplifying reasoning). Those two properties are key to comptime’s value and are not provided by perl5’s BEGIN blocks—or probably even possible at all in the language, given that it has eval and runtime require.
When you want to use state at run time, like opening a file, use INIT blocks instead. These are executed before runtime, after compile time.
My perl compiler dumps the state of the program after compile-time. So everything executed in BEGIN blocks is already evaluated. Opening a file in BEGIN would not open it later when required at run-time, and compile-time is separated from run-time. All BEGIN state is constant-folded.
I know who you are, and am sure everything you say about the mechanisms of BEGIN is correct, but when I refer to “compile time”, I’m referring to something that happens before my program runs. Perl5’s compilation happens the first time a module is required, which may happen at runtime.
Perhaps there’s a different word for what we’re discussing here: one of the primary benefits of comptime and similar tools is that they are completed before the program starts. Scripting languages like perl5 “compile” (really: load code into in-memory intermediate data structures to be interpreted) at arbitrary points during runtime (require/use, eval, do-on-code).
On the other hand, while code in C/Zig/etc. is sometimes loaded at runtime (e.g. via dlopen(3)), its compile-time evaluation is always done before program start.
That “it completed before my code runs at all” property is really important for locality of behavior/reasoning. If the comptime/evaluation step is included in the runtime-code-load step, then your comptime code needs to be vastly more concerned with its environment, and code loading your modules has to be vastly more concerned with the side effects of the import system.
(I guess that doesn’t hold if you’re shelling out to compile code generated dynamically from runtime inputs and then dlopen-ing that, but that’s objectively insane and hopefully incredibly rare.)
But I would not put comptime forward as some sort of magical invention. It's still just a newish take on metaprogramming; we have had that since forever. From my minimal time with Zig, I kind of think of comptime as a better version of C++ templates.
That said, Zig is possibly a better alternative to C++, but not that exciting for me. I kind of don't get why so many think it's the holy grail; first it was Rust, and now Zig.
Rust's borrow checker isn't unique either but was inspired by Cyclone: https://en.wikipedia.org/wiki/Cyclone_(programming_language)
IMHO a programming language doesn't need a single USP, it just needs to include good existing ideas and (more importantly) exclude bad existing ideas (of course what's actually a good and bad idea is highly subjective, that's why we need many programming languages, not few).
The reason is the clickbait title.
> yours has zero value
Yours didn't bring much as well, so I suppose value isn't strictly required.
Yeah, I know nothing about Zig, and was excited by the author's opening statement that Zig is the most surprising language he has encountered in a 45 yr software career...
But this is then immediately followed by saying that ability to compile C code, and to cross-compile, are the most incredible parts of it, which is when I immediately lost interest. Having a built-in C compiler is certainly novel, and perhaps convenient for inter-op, but if the value goes significantly beyond that then the author is failing to communicate that.
I'd say the same thing about Rust. I find it the best way to express what code should run at any given point in the program, and the design is freakin interstellar: it is basically a "query engine" where you write a query of some code against the entire available "code space", including the root crate and its dependencies. Once you understand that, programming becomes naming bits of code and then writing queries for the ones you wish to execute.
Powerful macros that generate code that then gets compiled =)
It has been several decades since putting a slash between these two made sense, lumping them together like this. It would be similar to saying something like Java/Scala or ObjectiveC/Swift. These are completely different languages.
Indeed you see those for Java/Scala and Objective-C/Swift in technical books and job adverts.
Any search on the careers sites or documentation of companies that have seats at ISO, or sell/develop C and C++ compilers, turns up such C/C++ references in a couple of places.
Do you need any example?
It is a bikeshedding discussion that doesn't help with anything, given the lack of security in C, or the legions of folks that keep using C data types in C++, including bare-bones null-terminated strings and plain arrays instead of collection types with bounds checking enabled.
Even better, all heap allocations should be done via ownership types.
Calling into malloc() is writing C in C++, and should only be used for backwards compatibility with existing C code.
Additionally there is no requirement on the C++ standard that new and delete call into malloc()/free(), that is usually done as a matter of convenience, as all C++ compilers are also C compilers.
And this is exactly the stance I am arguing against. C++ is not the newer version of C. It forked off at some point and is a quite different language now.
One of the reasons I use malloc is compatibility with C. It is not for backward compatibility, because the C code is newer. In fact I actively change the code, when it needs a rewrite anyway, from C++ to C.
The other reason for using it even when writing C++ is that new alone doesn't let you allocate without also calling the constructor. For that I call malloc first and then invoke the constructor with placement new. For deallocating I call the destructor and then free. This also has the additional benefit that your constructor and destructor implementations can fail and you can roll them back.
Finally, people like to argue between C and C++ when it is convenient to do so, yet the compiler language switches to use C extensions in C++ mode keep being used across many projects.
What do you mean? I don't think I can follow you.
> yet the compiler language switches to use C extensions in C++ mode keep being used across many projects.
When you use compiler extensions, that just happen to be both available in C and C++, I wouldn't say you are writing C in C++, I mean the extension isn't standard C either.
Code written in C++ has different semantics, even when it is word-for-word the same as C code. They ARE different languages.
That is what writing proper modern C++ is all about, anything else is writing C in C++.
Null terminated strings with pointer arithmetic instead of std::string and string_view, pointer arithmetic instead of std::span, bare pointer arrays instead of std::array and std::vector, C style casts,....
That claim is yours, and I do not agree with it. C++ that does not fit your taste of modern C++ does not suddenly become C; it is likely a syntax error in C, and when it compiles it has a different meaning. Code that may look to you like C in C++ has C++ semantics, which differ from C semantics.
The pedantic folks that jump off their chairs when seeing something all companies that pay WG21 salaries use in their docs?
If only they would advocate for writing safer code with the same energy, instead of discussing nonsense.
That is why the C and C++ communities are looked down on by security folks and governments.
Why the blog has a section on how to install it on the PATH is also very puzzling.
By the way, so does everyone using neovim.
See also Roblox (and there used to be a whole bunch of game engines that had Lua scripting but I -think- most of them have switched to "NIH!" scripting engines?)
I still write about C anyway. It may not trend, but it lasts.
I spent a substantial fraction of my professional career writing C, and I remain interested in WG14 (the language committee) and in several projects written in C though I avoid writing any more of it myself.
The reason it's so widespread is called "Worse is Better" and I believe that has somewhat run its course. If you weren't aware of "Worse is better" a quick Google should find you the original essay on that topic years back.
In contrast when I read an article about say Zig, or Swift, I am more likely to learn something new.
But I can certainly endorse your choice to write about whatever you want - life is too short to try to get a high score somehow.
Maybe I am biased, but for professional work, I stay with Go. I have built large distributed data systems that handle hundreds of millions of business transactions daily, and Go has been steady and reliable for that scale. Its simplicity, strong concurrency model, and easy deployment make it practical for production systems. I still enjoy exploring Zig and Rust in my spare time, but for shipping real systems, Go continues to get the job done without getting in the way.
> I have never deployed any production C code and I would not choose C for professional work either
What do you write about C, if not for practical usage in the industry? Can you post some links?
FWIW, since you seem interested, here are some blog posts of mine specifically about practical usage of C, some of which got a little discussion here on HN in the past:
https://www.lelanthran.com/chap13/content.html
If you have some spare time, I would really like to hear more about your experiences. It sounds like you have worked with C for a long time, and that kind of insight is hard to find now.
Most people around me started with JavaScript or TypeScript as their first language, and for many, that is still all they know. I mean no disrespect, it is just how things are today. It would be great to hear how your view of programming has changed over the years and what lessons from C still matter in your work today.
I've already replied to you in a sibling post, but I have been writing in C since the mid-90s; there's really not that much insight you get that is specific to C.
Am I old? I am 31, and I started with C around age 14 (writing mods for ioquake3 forks), been my most used programming language ever since.
Articles about C never get much traffic, but that is fine. I wrote it because I care about how things really work, not because I expect it to trend. If even a few people read it and see the beauty in the old language that still runs the world, that is enough.
I hope next month I will have more time to write deep dives into the internals of SQLite, PostgreSQL, Redis and maybe curl, all written in C.
what gets me personally is what you describe at https://github.com/little-book-of/c/blob/main/articles/zig-i... - Zig is made to feel easy and modern for people who don't know any better, and it does this well. But as soon as you actually need to do complex stuff, it gets in the way more so than C and its current environment/ecosystem will.
And to be fair, as much as I enjoyed writing in C in my younger years - I only use C when I actually need C. And asm when I actually need asm. Most of my code now uses higher level languages - this puts zig into such a niche.. it feels like golang to me: the cool language that isn't really solving as much of a need as you'd think.
- pointers to bitfields
- checked bitshifts
- small ints like u4
- imperative array initialization blocks
- test code blocks
- equivalent of the `debugger;` keyword from JS
- some vague stuff about what you're able to do at compile time
The rest is pretty generic.

I would love to see Rust get compile-time execution that is as capable as Zig or C++20.
Isn't that just macros which are way more powerful than what Zig or C++20 offer?
Can you give examples? Const functions are pretty capable in Rust already.
The "how to modify an environment variable" bit and the bin-dec-hex table made me feel the same way. Then I saw the part explaining how to check for duplicates in a row... I'm struggling to understand the point of the article. Testing a text generator?
I feel what is missing is how each feature is so cool compared to other languages.
As a language nerd, I find Zig's syntax just so cool. It doesn't feel the need to adhere to any conventions and seems to solve problems in the most direct and simple way.
An example of this is declaring a label versus referring to one: by moving the colon to one end of the name or the other, it is instantly clear which form you are looking at.
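A quick sketch:

pub fn main() void {
    // Declaring a label: the colon follows the name.
    outer: for (0..10) |i| {
        for (0..10) |j| {
            // Referring to a label: the colon precedes the name.
            if (i * j > 20) break :outer;
        }
    }
}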
And then there are the runtime promises, such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
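For instance (a minimal sketch; the file is created just for illustration):

const std = @import("std");

pub fn main() !void {
    const file = try std.fs.cwd().createFile("example.txt", .{});
    defer file.close(); // cleanup is visible on the page, and runs on every exit path

    try file.writeAll("hello\n");
}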
Finally there is comptime. No need to learn another macro syntax; it's just more Zig, running during compilation.
https://matklad.github.io/2025/08/09/zigs-lovely-syntax.html
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
That's been my exact experience too. I was surprised how fast I felt confident in writing zig code. I only started using it a month ago, and already I've made it to 5000 lines in a custom tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures on L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
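As a hedged sketch of that enum-to-lookup-table trick (Color and color_names are made up, and much shorter than a real use):

const std = @import("std");

const Color = enum { red, green, blue };

// A lookup table computed once, at compile time, from the enum's
// fields via reflection.
const color_names: [std.meta.fields(Color).len][]const u8 = blk: {
    var table: [std.meta.fields(Color).len][]const u8 = undefined;
    for (std.meta.fields(Color), 0..) |field, i| {
        table[i] = field.name;
    }
    break :blk table;
};

pub fn main() void {
    std.debug.print("{s}\n", .{color_names[@intFromEnum(Color.green)]});
}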
There's a certain beauty in only having to know 1~2 loops/iteration concepts compared to 4~5 in modern multi paradigm languages(various forms of loops, multiple shapes of LINQ, the functional stuff etc).
Skipping other minor changes.
However I do agree C# is adding too much stuff; the team seems to be trying to justify their existence.
My experience with Golang so far is biased because I only recently looked at it; for the past decade I have been working mostly in Java and C#, so most of those newly added features in Golang are things I'm already deeply familiar with conceptually.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that dispatches only on the type of one special argument (self, this, etc.).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
The tooling makes it easy to tell which version of a method you're using, though that's rarely an issue in practice. And the fact that methods are open to extension makes it really easy to fix occasional upstream bugs where the equivalent has to wait for a library maintainer in Python.
500kloc Julia over 4 years, so not a huge codebase, but not trivial either.
What Ada (and Rust) calls generics is very different -- it is like template functions in C++.
In those languages the version of the function that is selected is based on the declared type of the arguments.
In CLOS, Dylan, Julia the version of the function that is selected is based on the runtime type of the actual arguments.
Here's an example in Dylan that you can't do in Ada / Rust / C++ / Java.
define method fib(n) fib(n-1) + fib(n-2) end;
define method fib(n == 0) 0 end;
define method fib(n == 1) 1 end;
The `n == 1` is actually syntactic sugar for the type declaration `n :: singleton(1)`.

The Julia version is slightly more complex.
fib(n) = fib(Val(n))
fib(::Val{n}) where {n} = fib(n-1) + fib(n-2)
fib(::Val{0}) = 0
fib(::Val{1}) = 1
println(fib(30))
This is perhaps a crazy way to write `fib()` instead of a conventional `if/then/else` or `?:` or a switch with a default case, but kinda fun :-)

This of course is just a function with a single argument, but you can do the same thing across multiple arguments.
define method ack(m, n) ack(m-1, ack(m, n-1)) end;
define method ack(m == 0, n) n+1 end;
define method ack(m, n == 0) ack(m-1, 1) end;

As you can see from my comment history, I am quite aware of CLOS, Lisp variants and Dylan.
>Programming with it is magical, and its a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Can you give one or two examples? And why is programming with it magical?
Because methods aren't "inside" objects, but just look like functions taking (references to) structs, you can add your own methods to someone else's types.
It's really hard to give a concise example that doesn't look artificial, because it's really a feature for large code bases.
Here's a tutorial example for Julia
The need for this jumped out at me during Writergate. People had a lot of trouble understanding exactly how all the pieces fit together, and there was no good place to document that. The documentation (or the code people went to to understand it) was always on an implementation. Having an interface would have given Zig a place to hang the Reader/Writer documentation and allowed a quick way for people to understand the expectations it places on implementations without further complications.
For Zig, I don't even want it to automatically handle the vtable like other languages...I'm comfortable with the way people implement different kinds of dynamic dispatch now. All I want is a type-level construct that describes what fields/functions a struct has and nothing else. No effect on runtime data or automatic upcasting or anything. Just a way to say "if this looks like this, it can be considered this type."
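For reference, a sketch of the hand-rolled dispatch pattern being referred to (all names hypothetical; std.mem.Allocator is assembled in roughly this fashion):

const std = @import("std");

// A "fat pointer": opaque context plus an explicit function pointer.
const Sink = struct {
    ctx: *anyopaque,
    writeFn: *const fn (ctx: *anyopaque, bytes: []const u8) void,

    fn write(self: Sink, bytes: []const u8) void {
        self.writeFn(self.ctx, bytes);
    }
};

const Counter = struct {
    count: usize = 0,

    fn writeImpl(ctx: *anyopaque, bytes: []const u8) void {
        const self: *Counter = @ptrCast(@alignCast(ctx));
        self.count += bytes.len;
    }

    fn sink(self: *Counter) Sink {
        return .{ .ctx = self, .writeFn = writeImpl };
    }
};

pub fn main() void {
    var c = Counter{};
    c.sink().write("hello");
    std.debug.print("wrote {d} bytes\n", .{c.count});
}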
I expect the argument is that it's unnecessary. Technically, it is. But Zig's biggest weakness compared to other languages is that all the abstractions have to be in the programmer's head rather than encoded in the program. This greatly hampers people's ability to jump into a new codebase and help themselves. IMO this is all that's needed to remedy that without complicating everything.
You can see how much organizational power this has by looking at the docs for Go's standard library. Ignore how Go's runtime does all the work for you...think more about how it helps make the _intent_ behind the code clear.
While I don't wholly agree with all choices made by Andrew and the Zig team, I greatly appreciate the care with which they develop features. The slow pace of deliberating over features, refining them, and removing unnecessary ones seems in sharp contrast to the development of any other language I'm aware of. I'm no language historian though, happy to be challenged.
Disclaimer: I would like to see Zig and other new languages become viable alternatives to C++ in gamedev. But I understand that it might happen way after I retire =)
A great degree of occasional criticism of Clojure typically comes from brief exposure - either from not using structural editing idioms, misunderstanding REPL-driven workflows, or confusion around "the type system". Of course, Clojure being dynamically/strongly typed, doesn't really have a type system in the traditional sense, yet it has mechanisms that provide type-like guarantees, and from pragmatic point-of-view those instruments are incredibly robust.
Not that it’s a bad thing. Python removes stuff, and it takes time to upgrade to new versions.
This just shows that you weren't around for pre-1.0 Rust. Back then Rust was infamous for the language making breaking changes every week. Check out this issue from 2013 tracking support for features which were deprecated but had yet to be removed from the compiler: https://github.com/rust-lang/rust/issues/4707 , and that's just a single snapshot from one moment in Rust's prehistory.
Try making a similar change between version 5.0 and 6.0, with hundreds of thousands of existing users, programs, packages and frameworks that all have to be updated. (Yes, also the users who have to learn the new thing.)
Let me guess: they didn't, and now there is a third-party "right" way to do it.
(We've been here before, many times.)
A few of those decisions seem radical, and I often disagreed with them.. but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
There might be a few pathological code paths in the core libraries or whatever for certain things that aren't what they should be, but in terms of the raw language you're in the land of C as much as with any of these languages; Odin really doesn't do much on top of C, and what it's doing is identifiable and can be opted out of; if you find that a function in a hot loop is marginally slower than it ought to be, you can make it contextless, for example, and see whether that makes a difference.
We haven't found (in a product where performance is explicitly a feature, also containing a custom 3D engine on top of that) that the context being passed automatically in Odin is of much concern performance-wise.
Out of the languages mentioned Rust is the one I've seen in benchmarks be routinely marginally slower, but it's not by a meaningful amount.
Or to phrase that more directly at the point: for monolithic kernels to be obsolete you have to break up the monolithic part, not just shim a microkernel hypervisor on top of it.
Additionally, there is a certain irony in using a monolithic Linux kernel, only to drown it in layers and layers of containers with Kubernetes.
Fortunately, vendors are gradually moving away from Linux, having been hamstrung by its failures. Google is planning to move to a capability-based microkernel in the coming years for Android and ChromeOS, and Huawei has already done so with HarmonyOS.
In a hundred years, Linux will be a footnote in computing history.
Exercise from other posts of mine which languages those might be.
So no, it's not as safe as rust in terms of memory, but it's quite close, and in the process lets you do some really cool stuff.
As for the other ones I listed,
https://en.wikipedia.org/wiki/Mesa_(programming_language) (some PDFs linked from there)
https://bitsavers.org/pdf/borland/turbo_pascal
https://docwiki.embarcadero.com/RADStudio/Sydney/en/Delphi_D...
and
Where Rust insists on having either partial safety through the checker or lack of control in unsafe code, Zig provides a toolkit for constructing safe frameworks. Zig also doesn't have the main sources of unsafety coming from certain C design mistakes.
Besides, if you are after true memory safety then garbage collection is the way to go.
Sometimes there are 100 things that could possibly go wrong. With error data you can easily know which exact thing went wrong. But with an error code you just know "something is wrong, don't know what exactly".
See: https://github.com/ziglang/zig/issues/2647#issuecomment-1444...
> I just spent way longer than I should have to debugging an issue of my project's build not working on Windows given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
https://matklad.github.io/2025/11/06/error-codes-for-control...
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
I haven't seen anyone use a global allocator in the way you're talking about, and if you did, I feel like it goes directly against the Zig ethos. Part of the benefit of allocators being passed around is that all allocations are explicit.
Library developers tend to choose the path of least resistance, which is to not pass diagnostic information.
The most convenient diagnostic system is good old logging. Logging is easy.
Maybe logging will become the de facto solution for passing error data in the Zig ecosystem, for psychological reasons.
If I found no consistency I'd be making a post like OP's, but from a different perspective.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
I agree that in special states like OOM, passing error data that requires allocation is not OK.
The problem with sigils is that they compose poorly when casting (refs, counts), and do not generalize to other types.
Plus, they seem to encourage the language designers to implement semantics that are "context aware", which would have been another billion-dollar mistake if Perl had become more popular.
In other words, that's unnecessary complexity bringing the attention to a poor type system. A bad idea that deserves to die, in my opinion.
Have you used it in any large projects?
> how to handle errors and diagnostics, though it's an area of active exploration
I am flabbergasted and exasperated by this sentiment. Zig is over 9 years old at this point. This feels like the same kind of circular argument from Golang "defenders" about generics and error handling.

If you look at the current Zig website, the hello world example doesn't compile because they changed the IO interface. Something as simple as writing to the console.
It’s easier to get things right if you have no issues breaking backward compatibility for a decade. It feels it’ll be well over 10 years before Zig is “1.0”.
I find it really amusing that we have a language that has built its brand around "only one obvious way to do things", "reducing the amount one must remember", and passing allocators around so that callers can control the most suitable memory allocation strategy.
And yet in this language we supposedly can't have error payloads because not every error reporting strategy is suitable for every environment due to memory constraints, so we must rely on every library implementing its own, yet slightly unique version of the diagnostic pattern that should really be codified as some sort of a language construct where the caller decides which allocator to use for error payloads (if any).
Instead we must hope that library authors are experienced and curious enough to have gone out of their way to learn this pattern because it isn't mentioned in any official documentation and doesn't have any supporting language constructs and isn't standardized in any way.
There must be an argument against this (rather obvious) observation but I'm not aware of it?
I want a function like `diffFiles(path_a, path_b)` to have an error set of `error { ReadError }` with more detailed information in the payload (e.g. file path, translated error code). The alternative is: `error { FileAReadErrorNotFound, FileBReadErrorNotFound, FileAReadErrorPermissionDenied, FileBReadErrorPermissionDenied, ...}`
I want off this ride.
https://github.com/ziglang/zig/issues/2647#issuecomment-2670...
IMO this is not a good reason at all.
But that is not implemented.
In any case, when debugging, annotating errors with extra context is often not enough. One often needs a detailed trace of what happened before.
So what I would like to see in any programming language is the ability to do structured logging with extra context from the call stack (including asynchronous support in languages that have that) with almost zero overhead when the log is not printed.
Various languages and runtimes have some libraries that try to do that, but the usage is awkward and the performance overhead is not trivial.
We are free to do that as a return type like `Result(T)` and just forgo using `try`, but yeah, I wish this was in there.
See, we were trying to make this data we had into a string, Rust says all strings are UTF-8 encoded - but, turns out the data wasn't UTF-8 after all, here's an error with the data inside it.
Or a really delicate piece of design, (nightly for now) Vec::push_within_capacity. We're hoping the growable array (Vec<T>) has enough space for this T, thus getting rid of it, but if not we don't want to grow the array, maybe we're bare metal software and can't afford to allocate on this hot path, so we get back an error with the T we were trying to push onto the array inside it, so we can do something else with that T but only when the problem happened, otherwise it's gone.
Whereas lazy devs could just attach all possible data in a giant generic error if they don’t want to think about it.
Okay, maybe theoretically, but in the real world I would like to have the filename on a "file not found", an address on a "connection timeout", a retry count on a "too many failures", etc.
I’d like my parser library to be able to give me the exact file, line and column number an error occurred. But I’d also like to use the library in a “just give me an error if something failed, I don’t really care why” mode.
How useful is a file not found error type without data (the filename) when the program is looking for 50 files? Not very.
How useful is a generic error type with '{filename} not found' as generic string data packed in? Quite.
And even if it did, interpreting the error generally doesn't ever work by examining attached data with a microscope. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
Also, a line number is often helpful, which is why compilers include it. Some JSON parsers omit that, which is annoying.
That's not error data, that's (one level of) a stack trace. And you can do that in zig, but not by putting call stack data into error return codes.
The conflation between exception handling and error flagging (something that C++ did largely as a mistake, and that has been embraced by managed runtimes like Python or Java) is actually precisely what this feature is designed to untangle. Exception support actually turns out to have very non-trivial impact on the generated code, and there's a reason why languages like Rust and Zig don't include them.
They're not talking about the stack trace, but about the common case where the error is not helpful without additional information, for example a JSON parsing library that wants to report the position (line number) in the string where the error appears.
There's no way of doing that in Zig; the best you can do is return a "ParseError" and build your own, non-standard diagnostic facilities to report detailed information through output arguments.
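A minimal sketch of that diagnostics pattern, with hypothetical names: the error set stays a bare code, and detail travels through an optional out-parameter owned by callers who care.

const std = @import("std");

const Diagnostics = struct {
    offset: usize = 0,
};

fn parseDigits(source: []const u8, diag: ?*Diagnostics) error{UnexpectedByte}!void {
    for (source, 0..) |byte, i| {
        if (byte < '0' or byte > '9') {
            if (diag) |d| d.offset = i; // detail only for callers who asked
            return error.UnexpectedByte;
        }
    }
}

pub fn main() void {
    var diag = Diagnostics{};
    parseDigits("12a4", &diag) catch {
        std.debug.print("bad byte at offset {d}\n", .{diag.offset});
        return;
    };
    std.debug.print("ok\n", .{});
}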
The final paragraph says "This is all quite surprising" -- why so? "and let one think that many advantages previously found only in interpreted languages are gradually migrating to compiled languages in order to offer more performance" -- sure, but Zig is hardly the first ... D and Nim both have interpreters built into the compiler that allow extensive comptime computation--both of those languages have far more metalanguage facilities than Zig, in addition to many other language features that Zig lacks--which is not necessarily a fault, as it aims for a certain kind of simplicity and close-to-the-metal performance ... although both D and Nim are highly performant (both have optional garbage collection, though Nim is more advanced in making GC-free programming approachable). One thing you can say about Zig though--it compiles like a bat out of hell.
P.S. Another thing about Zig worth mentioning that came up in some comments is cross compilation. I don't think people understand how Zig is different and what an engineering feat it is (Andrew has a writeup somewhere of how it's done--it's shocking):
If you install Zig, you can now generate executables for virtually any target with just a command line argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires recompiling the compiler and library to target a different architecture. Zig comes with precompiled libraries for a huge number of targets.
I noticed a comment where someone said they love Zig but they've never programmed in it--they use it to cross-compile their Nim programs. (The Nim compiler has a C code backend, and Zig has a C compiler built in, so Nim inherits instant arbitrary cross-compilation to any target via Zig).
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
[EDIT] “hate” > “dislike”. Hate is a strong word and surely I just need to spend some time writing zig and I’d get used to it.
Prefixes and different naming conventions of C-imported libraries are no less annoying.
I like how Zig feels clear and simple to start with. I like that it gives one toolchain and makes cross compilation easy. I like that it helps people see how systems programming can feel approachable again.
I also like that C has done these things for many years. I can use different tools, link libraries, and trust that it will still work. I can depend on standards that keep improving while staying familiar.
I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
Today, Zig is so much better than C. I used to refer to Zig as an improved version of C. But I don't anymore. C may have come first, but the chronological roles reversed. If Zig is a programming language, then C is a toy trying to copy Zig's functionality and usability.
Calling C easier to use in a cross platform context is absolutely insane. If I was only concerned about $HOST I would consider using C. Today, when I might want to copy a binary to literally any other system, I wouldn't even consider C. Zig wants code to work. C wants code to compile. There's a stark and critically important difference between the two.
> I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
I couldn't have put it better myself, the only thing C has over Zig is inertia. But I wouldn't consider that a selling point....
You can now write wide and UTF-8 string literals directly:
char8_t* s = u8"こんにちは";
char16_t* t = u"Привет";
char32_t* ustr = U"你好";
It just works across compilers, no special libraries or hacks needed.

C still feels like C, but cleaner, safer, and more consistent.
2 years later, already enjoying it in Zig, `defer` is a lot less important to me now. But I still view it as a symptom of the death of the language. C isn't dead, by any stretch of the imagination, but it's no longer learning from its mistakes, whereas I still am.
Once I spent time with it, I saw how many smart ideas from the kernel could be used anywhere: the initcall system that runs modules in order, the way structs with function pointers create flexible drivers, the use of macros to build type-safe lists, and so on.
https://www.collabora.com/news-and-blog/blog/2020/07/14/intr...
For real work, though, life is short. I use Go.
It is kind of interesting that packaging the same ideas with a C like syntax suddenly makes them "cool", 40 years later.
But yes, avoiding arcaneness for the sake of arcaneness will earn you more users.
A big success of Rust has nothing to do with systems programming or the borrow checker.
But just that it brings ML ideas to the masses without having to learn a completely new syntax and fight with idiosyncratic toolchains and design decisions.
Also highly subjective but the syntax hurts my eyes.
So I’m kind of interested by an answer to the question this articles fails to answer. Why do you guys find Zig so cool ?
So, no, I do not really see anything fundamentally new either. But to me this is the appealing part. Syntax is ok (at least compared to Rust or C++).
Having said this, I am still skeptical about comptime for various reasons.
What’s important is the integration of various ideas, and the nuances of their implementation. Walter Bright brings up D comptime in every Zig post. I’ve used D. Yet I find Zig’s comptime to be more useful and innovative in its implementation details. It’s conceptually simpler yet - to me - better.
You mention Ada. I’ve only dabbled with it, so correct me if I’m wrong, but it doesn’t have anything as powerful as Zig’s comptime? I think people get excited about not just the ideas themselves, but the combination and integration of the ideas.
In the end I think it’s also subjective. A lot of people like the syntax and combination of features that Zig provides. I can’t point to one singular thing that makes me excited about Zig
I view Zig as a better C, though that might be subjective.
Even C++ didn’t fully repent from this sin until around C++17. I appreciate the non-begrudging acceptance of this reality in Zig.
For example, apparently the plan9 OS gets special page_allocator handling: https://ziglang.org/documentation/master/std/#std.heap.page_...
It generates no code, it is a compiler barrier related to constant folding and lifetime analysis that is particularly useful when operating on objects in DMA memory. As far as a compiler is concerned DMA doesn’t exist, it is a Deus Ex Machina. This is an annotation to the compiler that everything it thinks it understands about the contents and lifetime of a bit of memory is now voided and it has to start over. This case is endemic in high-end database engines.
It should be noted that `std::launder` only works for different instances of the same type. If you want to dynamically re-type memory there is a different set of APIs for informing the compiler that DMA dropped a completely different type in the same memory address.
All of this is compiled down to nothing. It annotates for the compiler things it can’t understand just by inspecting the code.
But what's the Zig equivalent?
You might also find the builtin functions interesting[1]; they include a lot of really useful functions that in other languages are only accessible via the blessed stdlib, such as @addrSpaceCast, @atomicLoad, @branchHint, @fieldParentPtr, @frameAddress, @prefetch, @returnAddress, and more.
[1] https://ziglang.org/documentation/master/#Builtin-Functions
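For instance, @fieldParentPtr is what makes intrusive data structures workable: given a pointer to an embedded field, you recover the enclosing struct. A minimal sketch, assuming the post-0.12 two-argument form of the builtin (the types are my own toy example):

    const std = @import("std");

    const Link = struct { next: ?*Link = null };

    const Node = struct {
        data: i32,
        link: Link = .{},
    };

    test "recover the enclosing struct from a field pointer" {
        var node: Node = .{ .data = 42 };
        const link_ptr: *Link = &node.link;
        // The annotated result type (*Node) tells the builtin which parent to compute.
        const parent: *Node = @fieldParentPtr("link", link_ptr);
        try std.testing.expect(parent.data == 42);
    }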
For example, in this test case:
https://gcc.godbolt.org/z/j3Ko7rf7z
GCC generates a store followed by a load from the same location, because of the asm block (compiler barrier) in between. But if you change `if (1)` to `if (0)`, making it use `std::launder` instead of an asm block, GCC doesn't generate a load. GCC still assumes that the value read back from the pointer must be 42, despite the use of `std::launder`.
I think the subtle semantic distinction is that `volatile` is a current property of the type whereas `std::launder` only indicates that it was a former property not visible in the current scope. Within the scope of that trivial function in which the pointer is not volatile, the behavior of `std::launder` is what I'd expect. The practical effect is to limit value propagation of types marked `const` in that memory. Or at least this is my understanding.
DMA memory (and a type residing therein) is often only operationally volatile within narrow, controlled windows of time. The rest of the time you really don't want that volatile qualifier to follow those types around the code.
While this is partly due to the Zig maintainers' code quality, I think a large contributing factor is the choice of syntax. As an exercise, try navigating C, C++, or any other language's source code without an IDE or LSP. Things like:
- "Where did that function come from?"
- "What and where is this type?"
What do you have to do to find that out? Due to the flexible ways you can declare things in C, it may take you a lot of steps to find this information. Even in search, a variable and a function can share the same prefix because of the return type placement, hence why some people prefer putting function return types on a separate line.
Even with languages like Rust, finding out whether a type in a function's parameters is an enum or a struct, and finding its definition, can require multiple steps like searching for "enum Foo" or "struct Foo". In Zig I can search "const Foo" and I will immediately know what it is.
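To illustrate with my own toy declarations: types, values, and everything else are introduced by the same const form, so a single search finds the definition, whatever Foo turns out to be:

    const std = @import("std");

    // All three are found by grepping for "const <name>".
    const Color = enum { red, green, blue };
    const Point = struct { x: i32, y: i32 };
    const origin: Point = .{ .x = 0, .y = 0 };

    test "declarations are uniformly greppable" {
        try std.testing.expect(origin.x == 0);
        try std.testing.expect(Color.red != Color.blue);
    }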
While I do hope that C gets defer and constexpr functions in the next standard, or maybe better generics and enums, Zig's syntax is much better to work with in my opinion.
There's no reason to use Cygwin with Rust, since Rust has native Windows support. The only reason to use x86_64-pc-cygwin is if you need your program to use a C library that is not available for Windows but is available for Cygwin.
If you don't want to/can't use the MSVC linker, the usual alternative is Rust's `x86_64-pc-windows-gnu` toolchain.
I just use it as a cross-compiler for my Nim[0] programs.
[0] - https://nim-lang.org
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
indeed
> Zig for ( 0..9 ) |i| { }
> C for (i = 0; i < 9; i++) { }
I know a half-open interval [0..9) makes sense in many cases, but it's counterintuitive and I often forget whether it includes the last value or not. It's the same for Python's range(0, 9).

In most cases half-open intervals result in the simplest program, so I agree with the choice of Zig, which is inherited from other languages well-designed from this point of view, e.g. Icon.
I find half-open intervals more intuitive than either closed intervals or open intervals, and much less prone to errors, for various reasons, e.g. the size of a half-open interval is equal to the difference between its limits, unlike for closed intervals or open intervals. Also when accessing the points in the interval backwards or circularly, there are simplifications in comparison with closed intervals.
That means you have to waste bytes for the index when you need to include ..._MAX.
In a language where half-open intervals are supported consistently in all the places, this would be solved trivially, e.g. for a signed byte the _MIN and the _MAX values would be defined as -128 and +128, more intuitively than when using closed intervals, where you must remember to subtract 1 from the negated minimum value.
Even the C language has some support for half-open intervals, because the index pointing after the last element of an array is a valid index value, not an out-of-range value (though obviously, attempting to access the array through that index value would be trapped as an out-of-range access, if that is enabled).
Applied consistently, the same method would ensure that the value immediately above the last representable value of an integer type is valid in ranges of that type, even if it would be invalid in an expression as an operand of that type.
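A minimal Zig test of the "size equals the difference of the limits" property mentioned above (my own toy example):

    const std = @import("std");

    test "a half-open interval has hi - lo elements" {
        const lo: usize = 3;
        const hi: usize = 7;
        var count: usize = 0;
        for (lo..hi) |_| count += 1;
        try std.testing.expect(count == hi - lo);
    }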
Edit: it doesn't use Range for ..=, but rather RangeInclusive, which works fine.
for i in 0..length {
…
}
for i in 0..=maxindex {
…
}

EDIT: oh, just noticed it's 3 dots in the closed case... in Groovy it's just 2.
See Dijkstra for why this is the right way to represent ranges: https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...
With that said, here are a couple of things you have in Zig that you don't get in Odin:
- Cross-compilation & cross-linking (more or less works): Odin doesn't do cross-linking.
- Comptime; you can actually use it to effectively get Functors from ML, which means passing in interfaces to modules and getting compile-time generated modules back (structs in this case); see the sketch after this list
- Error set inference; Zig can figure out the complete set of errors a code path can return and make sure you handle them, or bubble that exact set (plus your own errors) up. This comes with the caveat that Zig has no capability to attach actual data to the errors, so you have to side-channel that info if you have it. Odin doesn't do error inference apart from the type checking side of it, but does allow using tagged unions as errors, which is great. They still interact exactly as they ought to with the zero-value-as-no-error machinery.
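To make the Functor comparison concrete, here's a minimal sketch (the module and function names are my own invention): a comptime function takes a "module" (a type with declarations, standing in for an ML signature) and returns a newly generated module.

    const std = @import("std");

    // An ML-style "structure": a type exposing T and lessThan.
    const IntOrd = struct {
        pub const T = i64;
        pub fn lessThan(a: T, b: T) bool {
            return a < b;
        }
    };

    // The "functor": takes a module, returns a compile-time generated module.
    fn MaxModule(comptime Ord: type) type {
        return struct {
            pub fn max(a: Ord.T, b: Ord.T) Ord.T {
                return if (Ord.lessThan(a, b)) b else a;
            }
        };
    }

    test "comptime functor" {
        const M = MaxModule(IntOrd);
        try std.testing.expect(M.max(3, 7) == 7);
    }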
I didn't use comptime much when I used Zig, and I like tagged unions as errors much more than I value being able to cross-link, so I decided that Odin was better for me. Defaulting to zero-values and the zero-value being blessed in terms of language features didn't feel right to me before I started using it but now I can't really imagine going back to not assuming the zero-value being there.
With that said, I'll try it out. I'm not really impressed by what I've seen so far, though, it's very middle-of-the-pack with some really nonsense ideas. The possibility of easily creating your own checks with the compile-time machinery is potentially interesting but would probably turn into a nothingburger for us.
I think that's where most of this is at: After so many years of "waiting" (I think most people stopped actually waiting after a few years of mostly talking and very little actual productive doing) we'll end up with a very meh language that was touted as super special... And a painfully simple sokoban game that people are going to pretend is somehow super complex and hard to make.
I feel like Zig is aiming a lot higher. So that’s why it’s taking longer and also why people are more obsessed with it. The work on doing their own backend and incremental linker is impressive and interesting. So is their attempt at getting IO and async right.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
“Low-level” languages — Rust, C++, D
> what the heck does it matter what "much of the standard library uses" to this issue?
It matters in that most people looking for a low-level, manually memory-managed language won't likely choose D, so for the purposes of "is this relatively novel among lower-level, manually memory-managed languages," D doesn't fit my criteria.
> Even C now has type inference. The plain fact is that the claim is wrong.
Almost no one is using C23 yet.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes and generics and macros and a complex trait system. Zig is in that class of programming languages like C, C++, and Rust, and unlike Golang, C#, Java, Python, JS, etc that have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
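For readers who haven't seen the Allocator convention mentioned above, a minimal sketch, using 0.13-era std names (which, given the pre-1.0 churn, may have shifted since):

    const std = @import("std");

    pub fn main() !void {
        // The debug allocator reports leaks when deinitialized.
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        // Allocation and free live right next to each other.
        const buf = try allocator.alloc(u8, 32);
        defer allocator.free(buf);

        @memset(buf, 'z');
        std.debug.print("{s}\n", .{buf});
    }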
I don't think there's any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
Zig on the other hand does lazy evaluation and tree shaking so you can include a few features of the std library without a big concern.
IIRC there's also a mutex somewhere in there used to workaround some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
Zig is a good language. So are Rust, D, Nim, and a bunch of others. People tend to think that the ones they know about are better than all the rest because they don't know about the rest and are implicitly or explicitly comparing their language to C.
Of course both Zig and Rust are good languages. But my experience, and I believe your experience will be too if you try to compile programs of similar complexity using standard practices of each language, is that Zig compiles much more compactly in .ReleaseSmall mode than Rust does even with optimization flags, which makes it more ideal for embedded systems, in my opinion. I learned this on my own by implementing the same library in both languages using standard default practices of each.
Of course, at the desktop runtime level, binary size is frequently irrelevant as a concern. I just feel that since Zig makes writing "magic" code more difficult while Rust encourages things like macros, it is much easier to be mindful of things that do impact binary size (and perhaps performance).
This is not true. Zig, D, and Nim all have full-language interpreters built into the compiler; Rust does not. Its macros (like macros generally) manipulate source tokens, they don't do arbitrary compile-time calculations (they live in separate crates that are compiled and then run on source code, which is very different from Zig/D/Nim comptime which is intermixed with the source code and is interpreted). Zig has no macros (Andrew hates them)--you cannot "generate code" in Zig (you can in D and Nim); that's not what comptime does. Zig's comptime allows functions written in Zig to execute at compile time (the same functions can also be used to run at execution time if they only use execution-time types). The Zig trick is that comptime code can not only operate on normal data like ints and structs, but also types, which are first class comptime objects. Comptime code has access to the TypeInfo of types, both to read the attributes of types and to create types with specified attributes, which is how Zig implements generics.
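A small illustration of types as comptime data, via std.meta.fields (the struct and the bit-size arithmetic are my own example):

    const std = @import("std");

    const Point = struct { x: i32, y: i32 };

    test "types are first-class comptime values" {
        // Walk Point's fields and sum their bit sizes, entirely at compile time.
        const bits = comptime blk: {
            var total: usize = 0;
            for (std.meta.fields(Point)) |f| total += @bitSizeOf(f.type);
            break :blk total;
        };
        try std.testing.expect(bits == 64);
    }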
Or it's hyperbolic.
I got that impression as well.
Xi's impressed that types are optional because they can be inferred.

That's... hardly a novelty...
Funny they mention Java, which has had type inference for a few years now. Even C got a weaker version of C++'s auto in C23.
Zig feels like one of the few programming languages that mostly just avoids gigantic blunders.
I have some beefs with some decisions, but none of them are an immutable failure mode that couldn't be fixed in a straightforward manner.
But destructors also don't conditionally interrupt the flow of execution, and always run at the end of a block.
> If I see a defer
The point is that you're not seeing it. In order to know if there's a defer happening at the end of a function you can't just read the end of the function, you need to read the entire function. That's non-local reasoning, which is what Zig professes to abhor.
And in fact defer is worse than destructors here, because a destructor runs the exact same code every time, whereas defer allows arbitrary code, so you need to review every single usage. And you also need to remember not to forget to use it in the first place (which is the classic footgun with defer), so in addition to vetting everywhere you use it, you also need to vet everywhere you might have forgotten to use it.
Why bring up destructors? I was talking about exceptions. Destructors and exceptions are orthogonal concepts; one can be implemented independently of the other. I'm specifically referring to try-catch blocks like those in Java.
Compare this:

    try { foo(); }
    catch { bar(); }

to this:

    defer bar();
    foo();
In the first one, bar() may or may not run, depending on whether there's an exception. In the second one, bar() is guaranteed to run. Thus, defer does not conditionally interrupt the flow of execution.

> The point is that you're not seeing it. In order to know if there's a defer happening at the end of a function you can't just read the end of the function, you need to read the entire function.
What? You don't need to read the entire function; you only need to scan for defers scoped in a block, or in some cases just the top of a function or block. Wanting to just read the end of a function is unreliable anyway given the existence of early returns (which defer fixes, by the way).
You could have made a more compelling (but not necessarily valid) case by citing Zig's try/catch, which makes me think that you are just arguing in the abstract without actually having tried writing code that uses defer, or any Zig code for that matter.
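For reference, a minimal sketch of the idiom the whole subthread is arguing about, written against the 0.13-era file API (the specific std calls are incidental to the point):

    const std = @import("std");

    fn readConfig(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
        const file = try std.fs.cwd().openFile(path, .{});
        // Sits next to the acquisition and runs on every exit path,
        // including any early error return below it.
        defer file.close();

        return file.readToEndAlloc(allocator, 1024 * 1024);
    }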
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
Working cross compilation out of the box any-to-any still isn't.
I guess it's convenient to have support for many target architectures built in by default. I wonder how big that package is.
From helicoptering folks onto the steering committee and the indoctrination of young CS majors.
There's nothing veiled here, and it's not an insult: what I mentioned is a real factor in why people would read that statement and jump to demanding proof.
-
If I told a room full of plumbers that Sharkbites are actually sponsored by big Water trying to encourage water wastage, it definitely might not land... but none of them are going to demand a citation!
I think a better rule of thumb is that one shouldn't use tone indicators at all. If you are needing them, then chances are that what you are going to post is not valuable/funny.
(or maybe you did make stupid comments that were valuable and funny in 2005, I wouldn't know)
Not uncommon in this space though, especially as you get closer to the metal (which cross-compilation is, at least relative to something like React frontends).
Debug allocators seem like a nice idea; however, they already exist "somehow" for C, so I wonder: why would you pick this language for your next low-level program? They provide runtime checks, so you need thorough testing before you can spot use-after-free and the like. It's very similar to the existing situation with C/C++ and the sanitizers, although they work a bit differently.
So the question I have for hardcore low level programmers: why don't they invest more on the memory allocators like hardened_malloc[0] instead of starting a new programming language? It would probably be less expensive in terms of time and would help fix existing software.
A partial answer is that many low-level programmers avoid memory allocation and threads like the plague. In some cases they are not even an option (small embedded programming is nearly as low-level as you can get before going hardcore for real with assembly), but even when they are, the keywords are efficiency, reliability, predictability, and simplicity: statically allocating everything in advance is a thing you can do because the product typically has max specs written on the box (e.g. a max number of entries in a phone book, to take a generic dumb example), and you have to meet those requirements even if the customer uses all of the capabilities to the max. No memory overbooking allowed, which is basically what dynamic allocation is, in a sense.
> instead of starting a new programming language
If I were to start a new low-low-level programming language, I would basically just fix C's weak typing problem, fix the UB problems that only come from issues with long-gone processors (like C++20 finally did with sign encoding), "backport" some C++ features (templates? constexpr?), add a pinch of syntactic sugar, and fix union types to have proper sum types. But probably I've just described D, and apparently a significant chunk of C23.
They allow for super ergonomic coding of state machines, which is a lot of fun.
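If this refers to the labeled switch continue added in Zig 0.14, a minimal sketch of the pattern (the states and counting are my own example):

    const std = @import("std");

    const State = enum { start, middle, done };

    test "state machine via labeled switch" {
        var steps: u32 = 0;
        sm: switch (State.start) {
            .start => {
                steps += 1;
                continue :sm .middle; // jump straight to the next state
            },
            .middle => {
                steps += 1;
                continue :sm .done;
            },
            .done => {},
        }
        try std.testing.expect(steps == 2);
    }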
To me it seems like a better C but not at all unique since most concepts in Zig are already present in other languages.
Zig is cool but not unique. And that is cool, too. Originality for the sake of originality doesn't add value in programming.
In fact, files in Zig are just structs!
>>>
Labeled breaks
Zig can do many things at compilation time. Let's initialize an array, for example. Here, a labeled break is used. The block is labeled with a : after its name, init, and then a value is returned from the block with break.
>>>
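In code, the quoted passage describes something like this (a sketch; the array contents are my own example):

    const std = @import("std");

    // Evaluated at compile time because it initializes a container-level
    // constant; the block is labeled init, and break :init yields its value.
    const squares = init: {
        var arr: [10]u32 = undefined;
        for (&arr, 0..) |*e, i| e.* = @intCast(i * i);
        break :init arr;
    };

    test "labeled break" {
        try std.testing.expect(squares[3] == 9);
    }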
The article hasn't even talked about how the language decides what an open curly brace introduces.
(also expected tesseract to do a bit better than this:
$ wl-paste -t image/png | tesseract -l eng - -
Estimating resolution as 199
const std = @import("std");
const expect = std.testing.expect;
const Point = struct {x: i32, y: i32};
test "anonymous struct literal" {
const pt: Point = .{
x = 13,
-y = 67,
33
try expect (pt.x
try expect(pt.y
13);
67);
)

const std = @import("std");
const expect = std.testing.expect;
const Point = struct {x: i32, y: i32};
test "anonymous struct literal" {
const pt: Point = .{
.x = 13,
.y = 67,
};
try expect(pt.x == 13);
try expect(pt.y == 67);
The trick is to preprocess the image a little bit like so:

    ocr ()
    {
        magick - -monochrome -negate - | tesseract stdin stdout 2> /dev/null
    }

Unfortunately I get the same kind of garbage around closing curly braces / closing parentheses / dots with this magick filter... It seems to do slightly better with an extra `-resize 400%`, but still very far from as good as what you're getting (to be fair, the monochrome filter is not pretty (bleeding) when inspecting the result).
I wonder what's different? ( ImageMagick-7.1.1.47-1.fc42.x86_64 and tesseract-5.5.0-5.fc42.x86_64 here, no config, langpack(s) also from the distro)
I like the simplicity and speed of Rust's egui. Something similar for Zig would be amazing.
My personal experience was (back in 2019) that Zig was basically a language you could learn in a weekend and end up being reasonably productive after a week. With that in mind, you might find that you can try it out and either find something that you really like in it and continue, or simply drop it (I ended up picking Odin over Zig, for example, and have found it delightful even 1+ years into production).
The truth is that if you only ever learn what is already popular you'll end up being the professional equivalent of a gray mass with zero definition and unique value proposition.
Inserting the literal one-byte instruction (on x86), INT 3, is the least a compiler should be able to do.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
If you are not using a GC'd language, you WILL be managing lifetimes. Rust just makes it explicit when the compiler can't prove it's safe, whereas Zig and C don't really care.
In Zig and C, it's always expected that you will explicitly manage your lifetimes. Zig uses the allocator interface to explicitly allocate new buffer or heap values and its keyword 'defer' to clean up allocated variables after the scope exits so that allocations and frees generally live next to each other.
C, on the other hand, is relatively unopinionated about how lifetimes are managed. The defer keyword honestly takes most of the pain of managing lifetimes away.
https://tigerbeetle.com/blog/2025-10-25-synadia-and-tigerbee...
Aside from the fact that Zig is still a bit immature in its std library and ecosystem, I mean. Is it a suitable systems language going forward?
Zig is actually perfect for production network services (that’s all TB is essentially, or how I see it, and what I was looking for in Zig—how to create something with explicit limits that can handle overload—it’s hard to build anything production-grade if it’s not doing NASA’s Power of Ten and getting allocation right—GC is not a good idea for a network service).
I wouldn’t say Zig’s std lib is immature. Or if anything, it has higher quality than most std libs. For example, the unmanaged hashmap interface is :chefskiss. In comparison, many std libs are yet to get non-global allocators or static allocation or I/O right.
[citation needed]
If we are to trust this page [0], Rust beats Zig on most benchmarks. In the TechEmpower benchmarks [1], Rust submissions dominate the top, while Zig is... quite far down.
Several posts I've seen in the past about Zig beating Rust by 3x or so all turned out to be based on low-quality Rust code with performance pitfalls, like measuring the performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
[0]: https://programming-language-benchmarks.vercel.app/rust-vs-z...
Btw, it's so much easier to add an environment variable on Linux; I haven't used Windows since 2007, so it was interesting for me to see how you do it there. Clicking through edit buttons to find that menu? Nah, I don't do that here :)
For the new Io interface, which I have not actually used yet, here are some relevant paragraphs:
The new Io interface is non-generic and uses a vtable for dispatching function calls to a concrete implementation. This has the upside of reducing code bloat, but virtual calls do have a performance penalty at runtime. In release builds the optimizer can de-virtualize function calls but it’s not guaranteed.
...
A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).
https://kristoff.it/blog/zig-new-async-io/

Don't know. That's how people usually get rid of repeated arguments (or an OOP constructor).
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
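The vtable shape being described is the same pattern std.mem.Allocator has long used. Roughly (a sketch of the general pattern only, emphatically not the actual std.Io definition):

    const std = @import("std");

    const Io = struct {
        ptr: *anyopaque,
        vtable: *const VTable,

        const VTable = struct {
            write: *const fn (ptr: *anyopaque, bytes: []const u8) anyerror!usize,
        };

        fn write(self: Io, bytes: []const u8) anyerror!usize {
            return self.vtable.write(self.ptr, bytes);
        }
    };

    // A concrete implementation that just counts bytes written.
    const Counting = struct {
        n: usize = 0,

        fn io(self: *Counting) Io {
            return .{ .ptr = self, .vtable = &.{ .write = write } };
        }

        fn write(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
            const self: *Counting = @ptrCast(@alignCast(ptr));
            self.n += bytes.len;
            return bytes.len;
        }
    };

    test "virtual dispatch through the vtable" {
        var c: Counting = .{};
        _ = try c.io().write("hello");
        try std.testing.expect(c.n == 5);
    }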
Then HN proceeds to keep the article at the head of the front page for the day.
Rust passes that test because it's categorically better than C and C++ in several ways: much better type system, safety, better modules and code reuse, etc. It's complex, but as far as I can tell most of its complexity is required to offer its level of safety guarantees in a pure systems language without a garbage collector or any kind of true dynamic typing. To make a safe systems language you need to have a very rich type system that can prove safety across a wide array of situations. Either that or you'd have to go to the other far end of the simplicity-complexity spectrum and have a language with virtually no features, which would result in very verbose code and probably a lot of boilerplate.
Zig's coolest feature to me seems like "comptime" and the lack of a weird macro side-language, which is one of Rust's anti-features that feels bolted on. Don't make me learn yet another language. Of course sophisticated macros in Rust are usually instead written in Rust itself via procedural macros, but that is more unwieldy than "comptime."
Still not enough to justify a whole new language and ecosystem though. Again: don't make me learn yet another language unless there's a big payoff.
Zig has completely changed the way I program (even outside of it). A lot of the goals and heuristics I used to have while writing code have completely changed. It's like seeing programming itself in a new way.
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
"When a Go project utilizes CGo to interact with C code, standard Go cross-compilation might require additional steps. This is because Go can cross-compile Go code but not C code directly, necessitating the availability of target system libraries on the development machine. Tools like Zig can be used as a C compiler (zcc) to facilitate cross-compilation for CGo-dependent projects by providing the necessary cross-compilation capabilities for the C code."
Zig makes cross-compilation trivial and part of the language philosophy.
Other languages either rely on external toolchains (C/C++, Rust with C deps) or are limited in target flexibility (Go).
For projects targeting multiple OS/architectures, Zig is currently the most straightforward option.
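Typical invocations, for the curious (target triples as listed by `zig targets`; the Go variables are the standard CGo cross-compile setup):

    # build a Windows executable from any host
    zig build-exe hello.zig -target x86_64-windows-gnu

    # use Zig as the C cross-compiler for a CGo project
    CC="zig cc -target x86_64-linux-musl" CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build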
It basically looks like C with different syntax. I'm also not convinced the 0..9 implicit range is better for iteration; I prefer it explicit for lower-level languages.
I don't find Zig nearly as readable as my D code, but alas, I don't do systems programming.
its a hipster language, absolute insanity to use it when rust exists unless you have that very specific c related slave work to do
Uncalled for and subjective. Certainly plenty of people call Rust's syntax ugly. Discussing syntax and not semantics is a waste of time.
> has zero use case other than mingling with legacy c code
So it has a use case?
> who in their right mind wants to be doing that
Some people have to.
> absolute insanity to use it when rust exists unless you have that very specific c related slave work to do
Some people do.
What's the need for such emotionally charged language in your comment?
I have my own reasons not to use Zig at this moment. I want enforced memory safety and am waiting on 1.0 to see what the language finally looks like. Until stabilization I certainly won't be using it in production. But that doesn't mean the project is meritless, that experimenting with language features before then is wrong, that making a language suitable for specific niches is a bad idea.
I don't see Zig as a replacement for tools that would have been written in Go, Java or C#, and I would rather we had less memory unsafe software out there, but it is a clear step function ahead of C.
Just like I and many others spend a lot of time trying to make Rust the best it can be, their team is doing the same.