The only downside is the stdlib being as fast-moving a target as it is. Right now I've had to put a pin in getting panic stack traces to work in my N64 code, because apparently the upcoming release changes a bunch of stuff around panic/stack-trace handling (and it's already changed quite a bit over the years, even before these new changes).
The fact that Zig can compile C code makes it useful for other languages too. I recently started using `zig cc` to cross-compile Nim for lots of different platforms within the same environment.
It takes no time to set up and, honestly, works like magic.
Agree, C interop is IMHO the big feature of Zig. There are plenty of systems programming languages in 2025, but where Zig shines is its pragmatism: a single standalone binary containing compiler, libc, build system, code formatter and test runner for C and Zig.
As of late, though, I've been concerned by some "holy wars"/"ideological postures" the dev team has started, which IMHO depart from the original "let's be pragmatic" mantra.
- There's a bunch of places where the stdlib just crashes on unreachable assertions, and that won't be fixed "because the kernel should have better error reporting".
- There are a bunch of kernel syscalls which are just not possible to call "because C enums should not allow aliases"
- etc
I hope this trend fades away and the project gets back to a more pragmatic stance on these issues; nobody wants a systems programming language that plays programming police.
Otherwise, C3 looks promising as well (though not as nice as Zig IMHO), but currently it's a bit too barebones for my taste. There's no stable LSP, no nvim plugin, etc.
Instead of "cross-compiling" or just running a native perl interpreter (there's one for about every platform!), I prefer how Actually Portable Executables make Perl multiplatform with just 1 binary asset running everywhere!
I wanted to write a webserver processing CGI to learn more about the "old school web", so I wrote https://github.com/csdvrx/PerlPleBean and the simplicity of just downloading and running the .com on anything is very nice
I'm now trying to do the same in Python 3, but it's not as fun - and I'm not yet at the part where I will try to safely run Python code within the Python webserver, either through restrictedpython or ast.parse(), ast.walk(), eval(compile()) ...
As an example, you can rewrite the calling program in a module (https://metacpan.org/pod/Acme::Bleach or https://metacpan.org/release/DCONWAY/Lingua-Romana-Perligata...).
While cool for jokes or serious DSL's, it may lead to difficult to understand code. (Nothing wrong with Damian Conway btw, I just remembered he used source filters in interesting ways).
There are different styles, but in general they are concise, and I like them.
Perl uses various sigils to remain concise, while other languages take up a lot of room on the screen: too many letters in the usual function names, not enough sigils within the language.
It's as if everything were in binary or hex instead of using the full range of ASCII: while technically possible, it may be harder to fit into your head.
Python has one sub-style I dislike the most: using tabs for indentation, because of how much EXTRA room they use on the screen.
It must not just be me, as there are solutions for coloring the spaces (I forked https://github.com/csdvrx/indent-rainbow to focus on black-and-white and using spaces instead of tabs)
I use spaces to limit the issue, but I can't make Python less verbose.
> it gives you all the tools to shoot yourself in the foot and take away the leg with it.
Python isn't innocent either: I recently traced an issue where exit(0) wasn't working to a threading problem, making bad use of atexit.
I don’t know a single Python project that does it. You can’t mix spaces and tabs for indentation.
4 spaces is the default for Python formatters like black, ruff (not sure whether it is configurable—never tried to change).
Big indent is a feature—deep nesting is a code smell.
Can you not adjust your tab stops?
I hate it too, because tabs look like spaces and they have a different syntactic meaning
function TabCollapse_Toggle() abort
  if &tabstop == 1
    set tabstop=8
  else
    set tabstop=1
  endif
endfunction
BTW if you hate tabs looking like other characters, and other invisible characters (like spaces at the end of a line, non-breaking spaces...), I have a solution in CuteVim (https://github.com/csdvrx/CuteVim : just run the portable executable) where I mapped it by default to a Fxx key. If you already use vim, here's the relevant part: assuming your Shift-F11 is free, add this to your vimrc:
" Default is off, `se list` to turn on and `se nolist` to turn off
" Traditional with ISO-8859-1:
"set listchars=tab:»·space:_,trail:·,eol:¶
" Or cuter with unicodes:
set listchars=tab:↹⇥,space:_,nbsp:␣,trail:•,extends:⟩,precedes:⟨,eol:↲
set showbreak=↪
inoremap <silent> <S-F11> <Esc>:set list!<CR>
noremap <silent> <S-F11> :set list!<CR>
Shift-F11 will then become a toggle, to show you tabs: you will see ↹ where the tab starts, and ⇥ for how long it is
I've used this in vim for years:
:se expandtab tabstop=4 shiftwidth=4
(those can be abbreviated, check docs)
Then:
I use only tabs for indentation.
All the tabs get expanded to four spaces each.
Indenting and unindenting lines or blocks of lines, moves them by four spaces.
Never had a problem.
Maybe there are more fancy ways, but this works fine for me.
> All the tabs get expanded to four spaces each.
Then Python will not work (?)
Maybe it is you who "will not work".
Did you try it before commenting? I already said it works fine.
A frequent opinion. An easy way to fit in for people who never bothered to learn the language. Which is all the sadder given that Perl is super easy to learn (one layer at a time).
Writing readable perl is easy, just code it like it's LISP.
Granted, he was working with it in AI/bioinformatics.
One of my classmates who moved into the IT/management side of things historically got much quicker responses from the dev team whenever he volunteered to code something, as he was always going to do it in perl.
Perl with list comprehensions does look a bit like line noise...
https://en.wikipedia.org/wiki/Comparison_of_programming_lang...
... but not really worse than most other languages there.
By contrast, its "higher order functions" are useful but relatively speaking, read like someone shook a box of leftover syntax all over them.
https://en.wikipedia.org/wiki/Higher-order_function#Perl
Even though C# has a seizure in the middle, Perl still seems the clunkiest.
Making a large code base easy to read is very hard. People often work on tiny code bases and talk about how easy it is to read, not understanding that they are comparing something with a couple thousand lines of code to something with tens of millions.
I'm not sure what you're referring to (link?) but note that whether C++ code is good or bad can depend strongly on the tooling. Certain coding patterns can be fantastic when your tooling can adequately detect their misuse, and awful when it doesn't. Which means sometimes you can't just look at code and tell whether it is good or bad.
> But most of old corporate, Microsoft or Stroustrup code is just horrible, worse than hard-core perl nonsense.
I got the impression Microsoft's C code was always pretty good, their C++ not so much a decade ago - not sure how their C++ is now.
But I like Perl (and other languages) too.
Variety is the spice of life.
Clearly the problem is all these languages, and not me.
It's one of his well-known quotes.
https://quotefancy.com/quote/1497280/Larry-Wall-I-think-I-m-...
A homopipic / homeopathic language?
Similia similibus curentur.
When you do, you appreciate the density of information.
When I read Perl, it's like reading a poem: to take a simple example, 'while/until' instead of 'while/while not' creates more beautiful code.
Ah that's an interesting take, my opinion is that the stdlib doesn't move fast enough.
In its current state it's pretty broken: most of the "process", "os" and "posix" modules are either straight up raising unreachable in normal scenarios, or simply badly designed. I would like the stdlib to move much faster and fix all these issues, but I had the impression most work on it is frozen until 0.15 or 0.16, after incremental compilation is done.
I don't think so; my impression is that stdlib improvements are voluntarily frozen for now, not because of a lack of contributors but because of a lack of a clear plan as to what the stdlib should look like. There are a number of issues and PRs from people willing to contribute to the stdlib that are stalled.
That's not to say that it's bad per se; "we don't have a plan for now and don't want people to commit time to an unclear target" is a perfectly OK answer.
Sadly the job market looks dead
You can make your own though :)
Right now it's just a bunch of WIP Zig interfaces for the N64's hardware, but the end-goal is to get it developed enough for homebrew gamedev.
To get there, though, I need to implement Zig-ish interfaces to the N64's hardware, which is slowly-but-surely happening at https://fsl.yellowapple.us/zig64/dir?ci=trunk
Things I like:
- Vendor libraries like Raylib and MicroUI make it easy to get started
- I can pass around memory allocators and loggers implicitly using context, or explicitly if I need to.
- Natively supports vector math and swizzling
- Error handling with `or_else` and `or_return`
Things I don't like:
- Namespacing is a bit annoying. The convention is to prefix the procedures, but I don't like how they look. It really isn't a big issue.
Have a quick read of the overview and, if you are still interested, I highly recommend the 'Understanding the Odin Programming Language' book by Karl Zylinski [2].
Regardless, I do recommend people to try it out. I use Linux and OpenBSD, too, despite Linus and Theo. :)
[1] The reason for why I think this can be found in their pull requests, but it's been some time I think.
It has nothing to do with adding new features. I agree with you, I do not want the language to be bloated, nor do I want new features blindly added. I prefer simplicity, too.
FWIW you can see him losing his "cool" on Discord, too, at times.
I do not intend to have a collection of all the times he lost his cool.
The compiler is very fast, even over large codebases.
Mostly trying to bring AWS tooling to the platform[1], or experimenting with cross-compilation[2] using another less well known systems language, Zig.
[1] https://github.com/chris-armstrong/smaws/ [2] https://github.com/chris-armstrong/opam-cross-lambda
Re: AWS tooling, have you seen https://github.com/solvuu/awsm ?
It generates code for all 300+ AWS services and produces both Async and Lwt forms. Should be fairly extensible to Eio.
I worked on this. Let me know if you want to tag team.
The syntax is also not very friendly IMO. It's a shame because it has a lot of great ideas and a nice type system without getting all monad in your face. I think with better tooling and friendlier syntax it could have been a lot more popular. Too late for that though; it's going to stay consigned to Jane Street and maybe some compilers. Everyone else will use Rust and deal with the much worse compile time.
Very true. There's an alternate syntax for OCaml called "ReasonML" that looks much more, uh, reasonable: https://reasonml.github.io/
They exist, I think you just mean `int` is 63-bit and you need to use the specialized `Int64.t` operators for the full precision.
I mean, I understand "reserved" to mean either "you can't depend upon it if you use it", or "it will break the runtime if you use it".
If something is "available", it should mean that it can be used to its full capacity. One of those bits are definitely not available.
That is a runtime system not suitable for systems-level programming.
My C experience gave me a fundamental misunderstanding, because there an int is always derived from either a 32- or 64-bit int, depending on architecture.
OCaml is architected differently. I imagine the purpose was to keep the programs mostly working the same across processor architecture sizes.
I imagine this fundamental difference between OCaml's native int and these more specific Ints is why there are open issues in the library that I'm sure the native int does not have.
Regardless, no one should be using OCaml for systems-level programming.
Thanks for helping me get to the heart of the issue.
(a) int has 31 bits on 32-bit architectures and 63 on 64-bit architectures (which speeds up some operations)
(b) the standard library also provides Int32 and Int64 modules, which support platform-independent operations on 32- and 64-bit signed integers.
In other words: int is different but you always have standard Int32 and Int64 in case you need them.
It seems, therefore, that its use for systems-level programming should not be decided by this (although the fact that it is a garbage-collected language can be important depending on the case; note that its garbage collector has proved to be one of the fastest in the comparisons and evaluations done by the Koka language team).
Thanks for your patient elucidation.
This means the semantics for Int32 and Int64 are COMPLETELY different from those of an int. My problem is that I come from the C world, where an int is simply derived from either a 32- or 64-bit integer, depending on the target architecture.
OCaml's runtime is not a system designed for systems-level programming.
Thanks again.
Now I know why the F# guys rewrote OCaml's fundamental int types from the get-go.
Again, the semantics of int are different, but the semantics of Int32 and Int64 in OCaml are the same/standard. So you have 3 types: int, Int32 and Int64, and it is a statically typed language.
Regardless, I don't think C's "probably 32 bit" non-guarantee is the make or break feature that makes it a systems language. If I care about the exact size of an integer in C I'm not going to use an int- I'm going to use explicit types from stdint. Rust makes that mandatory, and it's probably the right call. OCaml isn't really what I'd use for a systems language, but that's because it has no control over memory layout and is garbage collected. The fact that it offers a 63-bit integer doesn't really come into it.
They are, though. Int64 and Int32 only differ in bit length and are in formats native to the host microprocessor. int has one of its bits "reserved" for the OCaml runtime, but Int32 has no such overhead.
> The fact that it offers a 63-bit integer doesn't really come into it.
It does if you are interoperating with an OS's ABI, though, or writing a kernel driver.
But you're right: there are a host of other reasons that OCaml shouldn't even have been brought up in this thread ;-)
Peace be with you, friend. Thanks for so generously sharing your expertise.
> Performance notice: values of type int64 occupy more memory space than values of type int
I just couldn't even imagine that a 64-bit int would require MORE memory than an int that is one bit less (or 33 bits less if on a 32-bit architecture).
It really makes absolutely no sense discussing OCaml as a possible systems-level programming language.
It was early this morning.
2. It has some kind of pinning system that is completely incomprehensible. For example you can do `opam install .`, which works fine, and then `git switch some_other_branch; opam install .` and it will actually still install the old branch?? Honestly I've never figured out what on earth it's trying to do but me and my colleagues have had constant issues with it.
> Compared to what?
Compared to good tooling like Cargo and Go and NPM and uv (if you give it some slack for having to deal with Python).
It's better than Pip, but that doesn't take much.
The point 2 you mention, I don't understand the issue. There is `opam switch`, which works perfectly for me, no issues at all. Like any other tool, it is better to read the manual to understand how it works.
Cargo and opam are not comparable; probably the next generation of dune could be, but at this moment it makes no sense to compare two utilities that are so different. Comparing with pip, the Julia package manager, etc. is fine. Personally, I like opam more than npm and pip.
Why do you think that the syntax is not very friendly?
Not saying you are wrong, just interested to know.
https://dev.realworldocaml.org/
I also saw this book OCaml from the Very Beginning by John Whitington.
I have not read that one yet. But I know about the author, from having come across his PDF tools written in OCaml, called CamlPDF, earlier.
https://github.com/johnwhitington/camlpdf
>CamlPDF is an OCaml library for reading, writing and modifying PDF files. It is the basis of the "CPDF" command line tool and C/C++/Java/Python/.NET/JavaScript API, which is available at http://www.coherentpdf.com/.
Being a huge fan of F# v2 who has ditched all MS products, I didn't think OCaml was able to be systems-level because its integer vars can't be precisely specified.
I'd love to know if I'm wrong about this. Anyone?
You also mention that Int32 and Int64 are recent; however, these libraries were already part of OCaml in the 4.X versions of the compiler and standard library (we are now on 5.3).
Note that in OCaml you can use C libraries and it is quite common to manage Int32, Int64, signed etc...
> https://opam.ocaml.org/packages/stdint/
It's been a while since I investigated OCaml, so I guess this is a recent addition and is obviously not a part of the standard integer data types (and, therefore, the standard language), that not only have no signedness, and only have Int32 and Int64, but have "one bit is reserved for OCaml's runtime operation".
The stdint package also depends on Jane Street's "Dune", which they call a "Fast, portable, and opinionated build system". I don't need or want any of its capabilities.
As well, the issues page for stdint has a ton of more than year old open issues, so, as I understood, OCaml does not, like F#, have all sizes and signedness of ints available in their fundamental language. Such a language is simply not a good fit for system-level programming, where bit-banging is essential. Such low-level int handling is simply not a part of the language, however much it may be able to be bolted on.
I just want to install a programming language, with its base compiler and libraries and preferably with man pages, open some files in vi, compile, correct, and run. That is my requirement for a "systems-level" language.
I would never in my life consider OCaml with opam and Dune for building systems-level software. I wish it could, but it's not copacetic for the task, whose sole purpose is to produce clean, simple, understandable binaries.
Thanks for helping me understand the situation.
Int type (the one you dislike for systems programming)
Int32 type (part of the standard library, one of those you were looking for)
Int64 type (part of the standard library, one of those you were looking for)
Nativeint (part of the standard library, maybe the one you were looking for)
The stdint library is another option, which can be convenient in some cases, but you don't need it for Int32 and Int64, and you don't need it for Nativeint either.
I think you're misinterpreting this. That's just the date the most recent version of the library was published. The library is something like 15 years old.
> the standard integer data types (and, therefore, the standard language), that not only have no signedness
I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?
> and only have Int32 and Int64, but have "one bit is reserved for OCaml's runtime operation".
The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits). Int32 and Int64 really are normal 32- and 64-bit ints. The trade-off is that they're boxed (although IIRC there is work being done to unbox them) so you pay some extra indirection to use them.
> The stdint package also depends on Jane Street's "Dune", which they call a "Fast, portable, and opinionated build system". I don't need or want or need any of its capabilities.
Most packages are moving this way. Building OCaml without a proper build system is a massive pain and completely inscrutable to most people; Dune is a clear step forward. You're free to write custom makefiles all the time for your own code, but most people avoid that.
It's not clear from the docs, but, yeah, I suspected that might be the case. Thanks.
> I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?
Yes, that's what I mean. And doesn't that mean that it's fully unsuitable for systems programming, as this entire topic is focused on?
> The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits).
I don't get it. What is it reserved for then, if the int size is determined when the runtime is built? How can that possibly affect the runtime use of ints? Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?
Once again, I don't see how any of this is suitable for systems programming. Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.
> Building OCaml without a proper build system
But I don't want to build the programming language, I want to use it. Sure, I can recompile gcc if I need to, but that shouldn't be a part of my dev process for building software that uses gcc, IMO.
It looks to me like Jane Street has taken over OCaml and added a ton of apparatus to facilitate their various uses of it. Of course, I admit that I am very specific and focused on small, tightly-defined software, so multi-target, 3rd-party-utilizing software systems are not of interest to me.
It looks to me like OCaml's intrinsic install is designed to facilitate far more advanced features than I care to use, and that looks like those features make it a very ill-suited choice for a systems programming language, where concise, straightforward semantics will win the day for long-term success.
Once again, it looks like we're all basically forced to fall back to C for systems code, even if our bright-eyed bushy tails can dream of nicer ways of getting the job done.
Thanks for your patient and excellent help on this topic.
Types are fully erased after compilation of an OCaml program. However, the GC still needs to know things about the data it is looking at - for example, whether a given value is a pointer (and thus needs to be followed when resolving liveness questions) or is plain data. Values of type `int` can be stored right alongside pointers because they're distinguishable - the lowest bit is always 0 for pointers (this is free by way of memory alignment) and 1 for ints (this is the 1 bit ints give up - much usage of ints involves some shifting to keep this property without getting the wrong values).
Other types of data (such as Int64s, strings, etc) can only be handled (at least at function boundaries) by way of a pointer, regardless of whether they fit in, say, a register. Then the whole block that the pointer points to is tagged as being all data, so the GC knows there are no pointers to look for in it.
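To make that concrete, here's a rough sketch in C of the tagging idea described above (illustrative only, not the actual OCaml runtime code):

#include <stdint.h>
#include <stdio.h>

/* Pointers are word-aligned, so their lowest bit is always 0;
   an integer n is stored as (n << 1) | 1, so its lowest bit is
   always 1. A GC can then tell the two apart in a uniform word. */
typedef uintptr_t value;

static value    tag_int(intptr_t n) { return ((uintptr_t)n << 1) | 1; }
static intptr_t untag_int(value v)  { return (intptr_t)v >> 1; }
static int      is_int(value v)     { return v & 1; }

int main(void) {
    int heap_cell = 42;
    value i = tag_int(1000);        /* gives up one bit, like OCaml's int */
    value p = (value)&heap_cell;    /* aligned pointer, low bit 0 */
    printf("%d %d %ld\n", is_int(i), is_int(p), (long)untag_int(i));
    return 0;
}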
> Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?
To be clear, you have to choose at build time what you're targeting, and the integer size is part of that target specification (most processor architectures these days are 64-bit, for example, but compilation to javascript treats javascript as a 32-bit platform, and of course there's still support for various 32-bit architectures).
> Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.
Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.
> But I don't want to build the programming language, I want to use it.
I meant building OCaml code, not the compiler.
All this said, thanks for putting to bed, once and for all, any notion anyone should have that OCaml can be used as a systems language. Yikes!
> Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.
You know that at compile time, surely, when you set the build target, no? Even the pointer sizes. Besides, after years of C programming, I got to where I never used the nonspecific versions; if I wanted 64-bits unsigned, I would specifically typedef them at the top, and then there's no ambiguity because I specifically declared all vars. (You can see how I did the same thing in F# at the bottom of this reply.)
It makes working with printf much less problematic, where things can easily go awry. Anyway, I want my semantics to percolate down pyramid-style from a small set of definitions into larger and larger areas of dependence, but cleanly and clearly.
Sure, DEFINEs can let you do transparent multi-targetting, but it ends up being very brittle, and the bugs are insidious.
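Concretely, the shape of what I mean in C (the alias names here are just my own convention, nothing standard):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Fixed-width aliases declared once at the top, so nothing
   downstream depends on the target's native int width. */
typedef uint64_t u64;
typedef int32_t  i32;

int main(void) {
    u64 big   = UINT64_MAX;
    i32 small = -42;
    /* PRIu64/PRId32 keep the printf format strings portable too */
    printf("%" PRIu64 " %" PRId32 "\n", big, small);
    return 0;
}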
Thanks for your excellence. It's been a joy learning from you here.
---
As an aside, here's a small part of my defs section from the final iteration of my F# base libs, where I created an alias for the various .NET types for standard use in my code:
type tI4s = System.Int32
type tI1s = System.SByte
type tI2s = System.Int16
type tI8s = System.Int64
type tI1u = System.Byte
type tI2u = System.UInt16
type tI4u = System.UInt32
type tI8u = System.UInt64
Why risk relying on implicit definitions (or inconsistent F# team alias naming conventions) when, instead, everything can be explicitly declared and thus unambiguous? (It's really helpful for syscall interop declarations, as I remember it from so many years ago). Plus, it's far more terse, and .NET not being able to compile to a 64-bit executable (IIRC) made it simpler than C/C++'s two kinds of executable targets.

Empirically this is a rather low cost. IIRC, the extra ops add less than a cycle per arithmetic operation, due to amortizing them over multiple operations and clean pipelining (and also things like shifts just being really cheap).
But yes, there are certainly applications where we almost exclusively use Int64 or Int32 rather than the primary int type, if you need exactly that many bits.
> You know that at compile time, surely, when you set the build target, no?
Well, that's true of OCaml as well.
This is ultimately a difference of opinion - I think that the cost of installing a single extra library to get ints of various widths/signedness would be worth the advantage of eliminating nearly all memory errors (and various other advantages of a higher-level language).
The main carveout I would agree with is any case where you absolutely need strict memory bounds - it's not clear to me how you'd satisfy this with any GC'd language, since the GC behavior is ultimately somewhat chaotic.
In the 1980s, complete workstations were written in Lisp down to the lowest level code. With garbage collection of course. Operating system written in Lisp, application software written in Lisp, etc.
Symbolics Lisp Machine
https://www.chai.uni-hamburg.de/~moeller/symbolics-info/fami...
LMI Lambda http://images.computerhistory.org/revonline/images/500004885...
We're talking about commercial, production-quality, expensive machines. These machines had important software like 3D design software, CAD/CAM software, etc. And very, very advanced OS. You could inspect (step into) a function, then into the standard library, and then you could keep stepping into and into until you ended up looking at the operating system code.
The OS code, being dynamically linked, could be changed at runtime.
If you want something that is essentially just a modernized C, go with Zig. The concept of compile-time programming having the same appearance as runtime programming is very cool in my opinion. My only major complaint at the moment is that duck typing is fairly prevalent. Sometimes function arguments are declared `anytype` and you occasionally have to dive down multiple function calls to figure out what's going on, though that's not too much of a hindrance in practice, in my experience.
My personal favorite language is Nim. Efficient, but simple, memory management (drawing from C++/Rust). You rarely have to think too hard about it, yet making fast programs is not complicated. You can stick to the stack when you want to. The flexibility at compile-time gives you great power (but it requires great responsibility -- easy to abuse in a bad way). The type system is awesome. The only downside for me is the tooling. The LSP needs much optimization, for example.
Also, I believe high-level compiled languages suffer from the fact that it is very hard to tell which construct is expensive and which is a zero-cost abstraction. Rust has the same issue, but "zero-cost" is a major feature of the language so you don't feel bad using an Iterator, for example, in kernel code. With Nim it is hard to tell.
import other
varA other.`+` varB
Which is very ugly. At that point, we might as well just go with the function name approach that languages like Go take: customAdd(varA, varB)
I suppose you could change it so operators are imported into the same namespace, and non-operators still require a separate namespace when referred to. But that makes it even more complicated in my opinion. I agree it's less obvious what's coming from where, but I think when your libraries have distinct responsibilities, it usually ends up being pretty straightforward what function comes from where based on how it's named (if it's written well).

https://tour.dlang.org/tour/en/gems/compile-time-function-ev...
You mean, something that Lisp does since the early 1980s?
Um, no. Debugging a macro in Lisp is a terrible experience while debugging a comptime function in Zig is brain dead simple.
Zig is the first "macro" system I've used that doesn't want to make me blow my brains out when I need to debug it.
1. It has an actually sound type system.
2. The language and standard library are waaaaaaaay ahead of Javascript.
3. The tooling is top notch. Better than JS/TS.
But on the other hand:
4. Way smaller ecosystem.
5. Debugging is worse if you're compiling to JS. The fact that the code you run is basically identical to the code you write in TS can be a big advantage. Only really applies for web pages though.
6. Type unions are way nicer in TS.
7. Non-nullable types interact badly with classes. It can make writing methods correctly really awkward - you have to explicitly copy member variables to locals, modify them and then write them back.
8. Way smaller community.
But when the types are sound you can use them to compile better code. That's what most languages with "proper" static types (not just type hints) do.
I know you didn't ask me, but I think that not ensuring soundness is a feature, because it allows the type system to wrap something that could work without it. Would you like unit tests if removing them would break your code? Maybe it's not a fair comparison, or maybe it is...
You can use other libraries for this like Riverpod with flutter_hooks and functional_widget which essentially removes the OOP structure of widgets and turns them more into functions, in a way.
I have also moved back hard to using TCL as my scripting language. I like it too much, and bouncing between Python, Go, and such for DevOps glue tires me out.
For systems, I love using plan9 (9front) to solve problems, which grounds me to C, awk, sed, and the rc shell.
A few decades ago plenty of Oberon dialects.
As language geek, I randomly select languages when doing hobby coding.
Regarding the remark about Go: even if I dislike Go's authors' decisions, back in my day writing compilers, linkers, firmware, networking stacks, and OS services was considered systems programming.
Likewise, the .NET team has been making wonders catching up to what C# 1.0 should have been for low-level code, given its Delphi lineage.
As for Java in the context of being the whole Android userspace, including drivers: there is very little of the system exposed in the NDK. Vulkan is one of the few things not exposed to Java land, and that is being fixed with a WebGPU-like API in an upcoming version.
Discarding the preprocessor and replacing it with a proper module system is huge. I got burnt by templates and horrifying compile times in C++, but haven't had any problems with D templates. The module system makes templates feel much more natural to use. The syntax for templates is a huge improvement, and throwing `static if` into the mix results in concise and easy-to-read code.
I also quickly realized (with the help of some people on the D discord) that the garbage collector is fine for my needs. So I don't have to spend any time thinking about memory management... put stuff on the stack when I can for speed, othrewise just GC and don't think about it. I think there may be some issue with multithreading and the GC, but this is supposed to get fixed with the new GC that's on the way.
There are a few other nice QOL improvements. Getting rid of `->` is honestly worth its weight in gold. There's nothing difficult about forgetting to change a `.` to a `->` or vice versa in C++, but not having to trip over it periodically when you're compiling makes the language that much smoother. I was also initially confused by the `inout` keyword but have come to really like that, as well. Little niceties like `const(T[])` are small but, again, reducing just a little bit of friction like this across the language makes D much, much more pleasant to deal with than C++.
I think the main challenge the language is facing right now is that it's huge and a lot of it is still getting worked out. I never thought I'd pine for C++'s "rule of 3/5/0", but it's a lot tighter and more logically consistent than the equivalent in D. But part of that is there being a huge community of C++ developers who have taken the time to promulgate rules of thumb in the community. I'd kill for an "Effective D" book to short-circuit some of this process... after all, I'm trying to write code, not play at the margins, tinkering with D's idiosyncrasies.
https://en.m.wikipedia.org/wiki/Scott_Meyers
The Last Thing D Needs - Scott Meyers - DConf 2014
> (...) realized (with the help of some people on the D discord) that the garbage collector is fine for my needs.
Do you envision linking in a garbage collector in your eventual c++ rewrite?
In my area (numerical methods and computational geometry), I do not need anything to run in real or soft real time. The GC pauses aren't a concern. In which case, there is no real performance concern other than what I mentioned about the pauses being effectively single-threaded (my understanding... maybe this isn't exactly right). But this is supposed to be improved at some point, so whatever. Not having to explicitly think about memory management is a pure win.
On the other hand, my understanding is that using a GC in C++ could confuse things like Valgrind and ASan. Converting the entire codebase to use a GC is infeasible; so, if it made things more difficult for others by making these tools harder to use, it would be a nonstarter. But maybe this is just an imagined difficulty.
Another option is to just implement some scoped allocators. Everything I'm working on at the moment is "pure": some complicated operation applied to some fixed data. So, use an allocator to simulate GC within the scope of what I'm doing.
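Roughly the shape I have in mind, as a minimal bump-arena sketch (not production code):

#include <stdlib.h>
#include <stddef.h>

/* Bump-allocate within one operation's scope, then free everything
   at once when it finishes - "GC within the scope", as above. */
typedef struct { char *base, *cur, *end; } arena;

static arena arena_make(size_t cap) {
    char *p = malloc(cap);
    return (arena){ p, p, p ? p + cap : p };
}

static void *arena_alloc(arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    if (!a->cur || (size_t)(a->end - a->cur) < n) return NULL;
    void *p = a->cur;
    a->cur += n;
    return p;
}

static void arena_free(arena *a) { free(a->base); *a = (arena){0}; }

int main(void) {
    arena a = arena_make(1 << 16);
    double *xs = arena_alloc(&a, 1000 * sizeof *xs);  /* scoped scratch */
    int ok = xs != NULL;   /* ... run the whole operation on xs ... */
    arena_free(&a);        /* everything released in one call */
    return ok ? 0 : 1;
}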
If anyone has thoughts here, I'm definitely interested to hear. Not that I'm looking forward to a C++ rewrite. :`(
So far in converting the lexer, it does make it more comprehensible; it will probably do the same for the parser and AST. The really interesting bit will be once I tackle the later stages.
Also on a more serious note: I started some projects in Zig and even though most of my future projects will be built on a bedrock of C code, more and more of the top-level layers will happen in Zig.
I do use those so thank you :)
What I love most about C is the fact that it doesn't talk down to me no matter what crazy ideas I come up with. It's therapeutic for me, reminds me why I started writing code in the first place.
I realize that's also what many hate about it, the fact that it gives other people freedoms they would never trust themselves with.
https://github.com/danos/vyatta-dataplane/blob/master/src/npf/config/gpc_hw.c#L600-L623
https://github.com/danos/vyatta-dataplane/blob/master/src/npf/config/npf_rule_group.c#L252-L280
That is code which is around 4 years old.

For the latter example, one could theoretically avoid declaring the variables 'event' and 'rg_match', instead directly including the compound literals in the respective function calls. However, it is a question of taste, and of what is more readable.

(The above have designated initialisers; I can't remember if there are any compound literal examples there.)

There is however one here, when the BSTR_K macro is also expanded, also the earlier BSTR_INIT:

https://github.com/danos/vyatta-dataplane/blob/master/src/npf/bstr.h#L199
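For anyone unfamiliar with the two features being discussed, here's a minimal standalone illustration in C (the event struct is invented for the example, not taken from the linked code):

#include <stdio.h>

struct event {
    int         type;
    const char *name;
};

static void notify(struct event ev) {
    printf("%d: %s\n", ev.type, ev.name);
}

int main(void) {
    /* designated initialiser: fields named explicitly, in any order */
    struct event ev = { .name = "rule-added", .type = 1 };
    notify(ev);

    /* compound literal: the same struct built in place at the call
       site, avoiding the named temporary, as discussed above */
    notify((struct event){ .type = 2, .name = "rule-removed" });
    return 0;
}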
Embedded is diverse. I would not use .NET for small embedded, i.e. stuff running on Arduino or ESP32.
However, I have successfully used .NET runtime in production for embedded software running on top of more performant SoCs, like 4 ARMv7 cores, couple GB RAM, Linux kernel. The software still has large pieces written in C and C++ (e.g. NanoVG rendering library) but all higher-level stuff like networking, file handling, and GUI are in memory-safe C#.
https://learn.microsoft.com/en-us/archive/msdn-magazine/2015...
https://www.ghielectronics.com/netmf/
You should also pour one out for Longhorn, where internal politics tanked the idea, and eventually Windows team redid all those .NET based ideas into COM/C++, and were even proud of doing so (see Hilo sample documentation), hence why nowadays COM based libraries are the main way to expose modern Windows APIs (aka post Windows XP).
Had they collaborated instead, probably Windows would be closer to something like Android userspace nowadays.
Or for Ironclad, another one from Microsoft research, lesser known, also from the same research group, which even includes type safe Assembly,
https://www.microsoft.com/en-us/research/publication/safe-to...
Microsoft Research has plenty of work in such domains, they also had a LLVM like compiler framework, based on MSIL, called Phoenix, among other stuff, e.g. Dafny, FStar, Drawbridge, also come from OS projects.
Unfortunately, classical Microsoft management has been more like: if it isn't Windows, it isn't shipping.
Nokia owns the shambling corpse that is Bell Labs. Looking beyond the English speaking world, I wouldn’t discount that the chaebols (LG, Samsung, Mitsubishi, etc) all have a few companies dedicated to research at the Bell Labs level.
* Span<T>: https://learn.microsoft.com/en-us/archive/msdn-magazine/2018...
* C# now has a limited borrow checker-like mechanism to safely handle local references: https://em-tg.github.io/csborrow/
* Here is a series of articles on the topic: https://www.stevejgordon.co.uk/writing-high-performance-csha...
* In general, avoid enterprise-style C# (i.e., lots of classes and design patterns) and features like LINQ which allocate a lot of temporaries.
Also can recommend reading all the performance improvements blog posts by Stephen Toub as well as learning to understand disassembly at a basic level which .NET offers a few convenient tools to get access to.
Helped me a bunch to get Sharpl spinning, much appreciated.
C# is Java-but-with-lessons-learnt, and is significantly less verbose and "enterprisey" in typical usage.
Modern .NET 9 especially embraces compile-time code generation, a "minimal" style, and relatively high performance code compared to Java.
Even if the JVM is faster in benchmarks for hot loops, typical Java code has far more ceremony and overhead compared to typical C# code.
Can you give an example? I don't think this is true anymore for modern Java (Java 21+)
Java currently beats .NET by about 40%: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...
I judge more idiomatic / typical code complexity by the length of stack traces in production web app crashes. Enterprise Java apps can produce monstrous traces that are tens of pages long.
ASP.NET Core 9 is a bit worse than ASP.NET Web Forms used to be because of the increased flexibility and async capability, but it's still nowhere near as bad as a typical Java app.
In terms of code length / abstraction nonsense overhead, have a look at the new Minimal APIs for how lightweight code can get in modern C# web apps: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...
What matters in practical scenarios is that ASP.NET Core is significantly faster than Spring Boot. If you have a team willing to use ActiveJ or Vert.x, you are just as likely have a team willing to customize their C# implementation to produce numbers just as good at web application tasks and much better at something lower level. There are also issues with TechEmpower that make it highly sensitive to specific HW/Kernel/Libraries combination in ways which alter the rankings significantly. .NET team hosts a farm to do their own TechEmpower runs and it just keeps regressing with each new version of Linux kernel (for all entries), despite CPU% going down and throughput improving in separate more isolated ASP.NET Core evaluations. Mind you, the architecture of ASP.NET Core + Kestrel, in my opinion, leaves some performance on the table, and I think Techempower is a decent demonstration of where you can expect the average framework performance to sit at once you start looking at specific popular options most teams use.
Smaller objects from dropping identity is nice but it really doesn't seem like it gives you more explicit memory layout, lifecycle, c interop etc that C# has with their structs. Maybe I'm missing something.
Then why would they add Span<T>, SIMD types and overhaul ref types in the first place?
If you link an example snippet of the type of code that gave you pause, I’m sure there is a better and more idiomatic way to write it.
That is if you don't want to get into unsafe code.
Apart from that (which simply concerns strings, as they're the great source of performance issues), all generic best practices, applicable to any language, should be followed.
There are plenty resources on the net, just search for it.
The big take-away I got from this (admittedly quite old now) experiment is that getting advertised performance out of unmanaged languages for typical real-world (i.e., non-benchmark) tasks often requires a lot more care than people really account for. Nowadays memory dominates performance more so than CPU, and the combination of a JIT compiler and a good generational, compacting garbage collector - like C# and Java developers typically enjoy - often does a better job of turning idiomatic, non-hand-optimized code into something that minimizes walks of shame to the RAM chips.
I've been having a lot of fun with Java lately, the maturity of the language/implementation and libraries allows me to focus on the actual problem I'm solving in ways no other language can currently match.
https://github.com/codr7/tyred-java https://github.com/codr7/eli-java
A non-answer, but tangentially relevant:
I once fiddled with Forth, but never actually accomplished anything with it.
Several OSs have been written in Lisp; in some of them the difference between OS and application is a bit vague. At the time, none of them were available to me to play with.
I discovered Oberon and fell in love. My first real programming language was Pascal, and Oberon is part of the same family. Oberon consisted of a compiler, operating system, user interface, application software, and tools, all self-hosted on Oberon. There was even an Oberon CPU at one time. But Oberon turned out to be just an academic curiosity, and wasn't available for any hardware I had access to anyway.
Turbo Assembler FTW :)
nasm has been lovely, but I haven't used in 10+ years. https://github.com/netwide-assembler/nasm
I barely ever used it, but I noticed that MASM 5.1 is included (together with MSC 5.1 and various other Microsoft tools from 1988) in Microsoft's MIT-licensed MS-DOS repo. Trying some hello world level examples there was nothing obviously annoying about it so far.
https://github.com/microsoft/MS-DOS/tree/main/v4.0/src/TOOLS
[1] http://sbcl.org/
But ultimately I realized that I’m not writing the type of software which requires such strict verification. If I was writing an internet protocol or something like that, I may reach for it again.
You get the added benefit of being able to easily consume C libraries without much fuss. The fuss is in navigating the C APIs of decades old libraries that we all still depend on every day.
They wouldn't be using Zig otherwise. :)
Yes, but it's mostly cultural.
Rust folks have a nasty habit of trying to "Rust-ify" bindings. And then proceed to only do the easy 80% of the job. So now you wind up debugging an incomplete set of bindings with strange abstractions and the wrapped library.
Zig folks suck in the header file and deal with the library as-is. That's less pretty, but it's also less complicated.
In Zig, you can just import a C header. And as long as you have configured the source location in your `build.zig` file, off you go. Zig automatically generates bindings for you. Import the header and start coding.
This is all thanks to Zig's `translate-c` utility that is used under the hood.
Rust by contrast has a lot more steps required, including hand writing the function bindings.
In general, the expectation is that you will use bindgen [0].
It's a very easy process:
1. Create a `build.rs` file in your Rust project, which defines pre-build actions. Use it to call bindgen on whatever headers you want to import, and optionally to define library linkage. This file is very simple and mainly boilerplate. [1]
2. Import your bindgen-generated Rust module... just use it. [2]
You can also skip step 1: bindgen is also a CLI tool, so if your C target is stable, you can just run bindgen once to generate the Rust interface module and move that right into your crate.
[0]: https://rust-lang.github.io/rust-bindgen/
[1]: https://rust-lang.github.io/rust-bindgen/tutorial-3.html
[2]: https://github.com/Charles-Schleich/Rust-Bindgen-Example/blo...
https://github.com/fabulous-dev/Fabulous
(I'm sure there are more; these two are the ones I could recall off the top of my head.)
There's also D, but finding libraries for whatever I want to work on proves problematic at times as well.
Coming from a more Python/Java/PHP/JS background, Elixir was a lot easier to pick up and doesn't frustrate me as much. Most of the remaining scary bits involve concurrency and process supervision trees.
Macros are powerful, but also easy to use in a way that makes everything hard to debug. For those unfamiliar with them, it's a bit like a function except any expressions you call it with are not evaluated first, but arrive as metadata that can be used to assemble and run new code.
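As a loose analogy in C terms (the C preprocessor pastes raw tokens rather than handing you an AST, but the "arguments arrive unevaluated" point is the same):

#include <stdio.h>

/* A macro argument is substituted unevaluated, so the macro decides
   whether and how often it runs - something a plain function cannot. */
#define TWICE(expr) do { expr; expr; } while (0)

int main(void) {
    int n = 0;
    TWICE(n++);          /* expands to n++; n++; */
    printf("%d\n", n);   /* prints 2; a function f(n++) would only see a value */
    return 0;
}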
I know “why” elm, I liked everything I saw about it, but how do you combine the two, if you do?
While I think Elm is neat, it suffers from ecosystem issues. It drives a large amount of Not Invented Here, because JS invented somewhere else is hard to incorporate. Also, good luck rendering arbitrary HTML that comes in as data from somewhere else.
But Oberon+ is still too high-level for many system programming tasks. So I'm designing a new system programming language called Micron (for Micro Oberon, see https://github.com/micron-language/specification) which has the full power of C without its disadvantages. You can even use it for the OS boot sequence when there is no stack and no heap, but also for higher-level application development, due to its selectable language levels.
The open source tooling has significantly improved since I started using it in the last five years.
I liked how the language stayed pretty simple compared to other C-replacements. The standard library is also pretty nice. It is however an extremely niche language, but still quite capable
- Once you cut out the legacy nonsense out of C, you can then add a few nice modern features to your language and still end up with something that's smaller and simpler than C.
- Performance optimizations are possible. But by default, simplicity is always picked over performance. (i.e. most UB is eliminated, even if it hurts performance)
- A few basic pointer features go a long way in eliminating most memory safety bugs. There are non-nullable pointers, ranges with automatic bounds checks, and no C strings.
- They get a lot of mileage out of their tagged union type. It allows for elegant implementations of algebraic types, polymorphism, and error handling.
- The error handling!
hare-ev is using rt to make the epoll syscalls. [1]
> On Linux, ev is implemented with epoll. Note that, on Linux, I/O on regular files is always blocking.
epoll is orthogonal to threads. It _can_ be used in a multithreaded program, but it doesn't have to be. It may very well be implemented in terms of kernel threads, but that's not what I'm talking about. I'm talking about user-space threads.
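For illustration, a minimal single-threaded epoll loop in C (hypothetical example, not taken from hare-ev; error handling omitted):

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void) {
    /* one thread, one epoll instance, one blocking wait */
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = 0 };
    epoll_ctl(ep, EPOLL_CTL_ADD, 0, &ev);      /* watch stdin */

    struct epoll_event ready[8];
    int n = epoll_wait(ep, ready, 8, 5000);    /* wait up to 5 s */
    printf("%d fd(s) ready\n", n);
    close(ep);
    return 0;
}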
https://harelang.org/documentation/install/#supported-platfo...
Interesting reasons.
Did you ever check out Eiffel for systems programming work?
I had been checking it out some years ago, and apart from the general points about it, one use of it that I found interesting was in an article about using it for creating HP printer drivers. The author had mentioned some concrete benefits that they found from using it for that purpose.
Edit: I searched for that article, and found it:
Eiffel for embedded systems at Hewlett-Packard:
Despite its history, it is a pretty modern language if you enable all warnings, set implicit none, and ignore the old style of coding (a la FORTRAN 77 or older).
I had read about him and about his FP language early in my career, when I was reading up on all kinds of computer subjects.
https://en.m.wikipedia.org/wiki/John_Backus
[0] https://en.wikipedia.org/wiki/PRIMOS
[1] https://en.wikipedia.org/wiki/Livermore_Time_Sharing_System
[2] https://webhome.weizmann.ac.il/home/fhlevins/RTF/RTF-TOC.htm...
From the Wikipedia article about Fortran, under the Science and Engineering section:
https://en.m.wikipedia.org/wiki/Fortran
Although a 1968 journal article by the authors of BASIC already described FORTRAN as "old-fashioned",[58] programs have been written in Fortran for many decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities.[59] Jay Pasachoff wrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change to Pascal, Modula-2, or whatever."[60] In 1993, Cecil E. Leith called FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope".[61]
It is the primary language for some of the most intensive super-computing tasks, such as in astronomy, climate modeling, computational chemistry, computational economics, computational fluid dynamics, computational physics, data analysis,[62] hydrological modeling, numerical linear algebra and numerical libraries (LAPACK, IMSL and NAG), optimization, satellite simulation, structural engineering, and weather prediction.[63] Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of the SPEC benchmarks (e.g., CFP2006, CFP2017) are written in Fortran. Math algorithms are well documented in Numerical Recipes.
That didn't age well.
My professor, working on control system analysis for electrical power grids, later thought that if he were to write it today, it would likely be done in MATLAB.
It has features like classes, first-class functions, tuples, ADTs, unboxing, and a little data layout language, some unsafe features, like support for generating and integrating new machine code, and can talk directly to kernels.
Sure these days not many folks write OS kernel in Pascal, but there are some, e.g: https://github.com/torokernel/torokernel
I once wanted to try Forth (perhaps there's a Unix clone in Forth?), but it seems like most folks using it are embedded/hardware devs.
“The Macintosh used the same Motorola 68000 microprocessor as its predecessor, the Lisa, and we wanted to leverage as much code written for Lisa as we could. But most of the Lisa code was written in the Pascal programming language. Since the Macintosh had much tighter memory constraints, we needed to write most of our system-oriented code in the most efficient way possible, using the native language of the processor, 68000 assembly language. Even so, we could still use Lisa code by hand translating the Pascal into assembly language.”
MacOS was clearly Pascal-oriented, with its ‘Str255’, ‘Str63’, etc. data types.
source: C language developers for the Macintosh OS
even early Windows versions were somewhat Pascal-oriented, with things like "long far pascal" used in C function declarations, to indicate the calling convention being used, whether right to left, or left to right, iirc.
I’d like to give Zig and Nim a go, but Go and Elixir are probably next on the list, simply because I have unread books for them staring at me.
That said, the edges are still (very) rough when it comes to tooling (generics and macros absolutely murder Nimsuggest/lsp) and also "invisible" things impacting performance such as defect handling (--panics:on) and the way the different memory management schemes introduce different types of overhead even when working with purely stack allocated data.
But even with all that it's still an extremely pleasant and performant language to work with (when writing single threaded programs at least)
For parallel programming, there are also handy libraries. The best of which is Weave[1], but Malebolgia[2] is authored by the creator of Nim and works well in its own way too.
There is also active work being done on a new implementation of Nim which intends to clean up the some of the long-term spaghetti that the current implementation has turned into (like most long-term projects do), called Nimony[3], and is also led by the original creator of Nim. It is years away from production according to him, but is at least in the works.
I'd have to say Nim is by far my favorite programming language. The terseness, flexibility, and high performance, make it feel almost sci-fi to me. My only major complaint currently is the tooling, but even the tooling is still adequate. I'm glad it exists. Highly recommend.
[1] https://github.com/mratsim/weave
https://www.google.com/search?q=perl+simple+things+easy+and+...
Desktop apps: definitely. There's bindings for various UI toolkits like GTK and I know of a few people working on games in Ada, usually thick bindings like SDL: https://github.com/ada-game-framework and https://www.youtube.com/playlist?list=PLn3eTxaOtL2Oxl9HbNOhI... There's also Gnoga, which is similar to Electron for writing UI apps: https://github.com/Blady-Com/gnoga
A bunch of libraries for various drivers or other useful things on https://alire.ada.dev/crates.html (to include something like "ada_gui" in your ada project, you would just use alire, e.g. `alr with ada_gui`).
Much of Ada's webapp functionality is either interfacing with the Ada Web Server or gnoga (I've written a few servers using Ada Web Server, including one for an ".io" game).
There's an LLVM compiler which in theory can produce wasm but I've not messed with it: https://github.com/AdaCore/gnat-llvm
Mobile platforms can be targetted and cross-compiled in Alire, but I'm not sure who's doing it right now.
For anyone interested, I definitely recommend checking out some of the presentations of Ada's recent FOSDEM dev room https://fosdem.org/2025/schedule/track/ada/
There are fewer, and they do tend to be more demanding, but they certainly exist.
There was some Russian dev running a systems tech company (I forget his name), living in Thailand, in Koh Samui or a similar place. He used D for his work, which was software products. I came across him on the net and saw a couple of his posts about D.
One was titled "why D", and the other, "D as a scripting language".
I thought both were good.
I did; a quick thought regarding https://github.com/docandrew/SPARKTLS: you might find https://github.com/Componolit/libsparkcrypto useful too, if you haven't come across it already.
Nice projects BTW!
https://en.wikipedia.org/wiki/Seed7
It has a SourceForge page that actually doesn't suck and that you will not hate landing on, unlike almost everything else on SourceForge:
https://seed7.sourceforge.net/
Though there is an old school SourceForge file area with tarballs, the project page also links to a GitHub repo.
https://lwn.net/Articles/1006117/
It’s not always clear what is meant by “system programming”. I’ve begun writing utility scripts in Julia; it’s practical now because the startup time is vastly improved. These can be run like bash scripts, with a shebang line that invokes Julia with the desired environment (using the --project flag).
I think it is clear enough. The language must have a small or nonexistent runtime, so that it is practical to write systems that do not ship the same fat runtime in every binary. The language must support compiling to binaries; otherwise it really cannot be used by itself for systems. And it must provide direct access to the available operating system API without the need for bindings (to the extent possible, as some OSs only expose a C API/ABI).
What is a system, you may ask. I think you can define it as anything that can run by itself (no runtime) and perform any "low level" operation permitted by the OS.
https://boo-language.github.io/ "A scarily powerful language for .Net". I didn't use it for too long before switching to Iron Python.
These days I would reach for a shell script for general scripting, filling in the gaps with maybe a C# console app or something in Common Lisp if I want/need some interactivity.
Something that happens pretty frequently is I'll take information I've written into an emacs org doc and run it through a CL function, whose output could be an org mode table which I can from there export to a different document format if necessary.
Currently solo managing a 30k line data analysis application I built for my company. Easily fits in my head given the obvious pyramidal functional-like structure. Maybe two lines of memory semantics anywhere in the entire thing, and only one module that's OO with a constrained scope. Lots of static data files (style sheets, fonts) slurped up as const strings at compile time. Incredible performance. Invoked by our PHP server backend, so instead of doing parallel or async in the analysis, the server gets that through batch invocation.
Working stupid well for our product, plus I can easily compile binaries that run on ARM and RISC-V chips for our embedded team just by invoking the proper gcc backend.
Replaced an ailing and deliberately obfuscated 20 year old jumble of C and PHP designed to extort an IP settlement from my company. Did it in a year.
Firmware is probably still best done in C (sometimes, C++), mostly because so many SDKs, libraries, and toolkits are done in those languages. Also, there's decades of "prior art" in C. Lots of places to look for solutions.
I worked on a project, where we tried using a very "less-popular" language for firmware.
It didn't end well.
I'd say that being a "Systems Programmer" means that you are operating at a fairly "advanced" level, where the benefits of "safer" languages may be less of a factor, and the power of more "dangerous" languages is more attractive.
Of course, on HN, suggesting C or C++ is suggesting "less popular" languages...
I've known many highly experienced and intelligent software devs that are terrible at that stuff, and are like coding time bombs.
Another "unpopular" language, is PHP: https://w3techs.com/technologies/history_overview/programmin...
For me it doesn't scale beyond a few dozen kilobytes (executable program file size) per program. For others (such as Chris Sawyer) assembly scales much better.
NASM supports more output file formats (e.g. .o files for many systems), and it can take macro definitions on the command line (e.g. `nasm -DDEBUG`).
I've coded professionally in a dozen languages, including a lot of time in x86 assembler, C++ etc.
Still like VB.NET better than any other. To me, it was the most readable code.
Idk, if someone just reinvented clean C without the nonsense garbage, with some modules and a package manager, that would be a huge win. Let me access my null pointers, let me leak memory, just get the hell out of my way and hold my hand only where I want it held: sane types that give me refactoring, code completion, and code understanding, plus modules with imports. Let the compiler give sane error messages instead of this cryptic C++ garbage. Is this too much to ask?
I wouldn't mind a "better C" that could use an LLM for static code analysis while I was coding. I.e. be more strict about typing, perhaps. Get out of my way, but please inform me if I need more coffee.
- It becomes impossible to call the wrong deallocation procedure.
- Deallocation can happen when the type (or allocator) goes out of scope, preventing dangling pointers as you can't have a pointer type in scope when the original type is out of scope.
This probably goes against Zig's design goal of making everything explicit, but I think that they take that too far in many ways.
A fairly common pattern in the Zig stdlib and my own code is to pass the allocator to the `init` function of a struct.
If what you mean is that allocation should be internal to the type, I don't agree with that. I much prefer having explicit control over allocation and deallocation.
The stdlib GPA for example is pretty slow, so I often prefer to use an alternative allocator such as an arena backed by a page allocator. For a CLI program that runs and then exits, this is perfect.
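For readers who haven't seen the pattern, here's a rough C rendering of the arena idea (my own sketch, not Zig's actual API): one big upfront allocation, bump-pointer handout, a single free at exit.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    /* Minimal arena: for a run-then-exit CLI, individual frees are
       pointless; alloc just bumps a pointer, exit releases everything. */
    typedef struct { char *base; size_t used, cap; } Arena;

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;              /* keep 16-byte alignment */
        if (!a->base || a->used + n > a->cap) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    int main(void) {
        Arena a = { malloc(1 << 20), 0, 1 << 20 };  /* 1 MiB up front */
        int *nums = arena_alloc(&a, 100 * sizeof *nums);
        if (!nums) return 1;
        for (int i = 0; i < 100; i++) nums[i] = i * i;
        printf("%d\n", nums[99]);
        free(a.base);                       /* one deallocation for everything */
        return 0;
    }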
So, if only there were a std with implicit allocators?
Separating interface and implementation is a good thing, but often you just want to split things into separate files without separate compilation. C supports #include and so it is maximally flexible.
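A tiny illustration of that flexibility (file names hypothetical): the second file is textually included, so there's still exactly one translation unit and no separate compilation step.

    /* parser.c - no header, never compiled on its own */
    static int parse_digit(char c) {
        return (c >= '0' && c <= '9') ? c - '0' : -1;
    }

    /* main.c - one translation unit, split across files for organization */
    #include <stdio.h>
    #include "parser.c"

    int main(void) {
        printf("%d\n", parse_digit('7'));   /* prints 7 */
        return 0;
    }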
first of all, thanks, guys, to all who replied. that's a wealth of info to follow up on.
referring to the comments seen so far:
I considered mentioning (Free) Pascal, but decided not to because I thought it is nowadays too niche, even though it is one of my early programming language loves (forgetting that the title of my post says "less popular languages" :)
and I also didn't think of Ada at all, somehow, although I have been interested in it too lately, checking out websites and blogs about it and searching hn.algolia.com for posts about it.
so it was cool to see multiple mentions of Ada here, by people who like and use it.
I'm surprised how popular Ada is here in these comments. I like some of the ideas (ranged types, for example) in Ada, I'm inspired to give it a try after seeing all the comments here.
Admittedly, the slowness of the compiler (due to the nature of the language) and the lack of better tooling are not helping, but 9 times out of 10 I enjoy writing Crystal way more than Go.
Agreed. Good IDE support can easily add explicit types too.
gen servers, everywhere.
Not write-only like its ancestor. So many language criticisms solved. A true pleasure.
Still in its function-first development phase but apparently near the end. AST rewrite is still underway, then the team will address performance.
I wonder what the major differences are.
It's awesome. But I'm biased because I designed it.
You can't build everything with it, but you can build many things much more easily. Particularly distributed systems.
Also, shoutouts to Zig, Crystal, and Ballerina: those are other interesting ones off the top of my head, that folks should look into.
We still use it for all kinds of web services development work, mainly because there's years of in-house modules for everything and the malleability Perl has.
One downside is, of course, far less adoption, and libs usually need ReScript bindings written for them, but that's fairly straightforward and not something I have to do very often.
if the question of go being a systems language is controversial, then pike is even more so. i would situate pike somewhere between python and go. pike's major drawback is that it doesn't produce standalone executables.
the real question i'd like to ask is, what actually is a systems language?
I like that for low level SoC stuff there is now the packed struct, which makes register representation very nice to deal with, especially with the definable int types, although I'm often torn between a u1, a bool, and sometimes even an enum(u1) for certain flags. I tend to let the SoC documentation (naming convention) drive that decision.
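For comparison, the closest thing C has is a bitfield struct, sketched below for an invented control register; the key difference is that Zig's packed struct guarantees the bit layout, while C leaves bitfield ordering implementation-defined.

    #include <stdint.h>

    /* Hypothetical UART control register (layout invented). In Zig this
       would be a packed struct(u32) with u1/u2/u4 fields and a defined
       bit order; C bitfield ordering is up to the compiler/ABI. */
    typedef struct {
        uint32_t enable   : 1;   /* the u1 vs bool vs enum(u1) question */
        uint32_t parity   : 2;
        uint32_t word_len : 4;
        uint32_t reserved : 25;
    } UartCtrl;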
Otherwise there is a lot of nice and simply designed language stuff in Zig that also takes me back to my C / asm days. My least fav. part is maybe multi-line string literals that look like comments. I prefer the kotlin approach there.
I'd like to find a non-walled-garden Zig community, if there are other Zig fans here, i.e. just a forum. Also, any tips on which editor to use? I am tired of electrons being burned needlessly and almost feel like I need to VM these modern dev tools.
It seems to be good enough that I basically don't interact with the Zig Discord anymore.
For version management I use mise (or scoop on Windows).
It's still being developed, but oh man, the language is good.
You read its documentation and pretty much every single thing is the right decision (from my PoV).
Beautiful language if you like OCaml, Rust. Primary target is wasm, but compiles to native as well.
I still use BlitzMax for game development (when I get time) - there's an updated version with some nice language additions and support for more architectures, including the Raspberry Pi: https://blitzmax.org/
If you have played video games by Shiro Games (Evoland, Dune Spice Wars) or Motion Twin (Dead Cells), or even Papers, Please, then you have been exposed to this language.
I personally don't think programming paradigms like OOP, procedural or functional make anything easier/harder necessarily, just talking QoL stuff.
And obviously "easy" is relative: Odin is still a low level language where you need to manage your own memory.
https://jank-lang.org/ looks interesting to me, though I have not tried it yet. I'm not sure if this language could qualify as a systems programming language. What do you think?
The biggest thing to be added recently is GPU programming, which given Mojo's focus on ML/AI makes a lot of sense.
It's probably not the best language to look into for general purpose systems programming, but if you are going to be interacting with GPUs or other hardware then maybe it's good to give it a look.
It is still changing a lot, so no real stability yet, but to be expected for such a young language.
Unfortunately the Zig compiler crashes when building my project and I haven't looked into debugging the compiler to find out why. There's a lot of gnarly code in this project (based on dikumud from ~1989?) with many inexperienced hands touching it over the decades.
Nowadays, alternatives exist, and so people can answer with one.
None of that has any meaning for the popularity of Rust or lack thereof.
CoffeeScript: Fell in love with CS when I wanted to rapidly prototype with the (now defunct) Framer Classic.
Smalltalk/Squeak/Vala are something I have wanted to dabble with for a while but haven't gotten around to.
I also have messed around with nim a little bit. I like it, I’m just not sure it’s worth putting a lot of effort into.
But for systems programming, which is generally understood as developing an OS kernel (filesystem, memory management, device drivers, etc.) or embedded work (where you build a mini OS), Go is not the proper choice (features like goroutines, AFAIK, need an OS). You'd want C/Pascal/Rust/Zig/<what else?> ...
But I have been recently using it for some tooling and small servers on personal projects where I'd have used python before.
Frankly it's been a joy and I wish I'd started earlier. The concurrency primitives are great, and the static binaries make deployment easy (raspberry pi in this case).
I struggle to use anything other than Python professionally; the need to settle on a common denominator trumps pretty much everything else.
So a good FFI to C has always been an important requirement for me.
    grammar Parser {
        rule TOP { I <love> <lang> }
        token love { '♥' | love }
        token lang { < Raku Perl Rust Go Python Ruby > }
    }
    say Parser.parse: 'I ♥ Raku';
    # OUTPUT: 「I ♥ Raku」 love => 「♥」 lang => 「Raku」
The same thing in other languages would require a lot more code, without a parser module. An LLM tells me that functional languages can handle this stuff well too, but that Raku code is just extremely simple to grasp. But yeah, most stuff is easy, there is more than one way to do it, and the impossible isn't. Or something.
It’s still kicking.
there are people making operating systems for AMD64 in Pascal etc.... so there's plenty of choices, odd and even ones.
some examples of different interpretations of 'systems programming'.
low level systems code - like interacting with devices directly on bare metal (mmio/io etc.; see the C sketch below).
kernel code - like writing a new subsystem in linux, which uses other drivers.
high-level systems - like game engines, automation frameworks, and other high-performance systems-with-lots-of-subsystems?
These different domains, on different targets, might have more or less plausible options for you to try.
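As a taste of the first category, a minimal C sketch of bare-metal mmio (the address and bit position are invented):

    #include <stdint.h>

    /* A device register is just a fixed physical address; volatile keeps
       the compiler from caching or reordering the accesses. */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

    void led_on(void)  { GPIO_OUT |=  (1u << 5); }
    void led_off(void) { GPIO_OUT &= ~(1u << 5); }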
It was somehow rock-solid over a year of factory production overseas. Which might not have been the case if I'd written it in C and put it in the kernel, as might normally be good practice.
(It hooked into some kernel interfaces, did a little multiple USB device management, and low-level keyboard-like decoding with efficient I/O, and buffered and parsed and did something with the output of that.)
I have mixed feelings about Python: it often hurts more than it helps (if you know better ways to do things), but the ecosystem has some nice off-the-shelf components, and it's popular/employable. However, due to that popularity, the average quality of any article you might find on the Web is unfortunately low.
(For an unpopular language, you'll get a few people writing articles from a junior/neophyte knowledge level, as part of their learning, or because someone said it was good for resume-boosting. That can be good. But no one is going to waste time pounding SEO low-quality filler for a language that doesn't make money. Well, at least they wouldn't before LLMs, but who knows how the economics have changed, now. :)
I was reading another post about someone showing some AI stuff and then: https://news.ycombinator.com/item?id=43246127
Bash.
> I used C for both application programming and systems programming
Gross. Learn C++, it's better than C in every way! shotsfired.jpg
> I've been wanting to get back to doing some systems programming, but preferably in a more modern language (than C) which is meant for that.
Use C++ then. Or if you're a hater and/or don't know how to keep footguns pointed away from your legs, use Rust.
> less commonly used ones
but tbqh why not Xojo?