Carpenters, plumbers, masons & electricians work on houses 3-300 yrs old, navigate the range of legacy styles & tech they encounter, and predictably get good outcomes.
Only C has given us that level of serviceability so far. C99, baby, why pay more?
When there’s an alternative that can compete with that sort of real-world durability, C will get displaced.
- People just stopped caring about operating systems research and systems programming after ~2005. Actual engineering implementations of the concepts largely stopped after the second half of the '90s. Most developers moved on to making websites or applications in higher-level programming languages.
- C hit a perfect balance: it was a small enough language to grok, independent of the system manufacturers, a reflection of '80s computer architecture, terse in both syntax and code length, and quite easy to write compilers for. This led to lots of legacy software being built into the infrastructure that gave birth to the current popular OSes and, more importantly, the infrastructure of the Internet. Add in the dot-com bubble and other crises, and we basically have/had zero economic incentive to replace those systems.
- Culture changed. We used to care more about stability, repairability, and reusability. Computers were expensive, and so were programmers and software. Now computers are cheap, and our culture is more consumerist than ever. The "move fast and break things" mentality meshed all too well with economic policy and the zeitgeist, and with AI it will get worse. So trying to make a real alternative to C (as a generic low-level OS protocol) has lost cultural value / optics. It doesn't fill CVs as well.
It doesn't mean that people haven't tried, or even succeeded. Android was successful on multiple fronts in replacing C: its "intents" and its low-level interface description language for hardware interfaces are a great replacement for the C ABI. Windows' COM is also a good replacement that gets rid of language dependence. And newer OSes like Redox or Fuchsia are still trying.
I am not sure I buy this from a system perspective, especially when taking this[1] into consideration.
______
1. Alexis King's reply to "Why do common Rust packages depend on C code?". Link: https://langdev.stackexchange.com/a/3237
There was no Rust at that point, and I used the most basic tool that could do it.
IMO I do see this changing in the future as high-powered computers become expensive once again, and I'm not just referring to the recent chip shortage.
- free_sized
- #embed
- static_assert
- Underlying types for enum
- alignof, alignas, aligned_alloc
- _Atomic
I could write a whole essay about why, but now isn’t the time. I’m just going to enjoy the fact that TFA and the author don’t get it.
Is there an answer here more interesting than "it's what Unix and Windows were written in, so that's how programs talked to the OS, and once you have an interface, it's impossible to change"?
C is a pretty OK language for writing an OS in the 70s. UNIX got popular for reasons I think largely orthogonal to being written in C. UNIX was one of the first operating systems that was widely licensed to universities. Students were obliged to learn C to work with it.
If the Macintosh OS had come out first and taken over the world, we'd probably all be programming in Object Pascal.
When everyone wanted to program for the web, we all learned JavaScript regardless of its merits or lack thereof.
I don't think there's much very interesting about C beyond the fact that it rode a platform's coattails to popularity. If there is something interesting about it that I'm missing, I'd definitely like to know.
To me it looks like there is a better way to do things and the better one eventually wins.
From page 52:
> Operating systems have to deal with some very unusual objects and events: interrupts; memory maps; apparent locations in memory that really represent devices, hardware traps and faults; and I/O controllers. It is unlikely that even a low-level model can adequately support all of these notions or new ones that come along in the future. So a key idea in C is that the language model be flexible; with escape hatches to allow the programmer to do the right thing, even if the language designer didn't think of it first.
This. This is the difference between C and Pascal. This is why C won and Pascal lost - because Pascal prohibited everything but what Wirth thought should be allowed, and Wirth had far too limited a vision of what people might need to do. Ritchie, in contrast, knew he wasn't smart enough to play that game, so he didn't try. As a result, in practice C was considerably more usable than Pascal. The closer you were to the metal, the greater C's advantage. And in those days, you were often pretty close to the metal...
Later, on page 60:
> Much of the C model relies on the programmer always being right, so the task of the language is to make it easy to do what is necessary... The converse model, which is the basis of Pascal and Ada, is that the programmer is often wrong, so the language should make it hard to say anything incorrect... Finally, the large amount of freedom provided in the language means that you can make truly spectacular errors, far exceeding the relatively trivial difficulties you encounter misusing, say, BASIC.
Also true. And it is true that the "Pascal model" of the programmer has quite a bit of truth to it. But programmers collectively chose freedom over restrictions, even restrictions that were intended to be for their own good.
Not everything has to be written with all the warmth and humanity of a UN subcommittee interim report on widget standardisation.
There has to be an ABI agreed upon by everyone; otherwise there wouldn't be any interoperability. And if we didn't have the SystemV ABI, what would we use instead? Prepare for a long debate as every language author, operating system designer, and platform under the sun argues for their respective demands and proposals. And as surely as the sun rises in the east, someone, somewhere, would write an article such as this one decrying that blessed ABI.
SystemV shouldn’t be the be all and end all, IMO. But progress should be incremental. Because a lingua franca loses its primary feature and utility when we all return to our own fiefdoms and stop talking to one another in the common tongue.
It’s a pain in the metaphorical butt. But it’s better, IMO, than the alternatives. It’s kind of neat that SystemV works so well let alone at all.
>Furry avatar
>"About Me I am Aria Desires, Gankra, and a cat."
I seem to recall reading this before; I must not have noticed this furry stuff, because I would have ignored it.
My take is that this is a rustacean who was having trouble porting something to C, went down a deep rabbit hole of traditional C software, and instead of recognizing that perhaps they were in way over their head, concluded that the problem was C itself, that it's all wrong, and that their mission of porting stuff to Rust is therefore even more virtuous.
Being upfront about it by authors dispels a lot of the potential tension and substantially changes the way we interact. I understand there may be a conflict and not everyone will want to advertise their diagnosis, but in my experience once it becomes clear that's what's going on, it helps all the parties involved.
It's understandable how we got here, but it's an entirely legitimate question - could things be better if we had an explicitly designed interoperability interface? Given my experiences with cgo, I'd be pretty solidly on the "Fuck yes" side of things.
(Of course, any such interface would probably end up being designed by committee and end up embodying chunks of ALGOL ABI or something instead, so this may not be the worst possible world but that doesn't mean we have to like it)
[1] I absolutely buy the argument that HTTP probably wins out for out-of-process communication
Yes, we could define a language-agnostic binary interoperability standard with its own interface definition language, or IDL. Maybe call it something neutral like the Component Object Model, or just COM[1]. :)
Also, if I remember correctly, the first Go compiler was written in C (the first Rust compiler was bootstrapped in OCaml).
It kinda is. Because it was made in the 1970s, and it shows (cough null-terminated strings uncough).
Or, you know, having a 64-bit-wide integer. Reliably.
You did read the article, right?
Verilog is loosely based on C. Most designs are done in Verilog.
There are still some people who develop in C, for sure, but at this point it's limited to FOSS and embedded; low-level proprietary systems have mostly migrated to C++ or Rust.
I agree with the main thesis that C isn't a language like the others, one that we actively practice; it's mostly an ancient but highly influential language, and an API/ABI.
What I disagree with is the idea that 'critiquing' C is productive, any more than critiquing Roman Law or Latin or Plato is. The horse is dead. One might think one is being clever or novel for finding flaws in the dead horse, but it's more often a defense mechanism to justify having a hard time learning the decades of backwards compatibility, edge cases, and warts that have accumulated.
It's easier to think of the previous generation as dumb, as having made mistakes that could have been avoided, and to believe it all could be simpler, than to recognize that engineering is super complex and that we could dedicate our whole lives to learning this craft and still not make a dent.
I applaud the new generation for taking on this challenge and giving the revolution their best shot, but I'm personally thinking about bridging the next-next generation and the previous generation of devs. The historical complexity of the field will increase linearly with time, and I think if we pace ourselves we can keep that complexity down; the more times we jump into a revolution that dismisses the previous generation as dumb, the bigger the complexity is going to get.