I admire Kees Cook's patience.
> you didn't coordinate with anyone. You didn't search lore for the warning strings, you didn't even check -next where you've now created merge conflicts. You put insufficiently tested patches into the tree at the last minute and cut an rc release that broke for everyone using GCC <15. You mercilessly flame maintainers for much much less.
Hypocrisy is an even worse trait than flaming people.
Eh I mean everyone's a hypocrite if you dig deep enough—we're all a big nest of contradictions internally. Recognition of this and accountability is paramount though. He could have simply owned his mistake and swallowed his pride and this wouldn't have been such an issue.
I remember Maddox on xmission having a page explaining that while he may make a grammatical error from time to time, he has published literally hundreds of thousands of words, and the average email he receives contains 10% errors.
However, Linus is well-known for being abrasive, abusive, call it what you want. If you can't take it, don't foist it, Linus. Even if you've earned the right, IMO.
Again, the only non-fictional people I know who qualify are actually really nice to people.
You need to know what you support. If you are going to change, it must be planned somehow.
I find Torvalds reckless for changing his development environment right before a release. If he really needs that computer to release the kernel, it must be a stable one. Even better: it should be a VM (hosted somewhere) or part of a CI/CD pipeline.
Compilers will be updated, they will have new warnings; this has happened numerous times and will happen again. The Linux kernel has always supported a wide range of compiler versions, from the very latest to 5+ years old.
I've ranted about "-Werror" in the past, but to try to keep it concise: it breaks builds that would and should otherwise work. It breaks older code with newer compiler and different-platform compiler. This is bad because then you can't, say, use the exact code specified/intended without modifications, or you can't test and compare different versions or different toolchains, etc. A good developer will absolutely not tolerate a deluge of warnings all the time, they will decide to fix the warnings to get a clean build, over a reasonable time with well-considered changes, rather than be forced to fix them immediately with brash disruptive code changes. And this is a perfect example why. New compiler fine, new warnings fine. Warnings are a useful feature, distinct from errors. "-Werror" is the real error.
Linus decided, on a whim, that a pre-release of GCC 15 ought to suddenly be a compiler that the Linux project officially uses, and threw in some last-minute commits straight to main, which is insane. But even without -Werror, when the project decides to upgrade compiler versions, warnings must be silenced, either by disabling new warnings or by changing the source code. Warnings have value, and they only have value if they're not routinely ignored.
For the record, I agree that -Werror sucks. It's nice in CI, but it's terrible to have it enabled by default, as it means that your contributors will have their build broken just because they used a different compiler version than the ones which the project has decided to officially adopt. But I don't think it's the problem here. The problem here is Linus's sudden decision to upgrade to a pre-release version of GCC which has new warnings and commit "fixes" straight to main.
But pushing breaking changes just to suppress some new warning should not be the alternative. Working to minimize warnings in a pragmatic way seems more tenable.
Whereas in a lot of the C (and C++, and even older Python) codebases I've seen, these sorts of dependencies aren't locked to the same extent, so it's harder to track upgrades, and therefore warnings are more likely to appear, well, without warning.
But I think it's also probably the case that a C expert will produce codebases that have no warnings, and a C novice will produce codebases filled with warnings, and the same for JS. So I can imagine if you're just "visiting" the other language's ecosystem, you'll see worse projects and results than if you've spent a while there.
Not that I approve of the untested changes; I'd have used a different gcc temporarily (container or whatever), but, yeah, well...
This is common best practice in many environments...
Linus surely knows this, but here he's just being hard headed.
This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox path". For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security testing-focused project, so this is generally okay.
However, coming back to the build issue: apparently it's costly to host all those toolchain archives, so they frequently get deleted from the remote repository, which leads to the build only working on machines that downloaded the toolchain earlier (e.g., not on a GitHub Actions runner).
Given that there are many more downstream users of effectively a ton of kernel versions, this quickly gets fairly expensive and takes up a ton of effort unless you pin it to some old version and rarely change it.
So, as someone wanting to mess around with open source projects, their supporting more than 1 specific compiler version is actually quite nice.
The insanity is that the Kernel, Fedora and GCC are so badly coordinated that the beta of the compiler breaks the Kernel build (this is not a beta, this is a pre-alpha in a reasonable universe...is the Kernel a critical user of GCC? Apparently not), and a major distro packages that beta version of the compiler.
To borrow a phrase from Reddit: "everybody sucks here" (even Cook, who looks the best of everyone here, seems either oblivious or defeated about how clownshoes it is that released versions of major linux distros can't build the Kernel. The solution of "don't update to release versions" is crap).
(Writing this from a Linux machine, which I will continue using, but also sort of despise).
And reverted them as soon as the issue became apparent.
> then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?
Talking about changes that he had not pushed by the time Linus published the release candidate.
Also, "not yet released" seems to be a red herring: as the article notes, shipping beta compilers in new releases is a tradition for some distros, so it should not be unexpected. It makes some sense, since distros tend to stick with one compiler per release, so shipping a soon-to-be-unmaintained compiler from day one would only cause other issues down the road.
That's just not what Fedora is, though. Being on the bleeding edge is foundational to Fedora, even if it's sometimes inconvenient. If you want battle-tested and stable, don't run Fedora, but use Debian or something.
> C "strings" work the way they do because C is a low level language, where you want to be able to do low-level things when necessary. It's a feature, not a deficiency.
Are NUL-terminated strings really considered preferable, even for low-level work? I always just considered them an unfortunate design choice C was stuck with.
Many O(1) operations/checks become O(n) because you have to linearly traverse the entire string (or keep a second pointer) to know where it ends/how long it is; you can't take a substring within another string without reallocating and copying that part over with a new NUL appended at the end; you can't store data that may contain a NUL (which text shouldn't, in theory, but then you need a separate approach for binary data); and plenty of security issues arise from missing or extra NULs.
"Of course the null-terminated strings of C are more low-level than the length-prefixed strings of Pascal, because the elders of C wisely designed them to be so." Alternatively, something is low-level because it works like C because C semantics have simply become the universal definition of what is thought of as low-level, regardless of machine mismatch.
Likewise, maybe it's not such a good idea that UNIXv6 or other educational unix-likes are used in operating system classes in universities. It's well-applicable, sure, but that's not the point of that education. Maybe we should use a Japanese or German clone of some IBM mainframe system instead, so that people actually get exposed to different ideas, instead of slightly simpler and less sophisticated versions of the ideas they are already familiar with. Too much unix-inbreeding in CS education isn't good.
I too regularly however come across people who believe some or all of the following:
- "Everything is ultimately just C"
- "All other languages just compile to C, so you should use it to be fast"
- "C is faster because it's closer to bare metal"
- "C is fast because it doesn't need to be interpreted unlike all other languages"
The special elevated position of C, being some kind of "ground truth" of computers is bizarre. It leads to all kinds of false-optimizations in practitioners in other languages out of some kind of misplaced confidence in the speed of C relative to all other languages.
The idea that C is "naturally faster" due to being some kind of representation of a computer that no other language could achieve is a hard myth to shake.
> Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.
-- https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist...
And using Pascal as a counterexample gets tiresome: not only was it not designed for systems programming, most of its dialects fixed those issues, including its revised report (ISO Extended Pascal). By 1978 Niklaus Wirth had created Modula-2, based on Mesa (Xerox PARC's replacement for their use of BCPL), and neither ever had problems with string lengths.
What is low level is deciding on a memory representation and working with it directly. A high-level language will just have a "string" object; its internal representation is hidden from the programmer and could potentially be changed between versions.
In C, "string" has a precise meaning: a pointer to an array of bytes containing the characters 's', 't', 'r', 'i', 'n', 'g' followed by a zero. That is the low-level part: C programmers manipulate the memory directly and need such guarantees. Had it been defined as a 4-byte character count followed by 2-byte characters in native endianness, it would be just as low level. Defining it as "it is a character string, use the standard library and don't look too closely", as Java does, is high level.
The "feature" is that the memory representation of strings is well defined. The choice of zero-termination has some pros and cons.
Note that in many cases, you can use size+data instead, using mem* functions instead of the str* ones. And though it is not ideal, you can also use "%.*s" in printf(). Not ideal, but workable.
For situations where a null terminator really is better it's easy to add them to a length-prefixed string, whereas the reverse is not true.
They clearly got this wrong.
Since C doesn't have a string type, "quoted strings" are actually char[] but with '\0' as an extra last character.
People have therefore added a warning for char[] definitions whose initializer silently truncates the '\0', because that's a common source of bugs.
They've then had to develop a way of easily disabling that warning from being generated, because it's also common enough to want to avoid the warning.
All of this seems insane coming from a modern language.
But look at the complete disaster that was the Python 2 -> 3 migration, a large motivator for which was "fixing" strings from a non-unicode to unicode compatible type. A decade or more of lost productivity as people struggled to migrate.
There's no way to actually fix C. Just keep recommending that people don't use it.
My thinking was that the Linux kernel already uses a custom dialect of C with specific features that benefit their workflow; I'm surprised that one of those features wasn't a
char[] charset = b"abcdefghijklmnopqrstuvwxyz";
that would allow for intent to be signalled to the compiler.

C will never get proper string support, so you'll never be able to separate zero-terminated byte buffers from plain byte buffers in the type system.
So annotating vars is perfectly fine.
The problem was that the PM and Release manager was completely unaware of the state of the next branch, of its upcoming problems and fixes, and just hacked around in his usual cowboy manner. Entirely unprofessional. A release manager should have been aware of Kees' gcc15 fixes.
But they have no tooling support, no oversight, just endless blurbs on their main mailing list. No CI for a release candidate? Reminds me of typical cowboys in other places.
typedef __nostring__ char* bytes;
And then use that type instead of annotating every single variable declaration.

If the CI system didn't get the Fedora upgrade then it would not have caught it. Aside from that, the kernel has a highly configurable build process, so getting good coverage is equally complex.
Plus, this is a release candidate, which is noted as being explicitly targeted at developers and enthusiasts. I'm not sure the strength of Kees' objections are well matched to the size of the actual problem.
And Linus is usually much more critical in what gets into master when it comes to other people's contribution, let alone into an RC.
I don't think so. It doesn't make sense on the type. Otherwise, what should happen here?
char s[1];
char (__nonstring ns)[1]; // (I guess this would be the syntax?)
s[0] = '1';
ns[0] = '\0';
char* p1 = s; // Should this be legal?
char* p2 = ns; // Should this be legal?
char* __nonstring p3 = s; // Should this be legal?
char* __nonstring p4 = ns; // Should this be legal?
foo(s, ns, p1, p2, p3, p4); // Which ones can foo() assume to be NUL-terminated?
// Which ones can foo() assume to NOT be NUL-terminated??
By putting it in the type you're not just affecting the initialization, you're establishing an invariant throughout the lifetime of the object... which you cannot enforce in any desirable way here. That would be equivalent to laying a minefield throughout your code.

unsigned means: don't use an integer's MSB as a sign bit. __nonstring means: the byte array might not be terminated with a NUL byte.
So what happens if you use integers instead of byte arrays? I mean cast away unsigned or add unsigned. Of course these two areas are different, but one could try to design such features that they behave in similar ways where it makes sense.
I am unsure but it seems, if you cast to a different type you lose the conditions of the previous type. And "should this be legal", you can cast away a lot of things and it's legal. That's C.
But whatever because it's not implemented. This all is hypothetical. I understand GCC that they took the easier way. Type strictness is not C's forte.
No, they're very different situations.
> unsigned means: don't use an integer's MSB as a sign bit.
First: unsigned is a keyword. This fact is not insignificant.
But anyway, even assuming they were both keywords or both attributes: "don't use an MSB as a sign bit" makes sense, because the MSB otherwise is used as a sign bit.
> __nonstring means, the byte array might not be terminated with a NUL byte.
The byte array already doesn't have to contain a NUL character to begin with. It just so happens that you usually initialize it somewhere with an initializer that does, but it's already perfectly legal to strip that NUL away later, or to initialize it in a manner that doesn't include a NUL character (say, char a[1] = {'a'}). It doesn't really make sense to change the type to say "we now have a new type with the cool invariant that is... identical to the original type's."
> I understand GCC that they took the easier way. Type strictness is not C's forte.
People would want whatever they do to make sense in C++ too, FWIW. So if they introduce a type incompatibility, they would want it to avoid breaking the world in other languages that enforce them, even if C doesn't.
EDIT: > what do you think should happen if you store a NUL when you're claiming you're not
I don't believe nonstring implies it doesn't end with a NUL, just that it isn't required to.
Note that "works as intended" isn't the sole criterion for "does it make sense" or "should we do this." You can kill a fly with a cannon too, and it achieves the intended outcome, but that doesn't mean you should.
edit: Unless what they actually mean is annotating struct members, that would actually make sense.
I imagine that it could work a little bit like unsigned: a modifier to integer types that tells that an integer's MSB is not to be used as a sign bit.
__nonstring__ tells that the last byte of a byte sequence doesn't need to be NUL.
I would find it sensible to allow putting the attribute on a type, but whatever.
Unless either the older GCC or the beta GCC isn't "official"? In which case that's not necessarily expected to be picked up in an RC?
Sometimes I'll need an array of 4 ints, so I'll define one:
int a[4] = {1,2,3,4};
other times I'll want 4 bytes. So sure, I can write: char a[4] = {'A','B','C','D'};
However, (I hope) I'll get the exact same compiler warning as the more readable: char a[4] = "ABCD";
that does the exact same. So I'll need the __nonstring__ anyway. And then why not use the more readable syntax, since I'm telling the compiler and reader explicitly that I don't want a null terminator?

The core issue is C's habit of using the exact same language construct for different purposes; here, char[] for both uint8_array and null_terminated_str.
The latter is a null-terminated string, the former is not. Compiler warnings are principally a set of heuristics for bad code. Heuristically, the first example is more likely to be intentional than the second.