I probably would have gone for turning the UAF into a type-confusion-style attack: if you spray more sockets, you'll end up with two files, the original and the new one, that have aliased `sk` members, but the vsock code will incorrectly cast the new one to a `vsock_sock`. From there you can probably find some other socket type that puts controllable data over some field that vsock treats as a pointer, or vice versa, and use it as both a KASLR leak and a data-only r/w primitive.
I'm aware that Linux is over 30 years old at this point, and C is decades older still. But it is mind-boggling to me that we're still talking about UAFs and jumping from dangling pointers to privileged execution in the 21st century.
(rewrite it in Rust)
But we do have -fbounds-safety in clang (at least on macOS).
Not really. If you can't be perfect, at least be good.
Unless you want to make the argument that searching your entire code base for UB is better than searching a relatively small subset (20-30%) of it.
Or that Linux should use a tracing GC + value types, which I would find decent as well. But sadly, LKML would probably object to the inclusion of C# (or Java 35).
C is just used as part of the process to go from a high level spec to executable code.
You can also compile eg Haskell via C.
The subset of C used in seL4 is highly constrained.
what is it with rust people and thinking robust, automatic correctness checking was invented in the last 20 years?
Disclaimer: the above is sarcasm. While I don't think Linux kernel maintainers are perfect, I don't consider them dunces either. If they can't avoid writing UAF (use-after-free) or DF (double-free) bugs, then what hope does the average C programmer have?
Linux isn't designed to be formally verifiable. For all of the manpower and resources dedicated to it, it's still beholden to the technical debt of being, at its core, a 30-year-old hobbyist experiment. If you want a formally verified, secure kernel that wasn't written by a college student to see if he could, seL4 exists.
No. It exists, but is it practical for use in an OS? From what I've gathered from HN, seL4 is very limited in scope, and extending it to a semi-usable OS (that uses "proven" C) is expensive in both time and money.
> still beholden to the technical debt of being a 30-year-old hobbyist experiment
Citation needed. I admit it has limitations, but it's not like the biggest corps in the world aren't working on it.
See: https://news.ycombinator.com/item?id=43452185#43454498 https://news.ycombinator.com/item?id=43853283
Everything that has to meet strict safety criteria is like this, yes. It's the exact same reason `unsafe` exists in Rust and is completely inescapable: automatic correctness checking beyond the most trivial static analysis doesn't actually come for free, and it imposes intense design restrictions that make a lot of things very time-consuming to do.
> Citation needed
https://groups.google.com/g/comp.os.minix/c/dlNtH7RRrGA/m/Sw...
You will never find a single point in the multiple archives of Linux's git history where a clean break occurred and the kernel was greenfielded with the express intent of replacing its design abstractions with ones more appropriate and ergonomic for formal analysis. If you'd like to express further skepticism, feel free to try to find that moment, which never happened.
Participation by large, well-recognized corporations does not imply the kernel isn't a mess of technical debt. If anything, it implies the opposite.
Except that in practice, code written in Rust experiences virtually no memory-safety issues[1].
I've seen this argument a million times before; it is one part FUD (making actual memory issues out to be bigger than they are) and one part nirvana fallacy (arguments to the effect that Java isn't memory safe because it has to call into C).
[1] https://storage.googleapis.com/gweb-research2023-media/pubto...
> is actually worth it compared to other efforts to improve memory safety in C
As I alluded to before, I am sure the Linux kernel maintainers are aware of seL4. Which raises the question: why didn't they do it? It's in C, it proves everything, and it solves even more than Rust does, so why isn't it in the kernel?
I'm going to hazard a guess: the effort of fully proving something like Linux would be through the roof. You would need a lifetime of LKML dedication.
And the same goes for improving the situation. Stopping the world and rewriting everything is never going to fly. The only viable approach is gradual introduction of safer techniques, be it a language or verification options, that step by step improve things and make the bar of adoption minimal. Nobody is going to get convinced if this is cast as a culture war. The folks annoyed by the status quo need to bring the skeptics along, everything else is doomed to fail (as well-illustrated by all the discussions here).
Yes. And?
Seatbelts and air bags all experience safety issues, and the general public was against them for a very long time. That doesn't mean they don't increase safety, nor that because you could fall into a volcano, you should abolish seatbelts.
How about, for a start, software with no memory errors? How about more XSS and less UAF?
Also, not even formal proofs are enough. A smart enough attacker will use the gap between spec and implementation and target those errors.
And while I agree we need more formal proofs and invariant oriented software design, those options are even less popular than Haskell.
"Formal verification" literally means that the implementation is proven to satisfy the formal specification, so by definition there won't be any gaps. That's the whole point of formal verification.
But you have a point in that there will still be gaps: between specification and intent. That is, bugs in the specification. (Or maybe by "specification" you mean some kind of informal description? That's closer to intent than an actual specification, which in the formal-verification world is a formal and rigorous description of the required behavior.) Though those bugs would likely be significantly less prevalent... at least that's the expectation, since nobody has really been able to do this in the real world.
To quote Donald Knuth:
"Beware of bugs in the above code; I have only proved it correct, not tried it."
Sure, but the proof itself might have gaps. A famous proven implementation of mergesort had a subtle flaw when the array size approached the maximum of its index type. They proved it would sort; they didn't prove it wouldn't hit UB. Or maybe you prove it has no UB assuming no hardware errors (there are hardware errors), and so on. The "unbeatable" quantum cryptography was beaten in one instance because the attacker abused the physical properties of the detectors to read the signal without alerting either side of the conversation.
https://www.theregister.com/2010/09/01/quantum_crypto_hack/
> Or maybe you mean "specification" to be some kind of informal description?
I read somewhere that the secret to the hacker mindset is ignoring what the model says and looking at what is actually there. If attackers behaved as the spec writers expected, they would never be able to break in.
Reminds me of a place I worked. The doors were these heavy masses of wood and metal that would unlock with a loud thud (from several huge rods holding them secure). This was done to formally pass some ISO declaration on document security. The thing is, any sane robber would just bypass the door by breaking through the plaster walls on either side.
At least we can stop writing code made of plaster for start.
Then it's an incorrect proof. In the pen-and-paper era, that was a thing, but nowadays proofs are machine-checked and don't have "gaps" anymore.
Or your assumptions have a bug. That's exactly what I mean with specifications being potentially flawed, since all assumptions must be baked into the formal specification. If hardware can fail then that must be specified, otherwise it's assumed to not fail ever. This never ends, and that's the whole point here.
I suppose we are trying to say the same thing, just not with the same terminology.
> Yes. And?
You claimed otherwise.
Looks like we've got an encoding issue too.
Content-Type: text/html
i.e. no charset field. The document itself also lacks a declared character set.
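For reference, declaring the encoding in either place (illustrative values, assuming the document really is UTF-8) would fix it:

```
Content-Type: text/html; charset=utf-8      <-- HTTP response header

<meta charset="utf-8">                      <-- or in the document <head>
```

Without either, browsers fall back to guessing, which is where mojibake comes from.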
Possibly a newer version that I haven't read fixes how they said that. As long as I don't check, I can hope.
You should pretty much always use one.
Amazing! Sacrificing GPA for projects is always a good time
Throughout my CS studies, I was just collecting "tickets" (the actual word, "Schein", is very hard to translate), which basically just attested that you had passed a course. They (often) had a grade on them, but it did not matter. Instead, once in the middle ("pre-diploma") and once at the very end of your time at university, you'd take oral exams, and those determined your grade. To attend them, you needed the right combination of "tickets".
The glaring downside of this system is that if you had a bad time in those few months of your very final exams, you could screw up your entire grade.
The upside is that I was free (and encouraged) to pursue whatever I wanted, without each course risking an effect on my "GPA". I had way more tickets than I needed in the end, and still the time and energy to pursue whatever else I wanted (playing with microcontrollers, etc.).
What you are describing was one of the systems they used.
So at my university (OvG in Magdeburg) they used this system for math, but computer science had written exams.
This is how a lot of British undergrad courses ('modules') work. One giant exam at the very end determining everything; no quizzes, no problem sheets, no midterms.
"Never let school limit your education"
It's about time to look at a sane design, such as seL4[0].
Rust for Linux, when?
It's a damn shame the current maintainers are so hostile to its adoption that many of the original Rust for Linux folks have left the project.
Not necessarily Rust, but something memory safe. Perhaps Java (if maintenance is that important) :P
Linux issues are not purely technical. There is the social inertia.
No. They have opinions and take actions to subvert related work. See people literally stopping a presentation to gripe about Rust cultists, or Hellwig throwing a hissy fit because he doesn't like that Rust code is adjacent to his DMA controller.
This isn't technical discussion, this is Office Politics 104.
If Rust is a really good fit for the kernel, I imagine people will adapt over time... so far it doesn't seem promising. If the main fear is that you'll end up having to maintain a bunch of Rust code, the propensity of the Rust people to rage-quit doesn't really allay that fear.
If no one wants to do the thing you think is obviously better, you might be a misunderstood genius, or you might be wrong.
That's not what happened. Hellwig asked that the Rust for Linux guys not write common bindings to his layer, but do it per driver.
In other words, as DMA code maintainer he gets to choose what other people use his interfaces for, and how. That's not how interfaces (as in, two software components interacting) work.
By that logic if he hates Logitech drivers he can sabotage them because he doesn't like how the devices look?
> In other words he as DMA code maintainer gets to choose what and how other people use his interfaces for.
As DMA code maintainer, he chose to have no Rust code. But then he can't also choose to veto Rust bindings. He could have chosen to take Rust code, and that would have given him the veto.
https://lore.kernel.org/rust-for-linux/CAHk-=wgLbz1Bm8QhmJ4d...
And this didn't go over well. Shocking.
https://lwn.net/Articles/1006805/
"We wrote a single piece of Rust code that abstracts the C API for all Rust drivers, which we offer to maintain ourselves".
I wish that HN as a whole could maintain a respectful and curious tone of debate when these threads come up. Feel like both rust advocates and skeptics could do a lot better.