Maybe one is more important than the other, I don't know. All the languages I use for work or hobbies are garbage collected and I'm not a security professional. But it does seem like the typical Rust program, with its massive number of "cargo add"s, is an enormous attack surface.
If you have packages that don't come from a package manager - Windows installers, phone installs, Snap, Docker, Flatpak, and likely more - you have a different risk: a bundled library may not have been updated, and so you are vulnerable to a known flaw.
There is no good/easy answer to supply chain risk. It is slightly different in Rust because you can take the latest if you want (though there is plenty of ability to pin an older release if you prefer), but that doesn't move the needle on overall risk.
I'd actually call that quite difficult. In the case of xz it was a quite high-effort "long con" the likes of which we've never seen before, and it didn't quite succeed in the end (it was caught before rolling out to stable distros and did not successfully exploit any target). One huge close call, but so far zero successes, over almost 30 years now.
But typo-squatting and hijacked packages in NPM and PyPI we've seen hundreds of times, many of them successfully attacking developers at important software companies or just siphoning cryptocurrency.
Given the amount of potential targets, it would probably be trivial to get yourself into a position to cause devastating impact.
It seems pretty indisputable that "modern" langs substantially increase your supply chain attack surface. Of course some (like JS) are worse than others.
As a result, whether the net security benefit of using Rust vs C is positive or negative depends heavily on the program in question. There is a huge difference between e.g. Firefox and Wireguard in this respect.
Very few by number, but that's more an artifact of C's poor package management than a true reflection of how much third-party code you're actually pulling in. Something like APR is the equivalent of hundreds of Rust packages, and comes with a similar security risk. Sure, maybe there's someone who signs off on each release, but do you think they personally know and validate each of the dozen or more mostly-independent projects that actually make up their library? No, they delegate to separate maintainers - that information just isn't surfaced in the package management system.
Say what now? Have you ever worked on a project that uses C?
We were using 3rd party dependencies in C in the 1980s.
Here's a more current list for C and C++: https://github.com/fffaraz/awesome-cpp
"We shouldn't use the thing that has memory safety built in because it also has a thriving ecosystem of open source dependencies available" is a very weird argument.
I don't see anyone anywhere in this thread saying that we shouldn't use rust, or C for that matter.
This doesn't prove anything of course, but the only High severity vulnerability I had in production this year was a C library. And the vulnerability was a buffer overflow caused by lack of memory safety.
So I don't think it's a simple trade-off of one sort of vuln for another. Memory safety is extremely important for security. Supply chain attacks matter too - but using C won't necessarily defend you from those.
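To make that concrete, here is a minimal sketch (made-up packet-parsing function, not from the library in question) of what the same bug class looks like in safe Rust - an out-of-bounds index aborts with a panic instead of silently reading or writing adjacent memory:

    fn header_field(packet: &[u8], index: usize) -> u8 {
        // Every index is bounds-checked; going past the end aborts with a
        // panic rather than touching adjacent memory.
        packet[index]
    }

    fn main() {
        let packet = [0u8; 4];
        println!("{}", header_field(&packet, 2));
        // header_field(&packet, 10); // would panic "index out of bounds", not overflow a buffer
    }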
There are of course still other vectors for supply chain attacks. The toolchain itself, for instance. But then you fairly quickly get into 'trusting trust' level issues (which are very real!) and you will want an OS that has been built with known clean tools as well.
It won't fix everything (see TARmageddon), but left-pad-rs's build.rs file should definitely not be installing a sudo alias in my .bashrc file that steals my password when I cargo build my project.
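For context, build.rs is just an ordinary Rust program that Cargo compiles and runs on your machine before building the crate. A minimal benign sketch (the cfg flag is made up):

    fn main() {
        // Legitimate use: enable a cfg flag for this build.
        println!("cargo:rustc-cfg=have_feature_x");
        // Nothing in the model stops this same program from, say, editing
        // ~/.bashrc or opening a network connection - it runs with the full
        // privileges of whoever typed `cargo build`.
    }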
You may not use data derived from outside your program to affect something else outside your program--at least, not by accident. All command line arguments, environment variables, locale information (see perllocale), results of certain system calls ("readdir()", "readlink()", the variable of "shmread()", the messages returned by "msgrcv()", the password, gcos and shell fields returned by the "getpwxxx()" calls), and all file input are marked as "tainted". Tainted data may not be used directly or indirectly in any command that invokes a sub-shell, nor in any command that modifies files, directories, or processes, with the following exceptions: [...]
The function declarations declare every action it can take on your system, and any change adding new ones is a breaking change to the library.
We've known how to do this for ages. What we don't have is a good abstraction to let the compiler check them and transform the actions into high-level ones as they go up the stack.
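A rough sketch of the idea in Rust, with made-up function names and plain std traits standing in for capabilities - the point being that any new kind of access has to show up in the signature:

    use std::io::{Read, Write};

    // v1 can only read from the stream it is handed; that is the whole of its
    // access to the outside world.
    fn summarize_v1(input: &mut dyn Read) -> std::io::Result<usize> {
        let mut buf = Vec::new();
        input.read_to_end(&mut buf)?;
        Ok(buf.len())
    }

    // If v2 also wants to write a cache, that ability has to arrive as a new
    // parameter, so the extra access is a visible, breaking signature change.
    fn summarize_v2(input: &mut dyn Read, cache: &mut dyn Write) -> std::io::Result<usize> {
        let mut buf = Vec::new();
        input.read_to_end(&mut buf)?;
        cache.write_all(&buf)?;
        Ok(buf.len())
    }

    fn main() -> std::io::Result<()> {
        let mut data = &b"hello"[..];
        println!("{}", summarize_v1(&mut data)?);
        let mut data = &b"hello"[..];
        let mut cache = Vec::new();
        println!("{}", summarize_v2(&mut data, &mut cache)?);
        Ok(())
    }

(Of course nothing in Rust today stops the body from opening files on its own; that's exactly the missing abstraction being discussed.)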
Like you say we don't have a good abstraction for this.
Hopefully, you know exactly what a function needs to do when you write it.
> every other function up the chain needs to be updated
There's type reuse and composition to deal with that. Many languages with advanced type systems do composition badly, but it's still there.
The problem is with the functions deep down the stack. We don't have very good tools for dealing with them, but they are perfectly workable even with what we have today.
(A second large problem is frameworks running at the top level of your app. We have some tools for dealing with those, but the situation there is way worse than for libraries.)
I don't see how that'd be possible. Often we want the library to do useful things for the application, in the context of the application. What would incentivize developers to specify more fine-grained permissions per library than the union of everything their application requires?
I see more use in sandboxing entire applications, and giving them more selective access than "the entire user account" like we do these days. This is maybe more how smartphone operating systems work than desktop computers?
If I want you to decode a JPEG, I pass you an input stream handle and you return an output memory buffer; because I didn't give you any other capabilities I know you can't do anything else. Apart from looping forever, presumably.
It still requires substantial discipline because the easiest way to write anything in this hypothetical language is to pass the do-everything handle to every function.
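A sketch of that style in Rust (hypothetical decoder, nothing like a real JPEG implementation) - the only capability handed in is a readable stream, and the only thing handed back is an owned buffer:

    use std::io::Read;

    fn decode_jpeg(input: &mut dyn Read) -> std::io::Result<Vec<u8>> {
        let mut compressed = Vec::new();
        input.read_to_end(&mut compressed)?;
        // ...real decoding would go here. The point is that no file system,
        // network, or process handle was ever passed in, so none can be used.
        Ok(compressed)
    }

    fn main() -> std::io::Result<()> {
        let mut bytes: &[u8] = &[0xFF, 0xD8, 0xFF, 0xD9]; // placeholder input
        let out = decode_jpeg(&mut bytes)?;
        println!("{} bytes out", out.len());
        Ok(())
    }

(In real Rust this is only a convention - the body could still call std::fs directly - which is the discipline problem above.)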
See also the WUFFS project: https://github.com/google/wuffs - where things like I/O simply do not exist in the language, and therefore, any WUFFS library is trustworthy. However, it's not a general-purpose language - it's designed for file format parsers only.
Still, it'd be highly painful. Would it be worth the trade-off to prevent supply chain attacks?
Didn't we have something like that in Java more than a decade ago? IIRC, you could, for instance, restrict which classes could do things like opening a file or talking to the network.
It didn't quite work, and was abandoned. Turns out it's hard to sandbox a library; the exposed surface ended up being too large, and there were plenty of sandbox escapes.
> There are lots of native sandboxing in linux kernel. Bubblewrap, landlock, gvisor and kata (containers, not native), microVMs, namespaces (user, network), etc
What all of these have in common is that they isolate processes, not libraries. If you could isolate each library in a separate process without killing performance with IPC costs, you could use them; one example is desktop thumbnailers, which parse untrusted data and can use sandboxes to protect against bugs in the image and video codec libraries they use.
> It didn't quite work, and was abandoned. Turns out it's hard to sandbox a library; the exposed surface ended up being too large, and there were plenty of sandbox escapes.
The number of escapes is exaggerated. The issue was that it didn't have capabilities or anything like that, so it wasn't really used - you couldn't say "library X should be able to access files that the application has passed into it but not other files", you had to say "class X.Y.Z should be able to access the file /foo/bar/baz.jpeg" in your security policy. So it was unusably brittle and everyone just said "all code can access all files"
Even if we assume overhead is magically brought to zero, the real challenge is customizing the permission policy for each sandbox. I add, say, 5 new dependencies to my program, and now I have to review source code of each of those dependencies and determine what permissions their corresponding sandboxes get. The library that connects to a database server? Maybe it also needs filesystem access to cache things. The library that parses JSON buffers? Maybe it also needs network access to download the appropriate JSON schema on the fly. The library that processes payments? Maybe it also needs access to location information to do risk analysis.
Are all developers able to define the right policies for every dependency?
I don't know if anyone's doing it at the individual commit level as a business.
You don't need to pull in a library for every little function, that's how you open yourself up to supply chain risk.
The left-pad fiasco, for example. Left-pad was 11 lines of code. Literally no reason to pull in an external dependency for that.
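For comparison, padding a string is a one-liner with Rust's std formatting (a sketch, with a made-up helper name):

    fn left_pad(s: &str, width: usize) -> String {
        // Right-align `s` in a field of `width` characters, padding with
        // spaces on the left.
        format!("{:>width$}", s, width = width)
    }

    fn main() {
        assert_eq!(left_pad("abc", 5), "  abc");
    }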
Rust is doomed to repeat the same mistakes because it also has an incredibly minimal standard library, so now we get micro-crates for simple string utilities, or scopeguard, which itself is under ~400 LoC - and you can write a much simpler RAII guard yourself for your own project if you don't need everything in scopeguard.
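A hand-rolled guard really is tiny - a minimal sketch (not the scopeguard crate's actual API):

    // Runs a closure when the value is dropped, i.e. when the scope ends.
    struct Guard<F: FnOnce()>(Option<F>);

    impl<F: FnOnce()> Drop for Guard<F> {
        fn drop(&mut self) {
            if let Some(f) = self.0.take() {
                f();
            }
        }
    }

    fn defer<F: FnOnce()>(f: F) -> Guard<F> {
        Guard(Some(f))
    }

    fn main() {
        let _cleanup = defer(|| println!("cleaning up"));
        println!("doing work");
        // "cleaning up" prints here, when _cleanup is dropped at the end of scope.
    }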
The industry needs to stop being terrified of writing functionality that already exists elsewhere.
I think that like everything else this is about balance. Dependencies appear to be zero cost, whereas writing something small (even 400 lines of code) costs time and appears to have a larger cost than pulling in that dependency (and its dependencies, and so on). That cost is there, it is just much better hidden, and so people fall for it. If you knew the real cost you probably would not pull in that dependency.