159 points by transpute · 10 months ago · 6 comments
  • anthonyryan1 · 10 months ago
    This is by the author of the very helpful kernel-hardening-checker: https://github.com/a13xp0p0v/kernel-hardening-checker

    An interesting tool for analyzing your personal kernel config file and pointing out areas for security improvement. It's more comprehensive than KSPP (https://kspp.github.io/) but sometimes goes a little too far, suggesting disabling kernel features you may actively use.

    Definitely worth trying!
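
    For a sense of what such a check looks like, here is a toy sketch in C that scans a kernel .config for a handful of hardening options. The option list is an illustrative subset chosen for this example, not the tool's actual rule set.

        /* Toy kconfig hardening check: scan a kernel .config for a few
         * hardening options. The option list is illustrative only. */
        #include <stdio.h>
        #include <string.h>

        static const char *wanted[] = {
            "CONFIG_RANDOMIZE_BASE=y",          /* KASLR */
            "CONFIG_STACKPROTECTOR_STRONG=y",   /* stack canaries */
            "CONFIG_STRICT_KERNEL_RWX=y",       /* W^X for kernel mappings */
        };
        #define NWANTED (sizeof wanted / sizeof wanted[0])

        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <kernel .config>\n", argv[0]);
                return 1;
            }
            FILE *f = fopen(argv[1], "r");
            if (!f) {
                perror("fopen");
                return 1;
            }

            char line[512];
            int found[NWANTED] = {0};
            while (fgets(line, sizeof line, f))
                for (size_t i = 0; i < NWANTED; i++)
                    if (strncmp(line, wanted[i], strlen(wanted[i])) == 0)
                        found[i] = 1;
            fclose(f);

            for (size_t i = 0; i < NWANTED; i++)
                printf("%-35s %s\n", wanted[i], found[i] ? "OK" : "MISSING");
            return 0;
        }

    The real checker goes much further (kernel command line and sysctl checks, option interactions, per-version knowledge); this only shows the shape of the task.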

    • egberts1 · 10 months ago
      This is the way.

      Close all avenues, then open only exactly what you need.

      • anon6362 · 10 months ago
        By default, Linux has way, way too much functionality, insufficient testing and proof of security correctness, and not enough security controls.
  • nine_k · 10 months ago
    The number of defenses is pretty impressive. The number of out-of-tree and commercial defenses is also impressive. The amount dedicated to specifics of C (UB, bounds checks, use-after-free) is relatively small.

    It would be interesting to compare to, say, OpenBSD, with its apparently numerous security and defense-in-depth features.

    • zie · 10 months ago
      > It would be interesting to compare to, say, OpenBSD, with its apparently numerous security and defense-in-depth features.

      I'm not sure that would be a very fair comparison. A lot of OpenBSD security comes from just skipping giant swaths of stuff. Advanced filesystems are non-existent, Bluetooth is non-existent, etc.

      I haven't done a count lately, but I would guess the Linux kernel alone is larger than the entire OpenBSD base system. Its simplicity is a huge security feature, provided you don't need some of those features.

      I'm not saying this as an OpenBSD hater or anything, I run OpenBSD on at least one machine.

  • acje · 10 months ago
    I find it inspiring that we have gotten to the point of dealing with models that classify vulnerabilities at a systems level. However, I also think we are kind of barking up the wrong tree. There is IMHO something wrong with the current strategy of scaling up the von Neumann architecture. It leads to fragile software partitioning, noisy neighbors, and both slow and sometimes unintended communication through shared memory. I've tried to lay this out in detail here: https://lnkd.in/dRNSYPWC
    • transpute · 10 months ago
      Have you looked at Barrelfish (2011) from Microsoft Research and ETH Zurich?

      https://www.microsoft.com/en-us/research/blog/barrelfish-exp...

      > “In the next five to 10 years,” Barham predicts, “there are going to be many varieties of multicore machines. There are going to be a small number of each type of machine, and you won’t be able to afford to spend two years rewriting an operating system to work on each new machine that comes out. Trying to write the OS so it can be installed on a completely new computer it’s never seen before, measure things, and think about the best way to optimize itself on this computer—that’s quite a different approach to making an operating system for a single, specific multiprocessor.” The problem, the researchers say, stems from the use of a shared-memory kernel with data structures protected by locks. The Barrelfish project opts instead for a distributed system in which each unit communicates explicitly.

      Public development stopped in March 2020, https://github.com/BarrelfishOS/barrelfish & https://barrelfish.org
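
      To make the contrast concrete, here is a toy user-space sketch in C (not Barrelfish code): the same counter updated (a) through shared memory guarded by a lock and (b) by sending explicit messages to the single thread that owns the data, which is roughly the multikernel's model of inter-core communication.

          /* Toy contrast, not Barrelfish code.
           * (a) shared-memory style: any thread touches the data, a lock arbitrates. */
          #include <pthread.h>
          #include <stdio.h>
          #include <unistd.h>

          static long counter_a;
          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

          static void *shared_worker(void *arg)
          {
              (void)arg;
              for (int i = 0; i < 1000; i++) {
                  pthread_mutex_lock(&lock);
                  counter_a++;                 /* cache line bounces between cores */
                  pthread_mutex_unlock(&lock);
              }
              return NULL;
          }

          /* (b) message-passing style: one owner thread, others send explicit requests. */
          static int pipefd[2];

          static void *owner(void *arg)
          {
              (void)arg;
              long counter_b = 0;              /* private to this thread, no lock needed */
              char msg;
              while (read(pipefd[0], &msg, 1) == 1 && msg == '+')
                  counter_b++;
              printf("owner counted %ld\n", counter_b);
              return NULL;
          }

          int main(void)
          {
              pthread_t t1, t2, own;

              pthread_create(&t1, NULL, shared_worker, NULL);
              pthread_create(&t2, NULL, shared_worker, NULL);
              pthread_join(t1, NULL);
              pthread_join(t2, NULL);
              printf("shared counter %ld\n", counter_a);

              pipe(pipefd);
              pthread_create(&own, NULL, owner, NULL);
              for (int i = 0; i < 2000; i++)
                  write(pipefd[1], "+", 1);    /* explicit message instead of a shared write */
              write(pipefd[1], "q", 1);        /* tell the owner to stop */
              pthread_join(own, NULL);
              return 0;
          }

      Their argument, as quoted above, is that once data has a single owner and all communication is explicit, the OS structure maps more naturally onto heterogeneous, not-necessarily-cache-coherent hardware.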

      • _huayra_ · 10 months ago
        Mothy Roscoe, the Barrelfish PI, gave a really great talk at ATC 2021 [0]. A lot of OS research is basically "here's a clever way we bypassed Linux to touch hardware directly", but his argument is that the "VAX model" of hardware that Linux still uses has ossified, and CPU manufacturers have to build complexity to support that.

        Concretely, there are a lot of things that are getting more "NOC-y" (network-on-chip). I'm not an OS expert, but I deal with a lot of forthcoming features from hardware vendors in my current role. Most are abstracted as some sorta PCI device that does a little "mailbox protocol" to get some values (perhaps directly, perhaps read out of memory upon success). Examples are HSMP from AMD and OOBMSM from Intel. In both, the OS doesn't directly configure a setting, but asks some other chunk of code (provided by the CPU vendor) to configure the setting. Mothy's argument is that this is an architectural failure, and we should create OSes that can deal with this NOC-y heterogeneous architecture.

        Even if one disagrees with Mothy's premise, this is a banger of a talk, well worth watching and easy to understand.

        [0] https://www.usenix.org/conference/atc21/presentation/fri-key...
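
        For a rough idea of the mailbox pattern described above, here is a sketch in C of a generic doorbell-style exchange. The register offsets and command ID are made up for illustration; this is not the real HSMP or OOBMSM layout.

            /* Sketch of a generic doorbell/mailbox exchange with management firmware.
             * All offsets and command IDs below are hypothetical. */
            #include <stdint.h>

            #define MBOX_CMD      0x00  /* hypothetical: command ID register     */
            #define MBOX_ARG      0x04  /* hypothetical: argument register       */
            #define MBOX_DOORBELL 0x08  /* hypothetical: write 1 to ring         */
            #define MBOX_STATUS   0x0c  /* hypothetical: 0 = busy, 1 = done      */
            #define MBOX_RESP     0x10  /* hypothetical: response value          */

            static inline uint32_t mmio_read(volatile uint8_t *base, uint32_t off)
            {
                return *(volatile uint32_t *)(base + off);
            }

            static inline void mmio_write(volatile uint8_t *base, uint32_t off, uint32_t val)
            {
                *(volatile uint32_t *)(base + off) = val;
            }

            /* Ask the firmware to apply a setting and wait for its answer.
             * The OS never programs the setting itself; it only files a request. */
            int mbox_request(volatile uint8_t *base, uint32_t cmd, uint32_t arg,
                             uint32_t *resp)
            {
                mmio_write(base, MBOX_CMD, cmd);
                mmio_write(base, MBOX_ARG, arg);
                mmio_write(base, MBOX_DOORBELL, 1);

                for (int tries = 0; tries < 1000000; tries++) {
                    if (mmio_read(base, MBOX_STATUS) == 1) {
                        *resp = mmio_read(base, MBOX_RESP);
                        return 0;
                    }
                }
                return -1;  /* firmware never answered */
            }

        The OS side shrinks to filing requests and interpreting responses; the actual policy lives in vendor firmware behind the mailbox, which is exactly the kind of layering Mothy is pointing at.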

        • vacuity · 10 months ago
          He is right. The point of the operating system is to, well, operate the system. Hardware, firmware, and software engineers should work together to make good systems. Political and social barriers are not an excuse for poor products delivered to end users.
      • egberts1 · 10 months ago
        Reminds me of MINIX by Andrew Tanenbaum.

        Anyone remember the debate between microkernel vs monolithic kernel?

        https://en.m.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_d...

        • vacuity · 10 months ago
          In fact, Barrelfish is based on running a microkernel per core, and makes good use of this design to better adapt to hardware diversity.

          I understand why Linux develops everything in one place; it makes the kernel far easier to manage. However, it also makes it far more difficult to configure and specialize kernels. (I saw a paper where core operations of default Linux had gotten slower over the years, requiring reconfiguration.) Or, to badly paraphrase Ingo Molnar: in operating system design, aim for one of two ideals, either the one that's easiest for developers to change and maintain or the one that maximizes performance.

      • nand_gate · 10 months ago
        Vapourware; what they post (microkernels) is nothing new.

        As for barrel CPUs to replace SMT... crickets.

        • transpute · 10 months ago
          10 years of shipped code for multiple platforms (x86, ARMv7, ARMv8) is not vaporware. Based on software experience with existing platforms, they have created an open-hardware RISC-V core that requires custom software to achieve energy efficiency with improved performance: https://spectrum.ieee.org/snitch-riscv-processor-6x-faster

          > Snitch proved to be 3.5 times more energy efficient and up to six times faster than the others.. "While we could already demonstrate a very energy-efficient and versatile 8-core Snitch cluster configuration in silicon, there are exciting opportunities ahead in building computing platforms scalable to thousands of Snitch cores, even spreading over multiple chiplets," says Zaruba, noting that his team is currently working towards this goal.

          https://github.com/pulp-platform/snitch

        • pjmlp · 10 months ago
          It is on most mainstream computers, even though it is the way in most high integrity computing deployments.
    • simonask · 10 months ago
      I think your take is interesting, but your article does not go into details with ideas about how to address these problems at the architectural level. Would you like to elaborate?
      • acje · 10 months ago
        There is some elaboration in part four of the series. A fifth part on actor model, gaps and surfaces is in the works. Part four https://lnkd.in/dEVabpkN
  • chenhoey1211 · 10 months ago
    Really solid conceptual map — not just for kernel devs, but also useful if you're working in Rust, Zig, or any low-level system code.

    Has anyone come across a similar visual breakdown for Wasm runtimes, especially around sandboxing and isolation models?

  • Sponge5 · 10 months ago
    > This map describes kernel security hardening. It doesn't cover cutting attack surface.

    For those wondering why SECCOMP is omitted.
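
    For anyone unfamiliar with the distinction: seccomp cuts attack surface rather than hardening the code behind it, by shrinking the set of syscalls a process can reach at all. A minimal illustration in C using strict mode (deliberately tiny; real sandboxes use filter mode, SECCOMP_MODE_FILTER, with a BPF program):

        /* seccomp as attack-surface reduction: after entering strict mode, the
         * process can only use read(), write(), _exit() and sigreturn(); any
         * other syscall kills it with SIGKILL. */
        #include <linux/seccomp.h>
        #include <stdio.h>
        #include <sys/prctl.h>
        #include <unistd.h>

        int main(void)
        {
            printf("before seccomp: the full syscall surface is reachable\n");
            fflush(stdout);          /* flush now; fewer syscalls are allowed later */

            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
                perror("prctl");
                return 1;
            }

            const char msg[] = "inside strict mode: write() still works\n";
            write(STDOUT_FILENO, msg, sizeof msg - 1);

            /* e.g. an open() call here would be fatal */
            _exit(0);
        }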

  • hart_russell · 10 months ago
    Do these settings persist if I update the kernel on my Ubuntu server?