92 points by todsacerdoti | 7 hours ago | 5 comments
  • bheadmaster 5 hours ago

        An understanding of READ_ONCE() and WRITE_ONCE() is important for kernel developers who will be dealing with any sort of concurrent access to data. So, naturally, they are almost entirely absent from the kernel's documentation.
    
    Made me chuckle.
    • semiquaver 5 hours ago
      More chuckles from the source:

         /*
          * Yes, this permits 64-bit accesses on 32-bit architectures. These will
          * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
          * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
          * (e.g. a virtual address) and a strong prevailing wind.
          */
  • staticassertion 3 hours ago
    > There are a couple of interesting implications from this outcome, should it hold. The first of those is that, as Rust code reaches more deeply into the core kernel, its code for concurrent access to shared data will look significantly different from the equivalent C code, even though the code on both sides may be working with the same data. Understanding lockless data access is challenging enough when dealing with one API; developers may now have to understand two APIs, which will not make the task easier.

    The thing is, it'll be far less challenging for the Rust code, which will actually define the ordering semantics explicitly. That's the point of rejecting the READ_ONCE/WRITE_ONCE approach - it's unclear what the goal is when using those, what guarantee you actually want.

    I suspect that if Rust continues forward with this approach it will basically end up as the code where someone goes to read the actual semantics to determine what the C code should do.
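
    For a rough idea of what "explicit" means here, a minimal sketch with std atomics (not the actual kernel bindings; the names are made up): every access states the guarantee it wants at the call site.

        use std::sync::atomic::{AtomicU64, Ordering};

        // Hypothetical shared value, standing in for some kernel state.
        static SEQ: AtomicU64 = AtomicU64::new(0);

        fn read_seq_relaxed() -> u64 {
            // Intent is visible: tearing-free access, no ordering guarantee.
            SEQ.load(Ordering::Relaxed)
        }

        fn read_seq_acquire() -> u64 {
            // Intent is visible: this load also orders later accesses after it.
            SEQ.load(Ordering::Acquire)
        }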

    • bjackman 15 minutes ago
      In my experience, in practice, it usually isn't that hard to figure out what people meant by a READ/WRITE_ONCE().

      Most common cases I see are:

      1. I'm sharing data between concurrent contexts but they are all on the same CPU (classic is sharing a percpu variable between IRQ and task).

      2. I'm reading some isolated piece of data that I know can change any time, but it doesn't form part of a data structure or anything, it can't be "in an inconsistent state" as long as I can avoid load-tearing (classic case: a performance knob that gets mutated via sysfs). I just wanna READ it ONCE into a local variable, so I can do two things with it and know they both operate with the same value.

      I actually don't think C++ or Rust have existing semantics that satisfy this kinda thing? So it will be interesting to see what they come up with.
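
      Case 2 might look roughly like this in Rust with std atomics (the knob is made up; the kernel bindings would differ):

          use std::sync::atomic::{AtomicU32, Ordering};

          // Hypothetical tunable that sysfs-like code can rewrite at any time.
          static BATCH_SIZE: AtomicU32 = AtomicU32::new(64);

          fn do_work() {
              // Read it ONCE into a local so both uses below see the same
              // value even if the knob changes concurrently; no ordering needed.
              let batch = BATCH_SIZE.load(Ordering::Relaxed);
              let _budget = batch * 2;
              let _limit = batch + 1;
          }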

    • marcosdumay 2 hours ago
      > I suspect that if Rust continues forward with this approach it will basically end up as the code where someone goes to read the actual semantics to determine what the C code should do.

      That will also put it in the unfortunate position of being the place that breaks every time somebody adds a bug to the C code.

      Anyway, given the cultures involved, it's probably inevitable.

      • mustache_kimono an hour ago
        > That will also put it in the unfortunate position of being the place that breaks every time somebody adds a bug to the C code.

        Can someone explain charitably what the poster is getting at? To me, the above makes zero sense. If the Rust code is what is implemented correctly, and has the well-defined semantics, then, when the C code breaks, it's obviously the C code's problem?

        • Sharlin 12 minutes ago
          I think a charitable interpretation is that given that the Rust code will be less forgiving, it will "break" C code and patterns that "used to work", albeit with latent UB or other nonobvious correctness issues. Now, obviously this is ultimately a good thing, and no developer worth their salt would seriously argue that latent bugs should stay latent, but as we've already seen, people have egos and aren't always exceedingly rational.
  • gpderetta 6 hours ago
    Very interesting. AFAIK the kernel explicitly gives consume semantics to READ_ONCE() (and in fact it is not just a compiler barrier on alpha), so technically lowering it to a relaxed operation is wrong.

    Does rust have or need the equivalent of std::memory_order_consume? Famously this was deemed unimplementable in C++.

    • steveklabnik 6 hours ago
      It wasn’t implemented for the same reason. Rust uses C++20 ordering.
      • Fulgen an hour ago
        C++20 actually [changed the semantics of consume](https://devblogs.microsoft.com/oldnewthing/20230427-00/?p=10...), but Rust doesn't include it. And last I remember compilers still treat it as acquire, so it's not worth the bytes it's stored in.
        • jcranmer 8 minutes ago
          In the current drafts of C++ (I don't know which version it landed in), memory_order::consume is fully dead and listed as deprecated in the standard.
      • gpderetta 6 hours ago
        right, so I would expect that the equivalent of READ_ONCE is converted to an acquire in rust, even if slightly pessimal.

        But the article says that the suggestion is to convert them to relaxed loads. Is the expectation to YOLO it and hope that the compiler doesn't break control and data dependencies?

        • bonzini 6 hours ago
          There is a yolo way that actually works, which would be to change it to a relaxed load followed by an acquire signal fence.
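
          In Rust terms that might look something like the sketch below (compiler_fence is the signal-fence analogue; it constrains only the compiler, and the hardware's data-dependency ordering is what's being relied on):

              use std::sync::atomic::{compiler_fence, AtomicPtr, Ordering};

              fn consume_ish_load(slot: &AtomicPtr<u32>) -> *mut u32 {
                  // Plain relaxed load of the pointer...
                  let p = slot.load(Ordering::Relaxed);
                  // ...then a compiler-only acquire fence: the compiler may not
                  // move later memory accesses above this point. The CPU is not
                  // fenced at all; dependency ordering is left to the hardware.
                  compiler_fence(Ordering::Acquire);
                  p
              }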
          • loeg 5 hours ago
            Is that any better than just using an acquire load?
            • gpderetta 5 hours ago
              It is cheaper on ARM and POWER. But I'm not sure it is always safe. The standard has very complex rules for consume to make sure that the compiler didn't break the dependencies.

              edit: and those rules were so complex that compilers decided they were not implementable or not worth it.

              • bonzini 2 hours ago
                The rules were there to explain what optimizations remained possible. Here no optimization is possible at the compiler level; only the processor retains any freedom, and we know it won't use it.

                It is nasty, but it's very similar to how Linux does it (volatile read + __asm__("") compiler barrier).
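
                A rough Rust rendering of that Linux-style pattern, for illustration only:

                    use std::sync::atomic::{compiler_fence, Ordering};

                    // Roughly READ_ONCE(): volatile load plus a compiler barrier.
                    unsafe fn read_once_ish(p: *const u32) -> u32 {
                        // SAFETY: caller guarantees `p` is valid for reads.
                        let v = unsafe { std::ptr::read_volatile(p) };
                        // Analogue of the __asm__("" ::: "memory") barrier.
                        compiler_fence(Ordering::SeqCst);
                        v
                    }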

                • comex an hour ago
                  This is still unsound (in both C and Rust), because the compiler can break data dependencies by e.g. replacing a value with a different value known to be equal to it. A compiler barrier doesn't prevent this. (Neither would a hardware barrier, but with a hardware barrier it doesn't matter if data dependencies are broken.) The difficulty of ensuring the compiler will never break data dependencies is why compilers never properly implemented consume. Yet at the same time, this kind of optimization is actually very rare in non-pathological code, which is why Linux has been able to get away with assuming it won't happen.
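
                  A contrived sketch of the kind of substitution meant here (hypothetical shapes, not real kernel code; assume `known` points to published data):

                      use std::sync::atomic::{AtomicPtr, Ordering};

                      static SLOT: AtomicPtr<u32> = AtomicPtr::new(std::ptr::null_mut());

                      fn reader(known: *mut u32) -> u32 {
                          let p = SLOT.load(Ordering::Relaxed);
                          if p == known {
                              // Having proven p == known, the compiler may use
                              // `known` for the dereference; the address then no
                              // longer depends on the load of SLOT, so the
                              // hardware's dependency ordering no longer applies.
                              unsafe { *p }
                          } else {
                              0
                          }
                      }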
    • loeg 5 hours ago
      Does anything care about Alpha? The platform hasn't been sold in 20 years.
      • jcranmer 5 hours ago
        It's a persistent misunderstanding that release-consume is about Alpha. It's not; in fact, Alpha is one of the few architectures where release-consume doesn't help.

        In a TSO architecture like x86 or SPARC, every "regular" memory load/store is effectively a release/acquire by default. Using release/consume or relaxed provides no extra speedup on these architectures. In weak memory models, you need to add acquire barriers to get release/acquire semantics. But also, most weak memory models have a basic rule that a data-dependent load has an implicit ordering dependency on the values used to compute its address (most notably, loading *p has an implicit dependency on p).

        The goal of release/consume is to be able to avoid having an acquire fence if you have only those dependencies--to promote a hardware data dependency semantic rule to a language-level semantic rule. For Alpha's ultra-weak model, you still need the acquire fence in this mode, it doesn't help Alpha one whit. Unfortunately, for various reasons, no one has been able to work out a language-level semantics for consume that compilers are willing to implement (preserving data dependencies through optimizations is a lot more difficult than it appears), so all compilers have remapped consume to acquire, making it useless.
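
        Concretely, the pattern consume was meant for looks like this (sketched with std atomics; Rust and C++ have to ask for Acquire where consume would have leaned on the hardware dependency rule):

            use std::sync::atomic::{AtomicPtr, Ordering};

            static PUBLISHED: AtomicPtr<u64> = AtomicPtr::new(std::ptr::null_mut());

            fn publish(data: Box<u64>) {
                // Release: the initialization of *data happens-before the
                // pointer becomes visible to readers.
                PUBLISHED.store(Box::into_raw(data), Ordering::Release);
            }

            fn read() -> Option<u64> {
                // Consume would only need the "*p depends on p" rule;
                // lacking it, this has to be an Acquire load.
                let p = PUBLISHED.load(Ordering::Acquire);
                if p.is_null() { None } else { Some(unsafe { *p }) }
            }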

      • gpderetta 5 hours ago
        Consume is trivial on alpha: it is the same as acquire (it always needs a #LoadLoad). It is also the same as acquire (and relaxed) on x86 and SPARC (a plain load; #LoadLoad is always implied).

        The only place where consume matters is on relaxed but not too relaxed architectures like ARM and POWER, where consume relies on the implicit #LoadLoad of controls and data dependencies.

        • bonzini 5 hours ago
          Also on alpha there's only store-store and full memory barriers. Acquire is very expensive.
  • chrismsimpson 6 hours ago
    > The truth of the matter, though, is that the Rust community seems to want to take a different approach to concurrent data access.

    Not knowing anything about development of the kernel, does this kind of thing create a two tier Linux development experience?

    • zaphar 6 hours ago
      Not sure if it introduces a tiered experience or not. But reading the article, it appears that the Rust devs advocated for an API that is clearer in its semantics, with the tradeoff that understanding how it interacts with C code now requires understanding two APIs. How this shakes out in practice remains to be seen.
      • thenewwazoo 3 hours ago
        Advocating for an API with clearer semantics has, afaict, been most of the actual work of integrating Rust into the kernel.
        • zaphar 3 hours ago
          That is my understanding from the outside as well. The core question here should, I think, be whether the adoption and spread of clearer semantics via Rust is worth the potential for confusion and misunderstandings at the boundaries between C and Rust. From the article it appears that this specific instance actually resulted in identifying issues in the usage of the C APIs here that are getting scrutiny and fixes as a result. That would indicate the introduction of Rust is causing the trend line to go in the correct direction, in at least this instance.
          • thenewwazoo 3 hours ago
            That's been largely my experience of RIIR over years of work in numerous contexts: attempting to encode invariants in the type system results in identifying semantic issues, over and over.

            edit to add: and I'm not talking about compilation failures so much as design problems. When the meaning of a value is overloaded, or when there's a "you must do Y after X and never before" and then you find you can't write equivalent code in all cases, and so on. "But what does this mean?" becomes the question to answer.
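
            As a toy example of the "Y after X" case (names invented for illustration):

                use std::marker::PhantomData;

                struct Unarmed;
                struct Armed;

                // Hypothetical handle whose state lives in the type.
                struct Trigger<State> { _state: PhantomData<State> }

                impl Trigger<Unarmed> {
                    fn new() -> Self { Trigger { _state: PhantomData } }
                    fn arm(self) -> Trigger<Armed> { Trigger { _state: PhantomData } }
                }

                impl Trigger<Armed> {
                    fn fire(&self) { /* only reachable after arm() */ }
                }

                fn demo() {
                    Trigger::new().arm().fire(); // compiles
                    // Trigger::new().fire();    // rejected: fire() needs Armed
                }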

  • epolanski 5 hours ago
    What is your take on these names, as opposed to "atomic_read" and "atomic_write"?
    • gpm 5 hours ago
      The problem with atomic_read and atomic_write is that some people will interpret that as "atomic with a sequentially consistent ordering" and some as "atomic with a relaxed ordering" and everything in between. It's a fine name for a function that takes an argument that specifies memory ordering [1]. It's not great for anything else.

      READ_ONCE and WRITE_ONCE signal that there's more nuance than that, and try to convey what that nuance is.

      [1] E.g. in rust anything that takes https://doc.rust-lang.org/std/sync/atomic/enum.Ordering.html
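
      i.e. something along these lines (illustrative only, not an existing API):

          use std::sync::atomic::{AtomicU32, Ordering};

          // "atomic_read" becomes a fine name once the caller has to spell
          // out the ordering; the name alone no longer has to carry it.
          fn atomic_read(v: &AtomicU32, order: Ordering) -> u32 {
              v.load(order) // note: Release/AcqRel are invalid for loads
          }

          fn example(v: &AtomicU32) {
              let _a = atomic_read(v, Ordering::Relaxed);
              let _b = atomic_read(v, Ordering::SeqCst);
          }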

    • bjackman 22 minutes ago
      Those things both exist in the kernel and they refer to CPU atomics similar to std::atomic in C++.
    • kccqzy 4 hours ago
      I think “atomic” implies something more than just “once” because for atomic we customarily consider the memory order with that memory access, but “once” just implies reading and writing exactly once. Neither are good names because the kernel developers clearly assumed some kind of atomicity with some kind of memory ordering here but just calling it “atomic” doesn’t convey that.