36 points by PaulHoule 7 hours ago | 8 comments
  • throwaway81523 2 minutes ago
    GCC already has this for x64, I thought. https://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html

    RISC-V has no carry bit and this whole thing becomes awkward.

    I am under the impression that boost::multiprecision has specialized templates for 128 and 256 bit math, but maybe I'm wrong. In practice when I've wanted extended precision, I've just used GMP or a language with bignums.

    I would expect the best x86 machine code for many 128 bit operations would use XMM instructions, no? But I haven't investigated.
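    For reference, a minimal sketch of the GCC/Clang builtin mentioned above (it is not available as a builtin type in MSVC):

    ```cpp
    #include <cstdint>
    #include <iostream>

    int main() {
        // GCC/Clang builtin on 64-bit targets; squaring the largest
        // uint64_t value needs the full 128-bit result.
        unsigned __int128 a = (unsigned __int128)UINT64_MAX * UINT64_MAX;
        // iostream has no operator<< for __int128, so print the halves.
        std::cout << std::hex
                  << (uint64_t)(a >> 64) << " " << (uint64_t)a << "\n";
    }
    ```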

  • ThatGuyRaion 18 minutes ago
    Question for those smarter than me: What is an application for an int128 type anyways? I've never personally needed it, and I laughed at RISC-V for emphasizing that early on rather than... standardizing packed SIMD.
  • b1temy 23 minutes ago
    I understand why a non-standard compiler-specific implementation of int128 was not used (Besides being compiler specific, the point of the article is to walk through an implementation of it), but why use

    > using u64 = unsigned long long;

    ? Although in practice this is _usually_ an unsigned 64-bit integer, the C++ Standard does not technically guarantee it; all it says is that the type needs to be _at least_ 64 bits. [0]

    I would use std::uint64_t which guarantees a type of that size, provided it is supported. [1]
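    One way to make that assumption explicit is a compile-time check, so a platform where it fails breaks the build instead of silently changing overflow behavior (a sketch, not the article's code):

    ```cpp
    #include <cstdint>
    #include <cstdio>
    #include <limits>

    // The alias the article uses; only guaranteed to be at least 64 bits.
    using u64 = unsigned long long;

    // Fail the build if the alias is not exactly 64 value bits wide.
    static_assert(std::numeric_limits<u64>::digits == 64,
                  "u64 must be exactly 64 bits");
    // std::uint64_t, where provided, is exactly 64 bits with no padding.
    static_assert(std::numeric_limits<std::uint64_t>::digits == 64,
                  "uint64_t must be exactly 64 bits");

    int main() { std::puts("ok"); }
    ```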

    Re: Multiplication: regrouping our u64 digits

    I am aware that more advanced and faster algorithms exist, but I wonder if something simple like Karatsuba's algorithm [2], which uses 3 multiplications instead of 4, could be a quick win for performance over the naive method used in the article. Though since the article mentions that the compiler-specific unsigned 128-bit integers more closely resemble the ones created in the article, I suppose there must be a reason for that method to be used instead, or something I missed that makes this method unsuitable here.

    Speaking of which, I would be interested to see how all these operations fare against compiler-specific implementations (as well as comparisons between different compilers). [3] The article only briefly mentioned that their multiplication method is similar to that of the builtin `__uint128_t` [4], but did not go into detail or mention similarities/differences with their implementation of the other arithmetic operations.

    [0] https://en.cppreference.com/w/cpp/language/types.html The official standard needs to be purchased, which is why I did not reference that. But it should be under the section basic.fundamental

    [1] https://en.cppreference.com/w/cpp/types/integer.html

    [2] https://en.wikipedia.org/wiki/Karatsuba_algorithm

    [3] I suppose I could see for myself using godbolt, but I would like to see some commentary/discussion on this.

    [4] And did not state for which compiler, though by context, I suppose it would be MSVC?

    • Joker_vD 18 minutes ago
      Since they don't calculate the upper 128-bits of the product, they use only 3 multiplications anyway.
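      Concretely, that regrouping can be sketched like this (the struct and names are illustrative, not the article's code): the cross terms only contribute their low 64 bits, so one full 64x64->128 multiply plus two truncating 64x64->64 multiplies suffice.

      ```cpp
      #include <cstdint>
      #include <cstdio>

      // Illustrative 128-bit value as two 64-bit halves.
      struct u128 { uint64_t lo, hi; };

      // Low 128 bits of a 128x128 product with 3 multiplies: one full
      // 64x64->128 (via the GCC/Clang builtin here) and two truncating
      // 64x64->64 for the cross terms, whose high halves overflow out.
      u128 mul(u128 a, u128 b) {
          unsigned __int128 p = (unsigned __int128)a.lo * b.lo;
          uint64_t hi = (uint64_t)(p >> 64) + a.lo * b.hi + a.hi * b.lo;
          return { (uint64_t)p, hi };
      }

      int main() {
          u128 r = mul({2, 1}, {3, 0});  // (2^64 + 2) * 3 = 3*2^64 + 6
          std::printf("%llu %llu\n", (unsigned long long)r.hi,
                                     (unsigned long long)r.lo);
      }
      ```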
      • b1temy 11 minutes ago
        You are right. Not sure how I missed/forgot that. In fact, I think the entire reason I was reminded of the algorithm was because I saw the words "3 multiplications" in the article in the first place. Perhaps I need more coffee...
  • beached_whale an hour ago
    I am so happy that MSVC added 128-bit integers to their standard library in order to do ranges distance of uint64_t iota views. One type alias away from int128s on most machines running gcc/clang/msvc.
  • PaulHoule 2 hours ago
    Makes me think of the bad old days where the platform gave you 8-bit ints and you built everything else yourself... or AVR-8.
    • Neywiny 42 minutes ago
      I guess modern compilers (meaning anything Arduino-era and up, at least when I first got into them in the mid-2010s) abstract that away; while it's true that it's doing that under the hood, we at least don't have to worry about it.
  • reactordev 3 hours ago
    Tangential. A long time ago at a company far far away, this is how we did UUIDs that made up a TenantId and a UserId, using this exact same logic, minus the arithmetic. Great stuff.

    (We wanted something UUID-like but deterministic that we could easily decompose and do RBAC with; this was prior to the invention of JWTs, OAuth, and scopes. It worked at the time.)

  • Joker_vD 42 minutes ago
    > On division: There is no neat codegen for division.

    Wait, what? I'm fairly certain that you can do a 128-bit by 128-bit division using x64's 128-bit-by-64-bit division instruction, which gives you only a 64-bit quotient and remainder. The trick is to pre-multiply both dividend and divisor by a large enough power of 2 so that the "partial" quotients and remainders that the hardware instruction would need to produce will fit into 64 bits. On the whole, IIRC you need either 1 or 2 division instructions, depending on how large the divisor is (if it's too small, you need two divisions).
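    The full 128-by-128 normalization trick is more involved, but the simpler 128-by-64 building block it rests on can be sketched like this (the `divq` stand-in and the function names are illustrative; real code would use inline asm or MSVC's `_udiv128`):

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Stand-in for x64's DIV instruction: divides (hi:lo) by d, assuming
    // hi < d so the 64-bit quotient cannot overflow.
    static uint64_t divq(uint64_t hi, uint64_t lo, uint64_t d,
                         uint64_t* rem) {
        unsigned __int128 n = ((unsigned __int128)hi << 64) | lo;
        *rem = (uint64_t)(n % d);
        return (uint64_t)(n / d);
    }

    // 128-bit / 64-bit long division with 64-bit "digits": divide the
    // high digit first, then feed its remainder into the low digit.
    // Two DIVs at most; when n_hi < d the first quotient digit is 0.
    static void udiv128by64(uint64_t n_hi, uint64_t n_lo, uint64_t d,
                            uint64_t* q_hi, uint64_t* q_lo,
                            uint64_t* rem) {
        uint64_t r;
        *q_hi = divq(0, n_hi, d, &r);   // 0 < d, so precondition holds
        *q_lo = divq(r, n_lo, d, rem);  // r < d by definition of %
    }

    int main() {
        uint64_t qh, ql, r;
        udiv128by64(10, 7, 3, &qh, &ql, &r);  // (10*2^64 + 7) / 3
        std::printf("%llu %llu %llu\n", (unsigned long long)qh,
                    (unsigned long long)ql, (unsigned long long)r);
    }
    ```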

  • azhenley 2 hours ago
    > we use 256-bit integers in our hot paths and go up to 564 bits for certain edge cases.

    Why 564 bits? That’s 70.5 bytes.

    • wavemode 4 minutes ago
      Maybe it's a typo for 512. I'm not even sure how you would achieve 564 in this context.
    • its_ubuntu 17 minutes ago
      It was a nice, round number.