117 points by thunderbong | 4 days ago | 15 comments
  • armada6514 days ago
    > We now know that LP64 was the preferred choice and that it became the default programming model for 64-bit operating systems

    That is incorrect: Windows never adopted the LP64 model. Only pointers were increased to 64-bit, whereas long remained 32-bit. The long datatype should be avoided in cross-platform code.

    • jagrsw4 days ago
      All of C's native datatypes should be avoided for cross-platform data structures (networking, databases, file storage) because the standard only guarantees minimum sizes. An additional problem is endianness.

      uint64_t is a bit verbose; many projects redefine it as u64.
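
      For example, a minimal sketch of the kind of aliases many codebases define (the short names are just a common convention, nothing standard):

        #include <stdint.h>

        /* Short aliases for the fixed-width types. */
        typedef uint8_t  u8;
        typedef uint16_t u16;
        typedef uint32_t u32;
        typedef uint64_t u64;
        typedef int8_t   i8;
        typedef int16_t  i16;
        typedef int32_t  i32;
        typedef int64_t  i64;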

      • coldpie4 days ago
        I think I agree, but I'd be interested in more discussion about this.

        I always understood the native types to be the "probably most efficient" choice, for when you don't actually care about the width. For example, you'd choose int for a loop index variable which is unlikely to hit width constraints because it's the "probably most efficient" choice. If you're forced to choose a width, you might choose a width that is less efficient for the architecture.

        Is that understanding correct? Historically or currently?

        Either way, I think I now agree that unspecified widths are an anti-feature. There's value in having explicitly specified limits on loop index variables. When you write "for(int32_t i; ...)", it causes you to think a bit, "hey can this overflow?" And now your overflow analysis will be true for all arches, because you thought about the width that is actually in use (32-bits, in this case). It keeps program behavior consistent & easier to reason over, for all arches.

        That's my thinking, but I'd be interested to hear other perspectives.
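
        For example (a trivial sketch; the bound is just a made-up count to make the point):

          #include <stdint.h>
          #include <stdio.h>

          #define N 100000  /* fits easily in int32_t everywhere; would overflow a 16-bit int */

          int main(void) {
              int64_t sum = 0;
              /* The width of i is explicit, so "can i overflow?" has the same
                 answer on every conforming platform: not until 2147483647. */
              for (int32_t i = 0; i < N; i++) {
                  sum += i;
              }
              printf("%lld\n", (long long)sum);
              return 0;
          }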

        • 2ndbigbang4 days ago
          There are int_fast32_t and int_least32_t, but it is probably less confusing to just use exact-sized types (and would make porting to other architectures simpler).
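
          For what it's worth, a quick way to see what those map to on a given platform (the exact widths vary by ABI; on x86-64 glibc, for example, int_fast32_t is commonly 8 bytes):

            #include <stdint.h>
            #include <stdio.h>

            int main(void) {
                /* Exact, "at least", and "fastest at least" 32-bit types. */
                printf("int32_t:       %zu bytes\n", sizeof(int32_t));
                printf("int_least32_t: %zu bytes\n", sizeof(int_least32_t));
                printf("int_fast32_t:  %zu bytes\n", sizeof(int_fast32_t));
                return 0;
            }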
        • nyrikki4 days ago
          Historically that is incorrect.

          Remember that the C standard came about many years after the language was already in use; the C abstract machine wasn't explicitly designed for portability and performance, it was documenting an existing system.

          C compilers being performant and portable is partly due to luck but mostly due to hard work by very smart people.

          Last time I looked, Clang's analysis and optimization code was more than a quarter of a million lines, as an example.

          C being imperative is probably a lens for understanding how the kinds of optimization you are talking about are opportunistic and not intrinsic.

          Another lens is to consider that the PDP-11 had flat memory, but NUMA, L2 and L3 caches, and deep pipelines make the compiler far more complicated despite maintaining that simple model in the abstract machine.

          Ironically, FORTRAN was written on IBM machines that had decrementing index registers.

          While its base-one indexing is usually explained as simply a choice of lowest value, in the historical context it is better conceptualized as a limit index.

          That more closely matches what you are describing above. If you look at the most recent C++ versions adding ranges, that is closer to both FORTRAN and the above, IMHO.

          https://en.cppreference.com/w/cpp/ranges

          That history is complicated because Dennis Ritchie's work in college was on what he called 'loop programming', what we would call the structured paradigm today.

          That does have the concept that any loop whose number of iterations you know will always halt, but being imperative, C doesn't really enforce that, although any individual compiler may.

          C compilers are reasonably effective at optimization, but that is in spite of the limits of the C abstract machine, not because of them.

          As shown above, all it takes is one powerful actor like MS making a decision that was probably justified at the time to introduce side effects across all platforms.

          Often it is safe to assume that the compiler will make good decisions, other times you have to force it to make the correct decision.

          But using the default types is more a choice to value portability than one about performance, IMHO.

          • nyrikki4 days ago
            Probably should point out that for loops in C are syntactic sugar for while loops at the language level.

                 for loop
                 Executes a loop.
            
                 Used as a shorter equivalent of while loop.
            
            
            https://en.cppreference.com/w/c/language/for
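
            In other words, roughly (a sketch of the usual desugaring; the remaining differences are the scope of i and where `continue` jumps to):

              #include <stdio.h>

              int main(void) {
                  /* A for loop... */
                  for (int i = 0; i < 3; i++) {
                      printf("for:   %d\n", i);
                  }

                  /* ...is roughly sugar for this while loop. */
                  {
                      int i = 0;
                      while (i < 3) {
                          printf("while: %d\n", i);
                          i++;
                      }
                  }
                  return 0;
              }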

            I am amazed at how good compilers are today.

            There is also the difference between portability meaning that it compiles versus meaning that the precision and behavior are similar across platforms.

            long would be more portable for a successful compilation but may cause side effects.

            I shouldn't have switched meanings in the above reply context.

        • kibwen4 days ago
          > I always understood the native types to be the "probably most efficient" choice, for when you don't actually care about the width.

          This itself is a platform-specific property, and is thus non-portable (not in the sense that your code won't run, but in the sense that it might be worse for performance than just using a known small integer when you can).

        • marcosdumay3 days ago
          > I always understood the native types to be the "probably most efficient" choice

          int32_t on Windows and int64_t on Unixes can't both be the "probably most efficient" choice on the same machine.

          Besides, struct bloating is a perfectly fine C optimization that your compiler can do at any time to get the most efficient implementation without that "probably" part. It almost never does, though, because it's a shitty operation and because CPUs that handle 64 bits perfectly but fumble around with 32 bits are a historic oddity only.

      • EasyMark3 days ago
        I so much wish stdint had gone with the more sane u64, i64, u32, i32, etc. I redefine them for my personal projects but stick to the standard on other projects.
    • af784 days ago
      The article focuses on Linux and FreeBSD, which are LP64.
    • kevin_thibedeau3 days ago
      Long shouldn't be avoided. The rule has always been 32 bits minimum. If that covers the range you need, you're good on every standards-compliant platform; otherwise you use a different type. That is how the C native integer types are supposed to be used to maintain forward portability across different word-size architectures. What is wrong is to blindly assume your types are larger than the minimum or exactly equal to the minimum.
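
      One way to make that assumption explicit rather than blind (a sketch; the constant is just an example of a range the code relies on):

        #include <limits.h>
        #include <stdio.h>

        /* The standard guarantees LONG_MAX >= 2147483647, so this can never fire
           on a conforming compiler -- but it documents the range being relied on. */
        #if LONG_MAX < 2147483647L
        #error "long is narrower than the 32 bits this code assumes"
        #endif

        int main(void) {
            long counter = 2000000000L;  /* guaranteed to fit in a long everywhere */
            printf("%ld\n", counter);
            return 0;
        }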
    • BobbyTables23 days ago
      64bit systems have been around for about 20 years.

      In another 20 years when we have 128bit PCs, it will be comforting to know that we’ll still be hamstrung on 32bit integers because of a design choice made in the 1990s.

  • blueflow4 days ago
    > To support those needs, there were clutches like Intel’s PAE, which allowed manipulating up to 64GB of RAM on 32-bit machines without changing the programming model, but they were just that: hacks.

    You can go look up how the 32-bit protected mode got hacked on top of the 16-bit segmented virtual memory that the 286 introduced. The Global Descriptor Table is still with us in 64-bit long mode.

    So it's not PAE that is particularly hacky; it's a broader thing with x86.

    • af784 days ago
      While a PAE system can address more than 4 GiB of physical memory, individual processes still use 32-bit pointers and are therefore still restricted to 4 GiB. I think this is why the author calls PAE a hack.

      In x86-64 long mode and i386 32-bit mode, pointers are really 64- and 32-bit, respectively; I would not call this a hack.

      • rft3 days ago
        To add another layer on top of these hacks, Windows has Address Windowing Extensions [1] that allows a 32bit process to use more than 4GB of RAM. Of course pointers are still 32bit, so you need to map the additional memory into and out of the virtual address space.
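
        Roughly, the dance looks like this (a heavily trimmed sketch: real code needs the "Lock pages in memory" privilege enabled for the account and error handling on every call):

          #include <windows.h>
          #include <string.h>
          #include <stdio.h>

          int main(void) {
              SYSTEM_INFO si;
              GetSystemInfo(&si);

              /* Ask the OS for 64 physical pages (not yet addressable). */
              ULONG_PTR nPages = 64;
              ULONG_PTR pfns[64];
              if (!AllocateUserPhysicalPages(GetCurrentProcess(), &nPages, pfns)) {
                  printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
                  return 1;
              }

              /* Reserve a window in the (32-bit) virtual address space. */
              SIZE_T windowBytes = nPages * si.dwPageSize;
              void *window = VirtualAlloc(NULL, windowBytes,
                                          MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

              /* Map the physical pages into the window, use them, then unmap so
                 the same window can later be pointed at a different set of pages. */
              MapUserPhysicalPages(window, nPages, pfns);
              memset(window, 0xAB, windowBytes);
              MapUserPhysicalPages(window, nPages, NULL);  /* unmap */

              FreeUserPhysicalPages(GetCurrentProcess(), &nPages, pfns);
              return 0;
          }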

        x86 and its history is full of things that look hacky, and might be, but are often there for backward compatibility. If your x86 PC still boots in BIOS mode, it comes up in 16bit real mode [2], ready to run DOS. It then moves through the decades into protected mode and lastly (for x64 systems) long mode.

        [1] https://learn.microsoft.com/en-us/windows/win32/memory/addre... [2] https://wiki.osdev.org/Real_Mode

      • junon3 days ago
        Important missing info, I suppose, for some readers: that's what the "P" in "PAE" stands for, Physical Address Extension. It has no bearing on virtual addresses.
    • lmm4 days ago
      That other hacks exist does not make a given hack any less hacky.
  • zokier4 days ago
    I find it weird that the convention to use char/short/int/long/long long has persisted so widely to this day. I would have thought that already back in the 16 -> 32 bit transition period people would have standardized and moved to stdint.h style types instead (i.e. int32_t etc).

    Sure, that doesn't change pointer sizes, but it would have reduced the impact of the different 64-bit data models, like Unix LP64 vs Windows LLP64

    • jraph4 days ago
      I see two good reasons:

      (1) DX: typing "int" feels more natural and less clunky than choosing some arbitrary size.

      (2) Perf: if you don't care about the size, you might as well use the native size, which is supposed to be faster.

      In Java, people do use the hardware-independent 4 byte ints and 8 byte longs. I guess (1) matters more, or that people think that the JVM will figure out the perf issue and that it'll be possible to micro-optimize if a profile pointed out an issue.

      • epcoa4 days ago
        > Perf: if you don't care about the size, you might as well use the native size, which is supposed to be faster.

        You always care about the size (or should), especially if you're writing C or C++. Though it is often reasonable that 32767 is a sufficient limit and you're guaranteed at least that with int.

        • jraph3 days ago
          What I meant is that if you don't specifically need a small int size (because for instance you are going to serialize it into a format that mandates it), and you will not deal with even moderately large ints, you will probably use int even if you could use a smaller type. I've not seen many codebases doing something else.

          Of course you need to think about it. In C, but also in many languages (not Python though, which magically switches to bigint when needed). In Java, the wrong int type won't cause UB, but it will throw (unchecked) exceptions.

      • Tuna-Fish3 days ago
        > which is supposed to be faster.

        If you care about this, you figure out exactly how much you need and always use the smallest type that meets that criterion.

        There have been architectures in the past where the "native" size was in practice faster than the smaller types, but those architectures are now long dead. On all modern architectures, none of the instructions for smaller data types are ever slower than the native size, and while using them doesn't directly win you cycles of execution time in the cpu (because they are no faster either), it wins you better cache utilization. As a rule, the fastest data type is a byte.

        There is no reason to ever use "int", other than inertia.
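
        The cache point in concrete terms (a toy sketch; the element count is arbitrary):

          #include <stdint.h>
          #include <stdio.h>

          #define N 1000000

          /* Same logical data, very different cache footprint. */
          static int64_t wide[N];    /* ~8 MB: spills far outside L2 on most CPUs */
          static int8_t  narrow[N];  /* ~1 MB: much friendlier to the cache       */

          int main(void) {
              printf("wide:   %zu bytes\n", sizeof wide);
              printf("narrow: %zu bytes\n", sizeof narrow);
              return 0;
          }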

        • jraph3 days ago
          I was under the impression that the native int size was still faster on current architectures; I guess I have stuff to read! Thanks for telling me.

          > There is no reason to ever use "int", other than inertia.

          ...and DX, as I said.

    • loeg3 days ago
      The stdint types only date to C99; the 32-bit transition happened much earlier than that.
    • chipdart4 days ago
      > I find it weird that the convention to use char/short/int/long/long long has persisted so widely to this day.

      I don't think this is a reasonable take. Beyond ABI requirements and how developers use int over short, there are indeed requirements where the size of an integer value matters a lot, especially as this has a direct impact on data size and vectorization. To frame your analysis, I would recommend you take a peek at the not-so-recent push for hardware support for IEEE 754 half-precision float/float16 types.

      • zokier4 days ago
        The cases where you want a platform-specific integer width (that is, something other than size_t/uintptr_t) are extremely niche compared to the cases where you want an integer to have a specific width.

        I don't see the relation to fp16; I don't think anyone is pushing for `float` to refer to fp16 (or fp64 for that matter) anywhere. `long double` is already bad enough.

        • chipdart4 days ago
          > The cases where you want platform-specific integer width (that is not something like size_t/uintptr_t) is extremely niche (...)

          I think you got it backwards. There are platform-specific ints because different processors have different word sizes. Programming languages then adjust their definitions for these word sizes because they are handled naturally by specific processors.

          So differences in word sizes exist between processors; either programming languages support them, or they don't. Also, there are specific needs to handle specific int sizes regardless of CPU architecture; either programming languages support them, or they don't.

          And you end with "platform-specific integer widths" because programming languages do the right thing and support them.

          • zokier4 days ago
            The fact that we have all these different 64 bit data models demonstrates clearly how the connection between word size and C types is completely arbitrary and largely meaningless. And this is not specific to 64 bit either, same sort of thing happens on 8 bit platforms too. So you can not rely on `int` (or any other type) being word sized.

            Furthermore, I argue that word size is not really something that makes sense to even expose at the language level; the whole concept of a word size is somewhat questionable. CPUs operate on all sorts of things that can have different sizes, and trying to reduce that to a single "word size" is futile.

        • manwe1504 days ago
          My recollection of history is that the standardization of stdint.h happened long after the transition to 32-bit, and it is only just finishing up becoming available on some major compilers now that the transition to 64-bit is well behind us.
          • layer84 days ago
            You could fairly easily create your own portable stdint.h equivalent in C89 (using the preprocessor and INT_MAX etc.). I remember doing that in the 90s, before C99.
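
            Something along these lines (a rough sketch of the limits.h trick; the type names are just examples):

              #include <limits.h>

              /* C89-compatible fixed-width typedefs, chosen by testing the
                 built-in types' limits at preprocessing time. */
              #if UINT_MAX == 0xFFFFFFFFUL
              typedef int           int32;
              typedef unsigned int  uint32;
              #elif ULONG_MAX == 0xFFFFFFFFUL
              typedef long          int32;
              typedef unsigned long uint32;
              #else
              #error "no 32-bit integer type found"
              #endif

              #if USHRT_MAX == 0xFFFF
              typedef short          int16;
              typedef unsigned short uint16;
              #else
              #error "no 16-bit integer type found"
              #endif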

            However, it was also conventional wisdom to use int by default to match the architecture’s “natural” word size, and maybe add a preprocessor check when you needed it to be 32-bit.

            Another consideration is that the built-in types have to be used with the C standard library functions to some extent.

          • jcranmer4 days ago
            stdint.h was introduced in C99, and MSVC didn't introduce it until 2010.
  • tzot4 days ago
    x32 ABI support exists at least in the kernel of Debian (and Debian-based) distributions, and I know because I've built Python versions (along with everything else needed for specific workloads) as x32 executables. The speed difference was minor but real, and the memory usage was reduced quite a lot. I've worked with a similar ABI known as n32 (there was o32 for old 32, n32 for new 32 and n64 for fully 64-bit programs) on SGI systems with 64-bit-capable MIPS CPUs; it made a difference there too.

    Unfortunately I've read articles where quite-more-respected-than-me people said in a nutshell “no, x32 does not make a difference”, which is contrary to my experience, but I could only provide numbers where the reply was “that's your numbers in your case, not mine”.

    Amazon Linux kernel did not support x32 calls the last time I tried, so you can't provide images for more compact lambdas.

    • gregw24 days ago
      For the curious, "x32" Linux is an ILP32 programming model (32-bit long and pointers) on top of the 64-bit instruction set. There is some lwn.net 2012 commentary on performance implications at: https://lwn.net/Articles/503541/
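
      A quick way to see the model, for anyone with a toolchain that still ships x32 support (gcc's -mx32 flag; compare with a plain -m64 build):

        #include <stdio.h>

        int main(void) {
            /* gcc -m64  (LP64):                       long = 8, pointer = 8
               gcc -mx32 (ILP32 on 64-bit registers):  long = 4, pointer = 4 */
            printf("sizeof(int)    = %zu\n", sizeof(int));
            printf("sizeof(long)   = %zu\n", sizeof(long));
            printf("sizeof(void *) = %zu\n", sizeof(void *));
            return 0;
        }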
    • marcosdumay3 days ago
      I don't know where you get that "it makes no difference" opinion.

      Back when the VPSes you could rent had 256MB of RAM, or sometimes even 128MB, it was common knowledge that using a 32-bit distro would have a huge impact on your memory usage.

      Maybe you are reading those opinions wrong, and what they are really saying is "it's not pointers filling 4GB of RAM, the pointer size makes no difference on modern machines"? Because I can agree with that one.

      • tzot3 days ago
        > I don't know where you get that "it makes no difference" opinion.

        It was a far hotter topic back when.

        After so many years, the only article I remember and could locate (though unfortunately not one I commented on) is this one:

        https://flameeyes.blog/2012/06/19/is-x32-for-me-short-answer...

        There are other commenters there, though, who mention the cache pressure and performance difference.

    • zh34 days ago
      Indeed, for many years we've been running multiple systems with x86_64 kernels and a 32-bit userspace, running many standard 32-bit applications (including browsers); the only thing we've ever needed to do is run 'linux32' before starting X so that 'uname' reports i686 rather than x86_64.
      • martijnvds4 days ago
        The X32 ABI is not the same as the 32-bit mode used to run "i686" binaries on x86_64 (that would be the i386 ABI).
    • musicale3 days ago
      I miss x32 - it seemed like a nice idea. (And I'm a bit disappointed that iOS and Android didn't adopt something similar for ARM to save memory on phones.)

      Then again, I also thought segmentation/segment registers might be useful for bounds checking, as in Multics and in the original version of Google Native Client.

  • faragon4 days ago
    Using indexes instead of pointers in data structures works well, and the cost of the base address + offset is negligible, as similar address calculation is already generated by the compiler when accessing an element of a data structure. In addition, note that indexes can be used as offsets, or as actual indexes scaled by the size of an individual element; in the latter case, non-trivial data structures with e.g. >= 32-byte elements could address hundreds of gigabytes of RAM.

    A practical use: bit fields can be convenient, e.g. having 32-bit indexes with the high bit used for the color in a red-black tree. And if dynamically sized items are required in the tree nodes, these could live in separate 32-bit-addressable memory pools.
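
    A minimal sketch of what such a node might look like (the names, pool size, and layout are made up for illustration):

      #include <stdint.h>

      #define NIL        0xFFFFFFFFu  /* sentinel "null" index                */
      #define COLOR_BIT  0x80000000u  /* red/black flag, in the parent field  */

      /* A red-black tree node using 32-bit indexes into a node pool
         instead of pointers, with the color packed into the high bit. */
      struct rb_node {
          uint32_t left;    /* index of left child, or NIL  */
          uint32_t right;   /* index of right child, or NIL */
          uint32_t parent;  /* index of parent | color bit  */
          uint32_t key;
      };

      static struct rb_node pool[1u << 20];  /* the base address lives here only */

      static struct rb_node *node(uint32_t idx) {
          return &pool[idx];                 /* base + idx * sizeof(struct rb_node) */
      }

      static uint32_t parent_of(uint32_t idx) {
          return node(idx)->parent & ~COLOR_BIT;
      }

      static int is_red(uint32_t idx) {
          return (node(idx)->parent & COLOR_BIT) != 0;
      }

    Compared with 8-byte pointers, the links take half the space per node, and the indexes remain valid if the pool is moved, grown, or serialized to disk.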

  • gnabgib4 days ago
    Small discussion already (16 points, 10 hours ago, 7 comments) https://news.ycombinator.com/item?id=41768144
  • gregw24 days ago
    If I recall correctly, UNIX vendors in the late 90s were debating a fair bit internally and amongst each other whether to use LP64 or ILP64 or LLP64 (where long longs and pointers were 64bit).

    ooh, found a link to a UNIX Open Group white paper on that discussion and reasoning why LP64 should be/was chosen:

    https://unix.org/version2/whatsnew/lp64_wp.html

    And per Raymond Chen, why Windows picked LLP64: https://devblogs.microsoft.com/oldnewthing/20050131-00/?p=36... and https://web.archive.org/web/20060618233104/http://msdn.micro...

    For some history of why ILP32 was picked for the 1970s 16-to-32-bit transition of C + Unix System V (Windows 3.1 and Mac OS were LP32), see John Mashey's 2006 ACM piece, particularly the section "Early Days": https://queue.acm.org/detail.cfm?id=1165766

    No peanut gallery comments from OS/400 guys about 128-bit pointers/object handles/single store address space in the mid-1990s please! That's not the same thing and you know it! (j/k. i'll stop now)

    • senkora4 days ago
      From the "Early Days" section:

      > PDP-11s still employed (efficient) 16-bit int most of the time, but could use 32-bit long as needed. The 32-bitters used 32-bit int most of the time, which was more efficient, but could express 16-bit via short. Data structures used to communicate among machines avoided int.

      Oh, interesting. So "short" meant 16-bit portable integer, "long" meant 32-bit portable integer, and "int" meant fast non-portable integer.

  • crest4 days ago
    What the FreeBSD ISO size comparison overlooks is that to provide 32-bit application compatibility, FreeBSD/amd64 will include an i386 copy of all libraries (unless deselected).
    • jmmv4 days ago
      Oh, good point. Somehow I missed that when looking at the contents. Will need to check again why.
    • yjftsjthsd-h4 days ago
      Does Debian not include multilib by default?
  • jauntywundrkind4 days ago
    Talking about the ISA needing to spend so much time addressing memory, I'm reminded of the really interesting Streaming Semantic Registers (SSRs) in Occamy, the excellent PULP group's 4096-core RISC-V research multichip design. https://arxiv.org/abs/1911.08356

    Just like the instruction pointer which implicitly increments as code executes, there are some dedicated data-pointer registers. There's a dedicated ALU for advancing/incrementing, so you can have interesting access patterns for your data.

    Rather than loops needing to load data, compute, store data, and loop, you can just compute and loop. The SSRs give the cores a DSP like level of performance. So so so neat. Ship it!

    (Also, what was the name of the x86 architecture some Linux distros were shipping with 32-bit instructions & address space, but using the new x86-64 registers?)

    • tzot4 days ago
      Re your parenthesized question: x32, as discussed in the article, is an ABI using the full x86-64 instruction set and registers, but pointers are 32-bit. I believe you are talking about the same thing, because any “32-bit instructions” (assumably x86/i686 instructions) cannot use “new x86-64 registers” (either in full count of registers or their 64-bit width).
  • ngcc_hk3 days ago
    This is crazy. It's like asking why a baby doesn't start out eating spaghetti, rice, or noodles. Attacking people back then and saying they were silly not to think ahead totally ignores that nearly everyone has had to handle transitions driven by advances in chip technology. The fact that everyone had to do it means it is a human reality to learn from, not to ignore.

    One lesson is to check and disallow, i.e. not ignore, the unused address bits, like DEC did (but where is DEC now, btw). Tbh, look at C: how many features are not disallowed but just left undefined in the standard? Hence I wonder whether, even there, we simply have to accept human fallibility and deal with it.

    Anyway, it's easy to comment in hindsight. What I think is more important is when people say it cannot be done and hence there is no way forward except a totally new architecture. "x86 cannot do 64-bit", say… which ended up confusing me as to why we have AMD64 in an Intel CPU…

    What we need is something like the way Apple forces migration: except for the Apple II, we have gone from Mac OS… 9… X… macOS, along with hardware changes…

    The IBM mainframe is still running, and its 24-bit addressing is a feature, not a bug or a mistake, from a marketing point of view.

  • renox4 days ago
    The RISC example in the article is a bit weird: on one hand, it may take even more instructions to load an address on a RISC; on the other hand, all the RISCs I know have an 'addi' instruction, so there's no need to do 'li r2, 1; add r3, r1, r2' to add 1 to a register!
  • chasil4 days ago
    Solaris famously compiles everything in (/usr)/bin as 32-bit.

    Alas, my SmartOS test system is gone, or I would show you.

    • quesera4 days ago
      It looks like there's some variability:

        smartos$ uname -a
        SunOS smartos 5.11 joyent_20240701T205528Z i86pc i386 i86pc Solaris
      
      Core system stuff:

        smartos$ file /usr/bin/ls
        /usr/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (Solaris), dynamically linked, interpreter /usr/lib/ld.so.1, not stripped
      
        smartos$ file /bin/sh
        /bin/sh: symbolic link to ksh93
      
        smartos$ file /bin/ksh93
        /bin/ksh93: ELF 64-bit LSB executable, x86-64, version 1 (Solaris), dynamically linked, interpreter /usr/lib/amd64/ld.so.1, not stripped
      
      And then the pkgsrc stuff:

        smartos$ which ls
        /opt/local/bin/ls
      
        smartos$ file /opt/local/bin/ls
        /opt/local/bin/ls: symbolic link to /opt/local/bin/gls
      
        smartos$ file /opt/local/bin/gls
        /opt/local/bin/gls: ELF 64-bit LSB executable, x86-64, version 1 (Solaris), dynamically linked, interpreter /usr/lib/amd64/ld.so.1, not stripped
    • wang_li4 days ago

          $ uname -a
          SunOS bob 5.11 11.4.68.164.2 sun4v sparc sun4v kernel-zone
          $ pwd
          /usr/bin
          $ file * | grep 32-bit | wc
          46 1008 8890
          $ file * | grep 64-bit | wc
          1036 21449 185122
      
      I think Solaris is famous for doing away with static linking.

      E: A Solaris 10 amd64 box is all 32-bit in /usr/bin.

    • yjftsjthsd-h4 days ago
      That was my first thought, too:) But is that x32 (amd64 with 32-bit pointers), or full i386 32-bit code+data?
  • sjsdaiuasgdia4 days ago
    I'm overly annoyed by the AI generated image of CPUs, with one showing a horrible mottled mess where the nice clean grid of contact pads should be.

    There are endless actual pictures of processors from both eras. Using actual images here would have been as fast, possibly faster, than the process of writing a prompt and generating this image.

    • chefandy4 days ago
      And that's why laymen with prompts will never replace trained artists/designers for anything but trivial use cases: the ability to decide a) how best to visually communicate the right thing to the right audience, b) whether generative AI is the best choice to achieve that, with the capability to use more precise tools if not, and c) whether what you have is cruddy enough, or the message superfluous enough, that not using an image is more effective. This image fails, and this is a trivial use case. While it's depressing to see so many lower-end commercial artists lose their livelihood and their depressing wages on the market upstream, I can't help but have a little schadenfreude seeing things like this after so many tech people with Dunning-Kruger-informed confidence about visual communication have gleefully called me a buggy-whip manufacturer.
      • stale20024 days ago
        > why laymen with prompts

        That's the baby-mode stuff, dude. You have missed the forest for the trees if the only thing you can engage with is the simplest, most low-effort use case for AI.

        There are so many other things that can be done with AI. The immediate, obvious stuff would be anything to do with video-to-video AI generation.

        I.e., imagine someone shoots a video the normal way, with all the creative input that involves. And then you take that video and change things in it using AI.

        I don't know, say you see the video and realize that you need to add an additional light source, and you want all the shadows to correct themselves. You could use AI to do that.

        That's just a random, intermediate use case off the top of my head that involves a lot more creative input than just "prompt in, video out".

        I am sure there could be a lot more crazy use cases, but you aren't going to be able to see them, because you are instinctively losing your mind by only talking about the dumbest and easiest use case of prompt engineering.

        • chefandy3 days ago
          I'm a technical artist so I'm pretty familiar with professional film and game production pipelines, and how generative AI is used within them. The genuinely useful tools for NN in production workflows will look like Nuke's copycat tool. Being very familiar with the current state of these tools, I'd be shocked if something like lighting could be controlled anywhere close to precisely enough for real professional work in the foreseeable future. Even with a full 3D scene, forward-thinking tools like Octane can only muster up modest optimizations with NN lighting processing. Starting from a 2D image? Maybe for a real estate listing, online product photo, or personal media, but that's about it. Prompts are just fundamentally one of the least useful interfaces for precise work, and most people involved just don't realize how unforgiving the requirements are for high-end media production.
    • sph4 days ago
      Not only that, they had to manually edit the generated image to add the i386 and x86-64 labels on them.

      When all you have is a hammer...

    • Retr0id4 days ago
      The garbled pads were straight up trypophobia-inducing, genuinely stopped me in my tracks
    • chrsw4 days ago
      I was going to comment on this but then decided not to because I thought I was being too petty. But I'm glad to see other people agree. It's disturbing.

      I see someone else commented that it's probably due to copyright/licensing. I agree there too. That's a shame. So, because of usage policies, we end up with AI-generated pictures that aren't real, aren't accurate, and are usually off-putting in some way. Great.

    • Filligree4 days ago
      First though, you would have needed to find a picture you can be sure isn’t in copyright. Or which is licensed appropriately.
      • Retr0id4 days ago
        I'm sure the same diligence was also performed when constructing the AI's training data set.
      • sjsdaiuasgdia4 days ago
        images.google.com -> search for "386 cpu"

        Click "tools" then "usage rights", pick "creative commons", pick an image.

        Now search for "core cpu" and pick a second image.

        Yeah that sure was hard and time consuming!

    • quesera4 days ago
      I would love it if AI images were tagged with the prompt that generated them.

      I assume PNG/etc image file formats have internal tag-like data structures.

      Browsers could display the image as usual, and show the "origin" tag data alongside HTML alt tag data.

      Of course people could null out, or overwrite the origin data, but this seems like a reasonable default practice.

      • Retr0id4 days ago
        There is indeed support for in-band alt text; it's not especially widely supported, though.
    • hedora4 days ago
      Oh wow, I noticed and thought "When was the last time I saw a JPEG that was this crappily encoded?".

      I must be getting old.
