222 points by blfr 8 days ago | 15 comments
  • Cockbrand 5 days ago
    Around the turn of the millennium I had a Sony Vaio 505TX, which had the same chipset. My machine was running Linux, and I maxed it out to 128MB RAM.

    There was a kernel patch for this chipset back then that treated all memory above the lower 64MB as a RAM disk, which could then be used as swap space.

    This prioritized the faster portion of RAM while still having very fast swapping.

    • Cockbrand 5 days ago
      Too late to edit - I just saw that the Vaio in fact had the 430TX chipset, not the 430FX. Both were artificially capped at 64MB of fast RAM, as they were late Pentium chipsets, and Intel would rather sell you the then-new Pentium II chips and chipsets if you wanted more memory.
      • bayindirh 5 days ago
        > Both were artificially capped at 64MB of fast RAM, as they were late Pentium chipsets, and Intel would rather sell you the then-new Pentium II chips and chipsets if you wanted more memory.

        Intel being Intel, back then and now.

        • rasz 5 days ago
          Intel always used RAM for market segmentation. First to drop parity support on all but the high-end components. Cacheable-RAM limits on all but the high-end components. Trying to monopolize RAM with the 1996 Rambus deal. Locking RAM/FSB multipliers on all but the high-end components. It was one of their go-to enshittification knobs.
    • zozbot234 5 days ago
      In the modern era we'd probably repurpose NUMA support if this issue came up again, so that tasks would prioritize the fast portion of memory but the remainder would be fully usable as RAM (with fewer of the extra copies you'd have from "swap" use).
    • gzread 5 days ago
      That is a hack. It shouldn't need to swap - it should just be able to start using it as normal memory when under memory pressure.
      • Cockbrand 5 days ago
        I'm sure it was much easier to implement than what you're describing. So it's a hack indeed.
  • HerbManic 5 days ago
    It is funny to see how these older machines perform at their upper limits. I'm guessing the idea was that if you needed that much RAM, the sacrifice of L2 cache was a worthwhile trade-off.

    It was only a few weeks ago that I found out the original BeBox computers would switch off the L2 cache when running in dual-CPU mode. It was just a limitation of the memory controller. Again, the thinking was: if you need the extra compute more than memory bandwidth, it would be a worthwhile trade-off.

    • justin66 5 days ago
      > I'm guessing the idea was that if you needed that much RAM, the sacrifice of L2 cache was a worthwhile trade-off.

      The idea was that nobody in their right mind would at the time populate that particular consumer motherboard/chipset with hundreds of megabytes of RAM because it would be hilariously expensive. If you needed that kind of RAM, you were purchasing a much more expensive workstation anyhow.

      By the time 384MB was a merely expensive amount of RAM, nobody would be interested in installing it in a Pentium. Those were the days when Moore's Law was still a very big deal. For that reason the firmware probably never received an update to fix the problem, even if that were possible.

      The docs on that motherboard sort of suggest that it could cache up to 512MB: "This motherboard uses the new pipelined burst cache technology with 512K size and the memory cacheable size from 64MB to 512MB." I can't imagine they ever actually tested that.
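      Under a simple direct-mapped model (an assumption; real pipelined-burst setups varied), the cacheable range scales with the tag width, which is roughly what the docs' 64MB-to-512MB spread implies:

```python
# Simplified direct-mapped model: the tag must distinguish every
# cache-sized slice of physical memory, so the cacheable range is
# cache_size * 2**tag_bits.
def cacheable_bytes(cache_size, tag_bits):
    return cache_size * 2 ** tag_bits

KB, MB = 1024, 1024 * 1024
print(cacheable_bytes(512 * KB, 7) // MB)   # 64  -> the usual 64MB limit
print(cacheable_bytes(512 * KB, 10) // MB)  # 512 -> needs three more tag bits
```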

    • hypercube33 5 days ago
      Honestly asking, though: is it worth that trade-off? I enjoy watching people benchmark older Intel x86 chips, and without cache they are frankly awfully slow. I'm not sure two without cache beat one with. The BeBox ran a totally different processor, though, so I have zero domain knowledge there, which is why I'm genuinely curious.
    • zurn 5 days ago
      Looks like the BeBox motherboard didn't have the external L2 in the first place.

      Besides web sources, logic dictates this as well: since dual-CPU was its selling point, it wouldn't make sense to ship a disabled L2 implementation on the mobo at extra cost. There was no single-CPU model.

      • smallstepforman 5 days ago
        That was a PPC 603/604 limitation if you wanted multiple CPUs.
        • electroly 5 days ago
          They eventually upgraded the BeBox to the 603e; I wonder if the same L2 workaround was used on those models.
          • classichasclass 4 days ago
            Yes. None of the 603 series, including the 603e, was intended for multiprocessing, so the same hacks were required.
  • canpan 5 days ago
    More RAM running slower is still true today. With AM5 you probably cannot enable EXPO with four RAM slots filled the way you can with two. The gap is not that extreme, though.

    https://www.corsair.com/us/en/explorer/diy-builder/memory/2-...

  • krige 5 days ago
    This reminds me that on an Amiga 600 or 1200, if you add more than 4 (IIRC?) megabytes of RAM through usual means, the PCMCIA slot becomes unusable due to addressing conflicts.

    There are workarounds, of course. For instance, the A1208 expansion has a jumper that limits added memory from 8MB to 4MB explicitly so that PCMCIA can be used.
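    For concreteness, the clash is just address arithmetic. A minimal sketch, assuming the commonly cited A600/A1200 map (trapdoor fast RAM growing up from $200000, PCMCIA memory window starting at $600000):

```python
# Assumed A600/A1200 map: trapdoor fast RAM grows up from 0x200000,
# while the PCMCIA memory window starts at 0x600000.
FAST_RAM_BASE = 0x200000
PCMCIA_BASE = 0x600000

# Largest expansion that stays below the PCMCIA window:
limit = PCMCIA_BASE - FAST_RAM_BASE
print(limit // (1024 * 1024))  # 4 (MB) -- hence the 4MB jumper
```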

    • Cockbrand 5 days ago
      In addition, Amigas had three types of RAM to begin with - Chip Mem (shared between the custom chips and the CPU), Slow Mem (exclusive to the CPU but, IIRC, still as slow as Chip Mem) and Fast Mem (exclusive to the CPU and significantly faster).

      And just disabling the upper memory in order to be able to use the PCMCIA slot is a really lazy solution. Kinda typical for Commodore, though. 3rd party vendors offered better designs for their memory expansions.

      • krige 4 days ago
        The A1208 is a third-party solution, or at least is produced by a third party. But yeah, more advanced expansions like the TerribleFire sidestep the PCMCIA issue.
  • Iflal 5 days ago
    How funny is this: we used to spend weeks fitting assets into 4MB, and now we spend weeks trying to figure out why a 'Hello World' microservice is OOMing in a container with 2GB.

    We traded 'Mo RAM' for 'Mo Layers', and in the process we lost the ability to reason about what the hardware is actually doing. Sanglard's breakdowns are always a sobering cold shower for those of us pampered by modern GCs and JITs.

  • consp 5 days ago
    Reminds me a bit of installing one of my 128MB 72-pin SIMM modules in a 486; it has the same caching issues. Most boards will not accept them anyway (I have both FP and EDO ones), but if you put a lower-capacity one in the first slot they will happily boot and accept the full RAM amount if all lanes are occupied (which is not a given on all 486 motherboards). Also, remember to enable the quick RAM check or you will be getting more coffee.
    • Schiendelman 4 days ago
      You mentioning enabling quick ram check just gave me a little shot of nostalgia while having my coffee! Thank you.
  • bellowsgulch 5 days ago
    This would have still been true even roughly a decade later, during the industry's transition from 32-bit to 64-bit computing - and on pre-UEFI systems, where the BIOS tested all of RAM at boot, the more memory you had, the slower you booted!

    Imagine young would-be engineers at the time finding that adding that second stick to their laptop did not, in fact, make their systems magically faster.

  • hsbauauvhabzb 5 days ago
    Many modern apps seem to cache based on total ram installed, and don’t seem to scale well to larger than normal systems. Chrome, I’m looking at you.
  • pipes 5 days ago
    Google says SDRAM in 1997 was 7 to 10 dollars per megabyte. So 384MB would be $3,840, not $40,000. Am I missing something here?
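    For what it's worth, the arithmetic at those quoted per-megabyte prices checks out; the replies below explain why high-density modules broke the commodity pricing anyway:

```python
# Quick check of the quoted 1997 figure of $7-10 per megabyte of SDRAM.
total_mb = 384
low, high = 7 * total_mb, 10 * total_mb
print(low, high)  # 2688 3840 -- nowhere near $40,000
```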
    • eptcyka 5 days ago
      Buying higher-density memory is almost always more expensive. Yes, you could buy hundreds of the cheapest modules at that price, but what is the point if you can only stick 8 of them in any given machine?
    • ErroneousBosh 5 days ago
      I had a desktop PC that I bought (as a pile of bits!) with 512MB of RAM in 1999, and I sure as hell didn't pay more than a couple of hundred for the memory. That might have been EDO rather than SDRAM, though, but I can't see the price difference being that much!
      • rasz 5 days ago
        https://news.ycombinator.com/item?id=47551166

        >128MB DIMM: May 1997 $300. July 1998 $150. July 1999 $99. September 1999 Jiji earthquake happens. September-December 1999 $300. May 2000 $89.

        >Then overproduction combined with dot-com boom liquidations started flooding the market and Feb 2001 $59, by Aug 2001 _256MB_ module was $49. Feb 2002 256MB $34. Finally April 2003 hit the absolute bottom with $39 _512MB_ DIMMs

        In 1999 512MB could cost $400, but it could also cost $1200 :)

      • cyberax 5 days ago
        My computer had 16MB in 1997, and it was lower-range but not the absolute bottom.

        It looks like Anandtech listed 128MB for $300 (not inflation-adjusted) in 1997. It fell to $150 in 1998, and by 1999 you could buy it for $100.

        So 512MB of RAM by the end of 1999 for ~$200 was plausible.

    • p_l 5 days ago
      Possibly inflation adjusted?
      • debugnik 5 days ago
        That would be around $7,900 USD.
  • MrBuddyCasino 5 days ago
    My 1997 mainboard had expandable tag RAM, if I remember correctly. Perhaps this is the issue?
    • angry_octet 5 days ago
      Some motherboards supported larger tag RAM chips, but not all.
    • p_l 5 days ago
      Some chipsets allowed that, but not all.
  • satnhak 4 days ago
    Thanks! Amazing website, so many useful articles. Wish I had a couple of free years to work through these books: https://fabiensanglard.net/Computer_Graphics_Principles_and_...
  • simne 5 days ago
    Also, internal CPU caches grew over time - the 286 and earlier had no cache at all; the 386 was the first to include a translation cache (the TLB) for the MMU, storing the most-used page table entries; later generations sometimes advertised a larger one.

    So yes, even when your CPU could address a similar amount of RAM, it possibly didn't have enough TLB coverage for your application.
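    The "page cache" here is what's now called the TLB, and its reach is simple arithmetic. A sketch, assuming the i386's commonly cited 32-entry TLB and 4KB pages:

```python
# TLB "reach": how much memory the cached translations cover at once.
def tlb_reach(entries, page_size=4096):
    return entries * page_size

print(tlb_reach(32) // 1024)  # 128 (KB) -- tiny next to 386-era RAM sizes
```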

  • wicket 5 days ago
    It sounds like a problem related to memory interleaving. He doesn't say whether the memory modules are identical; my bet is that they differ. It could also be a poorly performing motherboard.
  • sidewndr46 5 days ago
    Does the language in this not make sense to anyone? Is it trying to say that the L2 cache provided by the chipset is not able to access memory past a specific address?
    • angry_octet 5 days ago
      On the 486, the L2 tag memory is a separate external chip, as are the L2 cache chips. Why waste space on physical address bits that will never be used? DOS never uses that much memory. So only low addresses are cached.
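      A rough illustration of why the tag width caps the cacheable range (assuming, for the sake of the example, a direct-mapped 256KB L2 with an 8-bit tag chip; real board configurations varied):

```python
# With an 8-bit tag and a direct-mapped 256KB L2, only
# 2**8 * 256KB = 64MB of physical memory can be told apart;
# anything above that would alias in the tag compare, so it is
# simply left uncached.
CACHE_SIZE = 256 * 1024
TAG_BITS = 8
cacheable_mb = (CACHE_SIZE << TAG_BITS) // (1024 * 1024)
print(cacheable_mb)  # 64
```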
  • shifto 5 days ago
    RIP Rudy.