28 points by mfiguiere a day ago | 4 comments
  • kristianp a day ago
    An overview of the new EPYC Turin server processors is here:

    https://www.phoronix.com/review/amd-epyc-9005

    • jsheard a day ago
      The 9175F is kind of hilarious: it's just 16 cores, but with 512MB of L3 cache between them. L3 cache is a per-chiplet resource, so we can infer it has the same number of chiplets as the 128-core 512MB SKU, but with 112 of the cores disabled. Presumably that chip is aimed squarely at running software with per-core licensing as fast as possible on as few cores as possible.
      • crest a day ago
        Is it confirmed that it isn't just at least 16 working cores, the I/O die, and lots of dummy silicon for mechanical support?
        • jsheard a day ago
          It's confirmed by the fact that it has 512MB of cache - reducing the number of functional chiplets would reduce the total amount of cache proportionately, so it has to be maxed out with the full set of chiplets. The other 16-core SKUs with just 64MB of cache are what you get when you reduce the number of chiplets and fill out the rest of the package with dummy silicon instead.
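
          (A quick sanity check of that chiplet math, assuming the 32MB of L3 per Zen 5 CCD that AMD quotes for this generation; a Python sketch, not something from the thread:)

              # EPYC 9175F back-of-the-envelope, assuming 32MB of L3 per Zen 5 CCD
              L3_PER_CCD_MB = 32
              TOTAL_L3_MB = 512   # advertised L3 on both the 9175F and the 128-core SKU
              CORES = 16          # advertised core count of the 9175F

              ccds = TOTAL_L3_MB // L3_PER_CCD_MB   # 16 chiplets, same as the 128-core part
              cores_per_ccd = CORES // ccds         # 1 active core per chiplet
              print(ccds, cores_per_ccd)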
  • Instead of buying 2 EPYC 4004 chips (4584PX) with server features of marginal use, I stuck 2 Ryzen 7000 CPUs (Ryzen 9 7950X3D) in uATX H13SAE-MF server boards with ECC RAM. They include iGPUs, but whatever. They work fine for load testing 100 and 400 GbE NICs.
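
    (For a rough sense of what saturating a 400 GbE link during a load test implies, here is a sketch using standard Ethernet framing overheads as assumptions, not figures from the comment:)

        # Approximate packet rate to saturate 400 GbE at a 1500-byte MTU
        LINK_BPS = 400e9
        WIRE_BYTES = 1500 + 18 + 8 + 12   # payload + header/FCS + preamble + inter-frame gap
        pps = LINK_BPS / (WIRE_BYTES * 8)
        print(f"~{pps / 1e6:.1f} Mpps")   # roughly 32.5 Mpps per port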
    • kvemkon 15 hours ago
      For a value solution I'd instead take an ASUS B650E/X670E mainboard (with PCIe 5.0 x16 and ECC support) at half the price.

      But I'm not aware of a value 400 GbE NIC. A Mellanox ConnectX-7 400G costs more than 3x as much as the CPU.

      • hi-v-rocknroll 25 minutes ago
        > ASUS B650E/X670E mainboard

        No, no, no. Fuck ASUS and you don't understand the use-case.

        I acquire gear through gray markets just fine because I'm not paying zillions for commercial-channel parts. I can get a 1-port 400 GbE NIC for ~$1k USD. It may be used or an engineering sample, but it works.

  • tiffanyh a day ago

               AmpereOne     EPYC
      Kernel:        6.8     6.10
    
    Weren't there sizable efficiency improvements just from the newer kernel that could explain some of this away?
  • s-mon a day ago
    That's insane! I wonder how the cooling works.
    • wmf a day ago
      It's not that different from gaming AIOs. Personally I'm surprised they used water instead of heat pipes.
      • geerlingguy a day ago
        A lot of the high-end datacenter racks are getting water cooling now—you have a unit in the base of the rack that distributes water to all the servers above with a couple redundant pumps/PSUs.

        Some systems are even water cooling random little components for completely fanless 1U/2U servers... I wonder about the longevity of those systems though!

        With water cooling, you can pipe all that heat out to chillers, and at a certain heat load it makes more sense than handling everything with air exchangers.

        • wmf a day ago
          Right, but that's not what AMD is doing here. They're using water to move heat a few inches. Heat pipes should be more efficient at that distance.