131 points by mfiguiere 3 days ago | 6 comments
  • transpute3 days ago
    > Xe2, Intel is looking to use the same graphics architecture across their product stack.. integrated GPUs as a springboard into the discrete GPU market.

    Linux support for Xe2 and power management will take time to mature, https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...

    Xe SR-IOV improves VM graphics performance. Intel dropped Xe1 SR-IOV graphics virtualization from the upstream i915 driver, but the OSS community has continued improving it in an LTS fork, making steady progress: https://github.com/strongtz/i915-sriov-dkms/commits/master/ & https://github.com/Upinel/PVE-Intel-vGPU?tab=readme-ov-file.
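
    To give an idea of what the fork looks like in use: once the DKMS module is built and loaded, virtual functions are created through the standard PCI sysfs interface and then passed through to guests. Rough sketch below (untested; the PCI address and VF count are placeholders, see the repo READMEs for the actual module parameters and supported kernels):

      # Rough sketch: create Xe1 SR-IOV virtual functions via the standard
      # PCI sysfs interface, assuming the i915-sriov-dkms module is loaded
      # and this runs as root. The PCI address (0000:00:02.0, the typical
      # iGPU slot) and the VF count are illustrative, not from the repos.
      from pathlib import Path

      IGPU = Path("/sys/bus/pci/devices/0000:00:02.0")

      def enable_vfs(count: int) -> None:
          total = int((IGPU / "sriov_totalvfs").read_text())
          if count > total:
              raise ValueError(f"device only exposes {total} VFs")
          # Writing the count creates the VFs, which can then be assigned
          # to guests (e.g. as PCI passthrough devices in Proxmox).
          (IGPU / "sriov_numvfs").write_text(str(count))

      enable_vfs(4)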

    • iforgotpassword3 days ago
      Aww man this is so disappointing. Intel has a pretty good track record with their Linux drivers. Too bad cost-cutting seems to have reached driver development too.
    • teruakohatu3 days ago
      > Intel dropped Xe1 SR-IOV graphics virtualization in the upstream i915 driver,

      I missed this. Wow this is disappointing.

      • shadeslayer2 days ago
        Not sure if we need to support SR-IOV on the HW. VirtIO GPU native contexts should be good enough for most consumers.

        I imagine SR-IOV would be useful for more advanced use cases.

        • transpute2 days ago
          SR-IOV is a rare competitive advantage of Intel GPUs over Nvidia/AMD.

          Why would Intel give up that advantage by directing customers to software GPU virtualization that works on AMD and Nvidia GPUs?

          • shadeslayer a day ago
            Because designing/manufacturing/validating SR-IOV HW is expensive. It's not something that would be useful as a differentiating feature for most consumer-grade HW.
            • transpute a day ago
              Intel vPro CPUs with iGPUs are used by the Fortune 500 enterprise industrial base. Intel hardware is already segmented for enterprise markets and they enable/disable features for specific markets.

              There's lots of hardware competition for consumers, including upcoming Arm laptops from Mediatek and Nvidia. Intel can use feature-limited SKUs in both CPUs and GPUs to target specific markets with cheaper hardware and reduced functionality.

      • reginald78 3 days ago
        I remember being somewhat excited for Intel dGPUs, since I had a real interest in a card that could do GVT-g and might have super low idle power consumption like their iGPUs, which would fit well with my VM server. We ended up with GVT-g canceled, promises of SR-IOV coming eventually, and dGPUs with atrocious idle power consumption!
    • cassepipe3 days ago
      So the state of Xe support on Linux is pretty good? Is it worth it to run Linux on Alder Lake? Can it take advantage of the full power of the iGPU?
  • SG-3 days ago
    I wish they covered things like x264/x265/AV1/etc. encoding/decoding performance and other benefits that aren't just gaming.
    • wtallis3 days ago
      Video encode and decode aren't really GPU functions. They're totally separate IP blocks from the 3D graphics/vector compute part of the GPU. On Intel's previous laptop processor generation (Meteor Lake), the video encode and decode blocks were on an entirely different piece of silicon from the GPU.
      • adrian_b3 days ago
        True. The display controller is also a different block, separated from the GPU and from the video codecs.

        While on Lunar Lake the GPU and the video codec block are on the same tile, they are still in different locations on the compute tile.

        In the new Arrow Lake S desktop CPU, to be announced tomorrow, the GPU is again on a separate tile, like in Meteor Lake. The other two blocks related to video output, i.e. the video codec block and the display controller block, are located on a tile that also contains the memory controller and part of the peripheral interfaces, and which is made using a lower-resolution TSMC process than the CPU and GPU tiles.

      • jsheard3 days ago
        Benchmarking hardware encode is also a pretty specialized rabbit hole since it's not just the performance that varies, but also the quality of the results.
      • dyingkneepad2 days ago
        > the video encode and decode blocks were on an entirely different piece of silicon from the GPU.

        As far as I understand this is not true. It's a different engine within the graphics device, and it shares the execution units.

        • wtallis2 days ago
          For Meteor Lake, Intel provided slides to the press that clearly labeled media blocks on the SoC tile, not the GPU tile. The hardware encode and decode also definitely does not use the shader execution units.
        • shadeslayer2 days ago
          AFAIK the video encode/decode pipeline is separate from the graphics pipeline. But they do reside on the graphics tile.
    • booi3 days ago
      It’s probably just not that interesting. There’s generally a proprietary encode/decode pipeline on chip. It can generally handle most decode operations with CPU help and a very narrow encoding spec mostly built around being able to do it in realtime for broadcast.

      Most of the video you encode on a computer is actually all in software/CPU because the quality and efficiency is better.

      • vbezhenar3 days ago
        > Most of the video you encode on a computer is actually all in software/CPU because the quality and efficiency is better.

        I don't think that's true. I bought a Thinkpad laptop, installed Linux, and one of my issues was that watching a YouTube video put the CPU at 60%+ load. The same video on a MacBook barely scratched the CPU at all. I finally managed to solve the issue by installing Arch; once everything worked as it should, CPU load was around 10% for the same video. I didn't try Windows, but I'd expect things to work well there.

        So most video for the average user is probably hardware decoded.

        • adrian_b3 days ago
          The comment to which you replied was about encoding, not decoding.

          There is no reason to do decoding in software, when hardware decoding is available.

          On the other hand, choosing between hardware encoding and software encoding depends on whether quality or speed is more important. For instance, for a video conference hardware encoding is fine, but for encoding a movie whose original quality must be preserved as much as possible, software encoding is the right choice.

        • foobiekr3 days ago
          Most hardware encoders suck.
      • ramshanker3 days ago
        >>> It can generally handle most decode operations with CPU help and a very narrow encoding spec.

        This is spot on. Video coding specs are like a "huge bunch of tools" and encoders get to choose whatever subset of tools suits them. And then the hardware gets frozen for a generation.

      • KronisLV2 days ago
        > Most of the video you encode on a computer is actually all in software/CPU because the quality and efficiency is better.

        It depends on what you care about more; you don't always need the best possible encoding, even when you're not trying to record/stream something in real time.

        For comparison's sake, I played around with some software/hardware encoding options through Handbrake with a Ryzen 5 4500 and Intel Arc A580. I took a 2 GB MKV file of about 30 minutes of footage I have laying around and re-encoded it with a bunch of different codecs:

          codec   method   time     speed     file size   % of original
          H264    GPU      04:47    200 fps   1583 MB     77 %
          H264    CPU      13:43    80 fps    1237 MB     60 %
          H265    GPU      05:20    206 fps   1280 MB     62 %
          H265    CPU      ~30:00   ~35 fps   would take too long
          AV1     GPU      05:35    198 fps   1541 MB     75 %
          AV1     CPU      ~45:00   ~24 fps   would take too long
        
        So for the average person who wants a reasonably fast encode and has an inexpensive build, many codecs will be too slow on the CPU, in some cases by close to an order of magnitude. If you encode on the GPU instead, you'll get much better speeds while the file sizes are still decent, and the quality of something like H265 or AV1 will in most cases look perceivably better than H264 at similar bitrates, regardless of whether the encode is done on the CPU or GPU.

        So, if I had a few hundred GB of movies/anime locally that I wanted to re-encode to take up less space for long-term storage, I'd probably go with hardware H265 or AV1, and that'd be perfectly good for my needs (I actually did, and it went well).
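
        If you'd rather script that kind of batch job than click through Handbrake, ffmpeg's QSV encoders can do roughly the same thing. A rough sketch (the encoder name and quality option are assumptions that depend on your ffmpeg build and GPU drivers, so treat it as a starting point rather than a recipe):

          # Rough sketch: batch re-encode a folder of MKVs with ffmpeg's
          # Intel QSV AV1 encoder (hevc_qsv works the same way). Encoder
          # availability and the quality option depend on the ffmpeg build
          # and GPU drivers, so verify them with `ffmpeg -encoders` first.
          import subprocess
          from pathlib import Path

          src = Path("movies")
          dst = Path("reencoded")
          dst.mkdir(exist_ok=True)

          for f in sorted(src.glob("*.mkv")):
              subprocess.run([
                  "ffmpeg", "-y", "-i", str(f),
                  "-c:v", "av1_qsv",        # hardware AV1 encode on Arc/Xe
                  "-global_quality", "28",  # rough quality target (lower = better)
                  "-c:a", "copy",           # keep the original audio as-is
                  str(dst / f.name),
              ], check=True)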

        Of course, that's a dedicated GPU and Intel Arc is pretty niche in and of itself, but I have to say that their AV1 encoder for recording/streaming is also really pleasant, so I definitely think that benchmarking this stuff is pretty interesting and useful!

        For professional work, the concerns are probably quite different.

      • Dalewyn3 days ago
        >Most of the video you encode on a computer is actually all in software/CPU because the quality and efficiency is better.

        That was the case up to like 5 to 10 years ago.

        These days it's all hardware encoded and hardware decoded, not least because Joe Twitchtube Streamer can't and doesn't give a flying fuck about pulling 12 dozen levers to encode a bitstream thrice for the perfect encode that'll get shat on anyway by Joe Twitchtok Viewer who doesn't give a flying fuck about pulling 12 dozen levers and applying a dozen filters to get the perfect decode.

        • timc3 3 days ago
          It’s not all hardware encoded - we have huge numbers of transcodes a day and quality matters for our use case.

          Certainly for some use cases speed and low CPU matter but not all.

        • imbnwa3 days ago
          Not sure why this is downvoted; all serious Plex use runs on hardware decode on Intel iGPUs, down to an i3. One only sources compute from the CPU for things like subtitles or audio transcoding.
          • timc3 3 days ago
            Because Plex and gamers streaming is not the only use case for transcode
            • Dalewyn3 days ago
              "Most of the video you encode ..."
    • wcfields3 days ago
      I agree, I never really cared about QSV as an Intel feature until I started doing Livestreams, using Plex/Jellyfin/Emby, and virtualizing/homelab work.
      • WaxProlix3 days ago
        QuickSync passthrough should get you everything you need on i3+ chips. It's basically intel's only selling point in the homelab/home server space, and it's a big one.

        [Edit: I think I initially misread you - but I agree, it's a huge differentiator]
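
        For anyone newer to this: "passthrough" here mostly just means handing the iGPU's render node to whatever container or VM runs the media server. A rough sketch of the container case (image name, port, and paths are the commonly documented defaults, not anything verified here):

          # Rough sketch of QuickSync "passthrough" for a containerized
          # media server: expose the iGPU's render node to the container so
          # Jellyfin (or Plex) can use VA-API/QSV for transcoding. Image
          # name, port, and library path are example defaults; adjust them.
          import subprocess

          subprocess.run([
              "docker", "run", "-d", "--name", "jellyfin",
              "--device", "/dev/dri/renderD128:/dev/dri/renderD128",  # the iGPU
              "-v", "/srv/media:/media:ro",   # your library (example path)
              "-p", "8096:8096",              # Jellyfin's default web port
              "jellyfin/jellyfin",
          ], check=True)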

        • close04 3 days ago
          > It's basically intel's only selling point in the homelab/home server space

          In the homelab/home server space I always thought the OOB management provided by AMT/vPro is probably the biggest selling point. Manageability, especially OOB, is a huge deal for a lab/server. Anyone who used AMD's DASH knows why vPro is so far ahead here.

          • BobbyTables2 3 days ago
            Intel probably spends more on office supplies than they make from homelab customers…
            • close04 3 days ago
              Maybe, but I wasn't thinking of Intel's profit. The question was which is the bigger selling point in a home lab: QuickSync for transcode-related tasks (your Plex/Jellyfin machine for example, which would also work with most Nvidia GPUs and some AMD ones), or OOB manageability for your entire home lab, especially if it's composed of multiple machines and IP KVMs quickly become cumbersome.
              • Wytwwww2 days ago
                > Nvidia GPUs

                You would need an actual GPU, though, massively increasing cost, power usage, etc. without providing any real value in return for many use cases. And AFAIK HW transcoding with Plex doesn't even work properly with AMD's iGPUs?

                The N100 can transcode 4K streams at ~20 W while costing barely more than a Raspberry Pi.

                • wcfields2 days ago
                  Yeah, I’d love to use AMD CPUs for my Plex/Homelab/VM/unraid system, but when you’re building one for home use every watt matters, and an Nvidia GPU, while nice, is hard to justify just for transcodes.

                  I feel like my Dad saying “turn off the damn lights” now that I gotta pay the ‘light bill’ on a machine that runs 24/7 with spinning disks.

    • Remnant44 3 days ago
      As mentioned in other responses, that part of the GPU simply isn't interesting from an architectural perspective, which is what Chips and Cheese is all about.

      GPU compute performance is both technically interesting, and matters to much more than simply gaming!

    • hggigg3 days ago
      100% agree with that. x265 transcoding gets done on my MBP regularly so I’d like to see that as a comparison point.
      • TiredOfLife3 days ago
        x265 is a CPU-based H.265 encoder and is not hardware accelerated.
      • adgjlsfhk1 3 days ago
        what actually uses x265? I thought pretty much everyone used AV1 for their next gen codec.
        • throwaway48476 3 days ago
          Hardware people don't mind paying licenses for x265 because they can just bake in the cost. It just causes problems for software, especially when it's free.
          • adgjlsfhk1 3 days ago
            right, but if none of the software uses it, the hardware is pretty worthless.
            • acdha3 days ago
              That’s only true if you’re writing the codec. If you’re calling the system APIs, you’re using Microsoft or Apple’s license.

              The last time I looked, it was worth supporting because there was a 20-point gap in hardware support, but that's closing as each generation of hardware adds AV1 support.

            • KeplerBoy3 days ago
              Video software doesn't need to license the codec if the GPU driver takes care of it, right?

              If hardware-accelerated decoding works, you just feed the binary video blob to the driver and it returns decoded frames.
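
              A quick way to see that flow is to have ffmpeg hand the bitstream to the VA-API driver and discard the output; the application itself never ships a decoder. Minimal sketch, with the device path and hwaccel name being the common Linux defaults rather than anything guaranteed:

                # Minimal sketch: decode through the GPU driver (VA-API) and
                # throw the frames away, just to show the "feed the blob to
                # the driver" flow. The device path is the usual Linux render
                # node; adjust it (and the input file name) for your setup.
                import subprocess

                subprocess.run([
                    "ffmpeg",
                    "-hwaccel", "vaapi",
                    "-hwaccel_device", "/dev/dri/renderD128",
                    "-i", "input.mkv",
                    "-f", "null", "-",   # decode only, discard the frames
                ], check=True)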

            • pjmlp3 days ago
              Proprietary software doesn't have such issues.
        • hggigg3 days ago
          Me when I want to transcode something to save a bit of disk space.
    • pa7ch3 days ago
      Agreed, my laptop burns a lot of battery on AV1 video, and I'd like information on how chips with AV1 decode perform with Chrome.
  • chmod775 3 days ago
    That's a big hit in performance compared to the AMD chip. Just to save $100 on a $1700 notebook? Sadly the article didn't get into power draw too much. That might've been much more interesting.
    • phkahler3 days ago
      >> Sadly the article didn't get into power draw too much.

      They covered power quite a bit, but claimed the biggest power draw comes from memory access. I got the impression they were blaming AMD's increased memory bandwidth on its smaller cache size, and hence calling it a form of inefficiency. But higher frame rates are going to require more memory accesses, and the smaller cache should have less impact on the number of writes needed. IMHO some top-line power consumption numbers are good, but trying to get into why one is higher than the other seems fruitless.

  • Sakos3 days ago
    Lunar Lake gaming performance is incredible on Windows. It makes me want the Steam Deck 2 to be based on the next Intel platform. That said, the Linux graphics drivers are terrible (https://www.phoronix.com/review/lunar-lake-xe2) and the Phoronix benchmarks for Lunar Lake overall (outside of gaming: https://www.phoronix.com/review/core-ultra-7-lunar-lake-linu...) showed terrible performance in all sorts of aspects, jesus. Xe2 is a huge win, the rest not so much.
    • skavi3 days ago
      GPU benchmarks in TFA are run on Windows: https://open.substack.com/pub/chipsandcheese/p/lunar-lakes-i...
      • Sakos3 days ago
        Weird. I've seen far better results elsewhere.
    • automatic6131 3 days ago
      The MSI Claw 2 might, given its original is Meteor Lake based. But it sold like ** so there may not be a successor.
      • Sakos3 days ago
        Did the first claw even sell well? That said, the Steam Deck competitors aren't interesting to me without the touchpads and four back buttons.
        • automatic6131 3 days ago
          >Did the first claw even sell well?

          Extremely poorly. The worst of all deck-likes.

        • kaliqt3 days ago
          No. And I know this by the sheer lack of videos and discussion of any kind on it.
    • formerly_proven3 days ago
      With totally new hardware platforms things often take a minute to really work (even on Windows).
  • KeplerBoy3 days ago
    Here's hoping ARM on the desktop/laptop finally takes off and we see Nvidia returning to these market segments.

    Their Tegra chips could do a lot in these laptop / handheld gaming devices.

  • nuz3 days ago
    Nvidia's moat is so enormous.
    • Wytwwww2 days ago
      What moat? Nvidia is barely even competing in the same segment Xe2 is in. Their laptop GPUs aren't particularly spectacular and aren't at all suitable for low-power use cases.