You can easily feel the additional latency in competitive FPS or high-speed arcade racing games.
I guess I can see some utility in situations where latency is not a major factor, but IMHO that rules out most gaming.
And generated frames are far worse than that. If you're running at a very high base framerate (100+) then they can look OK but the moment the frames get any further apart the visual quality starts to tank.
Publishers are also forcing these settings on by default, to save time on optimization and to inflate the advertised performance numbers.
Fake frames are cool tech, but they are horribly mismarketed, to the point of being indistinguishable from a scam.
https://blog.kronis.dev/blog/what-is-ruining-dual-gpu-setups
https://blog.kronis.dev/blog/more-pc-shenanigans-my-setup-un...
https://blog.kronis.dev/blog/two-intel-arc-gpus-in-one-pc-wo...
When paired with a worse CPU like a Ryzen 5 4500, the experience won't always be good (despite no monitoring software actually showing that the CPU is a bottleneck).
When paired with a better CPU (I got a Ryzen 7 5800X to replace it, eventually with an AIO because the temperatures were too high under full load anyway), either of them is pretty okay.
In a single GPU setup either of them runs most games okay, without that many compatibility or stability issues, even in older indie titles, though I've had some like STALCRAFT: X complain about running on an integrated GPU (the Intel card being detected as such). Most software also works, unless you want to run LLMs locally, where Nvidia has more of an advantage and you'd be going off the beaten path. The most annoying problems I've had were some stability issues near the launch of each card; for example, running the B580 with the Boost functionality enabled in Intel's graphics software would sometimes crash Delta Force, though that no longer seems to be an issue.
Temperature and power draw seem fine. Their XeSS upscaling is actually really good (I use it on top of native resolution in War Thunder as fancy AA), and their frame generation feels like it has more latency than FSR but also better quality; that might be subjective, and it's not even supported in that many games in the first place. Their video encoders are pretty nice, but sometimes get overloaded in intensive games instead of the encoding being prioritized over game framerate (which is stupid). Video editing software like DaVinci Resolve also seems okay.
The games that run badly are typically Unreal Engine 5 titles, such as S.T.A.L.K.E.R. 2 and The Forever Winter, which use expensive rendering techniques; to get at least 30 FPS you have to turn the graphics way down, to the point where the games still run like crap and end up looking worse than something from 5 years ago. Those were even worse on the A series cards, but on the B series they become at least barely playable.
In a dual GPU setup, nothing works that well, neither in Windows 11 nor Windows 10, neither with the A580 + B580 nor with my old RX 580 + B580: system instability, some games ignoring the Intel GPU preference when an AMD one is available, low framerates when a video is playing on a secondary monitor (I have 4 in total), and the inability to play games on the B580 while encoding on the A580, because either OBS or the hardware itself doesn't properly support it (e.g. you can't pick which GPU to encode on, like you can with Nvidia cards; my attempts at patching OBS to do that failed, since I couldn't get a video frame from one GPU to the other). I moved back to running just the B580 in my PC.
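For context on the "pick which GPU to encode on" part: with Nvidia cards you can do this even outside OBS, since ffmpeg's NVENC encoder takes a GPU index. A minimal sketch of what I mean (the file names and the index are placeholders, and it assumes an ffmpeg build with NVENC support):

    import subprocess

    # Rough sketch only: re-encode a capture on a specific Nvidia GPU,
    # leaving the other GPU free for the game. Paths/index are placeholders.
    cmd = [
        "ffmpeg",
        "-i", "capture.mkv",       # source recording
        "-c:v", "h264_nvenc",      # Nvidia hardware encoder
        "-gpu", "1",               # which NVENC-capable GPU does the encode (0 = first)
        "-b:v", "8M",
        "output.mp4",
    ]
    subprocess.run(cmd, check=True)

I never found an equivalently simple knob for the Arc encoders in OBS, hence the patching attempt.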
For MSRP, I'd say that the Intel Arc B580 is actually a good option, perhaps better than all the A series cards. But the more expensive it gets, the more attractive the alternatives from AMD and Nvidia become. Personally I wouldn't get an A770 unless I needed the VRAM or the price was really good.
Also, I'm not sure why the A580 needed two 8-pin connectors if it never drew that much power, or why the B580 comes in plenty of larger 3-fan versions when I could never really get high temps running FurMark on the 2-fan version.
Without the side panel, the temps are like 10-15°C lower: they go up to about 78°C under full load, but with the panel on they do hit 90°C and the clock frequencies get dialed back.
That is already with a Curve Optimizer (CO) offset of -10 across all cores.
I will probably need a different case altogether, or just get rid of the solid front panel (those vents on it are too small) and replace it with a custom mesh.
Thankfully, for now, the CPU-Z scores are ~6500 without the side panel and ~6300 with it, so with the AIO and more powerful fans it's pretty close to working optimally, even if not quite there yet.
I also tried it with 5x120mm case fans and an air cooler; that was slightly worse than the AIO. I also tried multiple different thermal pastes, which didn't make much of a difference. The chip might also just be cursed and have ghosts in it, go figure.
I had a fully new build, so I used one of the well-reviewed Fractal cases to get good airflow, with 5x140mm case fans.
> x50-class GeForce GPUs are among the most popular in the world, second only to the x60-class on Steam. Their price point and power profile are especially popular:
> For anyone upgrading an older x50-class system
> Each GeForce RTX 5050 graphics card is powered by a single PCIe 8-pin cable, drawing a maximum of 130 Watts at stock speeds, making it great for systems with power supplies delivering as little as 550 Watts.
The 1050, 2050 and 3050 were all bus-powered cards. I doubt 95% of these systems even have the cable coming from their power supply. Imagine all the poor saps who excitedly swap out their old card for this, and... nothing works.
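Rough numbers behind that worry (the 75 W slot limit and connector ratings are from the PCIe spec, the 130 W figure is from Nvidia's page above, and the older-card wattages are approximate):

    # Quick power-budget check: can the card run off the slot alone?
    SLOT_LIMIT_W = 75                       # PCIe x16 slot delivers up to 75 W
    AUX_W = {"6-pin": 75, "8-pin": 150}     # extra headroom per auxiliary cable

    cards = {
        "GTX 1050 (bus-powered)": 75,
        "RTX 3050 6GB (bus-powered)": 70,   # approximate board power
        "RTX 5050": 130,                    # per Nvidia's own figure
    }

    for name, watts in cards.items():
        if watts <= SLOT_LIMIT_W:
            print(f"{name}: {watts} W, fine on slot power alone")
        else:
            total = SLOT_LIMIT_W + AUX_W["8-pin"]
            print(f"{name}: {watts} W, needs an aux cable (slot + 8-pin covers {total} W)")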
Source link: https://www.nvidia.com/en-us/geforce/news/rtx-5050-desktop-g...
I've got a ~2006 380W power supply hanging out near my desk and it's got a 6-pin PCIe cable; I really don't think people lack at least that, and certainly not 95% of systems with a PCIe x16 slot.
I personally think people remember being happy with the 750 Ti and just keep buying those cards.
It's also with DLSS on, so you could just as easily have the framerate be 100 FPS, 1000 FPS, or 10000 FPS. The GPU doesn't actually have to render the frame in that case, it just has to have a pixel buffer ready to offload to whatever hardware sends it over the link to the display. Apparently some people actually really like this, but it isn't rendering by any reasonable definition.
https://www.nvidia.com/en-us/geforce/news/rtx-5050-desktop-g...
It's about the RTX 5070 but the criticisms still hold for the RTX 5050 since Nvidia is still doing the same shenanigans.
That's AI frame hallucination, which the 5050 has.
Without DLSS, the numbers from independent reviewers have basically been on par with the previous generation (only about a 10% increase in performance).
I agree that it's not super appealing, but Team Green has to hit the low price points somehow. This feels more like a hedged bet against Intel trying to muscle their way back into the budget market.
https://www.techpowerup.com/gpu-specs/radeon-rx-9070-xt.c422...
Intel does too.
https://www.techpowerup.com/gpu-specs/radeon-rx-9070-xt.c422...
Although we don't know how the 5050 will perform, the 50 series has roughly the same rendering performance as the 40 series models at the same tier. The 40 series in turn is only a mild bump over the 30 series at the same tier. And the 30 series was a reasonable improvement over the 20 series, but mostly in perf/$ rather than raw perf. Extrapolating, the 5050 is likely not going to give much of a boost, if any, and spending money on an 8GB card in 2025 is just throwing money away at this point, as software increasingly expects to work with more than 8GB of VRAM.
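Back-of-envelope version of that extrapolation; the per-generation uplifts below are illustrative assumptions for the argument, not benchmark data:

    # Compounding rough per-generation uplifts at the x50 tier.
    # Percentages are illustrative guesses, not measurements.
    uplifts = [
        ("20 -> 30 series", 0.25),   # reasonable jump, mostly in perf/$
        ("30 -> 40 series", 0.10),   # mild bump
        ("40 -> 50 series", 0.00),   # roughly the same tier-for-tier
    ]

    cumulative = 1.0
    for gen, pct in uplifts:
        cumulative *= 1 + pct
        print(f"{gen}: +{pct:.0%}, cumulative x{cumulative:.2f}")
    # The cumulative gain flattens out; the latest step adds essentially nothing,
    # which is why an 8GB card in 2025 is hard to justify on performance alone.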
But it isn't really that uncommon either; I had a Suzuki motorcycle that used a connector with 15-amp pins to handle 30 amps of current on one pin. I eventually concluded the only reason that connector was in the harness was to ease assembly, so I just cut it out entirely and soldered the junction together.