For example, take 7-Zip Compression 22.01. The CPU Power Consumption Monitor chart states:
AmpereOne: Average 278.72W EPYC: Average 311.64W
But the fine print under that same chart states:
AmpereOne: 6968J per run EPYC: 14439J per run
By the Joules-per-run numbers, AmpereOne is far more power efficient than EPYC, requiring less than half the energy to complete a run.
In that case, how could the average power of the EPYC be only 11.8% higher than that of the AmpereOne? For this benchmark the EPYC is 14.2% faster, and if the average power numbers are correct, the EPYC should actually have slightly lower Joules per run than the AmpereOne.
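A quick sanity check of the arithmetic (plain Python, using only the figures quoted above):

```python
# Figures quoted from the 7-Zip Compression 22.01 charts:
ampere_power = 278.72   # W, AmpereOne average
epyc_power = 311.64     # W, EPYC average (11.8% higher)
epyc_speedup = 1.142    # EPYC finishes a run 14.2% faster

# Energy = average power x run time. With AmpereOne's run time
# normalized to 1, an EPYC run takes 1 / 1.142 of that time:
implied_ratio = (epyc_power / epyc_speedup) / ampere_power
print(f"implied EPYC/AmpereOne energy ratio: {implied_ratio:.2f}")   # ~0.98

# The fine print instead reports 14439 J vs. 6968 J per run:
reported_ratio = 14439 / 6968
print(f"reported EPYC/AmpereOne energy ratio: {reported_ratio:.2f}")  # ~2.07
```

The two ratios cannot both be right: either the average-power numbers or the Joules-per-run numbers are off by roughly a factor of two.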
That is not the only anomaly. For example, the CPU Power Consumption Monitor chart for John the Ripper 2023.03.14 also does not make sense.
Never mind that these are all reduced to absurd levels, or biased.
My favorite was some site crapping on an SSD that only managed 3 GiB/s for 100 GiB of data, then dropped to 500 MB/s or something. But they didn't mention the amount of data transferred at all, just speed vs. time. Obviously pushing for that higher kickback on the SSD that costs 4x as much and uses 8x the power.
Do these EPYCs usually go this low when idling? I ask because I'm considering getting one, but it would idle more than 50% of the time. Or would waiting for 5c make more sense?
I find 19 Watts surprisingly low. I know that the mainboard and peripherals would consume more, but my current system with a 5950X, which I'm planning to upgrade to an EPYC, idles at around 130 Watts.
Given that most (all?) cutting-edge chips use TSMC nowadays, can you really have an apples-to-Apples comparison if the chips being compared aren't on the same process node?
Unless you're comparing price/performance, since nowadays there's no guarantee that a process shrink will get you significantly cheaper transistors (RIP, Dr. Moore).
That is because all cutting-edge chips use TSMC.
No competition means price per transistor can stay consistent or even rise, which is one part of why most modern CPUs and GPUs have price/performance ratios that are the same or worse than their previous-generation counterparts.
>can you really have an apples-to-Apples comparison if the chips being compared aren't on the same process node?
Of course not, but that isn't going to stop people from doing it, nor is it going to stop people from going "x86 is dead" when comparing last-gen-node AMD processors to CPUs only Apple can use (conveniently forgetting that Qualcomm's products underperform at the same process node).
Qualcomm’s X Elite matches or exceeds Intel Lunar Lake on an older N4P node in efficiency and speed.
Sources: https://www.notebookcheck.net/Intel-Lunar-Lake-CPU-analysis-...
Of the very few benchmarks that can compare Apple with non-Apple, I have never seen any where an M3 was 2-3x more efficient than Lunar Lake, so a link would be appreciated.
On the contrary, most if not all battery-life benchmarks showed better values for Lunar Lake, implying better efficiency.
Other than by battery lifetime, I cannot see how you can test the efficiency of an Apple computer, except by using a power and energy meter at the wall socket, because none of the reviews of Apple computers I have seen mention accurate internal power sensors exposed to the user.
An M3 is definitely much more efficient in single-threaded execution than Lunar Lake, which is due to having a higher IPC and a lower clock frequency.
On the other hand, in multithreaded applications there is very little efficiency difference between different CPU microarchitectures that are implemented in the same TSMC process.
GCC, Gimp, Firefox, ...
However, I have never seen published benchmarks for them.
A benchmark that would be valid for comparing the efficiency of an Apple computer with a non-Apple computer would be compiling a big software project with gcc. A cross-compilation of the project would be more accurate, because with a native compilation target the compiled files might not be the same.
Also, there are benchmarks for browsers which you could run on both types of computer.
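As a rough sketch of what such a wall-socket compile benchmark could look like (hypothetical: the command and the `avg_wall_watts` reading are made up, and the wattage would have to come from an external power meter, since no trustworthy internal sensor is assumed):

```python
import subprocess
import time

def measure_build_energy(cmd: list[str], avg_wall_watts: float) -> tuple[float, float]:
    """Time a build command and estimate its energy use.

    avg_wall_watts is the average power read from an external meter
    at the wall socket while the build runs; energy (J) is then simply
    average power (W) x elapsed time (s).
    """
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    return elapsed, elapsed * avg_wall_watts

# Hypothetical usage (build command and meter reading for illustration only):
# seconds, joules = measure_build_energy(["make", "-j8"], avg_wall_watts=85.0)
```

Running the same cross-compilation on both machines and comparing the Joules per build would give a like-for-like efficiency number without relying on any vendor-reported sensor.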
But top of the line ARM machines are really hard to get a hold of. We need an OpenAI for ARM ;)
RISC-V is.
Avoiding falsifiable statements is a skill set that might be worth having in your communications toolkit.
(I remember reading that some philosophy school had {true, false, unknown, unknowable}, but, alas, I cannot find any reference to it just now.)
My opinion is definitely biased, though. Only time will tell