- The technical experts (including Intel engineers) will say something like "it affects Blizzard Creek and Windy Bluff models"
- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this you can only find it out by actually booting one up).
- The spec sheet for the hardware calls it a "Xeon Osmiridium X36667-IA"
Absolutely none of these forms of naming have any way to correlate between them. They also have different names for the same shit depending on whether it's a consumer or server chip.
Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.
Usually I just ask the LLM and accept that it's wrong 20% of the time.
I’m doing some OS work at the moment and running into this. I’m really surprised there’s no caniuse.com for CPU features. I’m planning on requiring support for all the features that have been in every CPU that shipped in the last 10+ years. But it’s basically impossible to figure that out, especially across Intel and AMD. Can I assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs, or do I need to support the old version as well? It’s very annoying.
Windows has specific platform requirements they spell out for each version - those are generally your best bet on x86. ARM devs have it way worse so I guess we shouldn’t complain.
The easiest thing would probably be to specify the need for "x86-64-v3":
* https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
RHEL9 mandated "x86-64-v2", and v3 is being considered for RHEL10:
> The x86-64-v3 level has been implemented first in Intel’s Haswell CPU generation (2013). AMD implemented x86-64-v3 support with the Excavator microarchitecture (2015). Intel’s Atom product line added x86-64-v3 support with the Gracemont microarchitecture (2021), but Intel has continued to release Atom CPUs without AVX support after that (Parker Ridge in 2022, and an Elkhart Lake variant in 2023).
* https://developers.redhat.com/articles/2024/01/02/exploring-...
AFAIK, that only specifies the user-space-visible instruction set extensions, not the presence and version of operating-system-level features like APIC or IOMMU.
If you were willing to accept only the relatively high power variants it’d be easier.
For anyone not familiar with caniuse, it's indispensable for modern web development. Say you want to put images on a web page. You've heard of webp. Can you use it?
At a glance you see the answer. 95% of global web users use a web browser with webp support. It's available in all the major browsers, and has been for several years. You can query basically any browser feature like this to see its support status.
Even the absolute most basic features that have been well supported for 30 years, like the HTML "div" element, cap out at 96%. Change the drop-down from "all users" to "all tracked" and you'll get a more representative answer.
It's also not monotonic: on both the CPU and GPU sides, features can go away later, either due to a hardware bug or because the vendor lost interest in supporting them.
You're often better off picking a subset of CPU features you want to use and then sampling to see if it excludes something important.
But how? That’s the question.
https://web.archive.org/web/20250616224354/https://www.cpu-m...
https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_pro_8840u
A nice reminder to stick any page you find useful in the wayback machine and/or save a local copy.
Aha, but which digit? Sure, that's easy for server, HEDT and desktop (it's the first one) but if you look at their line of laptop chips then it all breaks down.
I was convinced that the practice was encouraged as a sort of weird gatekeeping by folks who only used the magic code names.
Even better, I worked at a place that swapped code names between two products at one point... it wasn't without reason, but it meant that a lot of product documentation suddenly conflicted.
I eventually referred only to exact part numbers and model numbers and refused to play the code name game. This turned into an amusing situation where some managers who only used code names were suddenly silent, as they clearly didn't know the product/part to code name mapping.
But you're correct that for anything buried in the guts of CPUID, your life is pain. And Intel's product branding has been a disaster for years.
Intel removed most things older than SB in late 2024 (a few Xeons remain, but afaik anything consumer was wiped with no warning). It’s virtually guaranteed that Intel will remove more stuff in the future.
https://en.wikipedia.org/wiki/List_of_Intel_Core_processors
https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors
It doesn't have the CPUID but it's a pretty good mapping of model numbers to code names and on top of that has the rest of the specs.
I've found that -- as of a decade or so ago, at least -- ark.intel.com had a really good way to cross-reference among codenames / SKUs / part numbers / feature sets/specs. I've never seen errata there, but they might be listed. Also, I haven't used it in a long time, so it could've gotten worse.
Now the only issue you have is that there is no consistent schema between those files so it's not really any use.
Coincidentally, if anyone knows how to figure out which Intel CPUs actually support 5-level paging / the CPUID flag known as la57, please tell me.
"Products formerly Blizzard Creek"
WTF does that even mean?
It's fraud, plain and simple.
In the very distant past, AMD published what the CPUID instruction would return for each CPU model they sold. That's no longer true, so you either have to buy a CPU to discover what it really is, or hope that some charitable soul who has bought one will publish the result on the Internet.
Without access to the CPUID information, the next best thing is to check on the Intel Ark site whether the CPU model you see listed by some shop is described as belonging, for instance, to "Products formerly Arrow Lake S", as that will at least identify the product microarchitecture.
This is still not foolproof, because the products listed as "formerly ..." may still be packaged in several variants and they may have various features disabled during production, so you can still have surprises when you test them for the first time.
But if you want any deep and complex technical info out of them, like oh maybe how to configure it to fit UK/EU regulatory domain RF rules? Haha no chance.
We ended up hiring a guy fluent in Hebrew just to talk to their support guys.
Super nice kit, but I guess no-one was prepared to pay for an interface layer between the developers and the outside world.
Same with Intel.
STOP USING CODENAMES. USE NUMBERS!
Android have done this right: when they used codenames they did them in alphabetical order, and at version 10 they just stopped being clever and went to numbers.
Android also sucks for developers because there are the public-facing version numbers and then the API levels, which are different and don't always scale linearly (sometimes there's something like "Android 8.1" or "Android 12L" with a newer API). As a developer you always deal with the API levels (you specify a minimum API level, not a minimum "OS version", in your code), and then have to map that back to the version numbers users and managers know when you're raising the minimum requirements...
Well, it was until they looped.
Xenial Xerus is older than Questing Quokka. As someone out of the Ubuntu loop for a very long time, I wouldn't know what either of those mean anyway and would have guessed the age wrong.
I want a version number that I can compare to other versions, to be able to easily see which one is newer or older, to know what I can or should install.
I don't want to figure out and remember your product's clever nicknames.
Finding the latest release and codename is indeed a research task. I use Wikipedia[1] for that, but I feel like this should be more readily available from the system itself. Perhaps it is, and I just don't know how?
I typically prefer
cat /etc/os-release
which seems to be a little more portable / likely to work out of the box on many distros.

Do those boxes really still exist? Debian, which isn't really known as the pinnacle of bleeding edge, has had /etc/os-release since Debian 7, released in May 2013. RHEL 7, the oldest Red Hat still in extended support, also has it.
Yes, they do. You'll be surprised by how many places use out-of-support operating systems and software (which were well within their support windows when installed, they have just never been upgraded). After all, if it's working, why change it? (We have a saying here in Brazil "em time que está ganhando não se mexe", which can be loosely translated as "don't change a (soccer) team which is winning".)
You would be alarmed to know how long the long tail is. Are you going to run into many pre-RHEL 7 boxes? No. Depending on where you are in the industry, are you likely to run into some ancient RHEL boxes, perhaps even actual Red Hat (not Enterprise) Linux? Yeah, it happens.
At least Fedora just uses a version number!
Maybe they should stop symlinking the new versions after 14, because AFAIK, they already tried everything else.
Under https://en.wikipedia.org/wiki/Ryzen#Mobile_6 , in the Ryzen 7000 series alone you could get Zen 2, Zen 3, Zen 3+, or Zen 4.
- sSpec S0ABC = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0
- "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"
- CPUID leaf 0x3aa = Model specific feature set checks for "Blizzard Creek" and "Windy Bluff(aka Blizzard Creek V2)"
- asserts bit 63 = that buggy VT-d circuit is not off
- "Xeon Osmiridium X36667-IA" = marketing name to confuse specifically you(but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))
disclaimer: above is all made up and I don't work at any of the relevant companies

NVidia has these, very different GPUs:
Quadro 6000, Quadro RTX 6000, RTX A6000, RTX 6000 Ada, RTX 6000 Workstation Edition, RTX 6000 Max-Q Workstation Edition, RTX 6000 Server Edition
It would be like having Quadro 6000 and 6050 be completely different generation
Oh and there's also RTX PRO 6000 Blackwell which is Blackwell from 2025...
They've hyperoptimized all these marketing buzzwords to the point that I'm basically forced into the moral equivalent of buying GPU by the pound because I have no idea what these marketers are trying to tell me anymore. The only stat I really pay attention to is VRAM size.
(If you are one of those marketers, this really ought to give you something to think about. Unless obfuscation is the goal, which I definitely can not exclude based on your actions.)
Lest anyone think AMD is any better the Radeon 200 series came in everything from terascale 2 (4 years old at that point) to GCN3.
The gpu manufacturers have also engaged in incredible amounts of rebadging to pad their ranges, some cores first released on the GeForce 8000 series got rebadged all the way until the 300 series.
Somewhat surprisingly, it sometimes had better performance than the Radeon 9200, precisely because it lacked pixel shaders and yet had good enough perf.
An Intel Core Ultra 7 155U and a Core Ultra 7 155H, are very different classes of CPUs!
If you're comparing laptops, you'll see both listed, and laptops with the U variant will be significantly cheaper, because you get half the max TDP, 4 fewer cores, 8 fewer threads, and a worse GPU.
This isn't to say the 155U is a bad chip, it's just a low-power optimized chip, while the 155H is a high-performance chip, and the difference between their performance characteristics is a lot larger than you'd expect when looking at the model numbers. Heck, if you didn't know better, you might text your tech-savvy friend "hey is a 155 good?", and looking that up would bring up the powerful H version.
Their laptop naming scheme at least is fairly straightforward once you figure it out.
U = Low-TDP, for thin & light devices
H = For higher-performance laptops, e.g. Dell XPS or midrange gaming laptops
HX = Basically the desktop parts stuffed into a laptop form factor, best perf but atrocious power usage even at idle. Only for gaming laptops that aren't meant to be used away from a desk.
And within each series, bigger number is better (or at least not worse - 275HX and 285HX are practically identical).
Previously, they had a P series of mobile parts in between the U and H series (Alder Lake and Raptor Lake). Before that, they had a different naming scheme for the U series equivalents (Ice Lake and Tiger Lake). Before that, they had a Y series for even lower power than U series.
So they mix up their branding and segmentation strategy to some extent with almost every generation, but the broad strokes of their segmentation have been reasonably consistent over the past decade.
I've been really quite happy with it - most of the time the CPU runs at about 30 deg C, so the fan is entirely off. General workloads (KDE, Vivaldi, Thunderbird, Konsole) puts it at about 5.5 watts of power draw.
LGA2011-0 and LGA2011-1 are very unalike, from the memory controller to vast pin rearrangement.
So not only do they call two different sockets almost the same, per the post, but they also call essentially the same socket by different names to artificially segment the market.
All things considered I actually kind of respect the relatively straightforward naming of this and several of Intel's other sockets. LGA to indicate it's land grid array (CPU has flat "lands" on it, pins are on the motherboard), 2011 because it has 2011 pins. FC because it's flip chip packaging.
That's an industry-wide standard across all IC manufacturing - Intel doesn't really get to take credit for it.
Ah, but if you want to buy a newly released CPU and the board does support/work with it, but nobody has updated the documentation on the website: How do you know?
Ultimately it's always a crapshoot. Some manufacturers don't even provide release notes with their BIOS updates...
Back in the day, this is what forums were for. Unfortunately forums are dead, Facebook is useless, and Google search sucks now. So you should just buy it, if it doesn't work ask for a refund and if they refuse just do a chargeback.
People believe "bigger number" = better, and marketing teams exploit that.
"My computer is too slow. I know it's an i9 -- whatever that means. But all these new ones are also i9s. You'd think they'd have something newer than that in the past 5 years. Oh well. I guess I can't get something better than what I have, so I'll just have to wait until something better comes along."
This results not in moving old products out of warehouses, but instead in moving zero products at all.
> There are only two hard things in Computer Science: cache invalidation, naming things, off-by-one errors.
> There’s two hard problems in computer science: We only have one joke and it's not funny.
"There are 10 kinds of people: those who can read binary and those who can't."
Personally I prefer the cache invalidation one.
I like the continuation (which requires knowledge of the original): “And those who didn’t expect this joke to be in base 3”.
Having some portion of the socket name stay the same can still be helpful to show that the same heatsinks are supported. I agree there are many far better ways Intel could handle this.
In addition to all of the slightly different sockets there was ddr3, ddr3 low voltage, the server/ecc counterparts, and then ddr4 came out but it was so expensive (almost more expensive than 4/5 is now compared to what it should be) that there were goofy boards that had DDR3 & DDR4 slots.
By the way it is _never_ worth attempting to use or upgrade anything from this era. Throw it in the fucking dumpster (at the e waste recycling center). The onboard sata controllers are rife with data corruption bugs and the caps from around then have a terrible reputation. Anything that has made it this long without popping is most likely to have done so from sitting around powered off. They will also silently drop PCI-E lanes even at standard BCLK under certain utilization patterns that cause too much of a vdrop.
This is part of why Intel went damn-near scorched earth on the motherboard partners that released boards which broke the contractual agreement and allowed you to increase the multipliers on non-K processors. The lack of validation under these conditions contributed to the aforementioned issues.
Wasn't this the other way around, allowing you to increase multipliers on K processors on the lower end chipsets? Or was both possible at some point? I remember getting baited into buying an H87 board that could overclock a 4670K until a bios update removed the functionality completely.
So I suspect maybe it's just a perverse effect of successive generations of marketing and product managers each coming up with a new system "to fix the confusion?" What's strange is that there's enough history here that smart people should be able to recognize there's a chronic problem and address it. For example, relatively simple patterns like Era Name (like "Core"), Generation Number, Model Number - Speed and then a two digit sub-signifier for all the technical variants. Just two digits of upper case letters and digits 1-9 is enough to encode >1200 sub-variants within each Era/Gen/Model/Speed.
The maddening part is that they not only change the classifiers, they also sometimes change the number and/or hierarchy of classifiers, which eliminates any hope of simply mapping the old taxonomy to the new.
Of course it’s only a solution if you are buying. If you're writing low-level software for these outside userspace, I suppose you’ll have to follow the development of CPUs.
The next motherboard (should RAM ever cease being the tulip du jour) will not be an ASRock, for that and other reasons.
For the love of everything though, just increment the model number.
looking at you USB 3.0 (or USB 3.1 Gen 1 (or USB 3.2 Gen 1))
But if you think that's bad, you haven't seen the name change shenanigans Microsoft pulls in Azure.
With Intel's confusing socket naming, you can buy a CPU that doesn't fit the socket.
With USB, the physical connection is very clearly the first part of the name. You cannot get it wrong. Yeah, the names aren't the most logical or consistent, but USB C or A or Micro USB all mean specific things and are clearly visibly different. The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.
The actual names for each data transfer level are an absolute mess.
- 1.x has Low Speed and Full Speed
- 2.0 added High Speed
- 3.0 is SuperSpeed (yes, no space this time)
- 3.1 renamed 3.0 to 3.1 Gen 1 and added SuperSpeedPlus
- 3.2 bumped the 3.1 version numbers again and renamed all the SuperSpeeds to SuperSpeed USB xxGbps
- And finally they renamed them again, removing the SuperSpeed and making them just USB xxGbps
USB-IF are the prime examples of "don't let engineers name things, they can't"
While not disagreeing, I'd ask for a proof it's not a marketing department's fun. Just to be sure.
Engineers love consistency. Marketing is on the opposite end of that spectrum.
Engineers don't make names that are nice for marketing team.
But they absolutely do make consistent ones. The engineer wouldn't name it superspeed, the engineer would encode the speed in the name
But sometimes the extra power or extra data transfer is not an option. For charging a laptop for instance, you typically need 20V, if your charger doesn't support that, you can't charge at all. And then there is Thunderbolt, DisplayPort, Oculink, where the devices that use these features won't work at all in an incompatible port. And I am not aware of device that strictly requires one of the many flavors of USB 3 or 4, but I can imagine a video capture card needing that. Raw video requires a lot of bandwidth.
Consumer-oriented sockets (LGA115x) have different notches and pin counts to prevent this issue. Actually, some "different" sockets with "different" chipsets in the consumer line are identical, and sometimes you see Chinese bastardized boards online that use discarded server-marked chips and pin-fudged hacker builds that shouldn't be possible according to the marketing materials, so there's a whole rabbit hole of its own there.
Not at all. If you want to charge your phone, it might "always work", but if you want to use your monitor with USB hub and pass power to your MacBook, you're gonna have a hard time.
I don't know what "always work" means here but I feel like I've had USB cables that transmit zero data because they're only for power, as well as ones that don't charge the device at all when the device expects more power than it can provide. The only thing I haven't seen is cables that transmit zero data on some devices but nonzero data on others.
You can maybe blame USB consortium for creating a hard spec, but usually it's just people saving $0.0001 on the BOM by omitting a resistor.
How polite. It can be useless, not "not optimal". Especially since usb-c can burn you on a combination of power and speed, not only speed.
I can't find a USB-C PD adapter for a laptop that uses less than 100W. As a result, I can't charge a 65W laptop from a 65W port because the adapter doesn't even work unless the port is at least 100W.
It does not always work.
It seems totally random, and you cannot rely on watts anymore.
So a 100 watt GaN charger might be able to deliver only 65 watts from its main "laptop" port, but it has two other ports that can do 25 and 10 watts each. Still 100 watts in total, but your laptop will never get its 100 watts.
Not every brand is as transparent about this, sometimes it's only visible in product marketing images instead of real specs. Real shady.
That might not necessarily be the right conclusion. My understanding is: almost all USB-C power cables you will encounter day to day support a max current of at most 3A (the most that a cable can signal support for without an emarker). That means that, technically, the highest power USB-PD profile they support is 60W (3A at 20V), and the charger should detect that and not offer the 65W profile, which requires 3.25A.
Maybe some chargers ignore that and offer it anyway, since 3.25A isn't that much more than 3A. For ones that don't and degrade to offering 60W, if a laptop strictly wants 65W, it won't charge off of them.
So it's worth acquiring a cable that specifically supports 5A to try, which is needed for every profile above 60W (and such a cable should support all profiles up to the 240W one, which is 5A*48V).
(I might be mistaken about some of that, it's just what I cobbled together while trying to figure out what chargers work with my extremely-picky-about-power lenovo x1e)
And wow, I'll keep away from Dell, thanks.
The ones I use most are 20W and 40W, just stuff I ordered from AliExpress (Baseus brand I think).
Email them, address is in the guidelines.
On the other side, there's AMD with the legendary AM4.