There are two non-exclusive paths I'm thinking about at the moment:
1. DRM: Might this enable a next level of DRM?
2. Hardware attestation: Might this enable a deeper level of hardware attestation?
In general, this solution would be expensive and targeted at data lakes, or areas where you want to run computation but not necessarily expose the data.
With regard to DRM, one key thing to remember is that it has to be cheap and widely deployable. Part of the reason DVDs were easily broken is that the chosen algorithm was computationally inexpensive, so it could be installed on as many clients as possible.
Same here.
Can't wait to KYC myself in order to use a CPU.
It's truly amazing how modern people just blithely sacrifice their privacy and integrity for no good reason. Just to let big tech corporations more efficiently siphon money out of the market. And then they fight you passionately when you call out those companies for being unnecessarily invasive and intrusive.
The four horsemen of the infocalypse are such profoundly reliable boogeymen, we really need a huge psychological study across all modern cultures to see why they're so effective at dismantling rational thought in the general public, and how we can inoculate society against it without damaging other important social behaviors.
The reason the 'Epstein class' are able to get away with crimes is that in recent US elections the voters elected politicians who intentionally do not investigate those crimes, and who even pardoned some criminals convicted of them.
We are no longer their clients; we are just another product to sell. So they do not design chips for us, but for the benefit of other corporations.
3. Unskippable ads with data gathering at the CPU level.
I remember thinking how fun it was! I could see unfolding before me endless ways to configure, reconfigure, optimize, etc.
I know there are a few open source chip efforts, but wondering maybe now is the time to pull the community together and organize more intentionally around that. Maybe open source chipsets won't be as fast as their corporate counterparts, but I think we are definitely at an inflection point now in society where we would need this to maintain freedom.
If anyone is working in that area, I am very interested. I am very green, but still have the old textbooks I could dust off (just don't have the ole college provided mentor graphics -- or I guess siemens now -- design tool anymore).
The future is bleak.
I think eGovernment is the main use case: not super high traffic (we're not voting every day), but very high privacy expectations.
2. No, anyone can run the FHE computations anywhere on any hardware if they have the evaluation key (which would also have to be present in any FHE hardware).
It's not related to DRM or trusted computing.
A: "Intel/AMD is adding instructions to accelerate AES"
B: "Might this enable a next level of DRM? Might this enable a deeper level of hardware attestation?"
A: "wtf are you talking about? It's just instructions to make certain types of computations faster, it has nothing to do with DRM or hardware attestation."
B: "Not yet."
I'm sure in some way it probably helps DRM or hardware attestation to some extent, but not any more than say, 3nm process node helps DRM or hardware attestation by making it faster.
But when homomorphic encryption becomes efficient, perhaps governments can force companies to apply it (though governments would then lose their opportunity for backdooring; but E2EE is a thing too, so I wouldn't worry too much).
It raises the hurdle for those looking to surveil.
If a tree falls in the forest and no one is around to hear it, does it make a sound?
This is primarily for cloud compute I'd imagine, AI specifically. As it's generally not feasible/possible to run the state of the art models locally. Think GDPR and data sovereignty concerns, many demand privacy and can't use services without it.
No, but media can be watermarked in imperceptible ways, and then if all players are required to check and act on such watermarks, the gap becomes narrow enough to probably be effective.
See Cinavia.
That is a nice speed-up compared to generic hardware, but everyone probably wants to know how much slower it is than performing the same operations on plaintext data. I am sure a 50% penalty is acceptable; 95% probably is not.
This hardware won’t make the technique attractive for ALL computation. But, it could dramatically increase the range of applications.
That rules out anything latency-sensitive, but for batch workloads like aggregating encrypted medical records or running simple ML inference on private data it starts to become practical. The real unlock is not raw speed parity but getting FHE fast enough that you can justify the privacy tradeoff for specific regulated workloads.
5000 * 0 is still 0.
I joke, but I think relative numbers like this are very misleading, as FHE is starting from such an absurdly slow place.
Still, this is pretty cool, and there are probably niche applications that become possible with this, but I think this is a small enough speed-up that it remains very niche.
However... in a world where privacy is constantly and intentionally being eroded by governments and private companies, I think this will NEVER reach any consumer-grade hardware. My inner cynic can envision a worldwide technology export ban in the vein of RSA [0].
Why would any company offer customers real out-of-the-box E2E encryption built into their devices?
DRM was mentioned by another user. This will not be used to enable privacy for the masses.
https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...
But getting them available to customers, say even as a PCIe card, and then having everything you run today over an encrypted connection automatically encrypted, would be a dream.
Why not, when governments can just force companies to backdoor their hardware for them? That way users are secure most of the time, except from the government (until the backdoor in Intel's chips gets discovered, anyway), users have a false sense of security/privacy, so people are more likely to share their secrets with corporations, and the government gets to spy on people communicating more openly with each other.
[1] https://confer.to/blog/2025/12/confessions-to-a-data-lake/
The correct solution isn't yet another cloud service, but rather local models.
Within the enclave itself, DRAM and PCIe connections between the CPU and GPU are encrypted, but the CPU registers and the GPU onboard memory are plaintext. So the computation is happening on plaintext data, it’s just extremely difficult to access it from even the machine running the enclave.
There is basically no business demand aside from sellers and scholars.
After nearly three decades of critical technology systems architecture and management, involving ongoing industry audits, my experience and age know why my hair has lost some of its color. Much of that lost color comes from security management of third-party systems: yes, the old dreaded dependencies. Eliminating those third parties is key for one's cyber sanity and hair color, yet with technology still in its infancy, some cannot distinguish the forest from the trees.
Nothing remains the same: progress moves forward, correcting past mistakes while learning what works and what does not along the journey, and technology platforms are no exception. Analogously, early automobiles lacked safety features such as windshield wipers and seatbelts; hasn't the passage of time proved their addition to be valued? Few people today truly understand how things work, as nearly all just want the instant-fix "pill" to alleviate their issues, but this approach cannot work with security. True security is designed in from the foundation, and such secure platforms go unseen; meanwhile, we have an endless list of victims of insecure systems that "bolted on" security after the fact. This security change and more is coming to system designs, as the entire world is now fully aware of cyber security, or in this case the lack of it.
Time: the young fail to consider it until a single moment in their life, while the old reflect on where theirs went. After that reflection, however, change becomes obvious.
The PC market was made shitty enough this year that mid/high-class Mac Pro/laptops are actually often a better value deal now (if and only if your use case is covered software-wise).
Intel does plan on an RTX + amd64 SoC soon, but still pooched the memory interface with a 30-year-old mailbox kludge. Intel probably won't survive this choice without bailouts. =3
Judging by Nvidia's current valuation, that's a parenthetical worth ~4 trillion dollars. Apple isn't muscling AMD or Nvidia out of the datacenter anytime soon, and they're basically feeding Intel Foundry customers by dominating TSMC fab capacity. Apple's contribution to the chip shortage is so bad that even they have considered using Intel Foundry Services: https://www.macrumors.com/2025/11/28/intel-rumored-to-supply...
It's been 7 years of Apple Silicon and the macOS market share really hasn't shifted much. The Year Of Apple Silicon For People Whose Use-Case Is Covered Software Wise was 2019; the majority of remaining customers aren't showing any interest.
Indeed, but a local LLM finishing in 3 days instead of 1 on a $40k GPU changes the economic decision priority for some.
Apple sales grew "21.3% year-over-year as of the second quarter of 2025", but sales also flattened as supply-chain pricing shocks from "AI"/tariffs hit late last year.
"Judging by Nvidia's current valuation" is a bad bet with current circular investment conditions.
We shall see, but as EOL drivers and OS rot hit legacy NVIDIA hardware... people are going to have to find some compromise in the next 2 years. Even an AMD 9850X3D currently costs less than 64 GB of low-end PC DDR5 memory.
Odd times for sure =3
If computation can happen directly on encrypted data, does that reduce the need for trusted environments like SGX/TEE, or does it mostly complement them?
If you need to trust the encryption and trust the hardware itself, it may not be suitable for your environment/threat model.
The textbook example application of FHE is phone-book search. The server "multiplies" the whole phonebook database file by your encrypted query and sends back the whole (database-sized) result every time, regardless of the query. When you decrypt it with the key used to encrypt the query, the database is all corrupt and garbled except for the rows matching the query, so the search has effectively taken place. The only information that exists in the clear is the size of the query and the size of the entire database.
Sounds fantastically energy-efficient, no? That's the problem with FHE, not risks of backdooring.
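The selection pattern described above can be sketched in plain Python. This is purely the data flow, with no real cryptography — the "encrypted" one-hot query is an ordinary selection vector, and the names and numbers are made up:

```python
# Toy sketch of the PIR-style "multiply the whole database by the query"
# pattern. NO real encryption here: under FHE, each 0/1 in the query and
# each multiplication below would happen on ciphertexts.

phonebook = {"alice": "555-0100", "bob": "555-0199", "carol": "555-0111"}
names = list(phonebook)

def make_query(wanted):
    # Client: a one-hot selection vector over the row names.
    return [1 if n == wanted else 0 for n in names]

def server_eval(query):
    # Server: touches EVERY row regardless of the query -- this is why
    # FHE search costs O(database size) per lookup, and why it is so
    # energy-hungry. String-times-bit stands in for homomorphic multiply.
    return [phonebook[n] * bit for n, bit in zip(names, query)]

def client_decrypt(response):
    # Client: non-matching rows come back as garbage (empty here).
    return [row for row in response if row]

print(client_decrypt(server_eval(make_query("bob"))))  # ['555-0199']
```

Note that `server_eval` does the same amount of work for every query — the cost the parent comment is pointing at.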
First you encrypt the data, then you send it to the hardware to compute, get the result back, and decrypt it.
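That encrypt → compute-on-ciphertext → decrypt flow can be illustrated with a toy Paillier cryptosystem, which is additively homomorphic (multiplying ciphertexts adds the plaintexts). This is only a sketch with insecure toy key sizes, not the FHE schemes the hardware in the article accelerates:

```python
# Minimal Paillier cryptosystem to illustrate computing on encrypted data.
# Toy primes for demonstration only -- real keys use ~1024-bit primes.
import random
from math import gcd

p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                  # standard simplification for Paillier
lam = (p - 1) * (q - 1)    # phi(n) works in place of lcm(p-1, q-1)
mu = pow(lam, -1, n)       # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then unblind with mu
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic addition: the party holding only ciphertexts can compute
# Enc(a + b) = Enc(a) * Enc(b) mod n^2 without ever seeing a or b.
a, b = encrypt(20), encrypt(22)
print(decrypt((a * b) % n2))  # 42
```

The server multiplying `a * b` never learns 20, 22, or 42; only the key holder can decrypt the result.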
Are we reading the same article? It's talking about homomorphic encryption, i.e. doing mathematical operations on already-encrypted data, without being aware of its cleartext contents. It's not related to SGX or other trusted computing technologies.
That's my point, this sounds like a way to create a backdoor for at-rest data.
Honestly, I get the feeling it would be more expensive and more effort to backdoor it.