.\setup.exe /product server /auto upgrade /EULA accept /migratedrivers all /ShowOOBE none /Compat IgnoreWarning /Telemetry Disable
I've used this to upgrade 10 to 11 on non-approved hardware, going back to at least 2nd-gen Intel CPUs. I've used it to upgrade existing Pro, EDU and IoT installs that didn't want to upgrade. The install window will say Server, but it isn't.
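For anyone following along, this is roughly how it runs, assuming the Win11 ISO ends up mounted as E: (the drive letter and ISO path are just examples; mount it from Explorer or with PowerShell's Mount-DiskImage):

    rem e.g. powershell -Command "Mount-DiskImage -ImagePath C:\isos\Win11.iso"   (example path)
    rem then, from an elevated command prompt on the machine being upgraded:
    E:
    .\setup.exe /product server /auto upgrade /EULA accept /migratedrivers all /ShowOOBE none /Compat IgnoreWarning /Telemetry Disable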
Despite the TPM being a pretty good and useful idea as a secure enclave for storing secrets, I'm concerned that giving companies the ability to perform attestation of your system's "integrity" will make the PC platform less open. We may be headed towards the same hellscape that we are currently experiencing with mobile devices.
Average folks aren't typically trying to run Linux or anything, so most people wouldn't even notice if secure boot became mandatory overnight and you could only run Microsoft-signed kernels w/ remote attestation. Nobody noticed/intervened when the same thing happened to Android, and now you can't root your device or run custom firmware without crippling it and preventing the use of software that people expect to be able to use (i.e. banking apps, streaming services, gov apps, etc.).
Regardless, this is more of a social issue than a technical issue. Regulatory changes (lol) or mass revolt (also somewhat lol) would be effective in putting an end to this. The most realistic way would be average people boycotting companies that do this, but I highly doubt anyone normal will do that, so this may just be the hell we are doomed for unless smaller manufacturers step up to the plate to continue making open devices.
You just need to be able to translate their doublespeak.
This is all publicly documented by Microsoft; you just need to translate their doublespeak.
Google does the exact same thing, and people were sounding the alarms when they did it, but Microsoft gets a pass?
Use ChatGPT to outsource your critical thinking, because I'm not gonna do it for you.
While I'm not a cryptographer... I never really understood the appeal of these things outside of one very well-defined threat model: namely, they're excellent if you're specifically trying to prevent someone from physically taking your hard drive, and only your hard drive, and walking out of a data centre, office, or home with it.
It also provides measured boot, and I won't downplay it, it's useful in many situations to have boot-time integrity attestation.
The technology's interesting, but as best as I can tell, it's limited by the problem of establishing a useful root-of-trust/root-of-crypt. In general:
- If you have resident code on a machine with a TPM, you can access TPM secrets with very few protections. This is typically the case for FDE keys assuming you've set your machine up for unattended boot-time disk decryption.
- You can protect the sealed data exported from a TPM, typically using a password (plus the PCR banks of a specific TPM), though the way that password is transmitted to the TPM is susceptible to bus sniffing for TPM variants that live outside the CPU. There's also the issue of securing that password now, though. If you're in an enterprise, maybe you have an HSM available to help with that, in which case the root-of-crypt scheme you have is much more reasonable. (A rough command-line sketch of sealing against PCRs follows this list.)
- The TPM does provide some niceties like a hardware RNG. I can't speak to the quality of the randomness, but as I understand it, it must pass NIST's benchmarks to be compliant with the ISO TPM spec.
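For the curious, the seal-to-PCR flow described above looks roughly like this with the Linux tpm2-tools CLI (a sketch only; the PCR selection and file names are arbitrary examples, not a recommendation):

    # create a primary key in the owner hierarchy
    tpm2_createprimary -C o -c primary.ctx
    # build a policy bound to the current values of PCRs 0,2,4,7
    tpm2_createpolicy --policy-pcr -l sha256:0,2,4,7 -L policy.dat
    # seal a small secret under that policy (an extra password can be added with -p)
    tpm2_create -C primary.ctx -L policy.dat -i secret.txt -u seal.pub -r seal.priv
    # later: load and unseal, which only succeeds while those PCRs still match
    tpm2_load -C primary.ctx -u seal.pub -r seal.priv -c seal.ctx
    tpm2_unseal -c seal.ctx -p pcr:sha256:0,2,4,7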
What I really don't get is why this is useful for the average consumer. It doesn't meaningfully strengthen FDE in particular in a world where the TPM and storage may be soldered onto the same board (so the drive is impractical to steal as a standalone unit rather than together with the TPM).
I certainly don't understand what meaningful protections it can provide to game anti-cheats (which I bring up since apparently Battlefield 6 requires a TPM regardless of the underlying Windows version). That's just silly.
Ultimately, I might be misunderstanding something about the TPM at a fundamental level. I'm not a layperson when it comes to computer security, but I'm certainly not a specialist in designing or working with TPMs, so maybe there's some glaring a-ha thing I've missed. But my takeaway is that it's a fine piece of hardware that does its job well, yet its job seems too niche to be useful in many cases; its API isn't very clear (suffering, if anything, from over-documentation and over-specification); and it's less a silver bullet and more a footgun.
So basically the same thing you'd get by having an internal USB port on the system board where you could plug a thumb drive to keep the FDE key on it?
> It also provides measured boot, and I won't downplay it, it's useful in many situations to have boot-time integrity attestation.
That's the nefarious part. You get adversarial corporations trying to insist that you run their malware in order to use their service, and it's giving them a means to attempt to verify it.
Which doesn't actually work against sophisticated attackers, so the security value against real attacks is nil; but it works against normies, which in turn subjects the normies to the malware instead of letting someone give them an alternative that doesn't screw them.
https://learn.microsoft.com/en-us/windows-hardware/design/de...
Microsoft is making VBS mandatory for OEMs, hence the CPU needs to support it, hence the roughly seven-year-old minimum for CPUs in what Microsoft supports for Windows.
Yes, you can disable it during setup as a workaround, but it's exactly that. As for why you'd want to make your system less secure, I'll leave that as an exercise for the reader, for when they turn around two weeks from now and complain about Windows security.
The actual CPU requirements are VMX, SLAT, IOMMU and being 64-bit, all of which have been available, on the Intel side at least, since around 2008, with some arriving even earlier.
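If you want to see what a given box actually reports for those features, the stock tools will show it (a rough sketch; the exact field names vary a bit between Windows versions):

    rem "Hyper-V Requirements" near the bottom covers the VMX/SLAT side
    systeminfo
    rem msinfo32 lists the virtualization-based security entries; the same data is exposed via WMI:
    powershell -Command "Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard"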
The CPU requirement was just an attempt to force people to buy new hardware they didn't need. Nothing more.
A perfect example of this is the Ryzen 5 1600. It's not officially supported, but it meets every single one of the requirements, and I had no trouble enabling the feature in the run-up to the release of Win11 (before it was blocked for no reason). I know this because I did it.
Also, they marked all but one 7th-gen Intel Core CPU as unsupported, and the one they did add just so happens to be the one they were shipping in one of their Surface products. No way you can tell me this list was based on fact and not on the whims of some random PM when they do stuff like that.
I'd offer that the likely goal here is the most usable system possible, working with what one has. If folks are here, there's usually a lot of necessity factors in play.
Why do you feel the need to defend a convicted monopolist for engaging in user hostile behavior?
It's worth asking, but I think there's an answer: they want the OS to be transformed into an interface to their cloud where recurring revenue is easier. To do that, they need to make it more like a mobile OS and more locked down. TPM helps this.
Why didn't that go further? Presumably virtually any x86-64 box currently in circulation would be fast enough to run a VM with a full copy of 32-bit XP/Win7/Win10, or even a full carousel (or download store) of DOS and early Windows releases. It could be the most compatible Windows ever, solving the "64-bit Windows can't run 16-bit apps" gotcha and perhaps allowing some way to bridge in support for devices that can only be driven by old 32-bit XP drivers.
Given the free Win 7/8->10->11 upgrade path, almost every end user who'd want a Windows license probably already has one. This leaves enterprise licensing and computer manufacturers (laptops, mini-PCs, desktops), who wouldn't care about this because they'll have newer hardware anyway.
> …going back to at least 2nd gen Intel CPU.
Would that be the 4040 or the 8008?

Heh, yeah. In the moment I couldn't come up with the brief, unique descriptor and reached for the modern shorthand.
The only clear option is to use the internal code name, but that's technically not valid once it's released so "Products formerly Sandy Bridge" is the best Intel can come up with. (https://www.intel.com/content/www/us/en/ark/products/codenam...)
I haven't tried Win11 on personal hardware so far, but since Win8, boot times are not much of an issue in my experience.
Making the whole OS the vehicle for a rent-seeking vendor lock-in scheme built to make you pay more and more to keep up the same set of functionality is more of a problem I think.
Just a note for others that the language of the ISO needs to match what you used to install Windows 10.
For example, I installed Windows 10 with the "International English" ISO and if I try this with the Windows 11 "US English" ISO, then it doesn't let me do an upgrade where it keeps installed programs and drivers.
Another trick that should still work, though I haven't tested it with newer Windows 11 builds: to create Windows 11 install media that will install and boot via BIOS — useful on machines where Windows doesn't work correctly under UEFI, e.g., older MacBooks that only work properly with Windows when booted via CSM — create writeable, BIOS-bootable Windows 10 x64 install media, then replace the install.wim file with one from an appropriate Windows 11 ISO.
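Roughly, with E: as the mounted Windows 11 ISO and F: as the Windows 10 stick (drive letters are examples; note that some consumer ISOs ship install.esd instead of install.wim, and a >4 GB image on a FAT32 stick would need dism /Split-Image first):

    rem check what the Win11 image actually contains
    dism /Get-WimInfo /WimFile:E:\sources\install.wim
    rem overwrite the Win10 image on the BIOS-bootable stick with the Win11 one
    copy /y E:\sources\install.wim F:\sources\install.wim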
I've used the command with a Win11 LTSC/IoT 24H2 ISO. I upgraded Win10 LTSC/IoT 21H2 to Win11 24H2 LTSC/IoT. I've done this on two old notebooks, a Dell Core2Duo and a Thinkpad T430.
Any specific features?
W10 IoT gets support until 2032 I believe.
ex: With Win10->Win11 I get a fair number of crashes when remotely viewing the event log mmc.
I set up a virtual Win10 edu to try and convert to Win11 LTSC/IoT. The only option setup gave me was to wipe out my apps and keep my personal files. That's what it did.
So the command doesn't offer much improvement over a wipe and a reload. Sorry I don't have better news.
The command doesn't ask any questions so there's no opportunity to tweak it. I'm getting a recovery plan in place before I pull the trigger on mine.
edit: I might also go another way. There are some other setup methods that might be a better fit for cross-upgrading Windows types. I'm actively investigating but it may be a month before I'm in a position to try them out.
I set up a virtual Win10 edu guest in Hyper-V. I mounted a Win11 LTSC/IoT ISO as a drive using the Hyper-V tools. When I ran the command I got the same error you did.
Next, I copied the Win11 LTSC/IoT ISO to a folder in the Win10 edu guest. I mounted the ISO and ran the command and didn't get the error.
It's installing now but the setup only gave an option for saving my files, not the apps. It's not great but it makes sense.
The line should look like this: https://i.postimg.cc/VLHfF4H3/commandline.png
If it's correct, I'd like to know some specifics, if you don't mind. Current OS and ISO you're working with.
I've never had this fail and if there's an instance where it will, I'd like to know about it.
Using updated Win 10 and current 25H2 iso downloaded direct from Microsoft.
The only thing that is perhaps a little unique is that this is a Win10 Home installation that was previously upgraded to Win10 Pro.
EDIT: Well, there is one more little detail. I used RUFUS to produce a bootable USB drive. Apparently, the install is checking for this somehow.
I reformatted and used WinRAR to extract the ISO to the USB drive and it is currently in the process of installing (30% complete). I'll post the final results.
Mounting an ISO from within Windows seems to expose an upgrade-centric version of the installer.
Just don't do it. Instead, simply format the drive and extract the ISO to it using WinRAR.
Then the install works as prescribed.
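Concretely, something like this (E: being the mounted ISO and F: the USB stick in this example; WinRAR, 7-Zip, or a plain copy of the mounted ISO's contents all amount to the same thing):

    rem quick-format the stick as NTFS (sidesteps FAT32's 4 GB limit for install.wim)
    format F: /FS:NTFS /Q
    rem copy everything from the mounted ISO, including hidden and system files
    xcopy E:\*.* F:\ /E /H /K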
I got the same behavior on one that had W10 22H2.
But I got a more descriptive error message [/product not a recognized switch].
To fix this, I had to replace the setup.exe file that now ships with the 25H2 ISO. The current setup.exe appears to have been badly lobotomized compared to how it was before, by a decision-maker who has got to be equally brainless (less-brainful?).
Using setup.exe from the 23H2 ISO seems to be a workaround for this next annoying decline in Windows 11's suitability for industrial and sensitive enterprise applications. If I said it was like the "canary in the coal mine," some would say I was exaggerating, because it is too late for that; there have been warning signs for years. Not so much the tweety bird who thought he saw a puddy cat, more like the chicken already on the dinner table, or a goose that is more than fully "cooked".
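The swap itself is just a file copy, roughly like this (E: = the mounted 23H2 ISO, F: = the stick or folder holding the extracted 25H2 files; whether the copies under \sources also need replacing I haven't verified):

    rem keep the 25H2 setup.exe around, then drop in the 23H2 one that still accepts /product
    ren F:\setup.exe setup-25h2.exe
    copy E:\setup.exe F:\setup.exe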
Going further, it's also good to prevent "surprise" data loss when you are using a local "account" and aren't online at all, a threat which comes only from Microsoft itself: their auto-BitLocker now encrypts your whole drive more aggressively on new installs than ever before, and the result can be worse than many types of malware/ransomware. To prevent that, you need to interrupt the first boot after the upgrade files are copied and boot instead into the Recovery Console (or, alternatively, a separate Windows install) so you can do an offline Regedit creation of a new DWORD in the target Windows\System32\Config\SYSTEM hive being upgraded: add PreventDeviceEncryption under HKLM\SYSTEM\ControlSetXXX\Control\BitLocker, then set its value to 1. This needs to be done carefully, and some renaming in Regedit can be involved.
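From a recovery command prompt, the offline edit looks roughly like this (assuming the target Windows volume shows up as C: there and ControlSet001 is the active set; check the Select key in the loaded hive if unsure):

    rem load the offline SYSTEM hive under a temporary name
    reg load HKLM\TargetSYS C:\Windows\System32\Config\SYSTEM
    rem create the DWORD that tells setup not to auto-encrypt the drive
    reg add "HKLM\TargetSYS\ControlSet001\Control\BitLocker" /v PreventDeviceEncryption /t REG_DWORD /d 1 /f
    rem write it back out and detach the hive
    reg unload HKLM\TargetSYS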
Unfortunately, even if this registry setting preventing encryption was already set to 1 before this type of upgrade, the PreventDeviceEncryption DWORD is completely removed by the setup process. Which is why I go a little further and check manually, replacing it if necessary.
Then on the second reboot, the DWORD needs replacement again, so repeat the above process :\
Allowing you to re-live the experience as only malware-type persistence can ;)
After that when you reboot back to the target volume being upgraded (rather than the alternate utility bootmedia), the W11 setup process will proceed without encrypting. Otherwise it can be very likely to encrypt everything it has access to and it's not intended to be recoverable without a Microsoft account. Even with a Microsoft account I don't trust this, seems like the opposite of "trustworthy computing" to me :\
Anyway with that in mind the command line does perform as expected and took this PC from W10 pro 22H2 to W11 pro 25H2, preserving my installed programs and files as far as I can tell. And this is on an MBR-booting PC where Windows 10 was installed to an MBR partition, using legacy CSM with UEFI disabled. No GPT, no EFI folder, none of that.
In W10, I was only using the first 64 GB of an HDD as NTFS, with the remainder unallocated. My Recovery folder (containing winre.wim) was intentionally the one on the same volume as Windows 10. This direct W11 upgrade created a new 750 MB type 27 ("hidden" recovery) partition immediately following the 64 GB, with that new partition containing its new Recovery folder.
If there hadn't been enough unallocated space on the drive, I believe the upgrade process would have replaced the W10 winre.wim in my C:\Recovery folder with the W11 version? Not sure at this point, but C:\Recovery\WindowsRE still contains the previous W10 winre.wim, and the new recovery partition contains the W11 winre.wim.
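To see which copy the system is actually registered to use, reagentc reports the current Windows RE location (just what I'd check, not something the upgrade requires):

    reagentc /info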
Edit: found a recent article documenting BitLocker problems that might be related; look at the comments:
https://www.guru3d.com/story/windows-11-25h2-update-causes-u...
> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.
Didn't know YouTube could improve their review time from 45 minutes to 5 minutes without automation. I bet it's pure magic.
Well, it is time that the rest of the world censors these two corporations. I don't want them to restrict information.
People will find workarounds by the way. This is now a Streisand effect - as people see that Google and Microsoft try to hide information from them, they will now look at this much more closely than before, with more attention.
(Having said that, my bypass strategy is to not use Windows 11 altogether. I don't depend on it, having used Linux for 21 years now, but the machine to my left is actually running Win10, for various reasons, such as that I can fix problems for elderly relatives still using Windows. But I won't ever use Win11 with its Recall spy software. I also don't care that it can be disabled; any corporation that tries to snoop on me like that is evil and must be banned.)
Edit: Ok so the video was restored. That was good, but still, we need an alternative here. Google holds WAY too much power via youtube.
This comment section is wild.
The videos are up. Microsoft and Google weren't meeting in secret backrooms to censor this one channel. The most likely explanation is that a competing channel was trying to move their own videos up in the rankings by mass-reporting other videos on the topic.
It's a growing problem on social media platforms: Cutthroat channels or influencers will use alt accounts or even paid services to report their competition. They know that with enough reports in a short period of time they can get the content removed for a while, which creates a window for their own content to get more views.
The clue is the "risk of physical harm". People who abuse the report function know that the report options involving physical harm, violence, or suicide are the quickest way to get content taken down.
Why? Because they were all paying people to DDoS each other. Kinda silly, but good for business.
It's also a monopoly: Luxottica owns practically all the brands and dictates the prices.
That estimate is way too high. More like 90 eurocents (~$1) for the whole thing, assembled. That's retail price:
(Swiss is not its own language, btw; Switzerland's languages are Italian, German and French.)
At the time (and really, even now) people would get their eyeglasses from their local provider. Who cares, insurance probably covers some or all of it. Even getting your contacts or glasses prescription handed over was like pulling teeth, since they wanted to keep it in house.
So the new market of get your prescription, then buy online was born. And it was like the wild west, not full of eye care professionals but...mostly less than above board places all fighting for your click.
Think about it...if you finally decide to Google eyeglass frames or such, you were entering a whole new realm. And why fight over SEO, when you can just take your competition offline, as most people will click a link, watch it load for 5s, then click back and try the next.
I have no idea if the industry is still shady or not, but 20 years ago, it was full of nothing but bad actors.
I don't know if it matters at all to the conversation or not, but none of the actors(gambling or eyeglasses) were based in the US, despite their domain names and courting US customers. The DDoS company was based in the US.
Exactly, this is why vision "insurance" is basically a scam, supported only by US tax laws that enable employers to offer vision "insurance" tax-free, while people buying their own eyeglasses have to pay with after-tax dollars.
Except where insanely inflated, glasses cost at most tens of dollars. Certainly not the kind of thing one needs insurance to cover.
That's not the argument IMO. They don't have to be intentionally malicious in each action. A drunk driver doesn't want to kill a little girl in the road. Their prior choices shape the outcome of their later options. A drunk driver decides to get behind the wheel after drinking. A large company makes a decision to make more profit knowing there are repercussions and calculating the risk.
Complain to Congress, they’re the ones who set this up to work this way.
It's also not clear how an informational YouTube video would be either a circumvention tool or an act of circumvention if nothing in the video itself is infringing.
You'd see them, if you read the article. Look for the big image with the caption saying "Source:".
I should warn you that you'll have to make it through seven (7) sentences of text before you get there.
As a side note, not a single word of your comment just now is true. Did you think no one would notice?
Please don't sneer at fellow community members on HN. https://news.ycombinator.com/newsguidelines.html
>Two weeks ago, Rich had posted a video on installing Windows 11 25H2 with a local account. YouTube removed it, saying that it was "encouraging dangerous or illegal activities that risk serious physical harm or death."
Stop embarrassing yourself.
Who lobbied for it to work that way? I'm assuming Google isn't entirely innocent here.
If they don't react quickly and decisively to reports of "possible physical harm", even if the reports seem unfounded, they'll eventually get the NY Times to say that somebody who committed suicide "previously watched a video which has been reported to Youtube multiple times, but no action was taken by Google."
If that's too expensive, your platform is broken. You need to be able to process user reports. If you can't, rethink what you're doing.
The central ill of centralized web platforms is that the US never mandated customer/content SLAs in regulation, even as their size necessitated that as a social good. (I.e. when they became 'too big for alternatives to be alternatives')
It wouldn't be complicated:
- If you're a platform (host user content) over X revenue...
- You are required to achieve a minimum SLA for responsiveness
- You are also required to hit minimum correctness / false positive targets
- You are also required to implement and facilitate a third-party arbitration mechanism, by which a certified arbitrator (customer's choice) can process a dispute (also with SLAs for responsiveness)
Google, Meta, Apple, Steam, Amazon, etc. could all be better, more effective platforms if they spent more time and money on resolution. As-is, they invest what current law requires, and we get the current situation.
I really wish someone could tell me that either
1) Yes we can make a system that enables functional and effective customer support (because this is what this case is about) no matter the language
2) No, we can't, because it's fundamentally about manpower that can match the context against actual harm.
Whatever I suspect, having any definitive answer to this decides how these problems need to eventually be solved. Which in turn tells us what we should ask and hope for.
I'm not saying that it's humans, but it's humans.
Augmented by technology, but the only currently viable arbitrator of human-generated edge cases is another human.
If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.
But, it is viable. Many profitable businesses exist that don't pay for this.
One may instead mean that they want such businesses to be made non viable, in which case we should critically consider which business models that we might currently like other consequences of may be made non viable. For example, will users suddenly need to pay per post? If so, is that worth the trade-off?
Imho, we should do what we can to make sure they're required to pay for those externalities.
Then, they either figure out a way to do that profitably (great! innovation!) or they go under.
But we shouldn't allow them to continue to profit by causing external ills.
They do figure out how. That's the problem. This stuff is all trade offs.
If you say they have to remove the videos or they're in trouble then they remove the videos even if they shouldn't.
You can come up with some other rule but you can't eliminate the trade off so the choice you're making is how you want to pay. Do you want more illegitimate takedowns or less censorship of whatever you were trying to censor?
If you tried to mandate total perfection then they wouldn't be able to do it and neither would anybody else, and then you don't have video hosting. Which nobody is going to accept.
And that requirement can be created by more robust, outcome-defined regulation.
People keep looking at the absolute amount of profit across a massive service and assuming that it means they could afford to do something expensive. But the cost of the expensive thing is proportional to the size of the service, and then they can't, because dividing the profits by the number of hours of video turns into an amount of money that doesn't buy you that.
> And that requirement can be created by more robust, outcome-defined regulation.
What are you proposing exactly?
Outcome-based metrics are the things that often fail the hardest. It's reasonable to have a given level of human review when you have functioning CAPTCHAs on the reporting function to rate limit spam reports, but if you then require that by law and LLMs come around that can both solve CAPTCHAs and auto-generate spam reports to target competitors etc., now your cost of doing human review has gone up by many fold but you're still expected to meet the same outcomes. Then they either have to tune whatever metric you're not forcing them to meet up to draconian hellscape levels to meet the one you're demanding or you're now demanding they do something that nobody knows how to do whatsoever, both of which are unreasonable.
And all of this is because the government doesn't know how to solve the problem either. If you want to prohibit things with a "risk of physical harm" then you have to hire law enforcement to go drink from the fire hose and try to find those things so the perpetrators can be prosecuted. But that's really expensive to do properly so the government wants to fob it off on someone else and then feign indignation when they can't solve it either.
This can be accomplished with bogus DMCA notices too. Since Google gets such a high volume of notices, the default action is just to shoot first and ask questions later. Alarmingly, there are zero consequences (financial or legal) for sending bogus DMCA notices.
https://techhq.com/news/dmca-takedown-notices-case-in-califo...
I think it's high time google stopped acting as judge jury and executioner in the court of copyright enforcement.
[https://copyrightalliance.org/education/copyright-law-explai...]
Not saying Google is good or anything, but this is well trod ground at this point.
Does Microsoft unfairly benefit from Google's takedown tirefire? I do not know.
But if I were designing a voting system for takedowns it would be:
- One non-DMCA takedown vote per user per year
- No takedown votes for accounts less than 1 year old
- Take down all equivalent content when a video is voted down
- Verification of DMCA ownership before taking down DMCA-protected content
Also, it doesn't even need to be collusion between Microsoft and Google, but to pretend like that's never a thing is to be ignorant of history.
Stop defending these big companies for these things. Even if your version of the story is true, the fact they allow their platform to be abused this way is incredibly damaging to content creators trying to spread awareness of issues.
But also, do you seriously think there is a massive amount of competition at the scale of a 330k subscriber channel for people to bother pulling off this kind of attack for two videos on bypassing Windows 11 account and hardware requirements?
Regardless of what happened here, Google is to blame at least for the tools they have made.
As for Microsoft, I don't think there's anything disagreeable about saying that they've tried hard to get people to switch to hardware with their TPM implementation, and lied about the reasons. Likewise for forcing Microsoft accounts on people. I am not certain they were involved in this case, but they created the need for this kind of video to exist, so they are also implicated here.
Enough to cause this behavior. I don't know if there's a mathematical or organizational law or something, but it seems like there's always a way to abuse review mechanisms for large communities / sites.
Never enough manpower to do review for each case. Or reviews take a long time.
Manpower at a given salary cost.
All content platforms could throw more money at this problem, hire more / more skilled reviewers, and create better outcomes. Or spend less and get worse.
It's a choice under their control, not an inevitability.
The only frequent obvious problem I see is Youtube not telling people why their videos get hidden or taken down or down ranked. Long time creators get left in the dark from random big changes to the platform that could be solved with an email.
We have companies with billions of customers but smaller customer service than a mid-sized retailer from the 90s. Something is not right.
IME it's especially bad with Admob. They've purposefully kept their email contact option broken for years and the only "help" you can access is from their forum, which is the absolute worst and never provides any meaningful resolutions. It's awful.
People posting on these sites as content creators aren’t customers.
The root problem is twofold: the inability to reliably automate distinguishing "good actor" and "bad actor", and a lack of will to throw serious resources at solving the problem via manual, high precision moderation.
> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.
They'll silently fix the edge case in the OP and never admit it was any kind of algorithmic (OR human) failure.
People are so quick to assume conspiracy because it is mentally convenient
Source:
If they claim that a non automated review occurred but then still took down/denied appeal, what caused them to change course?
What is your source that the restoration of the video was not because of the noise?
They need to do what? Browser, zoom, email client. They are never going to install anything.
All of these have great options on linux, and they work just as well.
Just put them on Debian stable and be done with it.
All 3 give it a solid thumbs up. "It never crashes", "It's so easy", "It's fast", "None of that Windows bs".
Even the one major 'windows' app that my mom needs to use is going Web only... so I figure if I install Debian Stable + Widevine that'll cover 99.9% of the use case and I gain an OS that just works correctly.
Any particular reasons?
That being said, it's also pretty easy to get a full linux shell and even install gui apps via flatpak or whatever.
Besides, all major distributions (Debian, Fedora, Ubuntu) ship with a shim signed by Microsoft, and systemd..
*BSD is the only escape, but for how long?
> Linux is under control of the same companies
Linux is indeed open source, so are you trying to say that "Linux is EFFECTIVELY under control of the same companies VIA UEFI WITH SECURE BOOT ENABLED"? Or is there a big-Tech cabal controlling Linux in another manner? I get that most big-Tech companies are major contributors to open source projects.
> all major distributions (Debian, Fedora, Ubuntu) ship with a shim signed by Microsoft
Having a shim signed by Microsoft makes no difference if these distributions are being installed on hardware whose firmware trust database is under the owner's control, e.g. motherboards from Purism (Librem laptops), System76 (Thelio, Galago Pro, etc.), Framework Laptop (2021 →), Star Labs, Raspberry Pi / single-board computers, and countless DIY PC builds with motherboards (ASUS, ASRock, Gigabyte, etc.) that expose Secure Boot options. It is usually only consumer hardware from major OEMs (Dell, HP, Lenovo, etc.) that ships with only Microsoft's key in the firmware trust database.
> and systemd
You are suggesting that "systemd" is also part of the lock-in or control (in your mind) of those distributions. But strictly in the context of shim and Secure Boot, systemd is not the same issue: systemd is an init system / process manager in userland, not part of the firmware/boot-loader signature infrastructure. Major distros use systemd, so from a "vendor/lock-in" narrative one may lump boot-loader trust and systemd governance together, but strictly speaking your assertion is more of an opinion/ideological piece than a formal technical dependency.
> *BSD is the only escape
Not true. Not all Linux distributions use the Microsoft-signed shim: Tails, Qubes OS, PureOS, Alpine, Void, Gentoo, etc. deliberately avoid it. Most minimalist, privacy-focused, or DIY distributions refuse the Microsoft-signed shim route because their users are expected to control their firmware settings or use owner-controlled keys.
The YouTube drama you glossed over is the point: we've reached a stage where explaining how to bypass Microsoft's arbitrary hardware requirements gets censored for "physical harm".
On systemd: calling it a Red Hat/Microsoft-driven monoculture that mediates everything from device mounts to DNS is accurate. The same consolidation that gave us Microsoft-signed boot chains also delivered one init system to rule them all. Dismissing this as "merely ideological" is exactly how normalization works: by the time it's a technical dependency, it's already too late. Look at the "cloud" ecosystem.
You listed exceptions, but let's be honest: they are only niche distros. Tails and Qubes are security-hardened research tools, not daily drivers for "elderly relatives". Alpine, Gentoo and Void require deep knowledge, technical skills and ongoing maintenance that defeats the "set it and forget it" goal.
And yes, you can buy a Purism or System76 laptop, but that's the exception that proves the rule: you must pay a premium and choose their hardware to escape the shim problem. That's not freedom; it's choosing your corporate master from a smaller menu, all subject to the same master/ideology.
*BSD remains the only ecosystem offering a complete, usable desktop without either a Microsoft signature or a sprawling, vendor-controlled init system. If that sounds hyperbolic, it's because the Overton window has already shifted so far toward corporate control that stating the obvious appears radical.
Today Linux supports most hardware, but tomorrow, if the Chip Security Act passes, chips will be legally required to contain tracking and kill-switch mechanisms. While the Act doesn't directly mandate that Linux restrict hardware support, it creates the legal infrastructure for exactly that: either mainstream distributions cooperate with the surveillance architecture, or they risk being barred from running on modern hardware.
The 'choice' becomes BigTech-approved Linux that supports backdoored silicon, or niche distros that can't run on any new machine
I could continue with many more examples, but I feel like none of the people over here understand the point.
https://www.centerforcybersecuritypolicy.org/insights-and-re...
Nowadays they censor by putting pressure (by denying payment capabilities) on sites that offer content that they don't agree with.
It's literally their mission: to organize the world's information.
We just didn’t understand it at the time.
And if they do care they will find workarounds as you said.
Nothing will change, the frog has been sitting in boiling water for more than a generation now and the newbloods never experienced the computational freedom you hold dear; they will happily use whatever corporate surveillance technology is being forced upon them. They will even defend it to the bone if you try to take it away
(A big) but: YouTube has grown into such a monopoly that they now dictate what we are going to be able to watch on the web.
This is sadly so hard to change, so many creators are now literally working "for" YouTube, and there are so many quality videos there.
They might even put the ads in different places for different users to throw off things like Sponsorblock.
Also you can block the ads so you have the third option.
Until that point I’ll continue to pay. Content creators tell me they get more income from premium subscribers too so win-win.
I consider my time to be valuable, and really hate ads though so removing even a few minutes of ads are worth the $8 to me.
It is literally better in every way and helps support the content creators whose work you enjoy. As a bonus it includes YouTube music.
I guess taking those creators' work without paying, using an adblocker, is one way to live, but that doesn't give you the moral high ground you think it does, and your entitled attitude is kind of nasty. Do you really feel like YouTube and all of those creators owe you free shit? Just don't use it if you hate commercials that much and can't afford 8 lousy bucks.
No I don't work there. I'm just a fan who thinks the excuses people make to steal are gross.
I would have to jump through all kinds of hoops to do that, like creating an account, staying logged in, and on top of that I would still need to have my adblocker to fix their website trying to shove "trending" videos down my throat. The real recommendations based off viewing history work great with a disposable anonymous cookie.
Simply put, paying doesn't provide enough value over doing it the free way. This is YouTube's problem, not mine. I don't pirate games, simply because buying them from steam or gog is a much nicer user experience than pirating the games. But dealing with google and getting so little in return, still needing to use extensions to fix their crap website anyway? Nah, I don't care about their business model.
I’m having much more trouble imagining life without Google Maps that without YouTube.
The report system has been gimped massively: you can't even type in reasons any more, you just have to select from some limited options and hope for the best. It took me over half a year of reporting a permanent street closure near me for them to actually change it, and all the while they were happy to direct people and cars down it. Other times they just outright reject reports without any reason.
Directions have got more sucky over the years.
More and more advertising has crept into the maps as well: logos for stores and restaurants showing over other places, even when zoomed out, because they paid to be boosted.
I only use Google for street view and, on google earth, for historical aerial imagery these days, not for navigation. For that I use apps that use OSM like Organic Maps or now CoMaps.
I find it hard to extract the same practical value from YouTube. There have been cases where I would see how people repair stuff and to some degree it has been useful but it is hard to find that "useful" type of video you look for among all the noise. Product review videos are always kind of fishy, because reviewers are mostly sponsored. So I can't quite get to extract anything of great value from YouTube.
Btw, thank you for the Organic Maps tip. Looks really really cool!
That said, YouTube has been auto-dubbing videos using an algorithm that overdubs English spoken by people with an accent, which I consider discriminatory (if not outright racist), so I'm trying the various alternatives now. In a few months I think I'll have more of an opinion about them.
I'll just have to remember to never visit Spain, lest I get arrested for drug trafficking because of my phone.
Microsoft, on the other hand, seems to be reheating the old Palladium/Trusted Computing concept, enhanced now by Copilot. This idea was already criticized over 20 years ago as a dangerous attempt to turn desktop machines into uncontrollable appliances which would run only approved software and serve access only to approved, "safe" content rigged with DRM. And frankly, with all this play around chat control and age verification, it's hard not to see some similarities. Maybe that's where this is all going.
I'm not quite that cool, but I have been using it full time since about 2009, so I'm not too far behind :)
The only time that I have to use Windows is because I have to play tech support for my parents, because despite considerable effort on my end, I have been completely unsuccessful at convincing them to move to Linux or Mac. It's a little annoying, because when I bring up the subject they act like I should just "live and let live", but that's a really stupid argument when they're saying this while I am fixing their computer. Somehow this is lost on them.
I have complained about this a bunch of times on here, but I'll say it again: if you work on Windows Update, then you should consider any career other than software engineer. Windows Update has made the world a worse place because it disincentivizes updating your computer, leading to an increase in unpatched, vulnerable machines. Update software isn't allowed to suck.
It didn't bother me that much until a few months ago, when an auto-update to Windows 11 bricked my mom's computer. And since Windows won't support any filesystem that didn't coexist with the dinosaurs, and because System Restore doesn't work, and because the automatic repair tools never work, I ended up having to walk them through flashing a drive with a fresh copy of Windows 11 to get it working.
Oh, also, the diagnostic tools don’t appear to work, and they don’t really have “live USB” support for anything anymore, so I actually had to first walk my dad through flashing Ubuntu to a drive so I could tmate in, mount the NTFS drive and rsync all the files to my server. The only way to save Windows and NTFS is to use Linux, apparently:
If they were running Linux, I would have them set up with automatic system snapshots with ZFS and since ZFS is actually a competently built filesystem the snapshots would actually work.
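Even the manual version is only a couple of commands (a sketch; the pool/dataset names are made up, and something like zfs-auto-snapshot or sanoid would automate the schedule):

    # snapshot before touching anything
    zfs snapshot rpool/home@pre-update
    # see what snapshots exist
    zfs list -t snapshot
    # roll back if the update goes sideways (-r also discards snapshots taken after this one)
    zfs rollback -r rpool/home@pre-update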
I am ragging on filesystems but it’s not even inherent to filesystems. I run NixOS on my laptop and every time I do a rebuild it takes a snapshot, so if I break something I can just reboot and choose an older generation, and this works even on ext4. I think this is how Windows System Restore is supposed to work but it has the slight disadvantage of not actually working, or it is so limited so as to be useless (e.g. you apparently cannot restore a Windows 10 restore point from a Windows 11 install, a big problem if the auto update to Windows 11 was the actual problem).
I have debated giving the ultimatum, and I haven’t ruled it out, but in order for it to work I need to be willing to stick to my guns if they call my bluff, and I don’t know that I am there yet. I have been trying to plant the seed though it has not been successful.
I'm happy to work in Linux and to see the great improvements made there over the decades.
If they censor something like this, how could we trust these platforms with the actually important subjects?
Most Americans literally can’t imagine news as anything other than entertainment.
Framing it in terms of trust is already problematic.
We don't trust the NYTimes or Washington Post, they are a source of information that needs to be taken with shovels of salt and require additional research to get to anything trustworthy. And we always understood that was their role.
We don't trust supermarkets or retailers to give us important pricing information, we do the research to get anything actionable.
Why is trust involved for YouTube ?
And it is why total freedom of speech on a platform does not mean we can trust it; maybe even the opposite, because people who tell a lie are more motivated (money or whatever).
I am not justifying the W11 video removal; I'm just saying that thinking YouTube is trustworthy because it's open to everybody is a mistake.
More or less the charitable and responsible approach to being ultra-rich, one which has disappeared in this century.
I see the people in charge of these big corporations as lizards, given every decision they take seems to be anti-Humanity. We should cherish non-profits, small businesses, having a good and boring life, doing normal things. Instead we idolise being successful, rich, or famous. What a stupid system…
The answer is no, we can't.
The only real competing video platform that promises no censorship is Rumble ( https://rumble.com ), but it has a very right-wing slant due to conservatives flocking to it during all the Covid-era social media censorship.
Take freedom of speech for instance, half the thing you can say in usa would be deemed as hate speech in Europe.
There's also this annoying pattern where 98% of the complaints about censorship are from people who are mad that the objectively stupid and dangerous stuff they were trying to profit from got censored, so it becomes a "boy who cried wolf" situation where any complaint about internet censorship is ignored on the assumption it's one of those. (What if there really is a Nigerian prince who needs my help, and I don't read his email?)
This time, though... Society is not being destroyed by people pirating Windows 11. That is entirely different from censoring things that destroy society, and they don't have a good excuse.
https://slatestarcodex.com/2017/05/01/neutral-vs-conservativ...
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
Rumble isn't going to save the internet.
We call those "free speech" platforms nowadays, because apparently the only free speech is Nazi speech.
Anyway I doubt youtube did this intentionally, but it does show how vulnerable their system is to false reports.
- the videos haven't been removed
- the removal part of the story doesn't matter much, and shouldn't be the focus in the title
What do you need in Windows that is not possible in Linux? Its slowness to justify your 40-hour work week?
But in the past 20 years I tried using Linux on the desktop a couple of times.
It always ends the same way - out of the blue it refuses to boot. Of course there's usually a solution, but I just really don't like that my PC can just suddenly decide that I'll be troubleshooting for the rest of the day, usually in front of some very minimal "maintenance" CLI. And that's if I got the time - I may have to use my laptop for the rest of the week, now dreading the weekend instead of welcoming it.
Right now I'd have to do a bunch of research first. Would I still be able to play all the games I play with my friends once a week? I have 3 monitors, one of them has a different DPI than the others, did they fix that by now? I got a stream deck, will that be essentially useless? Is my webcam / mic supported? Do I need to learn about various audio architectures before I can ever use a mic again? Which ones of the dozens of apps I use every day can be made to run under Linux?
It'll probably take a 40-hour work week to get to like 90% of where I was on Windows, and then I'd consider myself lucky that I got that much to work at all. And then I'd start waiting for the first "troubleshooting day".
With all that negativity I have to also say that I adore Linux on the server. When all you need in terms of hardware is basically a CPU and any number of storage devices and all you get in terms of UI is SSH, Linux is far superior to anything else.
Distros like Arch, NixOS (my current laptop driver) or even Debian require a bunch of tinkering to get some things to behave properly.
Also, I get tired of all the tech "reboots", eg the 3 or 4 different ways of setting up network or DNS, pipewire vs pulseaudio vs whatever, Wayland vs X11, etc.
> the 3 or 4 different ways of setting up network or DNS, pipewire vs pulseaudio vs whatever, Wayland vs X11, etc
Sounds like a problem with your distribution. I've been on openSUSE Tumbleweed for years and I've never had to tinker with any of those.
Easier to work on than Windows but my Linux pisses me off every day.
Problems with docks, forgetting all monitor setups except for the last dock (I use three, two at the office, one at home), Zoom ALWAYS having problems with screensharing, Network Manager issues since forever (can't VPN like a human being, have to use vpnc like an animal), etc, etc.
In my case it stems from having to deal with multiple distros (and multiple generations of distros, eg 3 LTS Ubuntus) professionally.
In other cases, distros give a choice on which tools to use, usually because the new one is better (but also happens to come with its own new bugs).
Unrelated, I love that any "why aren't you using Linux?" question is actually almost always just a thinly veiled "let me tell you why you're wrong" plant.
That's actually the opposite of what I said. All those issues seem to come from the fact that the user didn't choose a distribution where it's "one-click install".
If you came to me and said "I tried Arch Linux and my installation broke after every update", I think it's fair to say that it's something you should've expected before you installed the distribution. It's unfair to make the comparison for stability between Windows and Linux if your only example is Arch Linux.
So yes, I maintain that the distribution choice is important and that if you constantly run into issues, it's probably a problem with your distribution (or your use thereof).
If there's one thing I'll admit to "doing it wrong" it's that I've been on a distro-hopping binge the past few years because I've (fortunately) not actually needed my laptop as a daily driver, so I've experienced a bunch of them and, so far, none of them have given me a compelling reason to stay.
Many have been interesting (particularly NixOS and Bluefin), some have been easy until you decide you want to get away from defaults (Mint comes to mind). All of them have had some quirks/issues.
I haven't tried a SUSE in probably 25 years so maybe that'll be my next hop.
Mind you, I've had Linux devices for 30 years and I was also a FreeBSD-as-my-main-desktop user for about a decade, so it's not like I'm not into this kind of tech.
I've tried about 4 or 5 distros before settling on openSUSE Tumbleweed (now on my 4th or so year). Linux Mint, Fedora, Kubuntu, Solus, Manjaro...
Ironically, I find Tumbleweed (a rolling distro) more reliable than all the others I've tried. I can't say it's stable per se, but if something breaks you can rollback very easily. Doesn't break often, though.
For example, I think the first issue any new user will face is that many codecs are not available in the official repositories, making various sites (and video players) unusable. The solution to that one is simple: add Packman, a community repository that contains all the codecs. But IIRC Packman is not mentioned anywhere during the install; it is something you need to search for (it is in the wiki). However, Packman very often conflicts with the official repos when it comes to updates, making all the GUI-based updaters (which do not seem to handle cross-repo conflicts the way zypper does) pretty much unusable, as they always give up in the presence of a conflict. And unfortunately some comments I've seen (mainly on Reddit) from people working on the distro seem to indicate at least a minor hostility towards using Packman, so I do not see this being solved any time soon.
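For reference, the usual Packman recipe is something like this (the mirror URL is just one of several; check the openSUSE wiki for a current one):

    # add the Packman repo at a higher priority than the stock repos
    sudo zypper addrepo -cfp 90 'https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/' packman
    sudo zypper refresh
    # switch already-installed multimedia packages over to the Packman builds
    sudo zypper dist-upgrade --from packman --allow-vendor-change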
For an experienced Linux user this is a trivial issue, something that I doubt most (long-time) openSUSE Tumbleweed users would even think about, but for someone new to Linux it can be a bigger issue that they won't find in distros like Debian (though they may find other issues :-P).
There have been other issues I had with openSUSE Tumbleweed. For example, at some point after an update every 3D game had significant input lag regardless of vsync state. I never solved that one; I just rolled back updates (snapper is great for that, but again an advanced Linux user feature) until, months later, the problem stopped happening.

Now I have another issue where the X server randomly stops updating the screen for random numbers of milliseconds, so the whole desktop feels like it is stuttering, but weirdly enough there are no CPU or GPU usage spikes and it doesn't seem related to CPU/GPU usage at all. If anything, it does not happen at all if some OpenGL or Vulkan program is running in a window (so it doesn't affect games, just regular desktop use), and sometimes I just end up running vkcube in another virtual desktop (it doesn't matter if the output is visible or not) to avoid it.

My guess is that there is some sort of scheduling bug in the modesetting driver, as I never had that issue with the amdgpu driver (my guess is the modesetting driver doesn't get as much testing as the amdgpu driver on AMD GPUs). But the amdgpu driver causes the X server to hang after I suspend and resume my desktop ever since I got an RX 7900 XTX (it did not happen with my RX 5700 XT, which was rock solid), so it is a choice between the lesser evil.
> I so badly want to jump ship entirely, but there's several things holding me back. I do music production as a hobby and Ableton Live doesn't play nice with Linux. In fact it seems anything that is resource intensive without native linux support has some issues. I'm also an MS stack developer, so things like Visual Studio Pro aren't available (although I've been using Cursor IDE more and more these days). Lastly I have some games acquired through "the high seas" in which a work-around doesn't exist for compatibility.
The responses I got were to switch to different software. No, no, and no. I paid a lot of money for Ableton Suite and poured many many hours into learning how to use it; it's the DAW I prefer to use, I don't want to switch.
Having said this, I did try to dual boot recently with Linux Mint, and once again ran into headaches getting my Logitech mouse buttons to work.
This should generally work for games of various origins as well.
Extra mouse buttons should generally map correctly. For me, my Logitech MX Master 3 works under Arch. You may need to add udev rules if your mouse generally works but additional buttons don't seem bindable.
Try an Arch Linux-based distro, like Omarchy or Manjaro. Most of these tweaky things will generally work better since you will be on the latest versions of software.
Objectively if you want to run desktop performance intensive software, Linux is not the primary place unless it’s AI/HPC or crypto related. Linux is a bad choice for gaming and people like you who try to pretend like it’s not are wrong and they should feel bad for spreading lies on the internet.
This depends on the game too obviously.
Though having a computer that actually.. just works can't be overstated.
I get that's a car-aazy answer, but here I am.
I don't know what your requirements are because I can say the exact same for Windows.
> especially on laptops
I agree with this but only if you have Nvidia drivers.
But whenever I run into an issue after an update, I just rollback and wait for a few more days because it usually gets fixed. More often than not, it's not even an issue that deserves to rollback, let alone spend a whole evening troubleshooting.
Next time you try a Linux distribution, may I suggest openSUSE Tumbleweed with KDE Plasma?
I know this is a special case: hardware with specific Microsoft firmware. But I imagine that other people have other specific cases.
Also Lightroom and Fusion 360 don't run on Linux, fusion kind of works through wine but barely, and lightroom does not work at all.
Half the time I woke it from sleep the lockscreen would be broken and unresponsive too, requiring a reboot.
Overall it's just too much time to figure out these problems; Windows just works with very little involvement on my part.
Linux just has no upside over Windows in a dual boot context.
If you do dual-boot and don't care about the privacy of the data you put into Windows, I guess so.
I also personally keep no data on my devices, but if I did, having data that I need to reboot to get to would be friction I don't want.
Now I get your point. But still I would prefer to access my "personal" accounts from a device I trust.
Do you use a cloud service for your files?
Some stuff goes in GitHub, none of which I actually truly care about though.
I'm sure you'll groan. :)
But hey, if it's good enough for Cloudflare and Datadog (two past employers), it's good enough for me.
I also may be weird because I don't own any media and I'm perfectly happy with the streaming model. I enjoy not having the mental load of thinking about self-hosting and backing up terabytes of stuff.
I feel "lightweight" and I like it.
I have a Nextcloud instance for family to store files, though.
When second and fourth largest companies by market cap find it in their financial best interest to collaborate with each other, we have a problem.
In healthy markets, two companies that harvest and sell data as a major source of revenue would want to pull an Auric Goldfinger and disrupt one another's data collection practices to decrease the supply and increase the price of ad-relevant data.
Nuked my Windows 10 install and put Pop OS on it + a MacBook separately.
I had Windows 11 (kept it around for gaming), but I binned it a few weeks ago.
Don't game enough to justify it any more (haven't even tried gaming on Linux yet).
Juice was no longer worth the squeeze.
Actually, I would trade visuals for better games. Most games nowadays are better enjoyed as movies than games.
When tech giants start deciding what technical knowledge is too "dangerous" for users to access, we've crossed into a different kind of territory. Installing an OS on your own hardware is now physical harm? That's some creative interpretation of their policies. The irony is that this kind of censorship just validates why people want to bypass these systems in the first place: nobody wants corporations deciding what they can and can't do with their own machines.
There are channels that exist solely to pump out AI slop seemingly designed to trick gullible seniors into identifying themselves in the comments. I suspect the scammers will go after these people later in pig-butchering or related scams.
For example, the “Senior Secrets” channel pumps out videos such as “Over 60? Add THIS Powder To Your Coffee To Walk like You’re 40 Again! | Senior Health Tips.” (I won’t link to the video, but you can easily find it with a search.) The video makes bold health claims justified by citing what appear to be scholarly research studies, such as:
> University of California, San Francisco (2023). "Mobility Enhancement Through Nutritional Supplementation in Older Adults." Journal of Gerontology: Medical Sciences, Volume 78, pp. 445-453.
However, none of the cited studies and papers are real.
The deeply concerning thing is that the video’s narrator invites the seniors who are duped by these claims to identify themselves and reveal their age and locations in the comments. From the transcript at 1m44s:
> "Before we begin, tell us in the comments now your age and where you're watching us from. We're reading and replying to every single comment, so drop your comments below."
I’ve already reported this content to YT, but I’ve seen no apparent follow-up.
Disclaimer: I used to work at Google, but not in anything YouTube related. If you’re in YT and want to reach out, my contact info is in my HN profile.
0 - idk. Can’t call employees “YouTubers”
This type of behavior is the reason.
Linux is good enough for almost everything I do; for the rest there's macOS.
The videos are back. It's also possible that a group of people "brigade" reported his posts for some reason. YouTubers attract haters, too.
Now more people will be motivated to migrate AWAY from Windows since they will have no bypass.
Yes, some will, but unfortunately in actual per-capita/percentage terms it'll be pathetically small.
Do you really think the marketers, economists and social scientists at Microsoft haven't got that figure down to a tee already?
It's a certainty they have, and they've figured it just amounts to noise in the grand scheme of things.
See: Windows 8, Windows Phone.
Unfortunately, I'm not sure a human ever really looked at my case, or, if one did, they were strongly disincentivized to go against the AI. I got nothing but bland, contentless denials of my appeals that got vaguer each time. And I was never able to go viral, so I'm banned from KDP for life over complete nonsense.
Hard to believe this is the same company that made Windows 7. Coulda just ported WSL and security fixes back to that and stopped there. But nooooo.
Observations indicate we're approaching an inflection point. We've had about three decades of Big Tech running a serfdom; unless power starts shifting back to users, we'll be locked-in serfs for good.
I reckon most of us don't actually realize how much trouble we're in already.
What actions could we take that actually matter here?
Where are the friction points for you?
Okay, nothing to see here then. Just some sensationalism around a content moderation mistake.
Is dual boot still a thing, with all the effort from Microsoft to make it hell or impossible?
It might help that I'm using Windows LTSC, and that I have installed Linux and Windows to separate SSDs (with the Linux SSD not being present when Windows was installed). But that might just be unnecessary as well.
Installation is not complicated at all, but I'd install Windows first, because it can be a finicky PoS; Linux is much better at respecting the user's wishes. Installation can be done to the same drive: with Windows already installed, you can resize the last (largest) partition and install Linux into the newly created free space.
The UEFI can then boot either Windows directly, by selecting it in the UEFI boot menu, or GRUB, which can in turn boot Windows or Linux.
With most Linux install media, you can also manage the drives: create partitions, repair the boot setup, delete or create EFI boot entries, and so on.
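For the EFI-entry part specifically, efibootmgr from a live session covers most of it. A sketch (the entry numbers are made up; check your own listing first):

  sudo efibootmgr                                   # list boot entries and the current BootOrder
  sudo efibootmgr --bootorder 0003,0000             # e.g. put GRUB (0003) ahead of Windows Boot Manager (0000)
  sudo efibootmgr --bootnum 0007 --delete-bootnum   # remove a stale entry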
I know that some Windows game anti-cheat systems will now refuse to let games run if Secure Boot is not enabled.
And last but not least: I build my own distro, so can I use my own crypto keys with UEFI Secure Boot hardware, without that blocking windoz from Secure Booting? (I guess the crypto keys for windoz are generated and installed in the UEFI hardware upon... installation.) I never actually looked at the details.
I have no idea about the crypto key situation unfortunately.
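For what it's worth, this is roughly what a tool like sbctl automates: enroll your own keys alongside Microsoft's vendor certificates so Windows keeps Secure Booting, then sign your own kernel/bootloader. A sketch only, assuming the firmware has been put into Setup Mode and the paths match your setup:

  sudo sbctl status                       # check Setup Mode / Secure Boot state
  sudo sbctl create-keys
  sudo sbctl enroll-keys --microsoft      # keep Microsoft's certs so windoz still secure-boots
  sudo sbctl sign -s /boot/vmlinuz-linux  # sign your kernel (repeat for your bootloader)
  sudo sbctl verify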
... and that it is relatively easy to run (most) Windows apps they love through Bottles (https://usebottles.com/), and/or WinApps (https://github.com/winapps-org/winapps)...
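(If anyone wants to try Bottles, the Flatpak build is the usual route; the app ID should be:

  flatpak install flathub com.usebottles.bottles

WinApps has its own setup documented in the repo linked above.)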
... oof
- You can install KDE on Mint without switching distro or reinstalling[0]
- Debian (caveat: packages can be out of date if you need the latest-greatest of something)
- Fedora (caveat: two major OS upgrades per year can feel like a chore)
- EndeavourOS (caveat: requires a bit more expertise and elbow grease to maintain properly)
- Aurora (caveat: still a young project, and I'd consider it a bit experimental and adventurous)
- Kubuntu (caveat: snaps; accept them or learn how to disable them)
KDE Linux is a thing and something to keep an eye on but it's still in alpha/beta and probably not ready for your use just yet.
[0]: Caveat: it's possible that some DE service might not be disabled properly from your old setup and conflict with KDE's equivalent if you keep the Cinnamon packages around
I'll probably go with Kubuntu just because I want something as vanilla as possible with the largest support-base.
As a bonus, if you don't want to build everything from source, there are prebuilt packages available. Instructions for how to use them are in the "Installing the base system" section of the Gentoo Handbook. I've not used the Gentoo-provided prebuilt packages, but I do use my own prebuilts. I've found the process of using them to be well-documented and fairly straightforward.
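In practice it boils down to something like this, assuming a binhost is configured in /etc/portage/binrepos.conf (newer stage3 tarballs ship an entry for the official binhost; the Handbook covers adding one otherwise):

  # in /etc/portage/make.conf: always prefer binary packages when available
  FEATURES="${FEATURES} getbinpkg binpkg-request-signature"

  # or opt in per invocation instead
  emerge --ask --getbinpkg @world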
https://www.fedoraproject.org/kde/
Ubuntu-based distros are fine too, but there are a few weird things to get to grips with, like snaps.
There's really not much difference between most distros these days so I'm sure if you like one you'd like the other.
For example, half the time I try to log in or unlock the screen, it just ignores my password. Fortunately, I have discovered that pressing Escape triggers a crash, so I deliberately segfault it that way in the hope that the password will be accepted the next time.
It sounds like your problem may be with SDDM (the login screen program) rather than Plasma itself. You could try an alternative: https://alternativeto.net/software/sddm/
Wouldn't call it stable.
I've been using the Breeze Dark theme for approximately forever and I've never run into the problem you're describing. However, I've very rarely used SDDM... I find its default rainbow-colored background intolerable and use LightDM instead.
Do you happen to remember configuration that you ended up having to change, and is that computer running Nvidia graphics hardware with the closed-source drivers?
First, I had to figure out how to manually mount the LUKS-encrypted laptop drive while booted from a USB stick; that took a while.
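For anyone else stuck at that step, the rough shape of it is (device names are examples; yours will differ):

  sudo cryptsetup open /dev/nvme0n1p2 cryptroot   # prompts for the LUKS passphrase
  sudo mount /dev/mapper/cryptroot /mnt
  sudo mount /dev/nvme0n1p1 /mnt/boot             # if /boot or the ESP is a separate partition
  # ...then edit config under /mnt, or chroot in if needed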
Trying to recover, I reinstalled the kde, sddm, sddm-kcm and qt5-declarative packages. Still broken. I made sure /etc/sddm.conf was the default configuration; still broken. Then I finally stumbled upon /etc/sddm.conf.d/kde_settings.conf, which was still overriding the defaults to Maldives. Deleting it finally fixed the SDDM login.
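If I'm reading the Maldives bit right (a theme pin, I'd guess), the offending drop-in looked something like this; the fix in the end was just removing it so SDDM falls back to its built-in defaults:

  # /etc/sddm.conf.d/kde_settings.conf (illustrative contents)
  [Theme]
  Current=maldives

  # the actual fix:
  sudo rm /etc/sddm.conf.d/kde_settings.conf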
My wife was thoroughly not impressed with Linux out-of-box experience!
No Nvidia graphics, this was a Lenovo Yoga laptop with AMD graphics.
[0] <https://wiki.archlinux.org/title/SDDM#Enable_HiDPI>
[1] Manual scaling (even non-integer scaling) works fine as long as you have a settings editor that will speak the XSETTINGS protocol, and a daemon running that can be queried. GNOME has both by default. KDE has the settings editor, and you might need to install xsettingsd or similar. The quirk I've found is that while GTK programs accept the display scaling changes immediately, Qt programs must be restarted to adopt the changes.
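For the record, the xsettingsd route looks roughly like this for 2x scaling (the two Gdk keys are the commonly documented ones; as noted, Qt programs still need a restart to notice):

  mkdir -p ~/.config/xsettingsd
  cat > ~/.config/xsettingsd/xsettingsd.conf <<'EOF'
  Gdk/WindowScalingFactor 2
  Gdk/UnscaledDPI 98304
  EOF
  xsettingsd &    # GTK programs pick up the change live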
Assuming they know what they're talking about and aren't just parroting whatever they read others mention, usually when someone says that "Wayland does $THING that X11/Xorg doesn't do", this is really a shortcut for "X11/Xorg could technically do $THING, if enough developers and projects cared about it, but that would be a massive undertaking, and it is easier to convince developers to do $THING if we can control most of the stack to only do $THING in one particular way we want, by working from a clean slate".
Since you mentioned environment variables: I'm not sure exactly what SDDM is doing, but in the case of HiDPI scaling under Xorg, the only method I'm aware of that uses environment variables is Qt's `QT_SCREEN_SCALE_FACTORS`, which is a semicolon-separated list of per-screen scaling factors that Qt applications can use to automatically scale themselves depending on the screen the window/application is on. Considering SDDM is written in Qt, I'll guess that this is what it set.
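For illustration, the variable takes either a plain semicolon-separated list or name=value pairs keyed by output name (the names below are examples; check xrandr for yours):

  export QT_SCREEN_SCALE_FACTORS="eDP-1=2;HDMI-1=1"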
But the thing is, this is far from enough if you want "robust" support under X11/Xorg. The reason is that a typical X application under a typical X desktop has multiple components: an X server (which I'm going to assume is Xorg for now - other X servers are basically Xorg forks and sync with its features), a window manager, an optional desktop compositor and a widget toolkit on the application's side (not strictly needed, as an app can use its own ad hoc code for that, but let's assume it uses one since this doesn't really matter in this case).
The behavior you need for robust HiDPI support is for the application to use the proper scaling for each of its toplevel windows depending on the connected output the window is in (note: this may or may not actually be relevant to DPI - someone may have bad eyesight and want their 27" 1440p monitor to be 150% scaled) and have that be done automatically - ideally, transparently from the user's perspective - as they move windows between outputs and/or add/remove outputs (e.g. connecting/disconnecting or turning on/off a graphics tablet with an embedded monitor would add/remove an output).
Now, technically, Xorg does provide the necessary core functionality to implement the above; however, the issues begin when you start considering who is going to implement it and what part of the stack is responsible for which aspect of supporting window scaling.
Ideally, what you'd want is for applications to be able to scale each of their toplevel windows arbitrarily based on notifications from the underlying system as the user interacts with the application windows (note: this is not necessarily limited to just the user moving windows between outputs - a user could, for example, select an option from their window manager to scale a window at 200% or 300% - this could be useful when doing video streaming or recording videos, for example).
So, in an ideal world, the following should happen under X11/Xorg:
1. Widget toolkits can scale their widgets arbitrarily (ideally not just at fractional level but also sub-100% level too - useful when using secondary screens with a low resolution).
2. Window managers can receive RandR events for output DPI changes and use that information to maintain a scaling factor for each output (the user could also specify custom per-output scaling too).
3. As the user interacts with the windows, the window manager sends notifications to the windows/applications whenever a window needs its scale changed. The widget toolkits use these notifications to scale their windows' contents.
Ignoring a few details, the above is basically what Wayland does since it started from a clean slate where they could dictate everything from scratch.
However, X11/Xorg already has a lot of software written for it, and there are a few snags in the way:
1. Pretty much no toolkit supported arbitrary scaling, so they had to be extended for it. Since Wayland needed that, toolkits that need to support it added the functionality anyway (e.g. Qt and Gtk) though not without issues along the way (AFAIK Gtk didn't support fractional scaling for a long time). Though not all toolkits have support for this.
2. Window managers must be extended to monitor outputs via RandR and send appropriate notifications whenever windows move across outputs to those windows. This would also need some new notification protocol (most likely a new version of EWMH). However...
3. ...toolkits must also be extended to support these notifications - supporting scaling isn't enough if they do not know when to scale. This introduces a problem because...
4. ...window managers will have to deal with toolkits not supporting the notifications. One way would be to just ignore them, but another way is to do the scaling themselves. However, there is another issue here.
5. When using (and having enabled) a desktop compositor scaling can be easy (especially when dealing with edge cases like a window lying across the edge between two monitors :-P), but without one, the window manager needs to scale the window itself (there was a Xorg branch by Keith Packard that introduced server-side window scaling but AFAIK it was never merged) without affecting the rest of the desktop - and of course do the appropriate coordinate transformations for various events (e.g. mouse motion). Moreover since a desktop compositor can be a separate program than a window manager (many -if not most- X11 window managers are not desktop compositors), they both need to somehow coordinate with each other.
6. Since this requires all window managers (and desktop compositors) to be updated, the inevitable result is that there will be a lot of them that will not be updated for quite some time, so applications (or realistically, widget toolkits) will need to also handle HiDPI scaling themselves by doing the RandR queries and automatically sizing their own windows based on output. This is a subpar option because the application does not know the window manager's own state and you can end up with the two "fighting" with each other. Also the window manager cannot do desktop-wide configurations (it is actually blind to them).
7. Obviously whatever protocols in place (as i wrote above, probably a new EWMH version) are used, they'll also need to let the components (window manager, widget toolkit) provide information for when any of the above are in place so the proper action is taken (e.g. a toolkit should not try to do the output tracking itself if the window manager supports it and a window manager should not try to do scaling itself if the widget toolkit supports it - but both need to inform the other about this).
As you can hopefully imagine, the above require the developers of all window managers, all desktop compositors, all widget toolkits and applications not only to coordinate with each other but also handle various cases in case the user used something in the stack that did not support things.
With Wayland since everything was done from scratch, there were less people that needed to be convinced to cooperate - and in practice since Wayland originated from RedHat and the GNOME ecosystem, convincing the appropriate GNOME and Gtk developers to cooperate was probably a coffee break away :-P. Meanwhile Qt would already need to add (or already had, not sure when it was added) support for scaling/HiDPI anyway for Windows and macOS, so the infrastructure was there.
The current situation is that Qt, currently, supports the #6 i mentioned above since it can be implemented without needing support from window managers, desktop compositors or specifying new protocols (something that seems to be much harder than it should be - e.g. AFAIK Cinnamon implemented a very trivial X attribute for displaying a percentage for windows in a taskbar/icon overlay -think of download percentage- but despite the developers' attempt to have others adopt it, i do not think it saw much adoption). But this is really the "fallback solution" when everything else is just not there, it is not the ideal one.
That said, from a technical perspective there is nothing theoretically stopping Xorg desktop environments having top-notch robust HiDPI support. What blocks everything is convincing the developers of the various components of the desktop stack to cooperate, implement and support it.
/me wonders if OP has been paying attention to how "consensus building" actually ends up working in the Wayland world
> With Wayland since everything was done from scratch, there were less people that needed to be convinced to cooperate - and in practice since Wayland originated from RedHat and the GNOME ecosystem, convincing the appropriate GNOME and Gtk developers to cooperate was probably a coffee break away...
/me realizes that the answer is "No. Not really."
To be less droll:
1) Through xrandr, even Window Maker provides the data required for an application to know the properties of the monitor that >50% of each of its windows is on. Given how much nicer XRandR was than Xinerama, WMs that cared about multihead moved over to it fairly quickly.
2) I'm certain that not every WM provides the information required for screen-DPI- and screen-scaling-aware programs to scale as desired. But the "Wayland is a lightweight protocol that makes few policy decisions" motto turns out to mean that, for most decisions users care about, each Wayland WM (or whatever the Wayland terminology for the equivalent is) needs to re-make and reimplement those decisions. Feature fragmentation has been bad. So, no: if you're not going to hold Wayland to the "every WM must implement all the features" standard, then you can't demand that of Xorg WMs either.
3) You happened to mention the two things that are needed for Xorg to support both HiDPI and non-integer scaling: GUI drawing library support, and a common protocol for setting and retrieving user-driven adjustments to the "natural" rendering scale given the display DPI. XRandR [0] has always, or effectively always, provided the information required for GUI toolkits to scale their widgets according to a screen's DPI. And the XSETTINGS protocol [1] is used to store the user-commanded scaling adjustment. Glancing at the release dates for those two things, they either substantially predate or came out very, very shortly after Wayland's initial release.
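To make that concrete, the per-output data has been queryable for ages (output name and numbers below are invented for the example):

  xrandr --query | grep " connected"
  #   eDP-1 connected primary 2560x1440+0+0 ... 310mm x 170mm
  # 2560 px / (310 mm / 25.4) ≈ 210 DPI - plenty for a toolkit or WM to derive a scale factor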
Weird. It's almost as if we were waiting on the GUI toolkits to use what Xorg had been providing them for ages.
Anyway. Check footnote 1 in the comment you replied to for the on-the-ground details on GUI toolkit render scaling on Xorg from an end-user's perspective.
[0] adopted no later than 2007
[1] first proposed in 2001 and adopted no later than 2009 (though, if I cared to spend more than a few minutes on the search, I expect I'd find that it was adopted much earlier)
What I described was about Wayland, GNOME and Gtk specifically, not the entire "Wayland world". Wayland has been a mess that could have been completely avoided if people had just tried to fix the issues with Xorg instead of falsely claiming that Xorg cannot be fixed; we'd have had proper support for HiDPI, HDR, mixed-refresh-rate configurations with compositing and all sorts of other nice things at least a decade ago, instead of a pointless schism in an already tiny Linux desktop ecosystem. But ultimately you cannot control what other people spend their time on.
1) Window Maker does not provide anything to any application; if applications need such information, they have to use the extension APIs themselves. If there were an agreed-upon protocol for window managers to notify applications to scale themselves, then Window Maker could implement it. But no such protocol exists.
2) Window managers do not provide any information there at all, since there is no such support. And yes, all Wayland compositors do need to implement that stuff, but because it started from a clean slate and Wayland compositors had to be written from scratch anyway, it was easier to convince developers to do it: they self-selected by going through the effort of making a Wayland compositor in the first place. As I wrote in my original post, the issue here isn't whether something could be written, but convincing the people who work on the projects. It is mainly a social issue, not a technical one.
3) Yes, without any other support in place, GUI toolkits and other applications can use the information exposed by RandR to implement scaling themselves, but, as I already wrote, this is a fallback solution for when the rest of what I describe is not there. It is far from robust support: it ignores things like custom scaling options, handling windows moving between outputs, and support for applications that do not do scaling themselves (which is many of them), among other things.
All of the above are things I already addressed in my original message, BTW, and again: the issue is not technical but social/political. It is about convincing people to cooperate, not whether something is technically possible (and let's be honest, it isn't like Xorg's code is written in stone; if something is currently impossible, the code could be extended to make it possible).
I did have some earlier snags which all went away after switching from Wayland session to X11 session.
/s, in case that wasn't blatantly obvious...
(Nah, that wording is but a generic legalese sounding way of casting a huge net to get all sorts of fish.)
Yet ChatGPT is not responsible for having led to suicides.
If you want nanny states and nanny corps and authoritarianism through and through (remember the COVID policies?), you'll get this more and more.
You either start rolling back all that BS in the name of freedom (no, not freedumbs) or you can't really complain.
If you hate Windows just use Linux, BSD or whatever.
I'm sick of all the "Windows 11 sucks" folks that yet keep using Windows.
Just boot your laptop from a Linux ISO and you've got the best way to bypass Windows 11.
Boycott Microsoft and everything it touches.
(Yeah, it's Nvidia; no, I didn't do my homework, and I bought Nvidia for a Linux PC.)
While it may make sense for others, I don't find a system that can lock up for 11 hours for updates suitable for anything other than occasional gaming. But why shouldn't I use it for that? I already think twice before getting any game that doesn't run on Linux, and I gave EA WRC Rally a downvote after they rug-pulled Linux users. (A game that ran on Linux at the beginning got borked by anticheat. A racing game, where the worst you could do is cheat your friends out of a second on a race you all compete on.)
I guess it might be useful if you only keep it offline, but in that case you aren't playing games online, and thus you'd be fine gaming on Linux, given that the only downside is the lack of anticheat support.
Though, now that I have quite a bit of personal experience with how good Steam/Proton is for video games, I think I'll reclaim the surprisingly large amount of space that Windows is taking up.
If it has to be Windows, just remove all the shit from Win11 yourself: set it to unattended installation with a local account, remove the hardware-requirements barrier while you're at it, remove the games, controller add-ons, virus scanner and whatever else you'd like (the Windows Store?), and create your own LTSC [1].
This isn't a solution to the problem, and it misses the point of the whole argument. But if it has to be Windows, I would recommend trying it.
[1] ntlite.com
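On the "remove the hardware-requirements barrier" point: outside of NTLite-style image surgery, the widely circulated trick during a normal Setup run is the LabConfig registry keys, set from the Shift+F10 command prompt (unsupported by Microsoft, obviously):

  reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1
  reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1
  reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1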
I wonder if this is because Windows 11 has been used in critical systems to a certain extent?
The whole win 11 thing is embarrassing.
They are this far in, pushing features nobody asked for, and is it any wonder the numbers blow chunks?
None.
When you're dealing with full-on idiots like that "support specialist" (AI?), all bets are off anyways. Might as well tell that clown that what he just said is the dumbest shit you've heard all week.
Take off the gloves and burn some bridges if you have to, the world will be better place for it.
Microsoft just wants you reliant on them. They can't tap value if you aren't integrated. Simple as.
Why is Microsoft allowed to operate in such a user hostile way?
Why aren't people like up in arms massively tanking their stock value, boycotting, reputation harming in every legal way possible en masse?
Like are people just careless and distracted 24/7?
Like surely this should just not be a thing?
I just don't understand how inhumane hostile behavior is just so rampant and like allowed to exist in our society.
It's because that's the default. Do you see any other facet of human organization which doesn't have constant hostile behavior? If it's large enough, or going on for enough time, there is abuse happening in it.
>Like are people just careless and distracted 24/7?
People just want to live their lives, on which a removed Win 11 bypass video has zero effect.
Then what have I been using and supporting it for?