Meanwhile, build data centers at incredible scale to run AI and force it onto those same consumers in every way possible, but never tell them how much power that wastes.
Why would they? Those expenditures and investments are already priced in, and that blood is coming from that stone one way or another (or it won't), damn the expense.
Shame, on the other hand, is a renewable resource!
By controlling how usage analytics are instrumented, UX designers can weaponize the data to support removing almost any feature or information they don't personally find essential. Of course, this entirely misses the fact that power users drive word-of-mouth and adoption >10x more than lowest-common-denominator users, and also have significantly higher lifetime value because they are engaged and loyal (until you finally remove too much advanced utility). I'm all for simplicity - what I don't understand is the insistence on removing features or capabilities entirely instead of just putting them as an option in advanced settings. Different users have different preferences, and good UX design can maintain surface simplicity without trashing depth, flexibility and personalization.
Let’s be honest, implementing this would be up to a bunch of offshore contractors because corporate can’t bring itself to pay software engineers to implement this feature thoughtfully and comprehensively.
Perhaps you would like to ask Copilot for the time?
Because it's enough to look at modern Windows and things like "CPU usage spiking by hundreds of percent when opening the Start menu, because it's now written in React".
I am sure that the Power team has the understanding that user-initiated activity isn’t really anything they can control.
Once the user takes over, all tricks for efficiency go out the window. The user could run anything.
I don’t work for MS but I’m sure the Power team cares about the automatic built-in stuff that runs on schedules and timers: the things they can control. The start button doesn’t click itself.
I agree that using web tech to render things in the OS is silly, but Microsoft has been doing it in every Explorer window since 2001 with Windows XP. I don’t remember anyone complaining about that at the time except me.
> Once the user takes over, all tricks for efficiency go out the window. The user could run anything.
Erm. In the case of the start menu and task bar, where they are pretending that showing seconds is an issue, they do have control over a lot of things.
Somehow they split hairs over showing seconds, but are perfectly okay with web views in the task bar and in the start menu, and are perfectly okay with components in React Native.
> I agree that using web tech to render things in the OS is silly, but Microsoft has been doing it
Yes, they have been doing it. Somehow the "power team" that supposedly had ultimate control over what gets displayed had no issues with that
I wouldn't be surprised if there were some misalignments in goals though. E.g. if their team's goals are measured in results of typical battery life tests such as "how many hours the computer can sit idle playing a video" then they would be heavily weighted towards caring about these kinds of constantly recurring background draws instead of active usage draws.
Yeah, it's very likely "metrics-driven development", where optimizing certain metrics becomes its own isolated goal.
They send any text you type in a form to their AI cloud and hold on to it for 30 days.
Any form.
On any website.
What the actual fuck?
They had searching on the web enabled... Pretty hard to search the web using Bing without sending along a search term.
The purpose of a system is what it does, after all
But the criterion of "having access to user input" is also necessary for goofy unneeded features like showing web search results in the Start Menu, which they shove down people's throats like they do with every other feature their product team thinks is a great idea (explaining the "being covert" bit). At that point you have a complete, non-malicious explanation for the entire thing.
The reasonable thing to do then is to apply Hanlon's razor, at which point no, it's no longer reasonable to believe or portray it to be a keylogger anymore. Not essentially, not otherwise. Not only that, but the YouTuber in question made this portrayal knowing full well that it's impossible for them to actually properly demonstrate this feature doubling as a keylogger, as they have no access to the server side. They relied on people being gullible enough to simply not grasp this, and leveraged people's preexisting privacy concerns to farm views.
Having the capability to engage in crime doesn't make a criminal. Imagine if I portrayed 107M (!) of the 340M residents of the U.S. as criminals because they own a gun, despite knowing full well that gun ownership sensibilities are just fundamentally different over there.
It's like making up a bunch of rubbish when there's a hate train going on against something or somebody, just to participate, and then having all of that backfire disproportionately when the tides turn. Why make things up when reality already has plenty of bad stuff going on that one can report on? Rhetorical question, of course.
Why are we assuming good intentions from a company who for years has increased places and amounts of data it collects and tracks, and removed more and more ways to opt-out of this?
The intention of "search the web first, before searching the local computer, even if the user never asked for it" didn't come from the intent of "let's create a keylogger", but it didn't come from a good, innocent intention either.
Users of especially the home version of the OS are kind of fucked here.
Logs are always generated, and logs include some amount of data about the user, if only environmental.
It's quite plausible that the spellchecker does not store your actual typed text but rather information about the request, or that error logging includes more user-generated content than intended.
Note: I don't have any insider knowledge about their spellcheck API, but I've worked on similar systems which have similar language for little more than basic request logging.
If there are a bunch of these corrections, you know something is wrong there. IMO 30 days is quite modest, and if this is properly anonymized...
Edit: dear HN user who decided to silently downvote - you could do better by actually voicing your opinion
Sure, I'll bite. Let's address the obvious issue first: what you're saying is speculation. I can only provide my own speculation in return, and then you might or might not find it agreeable, or at least claim so either way. And there will be nothing I can do about it. I generally don't find this valuable or productive, and I did disagree with yours, hence my silent downvote.
But since you're explicitly asking for other people's speculation, here I go. Advanced "spellchecking" necessitates the use of AI, as natural languages cannot ever be fully processed using just hard-coded logic. This is not an opinion; you learn this when taking a formal languages class at university. It arises from formal logic only being able to wrangle things that abide by formal logic, which natural languages don't (else they'd be called formal languages).
The opinion, and the speculation, is that this is what the feature kicks off when it sends input data over to MS's servers for advanced "spellchecking", much like what I speculate Grammarly does too. Either that, or these services have some proprietary language engine that they'd rather keep on their own premises, because why put your moat out there if you don't strictly have to.
Technologically speaking, it might be possible to do this locally, on-device, by now. I believe that didn't use to be the case (although I do not have sources on this), and so this would be another reason why you'd send people's inputs off to the shadow realm.
Better to say what you need to say. Leave the defense for the occasion someone misunderstood what you meant to say.
I can't count the number of times on HN that I've seen responses take advantage of a poster not writing defensively to emotionally attack them in ways that absolutely break the HN guidelines, and yet go unflagged and undownvoted. And on other sites, like Reddit, it's just the norm.
The defensive writing will continue until morals improve.
Note that this is from 2023. Their legal docs, last updated in 2024, claim a bit different: https://learn.microsoft.com/en-us/legal/microsoft-edge/priva...
> By default, Microsoft Edge provides spelling and grammar checking using Microsoft Editor. When using Microsoft Editor, Microsoft Edge sends your typed text and a service token to a Microsoft cloud service over a secure HTTPS connection. The service token doesn't contain any user-identifiable information. A Microsoft cloud service then processes the text to detect spelling and grammar errors in your text. All your typed text that's sent to Microsoft is deleted immediately after processing occurs. No data is stored for any period of time.
Microsoft ordered me to buy a new computer for Win 11, so I took said kids to Microcenter, asked for a machine whose specs could play a particular steam game on Linux, returned to my mortgage, installed Ubuntu and haven't given Windows a second thought in months.
https://www.omgubuntu.co.uk/2016/01/ubuntu-online-search-fea...
And importantly, it seems that Canonical/Ubuntu is not doing something like that right now, whereas MSFT is all in on online-only mode.
Does anyone know if that is true?
Some people checked it with wireshark at the time and didn’t find anything other than what was stated. [0]
0: https://gamersnexus.net/industry/2672-geforce-experience-dat...
By default, when you implement a form that takes a password, you (the developer) are going to be using the "input" HTML element with the type "password". This element is exempt from spellchecking, so no issues there.
However, many websites also implement a temporary password reveal feature. To achieve this, one would typically change the type of the "input" element to "text" when clicking the reveal button, thereby unintentionally allowing spellchecking.
You (the developer) can explicitly mark an element as ineligible for spellchecking by setting the "spellcheck" attribute to "false", remediating this quirk: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
You (the developer) can of course also just use a different approach for implementing a password reveal feature.
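For illustration, here is a minimal sketch of the reveal-button pattern with spellchecking explicitly disabled. The element IDs and the TypeScript/DOM wiring are my own assumptions, not taken from any particular site:

    // Hypothetical markup: <input id="pw" type="password"> <button id="reveal">
    const pw = document.getElementById('pw') as HTMLInputElement;
    const revealBtn = document.getElementById('reveal') as HTMLButtonElement;

    // Mark the field ineligible for spellchecking up front, so switching
    // its type to "text" later doesn't silently opt it back in.
    pw.spellcheck = false;

    revealBtn.addEventListener('click', () => {
      pw.type = pw.type === 'password' ? 'text' : 'password';
    });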
As the MDN docs remark, this infoleak vector is known as "spelljacking".
- Don't show ads (saves power)
- Don't call home (saves power)
If, as tested, this setting makes a double-digit percentage difference, I'm glad Microsoft exposes it in the UI. I'd also be glad if they didn't do as much weird stuff on their user's devices as they do.
I'd rather they write more performant code. This feels like your car having the option to burn motor oil to show a more precise clock on the dash; you don't get kudos for adding an off-switch for that.
In keeping with the theme of the comment you're replying to, writing better-performing code and providing performance options are not mutually exclusive. Both are good ideas.
> This feels like your car having the option to burn motor oil to show a more precise clock on the dash; you don't get kudos for adding an off-switch for that.
(Sounds more like you're arguing that it should be forced off instead of being an option? Reasonable take in this case, but not the same argument.)
I think we all agree there needs to be some additional power draw for the seconds feature, but it’s unclear how much power is truly necessary vs this just being a poor implementation.
There's an ungodly amount of CPU and GPU spikes throughout the OS which make the "omg seconds" invisible in comparison
Energy isn’t free.
Even if they wrote more performant code, it would just mean less relative loss of energy to show seconds but still loss compared to not showing seconds.
I actively don't want to see seconds; the constant updating is distracting. It should be an option even if there were no energy impact. (Ditto for terminal cursor blinking).
My expectations of Microsoft software aren't terribly high. I'd say Windows is performant (ie it works about as well as I expect).
The feature is off by default in Windows 11 and was not offered in any previous non-beta Windows version.
(Have I mentioned how much I loathe Windows 11?)
The recommendations suggest, among other things, switching to power-saving mode, turning on dark mode, setting screen brightness for energy efficiency, and auto-suspending and turning the screen off after 3 minutes.
Power-saving mode saves little, at least on most laptops, but has a significant performance impact; dark mode only saves power on OLED displays (LCDs have a slight inverse effect); and both dark/light mode and screen brightness should be set based on ergonomics, not on saving three watts.
When these kinds of recommendations are given to the consumer for "lowering your carbon footprint", with a green leaf symbol for impact, while Microsoft's data centres keep spending enormous amounts of power on data analysis, I find it hard to see that as anything more than greenwashing.
Also airlines asking for extra money to offset emissions, just absolute insanity
This used to be done entirely in hardware (VGA text modes), and I believe some early GPUs had a feature to do that in graphics modes too.
It is not. This "feature" is disabled by default.
Google "manufactured outrage".
(For the record, I abhor Windows 11)
It doesn't, because that feature only just released, only works on specific new laptops and, most important: YOU HAVE TO MANUALLY ENABLE IT
That reminds me of Chrom[e|ium]'s insanely bad form suggest/autofill logic: The browser creates some sort of fuzzy hash/fingerprint of the forms you visit, and uses that with some Google black box to "crowdsource" what kinds of field-data to suggest... even when both the user and the web-designer try to stop it.
For example, imagine you're editing a list of Customers, and Chrome keeps trying to trick you into entering your own "first name" and "last name" whenever you add or edit an entry. For a while developers could stop that with autocomplete="off" and then Chromium deliberately put in code to ignore it.
I'm not sure how much of a privacy leak those form-fingerprints are, but they are presumptively shady when the developers ignore countless detailed complaints over many years in order to keep the behavior.
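For reference, the opt-out the parent refers to looks something like the sketch below (the field name and selector are made up for illustration); as noted, Chromium has at times shipped logic that disregards it for name/address-like fields anyway:

    // Hypothetical "edit customer" form field that keeps attracting the
    // browser's own profile suggestions ("first name", "last name", ...).
    const firstName =
      document.querySelector<HTMLInputElement>('#customer-first-name')!;

    // The standard opt-out described above, on both the field and its form.
    firstName.autocomplete = 'off';
    firstName.form?.setAttribute('autocomplete', 'off');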
To be fair, websites with a horrible misunderstanding of security kept on using that for "this password is important, better make sure the user is forced to enter it by hand!"
and building multiple gigawatt-consuming data centres to produce AI slop no one asked for and no one wants
powered by fossil fuels
(1) For mostly static screens, the GPU's frame buffer (and pipeline) could be empty and there would be nothing to process.
(2) The font rendering algorithm will have to run every second. These are not simple bitmap-type monospace fonts, so the calculation of the output has to be done every second, and these rendered digit combinations likely aren't cached.
(3) The thread for the system clock display can sleep for 1 minute instead of 1 second, after aligning to the start of the minute (in practice the cron-type service underlying the operating system is likely used here, for which alignment to the minute start is already a given). A minimal sketch of that alignment follows below.
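Here is that sketch in TypeScript (my own illustration, not anything from Windows internals): wake once at each minute boundary rather than every second.

    // Re-align on every wakeup so drift never accumulates.
    function scheduleMinuteTick(update: (now: Date) => void): void {
      const now = new Date();
      const msUntilNextMinute =
        60_000 - (now.getSeconds() * 1_000 + now.getMilliseconds());
      setTimeout(() => {
        update(new Date());
        scheduleMinuteTick(update);
      }, msUntilNextMinute);
    }

    // Example: repaint a minutes-only clock 60 times per hour instead of 3600.
    scheduleMinuteTick(now => console.log(now.toLocaleTimeString()));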
For #2, it needs to render around 66 characters per minute. The power loss mentioned is collectively 15% of battery life. When I open a Word doc, it shows around 1500 characters on the screen within two to three seconds without any noteworthy drop in battery level. Hence, I do not think it is font rendering.
And we are talking about 15+ hours of actual office work in a web browser plus a little bit of Python math. So add 24% on top of that... that is literally a weekend's worth of work on one charge. The current generation of laptop CPUs is insane.
On my laptop, I can check whether it (Panel Self Refresh) is enabled with:

    cat /sys/kernel/debug/dri/0000:c1:00.0/eDP-1/psr_state
    cat /sys/kernel/debug/dri/0000:c1:00.0/eDP-1/psr_capability
I'm one of those freaks who have this on and I honestly like it a lot. It gives me a feeling of certainty, grounding, and precision.
Primary driver for turning it on was their redesign of the clock flyout to be, uhh, nonexistent with Windows 11, which I'd previously use on demand for seconds information. I was also worried about this being a nonsolution and a distraction initially, but it ended up being fine.
I've met managers who literally lock the conference room door when it hits :00.
That's a little crazy in my view, but there are definitely places where it's the norm.
There are basically two ways of managing expectations around meeting times. The first is that it's acceptable for meetings to run late, so it's normal and tolerated for people to be late to their next meeting, and meetings often start something like 5 minutes late, and you try to make sure nothing really important gets discussed until 10 minutes in. The other is that it's unacceptable for meetings to start late, so people always leave the previous meeting early to make sure they have time for bathroom, emergency emails, etc. In which case important participants wind up leaving before a decision gets made, which is a whole problem of its own.
https://pubs.opengroup.org/onlinepubs/009695399/functions/st...
This is undoubtedly the answer, and I suspect that if any actual effort were made by Microsoft, the problem might be eliminated entirely. Maybe.
Most likely, the update is implemented by calling a standard stack of system calls that are completely benign in a normal application, which is already limiting power savings in various ways. But when run by itself, the call stack triggers a bunch of stuff that ends up using a bit more power.
The big question is: can this actually be optimized with some dedicated programming time? Or is the display/task bar/scheduling such a convoluted mess in Windows that updating the time every second without waking a bunch of other stuff up is impossible without a complete rewrite?
It's weird they didn't also include a simple web browser test that navigates a set of web links and scrolls the window occasionally. Just something very light at least, doesn't even have to be heavy like video playback.
Power consumption is incredibly difficult to benchmark in a meaningful way because it is extremely dependent on all the devices in the system and all the software running, and most power optimizations are workload dependent. Tons of time went into this in the Windows fundamentals team at Microsoft.
>We’re currently running the same test again (on all three laptops to account for variance), _but this time with a video playing to simulate a more active usage scenario_
I doubt these results are as extreme when something else is happening on screen. Those results should be added to this article later:
> We’re currently running the same test again on all three laptops to account for variance, but this time with a video playing to simulate a more active usage scenario. Once those results are in, we’ll update the relevant section with the new data.
This effect is likely vanishingly small, definitely overshadowed by engineering considerations like the voltage used when walking pixels through changes and such. But still, it's a physics nudge towards "yes".
It is like worrying about Carnot's limit… for a motorboat.
It would be interesting to test it over a remote desktop session where the screen on the device under test is off. That would eliminate a lot of factors related to the display. Presumably you'd see that the network traffic is either larger to begin with, or doesn't compress quite as well, giving you another reason to say "yes, but what if..."
The bigger problem is waking up the GPU and all the communication between components, which is why the computer with integrated graphics takes a smaller hit than the one with dedicated graphics. And why the ARM laptop did even better, because they were optimised for this usecase.
It does not need to scale up to high performance to do the following (see the sketch after this list):
- Read a piece of memory
- Increment by one and take the modulo
- Display a section of a texture on a section of the screen.
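As a rough illustration of how little work that is (purely my own sketch using a canvas; no claim that any OS taskbar actually works this way), the ten digit glyphs can be rendered once, and each tick only copies two small regions:

    // Pre-render the glyphs "0".."9" once into an offscreen strip.
    const DIGIT_W = 12, DIGIT_H = 20;
    const strip = document.createElement('canvas');
    strip.width = DIGIT_W * 10;
    strip.height = DIGIT_H;
    const sctx = strip.getContext('2d')!;
    sctx.font = '16px sans-serif';
    for (let d = 0; d < 10; d++) {
      sctx.fillText(String(d), d * DIGIT_W, DIGIT_H - 4);
    }

    // Per second: read the clock, split into tens/ones, blit two glyphs.
    function drawSeconds(ctx: CanvasRenderingContext2D, x: number, y: number): void {
      const s = new Date().getSeconds();
      const tens = Math.floor(s / 10), ones = s % 10;
      ctx.clearRect(x, y, DIGIT_W * 2, DIGIT_H);
      ctx.drawImage(strip, tens * DIGIT_W, 0, DIGIT_W, DIGIT_H, x, y, DIGIT_W, DIGIT_H);
      ctx.drawImage(strip, ones * DIGIT_W, 0, DIGIT_W, DIGIT_H, x + DIGIT_W, y, DIGIT_W, DIGIT_H);
    }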
You will very likely spend more time passing memory around than otherwise, and to be honest if it happens every second I would hope it stays in cache so you wouldn’t ever even bother the memory.
But the software support to take advantage of it isn't really there. There isn't a standard API to access such functionality, so the hardware compositors end up unused, and so vendors don't really put much effort into improving them.
But with proper software support and a hardware compositor with enough flexibility, you could easily put the clock in its own texture and update it with very low power consumption.
Actually, Desktop GPUs already have a single hardware sprite that gets used for moving the mouse cursor around with very little overhead (and lower latency).
I don't think any Microsoft employee is going to spend a day writing CPU scheduler code to make sure the seconds display in the task bar is being sent to the right low-power cores. I'm surprised they even bothered to port that from Windows 10 to Windows 11 to be honest.
Yes, it would require a small API addition to the desktop server (Wayland, X11, ...) to "register"/transfer/update those 10 frames and their locations whenever the user initializes or changes the fonts, font size, etc. With that, the context switch can be totally eliminated.
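Purely as a thought experiment, the shape of such an addition might look something like the TypeScript sketch below; every name here is hypothetical, since no such API exists in Wayland or X11 today:

    // Hypothetical client-side view of a compositor-managed clock sprite.
    interface ClockSpriteRegistration {
      // The ten pre-rendered digit frames, re-sent only when font/size/theme changes.
      glyphs: ImageBitmap[];
      // Screen region the compositor should update on its own timer.
      region: { x: number; y: number; width: number; height: number };
      // Which digit of the seconds value this sprite shows.
      position: 'tens' | 'ones';
    }

    // Hypothetical call: after this, the client process never wakes up just to
    // repaint the seconds; the compositor flips between the frames itself.
    declare function registerClockSprite(reg: ClockSpriteRegistration): void;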
The compromise for GNOME Terminal is that the cursor will stop blinking after a terminal has been idle for ten seconds.
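A minimal sketch of that compromise (my own illustration in TypeScript, not GNOME's code): blink on a timer, but tear the timer down entirely after ten idle seconds and restart it on the next keypress, so an idle terminal triggers no periodic repaints at all.

    const IDLE_LIMIT_MS = 10_000;
    const cursor = document.getElementById('cursor') as HTMLElement;
    let blinkTimer: ReturnType<typeof setInterval> | null = null;
    let idleTimer: ReturnType<typeof setTimeout> | null = null;
    let visible = true;

    function startBlinking(): void {
      if (blinkTimer !== null) return;
      blinkTimer = setInterval(() => {
        visible = !visible;
        cursor.style.visibility = visible ? 'visible' : 'hidden';
      }, 500);
    }

    function stopBlinking(): void {
      if (blinkTimer !== null) { clearInterval(blinkTimer); blinkTimer = null; }
      cursor.style.visibility = 'visible';   // leave a solid, non-repainting cursor
      visible = true;
    }

    document.addEventListener('keydown', () => {
      startBlinking();
      if (idleTimer !== null) clearTimeout(idleTimer);
      idleTimer = setTimeout(stopBlinking, IDLE_LIMIT_MS);
    });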
[1] The caveat is that the majority of the time the system will not be idle but doing something else possibly even more energy-intensive.