Some additional examples beyond the OP:
- In the latest macOS, trying to set a custom solid color background just gives you a blinding white screen (see: https://discussions.apple.com/thread/256029958?sortBy=rank).
- GNOME removed all UI controls for setting solid color backgrounds, but still technically supports it if you manually set a bunch of config keys — which seem to randomly change between versions (see: https://www.tc3.dev/posts/2021-09-04-gnome-3-solid-color-bac...).
The pattern here seems pretty clear: a half-baked feature kept alive for niche users, rather than either properly supporting or cleanly deprecating it. Personally, I’d love to simply set an RGB value without needing to generate a custom image. But given the state of things, I’d rather have one solid, well-maintained wallpaper system than flaky background color logic that’s barely hanging on.
It also shows you which screen you're setting it for, and a toggle to set it for all screens at once.
KDE used to be the "bloated" desktop way back when (I know, pretty silly and laughable now given the current state of things).
That cemented Gnome/Mate into a lot of major distros as the primary DE, Ubuntu being the most famous.
The QT licensing situation is also a bit of a bizarre quagmire. There are certainly people that don't like KDE for more ideological reasons.
Personally, none of this bothers me and it's what I use for my personal computer. KDE is just so close to how I'm used to interacting with computers anyway, having grown up through the Win95 era. It is so close to the Windows experience you want to have.
That’s not my recollection. I believe that the non-free license you mention was the major factor, in addition to the fact that KDE was written in C++ at a time when the free software community still preferred to write software primarily in C.
GNOME was written using a free software toolkit, and it was written in C, and it was associated with the FSF.
Nowadays the major problem with KDE is that by the time it is stable, a new Qt major version gets released, and along with that it essentially gets a major rewrite, which takes years to stabilize; once it does, a new Qt version is released, and the cycle repeats.
But what things in KDE do you consider 'bloat'?
Quite to the contrary of what you assert, one of the prominent arguments against Gnome that I've been seeing time and time again in DE debates is the "dogmatic" opposition to SSDs (server-side decorations) from the Gnome project.
Doubling down on short, judge-y staccato contributes to an aggressive "I don't need to tell you" vibe that would be sassy and fun, maybe, in person. In written online comments, it just means we need to get a third comment from you before we get to anything I'm interested in (I don't particularly care what your one-word description is; I don't know you).
There are a lot of people that bitch about it, but that is only because it is the most popular one and they think that the Linux desktop is a zero-sum game. Most of them don't use Gnome, and the arguments tend to be parroting somebody else, out of date for years, or just kinda made-up conspiracy theories about Redhat or something.
The whole thing is pretty confused. Gnome being popular doesn't make KDE not be popular.
I'm not trying to be mean here, I'm just fascinated by what people will consider to be a waste of time.
This, sometimes it's enough for things to just work as you expect so you can do your actual work. I don't really understand why so many people are unpaid part time evangelizers.
Something that should be a default option, or a single-tap switch in settings, turned into a chore consisting of a period of agonising disbelief, doubt, denial, search, and eventually bitter acceptance.
The best thing to do is just create an image the same size as your screen resolution in your favorite image editor and fill it with actual true black. Save it as a PNG and send it to your phone.
As for "same image as your screen resolution": a screenshot sounds like exactly the right fit here. As a challenge, I tried making a screenshot black using the stock Samsung "Gallery" app, and it seems that repeating Edit - Brightness: -100 - Save as copy, then opening the copy and going back to Edit, can do the trick as well after four or so copies. (Copies, because there is apparently no way to re-apply the same effect to the same photo.)
There's the peak GNOME experience.
At some point they decided 2x (3x?) scaling was enough for anyone and took away 4x, I didn't notice because I was already set at 4x and it continued working. Somewhat later they took away the backend, and then my system crashed with no error message immediately at login.
After much troubleshooting that replaced a movie night, I inquired about the functionality being removed and was berated for using an undocumented/unsupported feature (because I had continued using it after the interface to set it had been removed, without my knowledge).
I'll never use gnome again if I can help it.
I suspect this is related to the System Preferences rebuild, since it's worked fine for 20+ years of OS X before that.
However, I've noticed, there's not much point in changing it. Showing the desktop is a waste of screen real estate because of the generations of abuse of desktop shortcuts. Even if you are careful, it becomes a cluttered wasteland (on Windows anyways). I just learned to never use the desktop for anything and always have windows up on various monitors.
# Disable icons on Desktop
defaults write com.apple.finder CreateDesktop false
They've also disabled auto-save if you don't have the documents backed up by OneDrive, which is the most egregious for me.
https://thetechmentors.com/f12-a-better-alternative-to-the-s...
The situation is better these days, with windows store apps. Still, I developed the habit of just never using the desktop in the XP days when things were really bad.
There was a war over your eyeballs, which had shady software vendors warring over desktop space, start menu space, taskbar space, even fucking file associations. I recall for a little while RealPlayer and Windows Media Player used to yank file associations back and forth each time they ran, even if you tried to make them stop.
https://discussions.apple.com/thread/256028948?answerId=2613...
I also ran into it with 15.4.0 and worked around it by creating an image that's the solid color I like to use. It turns out that the system-supplied solid color options are themselves just 128x128 pixel PNG images too:
https://discussions.apple.com/thread/256028948?answerId=2613...
A non-textured triangle will be faster than a textured one as it can just return a literal in the pixel shader instead of wasting time sampling a texture for each pixel.
However a single texture sample is so cheap on modern hardware that a specialized path for solid colors wouldn't be worth the complexity in a 2D setting. It's fast enough.
Just offer a background color picker and have it generate a 1x1 PNG for the color selected. Just like that, you can use the image-background code path for everything, and the code for generating the image almost certainly exists in the OS already. Maybe add some metadata to the generated image to filter it out of the image picker, and you're done.
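A minimal sketch of that "1x1 PNG for a solid color" idea in Python, assuming Pillow is available (the file name and color are just examples):

    from PIL import Image

    def make_solid_background(rgb, path="solid_background.png"):
        # A 1x1 image of the chosen color; any wallpaper code path that
        # scales images to fit the screen will stretch it into a solid fill.
        Image.new("RGB", (1, 1), rgb).save(path, format="PNG")

    make_solid_background((0, 128, 128))  # teal-ish, as an example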
Quite a capable machine for my uses.
Not supported in Windows 11. Maybe with some additional config? Can't be bothered with something that might turn out to be fragile and need more maintenance than I can be bothered with. That's a young man's game.
Ok, I’m about due to give Linux another tickle anyways.
Hmm, which distro… can always give a few a spin.
Keep it simple, Pop!_OS.
Installed fast, no issues, runs fine, stable. Seems entirely usable.
Customisations? Nah, keep it simple.
I’ll set a black background though.
Nope.
Switching to upstream (Ubuntu) with KDE would probably be more your speed.
win11 ltsc works perfectly on it. With a solid background :D
Teams not loading due to security issues, but notifications coming through with full content of messages. Ability to type a handful of words in cloud version of Word (or paste full documents) before security check catches up and requires me to set up a sensitivity label. Etc.
It mostly tells me about MS doing very bad software architecture for web apps, though apparently the desktop apps are not immune to it either.
(Teams makes this Byzantine in the extreme to accomplish as you have to go find the folder it drops all shared files in to gain access to manage access settings. But it does allow you to retro change access even for things shared in Teams)
From the outside looking in, it's the age old organizational problem when there are no good synergies between customer experience and development teams.
Some customers will push back and have enough leverage to get an exception, but the default answer will be that this can't be disabled. You'll have some sales engineer challenged about the product behavior as part of an RFP and they'll try to convince you that nothing is leaked while knowing the financial opportunity would be much larger with these customers, if there was more concern for the customer.
If so, pasting that link into Slack may reveal its first page.
However, when you're inside a note (which, BTW, can also be converted into checkboxes, aka very simple TODOs), Google Keep, the note-taking app from search giant Google, doesn't have search functionality within that specific note.
Besides the many small bugs, sometimes the missing functionality in Google apps is mind boggling.
https://support.microsoft.com/en-us/office/find-and-replace-...
I moved to obsidian with self hosted livesync and it's such a breath of fresh air. It actually does what it says. There's tons of plugins. The sync never drops a beat.
It's typical of Microsoft's solutions to be B-quality at best. An AAA-level solution is always better. Microsoft's goal is only to not be bad enough for people to drop the "included with your subscription" product and go for something actually fit for purpose.
Unfortunately I'm stuck with OneNote at work. I guess they want me to be ineffective and inefficient.
It's just annoying because I don't have any use for "office" applications but a personal knowledgebase is something that's super important to me.
The problem is, I don't really trust most small vendors...
1. All teams will henceforth expose their data and functionality through service interfaces.
2. Teams must communicate with each other through these interfaces.
3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter.
5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6. Anyone who doesn’t do this will be fired.
7. Thank you; have a nice day!
Number 7 is a joke, etc.
Those photos may have already been uploaded to google's web servers (from my understanding, this happens with google photos by default?), from which a preview has been generated. The permission is at the android app level, and is requested at some point to ensure that the permission model is respected from the POV of the user. I can imagine the permission request being out of sync!
Makes me think it must have something to do with their corporate culture and how they work, since their developers, to my knowledge, never had a reputation for being weak. Maybe it's just because they have such a gigantic user base for everything they do, that 80% is the best business value. Though I doubt that a bit, as a third party developer, it makes me avoid their products where I can.
I was thinking about something similar recently. 80% of features take 20% of the time. For the hobby stuff I program, I want to make the best use of my time, so I skip the last 20% of features to make more room for stuff that matters.
I call it PDD: Pareto-Driven Development. Looks like you think Microsoft is similar.
Like in this article alone, that's a change that should never be made. It's an understandable bug but it's indicative of carelessness, both managerial (how does this hold up to code review?) and QA (why is no one asking why login splashes take 30 seconds?).
Only when they "release" it. And then they start again from 10%. /s
I can understand that there's a lot of old/legacy third party stuff that would rely on the .cpl capabilities so you can't rip it out in one go, but "it's a lot of work" seems like a poor excuse for a company with resources like MS to not at least get all their own moved across with equivalents. They could even leave the old deprecated ones available in the background for anything legacy that needs to interact with them.
I'm certain that some idiotic change just like the ones suggested in the article destroyed this perfectly working feature, and nobody is bothered to fix it because it would impact the latest harebrained scheme to make my 10 year laptop do AI in its sleep and suggest helpful ads about the things it couldn't help overhear while "sleeping".
Most of my computers and friends computers have been ASUS though, maybe that is a connection.
(Windows user since 3.11 but I don't think those had sleep modes :-)
That said, if I remove the joystick from the picture, my ASUS-based Am4 system sleeps just fine.
I ended up getting one of those USB hubs with switches to disable the ports.
That was a saga in itself: it had a firmware issue where 3 of the 7 ports didn't work right. Apparently it's two 4-port hubs in a trenchcoat.
They had a firmware update but apparently it had a virus in the loader app so they pulled it, you needed to contact support to get the new firmware. Their support insisted on sending me a replacement hub instead, which of course didn't work. And then a second time, I'm pretty sure, and only then did they send me a new firmware update which solved the issue.
Well. Anyway. Those hubs are theoretically good so you can avoid the wear on the USB connectors.
But I guess the theme of bad QA is a common one. This wasn't an Amazon alphabet soup brand Chinese special, either.
That one, it seems if I disable the "software device" in device manager, it stops installing itself. But how do you release and force install a companion app for a monitor that will burn it out? It comes with a 3-year warranty that covers OLED burn-in...
Still, it's a feature I will definitely turn off first thing; off means off, if I want "full black on" I will set a black screensaver.
Some threads:
https://www.dell.com/community/en/conversations/alienware-de...
https://www.dell.com/community/en/conversations/alienware-de...
https://www.dell.com/community/en/conversations/alienware-de...
https://www.reddit.com/r/Alienware/comments/s8m4yx/alienware...
https://www.reddit.com/r/Alienware/comments/uon0kq/alienware...
Also worth noting that the helpless suggestions to "why, just turn the monitor off" would prevent the OLED maintenance from running (since that runs automatically when the monitor goes to sleep)
It can't just be the hardware, I think.
So I can definitely see why a driver installer is very nervous about people "shutting down" their devices to reload the system. Why a full restart would be required in the 3rd millennium to load a driver for a USB input device is of course another Microsoft philosophical moment.
Back then Windows would default to a crap version of sleep but you could still disable it in the BIOS and by tweaking a couple of other settings, thus forcing it to use proper sleep. I’m pretty sure I wrote a lengthy response on HN about this including the specifics at the time.
That worked well until I got a new Dell laptop that removed support for the good sleep mode entirely.
So then I’d make sure to always leave the machine plugged in and switched on overnight before any travel… which is how I discovered that the machine had a bug where sometimes it would fail to charge whilst plugged in and switched on but not awake, so occasionally I’d still end up with a dead laptop on the train.
So then I’d start up the machine as soon as I got out of bed so it’d get at least 30 - 45 minutes of charging with not much load on it whilst I was getting ready to leave.
I absolutely hate Dell.
For my own use I’ve been buying Apple laptops since 2011 and, although they went through a grim period meaning I kept a machine from 2015 to 2024, I never had this sort of nonsense with them.
Now you have to guess whether the software has really loaded or not before you start using it.
I could understand it if your device needed special access (VPN to prod etc), but you usually can't do that either from the dev machines - and need to first connect to a virtual machine (via browser or rdp) to be able to do that...
- umask was not 022, so installing pretty much anything with `sudo make install` would fail, as would some software.
- Running nmap caused an alert to phone home to IT, who would nag me on slack.
- Opening well-known ports to my LAN (like 22) caused an alert to IT.
- An "agent" program ran constantly, often using 100% of a CPU. The system overall had about 45 minutes of battery life.
- Various system settings were overridden by a sysctl.d/ file that was regenerated by the agent at boot. Fortunately I know how ASCII sorting works and could produce a file that overrode the overrides.
- Various capabilities (CAP_...) were disabled for my sudoer user.
It wasn't that bad, and IT was helpful, but it was a persistent annoyance. Maybe what happened is somebody googled "how to harden Linux" and then just made everything on the first page of results company policy.
and it has migrated to web apps today - where doing something causes the UI to show a loading/progress wheel, but it takes forever in actuality (or on start up of the webpage, you get a blank screen with placeholder bars/blurred color images etc).
And this is the so-called responsive design...
I’m not sure if this was meant to be a pun, but “responsive design” has nothing to do with how quickly a UI loads. It’s about adapting to different screen sizes.
I would imagine that to be the case for a lot of webapps out there.
Ding! Ding! Ding! We got a winner!
Yeah, maybe we could expect machines which got 40 years of Moore's law to give you an experience at least as snappy as what you got on DOS apps.
It's honestly very sad.
It is almost like they realized users are happy to wait 30-60 seconds for an app to open in 2001 and kept that expectation even as the task remained the same and computers got an order of magnitude more powerful in that time.
Loading apps on it definitely did not take one second. The prevalence of splash screens was a testament to that. Practically every app had one whereas today they're rare. Even browsers had multi-second splash screens back then. Microsoft was frequently suspected to be cheating because their apps started so fast you could only see the splash for a second or two, and nobody could work out how they did it. In reality they had written a custom linker that minimized the number of disk seeks required, and everything was disk seek constrained so that made a huge difference.
Delphi apps were easier to create than if you used Visual C++/MFC, but compared to modern tooling it wasn't that good. I say that as someone who grew up with Delphi. Things have got better. In particular they got a lot more reliable. Software back then crashed all the time.
I remember Windows Explorer opening swiftly back in the day (fileman even faster - https://github.com/microsoft/winfile now archived sadly) and today's Explorer experience drives me insane as to how slow it is. I have even disabled most linked-in menu items as the evaluation of these makes Explorer take even longer to load; I don't see why it can't be less than 1 second.
Anyway, I do recall Netscape taking a long time to load but then I did only have a 486 DX2 66MHz and 64MB of RAM.... The disk churning did take a long time, now you remind me...
I think using wxWidgets on Windows and Mac was quite nice when I did that (with wxFormBuilder); C++ development on Windows using native toolkits is foreign to me today as it all looks a mess from Microsoft unless I have misunderstood.
In any case, I can't see why programs are so sluggish and slow these days. I don't understand why colossal JS toolkits are needed for websites and why the average website size has grown significantly. It's like people have forgotten how to write good speedy software.
Why is my software slow? Partly because the task is inherently intensive. Partly because I use an old Intel MacBook that throttles itself like crazy after too much CPU load is applied. Partly because I'm testing on Windows which has very slow file I/O and so any app that does lots of file I/O suffers. And partly because it's a JVM app which doesn't use any startup time optimizations.
But mostly it's because nobody seems to care. People complain to me about various things, performance isn't one of them. Why don't I fix it anyway? Because my time is super limited and there's always a higher priority. Bug fixes come first, features second, optimizations last. They just don't matter. Also: optimizations that increase the risk of bugs are a bad idea, because people forgive poor performance but not bugs, so even doing an optimization at all can be a bad idea.
Over the years hardware gave us much better performance and we chose to spend all of it on things like features, reducing the bug count, upping complexity (especially visual), simplifying deployment, portability, running software over the network, thin laptops and other nice things that we didn't have on Windows 98. Maybe AI will reduce the cost of software enough that performance becomes more of a priority, but probably not. We'll just spend it all on more features and bug fixes.
which is fine, and you are doing the absolutely correct thing regarding fixing what's being complained about.
But the complaints I keep hearing (and having myself) are that most apps are quite slow, and have been growing increasingly slower over time as updates arrive - on mobile phones in particular.
Win 9x era software relied entirely on files for sharing, and there was no real notion of conflict resolution or collaboration beyond that. If you were lucky the program would hold a Windows file lock on a shared drive exported over a LAN using SMB and so anyone else who tried to edit a file whilst you'd gone to lunch would get a locking error. Reference data was updated every couple of years when you bought a new version of the app.
This was unworkable for anything but the simplest and tiniest of apps, hence the continued popularity of mainframe terminals well into this era. And the meaning of "app" was different: it almost always meant productivity app on Win 9x, whereas today it almost always means a frontend to a business service.
Performance of apps over the network can be astoundingly great when some care is taken, but it will never be as snappy as something that's running purely locally, written in C++ and which doesn't care about portability, bug count or feature velocity.
There are ways to make things faster and win back some of that performance, in particular with better DALs on the server side, but we can't go backwards to the Win 9x way of doing things.
I disagree that those apps were dog slow at the time. They were fairly responsive, in my experience. It's true that loading took longer (thanks to not having SSDs yet), but once the app was up it was fast. Today many apps are slow because software companies don't care about the user experience, they just put out slop that is barely good enough to buy.
People underestimate how slow the network is, and put a network between the app and its logic to make the app itself a thin HTTP client and "application" a mess of networked servers in the cloud.
The network is your enemy, but people treat it like reading and writing to disk because it happens to be faster at their desk when they test.
Latency.
The issue the parent mentions is one of latency, if you’re in the EU and your application server is in us-east-1 then you’re likely staring at a RTT of 200ms.
The Pi under your desk from NY? 30ms, even less if it's a local test harness running in Docker on your laptop.
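To make the difference concrete (RTTs from the comment above; the request count is a hypothetical example), a screen that chains ten sequential calls pays ten full round trips:

    rtt_remote = 0.200      # seconds, EU client to us-east-1
    rtt_local = 0.030       # seconds, nearby server or local test harness
    sequential_calls = 10   # hypothetical screen that chains 10 requests

    print(f"remote: {sequential_calls * rtt_remote:.1f}s of pure latency")  # 2.0s
    print(f"local:  {sequential_calls * rtt_local:.2f}s")                   # 0.30s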
I know it's very simple, I know there isn't a lot of media (and definitely no tracking or ads), but it shows what could be possible on the internet. It's just that nobody cares.
[1] Yes, Hacker News is also quite good in terms of loading speed.
It's the right thing to do to load resources asynchronously in parallel, but you shouldn't load the interface piecemeal. Even on web browsers.
I'd much rather wait for an interface to be reliable than have it interactive immediately but having to make a guess about its state.
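A minimal sketch of that principle, using Python asyncio for concreteness (the loader functions are hypothetical stand-ins for whatever the app actually fetches): kick everything off in parallel, but render exactly once, after all of it has arrived.

    import asyncio

    async def load_profile():
        await asyncio.sleep(0.2)        # simulated network call
        return {"name": "example"}

    async def load_notifications():
        await asyncio.sleep(0.5)
        return []

    async def load_settings():
        await asyncio.sleep(0.1)
        return {"theme": "dark"}

    def render(profile, notifications, settings):
        # One render, with complete data: no half-populated interface to guess at.
        print("UI ready:", profile, len(notifications), settings)

    async def start_app():
        profile, notifications, settings = await asyncio.gather(
            load_profile(), load_notifications(), load_settings()
        )
        render(profile, notifications, settings)

    asyncio.run(start_app())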
I've learned to use default configurations pretty much everywhere. It's far too much of a hassle to maintain customizations, so it's easiest to just not care. The exception is my ~50 lines of VS Code settings I have sync'd in a mysterious file somewhere that I've never seen, presumably on github's servers, but not anywhere I can see?
Is it? The vast majority of the time, I change settings/set things up the way I want, and then... leave them for literally years. Hell, I can directly restore a backup I have of Sublime Text from years ago and my customizations will work.
Somewhere along the way I lost interest in customizing the OS. These days I routinely switch between MacOS, Windows and various Linux flavors on lots of computers. The only thing I may customize is I write my .vimrc from memory.
On my Android phones, I change the wallpaper and I disable animations. Otherwise, stock everything.
Now that I think about it, it can't be the time saved, surely I waste more time on HN. It likely correlates more with using computers for work as opposed to for fun and learning. Even the learning I do these days is rather stressful - if I can steal an hour or two on the weekend, I feel lucky, so spending time to customize the environment seems like a waste.
Maybe if life slows down, I'll find joy in customizing my OSes again.
On the note of programming not being fun anymore, that's exactly why I'm making my secret project that I hope to release very very soon, maybe in a week or so. I want to make programming fun again, in a similar way that pico8 did, but x100.
> I have the same history customizing everything! ... then giving up because life gets busy.
I think this might be why some people have such different experiences. I don't try to customize "everything" - just what needs to be. Like, yeah, I would expect it to be difficult to maintain random Explorer customizations. I would not expect it to be difficult to maintain customization for a popular IDE.
Too much software puts host-specific stuff in settings files (like absolute paths) or just isn't stable enough in general for it to be worth trying to maintain a portable configuration.
The hard part of maintaining a config is that there's no such thing as cost-free usage, it always takes a mental toll to change a config, to learn a new config, to remember which configs are in use and what they do, to backup configs, or at least to setup and maintain a config-auto-backup flow.
By far, the easiest mental model is just learning how everything works out of the box, and getting used to it. Then again, sometimes what people want is to do things the hard way for its own sake. That's probably part of why I kept going back to C over and over for so many years.
The oldest parts of my emacs config go back at least 30 years and I have had it in a git repo for ~15. I keep my entire .emacs.d versioned, including all third-party packages I depend on (so no fetching of anything from the cloud).
Have had to do at most minimal changes when upgrading to newer versions and with some tiny amount of work the same config still works for emacs from around version 21 to 31 (but features are of course missing in older versions).
Just your regular reminder that nix is good actually.
"I have a bug, you can get a full VM that reproduces it with 'nixos-rebuild build-vm --flake "github:user/repo#test-vm" && ./result/bin/run-*-vm'"
And the code producing that VM isn't just a binary blob that's a security nightmare, it's plain nix expressions anyone can read (basically json with functions).
And of course applying it to a new machine is a single command too.
(Would it be pedantic of me to say that I receive my fair share of bug reports on nix code I maintain, and when someone sends me their entire nixosConfig the very first thing I do is punt it back with a "can you please create a minimal reproducible configuration"? :D but your point stands. I think. I like to think.)
He surely didn't use any Microsoft product. /s
Would have been easier to stick with the pixel density we had.
Oh, and we have to wait a frame to see everything because of compositing that I still don't quite understand what it's supposed to do? Something something backing store?
Yes, xrandr --scale. Works fine for everything. Even better than Windows (which, for some reason, only scales some programs, not all).
More recently, long after I stopped using Windows but still many years ago, I was reading an article about Arthur Whitney. It had a photo which seemed to be at home, maybe in a furnished garage, and in the background was a desktop computer running Windows. The only window open was a cmd.exe. I am not suggesting anything. It is just something I always remember.
Perusing some recent Microsoft documentation I noticed this:
https://learn.microsoft.com/en-us/windows/configuration/shel...
Because they made it a runtime thing - "components just have to remember to do this", the code structure itself affords this bug.
There was a similar bug at Facebook years ago where the user's notification count would say you had notifications - and you click it, and there aren't any notifications. The count was updated by a different code path than the code which inserted notifications in the list, and they got out of sync. They changed the code so both the notification count and list were managed by the same part of the system, and all instances of the bug went away forever.
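One way to make that class of bug structurally impossible (a sketch of the idea, not a claim about how Facebook actually implemented it) is to keep the list as the single source of truth and derive the count from it:

    class Notifications:
        """The list is the only state; the badge count is computed from it,
        so the two can never disagree."""

        def __init__(self):
            self._items = []

        def add(self, payload):
            self._items.append({"payload": payload, "read": False})

        def unread_count(self):
            return sum(1 for n in self._items if not n["read"])

        def mark_all_read(self):
            for n in self._items:
                n["read"] = True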
if (request.authenticationData) {
    ok := validate(etc);
    if (!ok) {
        return authenticationFailure;
    }
}
// Bug: when request.authenticationData is absent, this whole block is skipped
// and execution falls through as if authentication had succeeded.
Turns out the same meme spans decades.

    void doFoo(PermissionToDoFoo permission, ...) { ... }

and then, the only way to call it is through something like

    from request import getAuth, respond
    // Maybe<AuthenticationData> getAuth(Request request)
    // void respond(String response)

    from permissions import askForPermissionToDoFoo
    // Maybe<PermissionToDoFoo> askForPermissionToDoFoo(AuthenticationData auth)

    response =
        try
            auth <- getAuth(request)
            permission <- askForPermissionToDoFoo(auth)
            doFoo(permission)
            "Success!"
        fail
            "Oopsie!"
    respond(response)

It becomes impossible to represent the invalid state of doing Foo without permission. [1]

[1] https://en.wikipedia.org/wiki/Midori_(operating_system)
Link to the patch fixing it: https://github.com/kubernetes/kubernetes/commit/7fef0a4f6a44...
Of course, we'd already fixed other issues like Kubelet listening on a secondary debug port with no authentication. Those problems stemmed from its origins as a make-it-possible hacker project and it took a while to pivot it to something usable in an enterprise.
If there is no authenticationData then the if (!ok) check is never run and the code continues execution as if it were authenticated.
Correct. The only thing that changed is the number of levels of abstraction.
But on my Win10 it stopped working idk why, so I wrote a script to download Bing Image of the Day instead: https://blog.est.im/2025/stdout-03
We all like to think we have picked up habits that immunize us from certain kinds of error but large software systems are complex and bugs happen.
The number of people in here taking ‘Raymond Chen tells an anecdote about the time a dumb bug shipped in Windows and was fixed two weeks later’ as an indictment of Microsoft’s engineering culture is frankly embarrassing. Trading war stories is how we all get better.
It would be better for us all if culturally, the reaction to a developer telling a story of how they once shipped a stupid bug were for everyone to reply with examples of worse stuff they’ve done themselves, not to smugly nod and say ‘ah yes, I am too smart to make such a mistake’.
I didn't say I'm immune to doing this myself, nor did I condemn anything about the particular scenario in the blog. My pain is in trying to articulate why some ways are better when any code that works is in some sense just fine.
>> We all like to think we have picked up habits that immunize us from certain kinds of error but large software systems are complex and bugs happen.
We sure do, although "immunize" is too strong. We try to minimize the likelihood of these kinds of things. Experience is valuable and sometimes it's hard to articulate why.
It still feels more like craftsmanship than actual engineering. A lot of the time it’s more like how a carpenter learns to use certain tools in certain ways because it’s safer or less prone to error, than how an engineer knows how constructing a particular truss ensures particular loads are distributed in particular ways.
And one of the best tools we have for developing these skills is to learn from the mistakes others have made.
So I agree - I think your instinct here was to look at this error and try to think whether you have engineering heuristics already that would make you unlikely to fall into this error, or do you need to adjust your approach to avoid making the same mistake.
My criticism here was more directed to others in the thread who seem to see this more as an opportunity to say ‘yeah, Windows was always buggy’ rather than to see it as an example of a way things can fail that they need to beware of.
So, you're going to implement bitmapped backgrounds. Naturally, after writing the code, you test with and without bitmapped backgrounds and make sure both cases work. Right?
https://randomascii.wordpress.com/2024/10/01/life-death-and-...
If I had a dollar for every minute of my life I spent troubleshooting random group policy quirks during my previous life as a sysadmin...
> Personally, I use a solid color background. It was the default in Windows 95,¹ and I’ve stuck with that bluish-green background color ever since.
My thoughts exactly, but I think it goes back to the Mac LC's we used in a school computer lab, and the palette of colors you could have with 16-bit graphics was so vast compared to the 16 color PC's I was used to.
Plus, you always have so much stuff open you're never going to see your wallpaper anyway. That's what screensavers are (were) for, rotating through a folder full of images.
A similar (slightly older) laptop I own boots from fully off to the KDE desktop in 25 seconds total including typing my password.
- Half of each boot time was wasted on a copilot 360 dialog. On every fucking boot, with no copilot and no office installed. Or rather, copilot installed itself without notice and started to spam me
- In several places the OS would show me "news" like death messages, war updates and economy details. Definitely far from a productive environment and honestly heavily triggering. I don't read news anywhere, but my PC is full of it and there is no option to disable it? What about kids?
- I have updates or a broken system about every second time I boot the PC. I know it's because I just cut the power, but I hate when it asks 3 times if I want to actually shut down (and then still breaks it, or never actually shuts down)
- I constantly end up in a variety of login screens that want me to log in to a Microsoft account I don't have and don't want
- There are soooo many ads. I've been on Linux for years; instead of traditional TV I almost always stream with an ad blocker. The country I live in isn't plastered with ads either. But this shithole of an operating system is. It literally pops up ad notifications above apps by default.
If anyone wonders, most problems were solved with "ShutUp10", others with ChatGPT and regedit. It was actually pretty hard when you have no idea about this OS and its dark patterns.
On my Linux machines I don't even change the wallpaper, but Windows defaults are unbearable and outright productivity killers.
I think they're trying to emulate Apple who has had stocks integration by default for years, including being alongside the other pre-installed apps like SMS and Mail on the first iPhone. I imagine Apple did it to cement themselves as a high-class lifestyle brand, even though I'm sure there was never a time where most iPhone users were doing a lot of day trading.
I wonder what percentage of Windows users rely on the stock ticker in the start menu though...
Their other distributions are very good as well, especially for Windows XP because they bundle a lot of important drivers for old software to work correctly
Seeing how that complicated if-then logic is just too stiff a challenge to your average developer, we should probably just dispense with it.
In test and CI we had this set to a very low number. In acceptance (manual testing, smoke testing) to a very high number.
This was useful because it showed three things:
- Network- and services- configuration bugs, would immediately give a crash and thus failing test. E.g. firewalls, wrong hosts, broken URIs etc.
- Slow services would cause flickering tests. Almost always a sign that some service/configuration/component had performance problems or was misconfigured itself. Quick fix would be to increase the timeout, but re-thinking the service - e.g. replace with a mock if we couldn't control it, or fixing its performance issues, the proper- and often not that hard- fix. Or re-thinking the use of the service, e.g. by pushing it to async or a job queue or such another fix.
- Stakeholders going through the smoke-test and acceptance test would inevitably report "it's really slow" showing the same issues as above but in a different context and with "real" services like some external PSP, or SMTP.
It was really a very small change: just some wrappers around http-calls and other network calls, in which this config was used, next to a hard rule to never use "native" clients/libs in the code but always our abstraction. This then turned out to offer so much more benefits than just this timeout: error reporting, debugging, decoupling.
It wasn't javascript (Ruby, some Python, some Typescript) but in JS it would be as easy as `function fooFetch(resource, options) { return fetch(resource, options) }` from day one, then slowly extended and improved with said logging, reporting, defaults etc.
I've since always introduced such "anti-corruption" layers (facades, proxy, ports/adapters) very early on, because once you have "requests.get("http://example.com/foo/bar")' all throughout your python code, there's no way to ever migrate away from "requests" if (when!) it gets deprecated, or to add said timeout throughout the code. It's really a tiny task to add my own file/module that simply imports "requests" and then calls it on day one, and then use that instead.
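A minimal sketch of such a wrapper module in Python, assuming `requests` and an environment-driven default timeout (module and variable names are illustrative):

    # http_client.py - the only file in the codebase that imports `requests`.
    import os
    import requests

    # Low in test/CI, high in acceptance; configured in exactly one place.
    DEFAULT_TIMEOUT = float(os.environ.get("HTTP_TIMEOUT_SECONDS", "5"))

    def get(url, **kwargs):
        kwargs.setdefault("timeout", DEFAULT_TIMEOUT)
        # Central spot to later add logging, retries, error reporting, mocks...
        return requests.get(url, **kwargs)

    def post(url, **kwargs):
        kwargs.setdefault("timeout", DEFAULT_TIMEOUT)
        return requests.post(url, **kwargs)

Call sites then use this module instead of `requests` directly, so changing the default timeout, or migrating off `requests` entirely, becomes a one-file change.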
One pattern I've had success with is using handles that need to be returned. If you never grab a handle, you don't have to worry about returning it. Seems to work better than the fire-and-wait-for-side-effects approach.
Typical response is "Well it should just work anyway!". Which is theoretically true -- the worst kind of true.
Just yesterday, I ran into a bizarre bug on Windows where the mouse cursor would move every time I pressed the arrow keys—almost like I was controlling the mouse with the keyboard. It drove me nuts. I checked all the usual mouse and keyboard settings, but everything looked normal. At one point, I even wondered if my machine had been infected by a virus.
Desperate, I Googled "mouse pointer moving on arrow keys". The first result had only one answer, which blamed... Microsoft Paint. I was skeptical—Paint? Really? That couldn’t possibly be it. Still, with no other leads, I gave it a shot. As it turned out, I did have Paint open in another Desktop View, where I’d been cropping a screenshot. The moment I closed it, the problem vanished. Instantly.
I still can’t believe that was the cause—and I’m a little embarrassed to admit it, even though no one was around to see it.
_____________________
1. https://superuser.com/questions/1467313/mouse-pointer-moving...
Years ago, I had a bug so bizarre I nearly convinced myself the machine was haunted. My mouse pointer started drifting—not randomly, but only when I pressed the arrow keys. Up arrow? Cursor nudged north. Down arrow? There it went again. I was convinced some accessibility setting or keyboard remap had gone haywire. I rebooted. I checked drivers. I even briefly entertained the idea that my codebase was cursed.
Three hours in, I realized the true culprit: MSPaint. I had opened it earlier, and because the canvas was selected, the arrow keys were actually moving the selection box—which, by delightful Windows design, also moved the mouse cursor. I wasn’t losing my mind. Just... slowly drawing rectangles in the background every time I hit an arrow key.
I closed MSPaint, and poof—my “haunting” ended. I haven’t trusted that application since. Great for pixel art, less great for your sanity.
------------
You and ChatGPT sound identical.