Microsoft introduced the "Office Startup Assistant" or "Office Startup Application" (osa.exe) in Office 97; it sat in the system tray and preloaded the main Office DLLs at startup: https://web.archive.org/web/20041214010329/http://support.mi...
OpenOffice.org (the predecessor of LibreOffice) copied this feature as "QuickStarter". I don't know exactly when, but no later than 2003: https://www.openoffice.org/documentation/setup_guide2/1.1.x/...
Microsoft made OSA non-default in Office 2007, and _removed_ it in Office 2010: https://learn.microsoft.com/en-us/previous-versions/office/o...
Are they now bringing it back?
If you ever tried Office 97 on a PC from 10+ years later, it's amazing how fast and lightweight it was. Instant startup, super snappy. And those apps were not lacking in features: 95% of what you need out of a desktop word processor was in Word 97.
How did we get back to this, though? We have gigabytes per second of NVMe bandwidth and stupidly fast CPUs with at least 4 cores even in low-end models. Yet a text editor takes so long to load that we need to preload it at boot... Such a frustrating field to work in.
Not that I'm that nostalgic for the old days, we would have been doing the exact same thing if we were able to get away with it. But performance restrictions meant you had no choice but to care. Modern tech has "freed" us from that concern.
About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much). An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary, were it not for a thousand times faster hardware, modern software would be utterly unusable.
https://www.computer.org/csdl/magazine/co/1995/02/r2064/13rR...
That said, as someone fairly young, I still don't think that makes it wrong or something only an old man would think. Software seems to perform exactly as well as it needs to and no more, which is why hardware advances don't make our computers run software much faster.
If software was simpler, we could afford to offer some formal guarantees of correctness. Model check protocols, verify pre and post conditions à la Dafny, etc.
There's too much change for the sake of change.
+1 to this. Like a lot of issues, I think the root is ideological, but this one in particular very clearly manifests organizationally.
The companies building everyday software are ever bigger— full of software engineers, designers and various kinds of managers who are asked to justify their generous salaries. At an individual level I'm sure there's all sorts of cases, but at a general level there's often almost no other option but to introduce change for the sake of change.
He said it's often just some new marketing exec who wants to put something on their resume; they target metrics that don't necessarily align with the company's long-term profits.
I'm sure software has a similar problem.
At a general level, I believe there are other options: changes/features need to meet some level of usage or be scrapped, out of recognition that supporting all these features makes bugs more likely, degrades performance, increases the difficulty of adding features, makes the product more difficult to use, etc.
The cynical spin I would put on this idea is that software performs as poorly as it can get away with. MSFT is feeling the need/pressure to have Office load faster, and they will try to get away with preloading it.
Otherwise, there is a strong pull towards bloat that different people will try to take credit for as features even if the cumulative experience of all these "features" is actually a worse user-experience.
The number of things a computer can do in a single thread is amazing, and computers now have a dozen or more threads to do work. If developers cared about performance, things would easily be 20x as performant as they are today.
I'm not talking about "write in assembly, duh" I'm talking about just doing things intelligently instead of naively. The developers I support often simply are not thinking about the problem they're solving and they solve the problem in the simplest way (for them) and not the simplest way for a computer.
Software is an inefficiency amplifier, because the number of developers for a piece of code is much smaller than the number of computers that run that code; how much coal has been burned solely because of shitty implementations? I'd wager that the answer is "a LOT!"
Even if you don't care about coal usage, think about how much happier your users would be if your application was suddenly 5x faster than it was previously? now think of how many customers want their software to be slow (outside of TheDailyWTF): zero.
Languages like JavaScript and Python remove you so far from the CPU and the cache that even if you were thinking about those things, you couldn't do anything about them. JS and Electron are great for developers, and horrible for users because of that amplification I described above.
I am dead tired of seeing hustle culture overtake everything in this field, and important things, to me, like quality and performance and support all fall straight down the toilet simply because executives want to release features faster.
Things like Copilot could help with this, I hope. Presumably Copilot will introduce better code into applications than a daydreaming developer would, though the existence of vibe coding probably nulls that out.
One thing AI will do quite soon is dramatically increase the amount of software that exists, and I'm concerned that it's all going to suck horribly.
Example: when VS Code came out, it was much, much faster, more responsive and stable than Visual Studio at the time. Despite being based on Electron, it apparently was much better on architecture, algorithms and multithreading than VS with its C++ and .NET legacy codebase. That really impressed me, as a C++ programmer.
Overall, it feels like folks who idealize bygone eras of computing didn't witness or have forgotten how slow Windows, VS, Office etc. used to feel in the 90s.
Let’s normalize speed over time like we do dollars, so we are talking about the same thing.
Given the enormous multiplier in CPU and storage hardware speeds and parallelism today vs. say 1995, any “slow” application then should be indistinguishable from instant today.
“Slow” in the 90’s and “slow” in 2025 are essentially different words. Using them interchangeably without clarification sweeps several orders of magnitude of speed (or inefficiency) under the rug.
The promise of computing is that what was slow in the 1960s and 1970s would be instant in 1990. And those things were instant, but those things aren’t what people did with computers anymore.
New software that did more than before, but less efficiently, came around, so everything felt the same. Developers didn’t have to focus on performance so much, so they didn’t.
Developers are lazy sacks who are held skyward because of hardware designers alone. And software developers are just getting heavier and heavier all the time, but the hardware people can’t hold them forever.
This cannot continue forever. Run software from the 1990s or 2000s on modern hardware. It is unbelievably fast.
Maybe it was slow in the 1990s, sure. I ask why we can’t (or won’t) write software that performs like that today.
The Turbo Pascal compiler could chew through hundreds of thousands of lines per minute on 1990-era hardware. We have regressed to 60+ minute C++ compile times today, on even moderately sized projects.
Debugging in visual studio used to be instant when you did things like Step Over. You could hold the Function key down and just eyeball your watch variables to see what was going on. The UI would update at 60FPS the entire time. Now if I hold down that key, the UI freezes and when I let go of the key it takes time to catch up. Useless. All so Microsoft could write the front end in dotnet. Ruin a product so it is easier to write… absolute nonsense decision.
All software is like that today. It’s all slow because developers are lazy sacks who will only do the minimum necessary so they can proceed to the next thing. I am ashamed of my industry because of things like this.
As a programmer who studied computer and electrical engineering in university, never before have I been so offended by something I one hundred percent agree with
Source: https://arstechnica.com/gadgets/2020/11/a-history-of-intel-v...
(I'm sure someone could dig up more recent graphs, but you get the idea).
In order to get more performance, your app needs to use multithreading.
RAM parallel bandwidth, increased caching levels and size, and better caching rules, instruction re-ordering, predictive branching, register optimization, vector instructions, ... there have been many advances in single thread execution since the 90's. Beyond any clock speed advances.
Why? A good portion of programs are still single-threaded, and often that's the correct choice. Even in games a single-threaded main thread or logic thread may be the only choice. Where multi-threading makes sense it should be employed, but it's difficult to do well.
Otherwise, it's up to the OS to balance threads appropriately. All major OSes do this well today.
Nowadays you just "scale horizontally" by the magic of whatever orchestration platform you happen to use, which is the modern version of throwing hardware at the problem from the vertical-scaling days.
One can write software that uses the CPU cache in non-dumb ways no matter how many threads your program has. You can craft your structs so that they take less space in RAM, meaning you can fit more of them in cache at once. You can use structs of arrays instead of arrays of structs if that helps your application.

Few people think of things like this today. They just go for the most naive implementation possible, so the branch predictor can't work well and everything needs to be fetched from RAM every time, instead of building things so that the branch predictor and the cache are helping you rather than impeding you. People just do the bare minimum so that the PM says the card is complete, and they never think of it again. It's depressing.
The tools to write fast software are at our fingertips, already installed on our computers. And I have had zero success in getting people to believe that they should develop with performance in mind.
Waiting is ok when it comes to sending batches of data to be transformed or rendered or processed or whatever. I’m talking about synchronous stuff; when I push a key on my keyboard the computer should be done with what I told it to do before I finish pushing the button all the way down. Anything less is me waiting on the computer and that slows the user down.
Businesses should be foaming at the mouth about performance: every second spent by a user waiting on a computer to do work locally, multiplied by the number of users who wait, multiplied by the number of times this happens per day, multiplied by the number of work days in a year… it's not a small amount of money lost. Every more efficient piece of code means users can get by with lighter devices. Lambda is billed by CPU and RAM usage, and inefficient code there translates directly into higher bills. But everyone still writes code that stores a Boolean value as a 32-bit integer, and where all numbers are always 8 bytes wide.
What. The. Fuck.
People already go on smoke breaks and long lunches and come in late and leave early; do we want them waiting on their computers all of the time, too? Apparently so, because I’ve never once heard anyone complain to a vendor that their software is so slow that it’s costing money, but almost all of those vendor products are that slow.
I’m old enough that I’m almost completely sick of the industry I once loved.
Software developers used to be people who really wanted to write software, and wanted to write it well. Now, it’s just a stepping stone on the way to a few VP positions at a dozen failed startups and thousands of needlessly optimistic posts on LinkedIn. There’s almost no craft here anymore. Businessmen have taken everything good about this career and flushed it down the toilet and turned teams into very unhappy machines. And if you don’t pretend you’re happy, you’re “not a good fit” anymore and you’re fired. All because you want to do your job well and it’s been made too difficult to reliably do anything well.
Even operating systems don't get direct access to the hardware these days. Instead a bunch of SoC middlemen handle everything however they like.
If you do the things which make your code friendly to the CPU cache and the branch predictor, when it comes time for your code to run on the CPU, it will run faster than it would if you did not do those things.
Both are written/designed by people who care a lot about application performance and developer experience.
Copilot is trained on Github (and probably other Git forges w/o permission, because OpenAI and Microsoft are run by greedy sociopaths.)
I'd wager that the majority of fleshed-out repositories on these sites contain projects written at the "too-high level" you describe. This certainly seems to be true based on how these models perform ("good" results for web development and scripting, awful results for C/C++/Rust/assembly...), so I wouldn't get your hopes up, unfortunately.
Low-level programming means actual -thinking- about the system, resources, language decisions, etc.
Even humans struggle with it. It's much easier to build a website than, say, a compiler, for anyone, humans and LLMs included.
My personal benchmark for these models is writing a simple socket BPF in a Rust program. Even the latest and greatest hosted frontier models (with web search and reasoning enabled!) can only ape the structure. The substance is inevitably wanting, with invalid BPF instructions and hallucinated/missing imports.
It works great for me, but it is necessarily a learning aid more than a full-on replacement; someone's still gotta do the thinking part, even if the LLMs can cosplay -reasoning- now.
Not Java, but an IDE in 4K: https://en.wikipedia.org/wiki/BASIC_Programming
Having used it quite extensively (Well, five solid days over two weeks, which is about 1000x longer than most people gargling on the internet), it's surprisingly capable.
Imagine if someone with the same talent and motivation was working on today's hardware.
<aside> Someone on Atari Age wrote a LISP for the same machine.
Indeed. I am sure many of us here are burnt out on bloat. I am also sure many of us want to move to smaller stuff but can't, simply because of industry momentum. BUT that doesn't mean the dream is dead, only that we must work towards those goals on our own time. I found Plan 9 and haven't looked back. I can rebuild the entire OS in seconds on a fast machine. Even my little Celeron J1900 can rebuild the OS for several supported architectures in minutes. I can share a USB device seamlessly across my network, PXE booted from a single disk without installing anything. Cat(1) is just 36 lines of C.
There's still hope. Just ignore the industry hype noise and put in the effort ourselves.
I'm looking forward to when app developers ship you an entire computer in the mail to run their text editor.
Yet for the user it is bad -- bloated, slow, feels non-native, has specific bugs which are hard to address for the devs, etc.
I don't see any light for desktop UI development unless some lightweight universal rendering engine appears. Tauri with WebView is somewhat promising, but it has problems on Linux and it's hard to target older systems.
As for your second point: [1]
Correct me if I'm wrong, but isn't it around 800 lines[1]?
1. https://github.com/coreutils/coreutils/blob/master/src/cat.c
The problem isn't "engineering" the problem is the culture of product management. (Note: NOT product managers per se).
I ask this basic question: how many Directors, VPs, or CPOs do you know who got their job by "cutting out unused features"? If you can find one, it will be the exception that proves the rule. The culture of "add", "new", and "shiny" doesn't reward keeping things lean and effective.
In the tangible world we look to accountants for this sort of thing (because features tend to have costs). Think cheap Costco hotdogs and free cookies at DoubleTree. No one in product, dev, or accounting is going to sit down and try to justify losing some code, features, and maybe a few customers to make things faster when you can just "engineer" your way out of it and not have to sell less-is-more.
Google goes a step further and kills entire apps
https://www.slashgear.com/1513242/ford-gets-rid-of-self-park...
But adding yet another gateway to ChatGPT’s API…that’s a $15/mo/user add-on right there, and not just a monkey, but one of the slower intern monkeys, could write the code for such an incredibly innovative and, there’s no other word for it, powerful new feature that everyone should be thrilled to pay for at a 200-300% (or more!) markup. So guess who gets that VP of New Bullshit Development position, and guess what kind of choices get reinforced?
(EDIT: grammar)
Performance just isn’t on that list, and it’s often more and harder work than a given new feature took to create. Office users are getting what Microsoft is designed to deliver.
Absolutely this. I think this is evidence that points to modern civilization starting to collapse. When we can engineer correctly, we're fucked.
Yes! This is what all my projects are geared towards restoring. The big one is not quite ready to announce yet, but I am very proud of it, and extremely excited to release it, to solve exactly that: it makes engineering fun again!
we now DO have civil engineering but that is it
While it's true that LLMs fall flat on their faces when fed massive codebases, the fact of the matter is that I don't need a 200k LOC program to accomplish a single task that an LLM can do in 2k LOC.
To give an example, we have a proprietary piece of software that is used to build (physical) product test systems using flow charts and menus. It's expansive and complex. But we don't need it when we can just spend 30 minutes prompting our way to working test code that produces way faster and more robust systems.
Maybe the devs of that software package cannot dump that whole codebase into an LLM and work on it. But they are completely missing the forest for the trees.
Have a large ZIP file. Preferably like a few gigs and lots of (small) files.
Try to open it with the built-in Windows 11 tooling from Microsoft. It's going to be super slow to even show anything never mind unpack it.
Now install, say, 7-Zip and do the exact same things: opening is instant, and unpacking takes a much, much smaller amount of time (limited only by disk speed).
Turns out optimizations / not doing stupid things is still a thing even with all this raw power we now have.
Besides, the only thing that matters is getting tickets "done" before the arbitrary sprint deadline in 2 weeks, so best not to spend any extra time cleaning up or optimizing the first thing that works. You can't think about performance until the sprint dedicated to performance.
Another example: it takes ~2 seconds to run git on my work machine
(Measure-Command { git status | Out-Null }).TotalSeconds
while running the same command on my personal Windows 11 virtual machine is near instant: ~0.1 seconds.
Still slower than Linux, but not nearly as bad as my work machine.

Why the hell are all my desktop apps written in JS now?!
Have you seen the state of pretty much every non-js UX framework?
That's why.
JS/CSS/HTML won the UX domain in a way that no other stack comes close to. If you look at the most recent, most modern UX frameworks, they are often just half-implemented poor mimics of the JS/CSS/HTML stack, with approximately 0 devs writing new third-party extensions.
IntelliJ uses Swing, SWING, as its UX. A framework written in the 90s, filled with warts. Yet it's still a better experience than the more recent JavaFX. Swing simply has more support.
The bigger issue isn't the tech, it's the ecosystem. While you might like swing, you simply are never going to find the swing version of Material UI or D3.js. That's more the problem that you'll run into.
For some of our apps because we need charting, we are using GraalJS just to run the JS charting library to export to an image that we ultimately put on some of our downloadable reports. It's a huge pain but really the only way to do that.
I remember a time when having your application look "out of place" was undesired, and the ultimate goal was to be "as native as possible". If you are running a website selling something, I agree that you want a brand identity and a unique look. But productivity software shouldn't require users to adapt to new UX paradigms (guessing whether the cancel button comes on the left or on the right, dealing with slightly different input methods and entry shortcuts…).
Anyhow, I think things could be worse, since, as you say, we can embed a webview into any JavaFX/Qt/… app and get the best of both worlds.
It’s not rocket science to eke out oodles of performance out of a potato if you don’t care about correctness or memory safety.
Word 97 will only delight you if you use it on an airgapped computer, as a glorified typewriter, never open anyone else’s documents with it, and are diligent about manually saving the doc to multiple places for when it inevitably self-corrupts.
But at that point, why not be like GRRM and write on a DOS word processor? Those were historically a lot more reliable than these second-generation GUI apps.
We'd sometimes go to the library to write something up in MS Word. We always liked this because it would take a good 5-10 minutes to boot up some kind of basic Unix menu. You'd then select Windows 3.1 and wait another 10-15 minutes for that to load. Then you could fire up Word and wait another 5 minutes. Then you could do 5 minutes of work before the class was over!
By piling up nonzero-cost abstractions left and right.
And getting out of the trap is hard too, because no single abstraction is to blame - you can't just hit things with your profiler and find the hot spot. It's all of them. So now you either live with it or rewrite an entire stack of abstractions.
Probably because Windows needs to make a connection somewhere for every file first and wait for the reply, before granting you the advanced software-as-a-service feature called text editing.
It definitely feels like this at times and I fear there is too much truth in my statement.
But it's not just Windows. My old Chromebook took seconds to open a folder in the file browser (even if it was already open), yet an "ls" in the terminal was instant for any folder. So getting the information was not the problem; between there and displaying it in a GUI, there seem to be myriad (tracking?) layers involved.
The Plan
In the beginning, there was a plan, And then came the assumptions, And the assumptions were without form, And the plan without substance,
And the darkness was upon the face of the workers, And they spoke among themselves saying, "It is a crock of shit and it stinks."
And the workers went unto their Supervisors and said, "It is a pile of dung, and we cannot live with the smell."
And the Supervisors went unto their Managers saying, "It is a container of excrement, and it is very strong, Such that none may abide by it."
And the Managers went unto their Directors saying, "It is a vessel of fertilizer, and none may abide by its strength."
And the Directors spoke among themselves saying to one another, "It contains that which aids plants growth, and it is very strong."
And the Directors went to the Vice Presidents saying unto them, "It promotes growth, and it is very powerful."
And the Vice Presidents went to the President, saying unto him, "This new plan will actively promote the growth and vigor Of the company With very powerful effects."
And the President looked upon the Plan And saw that it was good, And the Plan became Policy.
And this, my friend, is how shit happens.
from anonymous email
My intellij license just expired so today I'm back using Sublime Text, and honestly it's a breath of fresh air / relief - and it's not even the fastest editor, iirc it uses Python under the hood. I've installed Zed but getting plugins and keyboard shortcuts all lined up is always challenging. That one took ~2-3 seconds to cold start.
It seems fascinating how much more efficient Windows apps were back in the nineties, capable of doing almost everything today's apps do, in a similar manner, on orders-of-magnitude less powerful hardware, often performing even faster.
The last time I expressed this, probably also here, somebody suggested the performance drop is the cost of modern security - vulnerability mitigations, cryptography etc.
I also wonder if it's just harder to continually monitor performance in a way that alerts a team early enough to deal with regressions?
That said, security _can_ impact performance! I work on Stripe frontend surfaces, and one performance bottleneck we have comes from needing to use iframes to sandbox and isolate code for security. Having to load code in iframes adds an additional network load before we can get to page render.
Over time decisions are made independently by devs/teams which cause the code bases to get out of alignment with a performant design. This is exacerbated by the growth pressure. It's then really hard for someone to come in and optimize top to bottom because there is everything from a bug to a design problem. Remediation has significant overhead, so only things around the edges are touched.
Fast forward a couple of years and you have a code base that devs struggle to evolve and add features to as well as keep performant. The causes are many and we come to the same one whether we complain about performance or maintainability. You probably don't feel this way, but Stripe and other premier engineering companies are way ahead of others in terms of their practices when you compare with the average situation developers are facing.
Independent mobile development is often where I see most attention to performance these days. The situation for these devs is a little bit closer to what existed in the nineties. They have a bigger span of control and performance is something they feel directly so are incentivized to ensure it's great.
I remember everyone complaining about how bloated they were at the time. Pretty sure someone in 2055 is going to run today's Office on 2055 computers and marvel at how streamlined it is.
https://copy.sh/v86/?profile=windows95
Once it boots, run povray, then click run.
It took over 2 minutes to render biscuit.pov, though it did manage to use SSE!
We used to wait two hours for a mandelbrot to display on a Commodore 64, and were delighted when it did.
Ironically, it was specifically the longer feedback cycle of long builds and scheduled releases that seems to specifically have given us better software?
You can run Office 97 now and it'll start fast because disk i/o and cpus are so much faster now. Otoh Excel 97 has a maximum of 256 columns and 64k rows. You might want to try Excel 2007 for 16k columns and 1M rows, and/or Excel 2016 for 64-bit memory.
You could build fast software today by simply adopting a reference platform, say a 10-year-old 4-core system, and measuring performance there. If it lags, then do whatever work needs to be done to speed it up.
Personally I think we should all adopt the Raspberry Pi Zero as a reference platform.
Edit: Office 2000 was fast too, with 32 megs of RAM. Seriously, what have we done?
I rebooted a lot though (mostly used Linux by then), so maybe it was just fighting with itself when I immediately fired up office after boot.
It's amazingly fast, though it's missing some features and will be really expensive when it leaves beta.
Thank you for posting this, and if you have any other speedy apps you'd recommend, I'd welcome suggestions. My top suggestions are SpeedCrunch [0] (a calculator app) and Everything [1] file search combined with Listary [2]
[0] https://github.com/ruphy/speedcrunch
[1] https://www.voidtools.com/
(For reference, I've tried Total Commander, DOpus, Files, Explorer XP, XY Explorer, Explorer ++, FreeCommander, Double Commander, Q-Dir)
I learned about File Pilot (whose author posts here: https://x.com/vkrajacic) from Casey Muratori (https://x.com/cmuratori) who pushed it a bunch because he loves fast things.
Mostly users interact with the explorer in this scenario to open/save a file in ‘BrowserOS’
It's a little unclear what you mean exactly. Do you want the browsing experience changed for the system's file open/save dialogs? i.e. a third-party file explorer opens instead, with all of its features.
I still love IntelliJ; it's a great product. But it is slow, bloats the computer, and needs constant updating. At least it's incredibly powerful as a tradeoff for the bloat.
The Office debate is slightly different. It is slow, bloats the computer, needs constant updating. But unlike IntelliJ, I don't feel there is any advantage to all that added weight. We are using a word processor; we type words onto a blank screen. Why are Word and Outlook the most common applications to crash my computer, which has an M1 Max chip with 48GB of memory? I can literally run and build AI models on it with no problem, but boot up Microsoft Word to type words onto a blank page and you have a 33% chance of crashing the machine.
Google Sheets and Docs are actually better tools in my opinion for most people. 99% or more of the features you need are available in the google equivalents. These products are fast and run in a browser! The UI is actually VASTLY superior to Microsoft's ribbon. I still can't find stuff that I need on a routine basis and have to google it. I don't have this problem when using Google's office equivalents.
The majority of the app is written in C++. Python is used for plugins.
This is even more sad with Apple. My M1 Mac felt incredibly snappy with Big Sur, but is getting ever so slightly slower with each update. My iPhone SE2 is a complete disaster.
My prediction is that we are about to enter a great winter of poor performance across the industry as AI slop is put into production en masse. Along with bugginess that we have not seen since the early dotcom days.
Similar to the slow-opening glove box in a car, many humans perceive slow computers and software as signifiers of importance/luxury/prestige. At least at first. By the time they are familiar with the slow software, and impatient to just get their work done, it's too late: SnailSoft already has their money.
I have a small utility app that I sell, and I take great pains to keep it small and resource-light. I really appreciate when other devs do the same.
https://www.vintagecomputing.com/index.php/archives/1063
Their service used vector art to render interactive pages like that over <= 2400 baud modems. Other than it being proprietary and stuff, I'm guessing the web would be a much cooler place if HTML hadn't eaten their lunch. SVG is a decent consolation prize, I guess.
It's not merged yet but I've written an Elixir library that writes graphics files in Prodigy's graphics format, NAPLPS. I'm using it to get current weather and creating weather maps that are displayed in Prodigy. https://github.com/rrcook/naplps_writer
You can run Prodigy in DOSBox and get screenshots.
Opening a spreadsheet, even if you don't want to print it, will hang for 30 seconds doing nothing, because LibreOffice loads the printer settings for the document, which means asking the printer, which, if the printer is network-based and turned off, means a 30-second wait for a timeout.
Reported in 2011, still not fixed: https://bugs.documentfoundation.org/show_bug.cgi?id=42673
"Print to PDF -> print the PDF" is much more reliable.
sigh
Perhaps spurred by this comment, there was new discussion in the report and it turned out Microsoft has fixed the issue in Windows 11: https://bugs.documentfoundation.org/show_bug.cgi?id=42673#c8...
Quote:
"I was one of the many who reported this problem in one from or another. The problem is Windows-specific. I have found out that the problem actually comes from the Windows print system. There is no way to check that a printer is actually active or working or even present without incurring a long time-out in case the printer is not present or powered. Trying to check the defualt printer will incur the same time-out.
Calc apparently wants to check the printer to know how to construct its layout, and has to wait for the time-out to continue.
Some of the comments that claim that Calc hangs and never returns have probably not waited long enough for the timeout.
On my new Windows 11 computer, this printer system behavior has been changed and I no longer experience a delay while opening Calc."
"It's written in Python and not e.g. in Rust" is simply not relevant in that context.
(For that matter, when uv is asked to pre-compile, while it does some intelligent setup for multiprocessing, it still ultimately invokes the same bytecode compiler - which is part of the Python implementation itself, written in C unless you're using an alternative implementation like PyPy.)
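For reference, that bytecode compilation step can be exercised directly from the standard library; a minimal sketch (assuming CPython) of what a precompile pass boils down to:

```python
import os
import py_compile
import tempfile

# Write a tiny module, then compile it to bytecode the same way
# `python -m compileall` (and, presumably, uv's precompile step) does.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mod.py")
    with open(src, "w") as f:
        f.write("VALUE = 42\n")

    # Returns the path of the cached .pyc (under __pycache__, per PEP 3147).
    pyc = py_compile.compile(src, doraise=True)
    compiled_ok = os.path.exists(pyc)

print(compiled_ok)  # True
```

On a later import, CPython reuses that cached .pyc instead of re-parsing the source, which is exactly the saving being discussed.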
Of course, not always the case. C++ is a good counter example with a massive range of performance "orientedness". On the other hand, I suspect there are few Rust / Zig or C programmers that don't care about performance.
On the flip side, I've seen quite a few C developers using their own hand-rolled linked lists where vector-like storage would be more appropriate, without giving it a second thought. Implementing good hash tables from scratch turns out not to be very much fun, either. I'm sure there are off the shelf solutions for that sort of thing, but `#include` and static compilation in C don't exactly encourage module reuse the same way that modern languages with package managers do (even considering all the unique issues with Python package management). For better or worse.
(For what it's worth, I worked in J2ME for phones in the North American market in the mid 00s, if you remember what those were like.)
Well first off, pip itself does defer quite a few imports - just not in a way that really matters. Notably, if you use the `--python` flag, the initial run will only import some of the modules before it manages to hand off to the subprocess (which has to import those modules again). But that new pip process will end up importing a bunch more eventually anyway.
The thing is that this isn't just about where you put `import` statements (at the top, following style guidelines, vs. in a function to defer them and take full advantage of `sys.modules` caching). The real problem is with library dependencies, and their architecture.
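(For reference, the deferral technique just mentioned is nothing more than moving the import into the function that needs it; a minimal sketch, where `render_table` is a made-up name:)

```python
def render_table(rows):
    # Deferred import: the parse/exec cost is only paid the first time this
    # runs, and sys.modules makes every later `import json` a dict lookup.
    import json
    return json.dumps(rows)

# Nothing here touches json until the function is actually called.
print(render_table([{"a": 1}]))  # [{"a": 1}]
```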
If I use a library that provides the `foo` top-level package, and `import foo.bar.baz.quux`, at a minimum Python will need to load the `foo`, `foo.bar`, `foo.bar.baz` and `foo.bar.baz.quux` modules (generally, from the library's `foo/__init__.py`, `foo/bar/__init__.py`, `foo/bar/baz/__init__.py` and `foo/bar/baz/quux.py` respectively). But some libraries offer tons of parallel, stand-alone functionality, such that that's the end of it; others are interconnected, such that those modules will have a bunch of their own top-level `import`s, etc. There are even cases where library authors preemptively import unnecessary things in their `__init__.py` files just to simplify the import statements for the client code. That also happens for backward compatibility reasons (if a library reorganizes some functionality from `foo` into `foo.bar`, then `foo/__init__.py` might have `from . import bar` to avoid breaking existing code that does `import foo`... and then over time `bar` might grow a lot bigger).
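That chain is easy to observe: importing the leaf module materializes every package along the path in `sys.modules`. A self-contained sketch with a throwaway package (`foo` here is a stand-in, not a real library):

```python
import os
import sys
import tempfile

# Build foo/bar/baz.py on disk, with empty __init__.py files.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "foo", "bar"))
for pkg in ("foo", "foo/bar"):
    open(os.path.join(tmp, pkg, "__init__.py"), "w").close()
with open(os.path.join(tmp, "foo", "bar", "baz.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path.insert(0, tmp)
import foo.bar.baz

# Every package on the way to the leaf was loaded too.
print(sorted(m for m in sys.modules if m.startswith("foo")))
# ['foo', 'foo.bar', 'foo.bar.baz']
```

With empty `__init__.py` files this is cheap; the cost in real libraries comes from what those `__init__.py` files themselves import.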
For pip, rich (https://pypi.org/project/rich/) is a major culprit, from what I've seen. Pip uses little of its functionality (AFAIK, just for coloured text and progress bars) and it tends to import quite a bit preemptively (such as an emoji database). It's very much not designed with modularity or import speed in mind. (It also uses the really old-fashioned system of ad-hoc testing by putting some demo code in each module behind an `if __name__ == '__main__':` block - code that is only ever used if you do `python -m rich.whatever.submodule` from the command line, but has to be processed by everyone).
And yes, these things are slow even with Python's system for precompiling and caching bytecode. It's uniformly slow, without an obvious bottleneck - the problem is the amount of code, not (as far as I can tell) any specific thing in that code.
...and by then, the requests for performance are somewhere between onerous and ridiculous.
I'm as wary of premature optimization as anyone, but I also have a healthy fear of future-proofed sluggishness.
Edit: It still might be a bad idea to waste the IO if you do not have to but the latency of a temporary table is usually RAM latency, not disk latency even for temporary tables on disk.
If you’re curious, the EBS disks Aurora uses for temporary storage, when faced with a QD of approximately 240, can manage approximately 5000 IOPS. This was an r6i.32xlarge.
My hypothesis is currently that the massive context switching the CPU had to do to handle the interrupts slowed down its acceptance of new connections / processing other queries enough such that everything piled up. I’ve no idea what kind of core pinning / isolation AWS does under the hood, but CPU utilization from disk I/O alone, according to Enhanced Monitoring, was about 20%.
My personal anecdotes, which are music centric but all apply to my software career:
1. I've studied music my whole life, and baked into music is the idea of continual practice & improvement. Because of this experiential input, I believe that I can always improve at things if I actively put a bit of energy into it and show up. I believe it because I've put in so many hours to it and have seen the results. This is deeply ingrained.
2. When I picked up bass in high school, I spent the first year learning tabs in my bedroom. It was ok, but my ability accelerated when I started a band with my friends and had to keep up. I really think the people you surround yourself with can: push you more, teach you things you didn't know, and make the process way more of a fun hang than doing it by yourself.
3. Another outcome from music education was learning that I really love how mastery feels. There's a lot of positive feeling from achieving and growing. As a result, I try to seek it out in other places as well. I imagine sports/etc are the same?
"Programming" consists of an insanely large number of isolated communities. Assuming the respective person is capable, I would assume that he simply comes from a very different "programming culture".
I actually observe a very related phenomenon for myself: the more I learn about some very complicated programming topics, the more "alien" I actually (start to) become to the programming topics that I have to do at work.
If you've ever given answers to that in another comments on HN or elsewhere, feel free to link.
Plenty of graduates simply got into it to make money, and have no real deep interest. Some of them love the theory but hate the practice. And some of them are good at both, of course.
By contrast, self taught people tend to have personal interest going for them. But I've also worked with self taught people who had no real understanding (despite their interest), and who were satisfied if something just worked. Even if they are motivated to know more, they are often lacking in deeper theoretical computer science (this is a gap I've had to gradually fill in myself).
Anyway, the determining factor is rarely exactly how they acquired their skills, it's the approach they take to the subject and personal motivation and curiosity.
There are many college educated software developers who have that sort of drive (or passion, if you will) and there are just as many who don't, it's not something college can teach you, and the same is true for self-taught developers.
At the end of the day "self-taught" is also a spectrum that ranges from people who created their first "hello world" React app 2 months ago to people who have been involved in systems programming since they were 10 years old, and have a wealth of knowledge in multiple related fields like web development, systems administration, and networking. That's why I think it's silly to generalize like that.
Software development is extremely broad so depending on the discipline self-taught developers might not be missing anything essential, or they might have to learn algorithms, discrete mathematics, linear algebra, or calculus on their own. I learned all of that in college but I'd probably have to learn most of it again if I really needed it.
Guess it makes sense; I'm self-taught myself, but thought academically trained developers would have a leg up in theory and mathematics. At the same time, when I at one point considered further formal education for myself (at least paid courses and such), I realized there isn't much I can't teach myself with the resources available (which include high-quality university lectures, available for free).
Thanks for your perspective.
I have a Master's in Economics. After 3 years of economics, I started a Master's program in maths (back then the Master's degree was the standard thing you achieved after 4.5–5 years of studying in my country, there was basically nothing in-between high school and that). 9 years later I got a PhD in mathematical analysis (so not really close to CS). But I've been programming as a hobby since late 80s (Commodore 64 Basic, Logo, then QBasic, of course quite a bit of Turbo Pascal, and a tiny bit of C, too). I also read a bit (but not very much) about things like algos, data structures etc. Of course, a strong background in analysis gives one a pretty solid grasp of things like O(n) vs O(n log n) vs O(n^2) vs O(2^n). 9 years ago I started programming as a side job, and 2 years later I quit the uni.
I lack a lot of foundations – I know very little about networks, for example. But even in our small team of 5, I don't /need/ to know that much – if I have a problem, I can ask a teammate next desk (who actually studied CS).
Of course, _nemo iudex in causa sua_, but I think doing some stuff on the C64 and then Turbo Pascal gave me a pretty solid feeling for what is going on under the hood. (I believe it's very probable that coding in BASIC on an 8-bit computer was objectively "closer to the bare metal" than contemporary C with all the compiler optimizations.) I would probably be able to implement the (in)famous linked list with my eyes closed, and avoiding N+1 database queries is a natural thing for me to do (having grown up with a 1 MHz processor, I tend to be more frugal with cycles than my younger friends). Recently I was tasked with rewriting a part of our system to make it consume less memory (which is not necessarily an obvious thing in Node.js).
Another teammate (call them A) who joined us maybe 2 years ago is a civil engineer who decided to switch careers. They are mainly self-taught (well, with a bit of older-brother-taught), but they are a very intelligent person with a lot of curiosity and willingness to learn. I used to work with someone else (call them B) who had formal CS education (and wasn't even fresh out of university; it was their second programming job, I think), but lacked general life experience (they were 5-10 years younger than A), curiosity, and deep understanding, and I preferred to work with A over B hands down. For example, B was perfectly willing to accept rules of thumb as universal truths "because I was told it was good practice", without even trying to understand why, while A liked to know _why_ it was a good practice.
So – as you yourself noticed – how you acquire knowledge is not _that_ important. IMHO the most important advantage of having a formal CS education is that your knowledge is more likely (but not guaranteed!) to be much more comprehensive. That advantage can be offset by curiosity, willingness to learn, some healthy skepticism, and age. And yes, I think that young age – as in, lack of general life experience – can be a disadvantage. B was willing to accept even stupid tasks at face value and just code their way through them (and then tear the code apart because it had some fundamental design problem). A, as a more mature person, instinctively (or maybe even consciously) saw when a task did not fit the business needs, and sometimes was able to find another solution which was, for example, simpler/easier to code and at the same time satisfied the actual needs of the customer better.
That said, I have also worked with brilliant people who had no formal education in the subject whatsoever, they just really, really liked it.
I’m biased towards ops because that’s what I do and like, but at least in that field, the single biggest green flag I’ve found is whether or not someone has a homelab. People can cry all they want about “penalizing people for having hobbies outside of their job,” but it’s pretty obvious that if you spend more time doing something – even moreso if you enjoy it – you will learn it at a much faster rate than someone who only does it during business hours.
I am an old-school developer with a computer engineering degree but many of the old famous devs were self taught. Yes, if you learn how to code through online courses you will miss some important fundamentals but those are possible to learn later and I know several people who have.
We have excellent metrics between PMM and AWS Perf. Insights / Enhanced Monitoring. I assure you, on-disk temp tables were to blame. To your point, though, MySQL 5.7 does have a sub-optimal implementation in that it kicks a lot of queries out to disk from memory because of the existence of a TEXT column, which is internally treated as a BLOB. Schema design is also partially to blame, since most of the tables were unnecessarily denormalized, and TEXT was often used where VARCHAR would have been more appropriate, but still.
Trust me, so much of the stuff I learned there was actively harmful. Half the subjects were random fillers, and so on.
I envy Americans so much with that, their CS education seems to be top notch.
Have you tried launching a local app by typing in the start menu on a default win11 install with limited / slow internet access? Good times. How about doing some operation (say delete an e-mail) in one window of "new" outlook and having the others refresh?
I have never understood how some otherwise reasonable people are able to consider this absolute shitshow of a work environment good enough.
They refused to store files in directories and use good file names (although they were limited to 8.3), so they just scrolled through all their files until they found the right one. But they could open them so fast they didn't care.
In Windows you had to use the mouse, click three times, wait for the document to load and render... it was instant under DOS character mode.
Reveal Codes was an order of magnitude more useful than whatever crap MS continues to pass off as formatting notation to this day 20+ years later, and that's before we even get into Word being over-helpful and getting ahead of you by doing things with autoformat you never wanted to have happen.
Yes, I know WordPerfect is still around, but fat chance being able to conveniently use it anymore.
I think rosy recollection is tainting your memory here. How often did docs get corrupted due to those aforementioned memory safety issues, even when that Win95/98 box was never connected to the internet?
Of course it’s gonna be super snappy. It was designed to run on any Doom-compatible hardware. Which includes some toasters now.
Edit: it’s also worth noting that 1997 was right around the time where Moore’s law succumbed to the laws of physics for improving single-core CPU performance via clock speed alone.
Also traditional menus had some traditional standards. Once you learned what was under "File" or "View" or "Insert" or "Format" it was often pretty similar across applications.
There is no faster discoverability than O(log(N)) search using the letters of the name as a lookup.
The biggest failure of modern operating systems is failing to standardize this innate reality.
Windows, Linux, etc. should have: 1. a keyboard button to jump to a search box; 2. type 1-2 letters and hit enter. Operating systems and applications should all have this kind of interface.
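The log-time lookup being described is just a binary search over sorted names; a minimal sketch using Python's `bisect` (the app list is made up):

```python
from bisect import bisect_left

def prefix_matches(sorted_names, prefix):
    # Binary search for the first and last possible match: O(log N)
    # to locate the range, plus the cost of slicing out the matches.
    lo = bisect_left(sorted_names, prefix)
    hi = bisect_left(sorted_names, prefix + "\uffff", lo)
    return sorted_names[lo:hi]

apps = sorted(["calculator", "calendar", "camera", "word", "wordpad"])
print(prefix_matches(apps, "ca"))  # ['calculator', 'calendar', 'camera']
print(prefix_matches(apps, "wo"))  # ['word', 'wordpad']
```

Typing one or two letters narrows the candidate set almost instantly, which is why launcher-style search feels so much faster than hunting through menus.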
The most ironic apps have a ribbon named something like "Edit" but then the most used "edit" command lives in an unrelated ribbon.
When the accursed ribbon came along, "discoverability" went out the window. Functions were grouped according to some strange MS logic I could never understand, and it takes me twice as long to format a page as it used to. Now, I basically just use Word for text entry, and if I want an elegant format, I use a graphic design app like Illustrator.
Judging from what I've read online, you may be the only person who actually likes the ribbon.
CUA ~= "standard menus + keyboard shortcuts for dos and windows": https://en.wikipedia.org/wiki/IBM_Common_User_Access
I was hoping to figure out what led to design incompetence so spectacular that people would still be discussing it after 17 years.
I think there’s a clue in the abstract: The author claims they made 25,000 mock UI screenshots, but doesn’t mention user studies or even internally prototyping some of the concepts to see how they feel.
It wouldn’t be so bad if keyboard navigation was as good as with the classic menus, but having to press the Alt key separately, and general increased latency, kills it.
And funny thing, it barely uses any memory! Makes today's apps look like monsters. Even my music player uses 5x more memory while paused than freaking Excel with multiple pages open.
`Omnibook300Intro-5091-6270-6pages-May93.pdf` https://www.hpmuseum.net/exhibit.php?hwdoc=123 sez:
> Built-in ROM applications don't use up disk space
- Microsoft Windows
- Microsoft Word for Windows
- Microsoft Excel
- Appointment Book
- Phone Book
- HP Calculator
- Microsoft File Manager
- LapLink Remote Access™
You are wildly off base. The algorithms aren't difficult or special. They were written by people reading text books for the most part.
They are able to sit on an old algorithm for decades because the DMCA made interoperability, and selling cheaper tools like that, basically illegal.
Because of the DMCA, the work that was done to make IBM PC clones would have been close enough to illegal to kill all the businesses that tried it. They still tried with the liberal IP laws at the time.
> The thing is, all of these bad features were probably justified by some manager somewhere because it’s the only way their feature would get noticed. They have to justify their salary by pushing all these stupid ideas in the user’s faces. “Hey, look at me! I’m so cool!” After all, when the boss asks, “So, what did you accomplish in the past six months,” a manager can’t say, “Um, a bunch of stuff you can’t see. It just works better.” They have to say, “Oh, check out this feature, and that icon, and this dialog box.” Even if it’s a stupid feature.
On the other hand, I very much enjoyed going to Excel 97 cell X97:L97, pressing tab, holding Ctrl+Shift and clicking on the chart icon, because then you could play Excel's built in flight simulator
It still does. Neither LibreOffice itself nor its installation process with its choice of components has changed seriously since the old days, and I'm very grateful for this. The QuickStarter isn't as relevant anymore since we have fast SSDs now, but some slow computers are still around, and it's great we still have the option.
SuperFetch was supposed to work with any app. It never seemed to have much effect IMO.
Notably, a solution to the current issues with modern office is to use a copy of Office 97.
20+ page XLS uses ~7MB and loads effectively instantly (on a frankly horribly performing laptop that can usually barely run demos on HN)
Why is HN suddenly so interested in Microsoft doing the same thing that has always been done by large, bloated app suites?
Probably because it is horrible? It's indicative of how we spend less time optimizing code than we do coming up with computationally expensive, inefficient workarounds.
Let's say a hypothetical Windows user spends 2% of their day using Office (I made that number up). Why should Office be partially loaded the other 98% of the time? How is it acceptable to use those resources?
When are we actually going to start using the new compute capabilities in our machines, rather than letting them get consumed by unoptimized, barely decent code?
I don't know about a "hypothetical" user, but I'd bet a "mean" (corporate) user probably uses office all day long. Hell, I've lost count of the number of e-mails I've seen having a screenshot which is inside a word document for some reason, or the number of excel files which are just a 5x4 table.
Even as a paying customer, all the Office apps and services are now so aggressively pushy it's gone beyond "Rude", is now passing "Annoying" and accelerating toward "Yeah, I can't do this." I just want to ask Satya "How much more do I have to pay you to simply STFU and let me NOT use (and not even know about) services I already pay for but don't need?"
I bought three 12 month Office subs for $49 each on a black Friday blow-out three years ago. The last one will expire in January and if it doesn't get better, I'll be ending my 30 year Office relationship. I'll probably go to Libre Office and replace OneDrive cloud storage with SyncThing + my own server. I'd be fine to keep paying $50 a year for the 5% of Office I actually use - but only if I can use the exact Office I had around three years ago before it was so annoying.
> «Even as a paying customer, all the Office apps and services are now so aggressively pushy it's gone beyond "Rude", is now passing "Annoying" and accelerating toward "Yeah, I can't do this." I just want to ask Satya "How much more do I have to pay you to simply STFU and let me NOT use (and not even know about) services I already pay for but don't need?"»
Office used to be software that justified its cost, it's now just consistently annoying to use.
However: raising concerns is a bad career move apparently. These ideas... aren't proposed by devs; if that makes sense.
Word is no longer useful to me for composition. This seems like a bad thing.
It reminds me of that college humor sketch about the CEO of Oreo shouting at his team for trying to innovate on the Oreo... It's a solved problem, we made the perfect cookie 100 years ago. Just stop
I recently wrote a macro so that Word could call an AI API to do AI-assisted translation, works like a charm.
The main advantage of office 2003 of course is that it's the last office without activation and other crap: you pass the serial and own it for life, it won't bother you again.
I wanted to only use 2003, but after the 10th time I argued with a person who sent me a docx for editing, I gave up.
The issue is that roundtripping between Office 2007+ and Office 2003 is unreliable and will often result in corrupted files.
Using Office 2003 (with Compatibility Pack add-on to open xlsx and docx) is ok for isolated work but can be unreliable for collaborative back & forth editing depending on what features are used. E.g. cell colors used in Excel 2007 xlsx get corrupted in Excel 2003 xls.
https://learn.microsoft.com/en-us/openspecs/windows_protocol...
That's actually not true, Office has had activation since XP (2002), so 2003 is included in that.
It works fine if the user is ok with the features from 2003. E.g. Excel 2003 is limited to smaller spreadsheets of 65536 rows by 256 columns but Excel 2007+ can handle larger worksheets of 1048576 rows by 16384 cols.
I also recently used Excel's new LAMBDA() function, which was introduced in 2020. In earlier versions, accomplishing the same task of assigning a temp variable to hold an ephemeral intermediate value required writing a VBA UDF. VBA is a workaround, but LAMBDA() is nicer to use because Excel throws up scary security warnings whenever an xls file containing VBA macros is opened.
I might be able to get by with Word 2003 more than Excel 2003.
A lot of normal users of Excel are not users of database software like SQLite or MS Access. That's too cumbersome. E.g. they download a csv file that has ~100000 rows (which really isn't that "big") and clicking on it gets them an instant visual grid as a GUI in Excel. Slicing & dicing and pivoting data is way faster in Excel than coding SQL WHERE GROUP BY statements. I've commented previously on why databases are not substitutes for the workflow ergonomics made possible by spreadsheet tools : https://news.ycombinator.com/item?id=30987638
It's similar to reasons why Python/R or Jupyter notebooks are also not substitutes for Excel for the typical users of Excel.
The low row count of 65536 in Excel 2003 was just a legacy limitation of 1980s 16-bit computing that was carried over into 32-bit computing for many years for backwards compatibility reasons. Spreadsheet users don't really want to switch to databases or Python just to get more usable rows than 65536.
> The scenario of "I just sent you an xlsx where the rows highlighted in red are problems and if you can just add your notes to column K, that would be great. Thanks!" -- is not easy in other tools that are not spreadsheets.
There are no words to tell how much I hate that!!! ;-) Users are too creative. Some will merge some cells and not others and boom the file can't be properly sorted anymore. Many will use font color, font weight or background color to mean wildly different things, which is very difficult or impossible to sort or do a sumif over. Others will add footnotes, because why not?, and links to spreadsheets that never leave their own device, or mini-blank-rows for spacing and general layout, etc. It's completely insane.
Your side lost completely. Pop a signal flare or build a bonfire, maybe someone can rescue you from the island you've been living on since the war ended.
XLookup sure is useful but again you can probably replace it with a combination of vlookup and hlookup (or index match).
Regarding the size... If you're dealing with huge spreadsheets it's really better to use a proper db. Or even manipulate data with sqlite. sqlite can query xlsx files directly (with an extension), it's extremely fast and stable.
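For the "just use sqlite" route, Python's standard library alone gets you from CSV-ish data to a GROUP BY; a minimal sketch (the table and data are made up):

```python
import csv
import io
import sqlite3

# Pretend this came from a downloaded CSV file.
raw = "region,amount\nnorth,10\nsouth,5\nnorth,7\n"
rows = list(csv.DictReader(io.StringIO(raw)))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(r["region"], int(r["amount"])) for r in rows],
)

# The spreadsheet-style "pivot": total per region.
for region, total in con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
):
    print(region, total)  # north 17, then south 5
```

Unlike a spreadsheet, this stays fast and stable well past a million rows, at the cost of writing the query instead of clicking.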
And whilst you can work around lack of XLOOKUP or SUMIFS using the older functions, again it constrains how you lay things out (eg VLOOKUP needs you to presort your table by the lookup column if you don’t want an exact match) and this can often result in sheets which are much more unwieldy and slow to calculate.
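The sort requirement exists because VLOOKUP's non-exact mode is effectively a binary search for the largest key less than or equal to the lookup value; a rough Python equivalent (function name and data are made up):

```python
from bisect import bisect_right

def vlookup_approx(keys, values, lookup):
    # Emulates VLOOKUP with range_lookup=TRUE: keys must be sorted
    # ascending; returns the value for the largest key <= lookup.
    i = bisect_right(keys, lookup) - 1
    if i < 0:
        raise ValueError("lookup below smallest key (#N/A in Excel)")
    return values[i]

# Tax-bracket-style table: only meaningful if the first column is sorted.
thresholds = [0, 10000, 40000]
rates = ["0%", "20%", "40%"]
print(vlookup_approx(thresholds, rates, 25000))  # 20%
```

If the keys aren't sorted, the binary search lands on an arbitrary row, which is why an unsorted table silently returns wrong answers rather than erroring.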
I remember Outlook clipping the last character off the email subjects, for example. Might have been Office 2010.
What blows my mind is how dreadful search is in Google docs. The thing that should be really good is really bad.
Strange days
If you thought about it for a few seconds, you would realize that companies with big legal teams would not sign a contract that gives Google rights to their data.
Are they really doing training separately for each workspace? I thought LLM training was enormously expensive and needed lots of data, which wouldn't make sense to do separately.
That may be the case, but I wouldn’t count on it. Probably it can change with one email from Google that has “oh btw we’re changing some contract terms, you have 14 days to opt out, no big deal” buried deep down.
The problem is that we thought "let's switch to the online MS Word editor", which then proceeds to delete your text as you write [1]. Bear in mind that my company pays for an Office subscription per employee for that crap.
[1] https://www.reddit.com/r/Office365/comments/11be6wd/firefox_...
[2] https://answers.microsoft.com/en-us/msoffice/forum/all/how-c...
We gave up for large documents, assigned an editor and just send them chunks of text.
Never had issues with printers to be fair, but it sounds like something that could be done in a background thread.
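The background-thread idea can be sketched with `concurrent.futures`: kick off the (possibly hanging) printer query and fall back to defaults if it doesn't answer quickly. Here `query_printer_settings` is a stand-in for whatever the real call would be:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def query_printer_settings():
    # Stand-in for the real printer query, which can block for ~30 s
    # when a network printer is offline.
    time.sleep(1.0)
    return {"paper": "A4"}

DEFAULTS = {"paper": "Letter"}

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(query_printer_settings)
    try:
        # Give the printer a short budget instead of blocking the UI.
        settings = future.result(timeout=0.1)
    except TimeoutError:
        settings = DEFAULTS  # open the document now; refresh layout later

print(settings)  # {'paper': 'Letter'} when the query times out
```

The document opens immediately with default page metrics, and the layout can be refreshed if and when the real settings arrive.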
Bear in mind that we are contrasting this with Office, which is itself incredibly slow to start.
> extremely janky UI from 20 years ago
I love this about Libreoffice, everything can be located super reliably.
> poor performance
For a Java application I think it's crazily fast?
> and bad formatting issues and incompatibilities
It's certainly not a 100% drop-in replacement. A lot of the formatting issues I have experienced are because an Office user did something that assumes a perfect renderer, something we don't even get in browsers. Like people pressing Enter multiple times to create a new page instead of just Ctrl+Enter.
LibreOffice isn't written in Java. It can optionally use Java for extensions and for some database reporting features: https://wiki.documentfoundation.org/Faq/General/015
Then I got a new computer without bothering to do the installation. It was a long time before I discovered that I need any of Word/Excel/PowerPoint. And I was able to get by with Google Docs. If that's not good enough, I go to the free version of Office 365. In the rare occasions where I need the actual, native Office software for compatibility/functionality reasons, I do it on another machine I have access to. This has worked out surprisingly well.
I also used OneNote for the better part of a decade before switching to Linux in 2017. Joplin is ok-ish, but Obsidian is closer to OneNote with its folder-based layout.
We evaluated it for our migration away from MS software and would have gone with it, but it lacks an office server for Nextcloud integration.
Otherwise it works fine, haven't had any issues with the documents it produces and I particularly like the direct export to pdf feature.
Select TOOLS > OPTIONS > ADVANCED > Enable experimental Options (WARNING this is experimental and may be unstable) > OK and then RESTART LIBREOFFICE. On restart VIEW > TOOLBAR LAYOUT > NOTEBOOKBAR. You can then play with the options with VIEW > NOTEBOOKBAR > CONTEXTUAL GROUPS/ CONTEXTUAL SINGLE / TABBED.
This is a perfect example of actions that make adoption harder. This should have been at most 2 clicks and prominently displayed assuming LibreOffice wants to be a great alternative to MS Office and make the transition easier. I have been using Linux daily for over 20 years now and it is not intuitive to me - it doesn't make me very optimistic about the experience for a new user.
Then this dialog appears: https://postimg.cc/YhVWyQVJ
I don't know, I quite like it, reminds me of the old Office look.
Plus, there's at least a bit of customization that you can do, which is pleasant: https://imgur.com/a/libreoffice-ui-80hwOp0
Very much seems like a matter of preference.
On my personal computers, I haven't used MS Office in close to 20 years.
I use it at work, because that’s what we’re given to use, but 95% of my usage is opening CSV files in Excel. I find documents are rarely written in Word anymore, and the use of PowerPoint is actively discouraged at this point.
If the parent commenter only uses Office a dozen times per year, they should quite easily get by with something else. Google Docs, iWork, a simple text editor… there are options beyond LibreOffice. Which specific options would depend on what those dozen uses actually are.
If open source alternatives aren't suitable, my fallback is to get whatever the last retail box versions were of the few Office apps I actually occasionally use and then never update them. There hasn't been a single new Office feature I care about added in about ten years.
As someone who hasn't used Office much in the last 15 years, it's nearly unusable for me. I have to Google how to do basic things because everything is confusing, ugly, and hidden (or hard to find among the huge number of icons).
It's because I don't like the Chinese torture you're referring to. We're programmers, we don't have to live that way.
you win this one vim, but I’ll get you next time.
Vim has by far the better default user interface, though.
Day to day I'm listening to music, reading emails, internet, and writing programs with vim (half my time I'm ssh'd into other machines anyways. I do ML research). So I got pretty much everything covered except Slack and Signal
Note that if you google you will probably get spotify-tui[1] which DOES NOT work
[0] https://github.com/aome510/spotify-player
[1] https://github.com/Rigellute/spotify-tui
side note: man... I really wish I had the time to write or rewrite some TUIs. I'm sure I'm not the only one... The problem with a lot of open source is that these are side projects. I'd imagine the state of things could be a lot better if some small org just paid a few engineers to make and maintain a few of them.
Could you expand on that?
Excel takes care of a lot of things that are a massive pain to deal with in any other solution. It starts with the ability to import data with a simple Ctrl-C + Ctrl-V instead of having to write code, but it doesn't end there.
Yesterday I was using Outlook 365. There was a URL in one of my emails and I needed to find other emails containing it. Trivial, and one of the main use cases, right?
Put the URL in the search box: 0 results (including the email I had just copied it from). Mkay, maybe non-alphanumeric characters are messing with some internal regex or similar, so I stripped it down to the bare hostname. Still 0 results (when searching all mailboxes, including bodies).
Maybe it's some Exchange setting, who knows, who cares. Pissed off fighting such basic tech instead of doing actual work.
It’s truly amazing that we have seemingly regressed in basic desktop functionality since the early 2000s.
Just for fun, try installing an old OS in a virtual machine. Marvel at how fast the old OS runs at modern SSD speeds. Get frustrated at the random hangs, freezes, glitches, and plain bad behavior of the programs you know and love, because the slowness of computers at the time hid it all. 20 cores of unused CPU power, dozens of gigabytes of RAM laying at the ready, disk I/O hitting dozens of megabytes per second, but still loading screens everywhere.
I once tried to go back, for nostalgia's sake, just doing the things I do on an old OS for fun. The grass wasn't much greener back then, I just had lower standards.
Can't really blame the devs though because very often they only had single threads and definitely single cores to work with.
I am using my mac with an LDAP (AD) user account, so I am possibly in the minority of people here.
I have to assume that Outlook email searches have already been set up to have ads injected into them, when/if one day Microsoft decides to flip the switch. Actually, I'm so out of touch with Windows they might already be doing this.
The airport approach to computing!
I have a feeling it's based on tokenising the input rather than a string scan like we'd do in the old days. Harder to match a literal string if all you have is a tree of tokens or something, I guess.
Opengrok was the first place I ran into this, years ago. We had a Perl code base, and Perl syntax is famously "an explosion in an ASCII factory", so it was a real pain trying to find exact text matches with it.
Having said all that: I also hate how shitty search almost everywhere is. It’s hard, but not that hard.
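The tokenisation guess above is easy to demonstrate. Here is a toy sketch (not Outlook's or Opengrok's actual implementation; the message body and URL are made up) of how a token-based index can miss a literal URL that a plain substring scan finds instantly:

```python
import re

def tokenize(text: str) -> list[str]:
    # Split on anything that isn't a letter or digit, roughly what
    # many search indexes do before storing terms (illustrative only).
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

body = "See the report at https://example.com/q?id=42 for details."
index = set(tokenize(body))

query = "https://example.com/q?id=42"

# The literal URL is never stored as a single token, so a token
# lookup misses it entirely...
print(query.lower() in index)  # False

# ...while an old-fashioned substring scan finds it immediately.
print(query in body)  # True
```

A real engine would also index n-grams or offer a "phrase/exact" mode to cover this case; the point is only that tokenization alone throws the literal string away.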
We only have a single text field as the input; how are we supposed to guess whether you want to find an exact match of the phrase, a fuzzy match, at least one of the words provided, or any other possible variation? Also, are you interested in the content, the subject, the recipient, the sender address you used, a header field, an attachment, what have you? Do you want them ranked by the frequency of the word, or the position from the start of the text? Does it count those occurrences in quoted passages of previous mails downthread multiple times? What if it’s a stop word?
There are of course sensible ranking solutions and heuristics for these questions. I just want to highlight it’s not as trivial as it first sounds. Most mail clients probably don’t ship with a Lucene index—while they should.
I use Thunderbird and it's approximately 100x better at searching for emails than Outlook. I just tell it whether I'm looking in the subject, in the body, in the sender, whether it's fuzzy, etc., and then it pulls up the emails.
Whereas Outlook doesn't ask shit and, in return, doesn't have a working search.
With mu4e (an Emacs package), you can have lightning-fast searching across multiple mail accounts. And with a bit of work (https://stuff.sigvaldason.com/email.html) it will happily interoperate with Microsoft Exchange systems that require the OAuth2 dance.
Which is maddening because back when it was released on Tiger it was great, and on spinning disks.
If the macOS or Windows searches were just wrappers for find/grep, it would already be an improvement!
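A mail search that really is just a grep wrapper fits in a few lines. A toy sketch (the maildir and messages are made-up fixtures for the demo):

```shell
# Toy "mail search" that is literally just grep over message files.
d=$(mktemp -d)
printf 'See https://example.com/q?id=42 for details.\n' > "$d/msg1"
printf 'Nothing interesting here.\n' > "$d/msg2"

# -r: recurse, -l: print matching file names, -F: match the literal
# string, so URL punctuation never trips up a regex engine.
grep -rlF 'https://example.com/q?id=42' "$d"

rm -r "$d"
```

The `-F` flag is the key: it makes the query a fixed string, exactly the "exact match" mode that indexed searches so often fail to offer.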
Probably explains why it's something that works well and works fast.
Granted, creating any kind of complex multi-clause query is a pain, but for simple searches it never lets me down whereas Outlook often just fails to find things I know are present.
https://marcoapp.io/blog/marco-an-introduction
We're building an IMAP-first, cross-platform, multi-account email client with single-digit-millisecond search.
It's so annoying when I KNOW I sent an email to someone a year ago and I put TO: Their name and it still doesn't come up.
Also: Smart folders still don't exist (e.g. a folder that automatically lists every email with a flag on it or some other condition). At least not in the "New Outlook" which we have to use at work. Apple had this back in 2007.
Same with OneNote by the way and the web version can't even search in whole notebooks, just single folders.
We use Office 365 and their hosted Exchange for email. I manage my mail in the native Mac Mail tool; my boss uses Outlook. For commercial exchanges (i.e., dialog about sales with customers), we're almost always both on copy.
SEVERAL TIMES A MONTH he asks me to find a mail for him, because Outlook search is letting him down, often on bone simple searches (e.g., for something like a specific PO number or software serial number).
I find it immediately. Outlook strikes out. How do you break search so badly?
This would not be a problem for searching of course, if the cloud-based search worked properly. But yeah... About that. :X
The "classic" outlook should do it better but it also doesn't in my experience. Though I can't use it anymore at work lately.
It's just so bad. How can they screw this up? It's not some fluff feature; it's a core feature of an email client.
PS: If you have copilot, it does a lot better at finding stuff somehow, though like every AI it can be a bit hit and miss.
For general searches, I agree. I want those to be highly deterministic. But in that case I need to know exactly what I'm looking for.
There's also the other kind of thing though. "Who was that guy that I emailed with a year or two ago about this issue with MacBook Enrolment?". Yes I can filter by company or other details if I remember those things but sometimes I don't. And that's when AI search can really shine. Or not, it can also totally make up stuff out of its ass. But at least when it comes to emails that's easily verifiable.
But I'm already sick and tired of search not returning stuff I know is in there, because I forgot to check the blessed combination of boxes.
Of course, the fact that third-party clients don't give them any telemetry, "insights", or cross-marketing opportunities like Copilot has nothing to do with it.
But they are still all search functions, and the Outlook & Teams search functions seem so terrible that you'd think they'd try to do something about it, if only to support the public view of their other search-related efforts.
Finance and insurance industries are full of Excel powerusers.
> LibreOffice is enough for the vast majority of use cases.
Often (from my job experience I can at least attest this for the finance and insurance sectors), Excel is an integrated part of many large workflows. Changing from Excel to LibreOffice would mean rewriting important parts of central business applications, so you better have a really good reason why you want to do the switch from Excel to LibreOffice.
On a funny note: as powerful as Excel is, it cannot open two files with the same name from different folders! Or at least my version can't.
This is really bad for a lot of reasons. Of course it's painfully slow, but it's also incredibly brittle and foot-gunny. Excel IS NOT a competent database engine or application engine. It makes JS and C++ look sane and safe.
Excel shouldn't be switched out for LibreOffice. It should be switched out for a proper application with a proper database. What, finance bros don't know how to navigate a database? Tough fucking luck! In the 70s, secretaries could do that. They'd better figure it out, because these existing "systems" are a disaster waiting to happen.
> Why did you choose the expensive version
Big companies have (sometimes hard negotiated) volume contracts with Microsoft, which makes Excel much cheaper to them than to, say, small companies. Thus Excel is not really expensive for them.
Concerning
> where you never know what it will cost next year in the first place
For open source software there exists a similar risk that you don't know into which direction the product will develop.
In the past, Microsoft was quite reliable about keeping things backwards compatible and continuing to sell Office for decades.
In my observation, the zigzag course Microsoft started with Windows (and is now also taking with Office), and the related departure from being insanely dependable in delivering the software companies need, is what has by now gotten big companies to at least look at possible alternatives to Microsoft products.
You know what's really interesting to me about this argument point?
It is actually the proprietary solution that is at risk of this, and we feel it daily. The next version of Microsoft's own flagship product (Windows) is nearly universally denigrated, but people are forced to upgrade.
With FOSS there's significantly less risk: if the product changes direction, you and your other company friends can just keep using old versions or, in the worst case, fork it.
I've worked on software that communicated with other software using custom Excel spreadsheets exported by yet different software, then modified by humans. Every stage of the process was spec-incompliant and relied on edge-case features, but this process oversaw transport for goods worth millions every day. I tried my very best not to reach for a Windows VM, but nothing else could handle these files.
For the vast majority of times, bikes are good enough for the majority of travels, yet there are cars everywhere.
Wow that was a very Dutch comment! I wonder whether it resonates with the Americans here :D
This sums up my entire experience of "Enterprise applications".
I really don't understand how this market is dominated by an abusive platform (MS Office) & a broken POS (LibreOffice).
It's crazy to me how often Open Source pushers have the vibe "I don't use computers, I don't know what anyone does with computers, but I'm still dismissive and superior". I almost made this comment earlier today in reply to [1] which was another "You have problems with Linux? I daily-drive Linux for years and I've never had any problems" comment and the issues were common/well-known things - fractional scaling in Gnome, HDMI, screen sharing in Slack, crashes in Google Meet, crashes in Chrome, KDE unstable, missing desktop software they use, audio too buggy, constant crashes in another program after days of work.
Nothing anyone would be surprised at, except a Linux user who - apparently - never does anything with their computer and is baffled that other people do. I decided my comment was too trolling and didn't finish it, but here you are bringing that vibe again: you don't know what Excel does or why people use it, but you're confident that you know better; prompting me to actually call it out.
The first thing I looked for when I installed LibreOffice most recently, I found a thread[2] asking: "In excel you can create a table simply by using insert->table. Is there a way to create tables in calc, as well?" and the first visible reply is them explaining that they don't want to create a table in Write, they want Excel's "insert table" feature in Calc. Why would they have to explain that again already, their question was two short lines. Presumably the people replying don't know that feature exists. There are some people being helpful and suggesting ways to get similar effects, but of course there's "Why do you want to insert a table into a Calc spreadsheet? It doesn’t seem like a feature that I’d use much" ("I don't know what the feature does but I know I don't want it"), a couple people commenting explanations of what Excel's tables are and why they are useful including a link to a video demonstration ... followed by someone saying "It already works, they are called database ranges" - no, that's different. Someone who doesn't know what they are, didn't read the explanations or watch the video and still thinks they know better. Crazy.
One of the earlier repliers comes back with "LibreOffice has a database component which is by far more powerful than any fake tables on a calculator’s grid." doubling down on "I don't understand it, didn't read the explanations, don't respect you enough to consider that you know anything about what you want or do on a computer, I still know better" and with namecalling it 'fake'. Crazy.
Note, to avoid the obvious tangent, that I'm not demanding people implement features for me in free software. As the thread ends by kerosene5 "Instead of spending so much time poo pooing the feature request, you could explore it. It really is a good feature. I just downloaded and opened my bank transactions and the very FIRST thing I did was look how to convert it to a table. . . . back to Excel."
It's not "1% of Excel users need pivot tables or something" it was the very FIRST thing they wanted, and multiple people in that thread want, and the second thing I wanted. No this is not the only feature Excel has that LibreOffice hasn't; if you want to know why people aren't using Open Source software? Try actually looking and listening instead of whatever that is you are doing.
[Edit: THIS IS YOU: https://news.ycombinator.com/item?id=43855927 ! Of course it is. Of course it is].
[1] https://news.ycombinator.com/item?id=43855663
[2] https://ask.libreoffice.org/t/creating-tables-in-calc/1433
I'm aware that there are things I'm unaware of. Perhaps I don't know how to use Excel, I'm "no true Excel user" if you will, and therefore my opinion is invalid. Like you, I'm baffled not by the technical side of things but by our reaction to it. People say "it powers 10% of the world's economy" but not "and it's a problem".
It's a problem. My point is not that LO is better, it's that we should work to remove our dependency on closed-source software for universal office work. Governments should be doing that. But no, let's just say "the other one sucks" and continue to act like depending on a single for-profit company for all our economies is a good thing.
Not sure how my other comment relates to this. Is it arrogant to have preferences?
> "Is it arrogant to have preferences?"
No, it's arrogant to not know what people are doing but still tell people that you know better what tools would be good to do what they are doing.
> "My point is not that LO is better, it's that we should work to remove our dependency on closed-source software for universal office work"
If that is what you had said, I wouldn't have replied.
Programmers don't like to hear that truth.
Most of the time, what Markdown offers is enough.
When doing my thesis I kept asking myself: "is it really that important to use a 16pt font rather than a 14pt one, or is this a made-up rule because someone said so many years ago?"
Bloatware is unwanted software, usually pre-installed or otherwise not installed by the user, that slows down your computer and takes up space.
So if a user wants Office, it is, by definition, not bloatware.
Even if we do consider it bloatware -- pre-installed, unwanted by the user, and using up system resources -- that isn't an explanation of why Office itself is slow.
all they had to do was keep up with whatever features are different in excel between now and then and implement those. leaving the menus and UX mostly alone, only improving things as time went on. update the engine to do the new features, and update the UI only enough to expose the new features and make them accessible.
but no... UX people don't have jobs if they can't redesign shit for no obvious reason. PMs don't have jobs if they can't force nonsense features no one ever asked for. Developers don't have jobs if they don't aggressively chase every new fad and tool and be in a constant state of learning (and thus unlearning).
this whole world is stupid and was a mistake.
> Despite the name, it is not a Chinese invention and it is not traditional anywhere in Asia. Its earliest known version was first documented by Hippolytus de Marsiliis in Bologna (now in Italy) in the late 15th or early 16th century, and it was widely used in Western countries before being popularized by Harry Houdini in the early 20th century.
However, I don't recommend reading those articles beyond the first paragraph and list of contents!
I actually worked on Office performance many years ago. We did a lot of very clever stuff to improve the product, even to the point of optimizing the byte ordering on disk (spinning rust) so that the initial boot would be faster.
That said, it always felt a bit like a losing battle. The goal was "make Office not get slower". It's very hard to convince app teams that their new shiny abstraction or graphics object is actually the reason everything is worse, and it's even more challenging when there's no direct impact- just a broad increase in system memory pressure.
Typically, perf isn't a few bad decisions. It's a very large number of independently reasonable decisions that add up to a bad result. If the team loses that discipline for even one moment then it's very very difficult to fix. I wonder if my former team still exists or if they've all been reassigned elsewhere.
This is precisely where the adage "premature optimization is the root of all evil" falls apart. You really do need everyone to care about performance to an obsessive, unreasonable degree to keep the entire, massive system performant. Companies with good engineering leadership understand this. The thousand cuts can come from language, libraries, feature creep, and pure ignorance or carelessness.
> people pull the ”Knuth said premature optimization is the root of all evil” card.
Incredible how many people misuse quotes and end up undermining the whole point of the quote. So, for everyone who doesn't understand, here's the longer quote:
There is no doubt that the holy grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth said: "Get a profiler and make sure that you're optimizing the right thing." It is incredible how this became "don't optimize".
https://dl.acm.org/doi/10.1145/356635.356640
(alt) https://sci-hub.se/10.1145/356635.356640
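Knuth's workflow in miniature, profile first and only then optimize, can be sketched in a few lines of Python with the standard-library profiler (the function names here are invented for the demo):

```python
import cProfile
import io
import pstats

def slow_part():
    # Deliberately heavy: this is where the time actually goes.
    return sum(i * i for i in range(200_000))

def fast_part():
    return 42

def program():
    slow_part()
    fast_part()

# Step 1: measure. runcall profiles a single invocation of program().
profiler = cProfile.Profile()
profiler.runcall(program)

# Step 2: read the report, sorted by cumulative time, and only then
# decide what is worth optimizing.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()

# The hot spot shows up by name in the report; that, not a hunch,
# is what tells you where the critical 3% lives.
print("slow_part" in report)  # True
```

Micro-optimizing `fast_part` here would be exactly the "small efficiencies" Knuth says to forget about; the profiler points you at `slow_part` instead.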
It seems like things like this are no longer possible for Microsoft. They keep producing clunky tools which, although functional, always come with a horribly frustrating UX (as usual).
I've been working within the Microsoft tech stack for around 25 years now (mostly SQL Server). I used to be a huge fan of their products because they were one of the best companies when it came to developer experience (developers! developers!). Unfortunately, that was a long time ago. Things are very different now. Of all the things I once liked, only SQL Server really remains (ironically, it's a technology they acquired - it used to be Sybase). I still think C#, F#, and PowerShell are great, but I actively discourage people from using most of their so-called "products" because the quality is just appallingly low.
Even something like Visual Studio is better replaced with Rider + LINQPad. Their GitHub repositories are full of open issues that have been dragging on for years. There's virtually nothing left of the old Microsoft that I still respect or admire.
That said, I have to admit that most other corporations aren't any better - there's a general trend of maximizing profit while offering the lowest quality that customers are still willing to tolerate. If I were starting IT studies today, I would go 100% down the open-source path.
The funniest part? I was debugging an application .exe that wouldn't start. The reason? AVG antivirus had UPLOADED THE EXE to their server for EXAMINATION, an EXE signed with a $600 Extended Validation certificate. There was a message telling the user TO WAIT A FEW HOURS until they had studied it and the exe could be unblocked from launching. All of this was completely normal to the Windows user in question. What a dystopian thing they are used to.
FWIW, I haven't run antivirus software on Windows (other than the included Defender, but that's not something I think about) in literally decades.
Are you saying Microsoft shouldn't care about making a secure system and can just outsource the subject to a third-party company which will upload the .exe to their server and tell the user to wait a few hours before it can be launched?
Pages is also pretty nice. It's definitely enough for home usage, and if my colleagues could read Pages files natively I would find it completely sufficient for professional use. I find it does layout much better than MS Office, which honestly is a much bigger concern for home users: professional users will just switch to professional layout tools when they need them, but Sam doesn't need that cost/complexity for some bake sale fliers.
Numbers can also be nicer for home use cases, but it's a bit weird if you're used to Excel, and unlike Pages or Keynote it quickly hits upper limits on complexity. I would never use Numbers in a professional setting.
At this point, I've started using IDE extensions when I just need to view/filter
The alternative to the full office suite with decades of backwards compatibility and hundreds of features, is the quick, free version Microsoft made to fight off Google Docs.
And yet, weirdly, macOS comes with absolutely no image editor of any kind. There's no equivalent of MS Paint. It's infuriating.
quicktime pro was like that. it was insanely powerful and things were all hidden behind just a few menu items and a few little added UI elements here and there. quicktime pro was amazing and I miss it a bit.
I'm done with it. I've switched to Ubuntu and I haven't looked back. I only boot up my Windows installation when I need to do game development on Unreal or use an incompatible program. But for now, MacOS and Linux are covering everything.
I used to be a big gamer but I've basically given up on playing games that don't work on Linux. The selection of games is steadily growing and some games work at launch (like Oblivion Remastered).
I know there's a lot of animosity for GNOME, but it's the best Linux desktop in my opinion. In terms of polish it's definitely the closest to MacOS.
Application installs are still an absolute pain, but it's gotten better. At the very least I can now go through the Ubuntu App Center to get the most common apps. There's the occasional app that doesn't work (like VLC) and then I'll have to look into Snap or Flatpak or whatever other variation of app packaging Linux devs decide to unleash on the masses... but then it works and I don't think about it again.
One last gripe for me is the lack of HDR support in Ubuntu. I can't use my LG C2 with it. But I've switched to using two Dell monitors with DisplayPort and now it doesn't matter... and I use the LG C2 with something else.
For the average user this experience sucks. But for me, I'm okay putting up with this pain if it means never using Windows again.
Not to defend Microsoft, as I've firmly believed them to be a shitty entity for a loooong time now, but as a counter-example: after many years on Windows 10/11 I don't have any of these issues, and I've only run a debloater maybe a few times in the last 5 years.
I don't know wtf people are installing on their PCs to make them so shitty like this, but I've not encountered these things across dozens of personal or employer devices in recent times. Like, not even once. Maybe you're downloading beta drivers? Maybe your devices are from cheapo brands with poorly made chipsets? Maybe you have bloatware installed by your manufacturer that you haven't uninstalled? At this stage, it's hard to believe this is not some kind of user error, be it a lack of research before acquiring a device or a lack of knowledge of how to navigate it.
Edit: to put into perspective a bit more, I use my main laptop - a Lenovo Legion laptop - for gaming (many acquired through the "dark waters" even), full-stack software development, AI video up-scaling, photo-editing, running a media-server (Jellyfin), torrenting, office programs, running virtual machines, running WSL2 with docker, running many various open-source programs, producing music with Ableton and a plethora of third-party VSTs, etc.
No issues.
The problem is the design is just bad. Lots of things are just sucky and they're meant to be that way. Search is ass, Explorer is half-decent only in Windows 11. There are way more than 3 settings panels, and yes, they all look different. You still have to edit the registry for some random tweaks. Apps put their files god knows where. Every app updates independently. You still have to go online and download random .exe and .msi files to install things. If you get errors, the message is typically worthless. The system tray is a fucking mess. IIS sucks. PowerShell is okay, but cmd is still around and yes, sometimes you have to use it. And, the cherry on top, everything is slowwwww. Especially the file system. You don't really notice it until you have a version-controlled code base, but NTFS has to be, like, 1000x slower than competing Linux filesystems.
If your needs are smaller you can do a lot with just AbiWord and Gnumeric. They launch instantly.
GNOME Evolution is much nicer to use than Outlook, in my experience.
Yes, it's slow and bloated. But it's comparably faster and leaner, and it doesn't use undocumented APIs to take resources away from everything else running on the same computer and make every other thing unusable.
And yeah, calc lacks features when compared to excel. So, avoid spreadsheets for complex problems.
In terms of word processing (which is perhaps an archaic term by now) I would ask people to look at what Visual Studio Code is. A rather minimal, skeletal, code editing platform that derives nearly all its value from the extensions people make for it. There are lots and lots of editors and IDEs. But extremely few of them serve as platforms. As the infrastructural basis for creating applications.
Yes, there are IDEs that are possibly marginally better at editing, say, Java or Go code. But VSC is pretty good at almost every language that is in common use today. And it manages to compete pretty well with more specialized solutions. It does this because an editor that does 90% in all the languages you use is far more valuable than switching between two editors that perhaps achieve 95%.
Word, and its open source counterparts, are antiquated and obsolete. I don't think the field can be advanced by building word processors that are just iterations of 30 year old ideas. Yes, you can probably extend them, but people don't. You have to understand what it is that makes some pieces of software work as platforms (like VSC), and why other pieces of software do not inspire people to build on them.
I think Microsoft should reinvent Word as a platform that is designed to be extended and that is easy to extend. I would then release the base software platform as open source. Much of the functionality that resides in Word today I would move to paid extensions - including useful bundles of extensions. This way Microsoft would retain its revenue stream, and I wouldn't have to deal with all of the crud Word contains.
I would also create a marketplace for both paid and free (open source) extensions. Which in turn would make the product more valuable (even though the base product is free). Because other companies and people invest in it and have a shared interest in its health beyond mere existence.
Of course, not only Microsoft can do this. Anyone could create an editing platform. But it would have to be someone with a bit of money who can spend perhaps 5-6 years supporting the effort to see if it takes off. Maybe it does, maybe it doesn't.
One reason I see this as perhaps the only way forward for this class of application is that I'm doing some work for a company that manufactures physical products. A would-be advanced user of office automation tools. This kind of business has a very complex document structure where there's a vast hierarchy of thousands of documents that goes into every project and even spans projects. Doing this with Word, Sharepoint and whatnot is complicated, fragile and requires a lot of work. It doesn't work very well. It also means you have to memorize a lot of procedures. This could have benefitted from very narrow, domain specific tooling. Including LLMs that allow you to ask questions with context derived from sources other than the Word documents. Yes, Microsoft is trying to stuff this into their products, but it isn't actually all that useful because it is generic. It is never going to support what our customer needs.
I don't think Office, LibreOffice etc are the right kind of tools. They are children of the 1990s. We have better starting points today and better technology. It is time to re-think this.
Developers at Microsoft are obviously not rewarded for quality. You have to assume that this is because managers and leaders in Microsoft are not rewarded for quality. You would think that a company that has deep pockets would be in a great position to do more ground-up re-implementations. And to do so with quality, performance and correctness as the main focus.
For instance, the office suite. The last 20 or so years have taught me that an office suite can be a lot simpler, and it will actually work better if it is simpler. Just in the last 5 years I have observed three different companies where people routinely do most of their writing and editing in other tools and then paste what they have written into Word, because it is far better than creating the content in Word itself. At my current consulting gig a lot of people write things in Google Docs and then import them into Word documents to produce the official versions.
Word is a mess. It is packed with too many features you will never use. Those have a cost because they take up screen space, and make the features you do care about harder to find and use. Word constantly distracts you because it misbehaves and you have to somehow try to deal with its quirks and interruptions. It is slow, complex and resource intensive.
Word is objectively not a very good piece of software. I have never met anyone who loves it. Who feels that Word makes them more productive than any alternatives. It is software you have to cope with. Software that must be tolerated. Or not.
I do not understand why Microsoft, with its deep pockets, has made no attempt to reinvent, for instance, Word, to create a word processor from scratch. With focus on quality, correctness, performance, usability, and perhaps most importantly: easy extensibility.
They could draw some inspiration from Visual Studio Code. There are many things wrong with VS Code, but they got a few things right. The most important is that, unlike other IDEs, it is essentially just a skeletal platform that derives its value from extensions. Third-party extensions. This means VS Code can be adapted to fit your individual needs, or more importantly, the needs of segments of users. It means that people who want to make tools can build on VS Code rather than having to do a lot of work orthogonal to their goal of creating tooling.
Yes, you can probably wrangle special functionality into Word. But nobody does. Not at any meaningful scale.
Word is rooted in a world that existed before many of you were born. A world that is long gone. There has been decades of technology evolution. If you were to develop a word processor today, you would be starting from a point that is completely different.
And let's not get started on Azure. I have to deal with it about every two years. And every two years I try to approach it with an open mind and with optimism. Surely they have fixed things now? I am always disappointed. Things look slick on the surface, but then you start to use them, and you are confronted with systems that are slow, slow, slow, ugly and buggy. AWS is certainly not the belle of the ball either. Its constant complexity, and the awkwardness and overall badness of its tooling, make me limit how dependent I let myself become on AWS services.
But at least AWS isn't as bad as Azure.
I don't get why Microsoft can't seem to invest in quality. Yes, I get all the arguments that it just needs to be good enough for their customers to keep using them, but surely, at some point it has to hurt your pride.
If I were in Nadella's shoes I would invest heavily in quality. In stripping things down. In starting over. In making sure that I understand the cultural change required to make products that are, objectively speaking, good. If not great. And perhaps that requires getting rid of a lot of long-time leaders who just can't change gears. Perhaps it requires creating teams that are isolated to a greater degree from other teams so they don't drag each other down.
I work at Microsoft and you're absolutely correct as far as I've observed. Rewards are for speed and doing things (usually hyped-based) that advance the goals of leadership... these goals are rarely if ever about "let's make sure we nail the basics first". I think it comes down to serving shareholders vs. serving real customers.
Why is this always the go-to? The Windows 11 start menu and task bar are exactly that, from-scratch re-implementations of what existed before, and they are garbage. There is a lot of institutional knowledge in that old code, and pretending it holds little to no value gives us half-hearted replacements which never quite ascend to the heights of what they were supposed to replace.
Sure, there are some exceptions where the concept of "what the thing is" needed to change and a new product has to re-imagine the solution (VS -> VSCode). However, I feel that we, the software development community, put hope in this being true way more often than it actually is in reality.
As for knowledge: yes, it is more valuable than the software. But that does not imply the code is the only place where knowledge is stored. In its most useful form it is stored in people. Which is why you should revisit, and rewrite, important code often enough to ensure the knowledge is passed on.
However, don't forget to make room for new knowledge and new ideas. That has hardly happened to Office suites for 30 years. They just tend to become "more".
Hot take: your ""debloater"" screwed up your system.
I've had problems with Windows, but none of the ones you've described.
> For the average user this experience sucks. But for me, I'm okay.
I guess this describes my Windows experience. I _know_ some people have problems. I don't, because I guess either I got used to it or I know how to avoid it.
> never gotten bluetooth to work on windows
I seriously doubt this. seriously. if true, it is a user problem, because i've never had an issue, nor has anyone I know.
> apps randomly crash
true of any operating system, also that's not what "randomly" means. you mean "unexpectedly" I think.
> settings pages crash
never happened to me, ever. if it has, it was infrequent enough that i have no memory of it, and i've never heard this complaint before from anyone.
> snipping tool only works half the time
again, I use that thing continuously on Windows and it always works.
> xbox ads during gameplay
what game? what [everything]? I've never seen this and I play games on windows all the dang time.
it very much sounds like you've cherry picked experiences that others have had, piled them all here, and declared that they happen to you. Maybe they have, I don't know, but if this has all happened to you in the last 4 years, you are the only person on the planet who has experienced this. Not even in the depths of Microsoft's online communities and the Microsoft Discord do I read of a single person with all of these problems.
I don't know what your problems are underneath, but they're not Microsoft. If they were, I would have those problems, and I don't. Some of these were common 10 years ago when Windows 10 came out, but only for a month or two. Certainly not in the past 4 years. not unless you're intentionally avoiding upgrades or something.
Here are Windows Forum threads talking about each of the problems I've mentioned, with thousands of people saying "I have the same question":
https://answers.microsoft.com/en-us/windows/forum/all/unable...
https://answers.microsoft.com/en-us/windows/forum/all/window...
https://answers.microsoft.com/en-us/windows/forum/all/window...
https://answers.microsoft.com/en-us/windows/forum/all/snippi...
https://answers.microsoft.com/en-us/xbox/forum/all/unwanted-...
Others would show me how the computer would act weird after unplugging PCI cards while the computer was running, and blame Microsoft. “See!? SEE?!” Every single WTF moment I had in desktop support with issues like this was user error.
Maybe yours aren’t. Maybe the Microsoft Answers forum is filled with exceptionally smart people who all know exactly how to use a computer, never ask stupid questions, and never give wrong answers, but I think we both know that isn’t really true.
I have about a 50% success rate with Bluetooth devices pairing and reconnecting properly on Windows, so at least I’m doing better than OP.
The Bluetooth software stack on the whole is a disaster; the only platform where I've had a trouble-free experience is macOS.
This started happening to me too like six months ago. I figured, "yet again they broke something with an update, but it'll probably fix itself eventually."
Nope!
I'd switch to some 3rd party tool, but my employer doesn't allow any since we all got upgraded to Windows 11. Why don't they allow it anymore? Because of the snipping tool (Snip & Sketch).
At least they still let me install Ditto (I never liked how the Windows clipboard history feature works... No, I'll paste when I want to paste—not when I select the item!)
Multiply that by tens (or even hundreds) of teams and your app startup (either on desktop or mobile) is now a bloated mess. Happened to Office, Facebook iOS and countless others.
One solution is to treat startup cycles as a resource similar to e.g. size or backend servers.
The only way to achieve performance metrics in a large org IMO.
Google Search is still fast because if you degrade p99 latency an SRE will roll back your change. MacBooks still have good battery life because Apple has an army of QA engineers, and if they see a bump on their ammeters that macOS release doesn't go ahead.
Everything else (especially talking about "engineers these days just don't know how to write efficient code") is noise. In big tech projects you get the requirements your org encodes in its metrics and processes. You don't get the others. It's as simple as that.
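The "startup cycles as a resource" idea is mechanically simple to encode in process; here is a minimal sketch of such a regression gate in Python (the budget, tolerance, and function names are all hypothetical, not any real org's tooling):

```python
# Hypothetical CI gate: fail the build if cold-start time regresses
# past an org-wide budget, the same way size or server-cost budgets
# are enforced as metrics rather than left to individual teams.

STARTUP_BUDGET_MS = 800       # assumed absolute budget for cold start
REGRESSION_TOLERANCE = 0.05   # allow 5% measurement noise between runs


def check_startup_budget(baseline_ms: float, measured_ms: float) -> tuple[bool, str]:
    """Return (ok, message) for a measured cold-start time vs. a baseline."""
    if measured_ms > STARTUP_BUDGET_MS:
        return False, f"over budget: {measured_ms:.0f}ms > {STARTUP_BUDGET_MS}ms"
    if measured_ms > baseline_ms * (1 + REGRESSION_TOLERANCE):
        return False, f"regression: {measured_ms:.0f}ms vs baseline {baseline_ms:.0f}ms"
    return True, "ok"


# A change that pushes startup from 600ms to 700ms is within the absolute
# budget but outside the 5% tolerance, so it gets rejected automatically.
ok, msg = check_startup_budget(baseline_ms=600, measured_ms=700)
print(ok, msg)
```

The point is not the arithmetic but that the rollback is mechanical: no one has to argue about whether a feature is "worth" the cycles, any more than an SRE argues about p99.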
Never worked at MS but it's obvious to me that the reason Windows is shit is that the things that would make it good simply aren't objectives for MS.
Then Microsoft made the brave decision that testers were simply unnecessary. So they laid off all SDETs, then decided that SDEs should be in charge of the tests themselves.
Which effectively meant there was no test coverage of Windows at all, as the majority of SDEs had not interacted with the test system prior to that point. Many, if not most, of them did not know how to run even a single test, let alone interpret its results.
This is what Microsoft management wanted, so this is what they got. I would not expect improvement, only slow degradation as Windows becomes Bing Desktop, featuring Office and Copilot (Powered By Azure™).
Basically making Windows a good desktop OS is not in any meaningful way the "purpose" of that part of MS. The "purpose" of any team of 20+ SWEs _is_ the set of objectives they measure and act upon. That's the only way you can actually predict the outcomes of its work.
And the corollary is that you can usually quite clearly look at the output of such an org and infer what its "purpose" is, when defined as such.
[1] https://en.m.wikipedia.org/wiki/The_purpose_of_a_system_is_w...
[2] https://www.astralcodexten.com/p/highlights-from-the-comment...
I don't have a good guess for the average age of software developers at Microsoft, but claude.ai guesses the average "around 33-38 years" and the median "around 35-36 years old".
Also, the Office codebase is significantly larger than Windows (and has been for a while), that was surprising to me.
Microsoft needs to update the spec for all new personal computers to include mandatory pre-load hardware. This would have a secondary CPU, RAM and storage used for pre-loading licensed Office products before your laptop boots. AI would analyse your usage patterns and fire up Office for you before you even get to work in the morning.
Perhaps this could even allow you to have Office on hand, ready to use on its own hardware module, while you develop Linux applications on your main CPU.
Further down the line, someone sees an opportunity to provide access to compatible modules in the cloud, allowing reuse of older, incompatible hardware. But there would be the danger that the service (without the support of MS) may go bust, leaving those users without their mandatory instant access to licensed Office products, forcing upgrades to even newer hardware.
And using it now and then it feels like that too. Windows 10 Mail app had integration with system calendar, you would get itsycal built into the OS. Windows 11 removed that and made the OS Mail app spam infested shit, and they expect me to pay a subscription for something that comes bundled with the OS I paid for.
The Linux desktop is getting better, but I still wouldn't daily drive it, so macOS it is until the Linux desktop gets to a more reliable state. I wouldn't be shocked if it gets there: I believe Valve made relatively low investments and got a lot out of them, GPU vendors have an incentive to support it for compute workloads, and gaming on Linux is becoming a thing. Also, for office stuff, EU-US hostility could force the EU to look for alternative software providers and move away from Microsoft.
Actually thinking about this just made me donate some $ to Gnome project.
As an example, the power button can no longer be configured to power off the machine, because this is "too destructive". I'm not talking about defaults -- they removed the ability for me to make this choice for myself. Not even Microsoft has done that.
https://bugzilla.gnome.org/show_bug.cgi?id=755953
On my machine, the power button is recessed and requires quite a bit of force to press. It is impossible to press accidentally, but the GNOME developers apparently know best.
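For what it's worth, the behaviour can still be reclaimed one layer down, at the systemd level rather than in GNOME. A sketch, assuming a systemd-based distro (this is the logind.conf editing route):

```ini
# /etc/systemd/logind.conf -- let systemd handle the power key itself,
# independent of whatever the desktop environment exposes.
[Login]
HandlePowerKey=poweroff
```

GNOME usually has to be told not to intercept the key first; on the versions I've seen, `gsettings set org.gnome.settings-daemon.plugins.power power-button-action 'nothing'` does that, though the available values have shifted across releases, so check your schema.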
I find them highly useful on macOS, but there I lack the configurability I have on Windows.
There is just no one working on the technology. TingPing, an Igalia employee and GNOME contributor, was working on a new D-Bus protocol for it, but the work stopped. There is a PR up on the freedesktop xdg-specs repository.
The first versions of Gnome 3 did indeed have a system tray for backwards compatibility, and it was hidden out of the way until you needed it. Eventually it was scrapped once enough software was updated to not rely on it.
If somebody insists on having a messy UI, they can use literally any other DE available for Linux.
I wouldn't hesitate to put a 'regular' computer user in front of Xfce, it strikes a nice combination of simple and discoverable with very few annoyances. It's also where I go when I want to use some many-windowed application that doesn't fit into tiling.
While it may seem overblown, it's absolutely RIDICULOUS how fast it flies on contemporary systems, and even older ones.
One can also omit most of their other apps, with the exception of KWin (which draws the desktop and its window decorations), Konqueror (the file manager), and the things managing the menu bar(s) and app launcher.
Use modern apps for the rest, like anywhere else.
And have them styled and themed however you like, with a few mouse clicks.
If you don't have the time or inclination to tinker with things like tiling WMs - and more power to you if you do, but I don't - KDE is the best there is.
Everyone knows Dolphin is by far the best file manager, but not a lot of people know that Kate is fantastic. Konsole is really good too. The new System Monitor basically replaces a ton of programs. Spectacle is a great and snappy screenshot utility. Filelight is so useful.
There are definitely a few misses, though. KMail in particular. But over time the applications actually improve, both in performance and features. This is in contrast to GNOME, where apps like Nautilus have been getting worse for a long time. And in contrast to Windows.
Like what? Running htop or btop++ in Konsole(or whichever terminal emulator you prefer), as I do?¹ :)
Just opened that thing right now, and it gives nothing to me.
Shrug?
¹ On one of 9 virtual desktops, arranged 3x3, usually upper left.
https://postimg.cc/K35Qv6Pg < btop++
https://postimg.cc/0zGttJMC < htop
https://postimg.cc/pyTLRfPp < Itsatrap!
But the real icing is that it doesn't just work for processes, applications, CPU usage and whatnot. It works for any sensor, including temperature and fans. And, of course, it's all customizable.
That seems to be common for all graphical monitoring tools, no matter if GNOME, Xfce, or even old KDE (Trinity).
That aside, htop and especially btop++ are heavily customizable, too.
Whatever. To each his own. It's a matter of taste.
I have a tiny tiny bone to pick with whoever decided to enable bouncing app launch icons on my mouse that look terrible.
But like you said, it was just a matter of finding the right config setting to turn them off! I have been quite happy with KDE all things considered.
> no-longer
"Don't think that's in the OED"
> This is rather unfriendly and considerably more effort than editing logind.conf
"I don't like the tone in this sentence."
What a genuinely horrible person
>> gsd
> gnome-settings-daemon.
>> no-longer
> Don't think that's in the OED
>> This is rather unfriendly and considerably more effort than editing logind.conf
> I don't like the tone in this sentence.
I am dumbstruck that someone can be so utterly full of themselves that they can smugly correct someone’s grammar, and an obvious acronym, only to turn around and clutch their pearls that their victim said mean things in the nicest way possible about their software.
I knew there was a reason I haven’t liked GNOME for years. XFCE is the way.
I might eventually switch back to Xfce, but for now I just need a DE that works and gets out of my way so I can write code, and for all its faults GNOME still gets the job done.
It needs corporate (or government!) drive behind it or that won't change. I'm not talking about Redhat either who appears to just be a holding pen for the above.
That might be enough to prompt a change in direction, I guess time will tell.
> Review of attachment 312719 [details] [review]:
>> gsd
> gnome-settings-daemon.
>> no-longer
> Don't think that's in the OED
>> gsd no-longer facilitates users overriding power key actions
> And include references about when this happened.
>> This is rather unfriendly and considerably more effort than editing logind.conf
> I don't like the tone in this sentence.
So helpful.
But from mid-Plasma 5 onward, it's incredibly stable and consistent in design. At this point, more consistent than GNOME.
My go-to comparison is power tools: there's a consumer line that's underpowered but pretty easy for anyone to use, and then there's the professional line for people who know how to handle these tools properly: more powerful, more versatile, and user serviceable.
Smartphones take this to the extreme: on both Android and iOS every user is illiterate, because the OS is deliberately opaque to the user.
Seriously?
Are they removing ways to access the terminal or you can still at least do shutdown -h now?
I'm genuinely interested what Linux is missing for you? I've been daily driving it for years and do all my work and gaming on it. Is it specific software or?
- fractional scaling did not work in Gnome with Wayland for X11 Apps
- I still cannot use my LG C4 as a monitor in full capacity because AMD on Linux does not support HDMI 2.1
- Screen sharing was very buggy - in Slack especially - it would constantly crash the slack app during calls, ditto for camera, but even in Google meet and Chrome I've had desktop crashes
- When I switched to KDE/Plasma 5 to get fractional scaling it was extremely unstable
- Right now I upgraded my GPU to 9070XT - I'm still not sure if that would work on Linux yet because of driver support delay
- Guitar Amp simulator software I use does not support Linux, neither does Ableton (which supposedly can run on proton but with many glitches)
- The audio DAW situation was way too complicated and buggy
- I spent days to get the distro functional and usable with Ardour and it would still crash constantly - I just wanted to run some amp sims :(
It's just the little things and rough edges, but for example the fractional scaling stuff already improved because more apps that I use added Wayland support. And the emulation is getting better, with more users I could see larger DAWs supporting Linux as well. Not sure about the audio progress - JACK was a complete mess.
You can install AMDs driver from their repo directly, it works just fine (using it every day).
> I still cannot use my LG C4 as a monitor in full capacity because AMD on Linux does not support HDMI 2.1
That will never be possible. To prevent pirates from breaking it (lol), the HDMI Forum has decided to keep HDMI 2.1 secret. No open-source implementation of HDMI 2.1 can exist.
That said, AMD's driver repo includes both the open source drivers and some proprietary versions of the driver, maybe that'll work for you.
Another option would be using a displayport output and a DP to HDMI converter, as e.g. Intel is using for their GPUs.
- HDMI 2.1: The HDMI Forum blocked it, as they don't want the details of HDMI 2.1 publically available. If you can, use DisplayPort, which is an actual open standard, and is better anyway. Nvidia works because they implemented it in closed-source firmware instead. https://www.phoronix.com/news/HDMI-2.1-OSS-Rejected
Strangely enough, Plasma was able to handle this regardless (I guess it was misreporting the resolution to X11 apps or something like that to make it work?); it was a GNOME/Wayland thing.
DisplayPort isn't an option - the TV only has HDMI in and converters suck (they crash constantly, even the expensive ones)
* The free trial is enforced as heavily as WinRAR's, and it's pretty cheap (~$60) to buy a licence if the nag screen makes you feel bad enough
KDE Plasma 6 made major improvements and has excellent fractional scaling, the best I've seen in a Linux desktop environment and comparable to scaling in Windows 10-11. I encourage you to give it a try.
and now I’m constantly getting these complaints “I can’t get screen capture to work under Wayland… I switched from lightdm to sddm and I can’t work out how to switch back… I accidentally started an i3 session and I can’t work out how to log out of it.”
It makes me kind of miss Windows, in a way. It is good he’s learning so much. But the downside is Linux gives him lots more ways to break things and then ask me to fix them for him. And a lot of this stuff I then have to learn myself before I can fix it, because most of my Linux experience is with using it as a server OS, where desktop environments aren’t even installed
It's harder as a parent to know that you're capable of solving their problem and still say no, but by age 12 that's pretty much your primary job: to find more and more things that they can start doing for themselves, express your confidence in them, and let them figure out how to adult bit by bit. Breaking a Linux install and fixing it again is among the lowest stakes ways that dynamic will play out from here on.
Well, there’s your problem ;-)
This is great, though, really. I broke our computer so many times growing up, I couldn’t possibly count. I don’t think I ever lost anything of import, other than some savegames of mine. I keep telling people who ask, “how do I learn Linux?” that they need to use it, tinker with it, break it, and fix it, ideally without anything other than man pages and distro docs. It is a shockingly effective way to learn how things work.
It isn't that he could do that, but what else to give up?
Examples (I've been on desktop Linux since 2009): shutdown actually reboots except for a few months with some lucky combination of kernel and nvidia driver. The brightness control keys didn't work for at least half of the years. They currently work. All of that has workarounds but I understand that some people legitimately fold and go using another OS.
I started with Linux installing it from floppy disks in about 1996.
In 1995, I was back on Windows 95 within a week because I needed to get something done.
In 2000, I was back on Windows 2000 within a week because I needed to get something done.
In 2005, I was back on Windows XP within a week because I needed to get something done.
In 2012, I was back on Windows 7 within a week because I needed to get something done.
In 2015, I was back on macOS within a week because I needed to get something done.
In 2020, I worked out I'm wasting my time on this.
I watch my colleagues and friends struggling with it. Lots of small papercuts. Lots of weirdness. Lots of regressions. Plus many years of server-side experience says to me "I should probably just use FreeBSD" in that space.
It just worked in Linux. I don't get where this comes from, because every time I hit a problem in Linux, there's a solution.
In windows, you get a vague hex error code that leads you to a support page where the error could be caused by one of a dozen reasons.
And on top of that, MS is constantly hostile to any user who just wants a basic OS to use their computer with.
Secondly, there isn't always a solution in Linux. I've got one now where something is utterly broken and it's 5 layers of maintainers down and no one gives a shit.
I want to upgrade in order to retain that local account.
Then use massgrave hwid activator.
Steam getting proton was a godsend, all those years of games became playable so now I have a huge back catalog.
Not a bad idea. This is exactly what I do on my daily driver.
In Windows, I can just shut the lid and not worry about it, because it will sleep first and eventually hibernate. Ubuntu would just sleep until the battery dies.
I found instructions for enabling hibernate in Ubuntu, and they did make it show up in the power menu, but it didn't seem to work. (Which is presumably why it was hidden to begin with.)
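For reference, the Windows behaviour described above (sleep first, hibernate after a while) maps onto systemd's suspend-then-hibernate mode. A sketch of the relevant configuration, assuming a reasonably recent systemd and a swap area large enough to hold RAM (without working swap the hibernate step will still fail, which may be exactly the problem Ubuntu's instructions ran into):

```ini
# /etc/systemd/sleep.conf -- suspend immediately, then wake briefly
# and write the hibernation image to swap after a delay.
[Sleep]
HibernateDelaySec=2h

# /etc/systemd/logind.conf -- make closing the lid use that mode
# instead of plain suspend.
[Login]
HandleLidSwitch=suspend-then-hibernate
```

Both files need a restart of systemd-logind (or a reboot) to take effect.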
I also tried NixOS, but I couldn't even get it to boot the installer.
It's really funny because this is one of the things I absolutely do not like about Windows. I absolutely hate it that I put the computer to sleep and when I come back the next day it has hibernated. That said, I agree that hibernation has always been finicky on Linux, however, I would say Ubuntu is not the best distro for this use case. I have been using Fedora and they even publish official guides for it[0] that's how seriously they take it.
0: https://fedoramagazine.org/update-on-hibernation-in-fedora-w...
I do this for arch Linux on my framework and it's fine. Startup time is under ten seconds, essentially zero battery drain, right back in your session with all apps/docs open.
Hibernate is definitely better but still finicky even on Mac/Windows, machines can and do fry themselves, or require a hard reset if you unplug a device at the wrong time. Or unexpectedly continue draining the battery.
It's a terrible, funky, poorly documented, exception filled world down in the low power states for hardware.
It won't come back up ok, as recently as 6 weeks ago.
Or the rampant reports of things like this: https://discussions.apple.com/thread/255642823?sortBy=rank
But too many companies have discovered that a docile "user" who's fed constant dopamine hits and has no actionable way to use a device other than open their wallet and fork over cash to watch more cats dance, or shop on more stores is exactly what they want.
Why don't you just click here and pay for Onedrive. Or just click there and accept Apple's new ridiculous terms.
If you just want to watch cats dance... you do you. I'll just keep doing me over here.
I once tried to set up a GPU passthrough setup to a Windows VM to play WoW but there were a ton of report that Blizzard just banned players for using QEMU VMs because they were marked as cheaters.
The Primeagen recently said that in a video commenting on PewDiePie's "I switched to Linux" video. While he's apparently a good programmer (he worked at Netflix), he uses Vim, so I don't trust him. Edit: the part about Vim is an edgy joke.
I work in AAA gamedev and have deployed kernel level anti-cheats before, and I’m aware how unpopular they are; so, sorry for that… you would also accuse us of “bad programming” if there was an overabundance of cheaters that went undetected and/or uncorrected.
The answer is unfortunately complicated. The kernel-level anti-cheats themselves aren't necessarily poorly written, but what they are trying to do is poorly defined, so there's a temptation to put most of the logic into userland code and then share information with the kernel component, but then it's dangerous for the same reason that CrowdStrike was.
Not doing endpoint detection is also a problem because some amount of client trust is necessary for a good experience with low input latency. You get about 8ms in most cases to make a decision about what you will display to the user, that’s not enough time to round-trip to a server about if what is happening is ok or not. Movement in particular will feel extremely sluggish.
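The "about 8ms" figure above falls out of frame timing; a back-of-the-envelope check (the refresh rates and round-trip time here are illustrative, not from any particular game):

```python
# Frame budget vs. network round trip: why per-frame decisions
# can't wait on a server.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to produce one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

print(frame_budget_ms(120))  # roughly 8.3 ms per frame at 120 Hz
print(frame_budget_ms(60))   # roughly 16.7 ms per frame at 60 Hz

# Even an optimistic 30 ms round trip to the server spans several
# frames, so movement validated purely server-side arrives late.
typical_rtt_ms = 30.0
frames_spent_waiting = typical_rtt_ms / frame_budget_ms(120)
print(frames_spent_waiting)  # a few frames of input lag
```

This is why some client-side trust (and hence client-side detection) is hard to avoid: the client must commit to what it draws long before the server can confirm it.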
So, it's a combination of kernel-level code being harder in general (malloc, file access etc. are things the kernel gives you in userland, after all), the problem space being relatively undefined (find errant software packages and memory manipulation), not being able to break out of the kernel-level environment for an easier programming and iteration experience, and trying not to affect performance.
Lots of people think they can do it better, I’m happy to hire anyone who actually thinks they have a clue, it’s a really hard problem honestly and the whole gamedev industry is itching for something better: even us gamedevs don’t like kernel level anti-cheat, it makes debugging harder for ourselves too and introduces hard to reproduce bugs.
PS; sorry if I’m not being eloquent, I am on vacation and typing from my phone.
However, what if Primeagen meant that HAVING to IMPLEMENT kernel level anti cheat is a symptom of bad programming, and not the anti cheat per se? (that is, with good enough programming, it could somehow be avoided).
And kudos to you. I appreciate people in game dev, they can get a lot done in short time. I haven't played mmo fps since battlefield 3, and it wasn't that bad then. But I've heard that without kernel level they would be unplayable.
Thank you for your time!
Long term I'm kinda hopeful that this is something that will be mitigated through AI-based approaches working to detect the resulting patterns rather than trying to detect the cheat code itself. But this requires sufficiently advanced models running very fast locally, and we're still far from that.
I don't understand, how could crowdstrike have avoided their issues by putting more code in the kernel? Or am I misreading your statement?
If they had not tried to parse data inside the kernel it would not have been an issue.
The reason is the round trip time mainly.
Server corrections will feel like “floaty” or “banding” behaviour, we used to do that and people get upset because it “feels” wrong.
About 40% of games that use anti-cheat currently work on Linux. Getting banned for using Wine is very rare, because anti-cheats that don't support Linux will complain about not running and prevent you from even joining a game to get banned.
I've been through enough KDE, QT, and Gnome API changes. It's just not where I want to burn my limited time.
My first GDI programs still compile.
OLE? Sure, let every application talk to the DLL components of every other application! What could go wrong? Data wants to be free right? Spread the love.
Making the desktop into a live webpage? And of course let any webpage happily load whatever binaries it wants from the internet. Super handy stuff. For some people more handy than others (really how this did not cause a mega Wannacry-event back in the day I don't understand)
There is a reason this stuff is legacy. The only reason it still compiles is that some companies spent millions on custom development 20 years ago, and nobody remembers how it works. Not because you should still be using it :)
> Remember MDI Multiple Document Interface? Having Windows within windows. It was a terrible idea.
It was definitely overused; nobody needs Microsoft Word to be a window manager for every doc file. But it ends up growing into something really nice when you get to build out the sub-windows of your IDE wherever you want them.
> OLE? Sure, let every application talk to the DLL components of every other application! What could go wrong? Data wants to be free right? Spread the love.
This was also incredible. I built one of the first tabbed web browsers by embedding instances of the IE 4 DLL into my tabs. OLE and OCX extended object-oriented programming across program and language boundaries.
> Making the desktop into a live webpage? And of course let any webpage happily load whatever binaries it wants from the internet. Super handy stuff. For some people more handy than others (really how this did not cause a mega Wannacry-event back in the day I don't understand)
This was terrific for pranks. Yeah what were they thinking on this one.
> There is a reason this stuff is legacy. The only reason it still compiles is because some companies have spent millions on custom developments 20 years ago that nobody remembers how it still works. Not because you should still be using it :)
Maybe as I've aged the novelty of building programs has worn off. For some things I don't want to have to port or even recompile it. I just want it to run. If Win32 is the only stable Linux ABI, GDI is the only stable Linux GUI toolkit.
And I know someone's frantically typing away right now - yes, I am fully aware you can customise things, but out of the box it should be pretty damn well polished so that you don't need to.
Ubuntu's probably got the closest but it still just doesn't quite feel like they've nailed the experience.
It'd be interesting if there was an "Ubuntu v2" type effort, over 20 years later. Before Ubuntu it's not as though desktop Linux was an impossible dream or there was a lack of distros, but Canonical cleaned up a lot of rough edges to the extent it became a lingua franca. You can rely on Ubuntu being the reference in instructions for Linux software; for example, if required package names differ, it'll be the Ubuntu names over Debian's.
1. My capture card doesn't work reliably in any distro. I'm not a gamer so I can't use a cheap and ubiquitous USB V4L card, I capture retro computing screens at weird resolutions and refresh rates so I have to use an enterprise-grade solution that can handle strange things like sync-on-green from 13w3 connectors and extremely rare outputs from UNIX workstations from the 80s and 90s.
2. If someone sends me a link on my phone it is difficult to copy and paste it to a Linux system.
3. Battery life on laptops, despite decades of improvements, is atrocious on Linux. If my laptop gets twelve hours of real-world use under OS A and six hours under OS B, I've got to use OS A.
4. All of my screens are 4K. Today, in 2025, a full decade after 4K became standard, the way various DE/WMs handle scaling is embarrassing.
5. Nvidia. Yeah, it "works" for about 2-3 kernel upgrades then you're greeted with a blinking cursor upon boot because of DKMS or some random reason like patching the system and not rebooting for a couple of days and then patching again.
6. There's little consistency across devices. When I log in to system A I want every single icon, file, and application to be the same as system B. iCloud/Onedrive do this. You can do this on Linux while on a LAN with remote home folders. I don't work exclusively on a LAN. Or I can set up puppet/ansible for my non-infrastructure systems and that makes me throw up in my mouth.
Almost none of that is the fault of the kernel. That's irrelevant.
For headless servers, I want nothing else. For a daily driver, as much as it pains me, nothing comes close to the Apple ecosystem. Apple Silicon is years ahead of everyone, and their interop with (admittedly only their own) other hardware is incredible. Universal Clipboard is magic. The fact that I can do nothing more than open an AirPod case and my phone registers it is magic. Finally, the fact that MacOS is *nix is absolutely icing on the cake.
To give a very concrete example, I have two identical Thinkpad T14 at work, one running Linux (Debian Bookworm with KDE) and one running Windows 11. When doing normal office work, the Linux laptop easily lasts a whole workday with >20% battery left at the end. The Windows laptop runs out of battery in less than 2 hours.
> Today, in 2025, a full decade after 4K became standard, the way various DE/WMs handle scaling is embarrassing.
Generally, I agree, but Qt (KDE) is the standout to me, primarily because it is "commercial first, and open source second" in my mind. Do you have HiDPI scaling issues with Qt apps?

Perhaps Syncthing would partially cover this? Not the applications, but the files ....
Also I really dislike how out of memory conditions just causes everything to grind to a halt for 5 minutes before something, typically Firefox, crashes. On Windows at least just Firefox gets very slow, but usually I can just nuke the process that eats too much memory. Not so on Linux as the whole desktop becomes unresponsive.
And every now and then I still need to fiddle with some config files or whatnot. Not game breaking but annoying.
I've listed some of which I encountered on Mint here https://www.virtualcuriosities.com/folders/273/usability-iss... Among them: AppImages just don't run unless you know how to make them run. This could be fixed with literally a single dialog box. There is no way to install fonts by default other than knowing where to put them and knowing how to get there. Every app that uses Alt+Click, e.g. for picking a color, won't work because that's bound by default by the DE.
These issues may sound small at first but think of it this way: did nobody making this OS think about how users were going to install fonts? Or ever used an application that used the Alt key? Or did they just assume everyone would know what to do when they download an appimage and double click on it and nothing happens?
And you can just feel that the whole thing is going to be like this. Every single time in the future you want to do something that isn't very extremely obvious, you'll find a hurdle.
I even had issues configuring my clock because somebody thought it was a good idea to just tell users to use a strftime code to format the taskbar clock. I actually had to type "%Y-%m-%d%n%H:%M" to get it to look the way I want. And this isn't an advanced setting. This is right clicking on the clock and clicking "Configure." When I realized what to do I actually laughed out loud because it felt like a joke. Fellas, only programmers know these codes. Make some GUIs for the normal people.
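For anyone wondering, those codes come straight from C's strftime. A quick Python sketch (the timestamp is made up, just to make the output concrete) shows what that exact format produces; note that Python spells the `%n` newline escape as a literal `"\n"`:

```python
from datetime import datetime

# Hypothetical moment in time, purely for illustration.
t = datetime(2025, 5, 1, 9, 30)

# Date on one line, time on the next - the layout the comment above wanted.
print(t.strftime("%Y-%m-%d\n%H:%M"))
# 2025-05-01
# 09:30
```

Which is exactly the kind of thing a date/time picker GUI could generate for the user behind the scenes.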
The fragmentation of Linux leads to a ping-pong of responsibilities. Linux can never be a bad OS because it isn't an OS.
On Windows, if the file manager is bad, that's Microsoft's fault. Period. Nobody tries to say "actually..." - the same goes for the taskbar, for the control panel, for MS Paint, even for Microsoft Office. Whether Microsoft fixes it or makes it worse is up to them, but nobody denies who is to blame and everyone knows where the blame lies. Meanwhile I don't even know whether the basic utilities my distro ships are the responsibility of Mint's team, or if they will just direct me to some random open source project's issue tracker if I start complaining about Celluloid or the "Drawing" app.
You can't talk about Linux thinking only about the good parts, or you aren't inviting people to try Linux, you're inviting them to try your distro. "Linux" means the whole ecosystem, including all of its problems.
Personally, I like modern Gnome: https://news.ycombinator.com/item?id=43859753
Personally, I find modern Gnome insufferable because it is non-customizable to the extent that even macOS only dreams of, and it doubles down on the modern trend of hiding important UI behind poorly discoverable gestures (active corners etc). Except their take on it is even worse in general for mouse users because of how much more "legwork" it adds - e.g. in a default Gnome setup on Fedora, you need to move mouse cursor in the top left corner for the dock to show up (so that you can switch apps or launch a new one)... but then it shows on the bottom of the screen, so now you need to move the cursor all the way there across the screen.
But that's all subjective and not really my point. The point, rather, is that Gnome looks and behaves very different from Win11 and macOS both, in ways that don't make it easy for users to migrate (and in fact they specifically state that their UX design does not consider that a goal).
I like that they ditched all the unnecessary things from the settings. I think all the pro-level settings should be dealt with via the terminal. That way, it's the best of both worlds. Me, I don't mind it. But if I manage the computer for someone, I want them to have only the minimum, so they won't be overwhelmed. That's very wise, and unfortunately all these Win3.1 geeks are complaining it's bad. Yeah, okay, keep using your favourite XFCE then, or whatever.
I'd install Gnome for the elderly, even if they have some previous Windows experience. Because they can afford to just ignore it. My mum has no computer, and the last time she used Windows was, idk, a decade ago. Explaining Gnome to her is easy: here is the Windows (or CMD) button, you press it once, you have this iPad-like interface. Here is the Dock; you have all the necessary apps in there. More of them if you press that Windows button one more time. But actually you don't need it 99% of the time, so you can survive with the top left corner pressed once. A double press is for me. Closing the app is that X button. What else does she need?
Now, try to explain [any other DE, basically] to elders the same way. Considering most of these people have iPads. And if they don't, well, I don't really get why - they should. My guess is that Gnome's interface appeals to that audience. And to me that's a great thing; that's most of non-tech people now.
However, I’m (being an obviously pro user) able to use the default Gnome productively. Almost as productively as I use SwayWM. To me, that’s very impressive.
Honestly, there's literally nothing missing from the experience for me. Dev tooling works way better (obviously), it feels much faster than both W10 and especially W11, I can still play Factorio and most other games in my 900-game Steam library (minus MP games with rootki- err, "advanced" anticheats), GPU and CPU drivers were a non-issue and bundled with the install, speakers work, bluetooth works, Wifi works (I'm on LAN but still).
The only thing is that it's kinda ugly (personal taste, I actually like W10 aesthetics :p), but one GNOME Tweaks install later I got it looking more like how I like it, plus they (System76) are working on COSMIC and it's looking promising. Also text is a bit blurry/hard to read for me, but it could also just be my shitty monitors (and me being used to the excellent Macbook screens).
Now, if you have some software you rely on like the Adobe suite, understandable, but I think for most people it's honestly the superior OS compared to Windows. I'm sure the experience on other friendly distros like Mint are similar, too.
Loading up a GTK app and switching to a Qt app is jarring, especially with basic things like a file picker.
Daily driving desktop Linux feels like you are living in a lower-middle income family. Yes, you have some nice things, but you can usually tell they are cost-cut versions that have filler plates or missing features present on higher-end versions of the product (i.e. macOS).
So I'm hoping to be able to transition out of the ecosystem because I hate their model and like choice. But at the same time I have work to do and last time I tried it wasn't there yet. It was better than it was 3 years ago, and that was better than 5 years ago, etc. I would say not a lot left and the momentum is building, I just don't have the 20 year old energy to be the early adopter anymore :)
Microsoft announces new European digital commitments https://blogs.microsoft.com/on-the-issues/2025/04/30/europea...
Our data is up for grabs since at least 2018[0]. There is no privacy.
Bröther, you're literally the thing we need protection from!
Then stop fucking collecting shit tons of data that you do not need.
This is convincing. Or would be, if the present challenges wouldn't extend to the court system itself.
That being said, European bureaucrats are even stupider and will largely take these commitments at face value, allowing them to have a tighter leash on the market.
I literally thought about that yesterday as the Windows computer I was using for a legacy application froze and slowed down to the point of unusability. Not the first time this has happened. And nearly every day I have a UI issue with some programs not maximizing and staying behind old windows. I've had embarrassing moments when my OS/MS Teams crashed during a meeting. Not to mention the literal ads scattered across multiple screens that are sometimes impossible to turn off (the bottom left button).
My Fedora computer... Every year I have to upgrade it. That sucks. But it's way better than anything I deal with on Windows.
FYI, Fedora is so solid that I don't even lump it in with Linux. Linux has baggage from the Debian/Ubuntu fanboys who use a literally outdated OS and either have no idea it's outdated, or confuse the word "stable" with bug-free, when it really means version-locked.
If you haven't used Fedora, you don't know where the current OS market is at. Fedora stands alone and separate from the rest of the Linux distros. It's literally better than Windows. It just works.
It is really just one long reboot followed by a short one. The first one can be done while you are asleep. That is how I upgraded my daughter's Fedora from release 40 to 42.
If you really don't like 6 months or yearly upgrades, there are rolling release distros with more incremental updates or super long term releases like Almalinux/Rocky, ubuntu LTS or ... wait for it ... Slackware!
With Flatpak and AppImage, running a distro with an older kernel, desktop, libc, and base library versions is not that big of a deal, as you can still use apps in their latest release.
I even migrated from Arch to Fedora, just because I was getting tired of the occasional rolling update bricking my system.
Actually, a KDE Plasma desktop would also work well. I recommend the Fedora KDE Edition.
Libre Office is more than sufficient for most people.
> gui configurability
A bit confused by that. Linux desktop environments tend to be more configurable, and you can configure most things end users want to configure in a GUI with the major DEs.
Do you mean the sysadmins cannot configure as much in a GUI? I think that probably is a major barrier as it means a lot of retraining.
Also, when you do something different from everyone else, every problem will be blamed on you for doing that.
I believe this way of configuring is much more efficient. Yes, you have to learn some new things, probably even a new paradigm. But once you're done, it stays mostly the same for long years, and is dead simple. Having been a Linux user for circa 15 years, I view administering Windows with dread. And when I tell the Windows sysadmins I know personally about Linux, they react like it's some hidden obscure knowledge they'd have to spend ten years studying. Which is actually vice-versa. I cannot imagine what it is to be a Windows sysadmin, especially supporting all this mediocre engineering.
It may be better, but it needs change and retraining.
> I am, being Linux user for circa 15 years, see administrating Windows with dread.
Me too. I do not much like using Windows either and it seems to be getting worse.
> they react like it’s some hidden obscure knowledge they have to spend ten years studying it.
Partly FUD (lots of people make claims like "you have to compile your own software to use Linux") and partly because people hate change, and partly because it took them 10 years to learn Windows (many years ago) and they expect the same again.
For some reason, things like disks, C:\ and D:\, were logical to me, while I couldn't grasp why I couldn't put my files into the root directory and was forced to live in a subdirectory (/home/user) instead. It takes some time to re-learn, but I'm looking back with some dread. Things I accepted as simple are actually unacceptably complex.
I was mostly thinking about the times I end up needing to tweak something through the terminal. I wouldn't expect most desktop users to want to do this. But maybe you're right that the most important stuff is covered by guis nowadays. There seem to be a lack of guardrails for low to semi-technical users though. I wonder if something like Nix could help with guardrails and being able to backtrack.
Something like nix-gui seems like an interesting approach: https://github.com/nix-gui/nix-gui
Buddy, I love Fedora, but this is nonsense.
The UI for LibreOffice Impress (or whatever it's called) doesn't have text size on the main screen. The mods of its subreddit literally ban people for complaining about it.
LibreOffice isn't the future.
I just use Google sheets.
I was curious, so opened Impress, typed some text, and saw that the font selection and size was by default open on the right-hand "Properties" panel, alongside all the various text configuration options. So that at least is not true.
Also, security-by-default for apps would be nice. Snap and Flatpak are great starts but it’s still to difficult to manage and too easy to install non-sandboxed software. Some random weather app should never have access to your photos, camera, file system, networking, etc… without the user explicitly granting permission.
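Flatpak does at least let you tighten things per-app after the fact, even if it's not the default. A rough sketch, using a made-up app ID (`org.example.Weather` is not a real package):

```shell
# Show what permissions the (hypothetical) app was granted at install time.
flatpak info --show-permissions org.example.Weather

# Revoke host filesystem and network access for this app only,
# without touching the package itself.
flatpak override --user --nofilesystem=host --unshare=network org.example.Weather
```

The gap is that this is opt-in and invisible to normal users, rather than a permission prompt at first use the way mobile OSes do it.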
Familiarity with existing workflows is the biggest killer, and why I become a stupid user on Windows. Enterprise makes having Linux installed hard, mostly because checkbox security is a thing that favours monopolies.
There is office software, and I would be interested to know what GUI configurability you need that doesn't exist already. More often than not, when someone asks a question about Linux in a forum, they will get answers using the command line. This is not because you can't do it with a GUI. The reason is that copying and pasting text is much easier than showing people how to navigate menus using screenshots and videos. Text-based interfaces are just superior when it comes to support and message boards. People use the CLI a lot on Linux because it is convenient.
It's coming, but not necessarily this year - perhaps the next. Until then I need to break out my Windows laptop to see HDR content.
Today I was manually sorting a bunch of files into folders that I had opened as tabs.
Drag file over tab, tab move and I now activate wrong tab.
Second try: drag file to where tab isn’t, such that the tab moves to where my mouse is. I now activate correct tab and can move the file to the designated folder. Single click the file and select a different file because file ends up at bottom of list when released, and then gets sorted after a second or two.
Click F2 to start renaming the file, click left to deselect and move cursor to the beginning of file name. Start adding text, only for the entire string to get selected and everything overwritten.
What a shit show.
/rant
You don’t need to install drivers one by one.
Don’t need to download that huge iso and write it to a usb for a long time etc.
Linux just works on both my laptop and desktop just by installing it with the gui
Hundreds of millions of hours spent micro-fussing over Leetcode and gatekeeping work because your solution isn't copied from the top 5 solutions character for character. Only for the same devs to abandon all optimization in the real job, where it actually matters, and implement an O(n^2) fraudulent time metric bypass.
Then everybody jumped on that cargo cult because Google is a trillion-dollar company, so they must be doing something right, am I rite - never mind their immense monopolies and first-mover advantages. So now everybody was looking for the mythical 10xers.
It all metastasized into the present where you have poor college kids in India grinding Leetcode to get SDET jobs for some Bangalore outsource center. I can't even
See ISBN 0316778494
Nevertheless I broadly agree.
The only major gap remaining, IMO, is on Sheets - performance as sheets get large or have lots of formulas, and plotting. If Google would take that product a little more seriously (rather than trying to turn it into a Notion databases clone), they could become a real alternative.
> "No one ever got fired for buying Microsoft Office".
Basically, if you are the head of IT and you use Microsoft Office, and the CEO comes to you and complains it is slow, you can say "Well, Microsoft makes it slow." The CEO will shrug and move on. But if you instead get rid of Microsoft and move the org to Google Workspace, and the CEO comes to you and says "Google Sheets doesn't have this one formula that I use" and you tell them that Google doesn't offer it, the CEO fires you for switching away from Microsoft Office.
Google Workspace is amazing. But Corporate IT departments just absolutely love paying their Microsoft enterprise subscriptions. So I have to use it for that reason.
Like you said, the Office UI is horrible. I can't ever find anything. But in Google Docs, Slides, and Sheets, everything is exactly where I want it. I truly haven't run into cases where Office products have something significant that Google's workspace can't match. I know there are differences, with Office having some more advanced features, but I think 99.5% of people never use these advanced features.
I did not realize how many people on HN are still using MS suite, a nice refreshing bubble buster.
Google Docs is always loading like half a document in a reasonable amount of time and then doing who knows what. It's almost usable for text documents, cause you can usually at least see stuff, even if you can't edit it. It's just trash for spreadsheets, especially when it's like "I'll take your input but won't run your formula for a while." I'm not entering the Excel World Championships here; my spreadsheets would calculate 'instantly' on a 486 in Excel 5.0 if I had such a setup. If the documents fully load and there's no weirdness, sure it's fine enough, and the multiplayer features are handy, but it's not worth it for single-player spreadsheets IMHO.
I haven't had the experience of using Microsoft's Office in a browser, I can only imagine the fun involved there.
My only issue is that it cannot handle lots of data. Both Docs and Sheets have limitations here. Docs gets unusably slow. Sheets just won't work.
> You can access, analyze, visualize, and share billions of rows of data from your spreadsheet with Connected Sheets, the new BigQuery data connector.
It used to be a nightmare to edit a 150 page file. Now it's no problem at all.
Not sure if it has to do with migrating from HTML to canvas for rendering, or totally separate.
Granted, the corporate malware on my computer doesn't help the situation. I can literally build AI Models from scratch on my computer. But if I boot up a Microsoft Office product I have a 33% chance that it crashes.
How can I build an AI model with no issue, and find-and-replace instantly in an IDE across projects that are tens of gigabytes and thousands of files with no issue - but when I want to write a sentence onto a blank page in Microsoft Word, or reply "thanks" to an email in Microsoft Outlook, the application crashes or takes 3 minutes to load?
I truly do not understand how Microsoft Office is still the dominant enterprise platform. These applications have horrible UIs, they are bloated, slow, and expensive. Yet every IT department foams at the mouth and gets a hard-on to sign their Microsoft 365 contract for $200 per user.
I'd bet on this being the cause before Office itself
What it would mean (if we can believe this) is that Windows becomes a legacy burden and, without proper management and knowledge, will turn into a big ball of mud (if it isn't one already) - unbearable and unmanageable.
Right now I can access at least five styles of UI, from different epochs; each one is doing something important in the system, because one cannot rewrite everything to the new style without enormous funds.
Mail Slots are on their way out, though. Not that they're useful today.
I thought that was just me, and even though all hard-drives seemed healthy, I was planning on switching the oldest out. But if others are having the same issue, guess they just screwed up the software side of things, like usual.
Developers who exclusively test on SSDs is my guess. The UI hasn't taken into account any latency from reads.
Having a HDD now moves you into power user territory.
It's in beta and no network drives or CJK yet but feels like a breath of fresh air.
Granted, this is all Hard Work. I understand that. But it's the right thing to do.
I haven't run it this boot, and I just timed myself taking 7.5 seconds to start Writer, close the welcome popup, and quit. By feel, about half of that was waiting for the loading bar and the other half was figuring out how to dismiss the welcome popup.
On a hot start, with several attempts it takes between 2.1 and 2.5 seconds to start and quit it. For some reason the welcome popup has disappeared.
These experiments were performed using LibreOffice 7.4.7.2 40(Build:2) - wow, are they also copying version numbering from Microsoft? - on the cheapest 120GB SSD I could find in early 2019.
'Hey, I'm loading! I'm going to steal focus from whatever else you're doing and make your taskbar flash! No, that doesn't mean you can give me any input yet'
I'd really like to own a device that can multitask, one day.
Back then there was much less understanding in the software industry of why 90's Microsoft was so successful. A lot of people couldn't work it out and - combined with their anti-trust moves against Netscape - just assumed the whole thing was built on cheating. In reality it was a combination of really buying into GUIs and their own Windows platform early (not an obviously successful move back then), combined with having some truly wizard-level systems hackers. It's hard to understand these days because clever hacking is hardly ever a competitive advantage now, outside of maybe game engines. It can even be a disadvantage, as it causes you to focus on micro-optimization whilst your competitor is shipping another useful feature.
Windows 95 was a massive hit, but it didn't have any particularly unique killer features from the end user's perspective. Apple had similar features in theory. The gap was the quality of their kernel and toolchain. Windows made the transition from being a cooperatively multi-tasked single address space system running on a driver-less "OS" (barely more than a fancy library), to being a pre-emptively multi-tasked OS with a wealth of loadable hardware drivers, and they managed that architecture shift in a way that preserved the hard work of their ecosystem's developers. Apple failed the same transition completely and Microsoft's other competitors were big iron UNIX vendors who delivered the same stability and features only through very expensive proprietary hardware.
This new story is emblematic of Microsoft's trajectory over the years. Their apps used to beat everyone on startup time by using tricks so clever everyone assumed they'd cheated, and now their hacking is so un-wizardly they actually do resort to cheating. These days the wizard level systems hackers are all at Apple. Oh how the wheel turns.
https://www.betaarchive.com/wiki/index.php?title=Microsoft_K...
> The OSA initializes the shared code that is used by the Office 97 programs. The benefit of using the OSA to initialize shared code is that the Office 97 programs start faster.
The rumours were (that I remember) that Microsoft had a secret/invisible way to hook Office into Windows startup. Otherwise, how did it start so much faster than StarOffice, which appeared to have similar functionality.
I currently use Windows, 10 to be exact, to play games, and in a VM to run an income-tax fat app (since the online version is so much more expensive). My game machine cannot upgrade to 11. A mobo upgrade won't be that expensive for the game machine, but instead I'll convert it to a Linux box and run the few games that work on Linux.
I believe my Windows days are over as of, say, October 14 this year.
However, I recommend at least testing it on your hardware before that date. Put Ventoy on a USB drive and play with some live distros. Just to make sure everything works the way you expect.
You never know, you may have that one piece of hardware that doesn't work :shrug:
(I'm sure they've got some new improvements tucked away but I don't notice them.)
Or every time I update my OS it's like a 10GB download. What did I just get for that 10GB? I honestly can't tell.
Even my smartphone. It often seems I'm asked to install a 2GB update aaand it's the same as before but slower?
Anyone else have this sentiment?
For performance, it's a harder one to answer, because there are potentially many reasons, I have my opinions as a Software Engineer myself, but others will have different opinions. Ultimately, software moves "forward", which can mean more code, more features, more bug fixes (and thus safety checks etc) and potentially worse performance, although _better_ performance is also possible with optimisations.
That said, as hardware advances, it enables writing more powerful software with more features, which then become more of a struggle for older hardware to run.
From a device manufacturer perspective; they want to sell you new devices, so there's little incentive for them (in my opinion) in spending developer time on trying to optimise new code for older devices.
If you meet the hardware requirements threshold and recently have used Office then preloading it 10 minutes after login is extremely unlikely to impact your startup.
Windows 10, on the other hand, takes nearly a minute to get to login, and it hasn't finished booting then - it's not responsive for another 20 seconds or so after login.
And that's only if it doesn't decide to update or do system repairs for 5 minutes, or more if it goes into one of its restart-update loops.
It's not a little more, it literally killed at least an hour of productivity in just a few weeks
(That's not counting the productivity killers once the system is running)
With those caveats aside, I must unfortunately acknowledge that Windows startup is perfectly fine (Linux is faster, but again this competition is pointless. Unless you are some compute infrastructure supplier and need to boot a million VMs a day or whatever).
Sometimes when people post with baffling Windows performance problems, it is because their experience comes from corporate laptops with some mandatory spyware from IT.
No... it's not fine. I don't reboot all the time for work or run a zillion VMs, I'm just a regular user. But sometimes when I'm rebooting - I need to get to necessary information quickly. Waiting 40+ seconds is an eternity when standing at an airport immigration counter pulling up a pre-filed form that they said I did not need to bring but which they're now demanding (because their machines are rebooting).
I'm glad you feel it's fine for you. Not all of us agree. I'm especially annoyed because much of the new bloat slowing my life down during startup is stupid and unnecessary shit I don't even use much (or ever) - like initializing CoPilot, Edge, and now, Office.
Note: I even upgraded my SSD to an expensive Samsung 990 Pro, reportedly one of the fastest available. It's still >40 secs - and I've already gone through and thoroughly pruned all the unnecessary services, tasks and autoruns that I can. It's a top of the line >$3000 laptop that's less than a year old.
Yes, he just said it, it has Windows on it.
But more to the point: Windows' slow boot has been a constant ever since the times when I would boot up Windows ME and go make myself a tea. If anything, Windows has always stayed one step ahead of the technology that would bring its boot times down, to the point where I'd guess (as this article suggests) that it's company policy to dump slow components there.
My windows gaming PC starts up in about 30s from a cold boot (though it's not encrypted...), so I would at least put the personal Mac and the Windows machine in the same ballpark. I couldn't have told you which one is faster without timing it. The work machine laptop is clearly noticeably slower.
For me login screen pops up maybe a few seconds from the bios, then everything is fully loaded after I enter my password.
When Fast Startup is enabled, shutting down actually logs you off and then hibernates, so the next startup can resume from hibernation but with the same effect as a fresh start. This is generally much faster than a full startup. It should, and in many cases must, be disabled if you dual boot another OS.
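For anyone who wants to turn it off, Fast Startup ("Hiberboot") is controlled by a single registry value. A sketch of the `.reg` fragment, assuming the standard location on Windows 10/11 (check your build, and back up the registry, before importing):

```reg
Windows Registry Editor Version 5.00

; 0 disables Fast Startup; 1 re-enables it.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power]
"HiberbootEnabled"=dword:00000000
```

The same toggle is reachable through Control Panel > Power Options > "Choose what the power buttons do", which is the safer route if you'd rather not edit the registry.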
Different hardware takes longer to initialize, which can delay startup. This is especially true of failing hardware, which may continue to work after a fashion while in bad shape, but take far longer to initialize.
Some hardware is MUCH slower than others.
Does it still need to be disabled if you're dualbooting and not interacting with the windows partition?
And yeah, I have a desktop computer. I bet hardware failure rates are much higher in laptops. All good points.
Best just disable the feature.
Windows has a lot of stuff that runs in sequence and takes a while to churn or time out. It's much better than it was, but Apple is way ahead.
I see some people think they have fast-booting Windows PCs, but I'm sure they also know that's not the case for the average PC.
Since most macOS installations use FileVault by default, the login screen looks like it loads only stuff related to the login screen and not anything from the OS. Windows on the other hand, seems to load more stuff in the spinning thingy screen that appears before the login screen.
For instance, if you disable Filevault on macOS, the OS seems to load before the login screen, and then when you input your login and password, it loads to the desktop instantly. That would be a better comparison to a Windows machine, I think.
That said, I am not sure this is how things really work, but that's how it appears to work for me. Sorry if I spread any misinformation here :)
I personally don't have issues with startup times on my M2 Air or 5800X3D/Win11, both encrypted.
Windows, by contrast, unlocks the entire OS drive before you get to the login screen. So a hypothetical login-screen hijack would give access to everything, as would cold boot attacks or sniffing the keys traveling from the TPM to the CPU.
I'd argue the macOS version is better from a security aspect, but it has a necessary downside of being unable to load as much before the user can put in their password.
Linux is blazing fast when configured properly. But in reality we're talking about 2-3 seconds of difference here. How long a machine takes to POST is usually the biggest part of bootup nowadays.
Because Windows is usually a lot less optional than Office, for the average user.
Oh btw every joke has a grain of truth (sigh) https://news.ycombinator.com/item?id=28712108
Fully fabricated problems with fabricated solutions often becoming legally required to be purchased to avoid the problem they cause in the first place.
Could you elaborate? How does the insurance industry create problems like bad weather and misfortune?
The larger the cost, the bigger total cash value they can get on their percentage based profit.
Playing into the cost, they can cut deals with manufacturers directly or in lobbying for parts to be artificially inflated to make this problem even worse. Plastic fuel valve maybe costs 30 cents to manufacture but is sold for $900 and that price is doubled to install it. And the car isn’t safe to drive without it so insurance can demand you pay up or deny all coverage or payouts.
Same for medical inflation though that’s more commonly discussed.
If insurance didn’t exist as a service, these inflated prices would be dramatically cut down. We see this when you don’t use insurance at a doctor’s office or pharmacy checkout. Though providers can sometimes demand insurance be used regardless of your consent, simply if the cashier is aware you have insurance.
Lobbying and passively steering the direction into bloating end users cost is massively incentivized wherever possible for insurance. Then hiding behind a veil of blame to avoid accountability or even just fair payouts when you actually need them.
It’s like insurance is the IRS who runs a casino and they threaten you if you win the jackpot and then threaten to “randomly” select you for audit if you proceed to cash out for the full amount instead of a $25 Red Robin gift card.
Because I only use those apps on rare occasion, I go remove all those tasks. And each of those apps checks to see if its tasks are still there on every run or update and, if not, re-adds them. I've even tried getting clever and leaving the tasks in place but just changing the run frequency to once every month or something, but they check for that too and change it back.
Anyone know of a way to override this so I can decide if apps I don't use for weeks at a time need to be always silently running, updating and phoning home?
Don't know the solution, but one idea - is it possible to change task permissions so that those Chrome update processes will fail to update tasks?
On my Mac, I can't find any kind of launch item or background process. Chrome doesn't launch anything until I launch Chrome.
Management: Tweak prefetch and call it a new feature.
Dev1: Superfetch!
Dev2: We already did that.
Dev1: Superfetch for Office!
Management: Yes.
https://knowyourmeme.com/memes/all-right-gentlemen
https://windowsground.com/what-is-superfetch-windows-10-shou...
Maybe stack ranking does create terrible culture.
"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe" --no-startup-window --win-session-start
Did this pattern stop being a thing and we’re back to it now? Or was it just “forgotten”?
Now there’s scrolling through hundreds of scheduled tasks called dfddg.exe with no title or description and located in c:/windows or %appdata%. Disabling the wrong identically named one bricks your system or software licenses.
Then you also have to check the registry and group policy and environment variables and spot the unwanted item that is again often bundled into a critical windows dll. Usually with the same name as the dll and its permissions are set as SYSTEM so you can’t edit it by normal means.
Then after every change you have to do a full rebooting and review all steps again. Often, they will regenerate themselves if deleted in the wrong way or the wrong order.
After all the startup things are killed there may still be kernel level startup recovery processes for things like Adobe.
This sounds like Microsoft is failing spectacularly at enforcing strict limits on what software can do.
https://learn.microsoft.com/en-us/sysinternals/downloads/aut...
Uncheck one thing. Reboot. Test system. Proceed to the next item.
I bet that loop is way faster nowadays than when I was messing around as a teen trying to get our new 1Gb hard drive desktop to boot faster.
Switched my desktop to Linux last month cause I’m just fed up with Microsoft’s user hostile bullshit
Alas at work I can’t do that
Just utterly basic fundamental stuff that was working forever in other clients, like quoting replied text
Everyone at work has to resort to farcical formatting tricks, the email chain ends up a series of people saying like "I added my comments in green bold below", "my comments in blue italic" like it's a Word document instead of a message thread
In years prior the email/usenet etiquette was simple: The '>' sign as the first character indicated a quote block in a reply with new content added after the quoted text.
Then came Microsoft with its Outlook and Outlook Express. First they fucked with everyone just for the hell of it by making their client top post by default. Then they brought in html into emails and usenet posts. Then they worked hard at it to make everything the mess it is today.
[1] https://www.tenforums.com/tutorials/167068-how-enable-disabl...
[2] https://www.tenforums.com/tutorials/160140-disable-continue-...
https://superuser.com/questions/269385/why-does-google-chrom...
Now I never understood why the chrome.exe's would hang out when I didn't install any "background apps" - anyway I suspect a similar setting in Edge is buried in there somewhere.
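There is indeed a similar pair of knobs for Edge. A sketch of the documented group-policy values, set via the registry; whether these cover every background Edge process is an assumption on my part:

```reg
Windows Registry Editor Version 5.00

; StartupBoostEnabled=0 stops Edge from preloading at logon;
; BackgroundModeEnabled=0 stops it from keeping processes alive after close.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"StartupBoostEnabled"=dword:00000000
"BackgroundModeEnabled"=dword:00000000
```

Being policy keys, these override whatever the in-app "Startup boost" and "Continue running background extensions and apps" toggles say.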
Microsoft puts its real talent on its customer-grade products, like Azure and SaaS. That's where they make real money, so that's where they send real talent. The only exception right now might be Copilot, which will never make money... But they say that's where they're putting their best and brightest. Then again, they're probably spending billions of CPU hours to generate millions of unique disclaimers and pleasantries, when they could instead use a simple lookup table to efficiently weed out the most common/worthless prompts. That isn't the big-brain design innovation that you'd normally expect from top talent. It's not even baseline acceptable from anybody who actually knows the first thing about how computers work, really. They would rather spend 10 billion dollars on a single computer than prioritize optimization. It's weird.
But do you want an electronics engineer who understands "instructions", "addresses", "registers", "clocks" and even knows why a pointer works? Or do you want a modern CS major who can use a template to quickly crank out non-scalable apps in a software factory? These skills are mutually exclusive.
I would happily pay for software that
- was high-quality
- was fast
- was privacy preserving
- had sane defaults
- had/provided reasonable support/insight (forum and developer blog)
- had a fair pricing model (non-subscription, x years of updates, etc.)

as in
- an e-mail client
- an office suite
- a scheduler (scheduling learning, tasks, various deadlines, calendar, ...)
- a photo/video editor (wouldn't need to be of the scope of a professional suite)
- a browser (earnestly, one that wasn't a mere Chrome re-skin, wasn't run by a bloated, paid-by-Google organization like Mozilla, and would take fingerprinting prevention and privacy seriously)
- ...
Or am I underestimating the problem? How many full-time developers working how many hours, building on open-source software where sensible (as in you wouldn't hand-roll your own cryptography, networking protocol implementations, or GUI libraries), would it take for, e.g., a good cross-compatible desktop e-mail client? (There's little in terms of software that I hate more than Outlook.) And given competitive non-US, maybe even non-EU, wages for such developers, how many 'customers' with fair pricing would such a company/startup need?
You could open-source parts of your stack (as in individual libraries) for exposure and goodwill, maybe offer free-tiered versions; fair pricing models could be similar to Sublime's (https://www.sublimehq.com/store/text), and you could build upon technologies people are excited about and willing to take pay cuts to work in (Odin, Zig, Rust, ...), etc...
Even considering vendor lock in, market dominance of existing solutions, the dominance of smartphones over desktops, isn't there still a viable market? Maybe what's left is even more so, given Desktop use seems (besides gamers) consist (to a significant extent) of power users, semi-professionals/professionals & businesses?
And, even though this place here is of course a highly niche bubble, the plights of modern's software lack of quality are real and I'm sure felt beyond us.
And I don't think non-US or non-EU wages, or being open source, would help. Microsoft's success is due to the lock-in and GTM sales org that Microsoft has. Just see how Teams eclipsed Slack despite the latter being a first mover and a better-quality product.
Assuming it takes me X amount of software-engineering hours to produce an alpha version of a given product, now let's imagine a rented office space plus four developers; consider renting in a major US city and paying competitive US-major-city wages, versus doing so in a significantly smaller city in Eastern Europe (Czechia, Romania, Estonia).
In both cases you could develop an English-Language version of your product for global use and you can distribute software cheaply over the internet; you'd still charge customers in the US US-prices, yet would have saved on development costs.
I'm sure this comes with its own set of difficulties, especially regarding US business customers, but initially it could be an advantage in certain scenarios.
There also seems to be a current push towards non-US (sometimes even specifically from-EU) products in tech, which might give one an interesting market position, albeit I'm lacking details here, and it's yet to pan out how viable this trend is long-term of course.
It seems like all Office-like apps are cursed to be slow and bloated. It also seems like 32GB should be the bare minimum amount of RAM for a Windows machine today (even the “new Teams” app is sluggish nowadays and takes up quite a bit of memory or crashes often).
Google docs/google sheets/notion/coda all load faster than the native MS Word / Excel.
Photopea is another example - it clones out the main user paths of Photoshop and loads in a 1/10th of the time.
I really don't understand it.
"In thirty years our computers will have sixteen threads of execution at 4.5 GHz each, with 4 IPC or better, along with 16 gigabytes of memory that can move data at 50 gigabytes a second. Practically everyone will have solid state storage that loads and saves at more than a gigabyte per second. Many computers will have GPUs capable of beating the fastest supercomputers in the 1995 world, and most of that capacity will be used for little more than just pushing pixels to a monitor."
"Wow! I bet Microsoft Word will load instantly!"
"No. It'll take longer to load than Word 5.1 takes to load on an Amiga with an '060 accelerator running ShapeShifter. It'll be so slow that Microsoft decides to load key parts of Office when the system boots, but only if you have more RAM than can be directly accessed by a 32 bit processor."
It's something you'd expect from a snarky article from The Register or from me, if you know me, but I think both El Reg and I wouldn't've quite gotten the full extent of it.
* Windows itself takes 10 minutes to boot because it's preloading a hundred Office libraries/extensions like Mac OS 7.
* Microsoft Office apps have returned to being incredibly slow to open—because they decided that the added speed of preloading just gave them more leeway to add bloat.
* Microsoft has acquired several AI startups that use different models and provide several high-RAM, GPU-hogging apps. To work around the slowness they have worked with hardware vendors to include multiple GPUs on each actual GPU card. They don't communicate directly with each other though... That feature is only enabled for the "Enterprise" series of GPUs that cost $50,000 each.
* Microsoft Office now automatically use AI prediction for everything. Including the data (not just the formulas) in your Excel spreadsheets. But it gets it wrong so often that people wish they could turn that feature off. They can't, though, because they didn't pay for the "Enterprise" version of Windows or Office (have to have both in order to truly disable the AI).
* AI is now actively watching all inbound and outbound traffic on every PC, increasing base latency a hundredfold. Microsoft claims this allows them to catch viruses, scammers, and bad state actors faster.
On the plus side, it's nostalgic and reminds me of the old MS Word 6 on Windows 95 (or Windows 3.1?), so that's nice.
The latest Word version does all kinds of weird stuff around formatting and numbering. I often get documents with messed up heading numbers or lists and I have no idea how to fix them. Nothing works.
This is of course problematic if you receive documents from other users :(
Can't fix problems in a project? Increase the scope to make more problems elsewhere. Soon tentacles emerge, everything has problems, and your project doesn't look as relatively bad.
So, Windows only, and "can be set", probably not the default?
Granted, I don’t work in a corporate environment, so I’m free to choose my own tools. Living in Emacs is a blessing in today’s world of bloat, lack of control, short-lived cycles, and SaaS everything.
Fix the problem? No way, Jose; We’ll move the problem somewhere else.
I would like to know how we got to a place where any application taking more than 0.5 seconds to start is acceptable in any way.
I have text editors which have visible input lag, even to my untrained eye. How in the HELL does that even happen?
All of you hustlers out there making story cards and calculating velocity: stop doing this shit! Performance is fucking important.
“CPU is cheap” — fuck you it is. If your application takes more than 0.5 seconds to start on any computer that can run Windows 11, you are either doing something wrong, or you are relying on someone that is doing something wrong and you need to work around that thing even if it is dotnet.
Developer productivity is absolutely dwarfed by the aggregated productivity loss of your customer base. Application performance and customer productivity (think of these as “minimizing the amount of time the customer spends waiting on the computer”) are paramount. PARAMOUNT! — that means they’re one of the, if not the only, most important thing to consider when making decisions.
This world is going to shit so fecking fast
Jokes aside, I did buy a 2019 Dell Latitude laptop, and it's an old CPU, but it still amazes me how well it's working. The iGPU is awful for anything 3D-heavy (GNOME's compositor), but still good for anything else.
I also have an MBA and it's quite fast, but all those "you should do this the Apple way" is frustrating.
After a long look at my computing activities, I do not need much other than Emacs, Librewolf, and a video player. I still use the MBA for rare usage like Balsamiq and important video calls.
Office should be modular with a lean core and extensions for those who need them.
UI is clunky, importing/exporting office made docs is glitchy, and I've even run into actions that don't get pushed to the undo stack.
I know this stuff always gets slowly ironed out, and the devs are working really hard, but it's just a shame it's never been a viable alternative for so long.
The worst offender by far is Outlook (which isn't really MSO but looks like it is, or is it?)
Against an on prem Exchange, I get way better performance from Evolution (Linux) than Outlook (Windows).
I'm surprised they don't use the existing Windows Prefetching system for this, though.
The client was initially put off by the 2 second loader, so we designed a "fun fact" loader that had a random blurb about the industry the job seeker was searching on. The client liked that so much he actually suggested we slow down the job seeker search so the end user could see it for a bit longer.
We talked him out of it in the end but occasionally suggest throttling our servers as a feature of our current company. MSFT should look into this
When will there be a viable alternative that runs industry standard software?
[0]: https://www.redhat.com/en/blog/linux-active-directory
[1]: https://documentation.ubuntu.com/server/explanation/intro-to...
[2]: https://unix.stackexchange.com/questions/333/what-is-the-equ...
Are there any alternatives to ActiveDirectory in the Linux ecosystem? Maybe from RedHat?
They're simply too well integrated, too easy to manage, and have more features than their competitors.
> And if by magic that function would appear tomorrow, it would disappear again the day after tomorrow.
That's incorrect.
> ease of use and tight integration blows everything else out of the water.
Agree to disagree.
This is what makes Mac manageable.
I made the switch to Google Workspace and Docs years ago...
At a recent company we used Microsoft largely for Teams, but there are so many unnecessary headaches and time-consuming log-ins (each taking a few seconds that continually add up) that the next paradigm cannot come soon enough...
The best way off Microsoft is via the browser... Vanilla JS Webgl etc
Prediction: we are less than a year away from this becoming a reality...
Edit: Possible solution: simply boot into a browser, with an underlying cloud syncing filesystem with trusted circles of sharing...
How many seconds would be required to go from power button to accepting input in this paradigm?
Both opened fully in a second. So just how much faster should these apps actually open?
I suspect this is aimed at Enterprise installations where the machines are gunked up with corporate spyware.
Test those things after a reboot, not a shutdown/start (which goes through "Fast Startup"); that's why you see massive uptimes when you always shutdown/start and never do reboots.
And this is a pretty average laptop. Dell, Intel Core i7 with 16GB of RAM. In fact it's Windows 11 Home edition.
Now I bet if I tried this same test on my work PC which is supposed to be much beefier I'd probably find that it takes an extra second or two.
This being HN I'm almost 100% certain that the only time anyone touches a PC is at work. And work PCs have loads of gunk on them.
But if you open the 2nd spreadsheet (File > Open) from inside Excel, it opens up a lot faster in a separate window.
[0] https://devblogs.microsoft.com/oldnewthing/20250421-00/?p=11...
Sure it is a challenge to write performant code, I know that as well as any other embedded programmer, but my feeling is that in userspace or web programming most people have stopped even trying to be performant.
They will give you paginated content with 10 items per page, items whose data makes up less than 1% of the javascript they are loading. Meanwhile you could literally give people all items with no javascript and be faster (if you have the luxury of knowing the number of items).
I wonder if it even matters though. Corporations are always going to use it, and the cheapest laptops will always come with it.
If you had asked me a minute ago, I could have sworn it's already a well known fact that they do this. They've been doing it since Windows 95 and explorer. At least.
Like office 97 speed.
There's your problem.
It makes no sense that modern Office won't start faster than Office 97. Sure, it has more features and it's bigger in every way, but it's also not running off a spinning hard drive and 32 MB of RAM.
Can Office even start up without an Internet connection, or does it just take even longer?
Office doesn't have to talk to the internet at all other than periodic license checks.
I won't run it without Ohook, even though I have a license, because the cloud shit has screwed me a couple of times.
Windows is categorically left out.
Funnily, it is probably going to be Valve that will be the death of Windows.
All my Steam games run better on Bazzite / SteamOS without any Windows interruptions. At some point this will spread like wildfire in the gaming community, teaching people about alternatives to Windows.
The next generation will not be bothered with Windows.
I had to invest a little into getting Sunshine to work with virtual displays (like Artemis), but even that took like a day and it'll be easy to set up if I ever need to reinstall.
It needs a plugin to open and save docx but it works well.
I don't know if there's some fantastic functionality I'm missing out on but it works fine for me.
I might be wrong. I was a kid when I read about this.
I don't need the other programs, and ALL of them contain a TON of features I'll never ever use (as a private person, not a company of course)
Office 2016 is the last year before it went OneDrive and will still autosave documents locally.
Apparently there are activation scripts that can help you if you’ve lost your license information for these older products…
I found this running on a milling machine's control PC, and I was seriously annoyed.
He literally has Vietnam flashbacks whenever working at Microsoft comes up as a topic of conversation.
Download microsoft autoruns from their site to turn off everything that runs when windows start to do away with all the crap.
On the other hand, it's disappointing, since I remember early M$ Office versions being blazing fast (but again, I suspect there was some preloading going on).
There is a noticeable lag when you enter something in a cell and then hit enter.
Isn't Microsoft migrating all of their Office apps over to WebView2 (their version of Electron)?
If so, I wonder how much of this is related to that than anything else?
Of course, if you do not use Office all day and are OK waiting for it to load on demand, the preloading should be turned off.
(And, frankly, if you don't use Office, why do you need Windows anyway? To play games that don't run on a Steamdeck?)
To instantly find any file anywhere, nice productivity boost (among many)
Well, on Linux I have `locate` and `fd` which work really well, doing that very thing, and `ripgrep` does even more.
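For anyone unfamiliar with those tools, here is a minimal sketch of the two kinds of search using only the POSIX tools; `locate`, `fd`, and `rg` are faster, friendlier upgrades of the same idea (the `/tmp/searchdemo` tree is just a throwaway example):

```shell
# Build a tiny test tree to search over.
mkdir -p /tmp/searchdemo/docs
echo "quarterly revenue figures" > /tmp/searchdemo/docs/report.txt
echo "meeting notes" > /tmp/searchdemo/docs/notes.txt

# Find files by NAME (compare: `locate report` or `fd report`).
find /tmp/searchdemo -name '*report*'

# Find files by CONTENT (compare: `rg -l revenue /tmp/searchdemo`).
grep -rl "revenue" /tmp/searchdemo
```

`locate` answers the name query from a prebuilt database, which is exactly the "instantly find any file anywhere" experience being described.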
I heard something built-in is also shipped with macOS, it's called something like "Spotlight".
OTOH MS Excel for Windows is one and only, the macOS version barely holds a candle to it, and analogs are even further away.
For work devices for office work you want this.
Edit: Should have known I wouldn't be the first to remember :)
Windows 7 was so good because it was Vista without (much of) the bullshit.
My girlfriend and I work in the same company, I have an M1 mac because I'm a dev and she has a shitty Dell laptop with Windows. Sometimes it's easier for her to send me an Excel, make the edit for her and send it back, because Excel constantly hangs on her laptop.
The minute someone is using a computer and it's noticeably slow, their brain will start looking for something else to do and they'll lose any momentum on whatever task they were working on.
It's a little bit limited in number of lines and columns (but if you need that many you should probably be using a proper db).
I'm so sick of Windows. When Elon took over Twitter and you stopped being able to say anything, I was banned because I told Microsoft I hate them and I hope they burn. I'd just received an update that forced some bullshit on me and it inconvenienced my personal and work life; I felt they needed to be told they suck.
I don't miss twitter at all, I still hope Microsoft burns.
The biggest problem is sharepoint. You save your files "somewhere" and links between them barely work
Markdown has docs, slides, tables, diagrams (via Mermaid), and can be read on any system that has a text editor. It's simple, non-proprietary, and future proof.
This repo says it all: https://github.com/microsoft/markitdown
It helped me ditch Windows completely because the start-up experience for Windows 11 was just atrocious even with the smart/cached shutdown thing they're doing (I forgot the official name for it). I'm glad to see even some (un)official confirmation from this article that hogging resources at start-up is pretty much best practice in Windows land.
In Linux land today, FF and Chrome (but Chrome especially) take ages to start-up at first but system boot is as smooth as can be expected.
I thought I'd made myself immune to UI bloat because, like all true programmers, I do everything on the terminal (short of browsing the web, like TRUE programmers). Until I noticed that whenever I invoke my terminal, it takes ages for the prompt to even appear, not to mention accept keyboard input.
After much frustration, I figured out that the culprit is---drumroll---NodeJS. Don't quote me on this but I think Node brought Windows best practices into the Linux terminal.
Fortunately, Linux being Linux, I managed to patch my system so that Node doesn't actually do anything unless I invoke it myself. The downside is that I have the odd script every now and then that relies on Node, and these scripts fail if I run them without having run `node` beforehand.
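The usual way to get this behavior without breaking scripts is a lazy-load stub in your rc file: the expensive init (sourcing nvm.sh, for instance) runs on first use instead of at every shell startup. A generic sketch of the pattern, with the hypothetical `mytool` standing in for whatever you'd normally source eagerly:

```shell
# Lazy-load pattern: replace an expensive init with a stub that runs
# the init only on first call, then forwards the original invocation.
mytool() {
  unset -f mytool                         # remove this stub
  echo "initializing (done once)"         # expensive setup would go here
  mytool() { echo "running mytool $*"; }  # define the real command
  mytool "$@"                             # forward the first call
}

mytool hello   # first call pays the init cost
mytool again   # later calls skip it
```

With nvm specifically, the stub would `unset -f nvm`, source `$HOME/.nvm/nvm.sh` (the default install path; adjust to yours), then re-invoke `nvm "$@"`.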
...and his head is still pontificating on how the world should run...
There are sooo many things to unpack in the world of Windows. We could talk about Windows 11. We could talk about bloated software like Office or Visual Studio. We could even compare the performance difference between old versions from the 90s and what people use today.
On one end.. I get it. Microsoft are throwing so many "features" at you, whether part of the OS or particular software like Office. If they don't, their competitors will.
Microsoft "Office" used to be a desktop publishing suite which included the likes of Excel, Word, Access, etc. Now, Office is just a category of hundreds of applications accessed via the cloud. To think that Windows bootup, or launching a modern application, now involves many API calls to something, somewhere in the world. No wonder applications like MS Teams take 8-20 seconds to load (and that is sometimes being nice!)
Considering the specs of my PC, things should load incredibly fast. Yet, for some reason, I can run something like Visual Studio 6.0 from the late 90s and it will load INSTANTLY on a modern machine -- and it's single threaded! Some may be thinking "but modern Visual Studio has these features I cannot live without!" -- are these excuses for why it takes so long to LOAD?
The problem we have is.. to some degree.. you have a development team who do not care about performance or memory. On the other end, you have a development team that are frustrated because it is outside their control what is considered IMPORTANT. If they speak up it might cost them their job.
I will always remember a project during my college days which was about doing 3D animations. We had to do a presentation on our work at the end of the assignment. I think most people spent 50% of their project time in Microsoft PowerPoint. For its day, on the hardware available, even that was bloat! They were fiddling with their text and images on screen. I was sooo fed up with it I decided to amend my program so it could be used as a presentation slideshow as well. Once added, all I had to do was use a text editor, writing scheme-like code to a file describing how to build each slide and its animations. It ran smooth and fast, and everyone was asking "how did you get it to look so good", thinking I was doing some cool trick in MS PowerPoint. Nope... just OpenGL in C.
Things like Office has always been bulky and slow. The funny thing is -- I bet Microsoft Powerpoint from 1997 would run EXTREMELY fast on todays hardware.
At home, I moved away from Microsoft and Windows since 2006-ish trying Ubuntu. I did experiment with Suse Linux before that but once home internet became solid.. so did Linux in my opinion. Sure, I still use Windows as my job requires it... and I can see Microsoft keeping their power/control thanks to cloud/azure and other things. Also, Excel has such as legacy to it that many people in Finance and other dept RELY on Excel! Point is many will stick with Windows because of that.
You have Valve helping Linux thrive in the gaming space. We need something that can help Linux thrive in the office space. I am surprised there is no modern spreadsheet application that takes us away from Excel. Sadly, you need a full DTP suite.
Where did you see that?
Many such cases.