I don't know how we got here and I don't know how to fix it, but "bring back idiomatic design" doesn't help when we don't have enough idioms. I'm not even sure it's wrong for those two behaviors to be inconsistent: you're probably more likely to want fancier formatting in a PR review comment than in a chat message. But as a user, it's frustrating to have to keep track of which is which.
Given the reduction to a single key, the traditional GUI rule is that Enter in a multiline/multi-paragraph input doesn’t submit like it does in other contexts, but inserts a line break (or paragraph break), while Ctrl+Enter submits.
Chat apps, where single-paragraph content is the typical case, tend to reverse this. Good apps make this configurable.
It’s turtles all the way down.
But if you click an arrow on the top of the text box, it expands to more than half of the height of the window, and now Enter does a line break and Shift-Enter sends. Which makes a lot of sense because now you're in "message composer" / "word processor" mode.
If you turn on Markdown formatting, shift+enter adds a new line, unless you’re in a multi-line code block started with three backticks, and then enter adds a new line and shift+enter sends the message.
I can see why someone thought this was a good idea, but it’s just not.
Infuriatingly, some apps try to be smart — only one line, return submits; more than one line, return is a new line, and command-return submits; but command-return on just one line beeps an error.
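That heuristic boils down to a small decision table keyed on invisible state. A sketch of it as a pure function (the function name, arguments, and return values are mine, reconstructed from the behavior the comment describes, not from any particular app):

```javascript
// Sketch of the "smart" Enter heuristic described above. Returns what
// a keypress should do: "submit", "newline", or "error-beep".
function resolveEnter(withModifier, draftHasNewline) {
  if (withModifier) {
    // Cmd/Ctrl+Enter submits a multi-line draft, but on a single
    // line it "beeps an error" instead of submitting.
    return draftHasNewline ? "submit" : "error-beep";
  }
  // Plain Enter submits a one-liner, inserts a line break otherwise.
  return draftHasNewline ? "newline" : "submit";
}
```

The complaint is precisely that this table is unguessable: which branch you land in depends on whether the draft already contains a newline, which the user has no reason to keep track of.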
Years of muscle memory are useless, so now I'm reaching for the mouse when I need to be clear about my intent.
So much is solved when developers just use the provided UI controls; so much well-studied and carefully implemented behavior comes for free.
Then make it easy for users to learn that they can submit more quickly with Ctrl+Enter, which you can advertise via a tooltip or adjacent text.
Better that 100% find it trivially usable, even if only 75% learn they can do it faster.
The best part is, it's super easy to customize them, read others for inspiration or to see how they did something, or even ship multiple per site to deal with different user preferences. Through this "forms" API, and little-known browser features like URL fragments, the target and attribute selectors, and style combinators, plus "the checkbox hack", you can build extremely responsive UIs by "cascading" UI updates through your site! When do you think they're going to add it to next.js?
I'm tentatively calling this new UI paradigm "no-framework" or "no package manager", not sure yet https://i.imgur.com/OEMPJA8.png
There is incompetence and there is also malevolence in the encouragement of dark patterns by the revenue side of the business.
“But why can’t you just do it?” Because I recognise the importance of consistent UX and an IA that can actually be followed.
Just like developers, (proper) designers solve problems, and we need to stop asking them for faster bikes.
The answer should be "because users will hate it and use a competing product that's better designed".
A shame that it isn't actually true any more.
However, I really wonder how formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback. I really wonder what processes they use to tie it all together. I suspect that they think about this quite diligently and dare I say even somewhat rigidly. I think it quite likely that the culture that led to the intense and detailed way they look at process for pit-stops and stuff carries over to the rest of their design processes, versioning, and iteration/testing.
Let's take a credit card form:
- Do I let the user copy and paste values in?
- Do I let them use IE6?
- Do I need to test for the user using an esoteric browser (Brave) with an esoteric password manager (KeePassXC)?
- Do I make it accessible for someone's OpenClaw bot to use it?
- Do I make it inaccessible to a nefarious actor who uses OpenClaw to use it?
I could go on...
Balancing accessibility and usability is hard.[0]
[0] Steve Yegge's platform rant - https://gist.github.com/chitchcock/1281611
The system UI frameworks are tremendously detailed and handle so many corner cases you'd never think of. They allow you to graduate into being a power user over time.
Windows has Win32, and it was easier to use its controls than rolling your own custom ones. (Shame they left the UI side of win32 to rot)
macOS has AppKit, which enforces a ton. You can't change the height of a native button, for example.
iOS has UIKit, similar deal.
The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.
First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac). So, as you point out regarding the Win32 API, developers had essentially one way to do things, or at least one way that was by far the easiest. Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
The web started out as a document sharing system, and it only gradually and organically turned into an app system. There was simply no single default, "easiest" way to do things (and despite that, I remember when it seemed like the web converged all at once onto Bootstrap, because it became the easiest and most "standard" way to do things).
In other words, I totally agree with you. You can have all the "standard idioms" that you want, but unless you have a single company providing and writing easy to use, default frameworks, you'll always have lots of different ways of doing things.
The Windows I remember was in some ways actually less consistent than what we have now. It was common for apps to be themeable, to use weirdly shaped windows, to have very different icon themes or button colors, etc. Every app developer wanted to have a strong brand, which meant not using the default UI choices. And Microsoft's UI guidelines weren't strong enough to generate consistency - even basic things like where the settings window could be found weren't consistent. Sometimes it was Edit > Preferences. Sometimes File > Settings. Sometimes zooming was under View, sometimes under Window.
The big problem with the web and the newer web-derived mobile paradigms is the conflation of theme and widget library under the name "design system". The native desktop era was relatively good at keeping these concepts separate, but the web isn't; the result is a morass of very low-effort, crappy widgets that often fail at the subtle details MS/Apple got right. And browsers can't help, because every other year designers decide that the basic behaviors of e.g. text fields need to change in ways that wouldn't be supported by the browser's own widgets.
Now that all we do is “experience” a “journey,” it’s more about the user doing what the app wants instead of the other way around
That's overemphasising the differences considerably: on the whole Windows really did copy the Macintosh UI with great attention to detail and considerable faithfulness, the fact that MS had its own PARC people notwithstanding. MS was among other things an early, successful and enthusiastic Macintosh ISV, and it was led by people who were appropriately impressed by the Mac:
> This Mac influence would show up even when Gates expressed dissatisfaction at Windows’ early development. The Microsoft CEO would complain: “That’s not what a Mac does. I want Mac on the PC, I want a Mac on the PC”.
https://books.openbookpublishers.com/10.11647/obp.0184/ch6.x... It probably wouldn't be exaggerating all that wildly to say that '80s-'90s Microsoft was at the core of its mentality a Mac ISV, a good and quite orthodox Mac ISV, with a DOS cash-cow and big ambitions. (It's probably also not a coincidence that pre-8 Windows diverges more freely from the Mac model on the desktop and filesystem UI side than in regards to the application user interface.) And where Windows did diverge from the Mac those differences often ended up being integrated into the Macintosh side of the "desktop era": viz. the right-click context menu and (to a lesser extent) the old, 1990s Office toolbar. And MS wasn't the only important application-software house which came to Windows development with a Mac sensibility (or a Mac OS codebase).
One reason so many single-person products are so nice is because that single developer didn't have the time and resources to try to re-think how buttons or drop downs or tabs should work. Instead, they just followed existing patterns.
Meanwhile, when you have 3 designers and 5 engineers, with the natural ratio of Figma sketches to production-ready implementations being at least an order of magnitude, the only way to justify the design headcount is to make shit complicated.
The bigger issue I see with the "got to keep lots of designers employed" problem is the series of pointless, trend-following redesigns you'd see all the time. That said, I've seen many design departments get absolutely slaughtered at a lot of web/SaaS companies in the past 3 years. A lot of the issues designers were working on in web and mobile for the 25 years prior are now essentially "solved problems", and so, except for the integration of AI (where I've seen nearly every company just add a chat box and that AI star icon), it looks like there is a lot less to do.
Most people use only one computer. Inconsistency between platforms has no bearing on users. But inconsistency among applications on one platform is a nightmare for training. And accessibility suffers.
As a sibling commenter put it, previously developers had "rails" that were governed by MS and Apple. The very nature of the web means no such rails exist, and saying "hey guys, let's all get back to design idioms!" is not going to fix the problem.
This eroded on the web, because a web page was a bit of a different “boxed” environment, and completely broke down with the rise of mobile, because the desktop conventions didn’t directly translate to touch and small screens, and (this goes back to your point) the developers of mobile OSs introduced equivalent conventions only half-heartedly.
For example, long-press could have been a consistent idiom for what right-click used to be on desktop, but that wasn’t done initially and later was never consistently promoted, competing with Share menus, ellipsis menus and whatnot.
This feels like the root cause to me as well. Or more specifically, the web does have idioms, the problem is that those idioms are still stuck in 1980 and assume the web is a collection of science papers with hyperlinks and the occasional image, data table and submittable form.
This is where the "favourites" list and the ability to select any text on a web page came from.
Web apps not only have to build an application UI completely from scratch, they also have to do it on top of a document UI that "wants" to do something completely different.
Modern browsers have toned down those idioms and essentially made it "easier to fight them", but didn't remove or improve them.
You can definitely do so, it's just not obvious or straightforward in many contexts.
> building for modern desktop platforms is horrible, the framework-less web is being used there too.
I think it's more related to PM wanting to "brand" their product and developers optimizing things for themselves (in the short term), not for their users.
Ugh, date pickers. So many of these violently throw up when I try to do the obvious thing: type in the damn date. Instead they force me to click through their inane menu, as if the designer wanted to force me into a showcase of their work. Let your power users type. Just call your user’s attention back to the field if they accidentally typed 03/142/026.
If you have an international audience that’s going to mess someone up.
Better yet require YYYY-MM-DD.
This is the equivalent of requiring all your text to be in Esperanto because dealing with separate languages is a pain.
"Normal" people never use the YYYY-MM-DD format. The real world has actual complexity, though, and the reason you see so many bugs and problems around localization is not that there aren't good APIs to deal with it; it's that it's often an afterthought, doesn't always provide economic payoff, and any individual developer is usually focused on making sure it "looks good" in whatever locale they're familiar with.
Is it the device display language, the keyboard input language, my geolocation, my browser language, my legal location, my browser-preferred website language, the language I set last time, the language of the domain (looking at amazon.co.uk), the language that was auto-selected for me last time on mobile, or... something else entirely?
And for the rest of the users who have no idea about locales, using whatever locale they have on their computer might be technically incorrect for some of them, but at least they're somewhat used to that incorrectness already, as it's likely been their locale for a while and will remain so.
- Use localization context to show the right order for the user
- Display context to the user that makes obvious what the order is
- Show the month name during/immediately after input so the user can verify
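The last point in that list — echo the month name back so the user can verify — is cheap with the standard Intl API. A minimal sketch (the function name is mine):

```javascript
// Echo a submitted date back with an unambiguous month name, in the
// user's own locale, so a swapped day/month is obvious at a glance.
function confirmDate(isoDate, locale) {
  // Parse as UTC so the calendar date can't shift across time zones.
  const d = new Date(isoDate + "T00:00:00Z");
  return new Intl.DateTimeFormat(locale, {
    day: "numeric",
    month: "long",
    year: "numeric",
    timeZone: "UTC",
  }).format(d);
}

confirmDate("2026-03-04", "en-US"); // "March 4, 2026"
```

Because Intl picks the order and month names per locale, the same call renders "4. März 2026" for a German user, so the verification step also respects the user's expected date order for free.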
Something not mentioned here (that came from the Mac world as I understand it): everywhere that the text ends with an ellipsis, choosing that action will lead to further UI prompts. The actions not written this way can complete immediately when you click the button, as they already have enough information.
Especially now, in the AI era, where each person can make a relatively working app from the sofa, without any knowledge of UI/UX principles.
Also, the trends of hiding scrollbars, leaving huge wasted spaces, making buttons look really flat, using confusing icons, and using confusing drop-downs rather than the select/option HTML controls have all made the whole experience far inferior to where desktop UI was even decades ago.
Underrated. Except for dyslexic people, and the most obvious icon forms, I am pretty sure most people are just better and faster at recognising single words at a glance than icons.
Only if there are few icons. If every item in that menu in the screenshot of Windows had an icon, and all icons were monochrome only, you'd never quickly find the one you want.
The reason icons in menu items work is because they are distinctive and sparse.
But of course, a good design is adapted to its user: frequent/infrequent is an important dimension, as is the time willing to learn the UI. E.g., many (semi) pro audio and video tools have a huge number of options, and they're all hidden under colorful little thingies and short-cuts.
Space is important there, because you want as many tracks and Vu meters and whatever on your screen as possible. Their users are interested in getting the most out of them, so they learn it, and it pays off.
* Undo & redo
* Help files & context sensitive F1
* Hints on mouse hover
* Keyboard shortcuts & shortcut customisation
* Main menus
* Files & directories
* ESC to close/back
* Drag n drop
Revelation features when they first became common. Now mostly gone on mobile and websites.
UX has gone from something with a cause to being the cause of something.
> Is that a link? Maybe!
When Apple transitioned from skeuomorphic to flat design this was a huge issue. It was difficult to determine what was a button on iOS and whether you tapped it (and the removal of loading gifs across platforms further aggravated problems like double submits).
Another absurdity with iOS is the number of ways you can gesture. It started simply, now it is complex to the point where the OS can confuse one gesture for another.
Then the website has made its first mistake, and should delete that checkbox entirely, because the correct answer is always "yes". If you don't want to be logged in, either hit the logout button, or use private browsing. It is not the responsibility of individual websites to deal with this.
Since then, the "idiomatic design" seems to have been completely lost.
It might have started in an innocent way, all those A/B tests about call-to-action button color, etc. But it became a full scale race between products and product managers (Whose landing page is best at converting users?, etc.) and somewhere in this race we just lost the sense of why UX exists. Product success is measured in conversion rates, net promoter score, bounce rates, etc. (all pretty much short-term metrics, by the way), and are optimized with disregard to the end-user experience. I mean, what was originally meant by UX. It is now completely turned on its head.
Like I said, I wonder if there is a way back or if we are stuck in the rat race. The question is how to quit it.
As soon as UI design became a creative visual thing rather than a functional thing, everything started to go crazy in UI land.
Another annoyance is that many web forms (and desktop apps based on web tech) no longer automatically place the keyboard focus in an input field when first displayed. This is also an antipattern on mobile: even on screens that have only one or two text inputs, and where the previous action clearly expressed that you want to perform a step that requires entering something, you first have to tap the input field for the keyboard to appear before you can start entering the requested information.
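On the plain web, the fix is often a single HTML `autofocus` attribute on the field. For forms shown dynamically, the selection logic can be sketched as a pure function (modeled that way here so it's testable; in a real page you would call `.focus()` on the matching element — the field descriptors below are hypothetical):

```javascript
// Sketch: decide which field should receive initial keyboard focus.
// `fields` models the form's inputs in document order.
function initialFocusTarget(fields) {
  // Skip anything the user can't actually type into.
  return (
    fields.find((f) => !f.disabled && !f.readOnly && f.type !== "hidden") ??
    null
  );
}

// e.g. a login form: focus should land on the username field
initialFocusTarget([
  { type: "hidden", name: "csrf" },
  { type: "text", name: "username" },
  { type: "password", name: "password" },
]); // → the username field descriptor
```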
The other day I used Safari on a newly setup macOS machine for the first time in probably a decade. Of course wanted to browse HN, and eventually wanted to write a comment. Wrote a bunch of stuff, and by muscle memory, hit tab then enter.
Guess what happened instead of "submitted the comment"? Tab on macOS Safari apparently jumps up to the address bar (???), and then of course you press Enter, so it reloads the page, and everything you wrote disappears. I'm gonna admit, I did the same thing again just minutes later; then I gave up using Safari for any sort of browsing and downloaded Firefox instead.
And this highlights something that I think the author glosses over a little, but that is part of why idioms break for a lot of web applications. A lot of the keyboard commands we're used to are really commands to the OS, so their idioms are generally defined by the idioms of the OS. A web application, by nature of being an application within an application, has to try to intercept or override those commands. It's the same problem that Linux (and Windows) faces with key commands shared by terminals and GUIs. Is Ctrl-C copy or interrupt? Depends on what has focus right now, and both are "idiomatic" for their particular environments. macOS neatly sidesteps this for terminals because Ctrl-C was never used for copy; it was always Cmd-C.
Incidentally, what you're looking for in Safari is the "Press Tab to highlight each item on a webpage" setting in the Advanced settings tab. By default, with that off, you use Opt-Tab to navigate to all elements.
I'm not sure why this isn't the default, but this allows for UI navigation via keyboard on macOS, including Safari.
G Suite (no s) was the old name for Google Workspace. Google Workspace includes GMail, Google Docs, Google Sheets, Google Calendar, etc., so it doesn't really make sense to say that Google Workspace has a different UX than Google Docs, if Google Docs is part of Google Workspace.
Disclosure: I work at Google, but not on any of the listed products.
IDK if such was the intent, of course.
*laughs in linux* Wouldn't that be nice.
> Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements.
I've been hearing that for the entire Internet era yet people continue to reinvent scrollbars, text boxes, buttons, checkboxes and, well, every input element. And I don't know why.
What this article is really talking about is conventions not idioms (IMHO). You see a button and you know how it works. A standard button will behave in predictable ways across devices and support accessibility and not require loading third-party JS libraries.
Also:
> Notwithstanding that, there are fashion cycles in visual design. We had skeuomorphic design in the late 2000s and early 2010s, material design in the mid 2010s, those colorful 2D vector illustrations in the late 2010s, etc.
I'm glad the author brought this up. Flat design (often called "material design" as it is here) has usability issues and this has been discussed a lot eg [1].
The concept here is called affordances [2], which is where the presentation of a UI element suggests how it's used, like being pressed or grabbed or dragged. Flat design and other kinds of minimalism tend to hide affordances.
It seems like this is a fundamental flaw in human nature that crops up everywhere: people feel like they have to do something different because it's different, not because it's better. It's almost like people have this need to make their mark. I see this all the time in game sequels that ruin what was liked by the original, like they're trying to keep it "fresh".
All of these people who keep saying that webapps can replace desktop applications were simply never desktop power users. They don’t know what they don’t know.
I wish more people would avoid or at least introduce abbreviations that may be unfamiliar to the audience.
I don't care about the new features in a browser update. Ideally, nothing at all has changed.
I don't want a "tour" of the software I just installed. I, presumably, installed it to do something, and I just want to do that thing.
I don't want to have to select a preference for how a specific action is performed in your software. If it's not what I expected, I will learn it.
And for the love of GOD, nobody wants to subscribe to your newsletter.
I don't advocate for removal of this checkbox, but I would at least reconsider whether that pattern is truly common knowledge or not :)
> Both are very well-designed from first principles, but do not conform to what other interfaces the user might be familiar with
> The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow
There are generally two types of apps - general apps and professional tools. While I highly agree with the author that general apps should align with trends, from a pure time-spent PoV Figma is a professional tool. The design editor in particular is designed for users who are in it every day for multiple hours a day. In this scenario, small delays in common actions stack up significantly.
I'll use the Variables project in Figma as an example (mainly because that was my baby while I was there). Variables were used on the order of billions of times. An increase of 1s in the time it took to pick a variable was a net loss of around 100 human years in aggregate. We could have used more standardized patterns for picking them (i.e. Illustrator's palette approach), or unified patterns for picking them (making styles and variables the same thing), but in the end we picked slightly different behavior because, at the end of the day, it was faster.
In the end it's about minimizing friction of an experience. Sometimes minimizing friction for one audience impacts another - in the case of Figma minimizing it for pro users increased the friction for casual users, but that's the nature of pro tools. Blender shouldn't try and adopt idiomatic patterns - it doesn't make sense for it, as it would negatively impact their core audience despite lowering friction for casual users. You have to look at net friction as a whole.
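For what it's worth, the "100 human years" figure above is easy to sanity-check. A back-of-envelope sketch (the 3-billion count is my assumption, consistent with "on the order of billions"):

```javascript
// Back-of-envelope check of "1 extra second per use ≈ 100 human years".
const uses = 3e9;                          // assumed: "billions" of variable picks
const extraSecondsPerUse = 1;              // the hypothetical 1s slowdown
const secondsPerYear = 365.25 * 24 * 3600; // ≈ 3.156e7
const humanYears = (uses * extraSecondsPerUse) / secondsPerYear;
// humanYears ≈ 95, i.e. "around 100 human years in aggregate"
```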
Find a run you like, and build off that.
On the web, the rise of component libraries and consistent theming is promising.
Are you serious? Nothing has come close to it. Yeah we have higher resolution screens, but everything else is much less legible and accessible than that screenshot.
When someone asks me for a checkbox so they can have my app work their way while everyone else keeps theirs, the hair stands up on the back of my neck. The checkboxes are hard to discover unless you put them front and center, in which case they remain there forever, serving no purpose.
I would rather redesign the entire interface, either to find the right answer that works for everyone, or to learn what makes one class of users different from another. The checkbox is a mode, and modes are to be avoided if I possibly can.
I realize that this puts me at odds with a whole class of users who want to make their box do their thing. It's your box and you should do what you want. And I really love style sheets for that. Rather than cobbling together my own set of possible preferences you should have something Turing complete. Go nuts with it.
They added more customizability in Material 2 (or was it 3?), but yeah at that point some of the damage was done.
Tell me you know nothing about web development without saying you know nothing about web dev ...
1. React is an irrelevant implementation detail. You can have a plain HTML button in a button component, or you can have an image or whatever else. React has nothing to do with the design choices.
2. React is also how you get consistent design across a major web app. Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful! React components (with CSS classes) are a way for a site like Amazon to make all their buttons orange (although I don't actually know if Amazon uses React specifically). But again, whether they look and act like standard buttons comes down to Amazon's design choices ... not whether their tech stack includes React or not.
Look, idiomatic design is incredibly important to web design. One of the most popular web design/usability books, Don't Make Me Think, is all about idiomatic design!
But ultimately it's a design choice, which has very little, if anything at all, to do with which development tools you use.
I don't understand this point specifically. I make all buttons on a site have the same theme without needing a framework, library or build-step!
Why is React (or any other framework) needed? I mean, you say specifically "React is also how you get consistent design across a major web app.", but that ain't true.
However, as you build more complex and interactive applications, you need a "framework" like React. It's essential simply to handle the complexity of such applications. You will not find a major web app built without a framework (or if you do, the owners will essentially have had to create their own framework).
When you're using such tools, they are how you enforce consistent UI. Take Tailwind, the hugely popular CSS framework (I believe it's #1). It has nothing to do with JavaScript ... but even its docs will tell you (https://v3.tailwindcss.com/docs/reusing-styles#extracting-co...):
"If you need to reuse some styles across multiple files, the best strategy is to create a component if you’re using a front-end framework like React, Svelte, or Vue ..."
The author is completely mistaken in thinking React ... or even that layer of web technology at all (the development layer) ... has anything to do with what he is complaining about. It has everything to do with design choices, which are almost completely separate from which framework a site picks.
Speaking as a user not a developer, it'd be lovely.
Not a webdev, but can't you just use CSS on the <button> element for that?
There's a reason why 99.9% of web apps use JavaScript, and with it a tool (framework) like React, Astro, Angular, or Vue. And if you're using such tools, you use them (eg. you use React "components") to create a consistent UI across the site.
But again, which tool you use to develop a site has very little to do with what design choices you make. A React dev with no designer to guide him might pick the most popular date picker component for React, and have the React community influence design that way, but ... A) if everyone picks the most popular tool, it becomes more idiomatic (it's not doing this that creates divergence), and B) if there is a human designer, they can pick from 20+ date picker libraries AND they can ask the dev team to further customize them.
It's designers (or developers playing at being designers) that result in wacky new UI that's not idiomatic. It has (almost) nothing to do with React and that layer of tooling, and if anything those tools lead to more idiomatic design.
This Twitterism really bugs me.
You took the time to write a really detailed response (much appreciated, you convinced me). There's no need to explicitly dunk on the OP. Though if you really want to be a little mean (a little bit is fair imo), I think it should be closer to the level of creativity of the rest of your comment. Call them ignorant and say you can't take them seriously or something. The Twitterism wouldn't really stand on its own as a comment.
Sorry for the nitpicky rant.
It bugs me that the author is "dunking on" React without knowledge on the matter (React is the tool you use to enforce consistent UI on a site; it has almost nothing at all to do with a design decision to have inconsistent UI). So I guess I "dunked on him" in response.
But ... two wrongs don't make a right. I'd remove the unneeded smarminess, if it weren't already too late to edit.