The Claude Code rendering UI is the first place where I realized this TUI is more like a DOS-era Borland UI system than a command-line interface.
I was poking at the CLAUDE_CODE_NO_FLICKER=1 setting when I realized what exactly this TUI is: layers of content drawn on top of each other with terminal escape codes.
Ended up reading Ink, the React renderer for the terminal:
https://github.com/vadimdemedes/ink
Fascinating how it ends up looking like WordPerfect or WordStar from the past instead of pixel-based graphics.
The usability for a vision-impaired user is about the same, though I remember braille pads for DOS tools (80x25) that worked better than all the screen readers that came later.
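To make the "layers drawn with terminal codes" observation concrete, here is a minimal sketch (plain Python, no libraries; the status-line content is made up) of the kind of ANSI escape sequences a framework like Ink ultimately emits to overdraw regions of the screen in place:

```python
# ANSI escape sequences reposition the cursor and overwrite cells in place;
# a TUI "layer" is just a region that gets jumped to and redrawn.

ESC = "\x1b"

def move_to(row: int, col: int) -> str:
    """CUP - position the cursor at 1-based (row, col)."""
    return f"{ESC}[{row};{col}H"

def clear_line() -> str:
    """EL - erase the entire current line."""
    return f"{ESC}[2K"

def draw_status(row: int, text: str) -> str:
    """Overdraw one 'layer': jump to a row, wipe it, write new content."""
    return move_to(row, 1) + clear_line() + text

# One "frame": two rows repainted on top of whatever was there before.
frame = draw_status(1, "Claude is thinking...") + draw_status(2, "[esc to cancel]")
print(repr(frame))  # the raw byte stream the terminal interprets
```

Run it and you see plain text plus control bytes; there is no pixel buffer anywhere, just a grid of cells being rewritten.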
They’re attempts at pretending to have Windows (etc.) GUIs in a terminal.
Same stuff people made for DOS when Windows wasn’t common or good enough yet.
I’m not surprised they’re a disaster, or built without understanding the abilities of the terminal they’re running on.
Actually, I think that is close to a good name for them: Terminal-based GUIs.
Some are pretty useful; for instance, I like lazygit as a simple dashboard/panel for a small repo (or when making small changes to a larger one). Some make me wonder what those who made them were smoking!
The less silly ones are handy when you are tinkering with a faraway machine and want something a little more interactive than CLI commands and stuff connected by pipes and scripts, but don't want to deal with the latency of GUI remoting. Some, though, are so badly thought out that they are slower than using a browser over long-distance X…
My objection to “TUI” is that I don’t think it’s specific enough for what’s happening here. You could easily argue it applies to most readline-style stream CLI programs, too.
Would you call a fully 3D UI in VR, not a planar window in the VR world but true 3D, a GUI? It is graphical by definition. But if you talked to someone about a GUI, that’s not what they’d think you were talking about without additional context.
That’s my objection. I think TUI implies way less than what these programs are doing. Yeah it can describe them but I don’t think it should be the word for them.
I get that maybe the constraints of terminals force the design of TUIs to be more focused on the purpose of the tool than on polish, but it's not that compelling a point to me.
But the terminal is just fundamentally the wrong basic abstraction on which to build a structured GUI; it just happens to require few enough bits to be sent over the wire that it actually works reasonably well over SSH, as opposed to pushing graphics.
Not only is forwarding trivial with Wayland, it also tends to provide a better experience than X11 does.
Good performance with X11 is possible if you avail yourself of the X11 primitives.
This is where Lilith Nyan-Nyan (she/they/it, mastodon lilith@catbox.gay), core Wayland developer, pipes up and says "OwO nobody develops like that anymore :3 everything is rendered client-side on the GPU, and then zero-copy transferred to the compositor for display... really that's the only function the X server has anymore, and wayland is a better architecture for this uwu"
But what Mx. Nyan-Nyan, and a lot of developers of her/their/its generation don't realize, is that in the past Unix developers strove to write programs with UIs that were usable across network lines, and availed themselves of the X11 protocol's graphical primitives to do just this. Emacs, for example—particularly emacs-lucid—is incredibly snappy even when an Emacs in the cloud is rendering to your local X server. If you write things the modern way, which developers largely do assuming a 100% local display, then in the remote case the X client will have no choice but to ship uncompressed pixmaps of each frame over the network to the X server, which greatly slows things down.
What we are seeing is that "the modern way" is beginning to be applied to terminal apps as well! Claude Code, for instance, has a React-based layout and rendering flow (does it use Ink?), and (until recently?) redrew the entire chat history on each update, leading to the slowdown and flicker problems it had. It would be absolutely unusable over, say, the kind of 9600-baud link that was commonplace back when people used actual terminals to connect to large systems (and ran full-screen apps in those terminals!). It's only a bloat cycle or two more before these programs start to struggle over a typical ssh link, too!
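The 9600-baud argument above is easy to check with back-of-the-envelope numbers. This is an illustrative sketch, not a measurement of Claude Code itself; the transcript size (500 lines by 80 columns) is a made-up assumption:

```python
# Why full-history redraws die on a slow link: compare the bytes a
# repaint-everything strategy ships per update vs. patching one line.
# Numbers are hypothetical, for illustration only.

COLS, HISTORY_LINES, BAUD = 80, 500, 9600
BYTES_PER_SEC = BAUD // 10  # ~10 bits per byte on a serial line (8N1)

full_redraw = HISTORY_LINES * COLS  # repaint the entire transcript each update
incremental = 1 * COLS              # append/patch a single line instead

print(f"full redraw : {full_redraw} bytes, ~{full_redraw / BYTES_PER_SEC:.1f}s at {BAUD} baud")
print(f"incremental : {incremental} bytes, ~{incremental / BYTES_PER_SEC:.2f}s at {BAUD} baud")
```

At these assumed sizes a full repaint is tens of seconds per update at 9600 baud, while an incremental patch is well under a second; the same ratio is what eventually shows up as lag and flicker on a congested ssh link.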
We've utterly lost the plot when it comes to software.
I typically prefer CLI myself but having a TUI to manage torrents for instance was much more ergonomic.
Like, okay, they are a big step back with accessibility, but they're flickering garbage because they were vibecoded in a weekend and the TS or Python library they're built on was similarly forced upon this world.
If you're going to "run command, edit command, run command", performing the edits from the terminal you're running the commands in seems reasonable/intuitive. (In contrast, for tools like VSCode, I think it's more common for terminals to take up a fraction of the screen space rather than switching it to full screen. And then developers will say they need a huge monitor).
It also seems that keyboard-driven programs are more commonly TUIs than GUIs, e.g. magit or lazygit. Or lazydocker. Or k9s.
- you can use em over ssh
- they’re typically made with keyboard usage in mind, which is often an afterthought in a typical browser based UI
- other GUI options are the browser (sandboxed, obviously; not good for lil personal tools), native (not dead simple, compared to TUI/browser/Electron), or something like Electron (no way lmao)
I don’t seek out TUIs instead of other solutions. But it’s so dang easy to pop open a new pane and run lazygit. And it makes you look really cool when people walk behind you.
If developers want to experiment with various UI configs then let them but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
They're a long way from web apps, far worse on most axes.
Back in the 90s, when most SAP systems switched from AS/400 terminals to Windows NT, people reported massive losses in productivity.
I've never worked on SAP, but my mother did. Basically, she went from a fully tabular, function-key-driven workflow to holding a mouse, moving around, and clicking a lot (tabbing and F-keys were lost for many functions).
She showed me how she could go ESC ESC F4 F3 TAB TAB and move across the whole system at super speed. And this was a terminal, not even the actual system!
The short of the story is this:
Windows-based applications work best for discoverability and new users.
Terminal-based applications work best for faster, memory-based navigation and power users.
In fact, all successful applications for professionals/power users are built with fast paths in mind. Even Microsoft's ribbon, which gets a lot of hate for some reason, is an example of that: it's keyboard-driven, customizable, and discoverable at the same time.
Clearly no one put thought into any of that. It was just “make terminal, but pretty”.
Seems a lot more viable than trying to get new standard escape codes adopted and outputting those along with visual content that may be flickering erratically. It also probably gets too complex faster than those proposals once the UIs become more intricate, but IMO it's really hard to defend TUIs for anything but relatively simple programs, as an in-between of a CLI and a native application.
I am surprised, though, that something like "turning off the cursor" enhances accessibility.
> It isn't fair to blame TUIs.
> The real problem is that pretty much the whole stack has a terrible AX story.
> First, most GPU-rendered terminal emulators don't engage in system-provided accessibility APIs AT ALL. Because text is GPU-rendered, AX tooling can't "read" it, it just shows up as an image. This applies to Kitty, Alacritty, WezTerm. My own terminal Ghostty is AX-readable (on macOS), and so are others like iTerm2 and Terminal.app (which admittedly do it better than me, we have gaps to fill).
> Second, there are no terminal sequences or initiatives at all for TUIs to communicate AX information to the emulator, so the emulator itself can't do much more than display a blob of text to AX tooling. We need the equivalent of ARIA-style annotations but for terminal cells, runs, and regions. No such initiative exists. Even if TUIs do great things with the cursor, this is going to bite a lot of use cases.
> As an example of combining the above, I've been working on something with Ghostty where we integrate semantic prompt (OSC133) and AX APIs so that we can present each shell prompt, input, and command as structurally significant to AX tooling (rather than simply a text box where the cursor is somewhere else). This shows the importance of the relationship between terminal specs (OSC133), TUIs (which must emit OSC133), and terminal emulators (which must both understand OSC133 AND communicate it to AX APIs).
> The whole stack is rotten. And no one is earnestly trying to fix it (including me, I have limited time and I do my best but this is a WHOLE TOPIC that requires a huge amount of time and politicking the ecosystem and I don't have it, sorry).
Bonus: a simultaneously awesome and horrible reality is that AI is really helping to improve AX here. A lot of AI tooling uses/abuses AX APIs to make things happen. How is OpenAI reading your list of windows, typing into them, etc.? Accessibility frameworks! So a lot more apps are taking AX integration a lot more seriously, since it's table stakes for AI to use them... Sad that it requires that, but the glass-half-full view is that more software is doing it.
The language of accessibility – Staffnet | ETH Zurich https://ethz.ch/staffnet/en/service/communication/digital-ac...
I wish the terminal-wg was more active. There's a bunch of weird odd OSCs folks have tried to make for enhancing the structure of the terminal, for various ways to emit more layout-coupled semantic info. Accessibility APIs are great, but in most forms a huge chunk of their capabilities feels pretty disconnected from the actual drawing on the screen; they're somewhat a parallel construct to what's on screen. Using OSC to layer in more information about what is being drawn feels more right.
Two examples: collapsible regions and semantic prompt regions: https://gitlab.freedesktop.org/terminal-wg/specifications/-/... https://gitlab.freedesktop.org/terminal-wg/specifications/-/...
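For anyone who hasn't seen them, the OSC 133 semantic-prompt markers mentioned above are just framed escape sequences a shell or TUI emits around its prompt and command output. A minimal sketch (the prompt and command text here are made-up examples; the sequence meanings follow the FinalTerm/iTerm2 convention):

```python
# OSC 133 semantic prompt markers: invisible framing that lets the terminal
# emulator know which cells are prompt, which are user input, and which are
# command output - exactly the structure AX tooling needs.

ESC, BEL = "\x1b", "\x07"

def osc(payload: str) -> str:
    """Wrap a payload in OSC ... BEL framing."""
    return f"{ESC}]{payload}{BEL}"

prompt_start  = osc("133;A")    # beginning of the prompt
command_start = osc("133;B")    # prompt done, user input begins
output_start  = osc("133;C")    # command launched, output follows
command_done  = osc("133;D;0")  # command finished, exit code 0

# One shell round-trip as it appears on the wire:
wire = (prompt_start + "$ " + command_start + "make test"
        + output_start + "ok\n" + command_done)
print(repr(wire))
```

An emulator that understands these markers can expose each prompt/command/output run as a distinct structural region, which is the Ghostty-plus-AX integration described upthread.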
But in general it feels like, for all the TUI interest, not many folks are around and working together to actually figure out how to advance the terminal itself.