126 points by orhunp_ 6 hours ago | 10 comments
  • zokier 4 hours ago
    > Embedded-graphics includes bitmap fonts that have a very limited set of characters to save space (ASCII, ISO 8859 or JIS X0201). This makes it impossible to draw most of Ratatui's widgets, which heavily use box-drawing glyphs, Braille, and other special characters

    You have a bitmap display; you can just draw lines and stuff directly without needing to rely on font-based hacks.

    • weinzierl 4 hours ago
      Sure, but that's beside the point.

      Text-based graphics with fancy or custom fonts is just crazy efficient. That is exactly how we got the amazing graphics of The Last Ninja or Turrican on machines with less than 64 KiB of usable RAM.

      The same goes for more modern embedded devices. If you constrain yourself to text, you increase both runtime performance and developer productivity.

      • orbital-decay 3 hours ago
        It was crazy efficient on character or tile-based hardware. It makes no difference on bitmap displays, or rather adds some overhead.
        • weinzierl 3 hours ago
          At the end of the day it's always pixels - always has been [1] - and the efficiency of storing and blitting a small number of fixed-size rectangles is hard to beat if you can get away with it.

          [1] Except for the early oscilloscope style vector displays maybe.

          • gmueckl 3 hours ago
            No, this is technically not fully correct. Early text-based display output systems relied on special character-generator hardware to produce the display signal for the text on screen. Those systems did not have any means of generating arbitrary pixel patterns.
            • weinzierl 3 hours ago
              Do you have an example? All the 8-bitters I know drew the characters from memory, which was a character ROM by default but could be changed either with a screwdriver or by bank-switching some RAM in place.

              EDIT: If you mean they were not copied into a frame buffer first, you are right. I should not have written 'blitting'.

              • awkwardleon 2 hours ago
                Maybe too old to be applicable here, but the TRS-80 Models I and III (and probably more models) had no way to address pixels. You had to use semigraphic characters to emulate larger blocks at sub-character resolutions. https://bumbershootsoft.wordpress.com/2022/01/28/touring-the...
              • adiabatichottub 2 hours ago
                I recommend reading the TV Typewriter Cookbook.

                https://archive.org/details/tvtcb_doc

              • direwolf20 2 hours ago
                With character RAM you can still only have up to 256 unique 8x8 blocks on screen.
              • LoganDark 2 hours ago
                The character ROM was not read and processed by the CPU. The CPU set some bytes in video RAM, which served as indexes into the character ROM by the video output hardware.

                I believe on some systems there were some tricks that allowed some bitmap display by redefining glyphs. One example off the top of my head is The 8-Bit Guy's Planet X2, which can use text mode but with glyphs redefined to use for icons, units, terrain, UI, etc.

          • orbital-decay 3 hours ago
            Character-based hardware only stores the characters and the grid instead of the full bitmap for the frame, which is very efficient memory-wise. Tile-based hardware (e.g. most console graphics chips in the 8/16 bit era) also had scrolling and layers, and was extremely memory-efficient as well. With bitmap displays you already store full frames.
            • weinzierl 3 hours ago
              Sure. Maybe I should not have written 'blitting' when the rectangles are not copied from one memory location to another but end up directly on the screen.

              My original point still stands though: putting a small number of fixed-size rectangles on a screen is more efficient than line drawing.

              • codebje an hour ago
                It's still wrong, though.

                Without dedicated sprite hardware it's not more efficient to read a byte from one place and write a byte to another than to write background bytes and write line colour bytes. DMA controllers on µCs won't save you: a character is usually something like 8x8 or 8x16 and displays are rarely more than 8 bit, so we're talking about DMA transfers of just 8 bytes each, and the overhead of setting them up more than offsets any efficiencies gained.

                An 8x12 cell, for example, is 96 pixels to be transferred in 12 rows of 8 pixels. That's 96 reads, 96 writes, and (assuming an unrolled inner loop) 12 branches, to copy as a sprite. Or, it's 96 writes and 12 branches to clear, and (horizontal line) another 8 writes to draw, no branches.

                When your graphics become too complex for simple drawing routines to handle, they're probably also too complex for simple character ROMs.

      • Johanx64 an hour ago
        The same isn't true for modern embedded devices; they don't have tile-rendering hardware. If you connect an I2C/SPI screen (SSD1306, ST7735), you write all the pixels to the screen (or to some subregion of it); these screens do have backing memory in them.

        So in order to draw a line, you will - objectively - have to copy/move more bytes if you approximate the line with character symbols.

        This isn't a big deal, but crazy efficient it is not.

        All the efficiency when drawing on those screens mostly relies on how well you chain together DMA transfers to portions of the screen you want stuff to be drawn to, so that SPI transfers aren't blocking the CPU (that's assuming you don't have memory for a second full-screen buffer).

      • zokier 3 hours ago
        Are you claiming that cobbling together boxes and whatnot out of line-drawing characters is more efficient than just drawing the lines directly?
        • LoganDark 2 hours ago
          I think they're claiming that having character-based pipelines and algorithms can be more efficient than doing everything on the level of pixels... I can't help but feel there's a middle-ground somewhere, though.
  • nine_k 4 hours ago
    «Mousefood - a no-std embedded-graphics backend for Ratatui!»

    Hence 100% Rust. Works on ESP32, RP2040, and even STM32. Several displays are mentioned, including e-ink.

  • Liftyee 2 hours ago
    Really neat project, but: Rust on embedded. I haven't tried it yet - has anyone got experience comparing it to C/C++?
    • VorpalWay an hour ago
      My experience is with Arduino Wiring vs Rust with Embassy. And very much from a hobbyist POV.

      Rust on embedded uses a HAL layer, which is vendor-independent (and resolved at compile time, like templates would be in C++). It doesn't cover everything yet, but basics like GPIO, SPI, I2C etc. are covered. This avoids the issue of N drivers times M vendor SDKs: I2C drivers can just be written against the HAL, and you instantiate them with a specific HAL in your application. It also reduces vendor lock-in. The setup process still requires some chip-specific code to select which pins to use etc., but once you are past that you can be vendor-neutral.

      Speaking of which, the API uses some clever patterns (called typestate) to ensure at compile time that your peripheral config is valid: if you "take" GPIO2 you can't do that again, so you can't give the same pin to two different pieces of code by mistake. And if the driver expects a pin configured as output, you can't give it an input pin (you can convert a pin to "dynamic at runtime" if you really need to, so there is an escape hatch).

      Then there is the Embassy framework. This is an alternative to RTOSes (there are some Rust RTOSes as well; I haven't tried them). It makes use of async/await tasks in Rust that are statically allocated and scheduled. You can have several priority levels of schedulers (internally each scheduler is cooperative for the tasks inside it, but schedulers preempt each other using interrupts).

      Async actually makes many things on embedded easier, such as waiting for a GPIO. No longer do you need to write your own interrupt handler or figure out when to put the chip in a low-power state; the scheduler and HAL futures do it for you.

      All that said: C++ still has a larger ecosystem with more tutorials, drivers, and better chip support. But that is advancing rapidly in the Rust world. The ESP32 series has official vendor support in Rust, for example, as do at least one or two other vendors (or they are in the process of adding it). Popular chips like the RP2040 have support, and I have seen HALs for NRF and ST around (but never played with them). Drivers for common chips exist.

      So I would say it is worth experimenting with at least, but you should check up front what HALs and drivers exist for what you want to use and check how complete those are. Two years ago I wanted to do I2S things on the ESP32, but that was still missing. A year ago it had support, but some of the DMA things were clunky still. I should check again some time.

  • orhunp_ 4 hours ago
    Hey all, thanks for the interest in the crate!

    I'm currently live on YouTube (doing some maintenance & testing). Feel free to join if you have any questions!

    https://www.youtube.com/watch?v=PoYEQJbYNMc

  • piskov an hour ago
    But all the modern TUIs are React/Solid (Claude Code, opencode); this should be in TypeScript

    (thank god it isn't; why people drag the web everywhere is beyond me)

  • onjectic 4 hours ago
    Reminds me a lot of the UI styles in the Minecraft mod ComputerCraft.
    • LoganDark 2 hours ago
      ComputerCraft was part of how I learned to code.

      I first learned about password hashing when I tried to make the actually most secure door lock program. I first used raw SHA-256, but then someone on the forum introduced me to PBKDF2...

      Sometimes I miss those days.

    • orhunp_ 4 hours ago
      we're bringing back those aesthetics!
  • wjholden 4 hours ago
    Cool! I just recently began learning the Raspberry Pi Pico. Could anyone recommend a specific display that I could use with the Pico 2/2W and Mousefood?
  • GeertJohan 4 hours ago
    This is awesome! I love Ratatui; having it available on embedded is very cool! I wonder if it will work with async on embedded, e.g. Embassy...
    • orhunp_ 4 hours ago
      absolutely, it will work with any other embedded Rust application. The backend only provides a bridge between the embedded-graphics library and the Ratatui widget renderer.
  • dbacar 5 hours ago
    Hi Orhun, could it be used with a CYD (Cheap Yellow Display)?
    • orhunp_ 4 hours ago
      Most likely. I just checked and it uses embedded-graphics already which means you can plug in Mousefood directly. The touchscreen might be a bit tricky though, it might need some hacking on the event handler side. But it will most likely work if you map the coordinates to the terminal cells abstraction somehow.
    • nine_k 4 hours ago
      At the bottom of the page there is a mention of "Phone-OS - A modern phone OS for ESP32 CYD", so apparently it is supported.
    • 01HNNWZ0MV43FF 4 hours ago
      • 0xbadcafebee 2 hours ago
        FWIW, in my research into this, it looks like there are inconsistencies in the devices available, since there's no one manufacturer and they're clones of clones. One might be reliable, but then it goes out of stock.
  • IamDaedalus 4 hours ago
    aaaaand this is how I learn Rust. I learned Go because of Bubbletea, and Mousefood (which combines my work as an embedded systems programmer and my love for terminals) is here
    • redanddead 4 hours ago
      Oh bubbletea is really cool. Is this how most interactive CLIs are made?
      • GeertJohan 4 hours ago
        I used Bubbletea for a while but quit because of inconsistencies in the design. I went to Ratatui and never looked back. Go and Bubbletea are nice, but Rust is much better suited for building TUIs.