3 points by andreabergonzi 6 hours ago | 1 comment
  • dtagames 6 hours ago
    It's a good argument. Canned UI and pre-designed user interactions are already done for. We can't sit around and think of what a user might want to do or how to present it; those concepts have to be fluid and chosen by the user.

    While agents that can book a hair appointment are interesting, that's more of a workaround than the kind of UI I think we're going for. The visual appearance of the software itself must change dynamically according not only to the task but to the user's preferences. This is something we haven't seen demonstrated yet.

  • andreabergonzi 6 hours ago
      Glad it resonated. You hit the nail on the head regarding agents being a 'workaround'; that's exactly why I categorize them as the 'Transitional phase' rather than the destination. They are essentially bots trying to navigate a web that wasn't built for them.

      Your point about the visual appearance changing dynamically is the 'Holy Grail' I touch on in the 'Generative UI' section. We are currently stuck designing static screens for dynamic problems.

      I agree we haven't seen a true demonstration yet. Do you think that shift happens at the App level first (e.g., a dynamic Spotify), or does it require a whole new OS paradigm (a 'Generative OS') to work?

      • dtagames 6 hours ago
        Good question! I'd say it happens at the app level first because the context of the OS is too big a surface to start with. But a RAG app for a specific vertical could have enough context to dynamically draw a custom UI for every user, given the constraints on what the app is generally about.
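
The vertical-RAG idea above can be sketched roughly like this: the app's fixed scope bounds what the model may draw, while retrieval supplies the per-user context that makes each UI custom. Everything here is an illustrative assumption (the toy `retrieve`, the `buildUiPrompt` helper, the salon-booking vertical), not a real API.

```typescript
type Doc = { id: string; text: string };

// Toy retrieval: rank documents by naive keyword overlap with the query.
// A real app would use embeddings and a vector store instead.
function retrieve(query: string, corpus: Doc[], k: number): Doc[] {
  const terms = query.toLowerCase().split(/\s+/);
  return corpus
    .map(doc => ({
      doc,
      score: terms.filter(t => doc.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(entry => entry.doc);
}

// Assemble the generation request: the component whitelist is the app's
// constraint surface; the retrieved context is what personalizes the UI.
function buildUiPrompt(query: string, corpus: Doc[]): string {
  const context = retrieve(query, corpus, 3).map(d => d.text).join("\n");
  return [
    "You generate UI for a salon-booking app.",      // the vertical
    "Only use components: List, Calendar, Button.",  // the constraints
    `User goal: ${query}`,
    `Relevant context:\n${context}`,
  ].join("\n\n");
}
```

The prompt would then go to a UI-generating model; the point is that the vertical's narrow scope is what makes the constraint list small enough to be tractable.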
        • andreabergonzi 5 hours ago
          That makes a lot of sense; it's definitely the safer place to start.

          It implies that design systems are about to change fundamentally. Instead of shipping a library of static components, we'll need to ship a set of constraints and rules that tell the RAG model how it's allowed to construct the UI on the fly.
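
A minimal sketch of what "shipping constraints instead of components" could look like: the design system publishes machine-checkable rules, and the app validates whatever UI tree the model generates against them before rendering. Every name here (`ComponentRule`, `validateScreen`, the example components) is illustrative, not a real library.

```typescript
type ComponentRule = {
  component: string;      // a component the model is allowed to emit
  allowedProps: string[]; // props the model may set on it
  maxPerScreen: number;   // a layout guardrail
};

type GeneratedNode = { component: string; props: Record<string, unknown> };

// Returns a list of violations; an empty list means the screen is valid.
function validateScreen(nodes: GeneratedNode[], rules: ComponentRule[]): string[] {
  const errors: string[] = [];
  const counts = new Map<string, number>();
  for (const node of nodes) {
    const rule = rules.find(r => r.component === node.component);
    if (!rule) {
      errors.push(`unknown component: ${node.component}`);
      continue;
    }
    const seen = (counts.get(node.component) ?? 0) + 1;
    counts.set(node.component, seen);
    if (seen > rule.maxPerScreen) {
      errors.push(`too many ${node.component} components`);
    }
    for (const prop of Object.keys(node.props)) {
      if (!rule.allowedProps.includes(prop)) {
        errors.push(`${node.component}: prop '${prop}' is not allowed`);
      }
    }
  }
  return errors;
}

// The "design system" an app might ship alongside its generation prompt.
const rules: ComponentRule[] = [
  { component: "Card", allowedProps: ["title", "body"], maxPerScreen: 6 },
  { component: "Chart", allowedProps: ["series", "kind"], maxPerScreen: 2 },
];
```

Rejected trees could be retried or repaired, which keeps the model's freedom while the design system stays the source of truth for what a valid screen is.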