  ma1or, 5 hours ago
    Hi HN,

    For years I’ve been rebuilding UIs — legacy internal tools, SaaS dashboards, competitor clones — and I kept running into the same problems:

    - screenshots lose states, transitions, and behavior
    - design files exist only for new features
    - prompts can guess layout but not actual interaction logic
    - reverse-engineering takes weeks, not minutes

    So I built something for myself: Replay.

    The core idea is simple but (I think) new: treat a video of the UI as the source of truth. Not single frames, not mocks, but how a UI actually behaves over time — clicks, scrolling, navigation, state changes.

    Replay analyzes a recording end-to-end and reconstructs:

    - a responsive UI based on real behavior
    - a flow map of transitions and possible paths
    - componentized code (React + Tailwind)
    - an editable design system
    - all states observed in the video

    It doesn’t guess missing features: it faithfully rebuilds what it sees, in the style you want, keeping the underlying data.

    I’ve used this to rebuild several real products and internal systems, and it’s saved me days or weeks compared to manual reconstruction. This only feels possible now: recent advances in AI make it feasible to infer state machines and interaction graphs, not just pixels.
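    To make "interaction graph" concrete, here is a minimal sketch of the kind of structure such a tool might infer from a recording. All names and shapes here are my own illustration, not Replay's actual internals: each observation pairs a recognized screen with the action taken on it, and consecutive observations become edges in a state graph.

```typescript
// Hypothetical sketch of inferring an interaction graph from a
// recording's observations. Names/shapes are assumptions for
// illustration, not Replay's real data model.

interface Observation {
  screen: string; // e.g. a label or hash of the rendered frame
  action: string; // e.g. "click:#submit", "scroll", "navigate:/home"
}

interface InteractionGraph {
  states: Set<string>;
  // edges: fromScreen -> action -> toScreen (last observation wins)
  edges: Map<string, Map<string, string>>;
}

function buildGraph(observations: Observation[]): InteractionGraph {
  const graph: InteractionGraph = { states: new Set(), edges: new Map() };
  for (let i = 0; i < observations.length; i++) {
    const { screen, action } = observations[i];
    graph.states.add(screen);
    const next = observations[i + 1];
    if (!next) continue; // final frame has no outgoing transition
    if (!graph.edges.has(screen)) graph.edges.set(screen, new Map());
    graph.edges.get(screen)!.set(action, next.screen);
  }
  return graph;
}

// Example: three frames from a login flow
const graph = buildGraph([
  { screen: "login", action: "click:#submit" },
  { screen: "loading", action: "auto" },
  { screen: "dashboard", action: "" },
]);
console.log(graph.states.size); // 3
console.log(graph.edges.get("login")?.get("click:#submit")); // "loading"
```

    Once you have a graph like this, generating componentized code per state and routing per edge is a (comparatively) mechanical step.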

    We’re in early access beta (2 free reconstructions for new users), and I’m curious about a few things:

    - What edge cases might this fail on?
    - Do others have workflows that suffer from the same “screenshots aren’t enough” problem?
    - How useful would a tool like this be for your team?

    Beta: replay.build

    Happy to answer questions. Bartosz