2 points by idobaiba | 7 hours ago | 1 comment
  • idobaiba | 7 hours ago
    I built Wezzly because I got tired of the “copy-paste dance” with AI.

    Most of the time when I ask AI for help, the hardest part isn’t the question — it’s explaining the context. I have to copy logs from my terminal, take screenshots of errors, paste pieces of a webpage, or describe what I’m looking at.

    So I started experimenting with a different interface: an AI companion that lives on the desktop and can see the same screen you’re looking at.

    Wezzly runs as a lightweight native macOS app (Tauri + Rust). It periodically captures the screen and sends the visual context to the AI model you choose (OpenAI, Claude, Gemini, Grok, DeepSeek, Ollama, etc.). The idea is that the AI understands what you're looking at without needing screenshots or long explanations.
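    To make the "periodic capture → send to model" loop concrete, here is a minimal Rust sketch of that control flow. The `capture_screen` and `send_to_model` functions are hypothetical stubs standing in for the real native capture call and the HTTP request to whichever provider is configured; only the loop shape and interval throttling are the point.

```rust
use std::thread;
use std::time::Duration;

// Hypothetical stub: the real app would grab the screen via a native
// macOS capture API. Here it just returns placeholder bytes.
fn capture_screen() -> Vec<u8> {
    vec![0u8; 16]
}

// Hypothetical stub: the real app would POST the frame as visual
// context to the configured model's endpoint (OpenAI, Claude, etc.).
fn send_to_model(frame: &[u8]) -> usize {
    frame.len()
}

// Periodic capture loop: grab a frame, forward it, then sleep for
// `interval`. Capped at `max_frames` so this sketch terminates.
fn run_capture_loop(interval: Duration, max_frames: usize) -> usize {
    let mut sent = 0;
    for _ in 0..max_frames {
        let frame = capture_screen();
        send_to_model(&frame);
        sent += 1;
        thread::sleep(interval);
    }
    sent
}

fn main() {
    let sent = run_capture_loop(Duration::from_millis(10), 3);
    println!("sent {sent} frames");
}
```

    In the real app the interval, frame scaling, and which provider receives the frame would all be user-configurable; the loop above only illustrates the throttled-capture idea.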

    A few workflows where it’s been useful for me so far:

    • Debugging terminal errors without copying logs
    • Watching tutorials or videos and asking questions while they play
    • Getting help filling forms or navigating complex dashboards

    The app itself runs locally on your machine. Your API keys stay in your system keychain, and screenshots aren't stored.

    I’m curious what people here think about this interface direction.

    Is “continuous visual context” the missing piece for AI assistants, or does this cross a line where it becomes too intrusive?

    Project: https://wezzly.ai
    GitHub: https://github.com/idobaibai-wezzly/wezzly-companion-public
    Demo: https://youtu.be/ya1Kz_iAraE