2 points by marioskales 2 days ago | 4 comments
  • nachocoll 7 hours ago
    The "no PhD required, grandma-approved" framing is a really honest description of what vibe coding enables at its best: domain experts building useful tools for real people, without needing to be software engineers first.

    The BSL-1.1 license choice is interesting too — shows you're thinking about sustainability and intent, not just shipping fast. That kind of explicit decision-making is actually what separates vibe coding projects that survive from ones that collapse: when the human behind the project is thinking about architecture, maintainability, and accountability.

    Somewhat related: the Agile Vibe Coding Manifesto (https://agilevibecoding.org) is trying to formalize exactly these principles — that customer value and human accountability still drive everything even when AI is writing most of the code. Your project is a good example of vibe coding done with intention.

    Good luck with Skales — the accessibility angle (no Docker, no CLI) is genuinely underserved.

  • marioskales 2 days ago
    Added a demo video showing the desktop buddy in action:

    https://www.youtube.com/watch?v=8fXGsQGyxCU

  • hkonte 2 days ago
    The accessibility angle is underrated — making local AI agents usable without Docker/CLI knowledge opens it to an entirely different audience. The "IT guy in the family" problem is real and nobody is solving it seriously.

    One thing worth considering as you build out the agent UX: the quality of the default prompts/instructions you ship with Skales will matter a lot for first impressions. A non-technical user can't debug a bad system prompt — they'll just think the AI is dumb. Structuring those instructions carefully (role, constraints, examples) makes a huge difference.
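    The "role, constraints, examples" structuring can be sketched in a few lines. A minimal, illustrative TypeScript sketch (the type and function names here are mine, not from flompt or Skales) of assembling a system prompt from explicit parts, so each piece can be reviewed on its own instead of living in one opaque blob of text:

```typescript
// Illustrative only: build a system prompt from explicit, reviewable parts
// (role, constraints, few-shot examples) rather than one hand-written blob.
type PromptSpec = {
  role: string;
  constraints: string[];
  examples: { user: string; assistant: string }[];
};

function buildSystemPrompt(spec: PromptSpec): string {
  return [
    `Role: ${spec.role}`,
    "Constraints:",
    ...spec.constraints.map((c) => `- ${c}`),
    "Examples:",
    ...spec.examples.flatMap((e) => [
      `User: ${e.user}`,
      `Assistant: ${e.assistant}`,
    ]),
  ].join("\n");
}
```

    The point is less the string format and more that a non-technical user's "the AI is dumb" complaint becomes debuggable when each section can be inspected and swapped independently.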

    Built flompt (https://flompt.dev / https://github.com/Nyrok/flompt) for exactly that — visual prompt structuring that helps get the instructions right before they become a UX liability.

    • marioskales 2 days ago
      You're spot on: the default system prompts were actually one of the things I spent the most time on. Skales ships with 5 personas (Entrepreneur, Coder, Family, Student, Creative), and each one has a carefully tuned system prompt so the AI feels useful from the first message. There's also a "Friend Mode" that makes responses more casual and conversational, because a non-technical user who gets a wall of formal text will bounce immediately.

      But you're right that there's more work to do there. The gap between "technically capable" and "feels good to use" is huge, especially for people who've never used an AI tool before. Will check out flompt; prompt structuring is exactly the kind of thing that could help refine this further.

      Thanks for the thoughtful feedback!

  • jlongo78 2 days ago
    [flagged]
    • marioskales 2 days ago
      Great question! Multi-agent group chat (3-5 AI personas discussing a topic) works well with no latency issues, even though my main PC only has 8 GB of RAM. Each round is one API call per participant, so latency depends mostly on your chosen model and provider. With faster models like Gemini Flash or Groq, responses come in 1-2 seconds per turn. Heavier models like Claude or GPT-4 take a bit longer but still feel smooth: each AI in the group chat has a "response timeout", and if a participant times out, it's skipped for that round while the next one continues the discussion.
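      The round described above (one call per participant, skip on timeout) can be sketched roughly like this. This is my own simplified sketch, not Skales' actual code; `Persona`, `withTimeout`, and `runRound` are illustrative names:

```typescript
// Illustrative sketch of one group-chat round: each persona answers in turn
// via a single provider call, and any persona whose reply exceeds its
// response timeout is skipped for that round.
type Persona = { name: string; timeoutMs: number };

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), ms);
  });
  // Clear the timer either way so no stray rejection fires later.
  return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
}

async function runRound(
  personas: Persona[],
  ask: (name: string) => Promise<string>, // one API call per participant
): Promise<Map<string, string>> {
  const replies = new Map<string, string>();
  for (const p of personas) {
    try {
      replies.set(p.name, await withTimeout(ask(p.name), p.timeoutMs));
    } catch {
      // Timed out: skip this persona; the next one continues the round.
    }
  }
  return replies;
}
```

      Because the loop awaits each call in turn, total round latency is the sum of per-participant response times, which is why the provider's speed dominates.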

      For Execute Mode (multi-step autonomous tasks), Skales queues steps sequentially so there's no parallel bottleneck: it plans, you approve, then it runs through each step. There's also a desktop buddy (think Microsoft's Clippy, but actually useful) that sits in your system tray (if activated) as soon as you minimize or close the main window, so you can ask it quick questions without even opening the main interface. It runs within the same Electron process, so zero additional RAM overhead. Idle RAM sits around ~300 MB (down from at least 400 MB), which keeps things snappy.
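      The plan-approve-run flow boils down to something like the following. Again a rough sketch of the idea rather than Skales' internals; `Step`, `executePlan`, and `approve` are names I made up:

```typescript
// Illustrative sketch of an Execute Mode flow: show the full plan, get one
// up-front approval, then run each step strictly in sequence.
type Step = { description: string; run: () => Promise<void> };

async function executePlan(
  steps: Step[],
  approve: (plan: string[]) => Promise<boolean>,
): Promise<number> {
  // The user sees and approves the whole plan before anything runs.
  if (!(await approve(steps.map((s) => s.description)))) return 0;
  let completed = 0;
  for (const step of steps) {
    await step.run(); // sequential: the next step starts only after this one
    completed++;
  }
  return completed;
}
```

      Running steps one at a time trades speed for predictability, which matters when the audience can't debug a half-finished parallel run.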

      The main speed factor is honestly the LLM provider, not Skales itself. With local Ollama models it's purely your hardware.

      Happy to answer more specific questions. Thanks for asking, jlongo78!