8 points by pritesh1908 a day ago | 5 comments
  • KomoD a day ago
    That is an extremely misleading title because it made it sound like Vapi was open sourced, not that you just made a clone.
    • pritesh1908 a day ago
      Fair point on the title - should have been clearer. Dograh is an open source alternative to Vapi, not a clone though. Vapi/Retell are closed platforms; this is open source infra you self-host and modify. Like saying n8n is a clone of Zapier because they solve the same problem.

      Same category, but fundamentally different model.

  • a6kme a day ago
    Hello HN. I am Abhishek, one of the creators and maintainers of Dograh - github.com/dograh-hq/dograh

    Please feel free to ask any questions you may have, or give us feedback on how we can make it better for you.

    Thanks!

  • ursula1112 9 hours ago
    Nice work, will check it out. What’s the average end-to-end latency per turn with STT + LLM + TTS in your default stack?
    • a6kme 8 hours ago
      Hello.

      The latency depends on the models you pick for reasoning. If you colocate the models by self-hosting on GPUs, the latency between user and bot turns can be as low as 500-600 ms. With hosted models like Gemini 2.5 Flash, the latency is around 800-1000 ms. It can be higher with reasoning or larger models, like GPT-4.1.
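
      A rough mental model for those numbers: each turn is a serial STT → LLM → TTS pipeline, so the per-stage budgets roughly add up. The sketch below uses illustrative stage numbers (my assumptions, not measurements from Dograh) to show why colocated models land near the low end and hosted APIs near the high end.

      ```python
      # Back-of-envelope model of per-turn voice latency: after the user
      # stops speaking, the STT finalizes the transcript, the LLM streams
      # its first token, and the TTS emits its first audio chunk.
      # All stage numbers are illustrative assumptions, not measurements.

      STAGES_COLOCATED = {          # self-hosted models on nearby GPUs
          "stt_final": 150,         # ms for STT to finalize the transcript
          "llm_first_token": 250,   # ms to the LLM's first streamed token
          "tts_first_audio": 150,   # ms to the first synthesized audio chunk
      }

      STAGES_HOSTED_API = {         # e.g. a hosted LLM behind a network hop
          "stt_final": 200,
          "llm_first_token": 450,
          "tts_first_audio": 250,
      }

      def turn_latency_ms(stages: dict[str, int]) -> int:
          """Serial pipeline: turn latency is roughly the sum of the stages."""
          return sum(stages.values())

      if __name__ == "__main__":
          print(f"colocated:  ~{turn_latency_ms(STAGES_COLOCATED)} ms")   # ~550 ms
          print(f"hosted API: ~{turn_latency_ms(STAGES_HOSTED_API)} ms")  # ~900 ms
      ```

      In practice the stages overlap a bit (the LLM can start on a streaming partial transcript, and TTS can start on the first sentence), so a well-tuned pipeline beats the naive sum.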

  • ajabhish a day ago
    This looks pretty promising. Are you guys focusing on a specific use case, or on voice AI use cases in general?
    • a6kme a day ago
      Thanks for the kind words @ajabish.

      We are more of a horizontal platform and can support a wide variety of use cases. On our managed hosted service, we are serving large BPO call centres for both outbound and inbound calling.

      There are also individual builders working on inbound use cases for personal use, or building their business on top of Dograh.
