34 points by ttpost 8 hours ago | 8 comments
  • akslp2080 7 hours ago
    How is it different from Langfuse? Sorry if I'm off track, but Langfuse also provides detailed tracing of agentic behavior and decisions.
    • ttpost 7 hours ago
      We get this question a lot! We work hand-in-hand with obs tools like Langfuse. Langfuse is great for debugging technical issues on individual traces, such as timing conditions that result in failed API calls.

      Voker focuses on product, business, and user outcomes - like which intents users bring to your agent that you might not expect. We're built for the whole product team, whereas Langfuse focuses specifically on engineers.

      One way to think about it: a PM notices in Voker that a new intent category is coming up frequently and the agent isn't handling it well. The PM can dig into the data with visualizations or our conversation reconstructions. Once they confirm it's a real issue worth addressing, they can link their investigation to the AI engineer - who can use Voker AND Langfuse to debug and implement a fix/improvement.

      • zwaps 2 hours ago
        For instance, I have intent classifiers running on my traces, and most tools offer some sort of analysis agents or API, so it's the Claude SDK and go.

        Maybe let's take Langsmith. Now I know my gripes with that product. How do you see it? What do you add, specifically?
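
        For concreteness, the kind of homegrown setup I mean, as a rough Python sketch using the Anthropic SDK - the taxonomy, trace format, and model name are illustrative, not my actual pipeline:

          # Rough sketch of a homegrown intent classifier over exported traces.
          # Taxonomy, model name, and trace format are illustrative only.
          import anthropic

          INTENTS = ["order_status", "refund_request", "product_question", "other"]

          client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

          def classify_intent(conversation_text: str) -> str:
              """Ask the model to label one conversation with a single intent."""
              prompt = (
                  "Classify the user's primary intent in this conversation.\n"
                  f"Allowed labels: {', '.join(INTENTS)}\n\n"
                  f"Conversation:\n{conversation_text}\n\n"
                  "Reply with exactly one label."
              )
              response = client.messages.create(
                  model="claude-sonnet-4-20250514",  # illustrative model id
                  max_tokens=16,
                  messages=[{"role": "user", "content": prompt}],
              )
              label = response.content[0].text.strip()
              return label if label in INTENTS else "other"

          # Usage: tally intents over a batch of exported traces, e.g.
          # collections.Counter(classify_intent(t) for t in traces)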

        • zwaps 2 hours ago
          Maybe as a comment, you really put weight on intent classification. I am not sure why. For it to work, you are gonna need my expert domain input. And given that, I feel like the classification bit is basically solved. I wonder a bit why this is the feature you seem to put front and center (e.g. screenshots)
        • ttpost 2 hours ago
          tl;dr: Langsmith + homegrown intents doesn't scale with contributors and agent usage as an analytics solution. Voker adds trend and usage insights on collaborative dashboards that work for the whole AI product team.

          Nice, sounds like you've set up your own solution in house. We definitely see some teams do that, and for some it works perfectly; for others, it's too expensive to maintain - they get requests for new dashboards or different sub-cuts of data from product or design teams, or they run into an issue like far too many generated intents to be useful, and it's not worth the tradeoff of investing time in building internal tooling. But for some it makes sense to roll your own! It also really depends on how many people on the team are involved in building the agent products, and how much volume your agents have. If you have millions of conversations a month with thousands of unique intents, you have to set up data eng pipelines just to process, categorize, and store all that data in a way that's usable for the whole team.

          When it comes to Langsmith, we hear about them a lot from our customers. Pretty much all of them love it as an obs tool, but most say that only the engineers have access or spend time in it, and they've told us the strength of Langsmith is its technical tracing, not its visualizations, UI, or usability. They've also told us any "insights" are very canned (because that's not Langsmith's key focus).

          We add self-serve analytics - like how Google Analytics lets marketers see how their website is performing without needing to ask engineers to write SQL queries against CloudWatch logs.

          Ex: a PM can self-serve and look at trends in what users are asking of agents, notice a problem, do a quick RCA, and look for reproducibility across other sessions - before deciding to assign it as an issue to an engineer. The old way would be: the PM hears a complaint from a customer, asks the engineer to "look into it", and the eng spends 4 hours combing through Langsmith logs to hunt down one session without even knowing if it's actually a widespread issue.

      • bfeynman 5 hours ago
        Do you have experience as PMs? Looking at the website, it looks like you just use LLMs to guess what the categories are? Seems like a trap for garbage in, garbage out. Otherwise you would need someone technical to figure out how to set up the proper KPI monitoring things...
        • ttpost 5 hours ago
          We do! We have combined experience as PMs, ML engineers, and data scientists across many verticals. We also have experience helping PMs and AI eng teams build agents across over 100 customers from our first product.

          You're totally right, the analytics annotation primitives we detect (intents, corrections, resolutions) are the cornerstone of all the other analysis in our platform. It's critical that we get those right, or all the data and insights in the world are useless.

          LLMs are a core part of that detection, but we also do things like hierarchical classification (https://voker.ai/blog/hierarchical-text-classification-with-...) and will eventually add other ML methods where applicable. On top of our automated detections, we're building ways for the annotations to improve and adapt to your specific agent product, your data, and your feedback on our annotations.
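
          Roughly, the two-stage idea behind hierarchical classification looks like the sketch below - the taxonomy and classify_fn are illustrative placeholders, not our actual implementation:

            # Sketch of two-stage hierarchical intent classification: pick a coarse
            # parent category first, then a child intent within that parent.
            # Placeholder taxonomy and classifier - not the actual implementation.
            from typing import Callable, Dict, List, Tuple

            TAXONOMY: Dict[str, List[str]] = {
                "billing": ["refund_request", "invoice_question", "plan_change"],
                "support": ["bug_report", "how_to_question", "feature_request"],
                "sales": ["pricing_question", "demo_request"],
            }

            def classify_hierarchical(
                text: str,
                classify_fn: Callable[[str, List[str]], str],  # e.g. an LLM call that picks one label
            ) -> Tuple[str, str]:
                """Return (parent_category, child_intent) for one conversation."""
                parent = classify_fn(text, list(TAXONOMY.keys()))  # stage 1: coarse category
                child = classify_fn(text, TAXONOMY[parent])        # stage 2: intent within it
                return parent, child

            # Each call chooses among a handful of labels instead of the full
            # cross-product, which keeps the label set manageable as it grows.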

          Our SDK is architected to eventually accept any type of event you want to send as additional information - like add-to-carts or other conversion metrics that are valuable for analyzing agent performance.
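
          To give a flavor (purely hypothetical - these field names aren't our actual SDK schema), such an event might look something like:

            # Hypothetical shape of a custom conversion event sent alongside agent
            # traces; the field names are illustrative, not an actual SDK schema.
            custom_event = {
                "session_id": "sess_123",      # ties the event to an agent conversation
                "event_type": "add_to_cart",   # any product-defined conversion signal
                "timestamp": "2025-01-15T12:34:56Z",
                "properties": {"sku": "ABC-001", "value_usd": 49.00},
            }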

          You're definitely right that we don't expect a PM to instrument this all themselves - similar to web analytics or product analytics tools, the engineering team instruments and maintains the integration, and then our app makes the insights and data accessible not just to the engineer but to the whole product team.

          • holoduke 3 hours ago
            Your response is AI. It's a bit AI-sloppy as well. Sorry to say that. But as a business owner you can and should do better.
            • ttpost 3 hours ago
              haha oof that's embarrassing if I really sound like AI, but unfortunately it was written by hand by yours truly!

              Guess it's human-sloppy :(

            • zwaps 2 hours ago
              how so? genuine question
  • Ozzie_osman 7 hours ago
    If the team is here, would love to understand how it compares to something like Amplitude's agent analytics (https://amplitude.com/ai-agents).
    • ttpost 7 hours ago
      Yeah, the wording makes this one confusing. TLDR: Amplitude is analytics for your web/product data; Voker is analytics for your agent data.

      We call Amplitude's feature an "AI Analyst". Essentially, Amplitude is layering an LLM copilot on top of their own product - so you don't have to click the buttons or write reports to get insights.

      We're an analytics platform built for tracking your agents. Different products with different problems they're solving.

      Not sure if this helps, but essentially Amplitude could use Voker to track how well their AI Analyst agent product is actually working!

      • adrianisbored 6 hours ago
        I think the link above is off, but they're thinking of Amplitude's not-yet-GA agent analytics, not their general analytics: https://amplitude.com/blog/agent-analytics
        • ttpost 6 hours ago
          Thanks for clarifying - yes, this is a much closer analog to what we're building. That being said, we haven't heard from anyone using it or tried it ourselves yet, so we can't speak to a quality comparison.

          From what I can tell in the video, it still seems like Amplitude is focusing on the obs trace details (latency, tokens, etc.).

          They don't seem to go as deep into (or at least don't highlight) the semantic data processing and detection we're doing (intents, corrections, resolutions) - and creating higher-level classifications and insights from those. We're completely purpose-built for monitoring agent products, so we're striving to do more than just visualizations; we intend to be best-in-category at the actual automated annotations and analysis of agent<>user interaction data.

  • Damianf19 7 hours ago
    What's the data model that lets you compare agents that differ a lot in tools/policies? Curious if you normalize on the "what did the user actually accomplish" layer or on raw token/turn metrics, because the two paint completely different pictures of "is this agent working." We struggle with this on the eval side of our own product (email pipeline outcomes, not agents, but same shape).
    • alrudolph 7 hours ago
      For "is the agent working", we're focusing on the user outcome. We think the raw usage, number of turns, and function calls are useful operationally, but we treat those as more observability than the core evaluation target. We do show some of these stats in our conversation view but don't aggregate them to compare agents. Longer term we'll look to add more of these features so we can compare quality vs. cost, for example.
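
      Roughly, the split we have in mind looks like the sketch below - the field names are illustrative, not a documented schema from either product:

        # Sketch of keeping the outcome layer separate from raw operational
        # metrics when comparing very different agents. Illustrative fields only.
        from dataclasses import dataclass

        @dataclass
        class SessionOutcome:          # "what did the user actually accomplish"
            agent_id: str
            intent: str                # what the user was trying to do
            resolved: bool             # did they accomplish it
            corrections: int           # times the user had to redirect the agent
            handed_off_to_human: bool = False

        @dataclass
        class SessionOps:              # raw operational metrics, kept separate
            agent_id: str
            turns: int
            tokens: int
            tool_calls: int
            latency_ms: float

        # Comparing agents on resolution rate per intent (SessionOutcome) vs. cost
        # per session (SessionOps) answers two different questions - which is why
        # they paint different pictures of "is this agent working".
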
  • maxrumpf 6 hours ago
    congratulations!
  • ggamecrazy 7 hours ago
    > High interaction volume (1k+ chat sessions per month)

    I don't mean to be that typical HN commenter but you did lose me a bit there.

    I know a lot of people are just getting started with agents but even for a lot of scrappy startups usage is a lot higher than that!

    If I may, I'd suggest focusing on explaining how you can add value even when usage is super low, all the way to controlling costs when usage gets super high.

    I can validate that it's a true problem - one that's solved at large companies but that you have to hand-roll yourself at startups (via Airflow or queues, etc.). But unfortunately it's one where I'm not sure that many stakeholders understand the benefits (yet!). I think the value has to be shown a bit more clearly here, sadly.

    Congrats on the launch!

    • ttpost 7 hours ago
      Thanks for the honest insight! Very helpful to hear that you feel the problem is real - that's what's most important to us. We'll keep working on getting the solution and messaging right!

      - We say >1K for two reasons: 1. It's still feasible (although tedious) to put the full burden of analyzing agent performance and usage on your engineers equipped with obs tools and logs (not ideal, but it's what most AI teams we see do). 2. You're spot on - it actually surprised us too how few companies (even the really big public ones that promised to build 200+ agents back in 2024) still have barely one or two agents in prod with only hundreds of convos. 1K convos was our best first guess at the cutoff point where the manual work of digging through traces, and the insights you can get, start to make it worth needing a tool for this. We're definitely planning to tune that number as time goes on!

      - We definitely don't have pricing figured out yet; we plan to keep iterating on the event volumes etc. to make sure our product gives clear positive ROI for the teams using it at every level. But in general we look at other analytics products as our early barometer. Yes, the higher your event volume, the more it costs, but in my experience (back in ecomm, using Heap analytics) it was so incredibly worth it to pay more as our business and data volumes scaled (we ran on data). I think this is the challenge with all analytics products - they're not useful if you only have 5 site visitors. We see the progression as: start with obs/logs -> evals -> analytics as your usage scales.

      Curious to hear - what's the session volume of the agent products you run? Every datapoint helps us tune our tiers, pricing, and most importantly our product!
