47 points by bediashpreet 4 months ago | 11 comments
  • tomnipotent 4 months ago
    This "10,000x" faster claim is specific to how long it takes to instantiate a client object, before actually interacting with it.

    Turns out the LangGraph code uses the official OpenAI library, which eagerly instantiates an HTTPS transport; when I tested with pyinstrument, 65% of the runtime was dominated by ssl.create_default_context (SSLContext.load_verify_locations). This overhead is further exacerbated by the fact that it happens twice - once for the sync client, and a second time for the async client. The rest of the overhead seems to be Pydantic and setting up the initial state/graph.
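
    Roughly the shape of that measurement, as a minimal sketch (assuming the pyinstrument and openai packages are installed; the exact percentages will vary by machine and library version):

        from pyinstrument import Profiler
        from openai import OpenAI

        profiler = Profiler()
        profiler.start()

        # Instantiating the client eagerly builds the HTTPS transport; this is
        # where ssl.create_default_context / load_verify_locations shows up.
        client = OpenAI(api_key="sk-placeholder")

        profiler.stop()
        print(profiler.output_text(unicode=True, color=False))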

    Agno wrote their own OpenAI wrapper and defers setting up the HTTPS transport until after agent creation, so that cost still exists - it's just not accounted for in this "benchmark". Agno still seems to be slightly faster when you control for this, but amortized over a couple of requests it's not even a rounding error.
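
    For comparison, deferring that cost is just lazy initialization; a hypothetical wrapper (not Agno's actual code) might look like:

        import httpx

        class LazyClient:
            """Hypothetical wrapper that delays building the HTTPS transport."""

            def __init__(self, api_key: str):
                self.api_key = api_key
                self._http = None  # nothing expensive happens at construction

            @property
            def http(self) -> httpx.Client:
                # The SSL context is only created on first real use.
                if self._http is None:
                    self._http = httpx.Client()
                return self._http

        agent = LazyClient(api_key="sk-placeholder")  # cheap to construct
        # The deferred cost is paid on the first request instead, e.g.:
        # agent.http.get("https://api.openai.com/v1/models")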

    I hope the developers get rid of this "claim" and focus on other merits.

    • AustinDev 4 months ago
      I mean, both companies are just things I could have Cursor build in a few hours. So probably not.
      • tough 4 months ago
        LangChain has several products and has been building in the agent space for years

        I'm a fan of vibe coding, but that's kind of a stretch

        lmfao

  • turnsout 4 months ago
    Is Python execution even a rounding error in the full execution time for a LangChain flow?
    • AStrangeMorrow 4 months ago
      I’d wager it can probably be a few percent of the full runtime. But either way, the variation in the time it takes LLMs to generate outputs (depending on the number of tokens produced/input size) likely drowns it out completely.

      Like 5s+/-1s vs 4.95s+/-1s
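
      Back-of-the-envelope, with purely illustrative numbers (~20 ms of one-time client setup against 5 s per LLM call):

          llm_call_s = 5.0         # typical end-to-end latency of one completion
          setup_overhead_s = 0.02  # one-time client/agent instantiation cost
          n_requests = 10

          total = n_requests * llm_call_s + setup_overhead_s
          print(f"overhead share: {setup_overhead_s / total:.4%}")  # ~0.04%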

  • thecleaner 4 months ago
    Congratulations on the release, although I hope the developers take away the lesson that AI frameworks are unnecessary. You don't need a framework to make HTTP calls; a good enough SDK would do.
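
    As a minimal sketch of what "just the SDK" looks like (using the OpenAI Python SDK; the model name and prompt are placeholders):

        from openai import OpenAI  # the vendor SDK, no agent framework

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Summarize this thread in one line."}],
        )
        print(response.choices[0].message.content)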
  • yuzhun 4 months ago
    Tried Agno. Its API carries less mental overhead than LangChain's. As for the speed advantage, I haven't really noticed it.
  • vivzkestrel 4 months ago
    How did you arrive at this 10,000x number?
    • randomtoast 4 months ago
      They just made it up.
      • bediashpreet 4 months ago
        Wrong, actual code and tests are provided that show the 10,000x speed-up. Users can run it themselves and have been seeing better results.

        I’d appreciate it if you didn’t make up stuff.

        • tomnipotent 4 months ago
          As I pointed out, the 10,000x speed-up claim is smoke and mirrors, and anyone on your team could have spent 10 minutes profiling the code to figure that out. It's a silly claim that doesn't hold up to scrutiny, and it detracts from your project by setting off the bullshit detector most programmers have for marketing hyperbole. It's not the flex you think it is.
        • mpalmer 4 months ago
          You may not have made up the number, but you did invent a contextual framing where you could claim that number accurately. But the framing is not a useful or practical one. It's like saying that your car is faster because the driver can turn the key in the ignition more quickly.

          Instantiating an agent is not the bottleneck for LLMs. Two hundredths of a second is a rounding error compared to what the model costs in time.

  • slake 4 months ago
    Does it work with o1-type reasoning models? I had trouble running phidata (the framework's old name) with them.
  • eternityforest 4 months ago
    Can this handle smaller models like Qwen 1.5B, or does it need some real intelligence to get the tool calling to work?
  • esafak 4 months ago
    LangGraph, not LangChain.
  • barbazoo 4 months ago
    How does this compare to pydantic.ai?
  • moltar 4 months ago
    But only in Python.
  • dcreater 4 months ago
    Another day another unnecessary ai framework.
    • torginus 4 months ago
      But at least this one is useless 10000x faster!
    • barbazoo 4 months ago
      Why is it unnecessary? I'm genuinely interested, it's not like JS where we have a plethora of industry tested frameworks to choose from.