46 points by bediashpreet 4 days ago | 11 comments
  • tomnipotent 3 days ago
    This "10,000x faster" claim is specific to how long it takes to instantiate a client object, before actually interacting with it.

    Turns out the LangGraph code uses the official OpenAI library, which eagerly instantiates an HTTPS transport, and 65% of runtime was dominated by ssl.create_default_context (SSLContext.load_verify_locations) when I tested using pyinstrument. This overhead is further exacerbated by the fact that it happens twice - once for the sync client, and a second time for the async client. The rest of the overhead seems to be Pydantic and setting up the initial state/graph.
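
    For reference, a minimal sketch of how one could reproduce that profile (my framing, not the project's benchmark; assumes the openai and pyinstrument packages, and a dummy key since no request is ever made):

      from pyinstrument import Profiler
      from openai import OpenAI, AsyncOpenAI

      profiler = Profiler()
      profiler.start()
      sync_client = OpenAI(api_key="sk-dummy")        # eager HTTPS transport + SSL context
      async_client = AsyncOpenAI(api_key="sk-dummy")  # and a second context for async
      profiler.stop()
      print(profiler.output_text(unicode=True, color=False))  # look for load_verify_locations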

    Agno wrote their own OpenAI wrapper and defers setting up the HTTPS transport instead of doing it during agent creation, so that cost still exists; it's just not accounted for in this "benchmark". Agno still seems to be slightly faster when you control for this, but amortized over a couple of requests it's not even a rounding error.
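
    A rough sketch of that kind of deferral (illustrative only, not Agno's actual code) - the SSL/transport cost just moves from construction time to the first real call:

      class LazyClient:
          """Agent-side wrapper that delays building the real client."""

          def __init__(self, api_key: str):
              self._api_key = api_key
              self._client = None  # no HTTPS transport or SSL context yet

          @property
          def client(self):
              if self._client is None:          # pay the cost on first use,
                  from openai import OpenAI     # not when the agent is created
                  self._client = OpenAI(api_key=self._api_key)
              return self._client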

    I hope the developers get rid of this "claim" and focus on other merits.

    • AustinDev 2 days ago
      I mean, both companies are just things I could have Cursor do in a few hours. So probably not.
      • tough a day ago
        LangChain has several products and has been building in the agent space for years.

        I'm a fan of vibe coding, but that's kind of a stretch.

        lmfao

  • turnsout 3 days ago
    Is Python execution even a rounding error in the full execution time for a LangChain flow?
    • AStrangeMorrow 2 days ago
      I'd wager it can probably be a few percent of the full runtime. But no matter: the variation in the time it takes for LLMs to generate outputs (depending on the number of tokens produced/input size) likely drowns it completely.

      Like 5s+/-1s vs 4.95s+/-1s

  • thecleaner 3 days ago
    Congratulations on the release, although I hope the developers take the lesson that AI frameworks are unnecessary. You don't need a framework to make HTTP calls; a good enough SDK would do.
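
    For instance, a bare completion call against the vendor SDK, no framework involved (a minimal sketch assuming the openai package, OPENAI_API_KEY set in the environment, and gpt-4o-mini as a placeholder model):

      from openai import OpenAI

      client = OpenAI()  # picks up OPENAI_API_KEY
      reply = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": "Summarize this issue in one sentence."}],
      )
      print(reply.choices[0].message.content)
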
  • slake a day ago
    Does it work with o1-type reasoning models? I had trouble running phidata (the framework under its old name) with them.
  • yuzhun 3 days ago
    Tried Agno. Its API has less mental overhead than LangChain's. As for the speed advantage, I haven't really noticed it.
  • vivzkestrel 3 days ago
    How did you arrive at this number, 10,000?
    • randomtoast 3 days ago
      They just made it up.
      • bediashpreet 3 days ago
        Wrong - actual code and tests are provided that show the 10,000x speedup. Users can run it themselves and have been seeing better results.

        Would appreciate it if you didn't make up stuff.

        • tomnipotent 2 days ago
          As I pointed out, the 10,000x speedup claim is smoke and mirrors, and anyone on your team could have spent 10 minutes profiling the code and figured that out. It's a silly claim that doesn't hold up to scrutiny, and it detracts from your project by setting off the bullshit detector most programmers have for marketing hyperbole. It's not the flex you think it is.
        • mpalmer 2 days ago
          You may not have made up the number, but you did invent a contextual framing where you could claim that number accurately. But the framing is not a useful or practical one. It's like saying that your car is faster because the driver can turn the key in the ignition more quickly.

          Instantiating an agent is not the bottleneck for LLMs. Two hundredths of a second is a rounding error compared to what the model costs in time.
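
          Back-of-envelope, with illustrative numbers rather than measurements: even a generous 20 ms of agent setup against a ~3 s model call is well under 1% of a single request.

            setup_s, model_call_s = 0.02, 3.0
            print(f"{setup_s / (setup_s + model_call_s):.1%}")  # ~0.7%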

  • eternityforest 3 days ago
    Can this handle smaller models like Qwen 1.5B, or does it need some real intelligence to get the tool calling to work?
  • esafak 3 days ago
    LangGraph, not LangChain.
  • barbazoo 3 days ago
    How does this compare to pydantic.ai?
  • moltar 3 days ago
    But only in Python.
  • dcreater 3 days ago
    Another day, another unnecessary AI framework.
    • torginus 3 days ago
      But at least this one is useless 10,000x faster!
    • barbazoo 3 days ago
      Why is it unnecessary? I'm genuinely interested, it's not like JS where we have a plethora of industry tested frameworks to choose from.