32 points by tosh 8 hours ago | 7 comments
  • linkregister 7 hours ago
    Having only heard of Pi in passing, I derived value from this article. I plan to experiment with it as a replacement for claude-code and gemini-cli.

    [meta] I frequently see criticism that an article was obviously written by an LLM. Often the author apologizes for it in the HN comments. I wonder what is wrong with me that I am totally unaware of this LLM stench.

    I have gotten a lot of value from hearing people criticize candidates' LLM usage in technical interviews and conversations. I adjusted my style away from talking about axioms and best practices; instead I always relate a personal anecdote to explain a technical decision. This has been universally well received.

    So I am hoping that someone can respond with some helpful holistic answers beyond a checklist of "uses em-dashes" and "says 'not X, but Y'". I suspect my writing style could be easily declared as having been written by an LLM.

    • eddyg 6 hours ago
      See https://github.com/blader/humanizer for a skill that helps avoid some of the common LLM “tells”. It’s based on https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
    • grey-area 6 hours ago
      I think the main objections are the effort mismatch between writing and reading, and the likely low informational content and errors in the text, which the author may or may not have read. Some of this, like the bit I quoted below, is pretty nonsensical, comparing lobsters and quiet engines.

      The writing definitely has a stench and is full of breathless comparisons that pretend some very minor thing is a breakthrough. This is annoying and trite, and people dislike it for that alone, but also for the more important reasons above.

      This blog post could have been a lot shorter. I'd honestly rather just read the prompt with a link to Pi. People like this author should just publish their prompt, IMO, and they will continue to be called out on it till this bubble pops.

      • raincole 4 hours ago
        Are you referring to this line?

        > "The lobster gets the attention. The engine gets quietly forked into production."

        It's funny, because it was this very sentence that convinced me this article is mostly human-written. It captures what the author is trying to say (the lobster = OpenClaw, the engine = Pi; OpenClaw got a disproportionate amount of attention from the public) and the nature of the project (Pi's author encourages people to fork Pi and ask it to add features for itself, instead of submitting feature PRs).

        It's not something an LLM would give you if you just prompted it to "write an article to hype Pi up."

  • raincole 5 hours ago
    It's crazy that people are complaining about the 'LLM writing style' and lack of content here. To me it reads as very clear and succinct. Can I rewrite it with fewer words? Yes:

    > [Pi](https://github.com/badlogic/pi-mono/tree/main) is a minimal coding agent developed by Mario Zechner (libGDX's dev). I tried every ai coding tool from Cursor to Claude Code to Gemini CLI. But they're bloated with unnecessary system prompts and instructions. With Pi I saved 50% of my context window. It's open sourced and I believe it's the future!

    Is this the future you envision? An HN that looks like Twitter?

  • grey-area 7 hours ago
    "The lobster gets the attention. The engine gets quietly forked into production."

    Is there any way to stop LLMs generating text like this?

    Is it really better than just writing it yourself? I guess generating blog posts is lower-effort and thus wins in this attention economy people think they are competing in.

    The actual engine is here: https://github.com/badlogic/pi-mono

    An interesting idea to have a bit more control over what your 'agent' is doing and keep it simple. Some of the prompts do give me pause, though: why do we talk to text generators as if they are people? Have we found this works best, or is it a sort of cargo cult?

    https://github.com/badlogic/pi-mono/blob/main/.pi/prompts/is...

    I love that he's telling his tools not to trust people in his comments here!

  • 7777777phil 7 hours ago
    The token comparison is what jumped out at me. Half the context window for the same work means mainstream tools are burning your tokens on system prompts and MCP plumbing before you even start.

    I wrote earlier about why the agent stack is splitting into specialized layers, and this is a good example of what drives it. Monolithic tools waste the most on their own overhead. https://philippdubach.com/posts/dont-go-monolithic-the-agent...

  • abhikul0 5 hours ago
    Pi works great for local models in my short testing. I wanted to try out Skills and see how small models work with these agentic tools for small tasks, mainly [browser-use](https://github.com/browser-use/browser-use).

    I tried Mistral Vibe, Codex, Opencode, and Claude with gpt-oss:20b, Ministral 3B/8B, Nemotron3 Nano 30B, and GLM 4.6V, finally settling on gpt-oss for its impressive pass rates. All the other tools inject around 7-10k tokens into the initial prompt, while Pi takes up ~1.5k. This works out to be quite usable on my M3 Pro machine, which can take a while to process the huge initial prompts from other CLIs.

    While I'm not doing any serious work, and the other tools could be tweaked to use a simpler system prompt, Pi felt quick, and the LLMs used the tool calls correctly without being confused by all the huge prompts being dumped on them.

  • swordsith 7 hours ago
    It looks like Claude signed the article for you at the end: 'ai, tools, opinion' lol
  • gas9S9zw3P9c 7 hours ago
    How is this highly upvoted and on the front page? It's so clearly at least 50% AI-written slop, probably closer to 95%. Wow, HN these days... this site is dying, completely overrun by bots.
    • H8crilA 6 hours ago
      Because it contains information. Content > form; that's actually one of the cornerstones of hacker culture.
    • linkregister 7 hours ago
      I acknowledge it is a skill issue, but I can barely tell. Do you use a checklist to determine if something has been written by an LLM?
      • gas9S9zw3P9c 6 hours ago
        https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing is a good starting point. You start seeing patterns at some point.

        But more important than the writing style, there is no interesting content here. It's all generic statements and platitudes with a bunch of generated links.

      • swordsith 6 hours ago
        I use an ML model I trained on AI and human text; it flags this mostly as Claude. The article even references Claude.
    • 7777332215 7 hours ago
      Unfortunately it does seem to be getting overrun by bots. What's the solution? Just read curated lists of blogs directly?
      • gas9S9zw3P9c 7 hours ago
        I don't know what the solution is either, other than human verification, but nobody wants that. Perhaps the times of semi-anonymous online communities are over, and the best you can do now is follow real people you trust who can filter content for you.
        • 7777332215 7 hours ago
          Even with human verification, people are going to verify, then conduct bot activity. And worse, use other people's identities to verify, then spam.