1 point by lellansin 3 hours ago | 2 comments
  • lellansin 3 hours ago
    Hi HN,

    I wrote an analysis of why Cursor’s Rules approach never became a real standard, while Claude Skills are starting to look like a structural advantage for Anthropic.

    At a surface level, both try to solve the same problem: making LLMs behave consistently in specific contexts using reusable constraints.

    But the difference isn’t about timing or execution quality — it’s about where the capability lives.

    Cursor Rules exist at the application layer. They rely on prompt composition and runtime conventions to influence model behavior. That makes them inherently external, fragile, and hard to standardize across models.
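
    To make that concrete, here is a rough sketch of what application-layer rule injection looks like. It is illustrative only: the directory path, file pattern, and helper are hypothetical, not Cursor’s actual code.

        # Minimal sketch of application-layer rule injection (hypothetical paths,
        # not Cursor's implementation). Rule files are read from disk and spliced
        # into the system prompt on every request, so the model itself never
        # "owns" the rules; ordering, truncation, and token budgets are all
        # client-side concerns.
        from pathlib import Path

        def build_system_prompt(rules_dir: str, base_prompt: str) -> str:
            rules = [p.read_text() for p in sorted(Path(rules_dir).glob("*.md"))]
            return base_prompt + "\n\n# Project rules\n" + "\n\n".join(rules)

        prompt = build_system_prompt(".cursor/rules", "You are a coding assistant.")

    Everything here lives in the tool, which is exactly why it stays fragile and non-portable across models.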

    Claude Skills, on the other hand, are being internalized at the model level. Once a capability is baked into the model’s reasoning and execution path, it becomes more stable, more reusable, and — most importantly — measurable at scale.
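
    A Skill, by contrast, is a standardized package the model is trained to discover and load on its own. Simplified here (see Anthropic’s docs for the exact format; this particular skill is invented for illustration), a skill folder’s SKILL.md opens with YAML metadata the model can index before deciding to pull in the full instructions:

        ---
        name: pdf-report
        description: Generate a formatted PDF report from tabular data.
        ---
        When asked for a PDF report, load the table, compute summary rows,
        and render with the bundled template before returning the file.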

    This leads to a larger observation:

    As scaling on raw internet text slows down, the next frontier for LLMs may not be better text prediction, but better behavior prediction — learning which action or step should come next, not just which token.

    By standardizing skills, model providers can collect structured data about real-world capability usage, reinforce effective behavior patterns, and continuously fold them back into the base model. That feedback loop is something downstream tools simply can’t replicate.
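
    As a rough illustration (the schema is invented, not anything Anthropic has published), the kind of structured record this loop could produce per skill invocation might look like:

        # Hypothetical sketch of a per-invocation usage record. The point is that
        # it labels actions and outcomes rather than raw text, which is the signal
        # you need to learn "which step comes next" and to fold successful
        # behavior patterns back into the base model.
        from dataclasses import dataclass

        @dataclass
        class SkillTrace:
            skill: str                     # which standardized capability was used
            steps: list[str]               # ordered actions the model actually took
            succeeded: bool                # did the task complete
            user_corrected: bool = False   # did a human have to intervene

        trace = SkillTrace(
            skill="pdf-report",
            steps=["load_table", "compute_summary", "render_template"],
            succeeded=True,
        )

    Application-layer rules never generate data with this shape at comparable scale, which is why the feedback loop stays with the model provider.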

    I’d love to hear thoughts from people building IDE tools, agents, or model infrastructure — especially where you think the boundary between “tool-level scaffolding” and “model-level capability” should live.

    Link to the full post: https://lellansin.github.io/2026/01/27/Why-Cursor-Rules-Fail...
