13 points by akshay326 a day ago | 10 comments
  • atrooo 19 hours ago
    Is anyone else tired of AI generated blog posts about AI generated code? What does the author even get out of it? Upvotes?
    • altmanaltman 18 hours ago
      I don't understand why AI-generated text always resorts to this pattern: it's not [x], but [y]. If you say that 10 times in a blog post, it's just really bad writing. There is no clarity, and you say the same thing 15 times in the stereotypical car-salesman billboard voice. Here are some AI gems from the blog that was totally written by the dev in full earnest.

      > Not ten. Not fifty. Five hundred and twenty-three lint violations across 67 files.

      > You're not fixing technical debt—you're redefining "debt" until your balance sheet looks clean.

      > These are design flaws, not syntax errors. They compile. They might even work. But they're code smells—early warnings that maintainability is degrading.

      > AI-generated code is here to stay. That makes quality tooling more important, not less.

      > This isn't just technical—it's a mindset change:

      > It doesn't just parse your code—it analyzes control flow, tracks variable types, and detects logical errors that Ruff misses.

      > No sales, no pitch—just devs in the trenches.

  • vaishnavsm 19 hours ago
    This seems to be focused on Python, but for all the TS devs out there, what you'll see will be implicit `any` errors. Quick word of warning on having LLMs fix those - they love to use explicit `any`s or perform `as any` casts. This makes the lint error disappear, but keeps the actual logic bug in the code.

    Even if you ask it not to use `any` at all, it'll cast the type to `unknown` and "narrow" it by performing checks. The problem is that this may be syntactically correct but completely meaningless, since it'll narrow it down to a type that doesn't match the actual data.

    The biggest problem here is that all of these are valid code patterns, but LLMs abuse them more often than they use them correctly.
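    An analogous silencing pattern shows up in Python: instead of fixing a type error, the model reaches for `cast(Any, ...)` or `# type: ignore`, which quiets the checker while the runtime bug stays put. A small illustrative sketch (function names and data are made up):

```python
from typing import Any, cast

def total_cents(order: dict[str, object]) -> int:
    # order["amount"] is actually a string like "12.50", so a type
    # checker flags arithmetic on `object`. The "fix" an LLM often
    # reaches for: cast to Any so the checker goes quiet, while the
    # runtime bug stays exactly where it was.
    amount = cast(Any, order["amount"])
    return int(amount * 100)  # still blows up when amount is "12.50"

def total_cents_fixed(order: dict[str, object]) -> int:
    # The real fix: validate and convert, so the checker and the
    # runtime agree on what the value is.
    raw = order["amount"]
    if not isinstance(raw, (int, float, str)):
        raise TypeError(f"unexpected amount: {raw!r}")
    return round(float(raw) * 100)
```

    The cast version type-checks cleanly in both pyright and mypy, which is exactly why it slips through review when the only gate is "no type errors".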

    • anonzzzies 19 hours ago
      We detect any use of `any`, and the LLM has to fix each one before our check succeeds. It does, and it works fine.
      • akshay326 17 hours ago
        currently starting to do the same over seer's frontend, i didn't realise how simple yet effective this technique / guardrail could be!
    • Incipient 12 hours ago
      Default linting in quasarsjs doesn't like unnecessary casts or `any` types - AI generally then fixes it, with varying degrees of effectiveness: sometimes properly, sometimes with horrific type abominations.
  • Rantenki 18 hours ago
    I am somewhat confused by this post. If the AI assistant is doing such a bad job that it lights up the linting tool, and further, is incapable of processing the lint output to fix the issues, then... maybe the AI tool is the problem?

    If I hired a junior dev and had to give them explicit instructions to not break the CI/lint, and they found NEW ways to break the CI/lint again that were outside of my examples, I'd hopefully be able to just let them go before their probation period expired.

    Has the probation period for AI already expired? Are we stuck with it? Am I allowed to just write code anymore?

    • akshay326 17 hours ago
      i agree, the tool is indeed broken. it's simultaneously stupid and smart in different ways. but i think there's some value in continuing to use and evaluate it
  • Use a linter that can auto fix some of the problems and have an automatic formatter. Ruff can do both. It will decrease your cleanup workload.
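    As a sketch of wiring that up, a `.pre-commit-config.yaml` using the Ruff pre-commit hooks might look like this (the pinned `rev` is illustrative; check the ruff-pre-commit repo for the current release):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0   # illustrative pin; use the latest release
    hooks:
      - id: ruff          # lint, applying safe autofixes
        args: [--fix]
      - id: ruff-format   # format, so the agent never argues about style
```

    With this, a chunk of the cleanup happens mechanically before the commit ever lands.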

    Don't get too hung up on typing. Python's duck typing is a feature, not a bug. It's ok to have loose types.

    On duplicate code: in general you should see at least two examples of a pattern before trying to abstract it. Make sure the duplication/similarity is semantic and not incidental; if you abstract away incidental duplication, you will very quickly find yourself in a situation where the cases diverge and your abstraction gets in your way.
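    A toy Python illustration of that trap (all names hypothetical): two functions that are textually identical today, where the duplication is incidental rather than semantic:

```python
# Two handlers that look identical today - but the duplication is
# incidental: invoices and signups will evolve for different reasons.
def process_invoice(record: dict) -> dict:
    record = dict(record)
    record["status"] = "processed"
    return record

def process_signup(record: dict) -> dict:
    record = dict(record)
    record["status"] = "processed"
    return record

# Premature abstraction: one function with a type flag. The moment
# invoices need tax handling and signups need email validation, this
# grows branches and gets in the way of both callers.
def process_record(record: dict, kind: str) -> dict:
    record = dict(record)
    record["status"] = "processed"
    # future: if kind == "invoice": ... / if kind == "signup": ...
    return record
```

    Coding agents love the third version because it "removes duplication", even when the two call sites have nothing semantic in common.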

    In general coding agents are technical debt printers. But you can still pay it off.

    • akshay326 19 hours ago
      Totally agree on the debt printer metaphor. I might steal it.
  • andsmi2 19 hours ago
    Part of my pattern now is forcing lint before push, and also requiring code coverage % to stay above a certain threshold and all tests to pass. Sometimes this goes awry, but honestly I have the same problem with dev teams - the same thing should be done with them. I've had devs fix lint errors in these bad ways, same as the LLM, as well as "fix" tests in bad ways. The LLM actually listens to my rules a bit better than human devs — and the pre-commit checks and pre-merge checks enforce it.
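    One way to pin those thresholds in config, assuming pytest with pytest-cov and Ruff (the coverage floor and rule codes here are illustrative, not a recommendation):

```toml
[tool.pytest.ini_options]
# Fail the whole run if coverage drops below the floor
addopts = "--cov=src --cov-fail-under=80"

[tool.ruff.lint]
# Keep the rule set in version control so it can't be quietly relaxed
select = ["E", "F", "I", "B"]
```

    Keeping both gates in `pyproject.toml` means any attempt to loosen them shows up as a reviewable diff.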
    • akshay326 17 hours ago
      amen! that's my bitter lesson for the time being, unless claude gets eerily better
  • cheapsteak 19 hours ago
    would PostToolUse be a better place to do it than pre-commit? (trigger on `"^(Edit|Write|MultiEdit)$"`)

    for lint issues that are autofixable, the tool use can trigger formatting on that file and fix it right away

    for type issues (ts, pyright), you can return something like `{"hookSpecificOutput": {"additionalContext": $escaped}, "continue": true}` to let the edit complete but let Claude know that there are errors to fix next turn
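    A rough Python sketch of such a PostToolUse hook, assuming the payload and output shapes described above (check the Claude Code hooks docs for the exact contract; the ruff invocation is illustrative):

```python
import json
import subprocess
import sys

def hook_output(findings: str) -> dict:
    """Build hook output: let the edit complete, surface findings as context."""
    out = {"continue": True}
    if findings:
        out["hookSpecificOutput"] = {
            "hookEventName": "PostToolUse",
            "additionalContext": findings,
        }
    return out

def run_hook() -> None:
    event = json.load(sys.stdin)  # hook payload delivered on stdin
    path = event.get("tool_input", {}).get("file_path", "")
    if not path.endswith(".py"):
        print(json.dumps(hook_output("")))
        return
    # Apply safe autofixes first; whatever remains goes back as context
    proc = subprocess.run(
        ["ruff", "check", "--fix", path],
        capture_output=True,
        text=True,
    )
    print(json.dumps(hook_output(proc.stdout.strip())))

# When installed as a hook, the script would just call run_hook().
```

    The nice property is that autofixable issues never reach the model at all, and the rest arrive as context on the very next turn instead of at commit time.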

    • akshay326 17 hours ago
      thanks, i've not used PostToolUse but will check it out. i'm excited about ruff's autofixable-issues promise. curious how effective they are, and how deep of an issue they can solve
  • OutsmartDan 19 hours ago
    If AI is writing and fixing all code, does linting even matter?
    • akshay326 17 hours ago
      LLMs try to cheat - all sorts of evasive moves, or in some cases smart tricks, to avoid working on context-heavy tasks. i've constantly observed that, left unchecked, it tries to loosen the lint settings
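      one cheap guardrail against that (a sketch - the file path and pinned digest are illustrative) is to fingerprint the lint config in CI and fail the build when it drifts:

```python
import hashlib

def config_digest(text: str) -> str:
    # Stable fingerprint of the lint configuration file's contents
    return hashlib.sha256(text.encode()).hexdigest()

def lint_config_unchanged(current_text: str, pinned_digest: str) -> bool:
    # True only if the config still matches the reviewed, pinned version
    return config_digest(current_text) == pinned_digest

# In CI (illustrative):
#   text = open("pyproject.toml").read()
#   assert lint_config_unchanged(text, PINNED_DIGEST), "lint config changed - needs review"
```

      any legitimate config change then requires a human to update the pinned digest, which is exactly the review step the agent was trying to skip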
    • colechristensen 19 hours ago
      Linting is a good guardrail for real code problems the LLM catches poorly.

      LLM performance increases with non-LLM guardrails.

  • rcarmo 13 hours ago
    Linting and proper tests are the reason why I can use even simple models to get a lot done—preferably writing the tests with a second model.
    • akshay326 3 hours ago
      which simple models have you found good?
  • akshay326 a day ago
    [dead]
  • seroperson a day ago
    TL;DR: Enable strict linting on CI, don't allow AI to change linting configuration.