16 points by akshay326 a month ago | 11 comments
  • atrooo a month ago
    Is anyone else tired of AI generated blog posts about AI generated code? What does the author even get out of it? Upvotes?
    • altmanaltman a month ago
      I don't understand why AI-generated text always resorts to this pattern: "It's not [x], but [y]." If you say that 10 times in a blog post, it's just really bad writing. There is no clarity, and you say the same thing 15 times in the stereotypical car-salesman billboard voice. Here are some AI gems from the blog that was totally written by the dev in full earnest.

      > Not ten. Not fifty. Five hundred and twenty-three lint violations across 67 files.

      > You're not fixing technical debt—you're redefining "debt" until your balance sheet looks clean.

      > These are design flaws, not syntax errors. They compile. They might even work. But they're code smells—early warnings that maintainability is degrading.

      > AI-generated code is here to stay. That makes quality tooling more important, not less.

      > This isn't just technical—it's a mindset change:

      > It doesn't just parse your code—it analyzes control flow, tracks variable types, and detects logical errors that Ruff misses.

      > No sales, no pitch—just devs in the trenches.

  • Rantenki a month ago
    I am somewhat confused by this post. If the AI assistant is doing such a bad job that it lights up the linting tool, and further, is incapable of processing the lint output to fix the issues, then... maybe the AI tool is the problem?

    If I hired a junior dev and had to give them explicit instructions to not break the CI/lint, and they found NEW ways to break the CI/lint again that were outside of my examples, I'd hopefully be able to just let them go before their probation period expired.

    Has the probation period for AI already expired? Are we stuck with it? Am I allowed to just write code anymore?

    • akshay326 a month ago
      i agree, the tool is indeed broken. it's simultaneously stupid and smart in different ways. but i think there's some value in continuing to use and evaluate it
  • vaishnavsm a month ago
    This seems to be focused on Python, but for all the TS devs out there, what you'll see will be implicit `any` errors. Quick word of warning on having LLMs fix those - they love to use explicit `any`s or perform `as any` casts. This makes the lint error disappear, but keeps the actual logic bug in the code.

    Even if you ask it not to use any at all, it'll cast the type to `unknown` and "narrow" it by performing checks. The problem is that this may be syntactically correct but completely meaningless, since it'll narrow it down to a type that doesn't exist.

    The biggest problem here is that all of these are valid code patterns, but LLMs tend to abuse them more often than they use them correctly.
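    The same escape hatch exists in Python under mypy/pyright: a `cast` to `Any` makes the error disappear while the bug survives. A minimal sketch (the function and names are hypothetical):

```python
from typing import Any, cast

def get_port(config: dict[str, str]) -> int:
    # The type checker would flag this return: config values are str, not int.
    # An LLM-style "fix" is to cast to Any -- the lint error disappears,
    # but the logic bug (a str flowing where an int is expected) remains.
    return cast(Any, config["port"])

port = get_port({"port": "8080"})
# Type-checks clean, yet at runtime port is still a str:
assert isinstance(port, str)
```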

    • anonzzziesa month ago
      We detect any use of any and the LLM has to fix them before our check succeeds. It does and works fine.
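      A guardrail like that can be as small as a script run in CI. A hedged sketch in Python (the pattern and function name are assumptions, not their actual check):

```python
import re
from pathlib import Path

# Flags explicit `any` usage in TypeScript sources: `as any` casts
# and `: any` annotations. Any hit should fail the CI check.
ANY_PATTERN = re.compile(r"\bas any\b|:\s*any\b")

def find_any_uses(root: str) -> list[str]:
    hits = []
    for path in sorted(Path(root).rglob("*.ts")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if ANY_PATTERN.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

      In CI, exit non-zero when `find_any_uses` returns anything, and feed the hit list back to the LLM as the errors it has to fix before the check succeeds.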
      • akshay326 a month ago
        currently starting to do the same over seer's frontend, i didn't realise how simple yet effective this technique / guardrail could be!
    • Incipient a month ago
      Default linting in quasarsjs doesn't like unnecessary casts or `any` types - AI generally then fixes it... with varying degrees of effectiveness: sometimes properly, sometimes with horrific type abominations.
  • throwawayffffas a month ago
    Use a linter that can auto fix some of the problems and have an automatic formatter. Ruff can do both. It will decrease your cleanup workload.
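    For reference, a minimal Ruff setup along those lines (the rule selection here is illustrative, not a recommendation from the thread):

```toml
# pyproject.toml -- one tool for both linting and formatting
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "B"]  # pycodestyle, pyflakes, isort, bugbear
fixable = ["ALL"]              # let `ruff check --fix` repair what it can
```

    Then `ruff check --fix .` handles the auto-fixable lint issues and `ruff format .` takes care of formatting.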

    Don't get too hung up on typing. Python's duck typing is a feature, not a bug. It's OK to have loose types.

    On duplicate code: in general you should see at least two examples of a pattern before trying to abstract it. Make sure the duplication/similarity is semantic and not incidental; if you abstract away incidental duplication, you will very quickly find yourself in a situation where the cases diverge and your abstraction gets in your way.
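    A toy Python illustration of the incidental-duplication trap (the names are made up):

```python
def normalize_username(name: str) -> str:
    # Changes when *login* rules change.
    return name.strip().lower()

def normalize_tag(tag: str) -> str:
    # Textually identical today, but changes when the *tagging* rules
    # change -- e.g. tags might later become case-sensitive. Merging the
    # two into one shared helper would couple unrelated requirements.
    return tag.strip().lower()
```

    The two bodies are the same string of code, but the duplication is incidental: they answer to different requirements, so a shared `normalize()` would have to grow flags the moment one of them diverges.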

    In general coding agents are technical debt printers. But you can still pay it off.

    • akshay326 a month ago
      Totally agree on the debt printer metaphor. I might steal it.
  • andsmi2 a month ago
    Part of my pattern now is forcing lint before push, and also requiring code coverage to stay above a certain threshold and all tests to pass. Sometimes this goes awry, but honestly I have the same problem with dev teams, and the same thing should be done with them. I've had devs fix lint errors in these same bad ways as the LLM, as well as "fix" tests in similarly bad ways. The LLM actually listens to my rules a bit better than human devs do, and the pre-commit checks and pre-merge checks enforce it.
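    One way to wire up that kind of gate with pre-commit (the repo revision and the coverage floor are illustrative):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9            # pin to whatever is current
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
  - repo: local
    hooks:
      - id: tests-and-coverage
        name: tests pass, coverage stays above floor
        entry: pytest --cov --cov-fail-under=80
        language: system
        pass_filenames: false
```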
    • akshay326 a month ago
      amen! that's my bitter lesson for the time being, unless claude gets eerily better
  • OutsmartDan a month ago
    If AI is writing and fixing all code, does linting even matter?
    • akshay326 a month ago
      LLMs try to cheat, with all sorts of evasive ways or smart tricks to avoid working on context-heavy tasks. i've constantly observed that, if left unchecked, it tries to loosen the lint settings
    • colechristensen a month ago
      Linting is a good guardrail for real code problems the LLM catches poorly.

      LLM performance increases with non-LLM guardrails.

      • akshay326 a month ago
        true on both - i've observed i end up spending more tokens + time with linting than without
  • cheapsteak a month ago
    would PostToolUse be a better place to do it than pre-commit? (trigger on `"^(Edit|Write|MultiEdit)$"`)

    for lint issues that are autofixable, the tool use can trigger formatting on that file and fix it right away

    for type issues (ts, pyright), you can return something like `{"hookSpecificOutput":{"additionalContext":$escaped},"continue":true}` to let the edit complete but let Claude know that there are errors to fix next turn

    • akshay326 a month ago
      thanks, i've not used PostToolUse but will check it out. i'm excited about Ruff's promise of autofixable issues. curious how effective they are, and how deep an issue they can solve
  • rcarmo a month ago
    Linting and proper tests are the reason why I can use even simple models to get a lot done—preferably writing the tests with a second model.
    • akshay326 a month ago
      which simple models have you found good?
      • rcarmo a month ago
        gpt-5-mini can go a surprisingly long way, and Mistral's stuff is also quite good so far.
        • akshay326 a month ago
          I’ve used mini for synthetic dataset generation extensively. Never tried Mistral; will check it out
  • rurban a month ago
    That's why we are all using -Wall -Werror besides the clang-format commit hooks (with prek, of course). Proper languages cannot afford this kind of Python or TS slop.
  • akshay326 a month ago
    [dead]
  • seroperson a month ago
    TL;DR: Enable strict linting on CI, don't allow AI to change linting configuration.