464 points by ssiddharth 5 hours ago | 62 comments
  • losvedir 5 hours ago
    I really like Oxide's take on AI for prose: https://rfd.shared.oxide.computer/rfd/0576 and how it breaks the "social contract": usually it takes more effort to write than to read, and so you have a sense that it's worth it to read.

    So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It's basically a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.
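
    The mechanics are simple enough: record timestamped key events, then play them back on the same schedule. A minimal TypeScript sketch of the idea (the element ID and all names here are my own invention, not the site's actual code):

      // Hypothetical sketch of replayable typing capture (not seeitwritten.com's real code).
      type KeyEvent = { t: number; key: string };

      const events: KeyEvent[] = [];
      const start = performance.now();
      const box = document.querySelector<HTMLTextAreaElement>("#comment")!;

      box.addEventListener("keydown", (e) => {
        events.push({ t: performance.now() - start, key: e.key });
      });

      // Replay the composition into another textarea on the recorded schedule.
      function replay(target: HTMLTextAreaElement) {
        for (const { t, key } of events) {
          setTimeout(() => {
            if (key === "Backspace") target.value = target.value.slice(0, -1);
            else if (key.length === 1) target.value += key;
          }, t);
        }
      }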

    • ssiddharth 5 hours ago
      My biggest sorrow right now is the fact that my beloved em dash is a major signal for AI-generated content. I've been using it for decades now, but these days I almost always pause for a second.
      • Lio a minute ago
        You can still use them — it’s just that now they have a new purpose: getting things ignored by AI detection or AI;DR.

        Now you can ask for outlandish things at work knowing your boss won’t read it and his summariser will ignore it as slop — win.

      • manuelmoreale 2 hours ago
        > I've been using it for decades now but these days, I almost always pause for a second.

        Wrote about this before [0] but my 2c: you shouldn't pause and you should keep using them because fuck these companies and their AI tools. We should not give them the power to dictate how we write.

        [0]: https://manuelmoreale.com/thoughts/on-em-dashes

      • Lalabadie 3 hours ago
        For what it's worth, whatever LLMs do extensively, they do because it's a convention in well-established writing styles.

        LLMs have a bias towards expertise and confidence due to the proportion of books in their training set. They also lean towards an academic writing style for the same reason.

        All this to say, if LLMs write like you were already writing, it means you have very good foundations. It's fine to avoid them out of fear, but you have this Internet stranger's permission to use your em dash pause to think "Oh yeah, I'm the reference for writing style."

        • petters 3 minutes ago
          I think that bias is less due to the proportion of books and more due to how they are fine-tuned after pretraining.
        • djhn 2 hours ago
          Aren’t books massively outweighed by the crawled internet corpus?
          • r_lee an hour ago
            I would doubt that, because books are probably weighted as higher quality and more trustworthy than random Reddit posts

            Especially if it's unsupervised training

      • eYrKEC2 2 hours ago
        I used to enjoy the literal usage of the word "literally".

        You'll get over it.

        • peterashford 25 minutes ago
          Using "literally" to mean "figuratively" goes back hundreds of years.
      • catoc 4 hours ago
        Exactly this! I love(d) using em dashes. Now they’ve become ehm dashes, and I experience exactly that pause — that moment of hesitation — that you describe.
        • deron12 4 hours ago
          AI never uses em dashes in a pair like this, whereas most people who like em dashes do. Anyone who calls paired em dash writing AI is only revealing themselves to be a duffer.
          • Yizahi 38 minutes ago
            In my limited text generation experience, LLMs use em-dashes precisely like that, only without spaces on the sides and always in pairs in a single sentence. Here are some examples from my Gemini history:

            "The colors we see—like blue, green, and hazel—are the result of Tyndall scattering."

            "Several interlocking cognitive biases create a "safety net" around the familiar, making the unknown—even if objectively better—feel like a threat."

            "A retrograde satellite will pass over its launch region twice every 24 hours—once on a "northbound" track and once on a "southbound" track—but because of the way Earth rotates, it won't pass over the exact same spot on every orbit."

            "Central, leverages streaming telemetry to provide granular, real-time performance data—including metrics (e.g., CPU utilization, throughput, latency), logs, and traces—from its virtualized core and network edge devices."

            "When these conditions are met—indicating a potential degradation in service quality (e.g., increased modem registration failures, high latency on a specific Remote PHY)—Grafana automatically triggers notifications through configured contact points (e.g., Slack, PagerDuty)."

            After collecting these samples I've noticed that they are especially probable in prompts like "explain something" or "write descriptive text". In short queries there is not much text in total to trigger this effect.

          • catoc 3 hours ago
            > ”AI never uses em dashes in a pair”

            I wish that were true, but I feel a little bit vindicated nevertheless

      • 4b11b4 4 hours ago
        Also, unfortunately, I have a rule in my global instructions to never use em dashes...
        • 4b11b4 4 hours ago
          Maybe I'll get over it eventually.
      • tkzed49 4 hours ago
        I've gone back to using two dashes--LLMs typically don't write them that way.
      • nxobject 4 hours ago
        What I do – and I know this isn't conventional style – is use en dashes. (Or, you could use spaces around em dashes, as incorrect as it is.)
        • OGWhales 2 hours ago
          I've noticed that LLM-generated text often has spaces around em dashes, which I found odd. They don't always do that, but they do it often enough that it stood out to me, since that isn't what you'd normally see.
        • treetalker an hour ago
          > Or, you could use spaces between em dashes, as incorrect as it is.

          It's a matter of style preference. I support spaces around em-dashes — particularly for online writing, since em-dashes without spaces make selecting and copying text with precision an unnecessary frustration.

          By the way,what other punctuation mark receives no space on at least one side?Wouldn't it look odd,make sentences harder to read,and make ideas more difficult to grok?I certainly think so.Don't you? /s

      • wiseowise 2 hours ago
        I use it to trigger false positives in haters – why not?
      • archagon 5 hours ago
        To quote Office Space, “Why should I change? He’s the one who sucks.”
        • parsimo2010 4 hours ago
          Mostly because when I see an em dash now, I assume that it was written by AI, not that the author is one of the people who put enough effort into their product to intentionally use specific sizes of dashes.

          AI might suck, but if the author doesn't change, they get categorized as a lazy AI user, unless the rest of their writing is so spectacular that it's obvious an AI didn't write it.

          My personal situation is fine though. AI writing usually has better sentence structure, so it's pretty easy (to me at least) to distinguish my own writing from AI because I have run-on sentences and too many commas. Nobody will ever confuse me with a lazy AI user, I'm just plain bad at writing.

          • kevstev an hour ago
            As someone who frequently posts online - with em dashes - I wonder if I am part of the problem with training LLMs to use them so much - and am going to get punished in the future for doing so.

            I also tend to way overuse parentheses (because I tend to wander in the middle of sentences) but they haven't shown up much in LLMs so /shrug.

          • 98codes 4 hours ago
            > assume

            There's your trouble. The real problem is that most internet users set their baseline for "standard issue human writing" at exactly the level they themselves write, and more and more people do not draw a line between casual and professional writing, so they balk at very normal professional writing as potentially AI-driven.

            Blame OS developers, if you wish, for making it easy—SO easy!—to add all manner of special characters while typing; but the use of those characters, once they were within easy reach, grew well before AI writing became a widespread thing. If it hadn't, would AI be using them so much now?

          • wrs 4 hours ago
            If you’re judging my writing so shallowly, I don’t think I’m writing for you.
            • lelanthran 4 hours ago
              > If you’re judging my writing so shallowly, I don’t think I’m writing for you.

              No, you are writing for people who see LLM-signals and read on anyway.

              Not sure that that's a win for you.

              • wrs 2 hours ago
                "Seeing LLM-signals" == "reading shallowly", so I think I covered that case.
        • collingreen 2 hours ago
          To continue the story, the guy saying this got fired and probably wouldn't have without taking this stand.
      • Bukhmanizer 4 hours ago
        You’re absolutely right. I hate AI writing — it’s not that I hate AI, it’s that it makes everything it says sound like a specific combination of smug and authoritative — no matter the content. Once you realize it’s not saying anything, that’s the real aha moment.

        \s

    • woopwoop 5 hours ago
      I like the idea that various communications media have implicit social contracts that can be broken. In my opinion, PowerPoint presentations break an implicit social contract that handwritten talks honor: if a piece of information is worth displaying, such that I the listener feel the need to take it in or even copy it down, it has to be worth your time to actually physically write it on the board. PowerPoint talks don't honor this, and the average PowerPoint talk is much, much worse than the average chalk talk. I bet there are lots of other examples.
      • smithza 3 hours ago
        Go thee to the land of government contracting and see thou how well thine ideas hold up.
        • woopwoop 2 hours ago
          I actually have worked in this space and it, uh, has not shaken my belief that powerpoint talks are bad.
    • steveBK123 4 hours ago
      The problem with AI writing is that it's a waste of everyone's time.

      It’s literal content expansion, the opposite of gzip’ing a file.

      It’s like a kid who has a 500 word essay due tomorrow who needs to pad their actual message up to spec.

      • AnimalMuppet 4 hours ago
        Well, LLMs can be on either side of that. They can also be used to turn something verbose into a series of bullet points.

        I agree that reading an LLM-produced essay is a waste of time and (human) attention. But in the case of overly-verbose human writing, it's the human that's wasting my time[1], and the LLM is gzip'ing the spew.

        [1] Looking at you, New Yorker magazine.

        • steveBK123 4 hours ago
          Right, we are headed towards LLM-generated slop summarized by another LLM. The wire format is expanded slop.
    • jimkleiber 3 hours ago
      In 2020, at the start of covid, I did an experiment I called Project 35: for 35 days straight before my 35th birthday, I wrote 3 times per day for 10 minutes each, livestreamed it, and put whatever I wrote directly into a book with no edits. While I didn't invite many people to join the calls (maybe fear, maybe just not wanting to coordinate it all), I found the process to be more raw, more human, and less perfect than 10x-edited writing. It also helped me get better at typing in the moment and not rewriting everything, especially for social media, HN, and other places.

      Anyway, it's at https://www.jimkleiber.com/p35/ if you wanna check it out, all sessions posted as blog posts, I think there's a link to the ebook (pay-what-you-want) and there may be audio (I recorded myself reading the writing right after each session).

      If you check it out, please let me know :-)

    • comboy 4 hours ago
      > https://seeitwritten.com

      Fun! I'd make the playback speed something like 5x or whatever feels appropriate; I think nobody truly wants to watch those at 1x.

    • mikestew 5 hours ago
      Years ago I wrote something similar to test a biometric security piece that used keystroke timings (dwell and stroke) to determine whether the person typing the password is the same person who owns the account. The short version of a long story is that it would be trivial to get data for AI to reproduce human typing, because I did it years ago using something only slightly more sophisticated than urandom.
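
      Something in the spirit of "slightly more sophisticated than urandom" (a sketch; the distributions and parameters here are invented for illustration, not the original code):

        // Synthesize plausible "human" keystroke timings from noisy Gaussians.
        function gauss(mean: number, sd: number): number {
          // Box-Muller transform: turn uniform randomness into normal noise.
          const u = 1 - Math.random();
          const v = Math.random();
          return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
        }

        function synthesizeTimings(text: string) {
          let t = 0;
          return [...text].map((key) => {
            t += Math.max(30, gauss(140, 45));         // flight time between keys (ms)
            const dwell = Math.max(20, gauss(95, 25)); // how long the key stays down (ms)
            return { key, down: t, up: t + dwell };
          });
        }
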
    • onionisafruit 2 hours ago
      I like the idea, but personally I would rather be thought a bot than show that I’m a human idiot who takes three tries to spell basic words.
    • hinkley 5 hours ago
      Based on the programs I was nudged into as a child, it was a surprise to no one but me that I scored higher verbal on the SATs than I did math, which I would have told you was my favorite subject, despite the fact that French was my easiest subject. I can still picture the look on my French teacher’s face if I’d mentioned this in front of him.

      There are a lot of people like me in software. I’m tempted to say we are “shouted down”, but honestly it’s hard to be shouted down when you can talk circles around some people. But we are definitely in a minority. There are actually a lot of parallels between creative writing and software and a few things that are more than parallel. Like refactoring.

      If you’re actually present when writing docs instead of monologuing in your head about how you hate doing “this shit”, then there’s a lot of rubber ducking that can be done while writing documentation. And while I can’t say that “let the AI do it” will wipe out 100% of this value, because the AI will document what you wrote instead of what you meant to write, I do think you will lose at least 80% of that value by skipping out on these steps.

    • benob 5 hours ago
      You could totally make a believable timing generation model from a few hundred recordings of human writing. Detecting AI is hard...
    • unglaublich 4 hours ago
      This can only be fixed by authors paying humans to read instead of the other way around.
    • usefulposter 5 hours ago
      LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (Cantrill)

      The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. (Brandolini)

      https://en.wikipedia.org/wiki/Brandolini's_law

    • mystraline 4 hours ago
      To be fair, Oxide is a joke.

      They want all this artisanal hand-written prose under the candlelight with the moon in the background. And you are a horrible person for using AI, blablabla.

      But ask for feedback? And you get Inky, Blinky, Pinky, and Clyde. Aka ghosted. But boy, do they tell a good story. Just ain't fucking true.

      Counter: companies deserve the same amount of time invested in their application as they spend on your response.

  • whaleidk 2 minutes ago
    The author thinks that it’s fine to do this for code, which I find strange. I get big-time ai;dr when a commit comes in for me to review and it's 300 lines for something that is already built into a single function of the framework we use, or when I see an (important) comment they forgot to delete. I should not be expected to become more familiar with the author's code than he is himself, and I certainly shouldn't be the first one verifying that it even works.
  • raincole 5 hours ago
    > AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort

    I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent - artists think generating code is perfectly ok but have an acute stress response when someone suggests generating art assets.

    • joshuaissac 5 hours ago
      AI-generated code is meant for the machine, or for the author/prompter. AI-generated text is typically meant for other people. I think that makes a meaningful difference.
      • askvictor 27 minutes ago
        At the same time, AI-generated code has to be correct and precise, whereas AI-generated text doesn't. There's often no 'correct solution' in AI-generated text.
      • ripe 5 hours ago
        Code can be viewed as design [1]. By this view, generating code using LLMs is a low-effort, low-value activity.

        [1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...

      • jvanderbot 5 hours ago
        This is precisely correct IMHO.

        Communication is for humans. It's our superpower. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.

      • acedTrex 5 hours ago
        Compiled code is meant for the machine; written code is for other humans.
        • gordonhart 5 hours ago
          For better or worse, a lot of people seem to disagree with this, and believe that humans reading code is only necessary at the margins, similarly to debugging compiler outputs. Personally I don't believe we're there yet (and may not get there for some time) but this is where comments like GP's come from: human legibility is a secondary or tertiary concern and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.
          • threetonesun 4 hours ago
            I rarely see LLMs generate code that is less readable than the rest of the codebase it's been created for. I've seen humans who are short on time or economic incentive produce some truly unreadable code.

            Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.

            • gordonhart 3 hours ago
              The main coding agent failure modes I've seen:

              - Proliferation of utils/helpers when there are already ones defined in the codebase. Particularly a problem for larger codebases

              - Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")

              - Overly defensive off-happy-path handling, returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be properly handled somewhere up the call chain (see the sketch below)

              - Locally optimal design choices with very little "thought" given to ownership or separation of concerns

              All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that line-per-line frontier LLM code is generally better than what humans write and miles better than what a stressed-out human developer with a short deadline usually produces.
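
              To make the third failure mode concrete, a contrived TypeScript sketch (the type and the data layer are invented for illustration):

                type User = { id: string; name: string };
                const db = new Map<string, User>(); // stand-in for a real data layer

                // Agent-style: swallow the failure and return the "empty" value.
                function findUserDefensive(id: string): User | null {
                  return db.get(id) ?? null; // every caller must now remember to null-check
                }

                // Usually better: fail loudly, and let a handler up the call chain respond.
                function findUser(id: string): User {
                  const user = db.get(id);
                  if (!user) throw new Error(`user ${id} not found`);
                  return user;
                }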

          • hinkley 5 hours ago
            And Sturgeon tells us 90% of people are wrong, so what can you do.
        • philipp-gayret 5 hours ago
          Compiled natural language is meant for the machine; written natural language is for other humans.
          • CivBase 4 hours ago
            If AI is the key to compiling natural language into machine code like so many claim, then the AI should output machine code directly.

            But of course it doesn't do that, because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.

      • ginsider_oaks 3 hours ago
        > Programs must be written for people to read, and only incidentally for machines to execute.

        from the preface of SICP.

      • everforward 4 hours ago
        A lot of writing (maybe most) is almost the same. Code is a means of translating a process into semantics a computer understands. Most non-fiction writing is a means of translating information or an idea into semantics that allow other people to understand that information or idea.

        I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.

    • pseudosavant 3 hours ago
      I think there’s an uncanny valley effect with writing now.

      Yesterday I left a code review comment, and someone asked if AI wrote it. The investigation and reasoning were 100% me. I spent over an hour chasing a nuanced timezone/DST edge case, iterating until I was sure the explanation was correct. I did use Codex CLI along the way, but as a power tool, not a ghostwriter.

      The comment was good, but it was also “too polished” in a way that felt inorganic. If you know a domain well (code, art, etc.), you start to notice the tells even when the output is high quality.

      Now I’m trying to keep my writing conspicuously human, even when a tool can phrase it perfectly. If it doesn’t feel human, it triggers the whole ai;dr reaction.

    • acedTrex 5 hours ago
      Ya, I hate the idea that there's a difference. Code to me has always been as expressive about a person as normal prose. With LLMs you lose a lot of vital information about the programmer's personality, which leads to worse outcomes because it makes the failures less predictable.
      • jama211 4 hours ago
        Code _can_ be expressive, but it also can not be; it depends on its purpose.

        Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.

        I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.

        • acedTrex 4 hours ago
          All code is expressive: if a person emitted it, it is expressive of their state of mind, their values, and their context.
    • renato_shira an hour ago
      the gamedev version of this is wild. i'm working on a mobile game right now and the internal calculus is genuinely confusing: using AI to help write networking code feels totally normal, using it to generate placeholder UI feels fine, but using it for the actual visual identity of the game feels like cheating, even though technically it's all "content creation."

      i think the real line is about whether the AI output is the product or a tool to build the product. AI-generated code that ships isn't really the product, the behavior it creates is. but AI-generated art that ships is the product in a way the user directly perceives. the uncanny valley isn't in the quality, it's in the relationship between the creator and the output.

      • nkrisc an hour ago
        Because your users don’t see the network code or the GUI framework.

        But to your users, the visual identity is the identity of the game. Do you really want to outsource that to AI?

    • hinkley 5 hours ago
      A flavor of the Fundamental Attribution Error, perhaps? It’s not a snug fit, but it’s close.
    • mrisoli 3 hours ago
      We had a junior engineer do some research on a handful of different solutions for a technical design and present to the team. He came up with a 27-page document with 70+ references (2/3 of which were Reddit threads) no more than a few hours after the task was assigned.

      I would have been more okay with AI-generated code; it would likely have been more objective and less verbose. I refused to review something he obviously hadn't put enough effort into himself, not even a POC. When I asked for his own opinion on the different solutions evaluated, he didn't have one.

      It's not about the document per se, but the actual value of this verbose AI-generated slop. Code is different: even if poorly reviewed, it's still executable and likely to produce output that satisfies functional requirements.

      Our PM is now evaluating tools to generate documentation for our platform by interpreting source code. It includes descriptions of things such as what the title is and what the back button is for, but wouldn't tell you the valid inputs for creating a new artefact. This AI-generated doc is in addition to our human-made Confluence docs, so it is likely to add to the spam and reduce the quality of search results for useful information.

    • HarHarVeryFunny 4 hours ago
      > Everyone thinks their use of AI is perfectly justified while the others are generating slops

      No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs. something expected to at least have some taste/flavor.

      AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.

      Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed and did someone bother to make it interesting to read, and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.

      So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.

    • dgxyz 3 hours ago
      My perspective as an eng lead is it’s all shit. Words, code, the lot. It’s literally an enabler for the worst characteristics of humanity: laziness and disinterested incompetence.

      People are happy to shovel shit if they can get away with it.

    • dfxm12 4 hours ago
      The author is a blogger (creator and consumer) and a coder, though. They are speaking from experience in both cases, so your metaphor isn't apt.

      It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...

    • jama211 4 hours ago
      Generating art is worse than generating code though IMO. It’s more personal. Everything exists on a spectrum, even slop.
    • Blackthorn 4 hours ago
      Turns out it's only slop if it comes from anyone else; if you generated it, it's just smart AI usage.
  • written-beyond 9 minutes ago
    I've had friendships broken because people couldn't understand why I didn't appreciate their "hard work" on a series of 10 articles they wrote with Claude.

    Mind you, this person is an excellent writer; they had great success with ghostwriting and running a small news website where they wrote and curated articles. But for some reason, the chance to have Claude write the things they could never find time for is too great for them to ignore.

    I don't care if you used AI for 99.99% of the research behind the content, but when I read your content it should be written by you. It's why I never take any article on LinkedIn seriously; even before AI, they all lacked any personalization.

  • tomsyouruncle 7 minutes ago
    I think this article overlooks the act of engaging in problem-solving with an agent.

    Personally I find it super helpful to discuss stuff back and forth: it takes a view, explores the code, and brings some insight. I take a view and steer the analysis. And together we arrive at a conclusion.

    By that point the AI’s got so much context it typically does a great job summarising the thought process for wider discussion so I can tweak and polish and share.

  • dontwannahearit 5 hours ago
    It's pretty much over for the human internet. Search was gamed and its usefulness has plummeted, so humans will increasingly ask their LLM of choice, and that LLM will have been trained on the content of the internet.

    So when someone wants to know something about the topic that my website is focused on, chances are it will not be the material from the website they see directly, but a summary of what the LLM learned from my website.

    Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, and controversy-averse; this will be the new SEO).

    We meatbags are not the audience.

    • netsharc 5 hours ago
      Tragedy of the attention economy. Ad networks give you money if you place their ads on your site, so people got machines to generate fluff to earn some money. Now all the search results are just bullshit pages built to capture your attention until the banner ad.

      A simple query like "Ford Focus wheel nut torque" gives pages with blah blah like:

      > Overview Of Lug Nut Torque For Ford Focus

      > The Ford Focus uses specific lug nut torque to keep wheels secure while allowing safe driving dynamics. Correct torque helps prevent rotor distortion, brake heat transfer issues, and wheel detachment. While exact values can vary by model year, wheel size, and nut type, applying the proper torque is essential for all Ford Focus owners.

      And the site probably has this text for each car model.

      Somehow the ways the ad industry destroyed the Internet got very varied...

      • malfist 4 hours ago
        And that site never actually lists the manufacturer recommended torque either. It's just all slop to get eyeballs.
      • dionian 4 hours ago
        i think there is a huge market for Quality detection in the future. imagine a browser plugin that could filter AI slop like an ad blocker does ads. im sure it exists already. But im sure it needs to get more advanced
    • comboy 4 hours ago
      There there, remember when all images were hand painted? (me neither)

      And I know it's different, but I'm surprised the overall sentiment on HN is so pessimistic. So maybe we will communicate through yet another black box on top of the hundreds of existing ones, but probably mostly when seeking specific information and wanting to get it efficiently. Yes, this one is different in that it makes human contact over text much more difficult, but a big part of all of this had already been happening for years; now it's just widely available.

      When posting on HN you don't see the other person typing, like with the talk command on Unix, but it is still meaningful.

      Ideally we would like to preserve what we have untouched and only have new stuff as an option but it's never been like this. Did we all enjoy win 3.11? I mean it was interesting.. but clicking.. so inefficient (and of course there are tons of people who will likely scream from their GUIs that it still is and windows sucks, I'd gladly join, but we have our keyboard bindings, other operating systems, and get by somehow)

      • kjkjadksj 4 hours ago
        There is a mountain of difference between photography and AI.
        • comboy 4 hours ago
          This argument works against any new thing. Yes, it is totally different from the things that came before, perhaps something that has never happened before; I don't deny that at all.

          Perception of new things stays relatively constant over the years though.

          • kjkjadksj 3 hours ago
            I get that to an extent, but AI actually is different from just another iteration on existing practice. It is going to put a lot of people out of work and devalue a lot of previously valuable skills. New tech never really threatened jobs like scientist or lawyer, but those are on the block as well. Not just low-skilled labor. High-skilled. Any-skilled. Why do we even need labor? Why have 8 billion people? We'd just need the minimum number to do whatever work is left yet to be automated.

            And as for the thought that we'd all be prancing around playing guitars by the river on UBI when that happens: no, we just won't be born anymore.

    • ericpauley 3 hours ago
      I (perhaps naively) still believe that communities can successfully curate human writing. While there's lots of AI slop that gets posted on HN, for instance, the amount of thoughtful human content seems well above the base rate.
      • manuelmoreale 2 hours ago
        You are not alone and fuck all the people that say that everything is doomed and that there's no way to still have a good internet full of wonderful content made by people.
  • wellpast a minute ago
    $100 says AI got at least some notes affecting the copy of this post.
  • ravirajx7 4 hours ago
    AI has kind of ruined the internet for me.

    I no longer feel joy in reading things, as most of the writing seems the same and pale to me, as if everyone is putting their thoughts down in the same way.

    Having your own way of writing always felt personal; it was how you expressed your feelings most of the time.

    The saddest part for me is that I am no longer able to understand someone's true feelings (which were hard to express in writing anyway, since articulation is hard).

    We see it being used by our favourite sportsperson in their retirement post, by someone who has lost a loved one, by someone who just got their first job, and it's just sad that we can never have those old pre-AI days back again.

    • hinkley 4 hours ago
      I used to daydream of a 'dark web' but for humans, not criminals. But at this point I don't know how you'd keep slop out, given how high human collusion has gotten of late.
    • platinumrad 4 hours ago
      Things like retirement posts have always been vetted (if not written by) PR agencies, so in that sense I think it would be a good thing if the mass delusion of parasocial relationships with celebrities were indirectly broken by AI-created skepticism.

      However, I agree that ordinary people filtering and flattening their communication into a single style is a great loss.

  • petetnt 5 hours ago
    I agree with the general statement, if you didn’t spend time on writing it, I am not going to spend time reading it. That includes situations where the writer decides to strip all personality by letting AI format the end product. There’s irony in not wanting to read AI content, but still using it for code and especially documentation though, where the same principle should apply.
    • jimmaswell 5 hours ago
      I find AI is great at documenting code. It's a description of what the code does and how to use it - all that matters is that it's correct and easy to read, which it almost certainly will be in my experience.
      • b2ccb2 3 hours ago
        I have quite a different take on that. As much as most people view documentation as a chore, there is value in it.

        See it as code review, reflection, getting a bird's-eye view.

        When I document my code, I often stop in between and think: that implementation detail doesn't make sense / is over-convoluted / can be simplified / seems to be lacking a sanity check, etc…

        There is also the art of subtly injecting humor into it, with, e.g., code examples.

      • archagon 5 hours ago
        Documentation is needed for intent. For everything else you could just read the code. With well-written code, “what the code does and how to use it” should be clear.
  • dematz 5 hours ago
    >I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

    Doesn't ai;dr kind of contradict AI-generated documentation? If I want to know what Claude thinks about your code, I can just ask it. IMO documentation is the least amenable thing to AI. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.

    (AI adding tests seems like a good use, not sure what's meant by scaffolding)

    • fmbb 4 hours ago
      The article is definitely contradicting itself. There are only two sentences between

      > Why should I bother to read something someone else couldn't be bothered to write?

      and

      > I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

      So they expect nobody to read their documentation.

      • mystifyingpoi 3 hours ago
        > So they expect nobody to read their documentation.

        Yes, exactly. Because AI will read it and learn from it, it's not for humans.

      • jama211 4 hours ago
        That’s not a contradiction - documentation often needs to be written with no expectation anyone will ever read it.
  • numbers 5 hours ago
    I remember back in 2023, when ChatGPT had first launched, I had a manager whose English was not very good. He started sending emails that felt like they were written by a copywriter, and the messaging was so hard to parse because there was so much ChatGPT fluff around it. Very quickly we realized that what he was saying was usually somewhere in the middle, but we'd have to read through the intro and the ending of the emails just so that we didn't miss anything. It felt like wasting 2-3 extra minutes per team member.
    • afavour 5 hours ago
      I have long believed that LLMs will herald a new corporate data transfer format. Unlike most new formats, which boast efficiency gains and compression, this one will be incredibly wasteful and bloat transmission sizes.

      I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4 bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.
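
      The whole "format" fits in a few lines of TypeScript. A sketch, where llm() is a hypothetical stand-in for whatever completion API you use (not a real library call):

        // llm() is a placeholder for any text-completion endpoint.
        declare function llm(prompt: string): Promise<string>;

        async function corporateRoundTrip(points: string[]): Promise<string[]> {
          // Sender: inflate four bullets into a flowing, multi-paragraph email.
          const email = await llm("Expand these notes into a polished email:\n" + points.join("\n"));
          // Receiver: deflate it right back down.
          const summary = await llm("Summarize this email as 4 bullet points:\n" + email);
          return summary.split("\n"); // with luck, the same four points
        }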

      • entuno 5 hours ago
        And hopefully they're the same four bullet points...
      • larsla 2 hours ago
        HypoText Transfer Protocol
      • the_af 4 hours ago
        Ah, yes, the LLM Exchange Protocol.

        I believe it's already in place, making the internet a bit more wasteful.

    • micromacrofoot 5 hours ago
      the solution is simple: ask ChatGPT to summarize it

      a large part of the business models of these systems is going to consist of dealing with these systems... it's a wonderful scheme

      • ethmarks 3 hours ago
        Why bother fixing existing problems if you can just create new problems and then fix those? /s
  • trollbridge 5 hours ago
    I would be glad to read anyone's prompts they use to generate AI text. I don't see why I need to necessarily read the output, though.

    I can take the other person's prompt and run it through an LLM myself and proceed from there.

    • stevenjgarner 5 hours ago
      I think that may be an insight into something quite profound. We used to measure the "doubling of knowledge" against the number of peer-reviewed papers of scientific research, etc. Now not so much. "Knowledge" has become more proprietary, condensed into the AI models that replace the libraries of training data. We now measure the "doubling of knowledge" as the next version or iteration of a model. In some kind of real sense, the prompt IS more powerful than the output.
  • mikemarsh 5 hours ago
    > I can't imaging writing code by myself again, specially documentation, tests and most scaffolding

    > Why should I bother to read something someone else couldn't be bothered to write?

    Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?

    • improbableinf 5 hours ago
      That’s exactly my thought. Code and documentation are among the primary types of „content” by/for engineers, which kind of goes against the main topic of the article.
  • 9999gold 5 hours ago
    > I can't imaging writing code by myself again, specially documentation, tests and most scaffolding

    Shouldn’t we bother to write these things?

    • post-it 5 hours ago
      Documentation, maybe. Tests and scaffolding, no way. 99% of my time writing tests is spent figuring out how to make this particular React component testable. It's a waste of time. It's very easy to verify that a test is correct, which makes tests the ideal thing to use AI for.
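
      That is, the kind of boilerplate that's cheap to verify once it exists. A sketch with React Testing Library (the Counter component is hypothetical, and toBeInTheDocument assumes @testing-library/jest-dom is wired up):

        import { render, screen, fireEvent } from "@testing-library/react";
        import { Counter } from "./Counter"; // hypothetical component under test

        test("increments the count on click", () => {
          render(<Counter initial={0} />);
          fireEvent.click(screen.getByRole("button", { name: /increment/i }));
          expect(screen.getByText("1")).toBeInTheDocument();
        });
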
    • twoodfin 5 hours ago
      Other than documentation (where I agree!), those are for communicating desired actions (primarily) to a machine.

      A blog post is for communicating (primarily, these days) to humans.

      They’re not the same audience (yet).

    • jama211 4 hours ago
      Nah.
  • furyofantares 4 hours ago
    I call out articles on here constantly and have gotten kind of tired of it. Well, very tired of it. I am in full agreement with this post.

    I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output; it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.

  • ecshafer 5 hours ago
    > When it comes to content..

    This is the root cause of the problem: labeling all things as just "content". "Content" entering the lexicon marks a mind shift in people. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, then it becomes less good.

  • weinzierl 4 hours ago
    "Growing up, typos and grammatical errors were a negative signal. Funnily enough, that’s completely flipped for me."

    For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about the small errors anymore.

    • hinkley 4 hours ago
      Autocorrect is going to make me sound like a dumbass anyway.
  • Starlevel004 5 hours ago
    I laugh every time somebody qualifies their anti-AI comments with "Actually I really like AI, I use it for everything else". The problem is bad, but the cause of the problem (and especially paying for the cause of the problem)? That's good!
    • Kerrick 5 hours ago
      I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.

      It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.

    • 4b11b4 4 hours ago
      It's not as one-dimensional as good vs bad. Transformers generally are extremely useful. Do I want to read your transformer generated writing? Fuck no. Is code generation/understanding/natural language interfaces to a computer good? I'd have to argue yes, certainly.

      I cry every time somebody tries to frame it one dimensionally.

    • the_af 4 hours ago
      Why laugh? Why can't a tool have good and bad uses, and why can't one be disappointed about the bad uses but embrace the good ones?
  • yrds96 an hour ago
    Good article. Nothing is more frustrating than reading a text when I don't even know whether the person who "wrote" it actually read it.

    If someone wants me to read a giant text generated from a short, poor prompt, I don't wanna read it.

    If someone fixes that by putting in more effort to write a better prompt and express the ideas better, I'd rather read that prompt than the LLM output.

  • brikym 31 minutes ago
    I've been throwing in typos and shitty grammar for a while just to seem authentic. I suppose now that will be copied.
  • giancarlostoro 4 hours ago
    The correct way to use AI for writing is to ask for feedback, not the entire output. This is my personal opinion. English is not my first language, so sometimes I miss what's obvious to a native speaker. I've always used tools that tell me what's wrong with my writing as an opportunity to learn to do better next time. When I finally had Firefox on my computer and it corrected my spelling, it helped me improve my spelling 100-fold. I still have weird grammar issues with punctuation here and there, and don't ask me where to put a coma (comma?) - that's another one, because I always forget.

    I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.

    • patrakov 32 minutes ago
      There is another correct way: state the general topic, ask it to ask you the questions that would be needed to write the piece, discard its final output, and write something yourself based on what it asked.

      Example (minus the final review): https://chatgpt.com/share/698e417a-4448-8011-9c29-12c9b91318...

      I still think that the final review written by ChatGPT is a bit off. But at least it asked mostly the right questions.

  • stormed 22 minutes ago
    My issue is that I don't necessarily trust content if it looks generated. I think I might've lost the link, but when I was helping my company integrate Microsoft Entra with Ubuntu, I noticed that the documentation from both Microsoft & Canonical was heavily generated and flat-out wrong, and it had me going in loops through unnecessary steps that were seemingly hallucinated.

    ai;dr is what I'm going to start saying, it's just frustrating to see.

  • hashstring 3 hours ago
    Agree, but I'm also getting tired of all these blogs that state more or less the same thing about LLMs. I’ve read this before.
    • allknowingfrog 3 hours ago
      I could stand to hear less from both the enthusiasts and the detractors. My HN experience has changed substantially in the last couple of years.
  • TheChelsUK 5 hours ago
    Thoughts with the people who use AI to help construct their thoughts because cognitive decline impacts their ability to construct words and sentences, but who still enjoy producing content, blogging, and the indieweb.

    These blanket binary takes are tiresome. There is nuance, and there are rough edges.

  • cgriswald 5 hours ago
    > For me, writing is the most direct window into how someone thinks, perceives, and groks the world. Once you outsource that to an LLM, I'm not sure what we're even doing here. Why should I bother to read something someone else couldn't be bothered to write?

    Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.

    Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.

    The real problem I think the author has here is that it can be difficult to tell the difference, and therefore difficult to judge whether it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.

    • jvanderbot 5 hours ago
      If you use an LLM to refine your ideas, you're basically adding a third party to the chat. There's really no need to copy-paste anything - you are the one that changes before you speak.

      If you use an LLM to generate the ideas and justification and formatting and etc etc, you're just delegating your part in the convo to a bot.

    • JoshTriplett 5 hours ago
      > Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.

      Homogenization is good for milk, but not for writing.

      • cgriswald 4 hours ago
        Clarity is good for writing and homogenization can increase clarity. There is a reason technical writing doesn’t read like journalism doesn’t read like fiction. There’s a reason we have dictionaries and editors. There’s a reason we have style guides. Including an LLM in writing in any of these roles or others isn’t ipso facto bad. I think many people who think it is just don’t like the style. And that’s okay, but the article isn’t about the style per se but about effort. Both lazy writing and effortful writing can be done with or without an LLM.
      • trollbridge 5 hours ago
        I'm not sure I'd agree with the statement "homogenization is good for milk". What makes it "good"?
        • JoshTriplett 5 hours ago
          Fair enough, tastes vary. Many people prefer that milk not be chunky or lumpy, and want it to be uniform and consistent. Perhaps some do not.
    • jmull 4 hours ago
      > author/publisher reputation is a far better signal than looking for AI tells

      Hardly seems mutually exclusive. Surely you should generally consider the reputation of someone who posts LLM-responses (without disclosing it) to be pretty low.

      A lot of people don’t particularly want to waste time reading LLM responses to someone else’s unknown/unspecified prompts. Someone who would trick you into that doesn’t have a lot of respect for their readers and is unlikely to post anything of value.

      • cgriswald 3 hours ago
        I think valuing the source of information over its quality is probably a mistake for most contexts. I’m also very skeptical of people’s ability to detect AI writing in general even though AI slop seems easy enough to identify. (Although lots of human slop looks pretty similar to me.)

        Don’t get me wrong. I don’t want to read (for example) AI fiction because I know there’s no actual mind behind it (to the extent that I can ever know this).

        But AI is going to get better and the only thing that’s going to even work going forward is to trust publishers and authors who give high value regardless of how integral LLMs are to the process.

    • NitpickLawyer 5 hours ago
      > Outsourcing thinking is bad.

      I keep seeing this, and I don't think I agree. We outsource thinking every day. Companies do this every day. I don't study the weather myself; I check an app and bring an umbrella if it says it's gonna rain. My team trusts each other to do some thinking in their area, and present bits sideways / upwards. We delegate lots of things. We collaborate on lots of things.

      What needs to be clear is who owns what. I never send something I wouldn't stand by. Not in a correctness sense (I have been, am, and likely will be wrong on any number of things) but more in a "yeah, that is my output, and I stand by it now" kind of way. Tomorrow it might change.

      Also remember that Google quip: "it's hard to edit an empty file". We have always used tools to help us, from scripts saved here and there, to shortcuts, to macros, IDE setups, extensions, and so on. We "think once" and then try not to "think" on every little detail. We'd go nowhere with that approach.

      • Terr_ 4 hours ago
        IMO it helps to take a scenario and then imagine every task is being delegated to a randomized impoverished human remote contractor, with the same (lack of) oversight and involvement by the user.

        There's a strong overlap between the things which are bad (unwise, reckless, unethical, fraudulent, etc.) in both cases.

        > We outsource thinking everyday. [...] What needs to be clear is who owns what.

        Also once you have clarity, there's another layer where some owning/approval/delegation is not permissible.

        For example, a student ordering "make me a 3 page report on the Renaissance." Whether the order went to another human or an LLM, it is still cheating, and that wouldn't change even if they carefully reviewed it and gave it a stamp of careful approval.

      • cgriswald 4 hours ago
        Right. I don’t think I disagree with anything you’ve said here.

        However, if I had an idea and just fobbed the idea off to an LLM who fleshed it out and posted it to my blog, would you want to read the result? Do you want to argue against that idea if I never even put any thought into it and maybe don’t even care?

        I’m like you in this regard. If I used an LLM to write something I still “own” the publishing of that thing. However, not everyone is like this.

      • pohl 4 hours ago
        Managers and business owners outsource thinking to their employees and they deserve huge paychecks for it. Entrepreneurs do it and we celebrate them. But an invention that allows the peon to delegate to an automaton? That’s where I draw the line.
  • logicprog 5 hours ago
    Yeah, I use LLM agents extensively for coding, but I have never once allowed an LLM to write anything for me. In the past month, I literally wrote 40,000 words of researched essays on various topics, and every single word was manually written, and every source manually read, myself. Writing is how I think, how I process information, and it's also an activity where efficiency is really not the goal.
  • phito 5 hours ago
    I roll my eyes every time I see a coworker post a very long message full of emojis, obviously generated by an LLM with zero post-editing. It's even worse when it's for social communication, such as welcoming a new member to the team. It just feels so fake and disingenuous; I might even say gross.

    I don't understand how they can think it's a good idea. I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their heads than this slop.

  • pwillia7 4 hours ago
    I am 100% right there with you. Writing in my voice is maybe the last thing I have that I can do differently and 'better' than an LLM in a couple of years' time, or even right now if I'm really being honest.

    I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.

    • jama211 4 hours ago
      Writing to express yourself, sure. Using an LLM for a birthday card would be a terrible sin. However, if someone used it for, I dunno, drafting an email because you’re in a dispute with an evil real estate agent and you’re trying not to get screwed, I wouldn’t have any qualms about it.

      IMO it’s lazy and bad for expressive writing, but for certain things it’s totally fine.

  • elischleifer 5 hours ago
    "The less polished and coherent something is, the more value I assign to it." - maybe a bit of an overstatement ;)
    • arscan 5 hours ago
      This absolutely has been the case for me for the last few months. But what’s disheartening is that this signal will just be mimicked through simple prompting if too many people start tuning in to it. Or maybe that’s already happened?
    • jugglinmike 4 hours ago
      It also contradicts the author's earlier argument:

      > I need to know there was intention behind it. [...] That someone needed to articulate the chaos in their head, and wrestle it into shape.

      If forced to choose, I'd sooner use coherence as evidence of care than as a refutation of humanity.

  • esafak 5 hours ago
    The purpose of communication is to reduce the cost of obtaining information; I tell you what I have already figured out and vice versa. If we're both querying the same oracle, there is nothing gained beyond the prompt itself (which can be valuable).
  • Tycho 5 hours ago
    When people put together memos or decks in the past, even if they weren’t read very carefully, at least they reassured management that someone had actually thought things through. But that is no longer a reliable signal.
  • andrewdb 4 hours ago
    We are getting to a point where AI will be able to construct sound arguments in prose. They will make logical sense. Dismissing them only because of their origin is fallacious thinking.

    Conclusion:

    Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.

    Premises:

    1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.

    2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.

    3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.

    Assumptions:

    1. AI-generated arguments can have true premises and sound inferences

    2. The genetic fallacy is a legitimate logical error to avoid

    3. Source-based dismissals are categorically inappropriate in logical evaluation

    4. AI should be treated as equivalent to any other source when evaluating arguments

  • soperj5 hours ago
    > specially documentation

    That's how we can tell this wasn't written by an LLM.

    • ssiddharth5 hours ago
      I'm too poor for Claude Max 20x. Not that it needs that firepower, but eh, there's no real way to tell. As I mentioned, almost every single quirk can be willed away with a little bit of attention and effort.

      At this point, I'm not sure whether you're a clawdbot running amok..

    • micromacrofoot5 hours ago
      You can't.

      As always, we have to lean on evaluating quality. You can produce quality using an LLM, but it's much easier to produce slop, which is why there's so much of it now.

      • soperj2 hours ago
        when they make a basic grammar mistake ("specially" instead of "especially"), it's a good indicator that it wasn't written by an llm.
  • nate4 hours ago
    i am absolutely on the fence here. I do like the ai cleanup of my rambling can do. but yes, i'm tempted to just leave it rambly, misspelled, etc. i find myself swearing more in my writing, just to give it more signal that: yeah, this probably aint an ai talking (writing) like this to you :) and yes, caps, barely.
  • smithza3 hours ago
    Please read through this incredible book review (the book is All Things Are Full of Gods by David Bentley Hart). It is the kind of philosophy that everyone is looking past. Syntactic vs. informational determinacy: an LLM is designed to create copy that is syntactically determinate (it is a complex set of statistical functions), whereas the best human prose is the opposite--it does not converge on syntactic determinacy (see the quote below) but instead converges on informational determinacy. The plot resolves as the reader's knowledge grows from abstraction and ignorance to empathy, insight, and anticipation.

    https://www.thenewatlantis.com/publications/one-to-zero

      Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that’s conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning.
    
    Edit: add-on

    In other words, it is impossible for an LLM (or monkeys at keyboards [0]) to recreate Tolstoy, because of the unique role our minds play in writing. The verb "writing" hardly seems to apply to an LLM once we consider the function it is actually performing.

    [0] https://libraryofbabel.info

  • phyzome5 hours ago
    Seems pretty silly to me to rail against AI-generated writing and then say it's good for documentation.
    • Daishiman4 hours ago
      Documentation can be fairly rote, straightforward and can have a uniform style that doesn't benefit from being opinionated.
  • keyboredan hour ago
    > Before you get your pitchforks out..

    I know it’s just modern writing style to preempt all responses. But can’t you just plainly state your business without professing your appreciation?

    People who waste others' time with bullshit are aholes. I don't care if it's My Great Friend And Partner in Crime, Anthropic's LLM, or a tedious template written in PHP with just enough substitutions and variations to make me waste five sentences on it before closing it.

    Actually, saying that it’s the same thing is a bit like saying “guns don’t shoot people”. At least you had to copy-paste that PHP template from somewhere and adapt it to your spam. Back in the day.

  • smallerfish4 hours ago
    I've been doing a lot of AI writing for a site - to do it well takes effort. I have a research agent, a fact check agent, a logical flow agent, a narrative arc analyzing agent, etc etc. Once I beat the article roughly into the shape I want it to be, I then read through end to end, either making edits myself or instructing an editor agent to do it. You can create some high quality writing with it, and it is still quicker than doing it the human-only way. One thing I like (which is not reason enough by itself) is that it gives you a little distance from the writing, making it easier to be ruthless about editing...it's much harder to cut a few paragraphs of precious prose that you spent an hour perfecting by hand. Another bonus is that you have fewer typos and grammatical issues.

    But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
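
    To make that concrete, here's a minimal, hypothetical sketch of such a pipeline. The agent prompts are stand-ins I invented, and it assumes the OpenAI Python SDK; the real thing would use whatever client and prompts you prefer. Each "agent" is just a focused instruction run against the current draft, with a human acting on each report before the next pass:

      # A hypothetical multi-pass editing pipeline: each "agent" is a system
      # prompt applied to the current draft. A human reads each report and
      # edits the draft before the next pass.
      from openai import OpenAI

      client = OpenAI()

      AGENTS = {
          "fact_check": "List every claim in this draft that needs a source.",
          "logical_flow": "Point out gaps or jumps in the argument's structure.",
          "narrative_arc": "Assess whether the piece builds and resolves well.",
      }

      def run_agent(instruction: str, draft: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o",
              messages=[
                  {"role": "system", "content": instruction},
                  {"role": "user", "content": draft},
              ],
          )
          return response.choices[0].message.content

      draft = open("article.md").read()
      for name, instruction in AGENTS.items():
          print(f"--- {name} ---")
          print(run_agent(instruction, draft))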

  • dizhn5 hours ago
    Short and sweet. Did you coin the term, or did it exist already?
  • grishka4 hours ago
    > Before you get your pitchforks out..

    > ..and call me an AI luddite

    Oh please do call me an AI luddite. It's an honor for me.

  • BobAliceInATree4 hours ago
    > I'm having a hard time articulating this but AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort and make the dead internet theory harder to dismiss.

    I think it's the size of the audience the AI-generated content is for that makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (an email) or a team (an internal doc) is often fine, as it's hopefully intentional and tailored. But what's even the point of AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.

  • 0gs5 hours ago
    all writing is developer documentation.
  • ef2k4 hours ago
    I really liked this post. It's concise and gets straight to the point. When it comes to presenting ideas, I think this is the best way to counter AI slop.
  • benatkin4 hours ago
    Don't worry, author, I don't think you're a luddite. You make that quite clear with this:

    > I can't imaging writing code by myself again

    After that, you say that you need to know the intention for "content".

    I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".

    I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.

    Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.

  • xpe2 hours ago
    > Why should I bother to read something someone else couldn't be bothered to write?

    This is an easy but not very insightful framing.

    I want to read intelligent, thoughtful text that is useful in some way: to me, to society, to humanity. Ceteris paribus, the source of the information does not necessarily matter; it matters only by association. To put it another way, “human” vs. “machine” is not the core driving factor for me.

    All other things equal, I would rather read A over B:

    A. high quality AI content, even if it is “only” the result of 6 minutes of human question framing and light editing [1]

    B. low quality purely human content, even if it was the result of 60 minutes of effort.

    It is increasingly hard to distinguish “human” writing from “AI” writing. Some people fool themselves about their AI-detection prowess.

    To be direct: I want meaningful and satisfying lives for humans. If we want to reward humans for writing more, we better reflect on why, and if we still really want that, we better find ways that work. I don’t think “buy local” as a PR campaign will be easily transferred to a “read human” movement.

    [1]: Of course AI training data is drawn from humans, so I do not discount the human factor. My point is that quantifying the effort put into it is not simple.

  • unconedan hour ago
    >Before you get your pitchforks out and call me an AI luddite, I use LLMs pretty extensively for work.

    Chicken.

    Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?

  • AnimalMuppet3 hours ago
    Does anyone remember the Cluetrain Manifesto? They complained about corporate-speak, saying that it sounded "literally inhuman". Well, AIs are at least that bad: they trained on all those corporate statements, learned to write that way, and we hate it just as we hate corporate PR-speak.
  • fleebee4 hours ago
    I wasn't about to call them a luddite. This is a pretty thinly veiled attempt at drumming up the inevitability of AI coding. Did they really need to defend their preference for not reading LLM prose with "I will never write code manually again"?
  • noonker5 hours ago
    I love that we came to almost the same conclusion regarding grammar. I wrote a very similar article you might enjoy

    https://noonker.github.io/posts/2024-07-25-i-respect-our-sha...

  • Der_Einzige3 hours ago
    Just use our antislop techniques and no one will ever know you used an LLM. https://arxiv.org/abs/2510.15061 (ICLR 2026)

    Also, you have long been able to use "logit_bias" in the API of models that supported it to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure they can figure out whether you used an LLM or not.
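
    For the curious, a minimal sketch of that logit_bias trick, assuming the OpenAI Python SDK and tiktoken (token IDs are tokenizer-specific, so they have to be looked up per model; the banned strings here are just examples):

      # Ban specific tokens by assigning them a logit bias of -100.
      # Note: logit_bias works on individual token IDs, so each spacing or
      # casing variant of a word needs its own entry, and banning a
      # multi-token string bans every one of its tokens everywhere.
      import tiktoken
      from openai import OpenAI

      client = OpenAI()
      enc = tiktoken.encoding_for_model("gpt-4o")

      banned = ["\u2014", ";", " not", " Not"]  # em dash, semicolon, "not"

      bias = {}
      for s in banned:
          for token_id in enc.encode(s):
              bias[str(token_id)] = -100  # -100 effectively forbids the token

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user", "content": "Write a paragraph about tea."}],
          logit_bias=bias,
      )
      print(response.choices[0].message.content)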

  • martythemaniak5 hours ago
    I use a technique where LLMs help me write, but the final output is manual and entirely mine. It's a bit of a heavy process, but I think it blends the power of an LLM with the authenticity of my thoughts fairly well. I'll paste my blog post below (which wasn't produced using this method, hence its rambly nature):

    If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique, and draft lots of words for you. It depends on what purpose you're writing for. If you're writing an impersonal document, like a design document or a briefing, then who cares; in some cases you already have to write those in a voice that is not your own. Go ahead and write those with AI. But if you're trying to say something more personal, then the words should be your own. AI will always try to 'smooth out' your voice, and if you care about it, you've got to write it yourself.

    Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo. Just record yourself, maybe during a walk, and talk about the subject you want, free-form; skip around, jump between sentences, just get it all out of your brain. Then open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions: what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad. Then tell the AI all the things that don't make sense to you, that you don't like; comment on the whole doc; get a second draft. Ask the AI if it has more questions for you. You can use live chat to make this conversation go smoother as well; when the AI is asking you questions, you can talk freely by voice. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.

    This process front-loads all the thinking involved. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached', ready to be used in your head. Then sit down and write your own words from scratch; they will come much more easily after all your thoughts have been exercised during the drafting process.

  • Handy-Man5 hours ago
    OP took it from here without credit https://www.threads.com/@raytray4/post/DUmB657FR4P
    • dqv4 hours ago
      ... attributionism for such a trivial thing is a waste of time. Multiple people can come up with a term like this independently because it's not that creative. People have been doing this with the ";dr" suffix for as long as it has been popular.

      And you're wrong for suggesting that's the first use of ai;dr and further assuming that the author "stole" it from that post. https://rollenspiel.social/@holothuroid/113078030925958957 - September 4, 2024

  • alontorres5 hours ago
    I think that this requires some nuance. Was the post generated with a simple short prompt that contributed little? Sure, it's probably slop.

    But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.

    • yabones5 hours ago
      I don't see what value the LLM would add - writing itself isn't that hard. Thinking is hard, and outsourcing that to an LLM is what people dislike.
      • alontorres4 hours ago
        I'd push back a bit on "writing itself isn't that hard." Clear writing is difficult, and many people with good ideas struggle to communicate them effectively. An LLM can help bridge that gap.

        I do agree with your core point - the thinking is what matters. Where I've found LLMs most useful in my own writing is as a thinking tool, not a writing tool.

        Using them to challenge my assumptions, point out gaps in my argument, or steelman the opposing view. The final prose is mine, but the thinking got sharper through the process.

      • Zambyte5 hours ago
        Using an LLM to ask you questions about what you wrote can help you explore assumptions you are making about the reader, and can help you find what might be better written another way, or elaborated upon.
    • fwip5 hours ago
      One problem is that it's exceedingly difficult to tell, as a reader, which scenario you have encountered.
      • alontorres4 hours ago
        This is the strongest argument against it, I think. Sometimes you can't easily tell from the output whether someone thought deeply and used AI to polish, or just prompted and published. That adds another layer of cognitive burden to parsing text, which is frustrating.

        But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.

        • Linux-Fan2 hours ago
          My exposure to and usage of “AI” has been very limited so far, so this is what I have been doing all along: reading the text mostly irrespective of its origin.

          I do notice that recently I more often find myself wondering what point the author wanted to make, only to then spot a lot of what seem to be the agreed-upon telltale signs of excessive AI usage.

          Effectively, there was already a lot of spam before, so in general I don't mind so much. It is interesting to see, though, that the “new spam” often gets traction and interesting comments on HN, which used to not be the case.

          It also means that behind the spam layer there is possibly some interesting info the writer wanted to share, and for that purpose I imagine I'd prefer to read the unpolished/prompt-input variant over the outcome. So far, though, I haven't seen any posts where both versions were shared, so I can't test whether this would indeed be better.

    • lproven5 hours ago
      You do you.

      I do think there's a great deal wrong with that, and I won't read it at all.

      Human can speak unto human unless there's a language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.

  • extra__tofu4 hours ago
    said “groks the world”; didn’t read
  • nubg5 hours ago
    ai;dr
  • exe344 hours ago
    I tell all my friends: send me your prompts. Don't send me the resulting slop.
  • xvector5 hours ago
    Many engineers suck at writing. I'm fine with AI prose if it's more organized and information-dense than human prose. I'm sick of reading six-page eng blogs to find a paragraph's worth of information.
  • pevansgreenwood33 minutes ago
    [dead]
  • ai_ai4 hours ago
    [dead]
  • FrankRay785 hours ago
    Pop quiz. How much of the following article is AI generated versus hand written intention? Come on, tell me if you actually can tell anymore. https://bettersoftware.uk/2026/01/31/the-business-analyst-ro...
    • meindnoch4 hours ago
      I just skimmed the article, but I can already tell it's chock-full of LLMisms. In other words: ai;dr

      Edit: ok, I've checked your profile and now I see that this is your website that you're astroturfing every thread you reply to. Stop doing that.

      • FrankRay782 hours ago
        I’m seriously not astroturfing; the point is serious. Most people here could no longer tell the difference between good human writing and good AI-generated writing in a blind test, especially a blend of both. So if the reader often can’t tell, why is the source of a well-written, interesting piece more important than the effect of the content itself? I don’t feel the OP made a good argument, imho.
  • dsign4 hours ago
    Ever worried that ChatGPT would rat you out to the authorities because there is such a thing as thought crime? For that reason, there is a vast, unexplored territory where abhorrent ideas and pornographic vulgarity combine with literary prose (or convoluted, defective, god-awful prose, like the one I'm using right now) and entertaining storytelling that will remain human-only for a while. May we all find a next read that we love. Also, we may all need to (re-)learn to draw phalli.
  • charcircuit5 hours ago
    >Why should I bother to read something someone else couldn't be bothered to write?

    This take is baffling to me when I see it repeated. It's like saying, why should people use Windows if Bill Gates did not write every line of it himself? We won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it properly and had an editor come in and fix things?

    The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.

    • mikestew4 hours ago
      > It's like saying why should people use Windows if Bill Gates did not write every line of it himself.

      What? It’s nothing like that, at all. I don’t know that Gates has claimed to have written even a single line of Windows code. I’m not asking for the perfect analogy, but the analogy has to have some tie to reality or it’s not an analogy at all. I’m only half-joking when I wonder if an AI wrote this comment.

    • ssiddharth5 hours ago
      What is creative about generating an article from a stub? The kernel of the article around which the LLM constructs the content? I'm not trying to be an ass, just curious.
      • charcircuit2 hours ago
        The stub itself. So why not just read the stub? As my other post said, there is more value in an article than just the creative idea.