22 points by armcat · 3 hours ago · 7 comments
  • mlpoknbji · 33 minutes ago
    > But we know that any person who uses AI is likely to improve at what they do.

    Do we?

    • co_king_5 · 28 minutes ago
      I would suggest that any person who uses AI will atrophy their compositional skills unless they specifically take care to preserve those skills.
      • rishabhaiover · 3 minutes ago
        As a student, I constantly worry about this. But everyone in my class is producing output at a pace I can't compete with without AI assistance.
      • Insanity · 16 minutes ago
        Yeah, and this seems to be supported by preliminary evidence on the impact of AI on things like retention and cognitive ability.
    • dsr_ · 9 minutes ago
      Not until large-N research is done without sponsorship, support, or veiled threats from AI companies.

      At which point, if the evidence turns out to be negative, it will be considered invalid because no model less recent than November 2027 is worth using for anything. If the evidence turns out to be slightly positive, it will be hailed as the next educational paradigm shift and AI training will be part of unemployment settlements.

    • throwaw1 · 26 minutes ago
      Let me add a single data point.

      > is likely to improve at what they do

      Personally, my skills are not improving.

      Professionally, my output has increased.

  • dmk · 43 minutes ago
    So I guess the key takeaway is basically that the better Claude gets at producing polished output, the less users bother questioning it. They found that artifact conversations have lower rates of fact-checking and reasoning challenges across the board. That's kind of an uncomfortable loop for a company selling increasingly capable models.
    • Florin_Andrei · 31 minutes ago
      I think we're still at the stage where model performance largely depends on:

      - how many data sources it has access to

      - the quality of your prompts

      So, if prompting quality decreases, so does model performance.

      • dmk · 25 minutes ago
        Sure, but the study is saying something slightly different: it's not that people write bad prompts for artifacts. They actually write better ones (more specific, more examples, clearer goals, ...). They just stop evaluating the result. So the input quality goes up, but the quality control goes down.
      • candiddevmike · 8 minutes ago
        What does prompting quality even mean, empirically? I feel like the LLM providers could/should provide prompt scoring as some kind of metric and provide hints to users on ways they can improve (possibly including ways the LLM is specifically trained to act for a given prompt).
        • dsr_ · 7 minutes ago
          That would be a quality metric, and right now they are focused on quantity metrics.
  • bargainbin · an hour ago
    I'm not alone in finding this at odds with the claims of the product, right?

    Claude is meant to be so clever it can replace all white collar work in the next N years, but also "you're not using it right". Which one is it?

    • dsr_ · 6 minutes ago
      Which one will convince you to buy more Claude? Please answer honestly, it's for the sake of profits.
    • SpicyLemonZest · 37 minutes ago
      I'm not quite convinced of the maximalist claims, but these two aren't incompatible. Every time we talk about a company being "mismanaged" by e.g. a private equity buyout, what we mean is that the owners had access to a large volume of high quality white collar work but couldn't figure out how to use it right.
  • bigstrat2003 · an hour ago
    To the extent that this should be a thing at all, there are very few people I would trust less to do it than a company that has repeatedly been caught lying about its product's achievements. Anthropic should not be taken seriously after their track record.
  • Kye · 2 hours ago
    You could arrive at the essence of this by just having read and internalized Carl Sagan's The Demon-Haunted World. Especially the Baloney Detection Kit.

    In my experience good prompting is mostly just good thinking.

  • sarkarghya · an hour ago
    Honestly, to use LLMs properly all you need to know is that it's a next-word (or next-action) prediction model, and, like all models, increased entropy hurts it. Try to reduce entropy to get better results. The rest is just sugarcoated nonsense. To use LLMs properly you need a physics class.
    • rishabhaiover · 5 minutes ago
      And then some alignment, prompting structure, and task decomposition.
    • Barbing · 25 minutes ago
      Which class? Or what subjects?
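The entropy point above can be made concrete with a toy calculation (the distributions here are hypothetical illustrations, not real model outputs): a vague prompt leaves many continuations about equally likely (high Shannon entropy), while a specific prompt concentrates probability on one continuation (low entropy).

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Vague prompt: four continuations equally likely -> flat distribution.
vague = [0.25, 0.25, 0.25, 0.25]

# Specific prompt: one dominant continuation -> peaked distribution.
specific = [0.85, 0.05, 0.05, 0.05]

print(entropy(vague))     # 2.0 bits
print(entropy(specific))  # ~0.85 bits: lower entropy, more predictable
```

Reducing the entropy of the next-token distribution (via clearer, more constrained prompts) is exactly the "reduce entropy" advice in the comment, restated in information-theoretic terms.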