18 points by devonnull 21 hours ago | 1 comment
  • didibus 20 hours ago
    As someone who uses AI for coding, emails, design documents, and so on...

    I'm always a bit confused by the "training" rhetoric. It's the easiest thing to use. Do people need training to use a calculator?

    This isn't like using Excel effectively and learning all the features, functions and so on.

    Maybe I overestimate my ability as a technically savvy person to leverage AI tools, but I was just as good at using them on day 1 as I am 2 years later.

    • dublinben 20 hours ago
      >Do people need training to use a calculator?

      Yes? Quite a bit of time was spent in math classes over the years learning to use calculators. Especially the more complicated functions of so-called graphing calculators. They're certainly not self-explanatory.

      What does it say about your skill or the depth of this tool that you haven't gotten better at using it after 2 years of practice?

      • godelski 14 hours ago
        Even on just normal calculators.

        Quick, without looking it up, can you tell me what the {mc, m+, m-, mr} buttons do? If you're asking "the what buttons?" or "that's not on my calculator" then we have an answer. If you do know these, did you just intuit them or did you learn them from some instruction? If you really did intuit them, do you really think that's how most people do it? (did you actually intuit them...)

      • watwut 20 hours ago
        One of this article's claims is that AI projects fail because companies failed to train employees for AI. You do get value out of calculators without training. The training is there so you can unlock more advanced, complicated functions.

        The article comes across as an "AI can not fail, it can only be failed" argument.

    • happytoexplain 20 hours ago
      In my experience, "training" usually means just telling people not to blindly trust the output. Like... read it. If you can't personally verify in a code-review capacity that what it wrote is apparently correct, then don't use it. The majority of people simply don't care - it's just blind copy-pasting from StackOverflow all over again, but more people are doing it more often. Of course, like most training, it's performative. 90% of the people making this mistake aren't capable of reviewing the output, so telling them to is pointless.
    • Cpoll 20 hours ago
      Things I'd include in training:

      - Mental model of how the AI works
      - Prompt engineering
      - Common failure modes
      - Effective validation/proofreading

      As for internal stuff like emails/design docs... I think using an AI to generate emails exposes a culture problem, where people aren't comfortable writing/sending concise emails (i.e. the data that went into the prompt).

    • NegativeK 20 hours ago
      Are employees aware that they can't trust AI results uncritically, as the article mentions? See: the lawyers who have been disciplined by judges, or doctors who aren't verifying the conversation transcriptions and medical notes generated by AI.

      Does your organization have records-retention or legal-hold needs that employees must be aware of when using a rando AI service?

      Will employees be violating NDAs or other compliance requirements (HIPAA, etc) when they ask questions or submit data to an AI service?

      For the LLM that has access to the company's documents, did the team rolling it out verify that all user access control restrictions remain in place when a user uses the LLM?

      Is the AI service actually equivalent or better or even just good enough compared to the employees laid off or retasked?

      This stuff isn't necessarily specific to AI and LLMs, but the hype train is moving so fast that people are having to relearn very hard lessons.

    • derektank 20 hours ago
      I'm arguably much worse at using ChatGPT today than I was 2 years ago, as back then you needed to be more specific and constrained in your prompts to generate useful results.

      Nowadays with larger context windows and just generally improved performance, I can ask a one sentence question and iterate to refine the output.

    • vrighter 11 hours ago
      replace the word "training" with "convincing" and it starts making more sense
    • didntknowyou 18 hours ago
      I think the problem is more people punching numbers into the calculator and presenting the answer, without the faintest idea whether it is even right (or the ability to check).
    • dwheeler 18 hours ago
      Yes, you need training if you want something good instead of slop. For example, when an LLM is asked to write functions that can be done securely or insecurely, 45% of the time it will do it the insecure way, and this has been stable for years. We in the OpenSSF are going to release a free course, "Secure AI/ML-Driven Software Development (LFEL1012)". Expected release date is October 16. It will be here: https://training.linuxfoundation.org/express-learning/secure...

      Fill in this form to receive an email notification when the course is available: https://docs.google.com/forms/d/e/1FAIpQLSfWW8M6PwOM62VHgc-Y...
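      To make the secure-vs-insecure split concrete: a minimal, hypothetical illustration (not taken from the course) of the kind of fork an untrained prompt can land on either side of. Both helper functions and the table are invented for the example.

      ```python
      import sqlite3

      def find_user_insecure(conn, name):
          # Insecure: string interpolation lets input rewrite the query (SQL injection).
          return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

      def find_user_secure(conn, name):
          # Secure: parameterized query; the driver treats the input as a literal value.
          return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
      conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

      payload = "x' OR '1'='1"
      print(len(find_user_insecure(conn, payload)))  # 2 -- the injection matches every row
      print(len(find_user_secure(conn, payload)))    # 0 -- treated as a literal (nonexistent) name
      ```

      Both versions "work" on friendly input, which is exactly why an untrained reviewer approving LLM output won't notice the difference.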

    • righthand 20 hours ago
      No, people need training for AI the same way they need training for proofreading. Quality checking isn’t a natural process when something looks 80% complete and the approvers only care about 80% completeness.

      My coworker still gets paid the same for turning in garbage as long as someone fixes it later.