5 points by tobr 9 hours ago | 6 comments
  • noodlesUK 7 hours ago
    I think the reality is that most "knowledge work" jobs involve being a wet meat interface between many different systems, other people, the physical world, etc.

    I think there are very few roles where someone has a nice clean set of inputs and outputs and a clear connection between them. Software development (at times) is amongst the cleanest, because all the inputs are typically computer friendly, and I think that's why GenAI has had a lot of traction in our industry.

    I therefore believe that even with incredibly advanced AI, there will still be a huge amount of work to do, because the world simply isn't as neat as people imagine it is. In other industries this will be even more true.

  • loloquwowndueo 7 hours ago
    > how do you think it'll do at sending emails, doing analysis, writing reports

    Unless it learns to make every single email or report NOT a wall of text, and to stop reusing the same telltale AI-written constructs (it’s not A. It’s B. You’re absolutely right - the key insight is that I’m writing to express my FEELING), it probably won’t do much better than they do now.

    > So companies have the choice of paying Chris $84,000 plus a whole bunch of benefits for 40 hours of mediocre work, or they can pay probably $100-$1000 for an AI

    What makes you think they won’t price the AI much closer to what Chris was costing? They know the employer already pays that cost and the premise here is the AI works better and 24/7 (service outages notwithstanding).

    > I think what we get on the other side will be far more human and meaningful. Humans building things and sharing value with other humans doing the same.

    I want my AI to do dishes and laundry so that I can write and paint. Not for it to write and paint so I can do dishes and laundry.

  • joshuablais 6 hours ago
    So we are just going to take companies at face value now while this model is not publicly available? OpenAI literally said the same thing about their next model a day later.

    Both companies have models "too dangerous to release". Both companies' girlfriends go to another school.

    • fleischhauf 4 hours ago
      Didn't they also claim this about GPT-2? For sure there is a lot of PR involved as well. Models can also be both: really good at cybersecurity and bad at writing emails.
      • cassianoleal 4 hours ago
        Yes, and Anthropic has also claimed Claude has become sentient on at least 3 separate occasions in the last few years.
  • denidoman 7 hours ago
    We are still far from it. Same issue as with robots: you have to build a new environment to let them work efficiently, or at least severely adjust the existing one.

    This will be the next step, and it will be nasty: making our workflows "agent-friendly" even if they are less convenient for humans. And then - yes, partial replacement.

  • adithyassekhar 7 hours ago
    Even the website follows Anthropic’s piss yellow design.