32 points by paulpauper 10 hours ago | 7 comments
  • thfuran 9 hours ago
    There's no way under our current economic system that the result of a tool that makes some work easier/faster would be anything other than filling the gap with more or other work.
    • bwhiting2356 6 hours ago
      The amount of work left to do is massive if you stack up all the unsolved problems and potential R&D, like diseases with no cure or that we don't even begin to understand. We could choose to simply stop new R&D, but as long as there continues to be suffering from those unsolved problems, it's in our nature to keep trying to solve them. And it's in our nature to prevent the free-rider problem, where people expect to benefit from the solutions without contributing.
      • figassis 3 hours ago
        This is fine. But companies seem to not have a control lever for employee wellbeing. If humanity works to solve problems, don’t you think overwork is also a problem that needs to be addressed?
      • danny_codes 3 hours ago
        Capitalism is designed to have a lot of free riders. Anyone with sufficient capital not to work is free-riding by design (if they choose not to work).
  • ahartmetz 5 hours ago
    Cory Doctorow calls it becoming a reverse centaur: instead of you using machines to automate the boring parts, machines use you to do the messy parts - which includes taking liability(!) for the spicy autocomplete's semirandom output that you totally reviewed 100%, right?
  • QuadrupleA 8 hours ago
    One thing I haven't seen mentioned much, in AI coding and other AI-assisted work, is the sheer needless verbosity of models, the walls of text they spew out for us to read through. This alone adds to the workload & fatigue.

    There's a thing in writing, "pity the reader" - respect your audience's time, get to the point. In The Elements of Style, "omit needless words."

    You can prompt models to be succinct, but the latest ones - the GPT-5 series especially - ignore your requests and spew paragraph after paragraph of noise. Maybe it's the incentive of charging per token?

    If you want, I can expand on this topic and generate a lengthy comparison chart.
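
    The per-token incentive above can be put in rough numbers. Everything here is hypothetical (assumed price, assumed token counts, assumed request volume), just to show the scale of the difference:

    ```python
    # Back-of-the-envelope sketch of the per-token incentive.
    # All figures are made up, not any provider's actual rates.

    price_per_output_token = 10 / 1_000_000  # assume $10 per 1M output tokens

    terse_answer_tokens = 50       # "omit needless words"
    verbose_answer_tokens = 1_500  # preamble, bullets, recap, offer to expand

    requests_per_day = 100_000     # assumed traffic

    terse_cost = terse_answer_tokens * price_per_output_token * requests_per_day
    verbose_cost = verbose_answer_tokens * price_per_output_token * requests_per_day

    print(f"terse:   ${terse_cost:,.2f}/day")    # revenue from short answers
    print(f"verbose: ${verbose_cost:,.2f}/day")  # revenue from padded answers
    ```

    Under these made-up numbers, the padded answers gross 30x what the terse ones do, which at least makes the incentive story plausible.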

    • dag100 8 hours ago
      This is basically a violation of the robustness principle ("be liberal in what you accept, be conservative in what you produce"), but I doubt there will be much improvement on this front, since tokens are fed back into the model. A succinct phrase is a compressed form of a longer sentence expressing the same idea, so from the perspective of feeding the model's output back into it, more tokens presumably work better by providing greater surface area for processing, so to speak. This is just my intuition, however.
      • thfuran 4 hours ago
        That principle deserves to be violated. Invalid input is invalid. Rather than everyone everywhere trying to handle it and producing subtly different implicit extensions of whatever standard they’re nominally ingesting, everything should reject it so the producing system is forced to correct itself.
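
        A minimal sketch of the strict-vs-liberal tradeoff, using trailing-comma JSON as a stand-in for "invalid input" (the parser names are made up for illustration):

        ```python
        import json

        def parse_liberal(payload: str) -> dict:
            """Be liberal: quietly "repair" trailing commas before parsing.

            This naive fixup masks the producer's bug (and would mangle
            valid JSON containing ",}" inside a string, which is exactly
            the kind of implicit, subtly-wrong extension being criticized).
            """
            cleaned = payload.replace(",}", "}").replace(",]", "]")
            return json.loads(cleaned)

        def parse_strict(payload: str) -> dict:
            """Be strict: invalid input is invalid -- raise, don't guess."""
            return json.loads(payload)

        malformed = '{"temperature": 0.2,}'  # trailing comma: not valid JSON

        print(parse_liberal(malformed))  # silently accepts the producer's bug
        try:
            parse_strict(malformed)
        except json.JSONDecodeError as e:
            print(f"rejected: {e}")      # surfaces the bug to the producer
        ```

        The strict path is the one that forces the producing system to correct itself; the liberal path lets the bug persist and spread.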
  • simianwords 9 hours ago
    > In fact, AI is increasing the speed, density and complexity of work rather than reducing it, according to an analysis of 164,000 workers’ digital work activity.

    Isn't this obvious? This is exactly what I would expect!

    • bluefirebrand 8 hours ago
      Yup. And because the "speed, density, complexity" is increasing, expect burnout to increase too!
  • ChrisArchitect 4 hours ago
    Related:

    The risk of AI isn't making us lazy, but making "lazy" look productive

    https://news.ycombinator.com/item?id=47555081

  • erelong 2 hours ago
    > Managers Aren't Succeeding in Using AI to Lighten Workloads

    ftfy?