3 points by rmoff 8 hours ago | 1 comment
  • _wire_ 3 hours ago
    The mindset deployed by this article is harrowing.

    The author's premise, delivered as a conclusion via grotesque rhetoric, is that although, by his own reckoning, AI is dangerously unpredictable automation that will wreck the environments you connect it to, leaving you in a position of co-dependent bargaining with it about why it's abusing you, your proper course is to form an even deeper dependence on it because the job market is run by it!

    The author waves away the hazard that this form of computation is fundamentally erroneous in total, and therefore dangerous to apply in any form of automation, with the observation that, by chance, it appears to work for some. Hence the lede: shit is going to be fucked up and you must run into the flames.

    This sort of perverse pandering to disaster is a sea change in engineering: imagine a city built along a river whose engineering policy accepts that half the bridges will fail under load, but that's OK because the users of the half that haven't yet collapsed still get across town.

    Yet this policy not only appears to be a pervasive position in the computer industry, with application of this dangerous tech accelerating, but there's a growing drumbeat of evangelism for this disastrous orientation.

    Due to what I deem a virtual cocaine addiction among the current generation of developers, the art of engineering is being defenestrated.

    But this generation of developers was raised in the PC industry, which thrived by convincing adopters that when a PC failed it was the user's fault. And high degrees of failure are accepted because when PCs go wrong, the systems they control typically just stop.

    But AI runs amok.

    It's another world of risk when the AI goes rogue, disrupting everything it's connected to with apparent volition, especially since it will distort its status in patterns that mimic human lying.