While my initial reaction was dystopian horror that we're losing our humanity, I feel slightly different after sitting with it for a while.
Ask yourself: how effective was all that effort, really? Did any humans actually read and internalize what was written, or did it just rot in the company wiki? Were we actually communicating effectively with our peers, or just spending lots of time trying to? Let's not retcon our way into believing the pre-AI days were golden. So much tribal knowledge has been lost, NOT because no one documented it but because no one bothered to read it. Now at least the AI reads it.
So in a sense, we are being more forgiving of ourselves than anything.
In fact, in my domain, that's almost always the case. LLMs rarely get it right. Getting something done that would take me a day still takes a day with an LLM, only now I don't fully understand what was written, so there's no real value added, just loss.
It sure can be nice for solved problems and boilerplate tho.
It's worth noting that much of the frustration stems from expectations.
I don't expect an AI to learn and "update its weights".
I do, however, expect colleagues to learn at a specific rate. A rate that I believe should meet or exceed my company's standards for, uh, human intelligence.
At least personally, this was obvious to me years before AI was around. Whenever we had clear data that pointed to an obvious conclusion, I found it didn't matter if _I_ stated the conclusion, regardless of whether the data was included. I got a lot more leeway by simply presenting the data that supported my conclusion and letting my boss come to it himself.
In the first situation the conclusion was now _my_ opinion and everyone's feelings got involved. In the second, the magic conch (usually a spreadsheet) said it, so no feelings were triggered.
[citation needed]
This entire article is just the meaningless vibes of one guy who sells AI stuff.
Bruh either had help, or he's the most trite writer ever.