Similar to the tendency of other companies to cram in useless AI buttons for no reason other than to pump the stock. I have one in Outlook, with two disclaimers: "I can help—but I don’t currently have access to your whole inbox, only to the specific email thread you had open" and "AI answers may be incorrect". Or the Notepad one, complete with its own CVE.
It has much more the feeling of a management reorganization fad, like edicts of the form "we're all going to be agile now! Please submit your sprint planning for the next six months by Friday".
On one hand I'm getting direction to use more AI. On the other hand, I incurred a mid 5-figure spend generating tokens in January and a LOT of people started sweating when the bill showed up. :/
Seems like a good way to burn bridges, but maybe I don't care anymore. I am strongly considering a career change. Tech is probably getting gutted anyways, maybe I should become a carpenter or something
I ask SMEs questions, and I get AI-generated responses. The message it sends is this: don't bother me; I can't be bothered, just use AI. People are sending these low-effort Copilot writeups to our customers too - I can't imagine what they think of it.
The reason we pay SMEs is that they can be trusted to provide correct answers we don't have to be skeptical about. The people doing this haven't figured out yet that they're undermining both their own credibility and the perception that they're willing to help out the team.
Just this week, I had several people suggest, "Hey, have you asked Copilot about [esoteric networking thing]?". Indeed, I have, and in the absence of documentation, it gave me 5 convincing theories - none of which actually checked out when I dug into them. It just made wrong shit up.
Most frustrating is the integration with our tenant. I try to ask questions about things I need deeper information on.
Me: Hey Copilot, can you dig me up more information on X thing?
Copilot: Have you considered (waves hand in sweeping motion), everything you yourself have written on the topic?
If I want real answers I turn to the free ChatGPT, or my personally paid Claude subscription. Then I copy-pasta the stuff which is useful, and maintain my own writing style, and present the information in a way which I think works best for who I am talking to.
But sadly, over the years, some unions became very corrupt and others were allowed to be killed off by companies and the US government.
Again, I'm glad I'm at the age I'm at. With that said, I feel really bad for the young. From what I'm seeing, between climate change, living costs, and now AI, the young seem really screwed :(
My generation allowed these oligarchs to take over the US; it's not like no one knew this started happening in the 80s. So here we are.
The way that I guarantee my job treats me well is by being willing to quit whenever it stops working for me. Despite everyone panicking about AI layoffs, I still consistently get messages from recruiters trying to fill AI-related jobs. In your ideal world where the union is supposed to represent me but oppose AI, do those jobs still exist?
That will work until it won't.
But you're deciding to leave power on the table. That's kinda like leaving money on the table. And of course, it's typically the unsophisticated people who do that.
The literature backs this up: not all of the productivity gains from AI are captured by employers. At least some of it is captured by employees, with the split varying by study.
You can call me unsophisticated, but that's like telling a 1970s assembly programmer that they're a moron for ever supporting using a C compiler. Obviously they're working against their job security, right?
Enforce a good diet so we'll finally eat our vegetables, boss. Get out of here. If these are the commonly established boundaries, I'm either gaming it or simply out. Easy choice.
All beside the point, anyway. I'll worry about meeting agreeable expectations in the next place... where I can renegotiate my side of the terms, too. The work doesn't really call for it, I'm already more productive than my enabled peers. Not pressed, options exist (both internal and external). Competitors more to my liking surely exist. I'm entirely fine failing to meet demands that I don't believe can/should be met. Call me fortunate [and perhaps naive] :)
My 'agents' were called 'pipelines' 20 years ago, they serve us well. The... 'real world' logistics need to be considerably shortened before an agent [or more pipelines] might have any meaningful impact. We have all the code/docs/whatever we might need, and a lot of built-in downtime, so I suspect it's a wash. Moving parts or people to datacenters, for instance.
All that's not to say an LLM can't be useful. They could spare us some shoveling, so to speak. Less work, not necessarily further or faster. Easier. There's not a lot of juice to squeeze and I'm not sure one should be willing [without proper consideration/compensation].
I'm in this weird space where I'm working for a company on behalf of my actual company. And the company I'm being farmed out to doesn't care for us, because we're contractors in a sense. This resulted in us being brought on board without any training, knowledge transfer, or guidance, and being told "get to work".
The result is when I ask people for information regarding the organization or code bases they come back to me with "ChatGPT said this", where ChatGPT is an internally hosted AI stack.
It's gotten to the point where I've given up and just have the internally hosted AI writing unit tests for me because their organization doesn't care about us, and as a result I just don't care about them.
The worst part is that the tests seem to be reasonably working. Which is terrifying, because I don't actually know the language or testing framework very well. I'm effectively working in a junior capacity without any training or guidance, and my merge requests are getting pushed through.
It's going to be interesting seeing how these organizations that force AI into the workplace age, because there are no longer experts in the code base. Only slop.