What is the broader context of OP trying to prove a theorem here? There are multiple layers of purpose and intent involved (so he can derive the satisfaction of proving a result, so he can keep publishing and keep his job, so his university department can be competitive, etc), but they all end up pointing at humans.
Computers aren’t going to be spinning in the background proving theorems just because. They will do so because humans intend for them to, in service of their own purposes.
In any discussion about AI surpassing humans in skills/intelligence, the chief concern should be in service of whom.
Tech leaders (i.e. the people controlling the computers on which the AIs run) like to say that this is for the benefit of all humanity, and that the rewards will be evenly distributed. But the rewards aren't evenly distributed today, and the benefits are in the hands of a select few; why should that change at their hands?
If AI is successful to the extent that pundits predict/desire, it will likely be accompanied by an uprising of human workers that will make past uprisings (you know, the ones that banned child labor and gave us paid holidays) look like child's play in comparison.
"Get out an English thesaurus and recreate Mona Lisa in different words."
If you really want to be a cognitive maverick, you should encourage them to make up their own creole, with its own syntax and semantics.
Still, the result is describing the same shared stable bubble of spacetime! But it's a grander feat than merely swapping words with others of the same relative meaning.
You totally missed the point of "put this in your own words" education. It was to make us aware we're just transpiling the same old ideas/semantics into different syntax.
Sure, it provides a nice biochemical bump; but it's not breaking new ground.
It also significantly changes my current job into something I didn't sign up for.
To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.
I didn't want to get into management, because it's boring. Now I've been forced into management and don't even get paid more.
That's certainly not the reason most HNers are giving - I'm seeing far more claims that LLMs are entirely meaningless because either "they cannot make something they haven't seen before" or "half the time they hallucinate". The latter even appears as one of the first replies in this post's link, the X thread!
Or at least my school system tried to (Netherlands).
This didn’t fully come out of the blue. We have been told to expect the unexpected.
It absolutely did. Five years ago people would have told you that white collar jobs were mostly un-automatable, and that software engineering was especially safe due to its complexity.
I'm not complaining to stop this. I'm sure it won't be stopped. I'm explaining why some people who work for a living don't like this technology.
I'm honestly not sure why others do. It pretty much doesn't matter what work you do for a living. If this technology can replace a non-negligible part of the white collar workforce it will have negative consequences for you. You don't have to like that just because you can't stop it.
In the past this tradeoff probably was obvious: a farmer's individual fulfillment is less important than feeding a starving community.
I'm not so sure this tradeoff is obvious now. Will the increased productivity justify the loss of meaning and fulfillment that comes from robbing most jobs of autonomy and dignity? Will we become humans that have literally everything we need except the ability for self-actualization?
Compiled databases and search engines have completely different capabilities than groups of people.
Well that's comforting.
All of these seem to subscribe to "inevitability", and to have no issue with the fact that their research relies on a handful of oligarchs, and that all of their thoughts and attempts are recorded and tracked on centralized servers.
I bet mathematical research hasn't sped up one bit due to "AI".
Whenever you start to prove new results, you get a lot of small lemmas that are probably true, but you need to check them and find good constants that work with them.
Checking can be done by theorem provers and searching by machines. You still need to figure out what you want to prove (which results are more important).
But the rest can get automated away quite quickly.
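To make the point concrete, here is a minimal sketch in Lean 4 of the kind of small, "probably true" lemma a prover can check mechanically (the lemma and its name are illustrative, not from the thread):

```lean
-- A tiny lemma of the sort that piles up during a proof:
-- any natural number is at most itself plus something.
-- Lean verifies this mechanically; here we reuse a core library fact.
theorem small_lemma (n k : Nat) : n ≤ n + k :=
  Nat.le_add_right n k
```

Once lemmas like this are stated, the checking step is fully automatic: the kernel either accepts the proof term or rejects it, which is exactly the part of the work that can be handed to a machine while a human decides which results matter.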