Paying for Pro from any of my current academic budgets is completely out of the realm of reality here -- all budgets tend to have restricted uses, and software payments fit into very few categories. Effectively, I'd have to ask for a brand new grant, hope the grant rules allow for large software payments, and hope I don't encounter an anti-AI reviewer; such a thing would take at least a year.
As a final nail in the coffin, I was recently "denied" all Claude Opus access as part of Microsoft's clampdown on individual (and academic) use of Copilot.
(ChatGPT 5.5 Plus does not seem sufficient for any deeper investigation into new research topics; I've tried.)
Apologies for the rant.
There’s the example of a poor person and a rich person buying boots. The poor person’s boots wear out and have to be replaced, while the rich person’s boots last for many years due to higher-quality craftsmanship. Over the years, the poor person ends up paying more for boots than the rich person.
Of course, if you are really poor, then you have to take expensive shortcuts, but for most people that shouldn’t be the case. Learning to do more with less money isn’t as bad as many people think. It’s also good for the brain to be a bit more creative.
And $200/month may look small from a U.S. perspective, but I looked up some average figures for Eastern European assistant professors. In Poland, for example, assistant professor base pay is around 73% of a professor’s base salary, roughly PLN 6,840/month gross, or about €1,500–1,600/month gross. At that level, a $200/month subscription can be around 10–15% of personal monthly income after accounting for taxes and local conditions.
I also work as a freelancer and sometimes work with professors. In my experience, academic budgets are often much tighter than people expect.
And the situation is better than it used to be; ten years ago it would have been 80%.
I'll take a Euro salary and quality of life over a FIRE-type salary and daily fear of falling into the abyss any day.
Are you honestly saying most families would be able to justify $200 a month for ChatGPT?
https://www.federalreserve.gov/publications/2025-economic-we...
There are several other philosophical arguments against what you propose but I do not wish to go down that route.
> It seems to me that training beginning PhD students to do research [...] has just got harder, since one obvious way to help somebody get started is to give them a problem that looks as though it might be a relatively gentle one. If LLMs are at the point where they can solve “gentle problems”, then that is no longer an option. The lower bound for contributing to mathematics will now be to prove something that LLMs can’t prove, rather than simply to prove something that nobody has proved up to now and that at least somebody finds interesting.
Training must start from the basics, though. Of course, everybody's training in math starts with summing small integers, which calculators have been doing without mistakes for a long time.
The point is perhaps confirmed by another comment further down in the post:
> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders
People pay coders to build stuff that they will use to make money, and I can happily use an AI to deliver faster and keep being hired. I'm not sure there is a similar point with math. Again from the post:
> suppose that a mathematician solved a major problem by having a long exchange with an LLM in which the mathematician played a useful guiding role but the LLM did all the technical work and had the main ideas. Would we regard that as a major achievement of the mathematician? I don’t think we would.
This made me a little sad.
We care about sports with humans.
But one thing seems clear to me. If I try to describe the situation in mathematics presented here, it sounds like there were already precedents or existing pieces of knowledge, but humans had not thought to connect them. AI seems to have helped make that connection.
If AI can connect different fields in this way, then perhaps something even more significant could emerge from it.
That said, I could not understand most of the article. And if using LLMs properly requires this level of background knowledge, I honestly worry about whether I can really use them well.
The human doctors kept ignoring the signals, kept putting it down to 'diet' and 'exercise' (even though she does plenty of both).
A lot of math research is like that. And, like the blog post suggests, problems one gives PhD students are 95% like that.
Most of what I do is just assemble things that other people have already built.
We used to call that "low hanging fruit."
At the time I thought the key missing tool was a natural language search that acted like mathoverflow, where you could explain your problem or ideas as you understood them and get references to relevant literature (possibly outside your experience or vocabulary).
> This reminds me of Antirez's "Don't fall into the anti-AI hype". In a sentence: These foundation models are really good at optimizing these extremely high level, extremely well defined problem spaces (ie multiply matrices faster). In Antirez's case, it's "make Redis faster".
This is a cultural choice. It makes sense that in the mathematics culture we currently have, this is alien. But already, other fields, and many individuals, would disagree and say that the human did have a major achievement here. As long as human-AI collaborations are producing the best results, there is meaningful contribution by the humans, and people that are deeper experts and skilled LLM whisperers should be able to make outsized contributions. The real shoe drops when pure AI beats humans and human-AI collaboration.
And certainly not to send it to a fellow colleague to ask its opinion first.
LLMs are certainly becoming capable of coding, finding vulnerabilities, and solving mathematical problems, but we need to avoid putting their work into production, or in front of other humans, without assessing it by every possible means.
Otherwise tech leads, maintainers, and experts get overwhelmed, and this is how the "AI slop" fatigue begins.
To be clear I’m talking about this step:
> That preprint would have been hard for me to read, as that would have meant carefully reading Rajagopal’s paper first, but I sent it to Nathanson, who forwarded it to Rajagopal, who said he thought it looked correct.
I think this is good advice in general, maybe with an emphasis on public vs. private, friendly contact. Having zero-thought AI slop thrown at you out of the blue is rude. "Could have been a prompt," indeed. But having a friend or colleague ask for a quick glance at something they know you handle well is another story for me.
If I've worked on a subject for a few years and know the particulars in and out, I'd have no trouble skimming something that a friend or colleague sent me. I am sparing those 5-10 minutes for the friend, not for what they sent. And for an expert in a particular domain, often 5 minutes is all it takes for a "lgtm" or "lol no".
Does the author know about CAISc 2026 [0]?
https://github.com/vjeranc/fixed-rtrt
The M3 module was formalized fully, purely from experimental data and a nudge from earlier versions of Codex, in 15-30 minutes in a simple write/compile/fix-first-error loop. I was a bit surprised how fast it picked up the pattern, but given there was a paper from the '70s, it became clear why later.
Maybe if you find AI to be doing stuff you find impressive, the stuff you were doing wasn't that impressive? Worth ruminating on your priors at least.
Graduate? Yes.
For publications and theses, as long as the final results hold and can be replicated and validated, I don’t see why we shouldn’t allow the wholesale use of LLMs.
Sure, sure, sure ... PhD slop. It is fascinating how people who think they are clever suddenly operate in the AI Skynet slop bubble. Copy/paste is not really AI, yet this is the most common mode of operation for AI slop. I noticed this again recently when looking at Grok media slop. There are so many small mistakes if you know a specific topic and look closely. The hallucination through AI slop is wild.