The underlying paper itself is more precise, comparing against LUAR, a 2021 method based on BERT-style embeddings (i.e. a model with 82M parameters, which is 0.2% the size of e.g. the recent open-source Gemma models). I don't fault the authors of the paper at all for this; their method is interesting and more interpretable! But you can check the publication history: their paper was originally uploaded in 2024: https://arxiv.org/abs/2403.08462
A good example of why some folks are bearish on journals.
"AI bad" seems to sell in some circles, and while there are many level-headed criticisms to be made of current AI fads, I don't think this qualifies.
The stock market crashes once in a while. Shit happens. The long-term outlook is unlikely to change nearly as much, unless you think there will be systemic macroeconomic changes.
What applications do you think make the most sense so far?
https://www.nature.com/articles/s41599-025-06340-3/figures/2
Because LLMs have already amortized the man-years of cost of collecting, curating, and training on text corpora?
Example of LLMs doing well in similar tasks: https://arxiv.org/abs/2602.16800