Additionally, I think that because some algorithms are so esoteric, they are not always implemented in the most efficient way for today's computers. It would be really nice for mathematicians to have better software, written by strong software engineers who also understand the maths. I hope to see AI applied here to bring more SoTA tools to mathematicians; I think that would bring much more value than formalization does, to be completely honest.
Unfortunately, the bank doesn't accept spirit-of-science dollars, and neither does the restaurant down the street from me.
Matlab definitely took a big hit in the last decade and is losing to the Python/NumPy stack. Others will follow.
As a former Mathematica user: a good part of the core functionality is great and ahead of open source. The rest, especially a lot of the me-too functionality added over the years, is mediocre at best and beaten by open source. Meanwhile, the ecosystem around it is basically nonexistent thanks to its closed nature, so anything not blessed by Wolfram Research is painful. In open source, say Python, people constantly try to outdo each other in performance, DX, etc., and whatever you need, there is likely one or more libraries for it, which you can inspect for yourself or even extend yourself. With Wolfram, you get what you get, in the form of binary blobs.
I would love to see institutions pooling resources to advance open source scientific computing, so that it finally crosses the threshold of open and better (from the current open and sometimes better).
On top of that, and often competing with the former, professors are constantly exploring spin-offs (heavily subsidized with public grants and staffed with free grad students) to funnel any commercial potential of their research into their own or their buddies' pockets. It's just like in politics, with revolving doors and plush 'speaking engagements' or 'board seats' galore.
Most (all?) of that funding goes to private pockets: researchers work for money, equipment costs money, etc.
> I think it would be good service to use AI tools to bring open source alternatives like sympy and sage and macaulay to par.
> It would be really nice for mathematicians to have better software, written by strong software engineers who also understand the maths.
And my response is that I think this sort of work, which is in the public scientific interest, should be funded by tax money, and the results distributed under libre licenses.
What country are you in, and what percentage of the public purse goes to funding science? In the U.S. it's about 11%, and even at that level I often read articles, linked from this site, about U.S. scientists quitting for private-sector work or other non-scientific fields to get adequate compensation.
>while also paying good scientists with actual dollars that they could spend in restaurants.
See, my admittedly vague understanding of how things are structured tells me this part isn't what is happening.
Looking at https://www.cbpp.org/research/federal-budget/where-do-our-fe..., federal tax revenue used for "science" seems to be <=1%?
Education is another 5% according to that site.
That’s why I’m working on an open source implementation of Mathematica (i.e. a Wolfram Language interpreter):
Maybe I’m just missing something, but it looks like nobody is really using it except for some very specific math research that has grown from within that ecosystem from the beginning.
I think one of the basic problems is that the core language is just not very performant on modern CPUs, so it's not the best tool for real-world applications.
Again, maybe I’m missing something?
(And this one popped in Google as second when I just searched; https://github.com/Mathics3/mathics-core)
Stephen Wolfram on Computation, Hypergraphs, and Fundamental Physics - https://podbay.fm/p/sean-carrolls-mindscape-science-society-... (2hr 40min)
I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
The CAG framing is clever marketing, but the underlying idea is sound: treat the LLM as a natural-language interface to a computational kernel rather than as the computation itself. We've been doing something similar with Python subprocess calls from agent pipelines and it works well. The question is whether Wolfram Language offers enough over Python + SciPy + SymPy to justify the licensing cost and ecosystem lock-in.
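The subprocess pattern mentioned above can be sketched roughly as follows. To keep the sketch runnable anywhere, it uses the Python interpreter itself as a stand-in kernel; in practice you would point it at `WolframKernel`, a SymPy worker, or whatever CAS you trust:

```python
import subprocess
import sys

def compute(expression: str, timeout: int = 30) -> str:
    """Send one expression to an external computational kernel and
    return its printed result. The LLM generates the expression; the
    kernel does the actual arithmetic, so the answer is exact rather
    than hallucinated."""
    result = subprocess.run(
        [sys.executable, "-c", f"print({expression})"],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

print(compute("2**64"))
```

The agent loop then splices `compute(...)` results back into the model's context, which is essentially what the "CAG" label describes.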
Mathematica / Wolfram Language as the basis for this isn't bad (it's arguably late), because it's a highly integrated system with, in theory, a lot of consistency. It should work well.
That said, has it been designed for sandboxing? Sandboxing is a core requirement of this "CAG". Python isn't great for it either, but it's workable thanks to the significant effort put in by many people over the years. Does Wolfram Language have that same level of support? As it's proprietary, it's at a disadvantage: any sandboxing technology would have to be developed by Wolfram Research rather than the community.
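To make the sandboxing point concrete, here is a minimal first-layer sketch in Python (Unix-only, since it relies on the `resource` module): a wall-clock timeout plus an address-space cap on the child process. This is explicitly not a real sandbox; filesystem and network isolation would still need namespaces, seccomp, containers, or similar:

```python
import subprocess
import sys
import resource

def run_limited(code: str, timeout: float = 5.0, mem_mb: int = 512) -> str:
    """Run untrusted code in a child process with a timeout and a
    memory limit. Only a first layer of defense, not full isolation."""
    def limit():
        # Cap the child's address space (soft and hard limits).
        cap = mem_mb * 2**20
        resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True, text=True, timeout=timeout,
        preexec_fn=limit,
    )
    return proc.stdout

print(run_limited("print(sum(range(10**6)))"))
```

The open question in the comment stands: with a proprietary kernel you cannot build or audit this layer yourself at the language level; you can only wrap the whole kernel process the same way.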
The first part covers how LLMs work, and the second is a (pretty compelling) sales pitch for integrating Wolfram Alpha into LLMs.
For example, if it can reduce parts of the problem to a choice among polynomials, it's useful to just "know" instantly which choice has real solutions, instead of polluting its context window with Python syntax, Google results, etc.
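For what it's worth, that instant "which choice has real solutions?" check is a one-liner in open-source tools too. A SymPy sketch (the candidate polynomials are made up for illustration):

```python
from sympy import symbols, real_roots, Poly

x = symbols("x")

# Two candidate polynomials; which has real solutions?
candidates = {
    "x^2 + 1": Poly(x**2 + 1, x),
    "x^2 - 2": Poly(x**2 - 2, x),
}
for name, p in candidates.items():
    roots = real_roots(p)  # returns only the real roots
    print(name, "->", "real roots" if roots else "no real roots")
```

Whether the model should "know" this internally or delegate it to a kernel is exactly the trade-off the thread is debating.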
However, even this advantage is eroded somewhat because the models themselves are getting decent at solving hard integrals.
But for most internet applications (as opposed to "math" stuff) I would think Python is still a better language choice.
Even the documentation search is available:
```bash
/Applications/Wolfram.app/Contents/MacOS/WolframKernel -noprompt -run '
Needs["DocumentationSearch`"];
result = SearchDocumentation["query term"];
Print[Column[Take[result, UpTo[10]]]];
Exit[]'
```
Aside, I hate the fact that I read posts like these and just subconsciously start counting the em-dashes and the "it's not just [thing], it's [other thing]" phrasing. It makes me think it's just more AI.
e.g. https://writings.stephenwolfram.com/2014/07/launching-mathem...
"It's not just X, it's Y" definitely seems to qualify today. It's a stale way to express an idea.
I hadn't revisited that essay since LLMs became a thing, but boy was it prescient:
> By using stale metaphors, similes, and idioms [and LLMs], you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself ... But you are not obliged to go to all this trouble. You can shirk it by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you — even think your thoughts for you, to a certain extent — and at need they will perform the important service of partially concealing your meaning even from yourself.
[0]: https://bioinfo.uib.es/~joemiro/RecEscr/PoliticsandEngLang.p...
Somehow I don't think "trying to make my writing look professional" is very high on the priority list.
Does he speak the same way - pausing for emphasis?
> LLMs don’t—and can’t—do everything. What they do is very impressive—and useful. It’s broad. And in many ways it’s human-like. But it’s not precise. And in the end it’s not about deep computation.
This is a mess. What is the flow here? Two abrupt interruptions ("and can’t", "and useful") followed by stubby sentences. Yucky.
Hence math can always be handled by either a generic LLM or a math fine-tuned LLM, without a weird layer made for humans (the entire Wolfram stack) and its dependencies.
Wolfram Alpha was always an extra translation layer between machine and human. LLMs are a universal translation layer that can also solve problems, verify, etc.
computation-augmented generation, or CAG.
"The key idea of CAG is to inject in real time capabilities from our foundation tool into the stream of content that LLMs generate. In traditional retrieval-augmented generation, or RAG, one is injecting content that has been retrieved from existing documents. CAG is like an infinite extension of RAG, in which an infinite amount of content can be generated on the fly—using computation—to feed to an LLM."
We welcome CAG -- to the list of LLM-related technologies!
A big disappointment as I’m a fan of his technical work.
The linked article isn't about mathematics, technology or human knowledge. It's about marketing. It can only exist in a kind of late-stage capitalism where enshittification is either present or imminent.
And I have to say ... Stephen Wolfram's compulsion to name things after himself, then offer them for sale, reminds me of ... someone else. Someone even more shamelessly self-promoting.
Newton didn't call his baby "Newton-tech", he called it Fluxions. Leibniz called his creation Calculus. It didn't occur to either of them to name their work after themselves. That would have been embarrassing and unseemly. But ... those were different times.
Imagine Jonas Salk naming his creation Salk-tech, then offering it for sale, at a time when 50,000 people were stricken with polio every year. What a missed opportunity! What a sucker! (Salk gave his vaccine away, refusing the very idea of a patent.)
Right now it's hard to tell, but there's more to life than grabbing a brass ring.
There is a difference between cashing in and selling out... but often fame destroys people's scientific working window by shifting their focus to conventional, mundane problems better left to an MBA.
I live in a country where guaranteed health care is part of the constitution. It was a controversial idea at one time, but it proved its worth by reducing costs.
Isaac Newton purchased the only known portrait of the man who accused him of plagiarism, and essentially erased the guy from history books. Newton also traded barbs with Robert Hooke of all people when he found time away from his alleged womanizing. Notably, this still happens in academia daily, as unproductive powerful people have lots of time to formalize and leverage grad student work with credible publishing platforms.
The hapless and the unscrupulous have always existed; the successful simply leverage both groups' predictable behavior. =3
"The Evolution of Cooperation" (Robert Axelrod)
https://ee.stanford.edu/~hellman/Breakthrough/book/pdfs/axel...