Biggest drawback, though, is that it's over-optimized for matrix math: it forces you to think about everything as matrices, even when that's not how your data naturally lies. The first thing they teach about performant Matlab code is that simple for-loops will tank performance. And you feel it pretty quickly; I once saw an image-processing case get a 1000x speedup from Matlab-optimized syntax.
Other issues I've run into are string handling (painful), and OOP generally feeling unnatural. I'd love to see something with the convenient math syntax of Matlab, but the broader ease of use of something like JS.
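For what it's worth, the loop-vs-vectorized gap the parent describes is easy to demonstrate in NumPy too. This is a hedged sketch of a toy image-processing step (thresholding), not the actual case from the comment:

```python
import numpy as np

# Toy image-processing step: threshold an image to a binary mask.
img = np.random.rand(256, 256)

def threshold_loop(a, t):
    # Element-by-element loop: slow in any interpreted array language.
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = 1.0 if a[i, j] > t else 0.0
    return out

def threshold_vec(a, t):
    # One vectorized expression: the whole loop nest disappears.
    return (a > t).astype(a.dtype)

assert np.array_equal(threshold_loop(img, 0.5), threshold_vec(img, 0.5))
```

The two functions compute the same mask; the vectorized one pushes the loop into compiled code, which is where the big speedups come from in Matlab and NumPy alike.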
Author of RunMat (this project) here --
> The first thing they teach about performant Matlab code is that simple for-loops will tank performance.
Yes! Since in RunMat we're building a computation graph and fusing operations into GPU kernels, we built the foundations to extend this to loop fusion.
That should allow RunMat to take loops as written and unwrap the matrix math in the computation graph into single GPU programs -- effectively letting loop-written math run super fast too.
Will share more on this soon as we finish loop fusion, but see `docs/fusion/INTERNAL_NOTE_FLOOPS_VM_OPS.md` in the repo if curious (we're also creating VM ops for math idioms where they're advantageous).
> Would love to see something with the convenient math syntax of Matlab, but with broader ease of use of something like JS.
What does "convenient math syntax of Matlab, but with broader ease of use of something like JS" look like to you? What do you wish you could do with Matlab but can't / it doesn't do well with?
Honest question, Octave is an old project that never gained as much traction as Julia or NumPy, so I'm sure it has problems, and I wouldn't be surprised if you have excellent reasons for starting fresh. I'm just curious to hear what they are, and I suspect you'll save yourself some time fielding the same question over and over if you add a few sentences about it. I did find [1] on the site, and read it, but I'm still not clear on if you considered e.g. adding a JIT to Octave.
We like Octave a lot, but the reason we started fresh is architectural: RunMat is a new runtime written in Rust with a design centered on aggressive fusion and CPU/GPU execution. That’s not a small feature you bolt onto an older interpreter; it changes the core execution model, dataflow, and how you represent/optimize array programs.
Could you add a JIT to Octave? Maybe in theory, but in practice you’d still be fighting the existing stack and end up with a very long, risky rewrite inside a mature codebase. Starting clean let us move fast (first release in August, Fusion landed last month, ~250 built-ins already) and build toward things that depend on the new engine.
This isn't a knock on Octave; it's just a different goal: Octave prioritizes broad compatibility and maturity, while we're prioritizing a modern, high-performance runtime for math workloads.
Unfortunately, mathworks is a quite litigious company. I guess you are aware of mathworks versus AccelerEyes (now makers of ArrayFire) or Comsol.
For our department, we mostly stopped using MATLAB about 7 years ago, migrating to Python, R, or Julia. Julia fits the "executable math" niche quite well for me.
> you can literally write python wrappers of Julia compiled libraries like you would c++ ones
Yes, please. What do I google? Why can't julia compile down to a module easily?
No offense, but once you learn to mentally translate between whiteboard math and numpy... it's really not that hard. And if you were used to Matlab before Mathworks added a JIT, you were doing the same translation to vectorized operations, because loops are dog slow in Matlab (coincidentally, Octave is so much better than Matlab syntax-wise).
And again python has numba and maybe mojo, etc. Because julia refused to fill the gap. I don't understand why there's so much friction between julia and python. You should be able to trivially throw a numpy array at julia and get a result back. I don't think the python side of this is holding things back. At least back in the day there was a very anti-python vibe from julia and the insistence that all the things should be re-implemented in julia (webservers etc) because julia was out to prove it was more than a numerical language. I don't know if that's changed but I doubt it. Holy wars don't build communities well.
The loop fusion idea sounds amazing. Another point of friction which I ran into is that MATLAB uses 1-based offsets instead of 0-based offsets for matrices/arrays, which can make porting code examples from other languages tricky. I wish there was a way to specify the offset base with something like a C #define or compiler directive. Or a way to rewrite code in-place to use the other base, a bit like running Go's gofmt to format code. Apologies if something like this exists and I'm just too out of the loop.
I'd like to point out one last thing, which is that working at the fringe outside of corporate sponsorship causes good ideas to take 10 or 20 years to mature. We all suffer poor tooling because the people that win the internet lottery pull up the ladder behind them.
Julia has OffsetArrays.jl implementing arbitrary-base indexing: https://juliaarrays.github.io/OffsetArrays.jl/stable/
The experience with this has been quite mixed, creating a new surface for bugs to appear. Used well, it can be very convenient for the reasons you state.
julia> A = collect(1:5)
5-element Vector{Int64}:
1
2
3
4
5
julia> B = OffsetArray(A, -1)
5-element OffsetArray(::Vector{Int64}, 0:4) with eltype Int64 with indices 0:4:
1
2
3
4
5
julia> A[1]
1
julia> B[0]
1

I think this is what inspired the creation of Julia -- they wanted a Matlab clone where for loops were fast, because some problems don't fit the matrix mindset.
Earlier in my career, I found that my employers would often not buy Matlab licenses, or would make everyone share even when it was a resource needed daily by everyone. Not having access to the closed-source, proprietary tool hurt my ability to be effective. So I started doing my "whiteboard coding" in Julia and still do.
It also removes many of Matlab's footguns like `[1,2,3] + [4;5;6]`, or also `diag(rand(m,n))` doing two different things depending on whether m or n are 1.
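For readers unfamiliar with the first footgun: adding a row to a column doesn't error in Matlab, it silently broadcasts into an outer sum, and NumPy behaves the same way. A small NumPy sketch of what actually happens:

```python
import numpy as np

row = np.array([[1, 2, 3]])       # shape (1, 3)
col = np.array([[4], [5], [6]])   # shape (3, 1)

# No error: broadcasting silently produces a 3x3 "outer sum",
# which is rarely what you meant if you thought both were plain vectors.
result = row + col
assert result.shape == (3, 3)
assert result[0, 0] == 5          # 1 + 4
```

Julia's choice to make `+` error on mismatched shapes (requiring `.+` for broadcasting) turns this silent surprise into a loud one.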
The most cogent argument for the use of parentheses for array slicing (which derives from Fortran, another language that I love) is that it can be thought of as a lookup table, but in practice it's useful to immediately identify if you are calling a function or slicing an array.
But Julia also introduces new problems, such as JIT warmup (so it's not really suitable for scripting) and is still not considered trustworthy:
This is a huge understatement. At the hedge fund I work at, I learned Julia by porting a heavily optimized Python pipeline. Hundreds of hours had gone into the Python version – it was essentially entirely glue code over C.
In about two weeks of learning Julia, I ported the pipeline and got it 14x faster. This was worth multiple senior FTE salaries. With the same amount of effort, my coworkers – who are much better engineers than I am – had not managed to get any significant part of the pipeline onto Numba.
> And if something is truly performance critical, it should be written or rewritten in C++ anyway.
Part of our interview process is a take-home where we ask candidates to build the fastest version of a pipeline they possibly can. People usually use C++ or Julia. All of the fastest answers are in Julia.
That's surprising to me and piques my interest. What sort of pipeline is this that's faster in Julia than C++? Does Julia automatically use something like SIMD or other array magic that C++ doesn't?
In my view, it's not that Julia itself is faster than Rust - on the contrary, Rust as a language is faster than Julia. However, Julia's prototyping, iteration speed, benchmarking, profiling, and observability are better. In the time it would take me to write the first working Rust version, I could have written it in Julia, profiled it, maybe changed part of the algorithm, and optimised it. Also, Julia makes heavier use of generics than Rust, which often leads to better code specialization.
There are some ways in which Julia produces better machine code than Rust, but they're usually not decisive, and there are more ways in which Rust produces better machine code than Julia. Also, the performance ceiling for Rust is higher, because Rust allows you to do more advanced, low-level optimisations than Julia.
Everything you can do in Julia you can do in C++, but lots of projects that would take a week in C++ can be done in an hour in Julia.
The pipeline was pretty heavily focused on mathematical calculations – something like, given a large set of trading signals, calculate a bunch of stats for those signals. All the best Julia and C++ answers used SIMD.
It would be fun if you could share a similar pipeline problem to your take-home (I know you can't share what's in your interview). I started off in scientific Python in 2003 and like noodling around with new programming languages, and it's great to have challenges like this to work through. I enjoyed the 1BRC problem in 2024.
> I don't think Julia really solves any problems that aren't already solved by Python.
I don't really need proper furniture, the cardboard boxes and books setup I had previously "solved" the same problems, but I feel less worried about random parts of it suddenly buckling, and it is much more ergonomic in practice too.
At least it has those tools and libraries, which cannot be said about Julia.
My experience with this website is that it would be rather pointless to enumerate, because you will then point to some poorly documented, buggy Julia "alternatives" that support a fraction of the features of the Python packages or APIs developed and maintained by well-resourced organizations.
The same thing for tooling - unstable, buggy Julia plugin for VSCode is not the same as having products like PyCharm and official Python plugins made by Microsoft for VS and VSCode.
Now, I will admit that Julia also has some niceties that would be hard to find in Python ecosystem (mainly SciML packages), but it is not enough.
> Have you used the language or merely speculating?
I just saw the logo in Google Images.
But isn't the whole point of this article that Matlab is more readable than Python (i.e. solves the readability problem)? The Matlab and Julia code for the provided example are equivalent[1]: which means Julia has more readable math than Python.
[1]: Technically, the article's code will not work in Julia because Julia gives semantic meaning to commas in brackets, while Matlab does not. It is perfectly valid to use spaces as separators in Matlab, meaning that the following Julia code is also valid Matlab which is equivalent to the Matlab code block provided in the article.
X = [ 1 2 3 ];
Y = [ 1 2 3;
4 5 6;
7 8 9 ];
Z = Y * X';
W = [ Z Z ];

It's wild what people get used to. Rustaceans adapt to excruciating compile times and borrowchecker nonsense, and apparently Pythonistas think it's a great argument in favor of Python that all performance sensitive Python libraries must be rewritten in another language.
In fairness, we Julians have to adapt to a script having a 10 second JIT latency before even starting...
It is, because usually someone already did it for them.
You read the article that compares MATLAB to Python? It's saying MATLAB, although some issues exist, is still relevant because it's math-like. GP points out Julia is also math-like, without those issues.
Our goal is to make a runtime that lets people stay at the math layer as much as possible, and run the math as fast as possible.
The diag behavior is admittedly unfortunate, and it has confused me too; it should really be two different functions (which are roughly inverses of each other, weirdly making diag a sort of involution).
Matlab's functions like to create row vectors (e.g., linspace) in a world where column vectors are more common, so this is a common occurrence.
So `[1,2,3] + [4;5;6]` is a concise syntax for an uncommon operation, but unfortunately it is very similar to a frequent mistake for a much more common operation.
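For comparison, NumPy's `np.diag` has exactly the same dual behavior the parent describes for Matlab's `diag`, so this isn't unique to Matlab. A small sketch:

```python
import numpy as np

M = np.arange(1, 10).reshape(3, 3)
d = np.diag(M)   # 2D input: extracts the diagonal -> array([1, 5, 9])
D = np.diag(d)   # 1D input: builds a diagonal matrix from the vector

assert d.shape == (3,)
assert D.shape == (3, 3)
assert np.array_equal(np.diag(D), d)  # the involution-like round trip
```

The ambiguity bites exactly when the 2D input happens to have a unit dimension, which is the `diag(rand(m,n))` footgun mentioned upthread.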
Julia tells the two operations (vector sum and outer sum) apart very elegantly: one is `S - T` and the other is `S .- T`: the dot here is very idiomatic and consistent with the rest of the syntax.
>> [1 2 3] + 1
ans = [2 3 4]
In this case, the operation `+ 1` is applied to all columns of the array. In exactly the same manner, when you add a (1 x m) row and an (n x 1) column vector, you add the column to each element of the row (or you can view it the other way around). The result is as if you repeated your (n x 1) column m times horizontally, giving you an (n x m) matrix, did the same for the row vertically n times, giving you another (n x m) matrix, and then added these two matrices. So adding a row and a column is essentially a shortcut for adding these two repeated (n x m) matrices (and runs faster than actually creating them). This gives a matrix where each column is the old column plus the row element for that column index. For example:

>> [1 2 3] + [1; 2; 3]
ans = [2 3 4
3 4 5
4 5 6]
A very practical example is, as I mentioned, getting all differences between the elements of a time series by writing `S - S'`. Another example: `(1:6)+(1:6)'` gives you the sums for all possible combinations when rolling two 6-sided dice.

This works not only with addition and subtraction, but with element-wise multiplication and other functions as well. You can do this across arbitrary dimensions, as long as your input matrices' non-unit dimensions do not overlap.
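The dice example above translates directly to NumPy with the same broadcasting idea; a quick sketch:

```python
import numpy as np

faces = np.arange(1, 7)
# (6,1) + (1,6) broadcasts to a 6x6 table of all possible two-dice totals.
sums = faces[:, None] + faces[None, :]

assert sums.shape == (6, 6)
assert sums[0, 0] == 2    # 1 + 1
assert sums[5, 5] == 12   # 6 + 6
```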
Z = np.array([[1,2,3]])
W = Z + Z.T
print(W)
Gives: [[2 3 4]
[3 4 5]
[4 5 6]]
It's called broadcasting [1]. I'm not a fan of MATLAB, but this is an odd criticism.

[1] https://numpy.org/devdocs/user/basics.broadcasting.html#gene...
Z = [1,2,3]
W = Z .+ Z' # note the . before the + that makes this a broadcasted
This has 2 big advantages. Firstly, it means that users get errors when the shapes of things aren't what they expected. A DimensionMismatch error is a lot easier to debug than a silently wrong result. Secondly, it means that Julia can use `exp(M)` to mean the matrix exponential, while the element-wise exponential is `exp.(M)`. This allows a lot of code to work naturally and generically over both arrays and scalars (e.g. exp of a complex number will work correctly if the number is written as a 2x2 matrix).

Companies do not buy matlab to do scientific computing. They buy matlab because it is the only software package in the world where you can get basically everything you ever want to do with software from a single vendor.
I say this as someone who’d be quite happy never seeing Matlab code again: Mathworks puts a lot of effort into support and engineering applications.
https://www.youtube.com/watch?v=kc9HwsxE1OY
I think it seems pretty interesting.
matlab is what it would look like to put the math in an ascii email, just like how python is what it would look like to write pseudocode, and in both cases it is a good thing.
It was a very unpleasant feeling when I graduated from my PhD and realized that most, if not all, of the Matlab scripts I had used for my research would now be useless to me unless I joined a company or national laboratory that paid for licenses with the specific toolboxes I had used.
I'm glad that a significant portion of tools in my current field are in open source languages such as Python and Julia. It widens access to other researchers who can then build upon it.
(And yes, I'm aware of Octave. It does not have the capabilities of Matlab in the areas that I worked in, and was not able to run all of my PhD scripts. I have not tried RunMat yet, but am looking forward to experimenting with it.)
And this is why you should write free software and, as a scientist, develop algorithms that do not rely on the facilities of a specific language or platform. Nothing is more annoying than reading a scientific paper and finding out that 90% of the "implementation" is calling a third party library treated as a blackbox.
Was there a specific reason for that? Or was it simply nobody wrote the code?
https://hg.savannah.gnu.org/hgweb/octave/file/tip/scripts/he...
EDIT: If the original link above isn't working, here's a fairly recent archived version:
https://web.archive.org/web/20250123192851/https://hg.savann...
It's like how open source will never replace Excel, but probably worse, because it spans multiple fields and is way harder to replicate.
Julia's MATLAB-inspired syntax is at least as nice, but the language was designed from the ground up to enable writing high-performance code. I have seen numerous cases where code ported from MATLAB or NumPy to Julia performed well over an order of magnitude faster, while often also becoming more readable at the same time. Julia's array-broadcast facilities, unparalleled in MATLAB, are just one reason for that. The ubiquitous availability of in-place versions of standard library methods (recognizable by a ! sign) is another.
In our group, nobody has been using MATLAB for nearly a decade, and NumPy is well on its way out, too. Julia simply has become so much more productive and pleasant to work with.
https://docs.julialang.org/en/v1/manual/noteworthy-differenc...
import numpy as np
X = np.matrix([1, 2, 3])
Y = np.matrix([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Z = Y * X.T
W = np.hstack([Z, Z])
That way, we can extend our languages. If np.matrix is "too many keystrokes", it can be imported as M, or similar.

X.T is as readable as X', but on top of that, it's also extensible: if we want to add other operations, we can do so. Transpose is also a very limited operation: it only makes sense for vectors and matrices. In much of numerics (quantum physics, deep learning, etc.), we often work with tensors. For example, within matrix notation, I would expect [Z, Z] to create a tensor, not concatenate matrices.
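The tensor-vs-concatenation distinction can be made concrete in NumPy: `np.stack` adds a new axis (a 3D "tensor"), while `np.hstack` concatenates within the existing axes. A minimal sketch:

```python
import numpy as np

Z = np.array([[1], [2], [3]])    # 3x1 column

stacked = np.stack([Z, Z])       # new leading axis: shape (2, 3, 1), a "tensor"
hstacked = np.hstack([Z, Z])     # concatenation: shape (3, 2), a matrix

assert stacked.shape == (2, 3, 1)
assert hstacked.shape == (3, 2)
```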
To make it clear, I agree with the main premise that it is important to make math readable, and thus easy to read and debug. Otherwise, it is one of the worst places for errors (numbers in, numbers out).
When it comes to matrix notation, I prefer PyTorch over NumPy, as it makes it easy to go from math on a whiteboard to executable code (see https://github.com/stared/thinking-in-tensors-writing-in-pyt...).
Also, for rather advanced custom numerics in quantum computing, I used Rust. To my surprise, not only was it fast, but thanks to macros, it was also succinct.
X = [1 2 3]
Y = [1 2 3;
4 5 6;
7 8 9]
Z = Y * X'
W = hcat(Z, Z)

Real advantages of matlab:
* Simulink
* Autocoding straight to embedded
* Reproducible & easily versioned environment
* Single-source dependency easier to get security to sign off on
* Plotting still better than anything else
Big disadvantages of matlab:
* Cost
* Lock-in
* Bad namespaces
* Bad typing
* 1-indexing
* Small package ecosystem
* Low interoperability & support in 3rd party toolchains
I will add to that:
* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.
Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset; as soon as they get a 1d vector they feel the need to artificially make it into a 2d column. (BTW (Y @ X)[:,np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1) but I acknowledge it's not exactly compact.)
They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem very natural to want to choose row or column vectors. In my experience, writing matrix math in numpy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in numpy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.
I despise Matlab, but I don't think this is a valid criticism at all. It simply isn't possible to do serious math with vectors that are ambiguously column vs. row, and this is in fact a constant annoyance with NumPy that one has to solve by checking the docs and/or running test lines on a REPL or in a debugger. The fact that you have developed arcane invocations of "[:,np.newaxis]" and regular .reshape calls I think is a clear indication that the NumPy approach is basically bad in this domain.
You do actually need to make a decision on how to handle 0 or 1-dimensional vectors, and I do not think that NumPy (or PyTorch, or TensorFlow, or any Python lib I've encountered) is particularly consistent about this, unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana, followed by subsequent .reshape calls to avoid these issues. As much as I hated Matlab, this shaping issue was not one I ran into as immediately as I did with NumPy and Python Tensor libs.
EDIT: This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why. And, frankly, if you have gone through proper math texts, they are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly. It's not that you can't figure it out from context, it is that having to figure it out and check seriously damages fluent reading and wastes a huge amount of time and mental resources, and terrible shaping documentation and consistency is a major sore point for almost all popular Python tensor and array libraries.
(There is unhelpful subtext here that I can't possibly have done serious math, but putting that aside...) On the contrary, most actual linear algebra is easier when you have real 1D arrays. Compare an inner product form in Matlab:
x' * A * y
vs numpy: x @ A @ y
OK, that saving of one character isn't life changing, but the point is that you don't need to form row and column vectors first (x[None,:] @ A @ y[:,None] - which BTW would give you a 1x1 matrix rather than the 0D scalar you actually want). You can just shed that extra layer of complexity from your mind (and your formulae). It's actually Matlab where you have to worry more - what if x and y were passed in as row vectors? They probably won't be but it's a non-issue in numpy.

> math texts ... are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly.
That's because they use the blunt tool of matrix multiplication for composing their tensors. If they had an equivalent of the @ operator then there would be no need, as in the above formula. (It does mean that, conversely, numpy needs a special notation for the outer product, whereas if you only ever use matrix multiplication and column vectors then you can do x * y', but I don't think that's a big deal.)
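A quick NumPy check of the shapes being discussed, comparing true 1D arrays against the explicit row/column 2D forms:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
A = np.eye(2)

s = x @ A @ y                      # 1D throughout: a 0-d scalar result
m = x[None, :] @ A @ y[:, None]    # explicit row/column: a 1x1 matrix

assert np.shape(s) == ()
assert m.shape == (1, 1)
assert float(s) == float(m[0, 0]) == 11.0
```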
> This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why.
I don't often use scikit-learn but I tried to look for 1D/2D agreement issues in the source as you suggested. I found a couple, and maybe they weren't representative, but they were for functions that could operate on a single 1D vector or could be passed as a 2D numpy array but, philosophically, with a meaning more like "list of vectors to operate on in parallel" rather than an actual matrix. So if you only care about 1d arrays then you can just pass it in (there's a np.newaxis in the implementation, but you as the user don't need to care). If you do want to take advantage of passing multiple vectors then, yes, you would need to care about whether those are treated column-wise or row-wise but that's no different from having to check the same thing in Matlab.
Notably, this fuss is precisely not because you're doing "real linear algebra" - again, those formulae are (usually) easiest with real 1D arrays. It's when you want to do software-ish things, like vectorising operations as part of a library function, that you might start to worry about axes.
> unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana
You shouldn't have to call .ravel or .flatten if you want a 1D array - you should already have one! Unless you needlessly went to the extra effort of turning it into a 2D row/column vector. (Or unless you want to flatten an actual multidimensional array to 1D, which does happen; but that's the same as doing A(:) in Matlab.)
Writing foo[:, None] vs foo[None, :] is no different from deciding whether to make a column or row vector (respectively) in MATLAB. I will admit it's a bit harder to remember - I can never remember which index is which (but I also couldn't remember without checking back when I used Matlab either). But the numpy notation is just a special case of a more general and flexible indexing system (e.g. it works for higher dimensions too). Plus, as I've said, you should rarely need it in practice.
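For reference, the `None` indexing described above looks like this (a minimal sketch):

```python
import numpy as np

v = np.arange(3)      # true 1D array, shape (3,)

col = v[:, None]      # insert an axis after:  shape (3, 1), a column
row = v[None, :]      # insert an axis before: shape (1, 3), a row

assert col.shape == (3, 1)
assert row.shape == (1, 3)
```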
I used this twenty-something years ago. It worked, but I would not have wanted to use it for anything serious. Admittedly, at the time, C on embedded platforms was a truly awful experience, but the C (and Rust, etc) toolchain situation is massively improved these days.
> Plotting still better than anything else
Is it? IIRC one could fairly easily get a plot displayed on a screen, but if you wanted nice vector output suitable for use in a PDF, the experience was not enjoyable.
and I just straight up installed GNU Octave on the server and called out to it from python, using the exact code the mathematician had devised.
I have gone further and asked AI to port working but somewhat slow numerical scripts to C++ and it's completely effortless and very low risk when you have the original implementation as test.
I’m going to stop you right there. Matlab has 5 issues:
1. The license
2. Most users don’t understand what makes Matlab special and they write for loops over their arrays.
3. The other license
4. The other license
5. The license server
Mathworks seems to have set up licensing to maximize how much revenue they can extract with no thought given to how deeply annoying it is to use.
In my case, trivial uses are as important as high-visibility projects. I can spin up a complete Python installation to do something like log data from some sensors in the lab, while I do something in another lab, and have something going at my desk, and at home. I use hobby projects to learn new skills. I've played with CircuitPython to create little gadgets that my less technically inclined colleagues can work with. I encouraged my kids to learn Python. I write little apps and give them to colleagues. I probably have a dozen Python installations running here and there at any moment.
This isn't a slam on Matlab, since I know it has a loyal following. And I'm unaware of an alternative to Simulink, if that's your bag. And Matlab might be doing the right thing for their business. My impression is that most "engineering software" is geared towards the engineer sitting at a stationary workstation all day, like a CAD operator. And this may be the main way that software is used. Maybe I'm the freak.
thankfully there are fast open source alternatives out there now, hint hint runmat ;)
The OP's argument is weakened a little by their Python, which could be simpler (here X is now a 3x1 column "vector"):
X = np.array([[1, 2, 3]]).T
Y = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Z = Y @ X
W = np.hstack((Z, Z))
which is almost as readable as the Matlab (and let's not forget those semi-colons and ellipses are awkward too).

The OP mentions "Licensing Pain" without citing the high price of a Matlab license: the issues they mention are real (license server not behaving; shared license not available), but the true problem is that in many parts of the world, the money just isn't there. So people use cracked licenses instead.
As a professor, it makes me wildly uncomfortable to see my colleagues inculcate in the next generation of engineers a lifetime dependence on an expensive, closed-source, commercial product. Free alternatives like Octave and RunMat are nice, but niche: in that respect, I wish the RunMat project the best of luck. I very much hope they don't get targeted by MathWorks lawyers.
Also there are high quality free and/or open source alternatives.
GNU Octave https://octave.org and Octave online https://octave-online.net/
Freemat https://freemat.sourceforge.net/ (sadly no ongoing development)
Scilab https://www.scilab.org/ and Scilab online https://cloud.scilab.in/
Among modern alternatives that don't strictly follow MATLAB syntax, Julia has the biggest mindshare now?
GNU Octave, as a superset of the MATLAB language, was (is) most capable of running existing MATLAB code. While Octave implemented some solvers better than MATLAB, it just could not replicate a large enough portion of MATLAB's functionality, so many scientists/engineers were unable to fully commit to it. I wonder whether runmat.org will run up against this same problem.
The other killer app of MATLAB is Simulink, which to my knowledge is not replicated in any other open source ecosystem.
as much as I love the meme in your post, it's the reason I won't be able to share it with work colleagues who use matlab every day
just something to consider
Cheers for the comment!
One of the best things about Matlab was that it had an absolutely enormous library of tools, I could reasonably do everything I wanted, and more importantly, all the notation and convention matched what was used in EE, so I could easily translate whitepapers, and my own and others' calculations into code.
In scientific computing, each discipline has its own preferred way of writing things, so a system identification problem might be stated completely differently by a power engineer, a communications engineer, and a physicist, and deciphering each other's formulas and notations is often just as difficult as understanding the core point of the paper.
That's why a lot of math packages, which were written by academic physicists for other physicists were essentially impossible to use for EEs, and Matlab actually adopting the EE conventions was a godsend, even if it was proprietary.
As a programming language, I didn't like/hate it that much, I guess if I tried to develop an application suite in it like many others did, I'd have had an awful time, but for the simple stuff (in terms of programming) I did with it, it was fine.
Octave is not particularly fast.
RunMat is very fast (orders of magnitude -- see benchmarks).
I had to jump like 3 links and 4 pages down to figure out what runmat actually "is" / "does".
As someone who's done their whole thesis using Octave this looks interesting.
I love Octave, it's one of my favourite languages. And, for reasons I don't understand even myself, I don't like matlab that much (though I admit their documentation is excellent).
How would you "sell" runmat to someone like me?
Coming from Octave, you'll notice significant speedups; you can see some of our benchmarks against it here: https://runmat.org/blog/introducing-runmat
Last month, we put out 250+ built-in functions and Accelerate, which fuses operations and routes between CPU/GPU without any extra code/memory management, i.e. no GPUarray.
We're still fleshing out the plotting functions, but we'll have updates to share around that and a browser version very soon.
I feel like you should be saying Matlab / Octave wherever possible; especially since your target audience is far more likely to be the one that wants a "faster Octave" rather than a "cheaper Matlab".
PS: Don't trust github language stats; half of that code is octave specific, but still gets labelled as Matlab.
RunMat is an interesting idea, but a lot of MATLAB's utility comes from the toolboxes, and unless RunMat supports every single toolbox I need, I'm going to be reaching for that expensive MATLAB license over and over again.
We'll have a really solid Rust-inspired package manager soon, and a single macro to expose a Rust function in the RunMat script's namespace (making it easy to bring any part of the Rust ecosystem to RunMat).
When I've had to write similar code in Python, it's a massive pain to "prove" that my conversion code is correct. Often I've resorted to using MATLAB's trusted functions to generate "truth" data and then feeding that to Python to verify it gets the same results.
Obviously this is more work than just using the premade stuff that comes with the toolbox.
Any MATLAB alternative faces the same trust issue. Until it reaches enough mindshare that people assume it's too popular to have incorrect math (which might not be a good assumption, but it is one people make about MATLAB), it doesn't actually mimic the main benefit of MATLAB, which is that I don't need to check its work.
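A minimal sketch of that verification workflow, with made-up values standing in for the "truth" data (in a real workflow you'd export it from MATLAB, e.g. as a .mat or CSV file, and load it here rather than hard-coding it):

```python
import numpy as np

# Hypothetical example: verify a Python port of a simple 2-tap FIR filter
# against reference output generated by MATLAB's filter(b, 1, x).
x = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, 0.5])

# Values MATLAB's filter(b, 1, x) produces for this input; stand-in for
# data exported from MATLAB in a real verification run.
matlab_truth = np.array([0.5, 1.5, 2.5, 3.5])

# Python reimplementation of the same FIR filter via convolution,
# truncated to the input length to match MATLAB's filter() semantics.
y = np.convolve(x, b)[:len(x)]

assert np.allclose(y, matlab_truth, rtol=1e-12)
```

The same pattern scales to any function: run the trusted implementation once, save its output, and assert the port agrees within a tolerance.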
Z = Y @ X
W = np.c_[Z, Z]

They came to speak at my school and described open source alternatives (Python in particular) as the biggest threat to MATLAB.
I think if they open-sourced the MATLAB runtime and embraced a model similar to Canonical or Red Hat where users paid for support or integrations, they'd make more money. But it's hard to get there from where they are now.
Matlab is successful because of precisely one thing, which nobody has replicated. It offers a complete software environment from one source.
Nowhere else can you get scientific computing, a GUI toolkit, a high level embedded software environment, a HiL/SiL toolkit, a model based simulation environment, a plotting and visualization toolkit and so much more in a single cohesive package. Nobody else has any offering that comes even close.
>The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk.
This is just a lie. Open matlab and you can inspect all the implementation details behind ode45. It is not a black box.
>The Cloud Gap: Modern engineering happens in CI/CD pipelines, Docker containers, and cloud clusters. Integrating a heavy, licensed desktop application into these lightweight, automated workflows is painful.
Another lie. See: https://de.mathworks.com/help/compiler/package-matlab-standa... Mathworks has done everything hard for you already. I do not understand why the author feels the need to authoritatively speak on a subject he absolutely does not understand.
Mathematica does. Arguably Mathematica is even more cohesive because it's not split up into "feature sold separately" packages.
How do I see the .c files / trace how `ode45` will execute on my machine? Can I see the JIT's source code?
--
You're entitled to your view, but there's clearly a difference of opinion here. From the perspective of open vs. closed source -- maybe for you it qualifies as open source, but I can't follow the logic chain, so to me MATLAB is not.
"You cannot see how fft or ode45 are implemented under the hood." is a totally false statement. You absolutely can do exactly that. This is not a matter of opinion. Right click the function and open it, you can view it like any other matlab function.
> From perspective of open / closed source -- maybe for you it qualifies as open source
Matlab is obviously not open source. Who said anything about that? The article claims you cannot audit ode45; that is false, and it seems pretty embarrassing for someone speaking authoritatively about matlab to make such basic claims, which every matlab user can disprove with two clicks. Every single matlab user can view exactly how ode45 is implemented and can audit that function. This is not a matter of opinion; this is a matter of being honest about what matlab offers.
But okay -- as I mentioned, you're entitled to your views!
% Built-in function.
The algorithms written in C and compiled by mex are the "built-in" ones that are not viewable.
% Built-in function.
If the algorithm is implemented as a compiled mex function, then you cannot inspect its details.
EDIT: Specifically, it is extremely hard for me to believe that anyone should be convinced to learn Matlab in 2025 - it seems to be a statistically useless and obviously soon-dying skill. Any logical arguments about what Matlab offers NOW seem to entirely ignore what seems to me to be this obvious practical reality.
> # or rely on broadcasting rules carefully.
> Z = Y @ X.reshape(3, 1)
Why not use X.transpose()?
It's because in Python 1-dimensional arrays are actually a thing, unlike in Matlab. That line of code is a non-example; it is easier to make it work in Python than in Matlab.
To make `Z` a column vector, we would need something like `Z = (Y @ X)[:,np.newaxis]`.
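To answer the `X.transpose()` question concretely, here's a minimal NumPy sketch (with a hypothetical 1-D `X`) showing why transpose doesn't produce a column vector:

```python
import numpy as np

# For a 1-D array, transpose is a no-op -- there is no second axis to swap.
X = np.array([1, 2, 3])
assert X.transpose().shape == (3,)       # still 1-D, nothing changed

# reshape or np.newaxis is what actually produces a (3, 1) column vector
assert X.reshape(3, 1).shape == (3, 1)
assert X[:, np.newaxis].shape == (3, 1)
```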
Although, I'm not sure why the author is using `concatenate` when the more idiomatic function would be `stack`, so the change you suggest works and is pretty clean:
Z = Y @ X
np.stack([Z, Z], axis=1)
# array([[14, 14],
# [32, 32],
# [50, 50]])
with the convention that vectors are shape (3,) instead of (3,1).

I know what all of these do, but I just can’t shake the feeling that I’m constantly fighting with an actual python. Very aptly named.
I also think it’s more to do with the libraries than with the language, which I actually like the syntax of. But numpy tries to be this highly unopinionated tool that can do everything, and ends up annoying to use anywhere. Matplotlib is even worse, possibly the worst API I’ve ever used.
Doesn't just (Y @ X)[None] work? None adding an extra dimension works in practice but I don't know if you're "supposed" to do that
(Y @ X)[None]
# array([[14, 32, 50]])
but `(Y @ X)[None].T` works as you described:

(Y @ X)[None].T
# array([[14],
# [32],
# [50]])
I don't know either RE supposed to or not, though I know np.newaxis is an alias for None.

This seems to work,
Z = Y @ X[:,np.newaxis]
though it is arguably more complicated than calling the `.reshape(3,1)` method.

Cases where a JIT running would conflict with requirements notwithstanding (e.g. HIL with strict requirements and whatnot)...
In fact, the entire safety argument is undermined by the author themselves:
> The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk.
What's the point of optimizing your code to be easy for physicists/mathematicians to read for safety, when you can't even verify what the compiler will produce?
I suppose it basically boils down to whether your org's engineering is run by academics or software engineers, but Matlab doesn't really do anything that Python can't for free. And Python is more accessible, has more use cases, and strong academic support already.
At the time, we had massive issues with using Matlab with large fMRI/EEG/MEG data sets, and attempts to write naive matrix-based versions of code would occasionally blow up memory consumption, and turn a 3-week analysis into a 50-year analysis.
So, yeah, I had to replace a decent amount of pretty matrix code into gnarly, but performant, for loops. Maybe the situation has improved since then, but I don't care to find out.
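A minimal NumPy sketch of the kind of blow-up described above (made-up sizes; real fMRI/EEG data is vastly larger): a fully vectorized pairwise-distance computation materializes an (n, n, d) intermediate, while the loop version only ever holds a single (n, d) temporary at a time:

```python
import numpy as np

n, d = 200, 3                      # toy sizes for illustration
rng = np.random.default_rng(0)
pts = rng.random((n, d))

# Vectorized: broadcasting materializes an (n, n, d) temporary in one shot.
diff = pts[:, None, :] - pts[None, :, :]
dists_vec = np.sqrt((diff ** 2).sum(axis=-1))

# Loop: peak extra memory is one (n, d) temporary per iteration.
dists_loop = np.empty((n, n))
for i in range(n):
    dists_loop[i] = np.sqrt(((pts - pts[i]) ** 2).sum(axis=1))

assert np.allclose(dists_vec, dists_loop)
```

Both give identical answers, but at large n the vectorized version's memory footprint grows quadratically times d, which is exactly how a "pretty" matrix formulation can turn into an out-of-memory crash or swap-thrashing slowdown.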
---
Want strings? You had your choice of cells or 2D char matrices. Who ever thought char matrices were a good idea? strfind() vs findstr()? Even after years of Matlab, I had to double-check the docs to recall which one I wanted.
---
Anything to encourage reliability or assist scientists in their workflows, like built-in version control? Nope. Or basic testing support for your ad hoc statistical functions? No.
I guarantee there's a ton of Matlab code that produced biased/wrong results, and nobody knows because it produced numbers in the expected range, and nobody ever thought to check it.
Mathworks was in a unique position to improve scientific code quality, and did nothing with it.
---
Matlab really excelled at only two things: matrix math and making pretty plots. As soon as you needed to do anything else, it was unbelievably painful, and that's where my personal dislike came from.
In my experience, those arguing for the value of Matlab are mostly 50+ years old, or are in an extremely niche industry using something like e.g. Simulink or other highly-industry-specific tooling, in which case it seems the considerations are irrelevant to something like 99.5% of the modern population.
Matlab will clearly be dead and irrelevant otherwise, in a short amount of time and in almost all domains.
EDIT: And few things indicate an out-of-touch / cookie-cutter or almost-certainly p-hacked neuroscience paper like the use of MATLAB. It is a smell for incompetent legacy research in this domain.
Wild to hear. At the time, almost everybody in the field used it. The then-dominant fMRI package (SPM) and EEG/MEG package (Fieldtrip) were both open-source Matlab. (I think I knew one prof who used BrainVoyager, and that's because he hired a former BV employee as an RA.)
I think the MATLAB JIT compiler is probably difficult to match.
You can do the same thing in other languages but it won't be built in like that.
Plotting in Python is still a pain. Notebooks help, but they still don’t let you add to a plot piece-by-piece; and 3D plots are possible but nowhere near as simple as they are in Matlab.
It gets better and better all the time.
Terrible HPC integration.
Proprietary runtime.
Granted, I've seen Python horrors on university HPC clusters too, but at least there are libraries and clear documentation (e.g. Lightning, Ray, etc) for how to properly manage these things. Good luck finding that with Matlab.
You claimed they asserted that "Universities are wasteful".
Put the goalposts back where they were.
I am saying that because it is much harder to find good documentation on using MATLAB on HPCs, a lot of computations on HPCs that use MATLAB are highly wasteful compared to if they had been written using a language and/or tools that make it much easier to use HPC resources more efficiently. I was NOT in any way saying that "universities are wasteful".
All other things being equal, suppose two research programs are proposing to study roughly the same thing (say, some novel optimization or basic stuff on something like simple neural networks; and let's pretend this is some years ago when people still actually reached for MATLAB neural net tools). If both request significant compute allocations, and I see one is planning on MATLAB and the other on PyTorch Lightning, I know for sure I would want to give the MATLAB users far less funding, or even none at all. They're really going to struggle to properly leverage the CPUs and GPUs available to them, whereas the Lightning people will largely just have this work immediately, and will almost certainly be able to iterate faster and be more likely to find something meaningful.
It's a contrived and unfair example, and in practice the real problem is actually the annoying MATLAB licensing mostly, but also it really is a fact that MATLAB screws up even basic stuff in HPC environments (see e.g. https://docs.alliancecan.ca/wiki/MATLAB#Simultaneous_paralle...).
I want to say, in the least offensive way possible, that I really do hope you never get to review NSF/NIH grants, because of bias like this.
The example you gave is/was also a solved problem in the HPC systems I used. SLURM/PBS, when properly configured, can ensure that local clusters do not spill over and use resources they are not supposed to use.
And yes, clearly, based on what I linked, this MATLAB issue is a solved problem on Canadian SLURM HPC clusters too. It's just that so much more is solved by modern Python libs like Ray, Lightning, etc., that it is really silly to pretend MATLAB users can make as good use of HPC clusters as everyone else, especially today if GPUs are involved.
I'm still going to upvote you here because I think you are engaging honestly and fairly, and we just generally disagree here.