I've used OCaml a bit and found various issues with it:
* Terrible Windows support. With OCaml 5 it's upgraded to "pretty bad".
* The syntax is hard to parse for humans. Often it turns into a word soup, without any helpful punctuation to tell you what things are. It's like reading a book with no paragraphs, capitalisation or punctuation.
* The syntax isn't recoverable. Sometimes you can add a single character and the error message is essentially "syntax error in these 1000 lines".
* ocamlformat is pretty bad. It thinks it is writing prose. It will even put complex `match`es on one line if they fit. Really hurts readability.
* The documentation is super terse. Very few examples.
* OPAM. In theory... I feel like it should be great. But in practice I find it to be incomprehensible, full of surprising behaviours, and also surprisingly buggy. I still can't believe the bug where it can't find `curl` if you're in more than 32 Unix groups.
* Optional type annotation for function signatures throws away a significant benefit of static typing - documentation/understanding and nice error messages.
* Tiny ecosystem. Rust gets flak for its small standard library, but OCaml doesn't even have a built in function to copy files.
* Like all FP languages it has a weird obsession with singly linked lists, which are actually a pretty awful data structure.
It's not all bad though, and I'd definitely take it over C and Python. Definitely wouldn't pick it over Rust though, unless I was really worried about compile times.
I couldn't agree more with the parent commenter about OCaml documentation. Functional programmers appear to love terseness to an almost extreme degree. Things like `first` are abbreviated to `fst`, which is just odd. Especially now that good IntelliSense means there is no real functional (heh) difference between typing `.fi` and pressing Tab, and typing `.fs` and pressing Tab.
The F# documentation is comparatively very spiffy and detailed, with plenty of examples[1][2][3].
[1]: https://learn.microsoft.com/en-gb/dotnet/fsharp/language-ref...
[2]: https://fsharp.github.io/fsharp-core-docs/
[3]: https://fsprojects.github.io/fsharp-cheatsheet/fsharp-cheats...
Tarjan, n = 10
Nyx - Apple M4 Max - 12 performance and 4 efficiency cores
n! * 2^n = 3,715,891,200 signed permutations
score = gops normalized so best language averages 100
time = running time in seconds

language   score   time
C++         100    19.57s
Rust         96    20.40s
F#           95    20.52s
Nim          75    26.04s
Julia        64    30.40s
OCaml        48    41.07s
Haskell      41    47.64s
Chez         39    49.53s
Swift        33    58.46s
Lean          7   278.88s
This had me briefly smitten with F#, till I realized the extent to which rusty .NET bed springs were poking through. Same as the JVM and Clojure, or Erlang and Elixir. The F# JIT compiler is nevertheless pretty amazing.

I nearly settled on OCaml. After AI warned me that proper work-stealing parallelism is a massive, sophisticated project to code properly, the 40 lines of OCaml I wrote that beat the available libraries are my favorite code file in years.
Nevertheless, once one understands lazy evaluation in Haskell, it's hard to use any other language. The log slowdown for conventional use of a functional data structure becomes a linear speedup once one exploits persistence.
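The persistence point is easy to see in any ML-family language; here is a minimal OCaml sketch of the same idea (the Haskell story is analogous). Prepending to an immutable list shares the tail instead of copying it, so keeping every old version around costs O(1) per update.

```ocaml
(* Persistence: "copies" share structure, so old versions stay cheap. *)
let base = [2; 3; 4]
let v1 = 1 :: base        (* no copying: v1's tail *is* base *)
let v2 = 0 :: base        (* another version, also sharing base *)

let () =
  assert (v1 = [1; 2; 3; 4]);
  assert (v2 = [0; 2; 3; 4]);
  assert (List.tl v1 == base)   (* physical equality: the tail is shared *)
```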
I'd really like to hear more about this. From what I've used of F# and OCaml, both languages are around 95% the same.
F# is worse because the type inferencing isn't as good. You need to type annotate in more places. It's a drag, because it feels like a missed opportunity to let the machine do work for you.
Additionally, one of the most pleasant and unique features of OCaml, strong named arguments, doesn't exist in F# (except in methods or whatever). Most programming languages don't have this (or it's hamfisted like in more dynamic languages) so it doesn't seem like a loss, unless you are very cozy with them in OCaml.
(I'm bitter about this one because named arguments give safety and convenience. But when you combine them with currying it's like 4 or 5 design patterns that are elegantly provided for you. You just do it implicitly, without ever having to study a book about using them.)
Finally, F# brings back the NULL problem that OCaml worked so hard to eradicate.
let f a ~x ~y b = a + x + y + b
let g = f ~y:1 (* g is closure with the argument named ~y filled in *)
let h = g 2 (* another closure with the first positional argument filled in *)
let int_15 = h 8 ~x:4 (* = g 2 8 ~x:4 = f ~y:1 2 8 ~x:4 *)
The more complex interaction is rather with type inference and currying, or the interaction between currying and optional arguments.

I couldn't believe this was an actual bug in opam, and I found it: https://github.com/ocaml/opam/issues/5373
I don’t think that’s an opam bug, it’s an issue with musl, and they just happened to build their binaries with it.
> * ocamlformat is pretty bad. It thinks it is writing prose. It will even put complex `match`es on one line if they fit. Really hurts readability.
I suggest configuring ocamlformat to use the janestreet profile for better defaults.
> * Optional type annotation for function signatures throws away a significant benefit of static typing - documentation/understanding and nice error messages.
People should be providing .mli files, but don't. That said, an IDE with type hints helps this enormously. The VS Code plugin for OCaml is the best experience for noobs, hands down.
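For readers unfamiliar with `.mli` files: they are interface files that pin down and document a module's public signature, which restores exactly the documentation and error-message benefits the parent comment misses. A hypothetical example (all names invented):

```ocaml
(* geometry.mli -- only what is listed here is visible to clients,
   and every exported value carries an explicit, documented type. *)
type point = { x : float; y : float }

(** [distance a b] is the Euclidean distance between [a] and [b]. *)
val distance : point -> point -> float
```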
> OPAM
yes
This made me chuckle. I've had that thought before, shouldn't the default be a vector on modern devices? Of course other collection types are available.
But you can have a data structure that is more like vector under the hood while still supporting efficient copy-with-modifications. Clojure vectors, for example.
LinkedIn Lists you say? (Sorry. (But not that sorry.))
Edit meta sidenote: in this modern meme world, it's virtually impossible to know if a typo is a typo or just a meme you haven't heard of yet.
That said, I think this is somewhat unrelated to the idea of making linked lists disappear - Koka is still using linked lists, but optimising their allocation and deallocation, whereas Haskell can convert a linked list to an array and do a different set of optimisations there.
See: https://koka-lang.github.io/koka/doc/book.html#sec-fbip
If what you mean is the ability to think in terms of "first" and "rest", that's just an interface that doesn't have to be backed by a linked list implementation.
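In OCaml, for instance, `Seq.t` is exactly that interface: cons-style first/rest access that any backing store can provide. A small sketch over an array:

```ocaml
(* "first"/"rest" as an interface, backed by an array rather than a
   linked list: Array.to_seq exposes cons-style access lazily. *)
let s = Array.to_seq [| 10; 20; 30 |]

let () =
  match s () with
  | Seq.Cons (first, rest) ->
      assert (first = 10);
      assert (List.of_seq rest = [20; 30])
  | Seq.Nil -> assert false
```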
just to give an idea of how bad: until recently, you could not simply go to ocaml.org and download OCaml for Windows; you had to either download a MinGW build or use WSL
so for many it was just not installable, i.e. for many of us there was no OCaml for Windows until very, very recently
On the other hand, you could get ocaml for Windows from Microsoft ever since 2005.
One of the things people often neglect to mention in their love letters to the language (except for Anil Madhavapeddy) is that it actually feels UNIXy. It feels like home.
> it actually feels UNIXy. It feels like home.
They use single dashes for long options.
This is not home.
Normally the Unix/GNU opposition is irrelevant at this point, but you managed to pick one of the few significant points of difference.
Short options were a compromise to deal with the limits of the input hardware at the time. Double dashes were a workaround for the post-dash option car crash that traditional Unix tooling allows because teletypes were so slow. There is nothing particularly Unixy about any of these options other than the leading hyphen convention.
OCaml using single hyphens is not un-Unixy.
I will turn in my Plan 9 install media and my copy of The Design and Implementation of the 4.3BSD Operating System at the nearest DEC service center.
* Compile time is only ok. On par with C++.
* Async has a surprising number of footguns and ergonomic issues.
* There's no good solution to self-borrowing or partial borrows.
* While using macros is fine, writing them is pretty awful. Fortunately you rarely need to do that. Relatedly it is missing introspection support.
* Sometimes the types and lifetimes get very complex.
But overall I still much prefer it to OCaml. The syntax is much nicer, it's easier to read, the ecosystem and tooling are much better, the documentation is much better, and it actively hates linked lists!
* Crates.io is an unmoderated wasteland of infinite transitive dependency chaos
But my "favourite":
* It's 2025 and allocator-api (or its competitors) is still in nightly only and no sign of stabilizing
I work almost exclusively in Rust, and I like the language but in many respects a) Rust is the language that tokio ate, and b) it has been deluged with people coming from the NodeJS/NPM ecosystem doing Web Scale Development(tm) and its terrible practices around dependency management.
Well, I'm not sure I agree with that. If by "unmoderated" you mean anyone can upload a crate without going through bureaucratic approvals processes then that seems like a good thing to me!
But you did remind me of one other majorly annoying thing about the crate system: the global namespace. It doesn't have the same security issues as Python's global package namespace, but it is really really annoying because you pretty much have to break large projects down into multiple crates for compile time reasons, and then good luck publishing those on crates.io. Plus you end up with a load of crate names like `myproject-foo`, `myproject-bar`, which is very verbose.
Oh and yeah, the Tokio situation is a bit sad.
Also the proliferation of some types of crates like error handling. This should really be in the standard library by now (or at least fairly soon). Any large project ends up with like 5 different error handling libraries just because dependencies made different choices about which to use. It's not a major issue but it is silly.
Overall though, still my favourite language by far.
Thing is, if I sketch something in pseudocode, I should be able to translate it to any mainstream programming language. With Rust I can't just do that; I have to bend the problem to fit the way the language works.
I agree. OCaml is a complex language with very beginner-unfriendly documentation. In fact, I would even say it's unfriendly to engineers (as developers). The OCaml community prefers to see this language as an academic affair and doesn't see the need to attract the masses. E.g. Rust is an example of the opposite. It's a complex language, but it's pushing hard to become mainstream.
Just taking the first example I can find of some auto-formatted OCaml code
https://github.com/janestreet/core/blob/master/command/src/c...
It doesn't look any more like a soup of words than any other language. Not sure what's hard to parse for humans.
This was my problem as well; the object-oriented syntax is just too much. ML, of which Caml is a dialect, has really tight syntax. The "o" in OCaml ruins it, imo.
Considering it only impacts a fairly small subset of the language, could you explain how it supposedly ruins everything?
Funny how tastes differ. I'm glad it has a syntax that eschews all the noise that the blub languages add.
* Windows support has improved to the point where you can just download opam, and it will configure and set up a working compiler and language tools for you[^1]. The compiler team treat Windows as a first-tier target. opam repository maintainers ensure new libraries and library versions added to the opam repository are compiled and tested for Windows compatibility, and authors are encouraged to fix breakage before making a release if it's reasonably straightforward.
* debugger support with gdb (and lldb) is slowly being improved thanks to efforts at Tarides
* opam is relatively stable (I've never found it "buggy and surprising"), but there are aspects (like switches that behave more like python venvs) which don't provide the most modern behaviour. dune package management (which is still in the works) will simplify this considerably, but opam continues to see active development and improvement from release to release.
* the platform team (again) are working on improving documentation with worked recipes and examples for popular uses cases (outside of the usual compiler and code generation cases) with the OCaml Cookbook: https://ocaml.org/cookbook
There are other things I find frustrating or that I work around, or are more misperceptions:
* there isn't a builtin way to copy files because the standard library is deliberately very small (like Rust), but there is a significant ecosystem of packages (this is different to other languages which cram a lot into their standard library). The result is a lot of friction for newcomers who have to install something to get what they need done, but that's valued by more experienced developers who don't want the whole kitchen sink in their binary and all its supply chain issues.[^2]
* the type inference can be a bit of a love/hate thing. Many people find it frustrating because of the way it works, and start annotating everything to short-circuit it. I've personally found it requires a bit of work to understand what it is doing, and when to rely on it, and when not to (essentially not trying to make it do things it simply will never be able to do).[^3]
* most people use singly-linked lists because they work reasonably well for their use cases and don't get in their way. There are other data structures, they work well and have better performance (for where it is needed). The language is pragmatic enough to offer mutable and immutable versions.
* ocamlformat is designed to work out of the box without configuration (but some of its defaults I find annoying and reconfigure)
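On the copy-files point above: the stdlib has no ready-made `copy_file`, but writing one is short. A rough sketch using OCaml 4.14's `In_channel`/`Out_channel` (no error handling or permission copying; `copy_file` is my own name, not a stdlib function):

```ocaml
(* Copy [src] to [dst] in 64 KiB chunks; channels are closed
   automatically by the with_open_bin wrappers. *)
let copy_file src dst =
  In_channel.with_open_bin src (fun ic ->
    Out_channel.with_open_bin dst (fun oc ->
      let buf = Bytes.create 65536 in
      let rec loop () =
        let n = In_channel.input ic buf 0 (Bytes.length buf) in
        if n > 0 then (Out_channel.output oc buf 0 n; loop ())
      in
      loop ()))
```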
Please don't take this as an apology for its shortcomings - any language used in the wild has its frustrations, and more "niche" languages like OCaml have more than a few. But for me it's amazing how much the language has been modernised (effects-based runtime, multicore, etc) without breaking compatibility or adding reams of complexity to the language. Many of these things have taken a long time, but the result is usually much cleaner and better thought out than if they were rushed.
[^1] This in itself is not enough, and still "too slow". It will improve with efforts like relocatable OCaml (enabling binary distribution instead of compiling from source everywhere) and disentangling the build system from Unixisms that require Cygwin.
[^2] I particularly appreciate that the opam repository is actively tested (all new package releases are tested in CI for dependency compatibility and working tests), curated (if it's too small to be a library, it will probably be rejected) and pruned (unmaintained packages are now being archived).
[^3] OCaml sets expectations around its type inference ("no annotations!") very high, but the reality is that it relies on a very tightly designed and internally coherent set of language constructs in order to achieve a high level of type inference / low level of annotation, and these are very different to how type inference works in other languages. For example, I try to avoid reusing the same field name across records in a module, because record types are inferred through a "flat namespace" of field names; this isn't always possible (e.g. generated code), so I find myself compensating by moving things into separate modules (which are relatively cheap and don't pollute the scope as much).
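To illustrate that last footnote with a toy example (names invented): bare field names resolve against the most recently declared record type in scope, and wrapping each record in its own module sidesteps the clash.

```ocaml
(* Field names share a flat namespace per scope: the *last* record type
   declared with a [name] field wins for bare inference. *)
type user = { name : string; age : int }
type file = { name : string; size : int }

(* let get_name r = r.name   -- would infer [file] here, not [user]! *)

(* Scoping each record in its own module disambiguates: *)
module User = struct type t = { name : string; age : int } end
module File = struct type t = { name : string; size : int } end

let user_name (u : User.t) = u.name

let () = assert (user_name { User.name = "ada"; age = 36 } = "ada")
```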
You'd be right.
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike 1"
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike 2"
Talking as someone who wrote OCaml at work for a while, the benefits of functional programming and the type guarantees that its ilk provides cannot be overstated; you only start to reach them, however, once most developers have shifted their way of thinking rather extremely, which is a time cost that the designers of Go did not want new Googlers to pay.
It's as old as C. C and Go are the only two significant languages which end up constantly checking for errors like this.
>have shifted their way of thinking rather extremely
What could I read to shift my way of thinking?
The signals & threads episode about OCaml strongly piqued my interest, and not because I have any JS delusions (they would never, lol).
This, at least for me, brings the act of writing a specific piece of code more inline with how I think about the system as a whole. I spend less energy worrying about the current state of the world and more about composing small, predictable operations on relationships.
As for what you can read, I find it's just best to get going with something like OCaml or F# and write something that can take advantage of that paradigm in a relatively straightforward way, like a compiler or something else with a lot of graph operations. You'll learn pretty quickly what the language wants you to do.
I know there are a great many Polish people in the world, but why does it matter so much in this case? They could have been any nationality, even French!
I would never.
Why must you call me out in this way? How did I wrong you, that I deserve this?
If people start using the term AI, we better be living in I, Robot. Not whatever the hell this is.
Tangential rant. Sorry.
I think Richard Feldman [0] proposed some of the most reasonable theories as to why functional programming isn't the norm. Your language needs to be the platform-exclusive language for a widely used platform, have a killer application for a highly desired application domain, or be backed by a monster war-chest of marketing money to sway opinions.
Since Feldman's talk, Python has grown much faster in popularity in the sense of wide use in the marketplace... but mostly because it's the scripting language of choice for PyTorch and AI-adjacent libraries/tooling/frameworks, which is... a killer application.
I like OCaml. I started evaluating functional programming in it by taking the INRIA course online. I spent the last four and half years working in Haskell. I've built some side projects in Zig. I was a CL stan for many years. I asked this question a lot. We often say, "use the best tool for the job." Often that means, "use the tool that's available."
I think languages like OCaml, Rust, Haskell etc can be "popular," but in the sense that people like to talk about them and want to learn them and be able to use them (at least, as trends come and go). It's different from "popular" as in, "widely adopted."
I would politely disagree. Torch started in Lua, and switched to Python because of its already-soaring popularity. Whatever drove Python's growth predates modern AI frameworks.
As far as I could tell, it had to do with two things. First, Python is notoriously dynamic and extensible, making it possible to implement "sloppy" syntax like advanced slicing or dataframes. But also, those guys had lots of pre-existing C and Fortran code, and Python had one of the easiest extensibility APIs to wrap it as high-level packages. And with IPython, you had a nice REPL with graphing to use all that from, and then of course notebooks happened.
It had boosts from Django... but it never had Rails' level of popularity. You kinda have to be first-to-market and that good to get the "killer app" effect.
It's also been integrated as the scripting language for several popular software packages (Blender comes to mind).
Machine learning and now... "AI"; seems to be a market cornered by Python quite a bit.
It hit the front page of Slashdot, Digg, Reddit, made the rounds on Hacker news, etc... (https://news.ycombinator.com/item?id=86246)
Django was also very popular at the time.
I had already learned Basic, C++, Java, and C#. I wanted to add a dynamic scripting language that was cross-platform under my belt.
A lot of my peers were in the same boat.
Python seemed at the time, to be the only general purpose scripting language that was easy to use on multiple platforms.
I had heard bad things about Perl being write-only, and Ruby being tough to deploy; I also found it hard to read. (Which is a shame, as they are wonderful languages, though Ruby is dog slow. Python is slow too, but Ruby is worse somehow.)
IIRC Google and some other large companies were pushing it as one of their official languages.
Right as Python was rocketing in popularity, Go came out, and I also heard a lot of good things about Clojure (they seemed neck and neck in popularity from my incorrect perspective at the time, lol).
Do you mean the comic was responsible, or the comic explains why Python is popular? It is definitely the ecosystem. As you said, it's general purpose. It is used for numerical computing and visualisation, web apps, GUIs, sysadmin. Even a reasonably popular DVCS is written in Python.
I wasn’t talking chronology of first release, just describing the overlap in hype cycles back then.
The comic was released in 2007, and Python started heading to the moon. Go came out around 2009, and almost instantly got traction.
Let's face it, syntax matters. We saw that with Elixir becoming much more popular than Erlang ever did. We saw it with TypeScript being able to introduce a fairly complex type system into JavaScript, and becoming successful among web devs by adapting to established ecosystem and offering a gradual approach, rather than forcing an entirely new, incompatible paradigm on it. The TypeScript story seems a little improbable in hindsight, but it was able to get there by offering an incremental path and making a lot of compromises (such as config options to allow less strict enforcement) along the way.
Personally, I think a new syntax for OCaml might actually be successful if done right. Sure, there have been multiple attempts (the "revised" syntax, Reason, etc.), but none of them are really willing to modernize in ways that would attract your average programmer. The toolchain also needs work to appeal to non-OCaml programmers.
For example, in Reason, a function is:
let x: (int, int) => int = (a, b) => a + b;
A more JS-like syntax might be:

fn x(a, b int): int {
  a + b
}

Or at least:

let x = fn(a, b int): int {
  a + b
}

https://www.typescriptlang.org/play/?#code/DYUwLgBAHgXBAUBDO...
I've seen JS using lambdas instead of classic functions like that in the wild. And that Reason can be made more JS-idiomatic without changes to the syntax:
let x = (a: int, b: int): int => a + b;

I like Tcl, but it's not really comparable to Lisp. It's definitely a great scripting language and is a lot more flexible than it gets credit for.
It got a lot of things right: event-driven concurrency (and it's got a good threading model), which is often given as a reason to use JS to write servers; sandboxing untrusted code; easy GUIs with Tk; easy cross-platform deployment. Unfortunately it never got traction.
Categorically not true. Tcl got a LOT of traction. Expect was everywhere. Tk kicked ass and took names. AOLserver, anyone? VLSI CAD stuff still uses it today.
It just got pushed out of the way over time. IMO, mostly because VB6 took its niche of easy GUI.
If one can stand a language that is just a little bit older, there is always Standard ML. It is like OCaml, but perfect!
While it's not yet standard, nearly all Standard ML implementations support what has become known as "Successor ML" [0]. A large subset of Successor ML is common to SML/NJ, MLton, Poly/ML, MLKit and other implementations.
That includes record update syntax, binary literals, and more expressive patterns, among other fixes for deficiencies in Standard ML.
For me the two big remaining issues are:
1) There's only limited Unicode support in both the core language and the standard library. This is a big issue for many real-world programs including these days the compilers for which SML is otherwise a wonderful language.
2) The module system is a "shadow language" [1] which mirrors parts of SML but has less expressiveness, since modules cannot be treated as first-class values in the program. Also, if you define infix operators in a module, their fixity isn't exported along with the function type. (A little annoyance that gets me every time I am inclined to write Haskell-style code with lots of operators. Though maybe that's just another hint from the universe that I shouldn't write code like that.) Of course, the fix to that would be a fundamentally different language, not a revised SML.
[0] http://mlton.org/SuccessorML
[1] https://gbracha.blogspot.com/2014/09/a-domain-of-shadows.htm...
I certainly agree that SML isn't really a production language, though.
Ironically probably because it had the "O"bjects in it, "which was the style of the time"... something that has since dropped off the trendiness charts.
I'd like to hear some practical reasons for preferring OCaml over F#. [Hoping I don't get a lot about MS & .NET which are valid concerns but not what I'm curious about.] I want to know more about day to day usage pros/cons.
Meanwhile, OCaml got rid of its global lock, got a really fast-compiling native toolchain with stable and improving editor tooling, and has a cleaner language design with some really powerful features unavailable to F#, like modules/functors, GADTs, effects or preprocessors. It somehow got immutable arrays before F#!
F# still has an edge on some domains due to having unboxed types, SIMD, better Windows support and the CLR's overall performance. But the first two are already in the OxCaml fork and will hopefully get upstreamed in the following years, and the third is improving already, now that the opam package manager supports Windows.
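Since functors come up as a feature F# lacks, here is a toy sketch of what they look like: a module parameterised by another module. (The stdlib's real `Set.Make` is the production version of this idea; the list-based representation below is deliberately simplistic.)

```ocaml
(* A functor: MakeSet builds a set module for any ordered element type. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeSet (E : ORDERED) = struct
  type t = E.t list  (* sorted, deduplicated; toy representation *)
  let empty = []
  let rec add x = function
    | [] -> [ x ]
    | y :: ys as l ->
        let c = E.compare x y in
        if c = 0 then l
        else if c < 0 then x :: l
        else y :: add x ys
  let mem x l = List.exists (fun y -> E.compare x y = 0) l
end

module IntSet = MakeSet (struct type t = int let compare = compare end)

let () =
  let s = IntSet.(add 2 (add 1 (add 2 empty))) in
  assert (IntSet.mem 2 s);
  assert (List.length s = 2)   (* duplicate insert was a no-op *)
```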
If a language makes "unboxed types" a feature, a specific distinction, and has to sell "removing the global lock" as a massive breakthrough rather than table stakes from 1.0, it can't possibly be compared to F# in a favourable light.
Let's not dismiss that their solution to remove the global lock has been both simpler than Python's (better funded?) ongoing work and backwards compatible; and the multicore support came with both a memory model with strong guarantees around data races, and effect handlers which generalize lightweight threads.
I agree that lack of unboxed types is a fatal flaw for several domains (Worse, domains I care about! I usually consider Rust/Go/.NET for those.) But when the comparison with OCaml is favourable to F#, the gap isn't much wider than just unboxed types. F# has an identity crisis between the ML parts and the CLR parts, its C# interop is decaying as the ecosystem evolves, and newer .NET APIs aren't even compatible with F# idioms (here's a .NET 6+ example [1]).
[1]: https://sharplab.io/#v2:DYLgZgzgNAJiDUAfA9gBwKYDsAEBlAnhAC7o...
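For readers who haven't seen OCaml 5's effect handlers, here is a minimal sketch of the mechanism mentioned above: an effect is performed deep inside a computation and handled outside it, with a first-class continuation to resume. Real schedulers built on this are considerably more involved; the `Emit` effect is invented for illustration.

```ocaml
(* OCaml 5 effect handlers: a minimal yield-style example. *)
open Effect
open Effect.Deep

type _ Effect.t += Emit : int -> unit Effect.t

let collected = ref []

let producer () =
  perform (Emit 1);
  perform (Emit 2);
  perform (Emit 3)

let () =
  match_with producer ()
    { retc = (fun () -> ());
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Emit n -> Some (fun (k : (a, _) continuation) ->
            collected := n :: !collected;  (* handle the effect... *)
            continue k ())                 (* ...then resume the producer *)
        | _ -> None) };
  assert (List.rev !collected = [1; 2; 3])
```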
Where's the equivalent to https://hackage-content.haskell.org/package/vector-0.13.2.0/... ?
These are usually defined with the [<Struct>] attribute over a regular type definition, or using the struct keyword before tuple/anonymous record types. Also, the original 'T option type now has a value-type successor 'T voption.
1. Interop with C# is great, but interop for C# clients using an F# library is terrible. C# wants more explicit types, which can be quite hard for the F# authors to write, and downright impossible for C# programmers to figure out. You end up maintaining a C#-shell for your F# program, and sooner or later you find yourself doing “just a tiny feature” in the C# shell to avoid the hassle. Now you have a weird hybrid code base.
2. Dotnet ecosystem is comprehensive; you've got state-of-the-art web app frameworks, ORMs, what have you. But it is all OOP, state abounds, referential equality is the norm. If you want to write OCaml/F#, you don't want to think like that. (And once you've used discriminated unions, C# error-handling seems like it belongs in the 1980s.)
3. The Microsoft toolchain is cool and smooth when it works, very hard to wrangle when it doesn't. Seemingly simple things, like copying static files to output folders, require semi-archaic invocations in an XML file. It's about mindset: if development for you is clicking things in a GUI, Visual Studio is great (until it stubbornly refuses to do something); if you want a more Unix/CLI approach, it can be done, and vscode will sort of help you, but it's awkward.
4. Compile-times used to be great, but are deteriorating for us. (This is both F# and C#.)
5. Perf was never a problem.
6. Light syntax (indentation defines block structure) is very nice until it isn't; then you spend 45 minutes figuring out how to indent record updates. (Incidentally, "nice-until-it-isn't" is a good headline for the whole dotnet ecosystem.)
7. Testing is quite doable with dotnet frameworks, but awkward. Moreover, you'll want something like quickcheck and maybe fuzzing; they exist, but again, awkward.
We’ve been looking at ocaml recently, and I don’t buy the framework/ecosystem argument. On the contrary, all the important stuff is there, and seems sometimes easier to use. Having written some experimental code in Ocaml, I think language ergonomics are better. It sort of makes sense: the Ocaml guys have had 35 years or so to make the language /nice/. I think they succeeded, at least writing feels, somehow, much more natural and much less inhibited than writing F#.
Bigger native ecosystem. C#/.net integration is a double edged sword: a lot of libraries, but the libraries are not written in canonical F#.
A lot of language features F# misses, like effect handlers, modules, GADTs etc.
As for missing language features, they can also be a double-edged sword. I slid down that slippery slope in an earlier venture with Scala. (IIRC mostly implicits and compile times).
# let foo x = x#frob;;
val foo : < frob : 'a; .. > -> 'a = <fun>
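For context, the toplevel snippet above shows OCaml's structural object typing: the inferred type `< frob : 'a; .. >` accepts any object with a `frob` method, no class declarations needed. Restated as a self-contained example:

```ocaml
(* Row polymorphism: foo works on any object exposing a [frob] method,
   whatever else the object contains. *)
let foo x = x#frob

let counter = object
  val mutable n = 0
  method frob = n <- n + 1; n
end

let greeter = object method frob = "hello" end

let () =
  assert (foo counter = 1);        (* frob : int here *)
  assert (foo greeter = "hello")   (* frob : string here *)
```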
F# is often called "OCaml for .NET", but that is a misrepresentation. It is an ML-family language for .NET, but aside from that ML core they don't have much in common.

Whether those features are more valuable to you than the ability to tap into .NET libraries depends largely on what you're doing.
Yeah, it's more just extensions to support their use cases at scale. Think of it more as bleeding-edge OCaml: once they work out kinks/concerns they'll get merged back into the language, OR if something remains ultra-specific it'll stay in OxCaml.
Not a complete own version lol
Python gets forked in other investment banks as well. I wouldn’t say that is evidence of any deficiencies, rather they just want to deal with their own idiosyncrasies.
The main user has been writing extensions to the compiler that they test before pushing for integration like they have done for the past twenty years or so. They publish these versions since last year.
Hardly a failure and certainly not something mandatory to keep things from failing over. Your initial comment is extremely disingenuous.
The latter going much further than most mainstream languages.
There might be some power in attracting all the people who happen to love OCaml, if there are enough competent people to staff your company, but that's more a case of cornering a small niche than picking on technical merits.
Regarding attracting talent, they've said they don't care about existing knowledge of OCaml as the language part is something they train staff on anyway. Their interviews are brutal from what I recall. I could be an OCaml expert and have no chance of making it through an interview without equal talent in high performance fintech and relevant mathematics.
Unless their hiring process has changed in the past few years, if you're a dev they're not hiring you for your financial skills, just general problem solving and software development ability. It is (was?) the usual Google-style algorithms/data structures rigamarole, but somewhat more challenging.
Like I said, my information might be a hair out of date, but it's first-hand.
Everybody in the company is expected to be able to sling numbers in text on a computer super efficiently.
So that does seem to be a good use-case for the language.
I don't build HFT systems and my compilers are just for fun. None of my day jobs has ever presented a situation where OCaml's smaller ecosystem and community were offset by anything OCaml did better than the selected options like .NET, Java, Go, Rails, C or anything else I've touched. Heck, I've written more Zig for an employer than OCaml, and that was for a toy DSL engine that we never ended up using.
Because plenty of people have been shipping great projects in Ocaml since it was released so it doesn’t seem to be much of an issue to many.
I doubt OCaml will be surpassed soon. They just added an effect system to the multicore rewrite, so all things considered, they seem to be pulling even further ahead.
Beginners face the following problems: there's multiple standard libraries, many documents are barely more than type signatures, and data structures aren't printable by default. Experts also face the problem of a very tiny library ecosystem, and tooling that's often a decade behind more mainstream languages (proper gdb support when?). OCaml added multicore support recently, but now there is the whole Eio/Lwt/Async thing.
I used to be a language nerd long ago. Many a fine hour spent on LtU. But ultimately, the ecosystem's size dwarfs the importance of the language itself. I'm sympathetic, since I'm a Common Lisp man, but I don't kid myself either: Common Lisp isn't (e.g.) Rust. I like hacking with a relic of the past, and that's okay too.
> there's multiple standard libraries
Scala has a far more fragmented ecosystem with Cats, Scalaz, Zio and Akka. C++ and Java have a bunch of stdlib extensions like boost, Guava, Apache Commons etc.
> many documents are barely more than type signatures
Can be said of most of Java, Kotlin, Scala, Erlang etc etc. Just compiled javadocs, sometimes with a couple of unhelpful lines.
> data structures aren't printable by default
Neither are they in C++.
I think the real reason it's not popular is that there are languages which solve more or less the same problems of system programming but look far more familiar to an avg. programmer who was raised on C++ and Java.
I wanted to use OCaml since 2002, since it was a GC'd language with good performance, achieving a lot with relatively few lines of code. Being a language nerd, I was (am?) positively inclined toward the language. Yet there was always something that made it notably less pleasant to solve my current problem in than some other language.
If it had trouble getting traction with me, that's bad news.
Meanwhile the mainstream has progressed a lot since 2000. GC is the standard, closures are normal, pattern matching and destructuring are increasingly so. While HM-style type inference is not mainstream, local type inference is (and I'm no longer convinced that global type inference is the way). Algebraic data types aren't unusual anymore.
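For readers who haven't seen the ML originals of those features, here's a minimal sketch of sum types, pattern matching and destructuring in plain OCaml (stdlib only, nothing assumed beyond a recent compiler):

```ocaml
(* A sum type (algebraic data type) and exhaustive pattern matching,
   the features the mainstream has been absorbing since ~2000. *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b
  (* omitting a constructor above would trigger a compiler warning *)

let () = assert (eval (Add (Num 1, Mul (Num 2, Num 3))) = 7)
```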
A few years back I just threw in the towel and went with Rust; this happened after I volunteered to improve OCaml's gdb support (including DWARF hell), which went nowhere. I wish Rust compiled faster, and a GC'd language is usually more productive for my problems, but in every other regard it stole what should have been OCaml's thunder. And when popular successor languages eventually appear, they'll do it even better.
(Rust has refcounting, but it's slow, and needing to handle cycles manually limits its usefulness.)
https://www.ponylang.io/ https://github.com/ponylang/ponyc
The Pony community is here:
https://ponylang.zulipchat.com/
Pony's garbage collection and runtime can yield performance faster than C/C++/Rust because the compiler eliminates data races at compile time, and thus no locks are needed.
It features implementations of Dmitri Vyukov's (www.1024cores.net) algorithms for work stealing and multi-producer single-consumer queues that use only a single atomic instruction to read. The designer, Sylvan Clebsch, is an ex-game-developer and ex-high-frequency-trading-infrastructure engineering lead; he knew what he was doing when he designed the language for high performance on multicore systems.
On the other hand, you do have to re-wire your thinking to think in terms of Actors and their state, and this is kind of hard to wrap one's head around if you are used to Go's CSP or the popular async/await approaches -- it is a somewhat different paradigm.
There is only one standard library, and it ships with the compiler.
> data structures aren't printable by default
So? That's the case with most languages. OCaml has a deriver to make types printable, and a REPL which automatically prints values for testing.
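To make that concrete, here is the printer you'd write by hand in plain OCaml; the deriver mentioned above (ppx_deriving's `[@@deriving show]`, installed separately via opam) generates an equivalent `show_*` function for you:

```ocaml
(* Hand-rolled printer; [@@deriving show] on the type declaration
   would generate an equivalent show_point automatically. *)
type point = { x : int; y : int }

let show_point p = Printf.sprintf "{ x = %d; y = %d }" p.x p.y

let () = assert (show_point { x = 1; y = 2 } = "{ x = 1; y = 2 }")
```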
> tooling that's often a decade behind more mainstream languages
Opam is a fully featured package manager, dune works fine, and Buck2 supports OCaml.
> proper gdb support when?
OCaml has had a rewindable debugger since approximately forever.
> OCaml added multicore support recently, but now there is the whole Eio/Lwt/Async thing.
Lwt was the default, and everyone agrees Eio is the future now that effects are there. Async is a Jane Street thing with pretty much no impact on the language outside of Jane Street.
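"Effects" here means OCaml 5's effect handlers, which is what lets Eio offer direct-style concurrency. A minimal sketch using the stdlib `Effect` module (requires OCaml >= 5.0; `Ask` is a made-up effect purely for illustration):

```ocaml
open Effect
open Effect.Deep

(* A hypothetical effect: the computation asks, the handler answers. *)
type _ Effect.t += Ask : int Effect.t

(* Run f, interpreting every Ask as the constant 42. *)
let with_answer f =
  try_with f ()
    { effc = fun (type b) (eff : b Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (b, _) continuation) -> continue k 42)
        | _ -> None }

let () = assert (with_answer (fun () -> perform Ask + 1) = 43)
```

Schedulers like Eio use this same mechanism to suspend and resume fibers without monadic `>>=` chains.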
Honestly, I maintain my initial point: OCaml's alleged frictions were always widely overstated.
When I was writing OCaml professionally 15 years ago, there was no dune and no opam, and it was already fairly easy to use the language.
What applications are written in OCaml? All I can think of (which says more about me than it does about OCaml) is the original Rust compiler.
Even Haskell has Pandoc and Xmonad.
Generally, you want stuff where you have to build a fairly large core from scratch. Most programs out there don't really fit that too well nowadays. We tend to glue things more than write from nothing.
For example, there is no OAuth2 client library for OCaml [1]
Check out the most popular music today. Like the top ten songs currently. Do you think those are really the best songs out there?
Popularity is mostly driven by either trends or momentum.
Yes! To add to that, the question itself is wrong. We should be asking, how is OCaml able to be so good without being popular? People get the whole thing backward.
The popular languages are typically popular first, then get good later as a result of that popularity. They have to work, they have to be good, they're too big to fail.
This is what happened with Java. It was marketed like crazy at first and only later got refined in terms of tooling and the JVM itself. The R programming language was a mess for data wrangling, but once it was popular, people built things like the tidyverse or the data.table library. The Python ecosystem was a disaster with all the different testing packages, build tools, and ways to create and manage virtual environments, until the relatively recent arrival of uv, more than three decades after the creation of Python itself. And then there's JavaScript, which has had more money, blood, sweat, and tears poured into it to be improved in one way or another, because that's what practically anything running in a browser is using.
Many of the most popular languages are also the most hated. Many of the more niche languages are viewed the most favorably.
It is easy to dislike something you are familiar with, and easy to be overoptimistic about something you don't know as well.
"the grass is always greener ... "
The reason why OCaml is not more popular, then, is that this subset is small. The reason for this may be either (a) habit or (b) it's not that much better than other languages. I'm gravitating toward (b). OCaml guys seem to be quite dogmatic for the wrong reasons.
Also, it didn't employ a marketing team to work on outreach and write fancy comments here, and some people who have used it for 10 minutes are apparently offended by the Pascal-like syntax and can't stop discussing it in every OCaml thread, making every actual user tired.
It has basically all of the stuff about functional programming that makes it easier to reason about your code & get work done - immutability, pattern matching, actors, etc. But without monads or a complicated type system that would give it a higher barrier to entry. And of course it's built on top of the Erlang BEAM runtime, which has a great track record as a foundation for backend systems. It doesn't have static typing, although the type system is a lot stronger than most other dynamic languages like JS or Python, and the language devs are currently adding gradual type checking into the compiler.
"Type Providers" are an example of such negligence btw, it's something from the early 2010's that never got popular even though some of its ideas (Typed SQL that can generate compile-time errors) are getting traction now in other ecosystems (like Rust's SQLx).
My team used SQL providers in an actual production system, combined with Fable (to leverage F# on the front end), and people always commented how our demos had literally 0 bugs; maybe it was too productive for our own good.
I always wanted to learn Elixir but never had a project where it could show its strengths. Good old PHP works perfectly fine.
Also corporations like their devs to be easily replaceable which is easier with more mainstream languages, so it is always hard for "newer" languages to gain traction. That said I am totally rooting for Elixir.
I know of a Haskell shop and everybody said they’d have a hell of a time finding people… but all them nerds were (and are) tripping over themselves to work there because they love Haskell… though some I’ve talked to ended up not liking Haskell in production after working there. There seems to be a similar dynamic, if a bit less extreme, in Elixir shops.
If you use type guards correctly the performance can be surprisingly good. Not C/C++/Rust/Go/Java good, but not too far off either. This is definitely a corner-case though.
But even bog-standard business processes eventually find the need for data-processing, crypto and parsing - the use-cases where people code Elixir NIF's. That is why for example you have projects like html5ever_elixir for parsing HTML/XML. Another use case is crypto - you have NIF's for several crypto libraries. Data processing - there are NIF's for Rust polars.
From a technical perspective, this is the overwhelming majority of what makes the Net happen. BEAM is great for that and many other things, like extremely reliable communication streams.
Use the right tool for the job. Rust sucks for high-level systems automation but that doesn’t make it any less useful than bash. It’s all about about use cases and Elixir fits many common use cases nicely, while providing some nice-to-haves that people often ask for in other common web dev environments.
The current tool to wrap your bytecode with a VM so that it becomes standalone is Burrito[1], but there's some language support[2] (I think only for the arch that your CPU is currently running? contra Golang) and an older project called Distillery[3].
1: https://github.com/burrito-elixir/burrito
Talk about "immutable by default". Talk about "strong typing". Talk about "encapsulating side effects". Talk about "race free programming".
Those are the things that programmers currently care about. A lot of current Rust programmers are people who came there almost exclusively for "strong typing".
-- ghci> example
-- Triangular number 1 is 0
-- Triangular number 2 is 1
-- Triangular number 3 is 3
-- Triangular number 4 is 6
-- Triangular number 5 is 10
example = runEff $ \io -> evalState 0 $ \st -> do
    for_ [1..5] $ \i -> do
      n <- get st
      let msg = "Triangular number " <> show i <> " is " <> show n
      effIO io (putStrLn msg)
      st += i
  where
    st += n = modify st (+ n)
(This is not a trick question. Simon Peyton Jones described Haskell as "the world's finest imperative language" [1], and I agree. This code is written using https://hackage.haskell.org/package/bluefin)

[1] Tackling the Awkward Squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell
In 2025, Elixir is a beautiful system for a niche that infrastructure has already abstracted away.
Do you mean Kubernetes?
My mental model of Erlang and Elixir is programming languages where the qualities of k8s are pushed into the language itself. On the one hand this restricts you to those two languages (or other ports to BEAM), on the other hand it allows you to get the kinds of fall over, scaling, and robustness of k8s at a much more responsive and granular level.
That's like complaining that unsafe{} breaks Rust's safety guarantees. It's true in some sense, but the breakage is in a smaller and more easily tested place.
The throughput loss stems from a design which requires excessive communication. But such a design will always be slow, no matter your execution model. Modern CPUs simply don't cope well if cores need to send data between them. Neither does a GPU.
The grand design of BEAM is that you are copying data rather than passing it by reference. A copy operation severs a data dependency by design. Once the copy is handed somewhere, that part can operate in isolation. And modern computers are far better at copying data around than people think. The exception is big-blocks-of-data(tm), but binaries are read-only in BEAM and thus not copied.
Sure, if you set up a problem which requires a ton of communication, then this model suffers. But so does your GPU if you do the same thing.
As Joe Armstrong said: our webserver is a thousand small webservers, each serving one request.
Virtually none of them have to communicate with each other.
Interactive Elixir (1.19.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> x = 1
1
iex(2)> x = 2
2
iex(3)>
What's immutable about Elixir? It's one of the things which I MISS from Erlang -- immutability.

Data is immutable, and that's much more important than whether local variables can be rebound, imo.
{
const int x = 1;
{
const int x = 2;
}
}
which is to say, there are two different `x` in there, both immutable, one shadowing the other. You can observe this if you capture the one that is shadowed in a closure:

iex(1)> x = 1
1
iex(2)> f = fn () -> x end
#Function<43.113135111/0 in :erl_eval.expr/6>
iex(3)> x = 2
2
iex(4)> f.()
1

I don't understand why this isn't more popular. For most areas, I'd gladly take a garbage collector over manual memory management or explicit borrow checking. I think the GC in D was one of its best features, but its downfall nonetheless, as everyone got spooked by those two letters.
Nobody wants to effectively learn a lisp to configure a build system.
I would love to spend more time but even though Microsoft gives it plenty of support (nowhere near as much as C#), the community is just too small (and seems to have gotten smaller).
Looking at https://www.tiobe.com/tiobe-index/ numbers fall off pretty quickly from the top 5-7.
Guessing this is the same for OCaml, even if the language as such is nice.
Community is an interesting thing, and for some people I guess it is important. For me, a language is just a tool, having coded for quite some time and seen communities come and go; I don't care about being known or setting an example per se. If the tool on balance lets me write faster code with fewer errors, more quickly, and can be given to generic teams (e.g. ex-Python, JS devs) with some in-house training, it's a win. For me personally, I just keep building large-scale interesting systems with F#; it's a tool, and once you get the hang of its quirks (it does have some small ones), quite a good one that hits that sweet spot IMO.
My feeling, however, is that with AI/LLMs, community and syntax in general are in decline and less important, especially for niche languages. Language matters less than the platform, ecosystem, etc. It's easier to learn a language than ever before, for example, and to get help with it. Any zero-cost abstraction can be emulated with more code generation as well, as much as I would hate reviewing it. More important is whether you can read and review the code easily, whether the platform offers the things you need to deliver software to your requirements, and whether people can pick it up.
I don't know if AI can change that but when using python, there is a feeling that there is an awesome quality library for just about anything.
Otherwise, in most mainstream platforms there are enough libraries for most things already, which includes .NET. It's rare not to find a well-maintained lib for the majority of use cases, whether it is .NET, Java, Go, etc., which is why, w.r.t. long-term risk, a used platform is more important than the syntax of a language and its abstractions. Web frameworks, SDKs, DB drivers, etc. are all there and generally well tested, so you won't be stuck if you adopt F#. I evaluate on more objective metrics like performance, platform improvements, compatibility with other software/hardware, etc. It isn't that risky to adopt F# IMO (similar risk to .NET in general) - to me it's just another syntax/tool in my toolbelt with some extra features than usual if I'm developing things typical of that .NET/Java/Go abstraction level.
I'm still surprised it can do so many things so well, so fast.
I've never used it so can't speak from any experience, and unfortunately it doesn't seem particularly active (and doesn't mention a current status anywhere), and doesn't have a license, so shrug. When it's been posted here (https://news.ycombinator.com/item?id=40211891), people seemed pretty excited about it.
I feel a new, simple OCaml-like language that just compiled to Go would become really popular, really fast. And it would not even need to build an ecosystem, as Go already has all the things you need.
Something like what Gleam is for Erlang.
15% are people trying to sell their own language of choice sometimes with the argument that "it’s less scary, look".
I would be shocked if a mere 5% is actual engagement with the topic at hand, sometimes while pointing out flaws which are very real.
From there, I gather two things, the main one being: maybe Meta was right actually. People are that limited and the syntax should have been changed just to be done with the topic.
There's nothing inherently wrong with using Jane Street's stdlibs if you miss the goodies they provide, but be aware the API suffers breaking changes from time to time, and they support fewer targets than regular OCaml. I personally stopped using them, and use a few libraries from dbunzli and c-cube instead to fill the gaps.
{Ecosystem, Functors} - choose 1
F# is not stagnant, thankfully; it gets updates with each new version of .NET (though I haven't checked what is coming with .NET 10), but I don't recall anything on the level of the above OCaml changes in years.
Unfortunately, lots of the more advanced stuff seems to be blocked on the C# team making a decision. They want a smooth interop story.
But F# remains a solid choice for general purpose programming. It's fast, stable and .NET is mainstream.
C# has lots of anti-features that F# does not have.
> Fewer abstractions, and an easy to understand runtime
> Strong static guarantees
> Functional programming constructs. Especially pattern matching and sum types.
> Good performance
> Good documentation
I feel this is also Elm!
Languages have features/constructs. It's better to look at what those are. And far more importantly: how they interact.
Take something like subtyping for instance. What makes this hard to implement is that it interacts with everything else in your language: polymorphism, GADTs, ...
Or take something like garbage collection. Its presence/absence has a large say in everything done in said language. Rust is uniquely not GC'ed, but Go, OCaml and Haskell all are. That by itself creates some interesting behavior. If we hand something to a process and get something back, we don't care whether the thing we handed got changed, if we have a GC. But in Rust, we do. We can avoid allocations and keep references if the process didn't change the thing after all. This permeates all of the language.
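The GC'd half of that contrast, sketched in OCaml (plain stdlib): handing a mutable value to a function just shares a reference, and the caller simply observes any mutation, with no ownership bookkeeping.

```ocaml
(* With a GC there is no ownership question: bump receives the same
   reference the caller holds, and the caller sees the mutation. *)
let bump (r : int ref) = r := !r + 1

let () =
  let r = ref 0 in
  bump r;  (* no move, no borrow; just a shared pointer *)
  assert (!r = 1)
```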
I personally love ML languages and would be happy to keep developing in them, but the ecosystem support can be a bit of a hassle if you aren't willing to invest in writing and maintaining libraries yourself.
OCaml has some high profile use at Jane Street which is a major fintech firm. Haskell is more research oriented. Both are cool, but wouldn't be my choice for most uses.
ML is a family of languages and we have StandardML with different implementations, OCaml with official path and JS path, F# and whatnot.
This is the problem for Lisp, too as there are many Lisps.
OCaml did become popular, but via Rust, which took the best parts of OCaml and made the language more imperative feeling. That's what OCaml was missing!
It has no dogmatic inclination towards functional. It has a very pragmatic approach to mutation.
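A small illustration of that pragmatism: idiomatic OCaml is perfectly happy with refs, mutable arrays and `for` loops alongside the functional core (plain stdlib).

```ocaml
(* Imperative OCaml: a ref accumulator and a for loop. *)
let sum_array a =
  let total = ref 0 in
  for i = 0 to Array.length a - 1 do
    total := !total + a.(i)
  done;
  !total

let () = assert (sum_array [| 1; 2; 3; 4 |] = 10)
```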
The similarities are fairly superficial, actually. It's just that Rust is less behind the PL forefront than people are used to, and has old features which look impressive when you discover them, like variants.
There is little overlap between what you would sanely use Rust for and what you would use OCaml for. It's just that, weirdly, people use Rust for things it's not really suited for.
I'm not saying that Rust feels like OCaml, as some are interpreting; I said Rust is more imperative-feeling, they're not the same. The reason Rust has had success bringing these features to the mainstream where OCaml has not, I believe, is that Rust does not describe itself as a functional language, whereas OCaml does, right up front. Therefore, despite Rust having a reputation for being difficult, new learners are less intimidated by it than by something calling itself "functional". I see it all the time. By the time they learn Rust, they are ready to take on a language like OCaml, because they've already learned some of the best parts of that language via Rust.
Note my comment about their similarities is not at the level of borrow checkers and garbage collectors.
What?
But if you write without the escape hatches in both languages, in my experience the safety is exactly the same and the cost of that safety is lower in TypeScript.
A very common example I've encountered is values in a const array which you want to iterate on and have guarantees about. TypeScript has a great idiom for this:
```
const arr = ['a', 'b'] as const;
type arrType = typeof arr[number];

for (const x of arr) {
  if (x === 'a') {
    // ...
  } else {
    // Type checker knows x === 'b'
  }
}
```
I haven't experienced the same with Ocaml
And second, you're dismissing the fact that TypeScript is unsound; even worse, it is so by design. Easy examples: uninitialized variables holding undefined when their type says they can't [1]; array covariance [2]; and function parameter bivariance, which is part of the TypeScript playground's own example on soundness issues [3], though at least this one can be configured away.
C# and Java made the same mistake of array covariance, but they have the decency of checking for it at runtime.
[1]: https://www.typescriptlang.org/play/?#code/DYUwLgBAHgXBB2BXA... [2]: https://www.typescriptlang.org/play/?#code/MYewdgzgLgBAllApg... [3]: https://www.typescriptlang.org/play/?strictFunctionTypes=fal...
As for your example, I agree that TypeScript unions and singleton types are powerful, but I can't see what you're specifically missing here that pattern matching and maybe an additional variant type don't get you in OCaml. You get exhaustiveness checking and can narrow the value down to the payload of the variant.
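For comparison, a rough OCaml counterpart to the `as const` example above; the `letter` variant is hypothetical, but it shows the exhaustiveness checking and narrowing in question:

```ocaml
(* A closed variant standing in for TypeScript's 'a' | 'b' union. *)
type letter = A | B

let describe = function
  | A -> "it is a"
  | B -> "it is b"  (* deleting a case yields a compiler warning, not silence *)

let () =
  assert (List.map describe [ A; B ] = [ "it is a"; "it is b" ])
```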
> I can't see what you're specifically missing here that pattern matching and maybe an additional variant type don't get you in OCaml. You get exhaustiveness checking and can narrow the value down to the payload of the variant.
A couple of things: being constrained to pattern matching in places where imperative programming would be more natural, and having to declare lots of special-purpose types explicitly vs. being able to rely on ad hoc duck typing.
sounds superficially similar to Common Lisp