This resonates with me. That's the kind of drive that results in great output. Buying it just for that.
I've been approached by publishers several times throughout my career. Each time the process was similar: they had an idea, I had an idea, we tried to come to common ground, and then the deal fell through because we couldn't find any. E.g. I didn't want to write a Java book aimed at 14 year olds. They didn't want me to write about classloaders (or whatever niche subject I was diving into at the time).
Would love to learn how people find (non-empty) intersections of their passions & what readers want.
Two of my five did have publishers. I’m grateful for that experience. Learned a lot!
Teaching is the best way to learn. I found that out when I started tutoring classmates for math in high school.
Same thing with writing a book. Something about learning a subject and turning around to speak/write out about it that really crystallizes the in-depth understanding beneath the surface.
Your average person "knows" how a toilet works; water is pumped in to fill the cistern, released into the bowl when you pull the plunger, and flushes out the drain. Ask them to explain in detail how that happens, and most realize that they don't actually know how the cistern doesn't just keep filling until it overflows, or how it's not constantly leaking water into the bowl, or how the bowl can be flushed while neither overflowing nor draining completely.
I was ready to self-publish but found a publisher who was interested. I had to make some changes to make it more readable, but you might have luck approaching publishers yourself.
To be clear, I aimed to avoid prescribing specific routines through most of the book. I wanted to provide a knowledge foundation for readers to evaluate routines or create their own. So instead of saying, e.g., "you should campus board", I try to explain that power has to be trained separately from max strength if you care about increasing power.
https://nerocam.com/DrFun/Dave/Dr-Fun/df200002/df20000210.jp...
And, depending on the book, yes the distributor the publisher has can be very helpful for sales. It's nice to be able to grab your book off the shelf at Barnes and Noble (and, does lend a bit more credibility to your work).
All that said, if you're writing for purely economic reasons (which I would caution against regardless), you're probably going to make roughly the same if you self-publish for a small audience vs go with a traditional publisher for a larger audience, and if you can get a larger audience self-publishing then there will be no comparison.
If I want to write for just myself, I can just journal or blog. A book is a significant undertaking, writing one which no one reads would just be depressing.
the gist of the idea: >
yeah idk but something like: curation is done by a committee (to try and maintain a minimum of quality overall)
(that’d be the self-serve part i guess)
tbh hardest is still marketing. good books are not only text but also covers and the like
This is very much what pragprog.com is meant to be. I'm only on the volunteer curation committee so have less insight into the feedback cycle for authors post-acceptance, but every author who's published on the platform I've talked to has been pretty positive about the experience.
The OP didn't go into nearly as many (indeed, any) details as to why their second publishing attempt with them in particular did not work out; I'd be curious to learn more.
The hardest part is still networking into others' lives to distribute the message.
You just re-invented our economy
Also, piracy is rampant and Amazon doesn't do much about it. Publishers have more resources to stay on top of it, I suspect.
1. marketing and reach
2. financial risk on paper copies
3. Production services (e.g. editing and artwork)
If you don’t need those or can get them some other way (e.g. hire an independent editor), then you are better off self-publishing and not giving the publisher a cut.
In theory, yes, but they have less expertise than you might imagine. For technical writing, keep in mind that editors at publishing companies aren't actually tech people. They may have been at one point, years ago, but they don't really know what matters to programmers today in the way that an active working engineer does.
> 2. financial risk on paper copies
That was much more of a thing before print-on-demand. You don't have to take the risk of a several thousand copy offset press run anymore.
There is maybe an argument that offset printing is higher quality, but I have textbooks from major academic publishers whose print quality is clearly worse than the POD stuff I get from Amazon for my book.
> 3. Production services (e.g. editing and artwork)
This is absolutely critical, agreed. Though they are often contracting out for this and if you're comfortable finding and vetting freelancers yourself, then they don't add a ton of value.
The BEAM really feels like alien tech left behind by a highly advanced civilization, and this book dropped with such great timing! Bought it right away; kudos to Dr. Erik Stenman for keeping at it after two cancellations!
Example: WhatsApp.
Why Elixir + Erlang are not more popular for high concurrency projects is a mystery to me.
I work at an Erlang shop.
For Erlang to be useful you need to have massive scale, millions of DAU. Yes, Elixir might run your website serving thousands of DAU, but Erlang and the BEAM were not invented for you. Few companies have the scale that makes Erlang a reasonable choice.
More pressing these days, I believe, is that the Erlang ecosystem is all or nothing. The BEAM is like its own little operating system and has an extremely (IMHO) complicated setup and configuration with many moving parts: ERTS, epmd, Rebar, ... You don't need, and shouldn't use, container tech and k8s with Erlang, because it's already doing that stuff in its own Erlang way. But the Erlang way is extremely unique and unfamiliar to most.
When you have the right use case for Erlang and you see it in action it is truly like black magic, it's been a highlight of my career to work with it.
We'll run prod on one server and dev on 3-4 workstations and nothing will match between any of them without a docker container to give this Elixir app a cleanroom environment to work from.
The project we were trying this on eventually ran out of funding before it got off the ground, and we lost access to our main guy who understood Elixir setup really well, so nowadays the rest of us can't even manage to stand up the service to demo it because of all of the moving parts that get angry about not having their environment set up just right.
I've basically found it the only language more difficult than python to set up an environment for. (Well.. the more I think about it, Gradle and every other mobile development stack I have yet seen is literally Satan's armpit..)
With python (though I rarely code in that either) I can stand up almost anything on almost any machine with miniconda and venv and not have to drag Docker into things.
Node/NPM is a prima donna and a PITA but IME as long as you whack it with a wrench long enough following up on each of its complaints then you'll eventually get things running.
My experience still revolves around PHP or Perl or C on the backend, raw JavaScript or sometimes Elm on the front end, and those all seem to be a lot easier to maintain over a timescale of decades as servers update and browsers gobble up new and different features.
---
What I can say in favor of Elixir Liveview is that we built a smooth as hell SPA using that. It was incredibly user friendly to work with and aesthetically amazing, but the maintenance right down at the foundation was the biggest challenge.
For the dev experience, I'd also recommend NextLS/Lexical over Elixir LS until the official one is out. It should compile much faster for nontrivial Elixir applications.
https://elixir-lang.org/blog/2024/08/15/welcome-elixir-langu...
One of your sibling commenters noted...
> I've basically found it [Elixir] the only language more difficult than python to set up an environment for.
Quite the difference.
I'm not saying either one of you is wrong; I'm sure you both experienced what you say you did, it's just interesting to see the dichotomy, literally (as of this writing) next to each other.
This is a reality that I wish more people would embrace, for so many things.
x = doubler(x)
Instead of treating the function like a "subroutine"? Like:
x = 10
doubler(x)
print(x)
I'm unsure why, but I use the former. Probably due to most sources saying global variables are bad and if I do it the latter way I get errors, whereas with the former I only get errors unrelated to scope.
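For what it's worth, those scope errors in Python come from how assignment works inside a function: rebinding a parameter only changes the local name, never the caller's variable. A minimal sketch, using a hypothetical `doubler` matching the snippet above:

```python
def doubler(n):
    # Rebinding the parameter only changes the local name;
    # the caller's variable is untouched.
    n = n * 2
    return n

x = 10
doubler(x)       # return value discarded: the "subroutine" style silently does nothing
print(x)         # -> 10, not 20

x = doubler(x)   # capture the result instead
print(x)         # -> 20
```

So "the former" style works in Python precisely because the function hands its result back instead of trying to reach into the caller's scope.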
I'm not a professional or even amateur programmer; that's why I ask. My only exposure to Erlang, unfortunately, is from coding contests like Advent of Code and the Code Golf Stack Exchange, and those are "clever", and my brain doesn't abide clever. The inverse of "why use many words when few words will do".
In my own programs, if I go back and read the code I can tell what I copy-pasted vs what I understood as I wrote it, if that makes sense. Because clever stuff that solves the problem I've been unable to solve in the time I allotted will be in my code. If you see anything pretending to be, or being, a lambda in my code, I lifted it.
I would like to learn Erlang and there's been a few book recommendations aside from TFA. This comment also serves as my reminder to get some books.
that's probably why!!
but don't forget, there are core routines that must be used to set certain data that are of the latter form, so at some level you have no choice.
d = {'foo': 1}
d.pop('foo') # <== mutating function call
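A classic Python illustration of "some core routines are inherently of the latter form": `list.sort()` mutates in place and returns `None`, while the built-in `sorted()` returns a new list and leaves the original alone.

```python
nums = [3, 1, 2]
result = nums.sort()    # mutating call: reorders nums in place
print(nums)             # -> [1, 2, 3]
print(result)           # -> None, a common gotcha

nums = [3, 1, 2]
print(sorted(nums))     # -> [1, 2, 3], a fresh list
print(nums)             # -> [3, 1, 2], original untouched
```

Writing `nums = nums.sort()` by habit of the functional style is a frequent beginner bug, since it silently replaces the list with `None`.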
Other languages (Java, C#, Go) are supported by massive corporate backers who have a vested interest in seeing them succeed. Erlang's own corporate owner tried to strangle it in the crib, and even since then they've been standoffish towards anything apart from technical resources.
We didn't really see much marketing-like material arise until things like Elixir came about, and even that is more following the Ruby model, which is very dev-oriented. And the world into which Elixir emerged in 2014 is very different from the world Rails sprang into in 2004.
Devs can usually be convinced of how good BEAMlangs are, but without the gloss you have a harder time convincing the MBA set.
Erlang and Elixir are nice because concurrency isn't tacked onto the language like those other languages.
1) Because the amount of "high concurrency" you can handle with a single machine with "standard" languages keeps moving up every year.
2) Because for the longest time you had a single, nigh-undocumented implementation of BEAM that was nearly impossible to port.
3) Because "Erlang" isn't the magic. OTP is the magic. And that requires that you shift your mindset dramatically. And you can do the OTP abstractions in other languages!
4) I think Scala sucked up a lot of Erlang's oxygen for a long time. Elixir is helping with that.
I think marketing is a way better explanation to be honest. Jose lives in Poland, was very active and a lot of Ruby shops moved to Elixir.
Kubernetes does a lot of the things the BEAM does, but better and completely language-neutral. Then you have the built-in queues and DB in Erlang, but they are inferior to the industry standards and, again, only work with Erlang.
While Erlang clearly lost the popularity war, using it (well, Elixir in my case) feels like going to an alternate timeline where it wasn’t a given that a good application used separately developed tools for frontend, backend, database, cache, service discovery, load balancing, containerization, orchestration, and so on. We really could have been on a platform that incorporated all of those things into a single runtime.
The way I explain Erlang to folks is: “it’s not a competitor to Python running Django in Gunicorn; it’s a competitor for Python+Django+Gunicorn in Docker on Kubernetes on Linux, talking via a service mesh and storing data in disk-persisted Redis.” In many ways, swapping out only the top of that stack with Erlang is going to leave you worse off than you were before.
Sadly, the Erlang renaissance was years late to the boom of containerization+orchestration+distributed systems, so we’ll never know how good (or bad) things could have been. Ah well, c’est la vie.
"But what if we hire an engineer too stupid to learn a new language in a few weeks???" and on and on and on.
Elixir is also getting some tooling love in the form of a new language server and some very much appreciated type checking.
You have to import the otp package in Gleam to use the famous actor architecture.
Thank you for writing this book! I really wanted this a few years ago as I was debugging production Elixir, but existing learning sources were pretty dense and dry (or too simple and shallow).
I do this because it helps those unfamiliar with the system more intuitively understand relative sizes, those unfamiliar with converting between units in their head to understand numbers, and even for those familiar, it can avoid confusion or misattributed units.
Also, the great thing is you might not even have an explicit `receive` statement in your gen_server code. You might just be using a library that has a `receive` somewhere that is unsafe with a large message queue, and now you are burned. The BEAM also added an alternative to the main message queue of a process, which should be a lot safer, but I think a lot of libraries still do not use it. This alternative is 'alias' (https://www.erlang.org/doc/system/ref_man_processes.html#pro...), which does something slightly different from what I thought: it protects the queue from 'lost' messages. Without aliases, 'timeouts' can end up polluting the process message queue with messages that are no longer being waited on. This can lead to the same problem of large message queues causing the throughput of a process to drop. However, usually long-lived processes will have a loop that handles messages in the queue.
Then what if the OS/thread hangs? Or maybe a hardware issue even. Seems a bit weird to have critical path be blocked by a single mutex. That's a recipe for problems or am I missing something?
What's real trouble is when the hardware fault is like one of the 16 nic queues stopped, so most connections work, but not all (depends on the hash of the 4-tuple) or some bit in the ram failed and now you're hitting thousands of ECC correctable errors per second and your effective cpu capacity is down to 10% ... the system is now too slow to work properly, but manages to stay connected to dist and still attracts traffic it can't reasonably serve.
But OS/thread hangs are avoidable in my experience. If you run your BEAM system with very few OS processes, there's no reason for the OS to cause trouble.
But on the topic of a 15ms pause... it's likely that that pause is causally related to cascading pauses; it might be the beginning or the end or the middle. But when one thing slows down, others do too, and some processes can't recover when the backlog gets over a critical threshold, which is kind of unknowable without experiencing it. WhatsApp had a couple of hacks to deal with this. A) Our gen_server aggregation framework used our hacky version of priority messages to let the worker determine the age of requests and drop them if they're too old. B) We had a hack to drop all messages in a process's mailbox through the introspection facilities, and sometimes we automated that with cron... Very few processes can work through a mailbox with 1 million messages; dropping them all gets to recovery faster. C) We tweaked garbage collection to run less often when the mailbox was very large --- I think this is addressed by off-heap mailboxes now, but when GC looks through the mailbox every so many iterations and the mailbox is very large, it can drive an unrecoverable cycle, as eventually GC time limits throughput below accumulation and you'll never catch up. D) We added process stats so we could see accumulation and drain rates and estimate time to drain (or whether the process won't drain), and built monitoring around that.
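The age-based dropping in (A) translates outside Erlang too. Here's a rough Python sketch of the idea, with illustrative names and a made-up 500 ms budget (not WhatsApp's actual code): a worker drains its backlog but discards any request older than the deadline after which the client has given up anyway.

```python
import queue
import time

REQUEST_BUDGET = 0.5  # seconds; a made-up deadline after which a reply is useless

def drain(inbox):
    """Work through a backlog, skipping requests the client gave up on long ago."""
    served = dropped = 0
    while True:
        try:
            enqueued_at, request = inbox.get_nowait()
        except queue.Empty:
            return served, dropped
        if time.monotonic() - enqueued_at > REQUEST_BUDGET:
            dropped += 1   # too old: answering now helps nobody
        else:
            served += 1    # fresh enough to be worth the work

inbox = queue.Queue()
inbox.put((time.monotonic() - 2.0, "stale auth request"))  # simulated backlog
inbox.put((time.monotonic(), "fresh auth request"))
print(drain(inbox))  # -> (1, 1)
```

The point is the same as in the comment above: dropping stale work is cheaper than faithfully processing a backlog that nobody is waiting on anymore.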
What happens to the messages? Do they get processed at a slower rate or on a subsystem that works in the background without having more messages being constantly added? Or do you just nuke them out of orbit and not care? That doesn't seem like a good idea to me since loss of information. Would love to know more about this!
Mostly this is happening in the context of request/response. If you're a client and connect to the frontend, you send an auth blob, and the frontend sends it to the auth daemon to check it out. If the auth daemon can't respond to the frontend in a reasonable time, the frontend will drop the client, so there's no point in the auth daemon looking at old messages. If it's developed a backlog so high it can't work it back down, we failed and clients are having trouble connecting, but the fastest path to recovery is dropping all the current requests in progress and starting fresh.
In some scenarios, even if the process knew it was backlogged and wanted to just accept messages one at a time and drop them, that's not fast enough to catch up to the backlog. The longer you're in unrecoverable backlog, the worse the backlog gets, because in addition to the regular load from clients waking up, you've also got all those clients that tried and failed going to retry. If the outage is long enough, you do get a bit of a drop-off, because clients that can't connect don't send messages that require waking up other clients, but that effect isn't so big when you've only got a large backlog on a few shards.
In many cases it's not a big problem if some traffic is wasted, compared to desperately trying to process exactly all of it in the correct order, which at times might degrade service for every user or bring the system down entirely.
Wait... I thought all you had to do is write it in Erlang and it scales magically!
There's no magic fairy dust. Just a lot of things that fit together in nice ways if you use them well, and blow up in predictable ways once you have learned how to predict the system.
Here’s an old post with some examples - https://www.erlang-solutions.com/blog/which-companies-are-us...
But those who know, know. I mean it's on the front page of HN pretty often.
More importantly, you generally don't need an external queue service, in-memory KV store, task scheduler or many of the other things that JS/Ruby/Python stacks need. By consolidating just about everything but the DB in a single, well designed system, it's possible for a very small team to take on relatively large challenges on a smaller budget.
Kubernetes does all of that in a standard and easy way, and it is also completely language-agnostic. So your Python code without modification can benefit from it.
> So your Python code without modification can benefit from it.
That's not completely true, though. Say you have two Python processes: you need to work out yourself how they communicate. HTTP? A message broker? Through the DB? You need to handle errors, stateful deployments.
You can deploy Python code without modification if the Python code does very simple things. My point is that the BEAM gives you a lot of this mostly pre-solved, without having to add more infrastructure.
So you could use the BEAM to orchestrate Go or Rust services communicating over IPC? Nice.
There are many dead efforts to implement something like the BEAM or OTP within other ecosystems. Usually not as a VM.
Share-nothing green threads/coroutines also seem popular nowadays.
Nowadays there isn't anywhere near as much stuff that it does uniquely. That's probably why there isn't another one. All of the compiled languages off-the-shelf can solve the same problems that BEAM does now, and often with other advantages to boot.
There's something about the Erlang community that convinces people in it that if it isn't solved the exact same way that BEAM does it, then it must ipso facto not be as good as BEAM, but that's not true. If you ask the question "can I solve the same problems, even if it's in a different way?", you've got a zoo of options in 2025, whereas your options in 2000 were much, much weaker.
And yes, being BEAM-compatible is harder than meets the eye. There are projects like https://github.com/ergo-services/ergo that can do it, and I believe there are some in other languages. It's a pretty niche need in my opinion, though. If you don't need to hook up to some existing BEAM infrastructure, I don't consider it a very good solution for a greenfield project. You're better off with more modern tooling and solutions that are more native to your chosen dev environment.
You have that on a single node. If you need to run more than one node, you will end up inventing your own on top of mnesia, and usually the results are not spectacular, and/or you will end up paying happihacking to do it for you, or one of the other Erlang old-timers, who you can count on the fingers of your hands.
This is really suboptimal compared to what you can achieve with any normal language plus any message bus. You are actually much better off using a proper message bus even if you use Erlang.
I really strongly disagree with the idea that there's no modern use for BEAM because of these other solutions. It's not simply that we've convinced ourselves that "if it isn't solved the exact same way that BEAM does it, then it must ipso facto not be as good as BEAM", though I understand how you could see it that way.
Frankly what it is is that BEAM has an exceptionally well chosen position among the possible tradeoffs of solving these problems, which you are otherwise on your own to figure out and which are in themselves some of the most difficult practical considerations of systems design.
So again it's not that only BEAM can possibly do it right, but it's that BEAM does do it right. Having seen so many other attempts to do it better fail, at tremendous expense, it's an easy choice for me for systems that are expected to encounter these problems.
It is less that "I see it that way" than that "I encounter plenty of people who speak that way", and the Erlang community still definitely indoctrinates people to think that way.
See my other post on the topic, which echoes some of your points: https://news.ycombinator.com/item?id=44181668
The biggest thing you need in order to have BEAM-like reliability is a message bus. Build a message-bus-based system and use it properly and you're already 80% of the way there. In 1998, who knew what a "message bus" was? Today, it's a field so stuffed with options I won't even try to summarize them here. The one thing I will point out is that BEAM's message delivery is 0-or-1 (at most once), while 1-or-n (at least once) seems to have won the race; this is one of the things I don't like about BEAM's message bus.
BEAM is based on a JSON-like data scheme because it wasn't clear how to have what you'd consider "classes" and such on a cluster with multiple versions of things any other way. Since then, there are now multiple technologies to solve this problem, like gRPC, Cap'n Proto, and again, what was "who's heard of that?" in 1998 is now an entire field of options I can barely summarize. It is no longer necessary to sacrifice everything we get with "methods" and "user data types" to have cross-cluster compatibility.
Bringing up clusters is now just Tuesday for a devops team. Kubernetes, Docker Cloud, all the cloud-specific orchestrations like CloudFormation, bringing up clusters of things now has many technologies you can deploy. Moreover, these technologies do not impose the requirement that your nodes be essentially homogeneous, all running BEAM. You can bring up any tech stack you like doing any specialized tech stuff you need, all communicating over shared message busses.
Reliability options vary widely, from running internal services in OS processes to capture and restart things, to things like lambda functions that are their own solution to that sort of reliability, to isolated micro-VMs, to threading-based solutions... while there is a certain appeal to "just crash and take the thread down" it's not the only option anymore. Now that every language community that can is building huge servers with millions of threads in them, many solutions to this problem have developed, with different cost/benefit tradeoffs for each. The "crash out the whole thread" is not the only option, merely one interesting one.
As to how they compare to BEAM, that does slant the question a little bit, as if BEAM is the golden standard that everyone else is still desperately aspiring to. How they compare to BEAM is that there is now a zoo of options of every sort, with a huge variety of tradeoffs in performance and cost and ease-of-use and ease-of-deployment, and BEAM is merely a particular point in the huge field now, which I wouldn't even characterize as particularly standing out on any front. For every thing it does like "have a nice crashing story" there's a tradeoff where it did things like "give up user-defined data types because they couldn't figure out how to do them in a cluster in the late 1990s". BEAM's feature set, on a feature-by-feature basis, isn't particularly special any more. There's faster options, there's easier options, there's "works with my current language" options, there's more reliable options, there's cheaper options, there's all kinds of things... but not all of these at once necessarily.
So, what is BEAM's unique value proposition in 2025? It is the integration of the solutions, and picking a set of solutions that IMHO may not be "the best" on any particular front any more but has proved to be "adequate to many tasks" for decades now. You can assemble all those technologies I mentioned in an incredible number of combinations, but you have to do the assembly yourself, and burn time asking yourself, which message bus? Which clustering? Which orchestration? It's overwhelming, and made even harder by the fact that even with all these choices there's still a lot of developers who don't even know these things exist, let alone how to evaluate them and start integrating them into a system successfully. BEAM offers a one-stop shop, with prepared and opinionated answers with a proved track record, and a community that knows how to use them, libraries that integrate with the opinionated answers. I.e. who can write a library that "works with Kafka, Amazon SQS, NATS, Redis streams, Postgres events, and a dozen other messaging libraries" without having to work with a lowest common denominator that is almost useless? But someone can definitely provide "an Erlang library" that meaningfully integrates with all of the BEAM infrastructure without an issue. I don't think BEAM is best-of-breed on any particular front but it is arguably best-of-breed in providing them an answer to you for all these problems under a single roof, and then being able to build on the advantage that picking a set of solutions brings you.
I wish the BEAM/Erlang/Elixir community would lean more into this as their pitch, and stop running around and acting like they've got the only solution to the problems and as if they are the gold standard. The advantage of BEAM is not that they have the only reliability solution, the only message bus, the only clustering solution, the only etc. etc. anymore... the advantage is in the integration. The individual components of the solutions are not where the awesomeness lies; in 2025 most of them have been handily exceeded on many fronts now, but that integration and the subsequent things like libraries built on that integration is a unique proposition, even today. The very explosion of choice in the solutions to the problems BEAM addresses and the resulting diaspora in all the code bases in the world make it difficult to build very much on top of these things because there's so many differences in so many places.
I get your point that BEAM's individual components might not be the best in 2025, and you grant the point about uniformity. So what's the point of saying there's a better BEAM-like system, but then failing to point one out? The Elixir/BEAM community promoting themselves as the only solution to said problems isn't a bad thing, IMO, because what other system can give me those guarantees without forcing me to learn a bunch of new DSLs or scripting languages and deal with the idiosyncrasies of a bunch of different systems? With Elixir or Erlang, I can stick to one coherent environment and get all that.
Again, you state all this in your post, yet say Elixir/BEAM isn't the gold standard. Then what is? I'm having a blast working with Phoenix, LiveView, and Nerves, and the BEAM's guarantee of a millisecond-level soft-real-time, fault-tolerant system hasn't failed me yet, and there doesn't seem to be anything like it on the market. The only thing I hate about Elixir is the typing situation, and I would switch to Rust/Go if there were a similar offering.
I didn't point out one, I pointed out thousands. All the combinations of message busses and serializations and schedulers and monitors you can imagine. Systemd monitoring a selection of processes that read and write from kafka queues in a cluster of VMs. Lambda functions that read and write Amazon SQS/SNS queues written in Go. Azure Functions using JSON to communicate over Azure Service Bus with Terraform configuring redundancy. A microservices architecture orchestrated with etcd based on gRPC services communicating with each other on a NATS message bus. Arbitrary mixes and matches of all of these things, probably the most common architecture of all at a corporate level.
Many of these beat BEAM on some metric or other that may be important at some point. For instance it's not hard to beat BEAM on performance (though it is also not hard to lose to it on performance; it's a very middle-of-the-road system on that front, which I mean completely literally). Another thing that you get very naturally is heterogeneity; if you want everything on a BEAM system you really have to stick to BEAM's languages, which is a pretty big issue in most cases.
The reason I say BEAM is not the gold standard is that there's still people running around who speak as if BEAM is the only way to write reliable software, that it is still the only way to have lots of independent services that communicate by passing messages, that if you don't implement every single one of gen_server's features and work exactly like OTP's supervision trees and if you don't handle errors by crashing processes, then you're not operating at Erlang's Golden Standard.
And that's not true anymore. There's plenty of alternatives, many better than BEAM on many fronts. BEAM is not the natural obvious leader standing above the rest of the crowd, secure in the knowledge that nothing is as good as it is and never will be. It's now down in the scrum, and as long as it is metaphorically running around claiming it's somehow unique I'm going to grumble about it. It's not even good for BEAM itself, which really ought to be pitching integration rather than "Look! We solve the reliability problem uniquely!"
To the extent that people are reading my post and basically backpedaling and insisting "Oh, no, it's the integration all along that we've been pitching"... well, I don't particularly enjoy the rewriting of history, but other than that... yes! Good! Do that! Pitch the integration! It's a legitimate advantage! But it's not 2005 anymore. Every major compiled language can handle tens or hundreds of thousands of connections on a reasonably sized server. Every language has solutions for running this robustly and with decoupled architecture. Every language has solutions to the problems now. BEAM's in a crowd now, whether it likes it or not.
There is no gold standard for these technologies any more. One might as well ask "well, what's the best computer?" There's no answer to that question. Narrow it down to "what's the best gaming computer" and you can still ask "what sort of games" and it'll be a crowded field. There's more options that anyone can even analyze anymore.
The BEAM is set up for "Erlang like languages" (or maybe it's the other way around). Writing Elixir, still feels a lot like Erlang because of the underlying semantics of how it operates. Even Gleam is closer to Erlang than anything else once you get past the types.
Go also has goroutines/green threads/Erlang-like processes as a core primitive for its parallelism. It doesn't have the same "opinion" about how to structure a concurrent application that you get from OTP, though.
This warms my heart. While the internet is infamous for its negativity and how it makes people miserable, even small positive moments like this can make a lasting difference and remain memorable years later.
A lot of stuff that people do for free is rather thankless work except for the occasional appreciation. I've been maintaining some only modestly popular OSS projects on Github. They won't change the world and I mostly do it because it serves my own needs. But getting the occasional outreach from people that used some of my code is always a highlight in my day.
I don't use Erlang, but for 13 years in the making, I'm getting a copy.
Thank you.
P.S. If the author sees this: Kindle edition too, please.