But a part of me is reading this and thinking "friend... if PostHog was able to do what they're doing on the stack you're abandoning, do you think that stack is actually going to limit your scalability in any way that matters?" Like, you have the counterexample right there! Other companies are making the "technically worse" choice but making it work.
I love coding and I recognize that human beings are made of narratives, but this feels like 3 days you could have spent on customer needs or feature dev or marketing, and instead you rolled around in the code mud for a bit. It's fine to do that every now and then, and if this was a more radical jump (e.g. a BEAM language like Elixir or Gleam, or hell, even Golang, which has that preemptive scheduler + fast compiles/binary deploys + designed around a type system...) then I'd buy it more. And I'm not in your shoes so it's easy to armchair quarterback. But it smells a bit like getting in your head on technical narratives that are more fun to apply your creativity to, instead of the ones your company really needs.
Python didn't cause their problems, Django did. They wanted async, but chose a framework that doesn't really support it. And they weren't even running it on an async app server.
Python didn't work for them because every subsequent choice they made was wrong.
From a technical perspective, I find both python and node.js to be pretty underwhelming. If I had to pick a shiny new thing it would probably be one of the usual suspects like Rust.
But last time I worked with Python (2022), types in python were pretty uninspiring. In 2022 typescript was already very powerful and it just keeps improving.
It's asyncio and all the performance footguns that need to be fixed.
Pydantic is good. Mypy and pyright are good enough for type checking real projects. I run mypy as a pre-commit hook. It takes time but it has saved me from real bugs.
The type system coupled with pydantic for validation is more expressive and ergonomic than Java / Go. But it's also lousy when working with people who don't have type-oriented thinking (especially juniors). You need to make people type public signatures and enable strict linter settings.
Mixed:
Library-wise, the FastAPI ecosystem is type-first and works well. But with old-world ecosystems like Django I don't have first-hand experience.
SQLAlchemy seems to be getting better. But I wish something type-first similar to sqlc or Room or Micronaut Data JDBC existed for Python, where I could just use pydantic-validated DTOs and a query builder, rather than dealing with SQLAlchemy's proxy objects. It's workable though. I would suggest keeping SQLA objects only in the layer that touches the DB and converting them to pydantic models at the boundary.
Library support is hit or miss. In common web dev, I get good typings as long as I stick to popular and "boring" libraries. Sometimes I have to look at docstring and use typing.cast to return types.
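For what it's worth, the boundary conversion I mean looks roughly like this sketch (assumes the Pydantic v2 API; `UserRow` is a made-up stand-in for a SQLAlchemy mapped object):

```python
from pydantic import BaseModel, ConfigDict

class UserRow:
    """Stand-in for a SQLAlchemy mapped object living in the DB layer."""
    def __init__(self, id: int, email: str) -> None:
        self.id = id
        self.email = email

class UserDTO(BaseModel):
    """Validated DTO handed to the rest of the app."""
    model_config = ConfigDict(from_attributes=True)  # read from object attributes
    id: int
    email: str

# Convert at the boundary; everything above this layer sees only DTOs
row = UserRow(1, "a@example.com")
user = UserDTO.model_validate(row)
print(user.email)
```

The nice part is that the proxy-object weirdness stays confined to the DB layer.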
Cons:
new type checking solutions like pyrefly aren't there yet for my use cases. Ruff is good as linter and VSCode extension.
IDE extensions and mypy still miss some bugs which should never happen in typed languages, especially with coroutines. (I wish there were a way to make it an error to call a coroutine without await unless you submit it to asyncio.run or gather etc.; Dart has very good async linting in comparison.)
Writing `a: dict[tuple[str, str, str], int] = {}` is no fun. But it guarantees if I use a wrong type of key down in the function, I will get a red squiggle.
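Both cons in a stdlib-only sketch - the annotation is noisy, but the checker then flags wrong key shapes, while an unawaited coroutine only produces a runtime warning:

```python
import asyncio

counts: dict[tuple[str, str, str], int] = {}
counts[("GET", "/users", "200")] = 1
# counts[("GET", "/users")] = 1   # wrong key shape: mypy/pyright flag this line

async def fetch() -> int:
    return 42

async def main() -> int:
    fetch()               # bug: coroutine created but never awaited; only a RuntimeWarning
    return await fetch()  # correct usage

print(asyncio.run(main()))  # → 42
```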
Also what could you do in 3 mere days that would pay off more than having the code in a language that the team is much more efficient with, one which doesn't need hacks to "make it work"?
It would save you several days on features forever, compared to doing one thing for just 3 days.
In my book Nodejs doesn't belong on the server, but that's the choice they made. Python at least is thought out as a backend language, but can also be criticized for many aspects. If a team is more knowledgeable about modern languages, of course there are many technically probably better choices than both Nodejs or Python.
More seriously, I've worked on codebases I found ok, and some I deeply disliked, I guess there's a continuum from "exciting" to "frustrating".
The "what idiot wrote this? Oh, it was me" thing.
Personally I don't think there's anything wrong with scratching that itch, especially if its going to make you/your team more comfortable long term. 3 days is probably not make-or-break.
Also, considering the project is an AI framework, do you think the language ChatGPT is built on is a worse choice than the language we use because it's in the browser?
Because language bindings isn't really what makes ChatGPT tick.
Sure a project can be based on more than 1 language. But it seems to be mostly python.
the gist of this blog post is this company knew and understood node better than python, so they migrated to what they knew.
To be honest, I never liked the way async is done in python at all.
However, I love Django and Python in general. When I need "async" in an HTTP cycle flow, I use Celery and run it in the background.
If the client side needs to be updated about the state of the background task, the best approach is to send the data to a websocket channel known to the client side, whether it's a chat response from an LLM or importing a huge CSV file.
Simple rule for me is, "don't waste HTTP time, process quick and return quick".
SSE is nice.
I use a combination of Channels and Celery for a few projects and it works great.
but I still hope at some point they will manage to fix the devx with django/python and async
With LLMs, you can shit out working, production-ready web apps in 2 days now that are quite performant, as long as you don't care about long-term code maintainability.
Also, performance-wise FastAPI + uvicorn have many pitfalls as well, most of them due to asyncio.
The whole environment is built for async from the ground up. Thousands and thousands of hours put into creating a runtime and language specifically to make async programming feasible. The runtime handles async IO for you with preemptive scheduling. Ability to look at any runtime state on a production instance. Lovely community. More libraries than you might expect. Excellent language in Elixir.
Give it a shot.
People are reimplementing things that are first class citizens in elixir. Live content update, job runners, queues... Everything is built into the language. Sure you can do it all in typescript, but by then you'll be importing lots of libraries, reimplementing stuff with less reliability and offloading things like queues to third party solutions like pulsar or kafka.
People really should try elixir. I think the initial investment to train your workforce pays itself really quick when you don't have to debug your own schedulers and integrations with third party solutions. Plus it makes it really easy to scale after you have a working solution in elixir.
It's interesting, for some people Elixir really clicks, others can't make heads or tails of it. I don't mind Erlang either, but I understand that that is really an acquired taste.
But your comment has convinced me to try it since I am having a bit of NextJS burnout.
What about Elixir eliminates the need for Kafka? Simple queues I understand, but Kafka?
A lot of the affordances in the ecosystem have been supplanted by more modern solutions for many use cases, like Kubernetes.
Elixir also opens a number of footguns like abuse of macros; these are some of the reasons to second guess switching.
I think one of the strongest reasons for switching would be if you are willing to trade off all of this in exchange for the ability to do zero-downtime deploys, not just graceful shutdowns and rollovers - like if you're building a realtime system with long-lived interactions, such as an air traffic control or live conferencing system.
It can sometimes feel like an esoteric or regrettable choice for a rest api or rpc/event driven system. Even if you want a functional language there may be better choices like kotlin.
??
Elixir is strongly but dynamically typed.
On the progress of static typing:
There are probably fewer code samples, and let's be honest, this is 2025: how well do LLMs generate code for obscure languages where the training data is sparse?
I've had 3 Elixir jobs and 2 Rust jobs in the last 10 years. All were on real products, not vaporware. I learned a ton, worked with great people, and made real friends doing it.
Luck? Skill? Who knows. It's not impossible to work with the technology of your choice on problems you find interesting if you're a little intentional.
Nothing ever gets better if everybody just does what's already popular.
He spent time running benchmarks for 0-1 apps and all kinds of other metrics and found basically no appreciable difference in the speed or accuracy of AI at generating Elixir vs. Python. Maybe some difference, but honestly it just doesn't exist enough to matter.
Most code is boilerplate and that's where LLMs shine, I don't think this specific issue is very important.
You'd be surprised: https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/blob/b1...
A: Why in God's name? B: Every language, every framework and every tech stack is 1 month to 5 years away from being legacy crap. Unless you're learning something like COBOL, it's better to be able to use a variety of languages and show that you can adapt.
LOL. Speaking about absolutely horrible ideas ...
As an acceptor of reality, you can begin to accept that as well.
I had to switch my project to .NET in the end because it was too hard to find/form a strong Elixir team. Still love Elixir. Indestructible, simple, and everything is easy once you wrap your head around the functional programming.
It. Just. Works.
Obviously that's not going to give you the benefit of a person who has specifically worked in the ecosystem and knows where the missing stairs are, which does definitely have its own kind of value. But overall, I think a big benefit of working in something like Elixir, Clojure, Rust, etc is that it attracts the kind of senior level people who will jump at the opportunity to work with something different.
One nice side effect of having done this is having a small rolodex of other people who are like that.
So, like, if I had a good use case for Elixir and wanted a pal to hack on that thing with, I know a handful of people who I'd call, none of whom have ever used Elixir before but I know would be excited to learn.
Any recommendations for someone looking to break into the Elixir space in a serious (job-related/production app) way?
So my advice is, try to bolster your story that you can design and build systems (regardless of language), learn what is needed to get the job done, and _communicate_ your knowledge of those systems to people. Good teams will recognize this regardless of prior specific tech.
Source: I've been on hiring panels at multiple companies that used Elixir extensively and the factors that led to us making offers to candidates were rarely their preexisting Elixir experience.
==> What makes the Erlang runtime so special that you don't get from common solutions for retries etc.?
The Erlang runtime can start a scheduler for every core on a machine and, since processes are independent, concurrency can be achieved by spawning additional processes. Processes communicate by passing messages which are copied from the sender into the mailbox of the receiver.
As an application programmer all of your code will run within a process and passively benefit from these properties. The tradeoff is that concurrency is on by default and single threaded performance can suffer. There are escape hatches to run native code, but it is more painful than writing concurrent code in a single-threaded by default language. The fundamental assumption of Erlang is that it is much more likely that you will need concurrency and fault tolerance than maximum single thread performance.
Conversely all the node+typescript projects, big and small, have been pretty great the last 10+ years or so. (And the C# .NET ones).
I use python for real data projects, for APIs there are about half a dozen other tech stacks I’d reach for first. I’ll die on this hill these days.
While `PydanticAI` does the best it can with a limited type system, it just can't match the productivity of TypeScript.
And I still can't believe what a mess async Python is. The worst thing we've encountered was a bug from mixing anyio with asyncio which resulted in our ECS container getting its CPU pinned to 100% [1]. And we're constantly running into issues with libraries not handling task cancellation properly.
I get that python has captured the ML ecosystem, but these agent systems are just API calls and parsing json...
edit: ironically I'm the author of a weird third party library trying to second guess the asyncio architecture but mine is good https://awaitlet.sqlalchemy.org/en/latest/ (but I'll likely be retiring it in the coming year due to lack of interest)
In my experience async is something that node.js engineers try to develop/use when they come from node.js, and it's not something that python developers use at all. (with the exception of python engineers that add ASGI support to make the language enticing to node developers.)
FastAPI does have a few benefits over express, auto enforcing json schemas on endpoints is huge, vs the stupidity that is having to define TS types and a second schema that then gets turned into JSON schema that is then attached to an endpoint. That IMHO is the weakest link in the TS backend ecosystem, compiler plugins to convert TS types to runtime types are really needed.
The auto generated docs in FastAPI are also cool, along with the pages that let you test your endpoints. It is funny, Node shops setup a postman subscription for the team and share a bunch of queries, Python gets all that for free.
But man, TS is such a nice language, and Node literally exists to do one thing and one thing only really well: async programming.
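A sketch of what FastAPI effectively gets from Pydantic at the endpoint boundary (the model and payloads here are made up):

```python
from pydantic import BaseModel, ValidationError

class CreateUser(BaseModel):
    email: str
    age: int

# A valid payload is parsed and type-coerced before your handler runs
user = CreateUser(email="a@example.com", age=30)

# An invalid payload never reaches your handler;
# FastAPI turns the ValidationError into a 422 response automatically.
try:
    CreateUser(email="a@example.com", age="not a number")
except ValidationError as e:
    print(len(e.errors()), "validation error")
```

Because the schema and the type are the same object, the OpenAPI docs fall out for free too.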
Just define all your types as TypeBox schemas and infer the schema from that validator. This way you write it once, it's synced and there's no need for a compiler plugin.
https://github.com/sinclairzx81/typebox?tab=readme-ov-file#u...
The TS compiler should either have an option to pop out JSON schema from TS types or have a well defined plugin system to allow that to happen.
TS being compile time only really limits the language. It was necessary early on to drive adoption, but nowadays it just sucks.
Very painfully.
I avoid the async libs where possible. I'm not interested in coloring my entire code-base just for convenience.
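When I do need a bit of concurrency, asyncio.to_thread lets a thin outer layer stay async while the rest of the code-base stays sync and uncolored - a minimal stdlib sketch:

```python
import asyncio
import time

def slow_sync_work(n: int) -> int:
    # Ordinary sync function: no async coloring anywhere in the call chain
    time.sleep(0.1)  # stands in for blocking IO
    return n * 2

async def main() -> list[int]:
    # Only this thin outer layer is async; the sync code runs on worker threads
    return await asyncio.gather(
        *(asyncio.to_thread(slow_sync_work, i) for i in range(3))
    )

print(asyncio.run(main()))  # → [0, 2, 4]
```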
Django is great but sometimes it seems it just tries to overdo things and make them harder
Trying to async Django is like trying to do skateboard tricks with a shopping cart. Just don't
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
If one says, "we don't use an ORM", you will incrementally create helper functions for pulling the data into your language to tweak the data or to build optional filters, and thus will have an ad hoc, informally-specified, bug-ridden, slow implementation of half of an ORM. There is a time and place for direct SQL code and there is a time and place for an ORM. Normally I use an ORM that has a great escape hatch for raw SQL as needed.
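With SQLAlchemy, for example, that escape hatch looks like this (in-memory SQLite purely for illustration):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory DB for illustration

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO users VALUES (1, 'ada')"))
    # Raw, parameterized SQL where the ORM would get in the way:
    name = conn.execute(
        text("SELECT name FROM users WHERE id = :id"), {"id": 1}
    ).scalar()

print(name)  # → ada
```

You keep the ORM for the 90% CRUD case and drop to text() for the hairy queries.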
But yeah don't do a high level lang's job in C or C++
In Django, you can change a single field in a model, and that update automatically cascades through to database migrations, validations, admin panels, and even user-facing forms in the HTML.
Once you're in the situation of supporting a production system with some of the limitations mentioned, you also owe it to yourself to truly evaluate all available options. A rewrite is rarely the right solution. From an engineering standpoint, assuming you knew the requirements pretty early on, painting yourself into a bad enough corner to scrap the whole thing and pick a new language gives me significant pause for thought.
In all honesty I consider a lot of this blog post to be a real cause for concern -- the tone, the conflating arguments (if your tests were bad before, just revisit them), the premature concern around scaling. It really feels like they may have jumped to an expensive conclusion without adequate research.
In an interview, I would not advance a candidate like this. If I had a report who exhibited this kind of reasoning, I'd be drilling them on fundamentals and double-checking their work through the entire engineering process.
It's entirely likely that we did something wrong and misused celery. But if many people have problems with using a system correctly then it's also something worth considering.
There’s not much software I really dislike but Celery is one.
A nightmare within a nightmare to configure and run.
Moreover, having worked with Django a bit (I certainly don't have as much experience as you do), it seems to me that anything that benefits from asynchrony and is trivial in Node is indeed a pain in Django. Good observability is much harder to achieve (tools generally support Node and its asynchrony out of the box, async Python not so much). Celery is decent for long-running, background, or fire-and-forget tasks, but using it for some quick parallel work that would be a simple Promise.all() is much less performant (serialize your args, put them in redis, wait for a worker to pick them up, etc). Doing anything that blocks a thread for a little bit, whether in Django or Celery, is a problem, because you've got a very finite number of threads (unless you use gevent, which patches the stdlib, which is a huge smell in itself), and it's easy to run out of them... Sure, you can work around anything, but with Node you don't have to think about any of this; it just works.
When you're still small, isn't taking a week to move to Node a better choice than first evaluating a solution to each problem, implementing solutions, each of which can be more or less smelly (which is something each of your engs will have to learn and maintain... We use celery for this, nginx for that, also gevent here because yada yada, etc etc), which in total might take more days and put a much bigger strain on you in the long term? Whereas with Node, you spend a week, and it all just works in a standard way that everyone understands. It seems to me that exploring other options first would indeed be a better choice, but for a bigger project, not when the rewrite is that small.
Thank you for your answers!
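For reference, a rough asyncio equivalent of the Promise.all() shape mentioned above (fetch_price is a made-up stand-in for an API call):

```python
import asyncio

async def fetch_price(item: str) -> int:
    await asyncio.sleep(0.05)  # stands in for an API call
    return len(item)

async def main() -> list[int]:
    # The Promise.all() shape: run the awaitables concurrently, collect results in order
    return await asyncio.gather(fetch_price("apple"), fetch_price("fig"))

print(asyncio.run(main()))  # → [5, 3]
```

The primitive exists; the friction is that sync Django can't sit on top of it.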
"Python doesn't have native async file I/O." - like almost everybody, as "sane" file async IO on Linux is somehow new (io_uring)
Anyway ..
They claim about an 8x improvement in speed.
And I'm guessing the reason they didn't do greenthreading is it'd severely complicate working with C/native libs.
This sounds like standard case going with what developers know instead of evaluating tool for job.
I work on a large Django codebase at work, and this is true right up until you stray from the "Django happy path". As soon as you hit something Django doesn't support, you're back to lego-ing a solution together except you now have to do it in a framework with a lot of magic and assumptions to work around.
It's the normal problem with large and all-encompassing frameworks. They abstract around a large surface area, usually in a complex way, to allow things like a uniform API to caches even though the caches themselves support different features. That's great until it doesn't do something you need, and then you end up unwinding that complicated abstraction and it's worse than if you'd just used the native client for the cache.
I guess if you write a lot of custom code into specific hooks that Django offers or use inheritance heavily it can start to hurt. But at the end of the day, it's just python code and you don't have to use abstractions that hurt you.
Could you be more specific? Don't get me wrong, I'm well aware that npm dependency graph mgmt is a PITA, but curious where you an into a wall w/ Node.
As far as going with what you know vs choosing the best tool for the job, that can be a bit of a balancing act. I generally believe that you should go with what the team knows if it is good enough, but you need to be willing to change your mind when it is no longer good enough.
A company using 2.7 in 2022 is an indicator that the company as a whole doesn't really prioritize IT, or at least the project the OP worked on. By 2017 or so, it should have been clear that whatever dependencies they were waiting on originally were not going to receive updates to support python3 and alternative arrangements should be made.
It got this bad because the whole thing "just worked" in the background without issues. "Don't fix what isn't broken" was the business viewpoint.
All-in, there's no single silver bullet to solving a given issue. Python has a lot of ecosystem around it in terms of integrations that you may or may not need that might be harder with JS. It really just depends.
Glad your migration/switch went relatively smoothly all the same.
Working with both sync Django and async FastAPI daily, it’s so easy to screw up async FastAPI and bring things to a halt. If async is such the huge key feature they seem to think it is for their product, then I would agree moving away from Python early while it’s still relatively easy is the right call.
> and we had actually already written our background worker service in Node,
Ok well that’s a little bizarre… why use Django to begin with if you are not going to use the huge ecosystem that comes with it? New Django has first-class support for background workers, not that Celery is difficult to set up. It sounds like the engineering team just started building things in what they knew without any real technical planning, and the async hiccup is more or less an excuse to get things in order after the fact.
For example?
It was a three day small task?
Given they used TS and performance was a concern I would also question the decision to use Node. Deno or Bun have great TS support and better performance.
Don't get me wrong, I use Bun and I'm happy with it, but it's still young. With Hono/Drizzle/Zod I can always switch back to Node or Deno if necessary.
"drizzle works on the edge"
const results = await query`
SELECT...
FROM...
WHERE x = ${varname}
`;
Note: this is not SQL injection; the query is a string template handler that creates a parameterized query and returns the results asynchronously. There are adapters for most DBs, or it's easy enough to write one in a couple dozen lines of code or less. However, drizzle makes it very straightforward to handle DB migration / versioning, so I like it a lot for that.
I'm not sure what additional help you're getting. I'm just not a fan of ORMs as they tend to have hard edges in practice.
Obviously ORMs and query builders won't solve 100% of your queries but they will solve probably +90% with much better DX.
For years I used to be in the SQL-only camp but my productivity has increased substantially since I tried EF for C# and Drizzle for TS.
With an ORM, you can also over-query deeply nested related entities very easily... worse, you can then shove a 100mb+ JSON payload to the web client to use a fraction of.
Also the overhead of good ORMs is pretty minimal and won't make a difference in the vast majority of cases. If you find a bottleneck you can always use SQL.
What’s going to end up happening is they’ll then create another backend for AI stuff that uses python and then have to deal with multiple backend languages.
They should have just bit the bullet and learned proper async in FastAPI like they mentioned.
I won’t even get started on their love of ORMs.
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools that you know best. And sometimes that's not the best tool for the job. Let's say you need to build a CLI. It's very likely that Go is the best tool for the job, but if you're a great Python programmer, then just do it in Python.
Here's a clearer case where the author was not very good with Python. Clearly, since they actually used Django instead of FastAPI, which should have been the right tool for the job. And then wrote a blog post about Python being bad, but actually it's about Django. So yeah, they should have started with Node from day one.
Sometimes tools are worth learning!
A function to display help, and another one to parse the CLI parameters, isn't PhD-level coding.
Also nowadays, any LLM friend can quickly generate them.
If I'm faffing around with the console I'm going to have fun.
That is exactly what I am complaining about.
I guess some people like it, but just, ick.
Django works perfectly with green threads. It’s a superior model to async and avoids the whole function-coloring mess. I’ve seen Django setups outperform Go-based services running under similar conditions.
JavaScript is a terrible language and should only be used when there’s absolutely no alternative, such as in browsers.
Using Django was so intuitive although the nomenclature could do a bit better. But what took me days trying to battle it out on NextJS was literally done in an hour with Django (admin management on the backend). While Django still looks and feels a bit antiquated, at least it worked! Meanwhile I lost the entirety of the past weekend (or rather quite a bit of it), trying to fight my code and clean up things in NextJS because somehow the recommended approach for most things is mixing up your frontend and backend, and a complete disregard for separation of concerns.
My new stack from now on will likely be NextJS for the frontend alone, Django for the CRUD and authentication backend, Supabase Edge functions for serverless and FastAPI if needed for intensive AI APIs. Open to suggestions, ideas and opinions though.
My personal maybe somewhat "stubborn old man" opinion is that no node.js orm is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer as far as I recall.
Always happy to hear feedback/issues if anyone here would like to try it out. Thanks!
But there’s giant red flags up if you’re trying to do async with Django, which is built as synchronous code.
But since Python's LLM ecosystem is so strong, I really appreciate the courage it takes to migrate to Node when writing a RAG system. I've tried similar things recently, working on a document-analyzing project using React Router as the full-stack framework, while putting some ETL-related work on the Python side and using inngest to bridge the Node and Python services. In this way, I got the benefit of Node for LLM chat, while still being able to use Python's SOTA ETL libraries.
I really wish the dev would extract the dependency injection portion of the project and flesh it out a bit. There are a lot of rough edges in there.
But who is "we rewrote our stack on week 1 due to hypothetical scaling issues" supposed to impress? Not software professionals. Not savvy investors. Potential junior hires?
I always find this line of thought strange. It's as if the entire team hinges their technical decision on a single framework, when in reality it's relatively easy to overcome this level of difficulties. This reminds me of the Uber blunder - the same engineer/team switched Uber's database from MySQL to Postgres and then from Postgres to MySQL a few years later, both times claiming that the replaced DB "does not scale" or "sucks". In reality, though, both systems can work very well, and truth be told, Uber's scale was not large enough for either db to show the difference.
https://medium.com/creativefoundry/i-tried-to-build-an-ai-pr...
Despite MS, Guido and co throwing their weight behind it, there's still none of the somewhat promised 5x speedup across the board (more like 1.5x at best), the async story is still a mess (see TFA), the multiple-interpreters/GIL-less work is too little, too late, the ecosystem still hasn't settled on a single dependency and venv manager (just make uv standard and be done with it), types are a ham-fisted experience, and so on, and so forth...
I see Express as the backend. Why not NestJS? And are you using OpenAPI at all for generating your frontend client?
What I've discovered is: any backend + ORM should expose an OpenAPI-spec'd API... and your frontend can autogen your client for you. Allows you to move extremely quickly with the help of AI.
I recently wrote about issues debugging this stack[1], but now I feel very comfortable operating async-first.
[1] https://blendingbits.io/p/i-used-claude-code-to-debug-a-nigh...
I had to look for async versions of most of what I did (e.g. executing external binaries) and use those instead of existing functions or functionality, meaning it was a lot of googling "python subprocess async" or "python http request async".
If there were going to be some kind of Python 4.x in the future, I'd want some sort of inherent, goroutine-esque way of throwing tasks into the ether and then waiting on them if you wanted to. Let people writing code mark functions as "async'able", have Python validate that async'able code isn't calling non-async'able code, and then if you're not in an async runloop then just block on everything instead (as normal).
If I could take code like:

    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = get_image(imagename)
    print(result)

And replace it with:

    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = async get_image(imagename)
    print(result)

And just have the runtime automatically await the result when I try to access it if it's not complete yet, then it would save me thousands of lines of code over the rest of my career trying to parallelize things in cumbersome explicit ways. Perhaps provide separate "async" runners that could handle things - if for example you do explicitly want things running in separate processes, threads, interpreters, etc., so you can set a default async runner, use a context manager, or explicitly threadpool.task(async get_image(imagename)). Man, what a world that would be.
Also I think the node approach is probably still more performant than FastAPI but that's just a hunch.
Hopefully they won't have security issues because someone hijacked the node package that sets the font color to blue or passes the butter or something.
Answer: Because Django doesn't support async by default.
I have a simple wrapper that lets you write once and have it work for both sync/async https://blog.est.im/2025/stdout-04
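Not that exact wrapper, but the general shape of the idea is something like this sketch:

```python
import asyncio
import functools

def sync_or_async(func):
    """Make an async function callable from sync code too.

    Outside an event loop it blocks and returns the result;
    inside one it hands back the coroutine to be awaited.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        coro = func(*args, **kwargs)
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            return asyncio.run(coro)  # sync caller: run to completion
        return coro  # async caller: await it as usual
    return wrapper

@sync_or_async
async def double(x: int) -> int:
    await asyncio.sleep(0)
    return x * 2

print(double(21))  # sync call → 42

async def main() -> int:
    return await double(21)  # async call → 42

print(asyncio.run(main()))
```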
>Python async sucks
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
What honest reaction do you expect from readers?
I say that as someone who prefers JS promises: you likely won't face issues with either.
Took me a weekend to create a basic trading algo tester in Go that leveraged all computer cores.
And I was new to Go and Llama didn't exist back then.
These days it would have probably taken me a Sunday morning to do the same
It's about getting it done.
lol sounds more like a bunch of front end developers who don’t know what they are doing wanted to use a language they use on the front end on the backend.
I always wanted an emacs with python as the underlying language. Is emacs brilliant choosing lisp or outdated?
Node.js is such an incredible mess. The ideas are usually ok but the implementation details, the insane dependencies (first time I tried to run a Node.js based project I thought there was something seriously wrong with my machine and that I'd been hacked), the lack of stability, the endless supply chain attacks, maintainers headaches and so on, there is very little to like about Node.js.
I'd pick C# before Node.js, and I can't stand C#. And Java before C#. Yes, it's a language rant, but in the case of Node I am really sorry.
In fact, JavaScript has borrowed a lot from C# including async/await, lambda expressions, and the syntax for disposables -- all influenced by and done first in C#.
Of course, TypeScript and C# are from the same designer at Microsoft so there are even more similarities. Any team that's considering moving to TypeScript should also really give C# a look.
[0] https://typescript-is-like-csharp.chrlschn.dev/pages/intro-a...
> As you get more familiar with computers you will understand more and more what's going on.
Pot, meet kettle.
And yes, Rust's package management was inspired by Node, and it is one of the major drawbacks of Rust.
there's Effect-TS if you need app-level control
there's caolan's async if you need series and parallel controls
there's RxJS if you need observables
on web frameworks, Hono seems nice too. if you need performance, there's uWebSockets.js, which beats all other web frameworks in http and websocket benchmarks.
for type safety, aside from TypeScript, there's ArkType, Zod, Valibot, etc.
It took our team 5 man-years (1 year of time) to upgrade from Laravel 4 to 5...
Or, if feeling fancy, Erlang or Elixir.
"Migrated from Python to Rust? That makes sense, I guess. Next".
"Migrated from Python to Javascript? What? That's crazy! I'd better read this."
I started ripping them out of a java system even before that.
    import { Skald } from '@skald-labs/skald-node';

    const skald = new Skald('your-api-key-here');

    // Create a memo
    const memo = await skald.createMemo({
      title: 'Meeting Notes',
      content: 'Full content of the memo...'
    });

    // Chat with the memo
    const answer = await skald.chat({
      query: 'What were the main points discussed in the Q1 meeting?'
    });
Normally I do this either through multiprocessing or concurrent.futures, but I figured this was a pretty simple use case for async - a few simple functions, nothing complex, just an inner loop that I wanted to async and then wait for.
Turns out Python has a built in solution for this called a TaskGroup. You create a TaskGroup object, use it as a context manager, and pass it a bunch of async tasks. The TaskGroup context manager exits when all the tasks are complete, so it becomes a great way to spawn a bunch of arbitrary work and then wait for it all to complete.
It was a huge time saver right up until I realized that - surprise! - it wasn't waiting for them to complete in any way, shape, or form. It was starting the tasks and then immediately exiting the context manager. Despite (as far as I could tell) copying the example code exactly and the context manager doing exactly what I wanted to have happen, I then had to take the list of tasks I'd created and manually await them one by one anyway, then validate their results existed. Otherwise Python was spawning 40 external processes, processing the "results" (which was about three incomplete image downloads), and calling it a day.
I hate writing code in golang and I have to google every single thing I ever do in it, but with golang, goroutines, and a single WaitGroup, I could have had the same thing written in twenty minutes instead of the three hours it took me to write and debug the Python version.
So yeah, technically I got it working eventually but realistically it made concurrency ten times worse and more complicated than any other possible approach in Python or golang could have been. I cannot imagine recommending async Python to anyone after this just on the basis of this one gotcha that I still haven't figured out.