One can then later consider spinning certain logic off into a separate service (e.g. in Golang) if speed ever becomes a concern with Python.
1. Very easy to find developers for. Python developers are everywhere, and even if they haven't worked with Django, it's incredibly easy to learn.
2. Simple stuff is ridiculously fast, thanks to the excellent ORM and (to my knowledge fairly unique) admin; there's a minimal sketch after this list.
3. It changes surprisingly little over time and is pretty easy to maintain.
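A minimal sketch of that ORM-plus-admin combo, with a hypothetical Article model:

```python
# models.py -- a hypothetical Article model
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title

# admin.py -- two lines buy you a full CRUD admin UI for the model
from django.contrib import admin
from .models import Article

admin.site.register(Article)
```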
I prefer Python and its web frameworks over TypeScript/React because there is a lot more stability and a lot less "framework-of-the-week"-itis to contend with. It's much easier to reason about Django code than any React project I've worked on professionally. IMO, when you don't have a firehose of money aimed at you, Python is the way to go.
Yet the approach also scales up to enterprise-grade projects, leveraging DRF, Django-cotton, and so on (and htmx).
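A taste of what leaning on DRF looks like, reusing the hypothetical Article model from the sketch above:

```python
# A hypothetical read/write API over Article with Django REST Framework
from rest_framework import routers, serializers, viewsets

from .models import Article

class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body", "created_at"]

class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer

# The router generates list/detail/create/update/delete routes for you
router = routers.DefaultRouter()
router.register(r"articles", ArticleViewSet)
# urlpatterns = [path("api/", include(router.urls))]
```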
Too much other stuff going on in this app to incorporate Django, but Django is still way ahead of the curve compared to bringing together independent microframeworks.
This is why most folks who just need a plain Python API, and nothing else, usually go for Flask, which is vastly simpler. For a more complete web app or site, I would recommend Django.
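For a sense of "vastly simpler", a plain Flask API really is this small:

```python
# A minimal Flask JSON API: one file, no project scaffolding
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/ping")
def ping():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(debug=True)
```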
On the views side, Django's route configuration and class-based views are sorely missed when using FastAPI. The dependency pattern is janky, and if you follow the recommended pattern of defining your routes in decorators, it's not obvious where your URL structure is even coming from.
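For contrast, a sketch of what's missed: Django declares the whole URL structure in one module and pairs it with class-based views (names hypothetical):

```python
# urls.py -- the URL structure lives in one place,
# rather than scattered across route decorators
from django.urls import path
from .views import ArticleDetailView, ArticleListView

urlpatterns = [
    path("articles/", ArticleListView.as_view(), name="article-list"),
    path("articles/<int:pk>/", ArticleDetailView.as_view(), name="article-detail"),
]

# views.py -- generic class-based views supply list/detail behavior
# (each renders a default template, e.g. article_list.html)
from django.views.generic import DetailView, ListView
from .models import Article

class ArticleListView(ListView):
    model = Article

class ArticleDetailView(DetailView):
    model = Article
```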
SQLAlchemy is cleaner and more powerful, but when you just need CRUD, Django wins.
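The CRUD case in the Django ORM, for reference (same hypothetical Article model):

```python
# Everyday CRUD with the Django ORM -- no session or unit-of-work to manage
article = Article.objects.create(title="Hello", body="...")   # create
article = Article.objects.get(pk=article.pk)                   # read
Article.objects.filter(pk=article.pk).update(title="Updated")  # update
article.delete()                                               # delete
```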
I don't know why anyone would use any other framework.
I still love Django for greenfield projects because it eliminates many decision points that take time and consideration but don't really add value to a pre-launch product.
We have a few FastAPI services, but are mostly moving away from Python for projects > 1kloc.
It makes it easy to compartmentalize the business logic in terms of module imports.
Modular monolith is a good idea, and if you want to do it in Django, make small apps that just expose services to each other (i.e., high-level business functions). Then have a completely separate app, just for the UI, that uses those services.
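A sketch of that layout, with hypothetical app, model, and function names:

```python
# billing/services.py -- a small app exposing a high-level business function
from decimal import Decimal
from .models import Invoice

def create_invoice(customer_id: int, amount: Decimal) -> Invoice:
    """The only entry point other apps should import from billing."""
    return Invoice.objects.create(customer_id=customer_id, amount=amount)

# ui/views.py -- a separate UI-only app that consumes the service layer
from decimal import Decimal
from django.shortcuts import render
from billing.services import create_invoice

def checkout(request):
    invoice = create_invoice(request.user.id, Decimal("9.99"))
    return render(request, "checkout.html", {"invoice": invoice})
```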
- Imports are a mess
- No control of mutation in function signatures, and in general it's still a surprise when things mutate (example below)
- Slow
- Types and enums have room for improvement
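On the mutation point, a small sketch of the two classic surprises (function names hypothetical):

```python
# Nothing in the signature says add_tag mutates its argument
def add_tag(tags: list[str], tag: str) -> list[str]:
    tags.append(tag)  # mutates the caller's list in place
    return tags

my_tags = ["a"]
add_tag(my_tags, "b")
print(my_tags)  # ['a', 'b'] -- the caller's list changed silently

# The other classic: a mutable default argument is shared across calls
def collect(item, bucket=[]):
    bucket.append(item)
    return bucket

print(collect(1))  # [1]
print(collect(2))  # [1, 2] -- same list object as the first call
```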
> I talked to this speaker afterward, and asked him how they did nested modals + updating widgets in a form after creating a new object in a nested modal. He showed me how he did it, I've been trying to figure this out for 8 months!
Do share!
Use bigint, never UUID. UUIDs are massive (2x a bigint) and now your DBMS has to copy that enormous value to every side of a relation.
It will bloat your table and indexes 2x for no good reason whatsoever.
Never use UUIDs as your primary keys.
"enormous value" = 128 bits (compared to 64 bits)
In the worst case this causes your m2m table to double, but I doubt this has a significant impact on the overall size of the DB.
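Rough arithmetic for that worst case, counting only the two key columns of an m2m table:

```python
# Per-row key bytes in an m2m join table (two key columns)
bigint_row = 2 * 8    # 16 bytes of key data per row
uuid_row   = 2 * 16   # 32 bytes of key data per row

# At 100M rows (index and tuple overhead excluded for simplicity)
rows = 100_000_000
extra_gib = (uuid_row - bigint_row) * rows / 1024**3
print(f"{extra_gib:.1f} GiB extra")  # ~1.5 GiB extra
```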
Preferably delete the row once it's been permanently stored.
Keeping an actual transaction open for that long is asking for contention, and the idea of having data hanging around ephemerally in memory also seems like a terrible idea – what happens if the server fails, the pod dies, etc.?
With UUIDs there is absolutely no need for that.
> Keeping an actual transaction open for that long is asking for contention
Yes, which is why this is not an actual db transaction, it's a business transaction as mentioned.
> and the idea of having data hanging around ephemerally
The data is not ephemeral of course. But also the mapping between business transaction and model is not 1:1, so while there is some use for the transaction ID, it can't be used to identify a particular model.
So your advice that you DEFINITELY won't need a BIGINT can come back to bite you if you're successful enough.
(You're probably thinking there's no way we have over 2 billion users and that's true, but it's also a bad assumption that one user row perfectly corresponds to one registered user. Assumptions like that can and do change.)
Also, protip for anyone using MySQL: take advantage of its UNSIGNED INT types. 2^32-1 is quite a bit; unsigned is also very handy for smaller lookup tables where you need a bit more than the 2^7-1 a signed TINYINT gives you.
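If you're on Django, its Positive*Field types map to UNSIGNED columns on MySQL; a sketch with a hypothetical model:

```python
# On MySQL, Django's Positive*Field types become UNSIGNED columns
from django.db import models

class LookupEntry(models.Model):
    # smallint UNSIGNED on MySQL: 0..65535, vs 127 for a signed TINYINT
    code = models.PositiveSmallIntegerField(unique=True)
    # integer UNSIGNED on MySQL: 0..2**32-1
    hits = models.PositiveIntegerField(default=0)
```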
> but it's also a bad assumption that one user row perfectly corresponds to one registered user. Assumptions like that can and do change.
There can be dupes and the like, yes, but if at some point the Customer table morphed into a Customer+CustomerAttribute table, for example, I’d argue you have a data modeling problem.
Or you could preload a table of autoincremented bigints and then atomically grab the next value from there whenever you need a surrogate key, e.g. in a distributed system with no natural PK.
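A sketch of that pattern against Postgres, with a hypothetical preallocated_ids table; DELETE ... RETURNING plus SKIP LOCKED makes the grab atomic and contention-free:

```python
# Hypothetical schema, pre-filled with INSERT ... DEFAULT VALUES:
#   CREATE TABLE preallocated_ids (
#       id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
#   );
import psycopg2

def claim_id(conn) -> int:
    """Atomically claim one preloaded surrogate key."""
    with conn.cursor() as cur:
        cur.execute(
            """
            DELETE FROM preallocated_ids
            WHERE id = (
                SELECT id FROM preallocated_ids
                ORDER BY id
                LIMIT 1
                FOR UPDATE SKIP LOCKED
            )
            RETURNING id
            """
        )
        row = cur.fetchone()
    if row is None:
        raise RuntimeError("preallocated_ids pool is empty; refill it")
    conn.commit()
    return row[0]

# Usage: conn = psycopg2.connect("dbname=app"); new_id = claim_id(conn)
```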
This seems like terrible advice.
For the vast, vast, vast majority of people, if you don't have an obvious primary key, choosing UUIDv7 is going to be an absolute no-brainer choice that causes the least amount of grief.
Which of these is an amateur most likely to hit?

- a crash caused by having too small a primary key and hitting the limit,
- slowdowns caused by having a primary key that is effectively unsortable (totally random),
- contention slowdowns caused by having a primary key that needs a lock (incrementing key), or
- slowdowns caused by having a key that is 16 bytes instead of 8?
Of all those issues, the slowdown from a 16 byte key is by far the least likely to be an issue. If you reach the point where that is an issue in your business, you've moved off of being a startup and you need to cough up real money and do real engineering on your database schemas.
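For reference, recent Python (3.14) ships uuid.uuid7(); on older versions you can assemble an RFC 9562 UUIDv7 by hand. A sketch:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """RFC 9562 UUIDv7: a 48-bit unix-ms timestamp, then random bits.

    Time-ordered, so B-tree inserts stay mostly append-only,
    unlike fully random UUIDv4 keys.
    """
    ts_ms = time.time_ns() // 1_000_000            # millisecond timestamp
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits
    value = (ts_ms & (2**48 - 1)) << 80 | rand     # assemble 128 bits
    value &= ~(0xF << 76); value |= 0x7 << 76      # set version = 7
    value &= ~(0x3 << 62); value |= 0x2 << 62      # set variant = 0b10
    return uuid.UUID(int=value)

print(uuid7())  # e.g. 0190d6f3-...-7...  (sorts by creation time)
```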
You can monitor and predict the growth rate of a table; if you don’t know you’re going to hit the limit of an INT well in advance, you have no one to blame but yourself.
Re: auto-incrementing locks, I have never once observed that to be a source of contention. Most DBs are around 98/2% read/write. If you happen to have an extremely INSERT-heavy workload, then by all means, consider alternatives, like interleaved batches or whatever. It does not matter for most places.
I agree that UUIDv7 is miles better than v4, but you’re still storing far more data than is probably necessary. And re: 16 bytes, MySQL annoyingly doesn’t natively have a UUID type, and most people don’t seem to know about casting it to binary and storing it as BINARY(16), so instead you get a 36-byte PK. The worst.
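From Python, the 16-byte form is just UUID.bytes, so there's little excuse for a 36-byte PK:

```python
import uuid

u = uuid.uuid4()
packed = u.bytes                  # 16 bytes, fits a BINARY(16) column
assert len(packed) == 16
text = str(u)                     # the 36-char form -- 36 bytes as CHAR(36)
restored = uuid.UUID(bytes=packed)
assert restored == u
```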
This kind of problem only exists in unsophisticated databases like SQLite. Postgres sequences live outside transactions and can hand out (or cache) whole ranges of IDs per connection, so there is effectively never any contention for the next ID in a serial sequence.
MySQL does need a special kind of table-level lock for its auto-incrementing values, but it has fairly sophisticated logic as of 8.0 as to when and how that lock is taken. IME, you’ll probably hit some other bottleneck before you experience auto-inc lock contention.
I’ll be 110 years old telling my great-grandchildren about how we used integers for primary keys, until reason arrived and we started using UUIDs.
And they’ll be like, “you weren’t one of those anti-vaxxers were you?”