API_KEY = os.environ.get("YOUTUBE_API_KEY")
CHANNEL_ID = os.environ.get("YOUTUBE_CHANNEL_ID")
if not API_KEY or not CHANNEL_ID:
    print("Missing YOUTUBE_API_KEY or YOUTUBE_CHANNEL_ID.")
    exit(1)
Presenting the user with "Missing X OR Y" when there's no reason the OR has to be there massively frustrates the user, for the near-zero benefit of having one fewer if statement.

if not API_KEY:
    print("Missing YOUTUBE_API_KEY.")
    exit(1)
if not CHANNEL_ID:
    print("Missing YOUTUBE_CHANNEL_ID.")
    exit(1)
Way better user experience, 0.00001% slower dev time.

if not (API_KEY := os.getenv("API_KEY")):
    ...
For internal tools I just let os.environ["API_KEY"] raise a KeyError. It's descriptive enough.

try:
    x = int('cat')
except Exception as e:
    pass
print(e)  # <- NameError: name 'e' is not defined
So, it appears Python actually has three variable scopes (global, local, exception block)?

e = 'before'
try:
    x = int('cat')
except Exception as e:
    e2 = e
    print(e)
print(e2)  # <- This works!
print(e)   # <- NameError: name 'e' is not defined
It's not a scoping thing; the bound exception variable is actually deleted after the except block, even if it was already bound before!
https://stackoverflow.com/questions/24271752
https://docs.python.org/3/reference/compound_stmts.html#exce...
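The docs suggest the standard workaround when you do want the exception object after the block: copy it to another name inside the handler. A minimal sketch:

```python
saved = None
try:
    x = int('cat')
except ValueError as exc:
    saved = exc  # copy before the block ends; `exc` itself is deleted on exit
# `exc` no longer exists here, but the exception object survives via `saved`
print(repr(saved))
```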
But the language makes sense at a lower level, scopes, values, bindings have their mostly reasonable rules that are not hard to follow.
In comparison python seems like an infinite tower of ad-hoc exceptions over ad-hoc rules, sure it looks simpler but anywhere you look you discover an infinite depth of complexity [1]
[0] and how half of the complaints are a variation of "I don't like that NaNs exist"
[1] my favourite example is how dunder methods are a "synchronized view" of the actual object behaviour, that is in a + b a.__add__ is never inspected, instead at creation time a's add behaviour is defined as its __add__ method but the association is purely a convention, eg any c extension type need to reimplement all these syncs to expose the correct behaviour and could for funzies decide that a type will use __add__ for repr and __repr__ for add
The "random things" make it practically impossible to figure out what will happen without learning a whole bunch of seemingly arbitrary, corner-case-specific rules (consider the jsdate.wtf test currently making the rounds). And no, nobody is IMX actually simply complaining about NaNs existing (although the lack of a separate integer type does complicate things).
Notice that tests showcasing JavaScript WTFery can work just by passing user data to a builtin type constructor. Tests of Python WTFery generally rely on much more advanced functionality (see e.g. https://discuss.python.org/t/quiz-how-well-do-you-know-pytho...). The only builtin type constructor in Python that I'd consider even slightly surprising is the one for `bytes`/`bytearray`.
Python's scoping is simple and makes perfect sense, it just isn't what you're used to. (It also, unlike JavaScript, limits scope by default, so your code isn't littered with `var` for hygiene.) Variables are names for objects with reference semantics, which are passed by value - exactly like `class` types in C# (except you don't have to worry about `ref`/`in`/`out` keywords) or non-primitives in Java (notwithstanding the weird hybrid behaviour of arrays). Bindings are late in most places, except notably default arguments to functions.
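A minimal illustration of "references passed by value": mutating through the reference is visible to the caller, while rebinding the local name is not.

```python
def mutate(lst):
    lst.append(1)    # mutates the shared object; the caller sees this

def rebind(lst):
    lst = [1, 2, 3]  # rebinds the local name only; the caller is unaffected

a = []
mutate(a)
print(a)  # [1]
rebind(a)
print(a)  # still [1]
```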
I have no idea what point you're trying to make about __add__; in particular I can't guess what you think it should mean to "inspect" the method. Of course things work differently when you use the C API than when you actually write Python code; you're interacting with C data structures that aren't directly visible from Python.
When you work at the Python level, __add__/__iadd__/__radd__ implement addition, following a well-defined protocol. Nothing happens "at creation time"; methods are just attributes that are looked up at runtime. It is true that the implementation of addition will overlook any `__add__` attribute attached directly to the object, and directly check the class (unlike code that explicitly looks for an attribute). But there's no reason to do that anyway. And on the flip side, you can replace the `__add__` attribute of the class and have it used automatically; it was not set in stone when the class was created.
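To make the lookup rule concrete, a small sketch: `+` consults the class and ignores an `__add__` attribute set directly on the instance, while patching the class takes effect immediately.

```python
class A:
    def __add__(self, other):
        return "class add"

a = A()
a.__add__ = lambda other: "instance add"  # ignored by the + operator
print(a + 0)  # class add

A.__add__ = lambda self, other: "patched"  # class attribute: picked up at once
print(a + 0)  # patched
```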
I'll grant you that the `match` construct is definitely not my favourite piece of language design.
It's not that odd, since it's the only situation where you cannot guarantee the variable gets bound, unless you enjoy having variables that may or may not be defined (Heisenberg variables?), depending on whether the exception has been raised or not.
Compare with the if statement, where the variable in the expression being tested will necessarily be defined.
if False:
    x = 7
print(x)

print(x)
^
NameError: name 'x' is not defined
Ruby does this sort of stuff, where a variable is defined more or less lexically (nil by default). Python doesn't do this. You can have local variables that only maybe exist in Python.

for i in range(0):
    pass
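Spelling out that example: whether the name exists after the loop depends on whether the body ever ran.

```python
for i in range(0):
    pass
# range(0) is empty, so `i` was never bound:
try:
    print(i)
except NameError as e:
    print(e)  # name 'i' is not defined

for i in range(3):
    pass
print(i)  # 2 - the last value leaks out of the loop
```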
Is there any chance this would cause trouble though? Furthermore, what would be the need of having this variable accessible after the except block? In the case of a for block, it could be interesting to know at which point the for block was "passed".
So, maybe "None" answers your question?
if <true comparison here>:
    x = 5
print(x)  # <- should give name error?
In the case of an if like in your example, no provision is made about the existence of x. It could have been defined earlier, and this line would simply update its value.
Your example:

if True:
    x = 5
print(x)  # 5
Same with x defined prior:

x = 1
if False:
    x = 5
print(x)  # 1
What about this one?

if False:
    x = 5
print(x)  # ???
On the other hand, the notation "<exception value> as <name>" looks like it introduces a new name; what if that name already existed before? Should it just replace the content of the variable? Why the "as" keyword then? Why not something like "except <name> = <exception value>" or the walrus operator?

While investigating this question, I tried the following:
x = 3
try:
    raise Exception()
except Exception as x:
    pass
print(x)  # <- what should that print?
edit: writing from phone on couch and the laptop... looks far, far away...
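For what it's worth, in CPython the `except ... as x` clause unconditionally deletes the name when the block exits, so even the earlier `x = 3` binding is gone:

```python
x = 3
try:
    raise Exception("boom")
except Exception as x:
    pass
try:
    print(x)
except NameError:
    print("x was deleted by the except clause")
```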
This "feature" was responsible for one of the worst security issues I've seen in my career. I love Python, but the scope leakage is a mess. (And yes, I know it's common in other languages, but that shouldn't excuse it.)
1) Loop through a list of permissions in a for loop
2) After the loop block, check if the user had a certain permission. The line of code performing the check was improperly indented and should have failed, but instead succeeded because the last permission from the previous loop was still in scope.
Fortunately there was no real impact because it only affected users within the same company, but it was still pretty bad.
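A hypothetical sketch of that bug class (all names invented): the check was meant to live inside the loop body, but mis-indented at the outer level it silently tests only the last item, because the loop variable leaks.

```python
def log_access(permission):   # hypothetical helper
    print("accessed:", permission)

def grant_admin_access():     # hypothetical helper
    print("admin granted")

permissions = ["read", "write", "admin"]
for permission in permissions:
    log_access(permission)

# Meant to be indented into the loop body; instead it checks only the
# LAST item, because `permission` is still bound after the loop ends.
if permission == "admin":
    grant_admin_access()
```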
But if something fails in a loop running in the repl or jupyter I already have access to the variables.
If I want to do something with a loop of data that is roughly the same shape, I already have access to one of the items at the end.
Short circuiting/breaking out of a loop early doesn't require an extra assignment.
I really can't see the downside.
>>> s = "abc"
>>> [x:=y for y in s]
['a', 'b', 'c']
>>> x
'c'
>>> y
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'y' is not defined
Comprehensions have their own local scope for their local variables, but the walrus operator reaches up to the innermost "assignable" scope.

The last time I wrote Python in a job interview, one of the interviewers said "wait, I don't know Python very well but isn't this kinda an old style?" Yes, guilty. My Python dates me.
Letting the KeyError go is usually fine, but if you want to log it or can recover somehow then:

try:
    API_KEY = os.environ["API_KEY"]
except KeyError:
    logger.exception("...")
    ...
Sometimes that's inevitable, but it's a bit noisy when it isn't.
You can protect passwords in a password manager. You do not need to keep the passwords in env and I do not.
Why are processes running that can do this, that I don't already fully trust?
> You can protect passwords in a password manager.
What's your plan for supplying the password to the program, given that people will want to automate use of the program?
https://typer.tiangolo.com/tutorial/arguments/envvar/
It's especially nice for secrets. Best of both worlds :)
One could write a huge treatise on everything that is wrong with environment variables. Avoid them like the plague. They are a huge usability PITA.
Environment variables are substantially more secure than plain text files because they are not persistent. There are utilities for entering secrets into them without leaking them into your shell history.
That said, you generally should not use an environment variable either. You should use a secure temporary file created by your shell and pass the associated file descriptor. Most shells make such functionality available but the details differ (ie there is no fully portable approach AFAIK).
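As a sketch of the file-descriptor approach (assuming a shell that can open the secret onto a descriptor for you, e.g. via process substitution): the program reads the secret from an inherited descriptor number, so the secret is neither persistent on disk in the program's view nor visible in the environment. The helper name here is invented.

```python
import os

def read_secret_fd(fd: int) -> str:
    """Read a secret from an inherited file descriptor, e.g. one the
    shell opened onto a temporary file or pipe before exec'ing us."""
    with os.fdopen(fd, "r") as f:
        return f.read().strip()

# Simulate the shell handing us a descriptor, using an OS pipe:
r, w = os.pipe()
os.write(w, b"hunter2\n")
os.close(w)
print(read_secret_fd(r))  # hunter2
```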
The other situation that sometimes comes up is that you are okay having the secret on disk in plain text but you don't want it inadvertently commited to a repository. In those cases it makes sense to either do as you suggested and have a dedicated file, or alternatively to set an environment variable from ~/.bashrc or similar.
if not API_KEY and not CHANNEL_ID:
    print("Missing both YOUTUBE_API_KEY and YOUTUBE_CHANNEL_ID.")
    exit(1)
if not API_KEY:
    print("Missing YOUTUBE_API_KEY.")
    exit(1)
if not CHANNEL_ID:
    print("Missing YOUTUBE_CHANNEL_ID.")
    exit(1)
That way you don't end up fixing one, just to come back and be told you're also missing another requirement.

valid = True
if not API_KEY:
    print("Missing YOUTUBE_API_KEY.")
    valid = False
if not CHANNEL_ID:
    print("Missing YOUTUBE_CHANNEL_ID.")
    valid = False
if not valid:
    exit(1)
This way you only check each value once (because your logic might be more complicated than just checking it's not set; maybe it can be wrongly formatted) and you still get to do whatever logic you want. It also removes the combinatorial problem.

This is a pretty general principle of separating decision from action.
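A compact variant of the same decide-then-act idea, assuming the only check needed is presence:

```python
import os

REQUIRED = ("YOUTUBE_API_KEY", "YOUTUBE_CHANNEL_ID")

def check_required(env) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

missing = check_required(os.environ)
if missing:
    print("Missing: " + ", ".join(missing))
    # exit(1) at startup; every missing name is reported in one pass
```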
API_KEY = os.environ.get("YOUTUBE_API_KEY")
CHANNEL_ID = os.environ.get("YOUTUBE_CHANNEL_ID")
assert(API_KEY, "Missing YOUTUBE_API_KEY")
assert(CHANNEL_ID, "Missing CHANNEL_ID")
That bit me before... it created a tuple which evaluated as true.
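For the record, the fix: `assert` takes an expression and a message, not a parenthesised pair. A two-element tuple is always truthy, so the parenthesised form can never fire (recent CPython even emits a SyntaxWarning about it).

```python
API_KEY = None

# assert (API_KEY, "Missing YOUTUBE_API_KEY")  # tuple: always true, never fires

try:
    assert API_KEY, "Missing YOUTUBE_API_KEY"  # correct form
except AssertionError as exc:
    msg = str(exc)
print(msg)  # Missing YOUTUBE_API_KEY
```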
If the user is a typical consumer using a typical consumer interface, then yes you want to handhold them a bit more.
$ python3 -c "print('clear messaging'); exit(1)"
clear messaging
$ python3 -c "raise ValueError('text that matters')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ValueError: text that matters
and that story gets _a lot_ worse when some programs raise from within a "helper" module and you end up with 8 lines of Python junk to the one line of actual signal.

If it hadn't already been flagged, that is.
("Lmao" is useless, but definitely not worse than some other responses.)
I recommend cookiecutter for this. I have a few templates I've built with that which I use frequently:
python-lib: https://github.com/simonw/python-lib
click-app: https://github.com/simonw/click-app
datasette-plugin: https://github.com/simonw/datasette-plugin
llm-plugin: https://github.com/simonw/llm-plugin
You can run them like this:
uvx cookiecutter gh:simonw/python-lib
It doesn't copy template repos, but rather creates a list of imperative steps to perform. Steps can be both manual (obtain an API key and store it here) and automatic (run 'uv init'). Markdown syntax, Ruby string interpolation, and Bash.
It came from a deep hate for yml based configs.
I'm not a huge fan of cookiecutter on aesthetic principles, though. I think it's chosen some needlessly heavyweight dependencies for such a simple task. PyYAML comes with a big chunk of compiled C, while performance is really not going to matter and I'd rather use TOML anyway. I've yet to see a project depending on Rich that uses more than a tiny portion of it; this is no exception. More to the point, Rich brings in the much larger Pygments (for syntax highlighting; most of the bulk is actual syntax definition rules for a variety of programming languages) and you can't disable that. And honestly I'm not a fan of the whole "download other people's templates using an integrated client" model anyway; Requests and its dependencies are pretty bulky, too.
If you work at an agency or as a freelancer and you build various similar apps with similar tooling and base setup, being able to scaffold and have them all set up quickly, not having to do it manually and waste hours, is important. Similarly, if you work on various small open source packages and want the tooling to be the same, READMEs to look the same, etc., a script or tool to "spit out" the basic structure can be nice.
On the other hand, if you set up the app or larger open source package and you’ll work only on that project for potentially years, setting up a project individually, organically makes a lot of sense.
And to run cookiecutter do you still use pipx, or have you switched to `uv tool install`
I use "uvx cookiecutter" myself these days.
That's a very optimistic evaluation - literally anything beyond "import json" will likely lead you into the abyss of virtual envs. Try running something created with, say, Python 3.13.x on Ubuntu 22.04 or even 24.04 (LTS) / Rocky 9 and the whole can of worms opens.
Things like virtual envs + containers (Docker-like)/version managers become a must quickly.
Also, it’s not the 2000s any more. Using venv to isolate application installs is not very hard anymore and there have been decent package managers for a long time.
Arrogantly wrong.
I've coded in Python for almost 20 years. Many of those years I've had it as my primary language at work.
2024 was the first year I actually needed a virtualenv. Before that, I'd happily use pip to install whatever I want, and never had a version conflict that caused problems.
I often encounter junior folks who default to using a virtualenv, not because they need to, but because they've been brainwashed into believing that "you're doing it wrong if you don't use one".
In any case, people should use uv these days.
but yes, pip predates the paradigm of package managers managing your dependency file instead of you managing your dependency file and then invoking the package manager
It's wise to keep silent in such cases.
Okay I'll bite: how did you deal with situations where you needed to work on two different projects that required different versions of the same dependency?
Pip is focused on installing the packages, not really "managing" them. The problem is that there are quite a few different takes on what "management" should entail. That's where all the alternatives are coming from.
But also, ignoring things pip isn't meant to do like version management and just focusing on installing, pip's default of installing to some unclear system location was always confusing, particularly when Py2 vs 3 was a thing.
Pip is fine. It's been fine for at least the last 5 to 10 years.
My first taste of Python was as a sysadmin, back in 2012 or so, installing a service written in Python on a server. The dependency hell, the stupid venv commands, all this absolute pain just to get a goddamn webserver running, good lord. It turned me off of Python for over a decade. Almost any time I saw it I just turned and walked away, not interested, no thanks. The times I didn't, I walked right back into that pile of bullshit and remembered why I normally avoided it. The way `brew` handles it on macOS is also immensely frustrating, breaking basic pip install commands, installing libraries as commands but in ways that make them not available to other python scripts, what a goddamn disaster.
And no, I really have no clue what I'm talking about, because as someone starting out this has been so utterly stupid and bewildering that I just move on to more productive, pleasant work with a mental note of "maybe when Python gets their shit together I'll revisit it".
However, uv has, at least for my beginner and cynical eyes, swept away most of the bullshit for me. At least superficially, in the little toy projects I am starting to do in Python (precisely because its such a nicer experience), it sweeps away most of the horrid bullshit. `uv init`, `uv add`, `uv run`. And it just works*.
I don't think this is a silly theory at all. The only possibly silly part is that containers specifically helped solve this problem just for python. Lots of other software systems built with other languages have "dependency hell."
Debian's apt-get was very "apt" at the time when it came out. It solved the entire issue for Debian. There was a point at which there was an apt-rpm for redhat. Yum tried to solve it for redhat, but didn't really work that well -- particularly if you needed to pin packages to certain versions.
>so there is never a reason to start a new project with Python today
Nothing else has an ML/data ecosystem that compares. Perl/Go are maybe a distant 2nd
uv is _much_ better than what came before. As someone who has only had only glancing contact with Python throughout my career (terrible experiences at jobs with conda and pip), uv feels like Python trying to actually join the 21st century of package management. It's telling that it's in Rust and clearly takes inspiration from Cargo, which itself took inspiration from Ruby and Bundler.
[tools]
python = latest
uv = latest
"pipx:yt-dlp" = latest
[settings]
[settings.pipx]
uvx = true
[settings.python]
uv_venv_auto = true
These are all features of the [pipx backend] for mise, and the docs discuss what to do in the case of things like python updates, etc. The advantage of doing it this way, particularly for a global mise config, is that you treat these python tools as basically any other mise tool, so their versioning is easy to control.

I know mise isn't really a package manager. But with its support for things like this, be it for python, ruby, npm, or cargo, as well as more universal support from things like ubi and the upcoming GitHub backends, it's rapidly becoming my favorite package manager. I've a number of projects that use particularly useful node based tools, like markdownlint or prettier, that aren't JS based in any way, shape, or form. Having to carry around a package.json felt weird, and with the way mise handles all of it, now I don't have to.
[pipx backend]: https://mise.jdx.dev/dev-tools/backends/pipx.html
Or it could be something else, not sure.
[tools]
python = "3.12.11"
ruff = "latest"
I get ruff installed and anything else needed without any fuss.
Every language seems to have this problem. Or else how can we explain the proliferation of AppImage, Flatpak, Snap, ... ?
It's why I like using Bottle for small Python frontends: download the file and import.
(I'm ranting based on personal experiences with IT in the past. Yes in general virtualenv is the baseline)
If you're not using dependencies, and are writing 3.x code, there's very little justification.
If we use uv from TFA, like the commands are nearly 1:1:
npm install <=> uv sync
npm install foo <=> uv add foo
> It doesn't have to also store a local copy of NodeJS… which Node devs do have a thing for, too, called nvm, written in bash.
This point is true; the ecosystem simply can't change overnight. uv is getting there, I hope.
> it always works the same everywhere
`uv` works the same, everywhere?
> it actually writes down the deps
`uv add` does that, too.
> I don't have to manually switch between them (or set up fancy bashrc triggers) like venvs.
You don't have to do that with `uv`, either?
Those browser JS libs installed via <script> tags though, honestly were pretty convenient in a way.
Minor differences between distro versions can make a big difference, and not everyone that uses a Python script knows how to use something like pyenv to manage different versions.
I consider my point still valid with uv - what did you want to express?
On uv specifically: 'asdf' compiles Python right on your system from official sources, which means it uses your SSL libs, for example. uv brings its own prebuilt Python binary, and I'm wary of that.
I disagree that virtual environments represent an "abyss". It takes very little effort to learn how they work [1], plus there a variety of tools that will wrap the process in various opinionated ways [2]. The environment itself is a very simple concept and requires very few moving parts; the default implementation includes some conveniences that are simply not necessary.
In particular, you don't actually need to "activate" a virtual environment; in 99% of cases you can just run Python by specifying the path to the environment's Python explicitly, and in the exceptional cases where the code is depending on environment variables being set (e.g. because it does something like `subprocess.call(['python', 'foo.py'])` to run more code in a new process, instead of checking `sys.executable` like it's supposed to, or because it explicitly checks `VIRTUAL_ENV` because it has a reason to care about activation) then you can set those environment variables yourself.
Creating a virtual environment is actually very fast. The built-in `venv` standard library module actually does it faster in my testing than the equivalent `uv` command. The slow part is bootstrapping Pip from its own wheel - but you don't need to do this [2]. You just have to tell `venv` not to, using `--without-pip`, and then you can use a separate Pip (for recent versions — almost the last 3 years now) copy cross-environment using `--python` (it's a hack, but it works if you don't have to maintain EOL versions of anything). If you need heavy-duty support, there's also the third-party `virtualenv` [3].
Much of the same tooling that manages virtual environments for you — in particular, pipx and uv, and in the hopefully near future, PAPER [4] — also does one-off script runs in a temporary virtual environment, installing dependencies described in the script itself following a new ecosystem standard [5]. Uv's caching system (and of course I am following suit) makes it very fast to re-create virtual environments with common dependencies: it has caches of unpacked wheel contents, so almost all of the work is just hard-linking file trees into the new environment.
[0]: https://stackoverflow.com/questions/76105218
[1]: https://chriswarrick.com/blog/2018/09/04/python-virtual-envi...
[2]: https://zahlman.github.io/posts/2025/01/07/python-packaging-...
[3]: https://virtualenv.pypa.io/
Activating a venv was at least something they could relate to.
Say you want to use a specific version of python that is not available on Ubuntu.
1. Install build dependencies https://devguide.python.org/getting-started/setup-building/#...
2. Download whichever Python source version you want, https://www.python.org/downloads/source/. Extract it with tar
3. run ./configure --enable-optimizations --with-lto
4. run make -s -j [num cores]
5. sudo make altinstall
This will install that specific version without overwriting default system python.
You can then bash alias pip to python3.xx -m pip to make sure it runs the correct one.
All the libraries and any pip install executable will be installed locally to ~/.local folder under the specific python version.
Alternatively, if you work with other tools like node and want to manage different versions, you can use asdf, as it gives you per folder version selection.
virtual environments are really only useful for production code, where you want to test with specific versions and lock those down.
(I mean, except on Windows, your venvs default to symlinking the interpreter and other shared bits, so you aren't really isolating the interpreter at all, just the dependencies.)
(also one of the reasons why, if you're invoking venv manually, you absolutely need to invoke it from the correct python as a module (`python3.13 -m venv`) to make sure you're actually picking the "correct python" for the venv)
Looking at just the first link, it looks way more complicated than venv. And I'm a C++ developer; imagine someone who's less experienced, or who just isn't familiar with C toolchains.
Here it is for clarity
sudo apt-get install build-essential gdb lcov pkg-config \
    libbz2-dev libffi-dev libgdbm-dev libgdbm-compat-dev liblzma-dev \
    libncurses5-dev libreadline6-dev libsqlite3-dev libssl-dev \
    lzma lzma-dev tk-dev uuid-dev zlib1g-dev libmpdec-dev libzstd-dev
It's the kinda thing an experienced engineer wouldn't have that much trouble with, but you should be able to recognize how much experiential knowledge is required to compile a complex C code base and what kinda dumb stuff can go wrong.
You probably don't need to do much of the stuff on that page to build, but "What is dnf?", "Is the pre-commit hook important?", "Do I need cpython?", "What's an ABI dump?" are questions many people will be wrestling with while reading.
venvs also aren't complicated.
I have built Python from source before, many times. I do it to test Python version compatibility for my code, investigate performance characteristics etc.
Re-building the same version of Python, simply in order to support a separate project using the same version with different dependencies, is a huge waste of time and disk space (hundreds of MB per installation, plus the mostly-shared dev dependencies). Just make the virtual environment. They are not hard to understand. People who want a tool to do that understanding for them are welcome to waste a smaller amount of disk space (~35MB) for uv. A full installation of pip weighs 10-15MB and may take a few seconds; you normally only need one copy of it, but it does take some work to avoid making extra copies.
For example here's a ~2k line Python project which is a command line finance income and expense tracker: https://github.com/nickjj/plutus/blob/main/src/plutus
It uses about a dozen stdlib modules.
About 25% of the code is using argparse to parse commands and flags. With that said, I tend to prefer code that yields more lines for the sake of clarity. For example this could technically be 1 or 2 lines but I like each parameter being on its own line.
parser_edit.add_argument(
    "-s",
    "--sort",
    default=False,
    action="store_true",
    help="Sort your profile and show a diff if anything changed",
)
It's so much easier to write a simple usage() function that has all help text in one big block, and Getopt with a case block. Anyone can just look at it and edit it without ever looking up documentation.
One nice thing about argparse is it comes with a helper function for mutually exclusive groups that can be optional or required. For example you can have --hello OR --world but not both together.
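A sketch of that helper (the option names here are just the example's): `add_mutually_exclusive_group` accepts either flag alone but errors out when both are given.

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()  # pass required=True to force one
group.add_argument("--hello", action="store_true")
group.add_argument("--world", action="store_true")

args = parser.parse_args(["--hello"])
print(args.hello, args.world)  # True False
# parser.parse_args(["--hello", "--world"])  # error: not allowed with each other
```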
Python lets you just nest data structures without having to twist your brain. You want a tuple in a list in a dictionary value: you just write it down and can access it with a unified notation. Bam. No reference madness and thinking about contexts. It's a big part of what I typically need to get things done quickly and understand how I did it 5 years later. That has to count for something. Python is boring in that sense, Perl is fun. But that's exactly my problem with it. It's too clever for its own good and writing it does things to your brain (well, at least mine).
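To illustrate, the same shape of nesting as the Perl snippet elsewhere in this thread, callable included, is just literals and uniform subscripting in Python:

```python
data = {
    "bar": [None, None, None, {
        "baz": "here we go",
        "foo": lambda s: "hello world!" if s == "thingy" else "goodbye world!",
    }],
}
print(data["bar"][3]["baz"])             # here we go
print(data["bar"][3]["foo"]("thingy"))   # hello world!
print(data["bar"][3]["foo"]("other"))    # goodbye world!
```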
That was about the time I stopped using Perl for any new projects. I never wanted to go back.
PowerShell is also horribly offended by trying to nest arrays. As is Bash.
perl <<'EOF'
my $object = bless sub { my ($s) = @_; if ($s eq "thingy") { return("hello world!") } else { return("goodbye world!") } };
my %hash1 = ( baz => "here we go", foo => $object );
my @array1 = ( undef, undef, undef, \%hash1 );
my %hash2 = ( bar => \@array1 );
my $reference = \%hash2;
print( $reference->{bar}->[3]->{baz}, "\n" );
print( $reference->{bar}->[3]->{foo}->('thingy') , "\n" );
print( $reference->{bar}->[3]->{foo}->('something') , "\n" );
EOF
here we go
hello world!
goodbye world!
Perl being Perl, TMTOWTDI, but you can enforce your own style using linters and formatters.

I think the classic Python vs. Perl question ultimately comes down to using what you feel most comfortable with. People are different, and that's totally fine.
Perl falls apart when you need structure and even small projects. For short scripts meant to do what you would use Bash for, it's a godsend. Safer, more portable, less friction.
There seem to be very few smart people who agree with this in practice, however.
I got forced to learn it for a project where I was proposing Ruby and the customer insisted on Python. This was years ago when Ruby was much slower. I was annoyed but I got used to it and here I am enjoying it many years later.
I take issue with the description and use of make though! :-D What is the point of it if you're not going to use dependencies? One might as well just write a script with a case statement..... I'm adding smileys because I don't mean to be all that serious but I really do think that the failure of the youth of today to get to grips with Make is a sad reflection on our culture....... and get off my lawn, ok? :-)
I ended up feeling that it removes a lot of (internal) debate about what's the best style for braces that you get in C-like languages.
Genius.
I started with this and evolved into simple flat Makefiles, because they're basically the same but Make feels more standard (there's a Makefile here vs. there's a random bash script here).
Coming from a mathy background I found it incredibly satisfying, although I’ve come around to other languages since.
Inheritence should have stayed esoteric. Composition is closer to reality.
Relations and functions and values are way closer to applications than OOP seems to be.
It's like when people pick a "unique" name for their baby along with almost everyone else. What you thought was a unique name is the #2 most popular name.
Then, in 2005, Guido van Rossum was hired by Google to work on Google Cloud. That opened the door for wider adoption in academia, since Python had strong math libraries and integrated well with tools researchers were already using, like Hadoop, right around the time big data and ML were starting to take off.
Also, between 2005 and 2006, two important things happened: Ruby on Rails came out and inspired Django, which was starting to gain popularity, and web developers were getting tired of Perl's messy syntax. That's how Python quickly became a solid choice not just for server-side scripts, but for building proper web apps. In the meantime, another language that could be embedded directly into HTML was storming the web: PHP. Its syntax was similar to JavaScript, it was easy to pick up, lowered the barrier to entry for software development, worked straight out of the box, and didn't require thousands of print statements to get things done.
The 3 Ps made history. According to programmers from 20 years ago, they were like religions. Each had its own philosophy and a loyal group of followers crusading online, getting into heated debates, all trying to win over more adopters. The new generation of devs is more pragmatic. These days it's less about language wars and more about picking the right tool for the job.
It's very weird reading something you lived through described in these terms, as though it were being described by an anthropologist.
Can't help but wonder what the future will have to say about today.
"In 2025, programmers used 'frameworks' to hang 'apps' from some kind of 'web'. They believed these frameworks gave them magic powers. Many even fought in 'flame wars' on this topic, which experts believe involved the use of fire to destroy webs woven by competing programmers."
It brought across a ton of users from R and Matlab.
Pandas, Matplotlib and ScikitLearn then consolidated Python's place as the platform of choice for both academic and commercial ML.
PHP's popularity isn't really from 2005-2006. It was popular at the end of the 90s, and it looks like JS as much as it looks like a potato.
I come from a core science background. I studied Physics. And it was like this in my institute: FORTRAN -> A lot of C, small amount of FORTRAN -> C -> Python. I was taught C, but from the exact next year, it was switched to the Python ecosystem.
It was done much later when Python became the standard in research universities, the language of recent research papers, etc.
A generation of scientists learned C/FORTRAN/MATLAB in college/grad school because that was what was taught, but they switched to Python early/mid career. Teaching Python in undergrad followed.
I also taught a mid-career Economics professor Python. She used SPSS before for her papers. Humanities professors doing data crunching are now switching to Python, too. There is a clear trend.
You run it by saying `python hello.py`.
Compare that to the amount of crap you need(ed) with 2005 Java just to have something running.
The shittiness of ActivePython and generally getting Python to run on Windows were a bit of a hurdle, but it was still easier than the competition.
Perl is a fine language, but it's like using a wood chipper with no safeties. It takes extreme care and know-how to use it without splattering ASCII everywhere and making an unmaintainable write-only mess.
For every beautiful and maintainable perl program (mostly irssi scripts) I've seen 99 abominations that are quicker to rewrite completely than decode wtf they are doing.
No, entry-level courses were in a mix of Scheme, C, and other languages until Java’s industrial ubiquity ended up in it becoming a popular choice, but not completely displacing the others. Then as Python (along with the broader class of dynamic OO scripting languages) became quite popular, Python got added to the mix, but, unlike Java, it fairly quickly displaced not only Java but a lot of what had been around longer in introductory courses.
Python’s industrial success drove its initial use in introductory courses, but doesn't fully explain it, as it doing what Java never did indicates.
I think the people teaching introductory courses find it less of a compromise to industrial popularity than they did with Java.
Python’s success is multifaceted. Some of it is right-place right-time, a lot of it is the ecosystem it built because of that, but a lot of it is, I think, that it turns out to be a language that has been designed (both initially and in how it has been managed over time) to be a very good language for real people solving real problems, despite not adhering to any of what various typing, performance, and paradigm purists like to posit as the essential features of a good language.
2. Python on web servers was a thing long before 2012. You had Zope in 1998 or so, and it was pretty popular for a while, and hugely influential to subsequent web frameworks. Django came out in about 2005. TurboGears, Pylons in about 2005 or so. Flask in 2010... and these are just the more popular frameworks.
3. I think the author meant that PHP was also vaguely C-like in syntax, like JS. Keyword: vaguely. You had a common way of declaring stuff with curly braces and semi-colons.
Python had web servers from 2000, including Jim Fulton's Zope (really a full framework for a content management system) and in 2002 Remi Delon's CherryPy.
Both were useful for their day, well supported by web hosting companies, and certainly very lightweight compared to commercial Java systems that typically needed beefy Sun Solaris servers.
But yeah Python was on an upswing for webdev and sysadmin (early DevOps?) tooling, but took quite a hit with Ruby eg Rails, Puppet, Vagrant and Chef etc.
But Python hung on and had a comeback due to data science tooling, and Ruby losing its hype to Node for webdev and Go for the devops stuff.
I think this could be generalized to ergonomics. Java 1.6 is an absolute nightmare for a newb compared to Python. No top-level statements, explicit typing, boxing, verbose declaration syntax, long import statements, curly braces everywhere... and, most importantly, no out-of-the-box REPL. Java has since made strides and borrowed from Kotlin and Lombok but my understanding is that it was too little too late.
Depending upon preference and application one might consider ~half of these things anti-features at the expense of stability but it speaks volumes to why people join and stay in a software ecosystem. If I had to work on a Python project again I'd use mise with uv and viciously lint the codebase down to a reasonable fully-typed subset of the language devoid of custom decorator magic and dunder abuse.
Too little too late to be the #1 language of choice for serious server-side software that it is today?
The weird thing about Java is that people naturally compare its popularity today to its dominance in the early '00s, which was an aberration. The ecosystem has long since returned to its fragmented self, and while Java is not nearly as dominant as it was during that very short period, no other language is, either.
I was replying to
> Python's success is entirely due to entry-level programming courses. They all switched to Python,
not to mention that there are an awful lot of qualifiers in your statement. There are certainly plenty of Java jobs to be had but all the usual suspects like PYPL, TIOBE, SO (disregarding the old adage about damn lies and statistics) put Python squarely above Java in terms of popularity.
This is all to say that if I got conked on the head and lost all of my programming knowledge and had to make a living I'd restart with Python. This isn't a value judgment - the JVM and Python ecosystem are on roughly equal footing in my mind. It's just how things are.
[1]: https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...
Yeah, after 2008. And by 2014, it had overtaken Java in many CS programs. But I was referring to the events that led to that.
Being default installed was almost certainly the larger factor. As evidenced by how much pain it caused people when they started using dependencies that were not stable. They had to add specific support for it to not let you pip install things to the system install.
Excuse me but... what ? Django is 20 years old.
Nonsense.
> I don't think I heard about web servers in Python before 2012.
More nonsense.
> I suppose a 2005 computer wouldn't be able to serve a Python backend smoothly.
Extreme nonsense.
https://medium.com/signal-v-noise/ruby-has-been-fast-enough-...
And this was when Python was only just edging out Ruby performance-wise.
I feel like the religious-wars aspect of this is completely overblown. The conversation around languages really hasn't changed all that much in the last 20 years. Look at the way you see things happening on HN, heck, even in the comment thread right here. It's the exact same kinds of discussions as happened 20 years ago, with exactly the same kind of pragmatism.
I think the gulf comes about from the fact that the primary sources are conversations occurring between people that are largely anonymous behind an online handle, interacting with people that aren't face to face. There's always been an element of exaggeration in the interactions. What might be a "Umm, no" with a maybe a shake of the head, becomes "Your mother was a hamster and your father smells of elderberries". Almost every party involved comes to recognise the particular cultural quirks (which varied from forum to forum) and how to translate what was said, to what was actually meant.
Yea, I was a sysadmin around 2000 (before that too) and I knew it as such.
> between 2005 and 2006, two important things happened:
Somewhat - I used it in 2001 for Plone which is based on Zope, which was somewhat popular around that time. Writing all the web stuff with Python made sense, since Plone provided a CMS and could include a wiki. Adding on some sql calls to it in python just made sense. The competition was between PHP and Python, though there were some other less popular choices. Ruby on Rails definitely was getting a lot more popular around those times. PHP didn't start getting popular around 2005, if anything, people started using Python more, and started criticizing the crappy code that was circulating in the PHP community.
In any case, it was a fun time, but what's the point of looking back like that?
I remember saying to a coworker, "Google is single-handedly keeping Python alive."
Then bam. It was everywhere. Mid 2010's, I took a cybersec job and they told me to learn Python in the two weeks between accepting and starting. "Python is all over cybersec," I was told. It was then I realized Python took over academia, which positioned it perfectly for ML. Its features made it easy to start, but it also benefited from right place, right time.
You should talk to the Java advocates in my company :) The language wars are still around, it's just Java vs the rest now.
I can see a similar path by which Swift could eventually lose market share: too late to open source and too late to embrace cross-platform.
Apple also added Python in, I think, Mac OS X Jaguar, so it blipped on the radar.
When you have big players using it, the community automatically grows.
Django was the continuity of that and definitely contributed to the growing popularity of Python, but I think the hype started way before (and in fact we had to suffer Zope/Plone/Twisted for it :)).
Another decisive date was circa 2010 when the Belgian book "learn programming with python" came out. Granted, Django was already 5 years old at the time, but it brought many beginners who knew nothing about programming at the time.
For example, to install yt-dlp, I followed these steps:
sudo apt install pipx
pipx install yt-dlp
Actually, only the second one, because I already had pipx (https://pipx.pypa.io/ — a wrapper for pip that does basic virtual environment management) installed.

Can you name some specific things in Python you have tried to use, and give more concrete descriptions of how you tried to set them up?
This interview from Brett Cannon (old core dev who worked on packaging, imports, the vscode python extension...) is eye opening:
https://www.bitecode.dev/p/brett-cannon-on-python-humans-and
The guy cares SO MUCH and they have so many things not to break you can feel the sense of passion and the weight of responsibility in everything he says.
What exactly is the problem with __init__ or __new__? @dataclass is very nice syntactic sugar, but are we arguing here that having access to initializer/allocator/constructor dunder methods is "legacy ugliness"? This is the core of pythonic built-in aware python. Bizarre.
Kotlin: constructor is either part of class definition or keyword constructor.
Ruby: initialize
JS: constructor
Python: __new__, __init__
Literally this meme: https://knowyourmeme.com/memes/three-headed-dragon
Specifically, in the case of constructors, via <class>(...).
Is there an alternative API? No. This is public API regardless of anyone's intentions. Though "it's weird" is really not a very strong argument against it.
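For anyone who hasn't internalized how `<class>(...)` dispatches to the two constructor dunders, here's a minimal sketch (the class name and attributes are my own, just for illustration):

```python
class Point:
    # __new__ is the allocator: it creates and returns the instance.
    def __new__(cls, *args, **kwargs):
        instance = super().__new__(cls)
        instance.created_via_new = True
        return instance

    # __init__ is the initializer: it configures the already-created instance.
    def __init__(self, x, y):
        self.x = x
        self.y = y

# Point(1, 2) invokes __new__ and then __init__ behind the scenes.
p = Point(1, 2)
```

Most classes only ever need `__init__`; overriding `__new__` is for the rarer cases like immutable types or instance caching.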
def foo(... # public
def _foo(... # internal
def __foo(... # munged
def __foo__(... # magic
Internal is more convention, as the language doesn't really do anything with it, but it does with munged, and magic methods are specifically for things implemented in the language.

Internal and munged don't exactly map to private and protected, but are kinda similar-ish.
In any case I actually like how one can use underscores to point on how exposed some method is supposed to be. Makes it simpler to actually know what to skip and what not.
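A quick sketch of how those four conventions actually behave at runtime (class and method names are hypothetical):

```python
class Widget:
    def visible(self):        # public API
        return "public"

    def _internal(self):      # convention only: "please don't touch"
        return "internal"

    def __munged(self):       # name-mangled by the compiler to _Widget__munged
        return "munged"

    def __repr__(self):       # magic method: dunders are NOT mangled
        return "Widget()"

w = Widget()
# The mangled name is what actually exists on the instance;
# dunder names like __repr__ are left alone.
assert w._Widget__munged() == "munged"
assert repr(w) == "Widget()"
```

Note that mangling happens at compile time inside class bodies, which is why `getattr(w, "__munged")` fails while `w._Widget__munged` works.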
Admittedly it's obnoxious when you've got habits for one and you're on a team that uses another--totally flies in the face of the zen re: "there should be only one obvious way to do things".
...but that was always a rather ambitious goal anyway. I'm ok navigating the forest of alternative API's if it means not being locked into something that I can only change by choosing an entirely different language. I'm happy that it's very easy to tell when somebody is mucking about with python internals vs when they're mucking about with some library or other.
I think it's fairly short sighted to criticize these. FWIW, I also did that the first time I wrote Python. Other languages that do similar things provide a useful transparency.
I've had snippets in my editor for approximately 15 years at this point so I don't have to manually type any of the dunder methods. Would recommend!
Which, between them, covers approximately everyone.
Same-other-language-user:
((()))()()()({}{}{{(((())){}}}}{}{}{};;;;();)(;}}}
*not supposed to be correct syntax, it's just a joke
Ugliness is not the point of hi-vis vests lol, the point is to not get shot by other hunters.
By contrast, I find a well camouflaged deer quite beautiful--once I notice it at all. The beauty comes from the way that it is so clearly a thing of its surroundings. Nothing at all like a bright orange hat.
Sure... yes the bright orange is ugly, but it's not the ugliness that prevents you from getting shot, it's the bright unnatural color. Other hunters aren't thinking "oh that's really ugly, it must not be something I can shoot" they're thinking "bright orange means person, I should not fire my rifle in that direction".
> If it wasn't, it wouldn't grab my attention so well.
Are you saying that if you thought the bright orange was pretty it wouldn't occur to you not to fire your gun in its direction?
If you call something "stupid" it doesn't really convey anything meaningful, especially not in the way you're using there, it comes across as "I don't actually have a reason I don't like it, I just don't".
The programming languages world is broad/varied enough that any statement like "no other language does this!" is almost certainly wrong (outside of esoteric languages, which python and php most certainly are not)
__add, __sub, __mul, __div, __mod, __pow, __unm, __idiv
__band, __bor, __bxor, __bnot, __shl, __shr
__concat, __len
__eq, __lt, __le
__index, __newindex, __call
__gc, __close, __mode, __name
https://gcc.gnu.org/onlinedocs/cpp/Standard-Predefined-Macro...
Now you don't need to write your double-underscore methods ever again, if you don't want to.
IMO this is less horrendous than e.g. go's insistence that exported functions are indicated by a capital letter - that really affects code using the module not just the definition.
Ruby has the same thing but it’s called ‘new’.
Implementing the type of customization (idiomatically) that __new__ provides in Kotlin and JS isn’t any cleaner.
This hasn't been true since Python 3.3. You no longer need an __init__.py for Python to recognize a directory as a package, but it can still be useful in many cases.
> The __init__.py files are required to make Python treat directories containing the file as packages (unless using a namespace package, a relatively advanced feature).
I wrote it in about an hour and a half. It seems to work in Python 3.6 and 3.12. Now you never need to write another double-underscore magic method again.
But who cares? It's syntax, it has its purpose.
My read was that the "ugliness" was in the method naming, and specifically the double underscores, not the availability of the methods themselves.
In C++, if you want to define the binary + operator on a class, you give the class a method with the special name `operator+`. In Python, to do the same thing, you give the class a method with the pseudo-special name `__add__`. You don't think the Python way is worse?
Have you considered how much familiarity might shape your reaction to the two? Both are specific patterns with arbitrary restrictions which new learners have to internalize, and that’s a fairly advanced task most people won’t hit until they are familiar with the language syntax.
Here’s a counter argument: the Python methods are normal method declarations whereas C++ developers have to learn that “operator=“ means something different than what “operator =“ (or any other assignment statement) would mean, and that this is the only context where you can use those reserved symbols in method names.
To be clear, I don’t think either of these is a big deal - the concepts and usage are harder to learn than the declaration syntax – but I think it’s incredibly easy to conflate familiarity with ease of acquisition for things like this.
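For concreteness, here's what the Python side of that comparison looks like: an ordinary method declaration whose reserved name makes `a + b` dispatch to it (class and field names are my own):

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    # A normal method with a special name; `a + b` calls a.__add__(b).
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)

v = Vec(1, 2) + Vec(3, 4)
```

The C++ equivalent would be `Vec operator+(const Vec&) const;` — same idea, different spelling convention.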
It doesn't.
> Both are specific patterns with arbitrary restrictions which new learners have to internalize, and that’s a fairly advanced task most people won’t hit until they are familiar with the language syntax.
No, the Python methods are just ordinary methods with valid names. What 'arbitrary restrictions' are you referring to?
C++ it’s just one pattern to learn, x -> operatorx
This doesn't apply to the dunder methods, though. They're magically exempt from this magical mangling, so you could call them directly if you wanted. ¯\_(ツ)_/¯
> You don't think the Python way is worse?
They seem about equivalent? I don't see any real reason to pick one or the other, beyond personal preferences.
As composition over inheritance becomes more idiomatic, there are probably less places where this matters, but as long as inheritance is used at all it can be useful.
Take a deep breath.
Maybe as I grow to think of the "big picture" architecture-wise with my code, I will start incorporating dunders, but until then...
https://www.attrs.org/en/stable/why.html
There's some obvious potential for bias there, but I thought most of the arguments were well reasoned.
Edit: found another that I thought was helpful: https://threeofwands.com/why-i-use-attrs-instead-of-pydantic...
My rule of thumb is to use Pydantic if I need serialisation, otherwise I default to dataclasses.
On the other hand, if I see a dataclass I can tell what its purpose is by whether it's frozen or not, etc.
Always strive for self-documenting code.
I'd use it for passing flat structures as function args rather than a massive list of individual args.
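A minimal sketch of that pattern: a frozen dataclass bundles what would otherwise be a long parameter list into one immutable value (the names here are hypothetical):

```python
from dataclasses import dataclass

# frozen=True makes instances immutable and hashable,
# which signals "this is a plain value bundle".
@dataclass(frozen=True)
class RenderOptions:
    width: int
    height: int
    dpi: int = 96

def render(opts: RenderOptions) -> str:
    # One argument instead of width=..., height=..., dpi=...
    return f"{opts.width}x{opts.height}@{opts.dpi}"

result = render(RenderOptions(width=800, height=600))
```

It also makes call sites self-documenting: adding a field later doesn't break every function signature in between.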
My thoughts about python here: https://calvinlc.com/p/2025/06/10/thank-you-and-goodbye-pyth...
Next time I get into Python I’ll try uv, ruff, ty.
The rest is small stuff that adds up like Py whitespace scoping, or Py imports somehow not taking relative paths, or JS object syntax is nicer: https://news.ycombinator.com/item?id=44544029
uv + ruff (for formatting AND linting) is a killer combo, though.
And the more you use uv, the most you discover incredible stuff you can do with it that kills so many python gotchas like using it in the shebang with inline deps or the wonders of "--with":
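For anyone who hasn't seen the shebang trick: uv understands PEP 723 inline script metadata, so a single file can declare its own dependencies. A minimal sketch — this assumes uv is installed, and `requests` here is just an example dependency:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
# uv reads the comment block above, builds a throwaway environment
# with requests installed, and runs the script inside it.
import requests

print(requests.get("https://example.com").status_code)
```

The `--with` flag gives you the same thing interactively, e.g. `uv run --with requests python` drops you into a REPL where `requests` is importable without touching any project environment.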
It's included in the default install of most desktop/server Linux distros (with plenty of exceptions), but I don't believe any of the BSDs ship it in their base system.
IIRC macOS used to have python 2 in its default install, but I vaguely recall that being deprecated and removed at some point. My only Mac is on the other side of the country at the moment, so I can't check myself.
https://developer.apple.com/documentation/macos-release-note...
I wonder if that kerfuffle is why they ended up not removing Ruby and Perl yet, despite the same promise. macOS’ Ruby is around 2.6. Perl I somehow doubt they’ll get to, as it’s such an important part of Unix admin I bet they themselves use it somewhere.
There is still a /usr/bin/python3 which is a shim. When you call it, if you don’t have the Xcode Developer Tools you’ll be asked to install them (it’s a non-scary GUI dialog which takes two clicks) and then you’re set. That is also a few versions behind the cutting edge, but it does get updated sometimes.
I will never understand this.
But then, I've been using Python 3 since 3.2 and my first reaction to that was a sigh of relief, and by the time I updated to 3.4 I was already wondering why everyone else was lagging behind on the switch.
Perhaps because your interpretation of my comment is wrong.
> was removed in [an OS released on October 25, 2021]
No, no it was not! That would have been fine. Heck, it would even have been fine if they had removed it the year before. Two years. Three. I don’t care. The problem is that it was removed on a point release (not October) without warning, after setting the precedent of removing another language on a major release.
> But then, I've been using Python 3 since 3.2 and my first reaction to that was a sigh of relief
And I don’t even care about Python. But I still had to deal with the fallout from that from things I didn’t write.
My perspective is that the problem is that people were trying to use Python 2 after January 1, 2020. I left it behind years before that.
They also have a habit of sometimes adding or changing APIs mid-cycle, so you may see requirements for the latest Xcode being a mid-cycle version of the OS. Or how was it with the Mac App Store, where not only the store itself, but the relevant APIs for programs distributed in it, appeared in 10.6.4?
Edit: Unlike older versions of macOS that came with Python 2.7 pre-installed, newer macOS versions (Catalina and later) no longer include Python pre-installed.
https://news.ycombinator.com/item?id=44580198
The removal was in Monterey. Catalina and its successor Big Sur very much still had it. Catalina was the one that removed 32-bit support.
You could just take a random person's MacBook, open the terminal, and launch python3 -m http.server 3000 --directory ~
then on the local network you could download all their files.
It seems much more likely to me they were just tired of having to manage the languages (and being constantly criticised they were behind) and simply chose to remove them.
Though for a while there having built in interpreters was great for kids and learners.
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/
That said, call me old-fashioned, but I really take issue with "curl $URL | bash" as an installation method. If you're going to use an install script, inspect it first.
It solves a lot of the package management headaches for me.
Two additional suggestions:
* mise to manage system dependencies, including uv version and python itself
* just instead of make; makefile syntax is just too annoying.
Mise actually has a command runner as well which I haven't tried yet, and might work better for running commands in the context of the current environment. It's pretty nice when your GitHub actions workflow is just:
* Install mise
* mise install everything else
uv venv activate
The command actually creates a new venv in a dir called "activate". The correct way to activate the venv is:
source .venv/bin/activate
Otherwise, the existence of a file or folder with the same name as your task ("test", for example) will stop that task from being run, which might be very annoying if you're using the Makefile as part of a script or CI or something where you won't notice the "Nothing to be done for..." message.
IIRC the info pages just say that it is for targets that lack a file. This is way easier to remember.
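To make the `.PHONY` point concrete, a minimal sketch of the pattern (target names are just examples):

```make
# Without .PHONY, a file or directory named "test" or "clean" in the
# repo would make `make test` report "Nothing to be done" and skip it.
.PHONY: test clean

test:
	pytest

clean:
	rm -rf build/
```

Declaring every task-style target phony up front is the usual defense when make is used as a task runner rather than a build tool.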
I always found vscode lacking for Python and C compared to pycharm and clion. The latter just work without fiddling with config files.
In my experience (not saying this is universal), the folks that like JetBrains IDEs came from java/intellij backgrounds, where I hear it really shines.
This all might be a skill issue, as almost all my professional projects have been VSCode based, but since I've only worked at smaller places I definitely can't rule out this was because it was easier to set things up than to fight for Fin to get us all licences.
In your opinion, what makes PyCharm (or CLion if you want to add that in) 'just work'? Do you think it is because you've used it for so long and just know the ins-and-outs? Or is there something you see that they have and VSCode doesn't?
I've always been curious about this as someone who hasn't had a lot of professional exposure to the JetBrains world.
import re
import sys
if re.match(r"[A-Za-z}", sys.stdin.read()):
print("ok")
PyCharm will spot that error immediately, no insaneo configuration required.

PyCharm Professional also gets into the SQL side of things:
with connect(":memory:") as conn:
c = conn.cursor()
c.execute("SELECT * FROM oops WHERE NAME IN ('alpha, 'beta')")
instantly spotted, no configuration required.

I was going to be cute and use json.loads as an example, but it seems somewhere along the way they botched the fact that the first argument to json.loads is a fucking JSON string. But it does allow showcasing that you can have PyCharm syntax check the "inner" language of any string literal you'd like via their language injection annotations:
import json
# language=json
bogus = """{"this is subtly wrong"; true}"""
json.loads(bogus)
# and the same with tomllib:
import tomllib
# language=toml
bogus = """
[alpha]
beta = []onoz
"""
tomllib.loads(bogus)
# or, if you have some tricky internal stuff
def do_the_thing(
# language=sql
s: str
):
with connect(something) as conn:
c = conn.cursor()
c.execute(s)
do_the_thing("SELECT * FROM oops WHERE NAME IN ('alpha, 'beta')")
# ^^^ also gets syntax checked because it knows the first arg is SQL
I switched to Python primarily (from Perl) in the early 2010s (I think my first "seriously" used version was 2.6). This is mostly for system management, monitoring, and data transformation/visualisation. Nothing fancy like AI back then in a work setting.
I found the biggest impact was not so much on writing code but on it remaining readable for a very long time, even if it was created hastily "just get this working now" style. Especially in a team.
Python is still one of my favourites and the first tool I reach if bash is not enough for what I'm trying to do.
Make is great at compiling code in languages that don't have bespoke build systems. If you want to compile a bunch of C, awesome. For building a Rust or JavaScript project, no way. Those have better tooling of their own.
So for the last 15 years or so, I've used make as a task runner (like "make test" shelling out to "cargo test", or "make build" calling "cargo build", etc.). As a task runner... it kinda sucks. Of course it's perfectly capable of running anything a shell script can run, but it was designed for compiling large software projects and has a lot of implicit structure around doing that.
Just doesn't try to be a build system. It's optimized for running tasks. Here, that means it provides a whole lot of convenient functions for path manipulation and other common scripty things. It also adds dozens of quality-of-life features that devs might not even realize they wanted.
For example, consider this trivial justfile:
# Delete old docs
clean:
rm -rf public
# This takes arguments
hello name:
@echo Hello, {{name}}
If you're in a directory with it and run `just --list`, it'll show you the list of targets in that file:

$ just --list
Available recipes:
clean # Delete old docs
hello name # This takes arguments
That second recipe takes a required command line argument:

$ just hello
error: Recipe `hello` got 0 arguments but takes 1
usage:
just hello name
$ just hello underdeserver
Hello, underdeserver
You can do these things in make! I've seen it! Just doesn't add things that were impossible before. But I guarantee you it's a lot harder to implement them in make than it is in just, where they're happy native features.

There are a zillion little niceties like this. Just doesn't try to do everything make does. It concentrates on the smaller subset of things you'd put in .PHONY targets, and makes them really, really ergonomic to use.
You wouldn't use just to replace make in a large, complicated build. I would unhesitatingly recommend it for wrapping common targets in repos of newer languages, so that `just clean build test` does the same things whether you're in Python or TS or Rust or whatever, and you don't want to hack around all of make's quirks just to build a few simple entry points.
My thing is that it's ubiquitous and if you stay simple, the syntax is perfectly readable; if you're doing anything complicated, I'll argue you don't want your logic in a Makefile anyway, you want it in a shell script or a Python script.
I get that if you want to do really complicated things, Just can be more ergonomic. But I haven't seen any real argument that it's more ergonomic, or understandable, or worth the learning curve, when your Makefiles are simple or have actual scripts doing the complicated stuff.
`just --list` is indeed missing (though I hear make is adding a --print-targets flag), but I usually need to run the same commands - make run, make test, make typecheck, make lint, make format. Note how none of these take an argument - that's also something I've never found myself needing.
The argument stuff is nice when you realize you’re no longer constrained by not having easy access to it. I used just to build my blog, and added a target to create a template entry whenever I updated to a new release of certain software. I could write “just newversion 1.2.3” to publish an announcement using the same mechanisms that did everything else. Without that feature, I could have scripted something up to do the same. With it, I didn’t have to. I wouldn’t have tried that with make.
The benefit of just is that it's designed to be a command runner, whereas make is designed to be a build tool. justfile syntax is much simpler and more ergonomic. It also has nice features: Private recipes, submodules, recipes that you can specify to run only in a particular OS (we use the same justfile for both Windows and Linux), writing your recipes in a language other than your shell language, and many many other niceties.
A new user can start doing "advanced" stuff in just in a couple of hours. They'll take a lot longer if trying to do it via make.
If you want to read binary files, you can use C or Python, but not Go (unless you use segfault-prone 3rd-party libraries of very dubious quality, of course).
Is this referring at all to PyTorch? If not, any guesses what the author has in mind?
"Not only because the syntax is more human-friendly, but also because the Python interpreter is natively integrated in all Unix distros."
Is this referring to GNU/Linux?
UNIX (UNIX-like) includes more than Linux; some UNIX distributions do not include Python in the base system
Where it is left as choice to the user whether to install it
I know this because I use such distributions and, unless some software needs it, I do not install Python
In such a case, when I am done using that software, I uninstall it^1
For example, he mentions retrieving YouTube channel metadata
I do not use Python for this; I use a 19-line shell script (ash not bash), its startup time is faster
Unlike Python, it is included in the base system of the UNIX distributions (both Linux and BSD) that I use
But if I need to test something using yt-dlp, then I might temporarily install Python
1. I compile Python from source, and one annoying aspect of the project, in addition to the slow startup time, is their failure to include an uninstall target in their Makefile.
He's referring to Python in general
"Not only because the syntax is more human-friendly, but also because the Python interpreter is natively integrated in all Unix distros."
I think he means that it's available or readily available in many major Linux distributions like Ubuntu, Fedora, NixOS, etc. I don't think "native" is the right word.
I use bash too but Python is amazing. You're right, there are problems related to packaging and speed, but it is still very often the right tool for the job. It's powerful, easy to use, and open source.
I do not use bash. I use ash. Bash is too slow for me, like Python.
Ash doesn't do web requests unless you've implemented HTTP in ash. You're back to using 3rd party dependencies that aren't installed on all systems
The TCP networking is done with the original netcat reading the HTTP from a pipe.
The TLS is handled by a TLS forward proxy listening on the loopback.
The original netcat and other TCP clients I use, like tcpclient from djb's ucspi or tcploop from haproxy, are not part of the NetBSD base system but are easily added when I compile the OS. For Linux I use a custom distribution I make myself, without LFS. Busybox has ash and nc together in the same binary.
These TCP client programs are stationary targets, they will work reliably year after year, and small enough that I can store and compile them quickly even on computers with modest resources. Python is constantly evolving, a moving target, and much larger.
I wrote a utility in C89 that "implements HTTP" called yy025. It is produced with flex and compiled with GCC, both of which are installed on all systems I use. flex is part of the NetBSD toolchain. It's a requirement for compiling the OS. It's a requirement for building many GNU userland utilities. It's even a listed requirement when building the Linux kernel.
yy025 is what I normally use in shell scripts when I need to generate HTTP. It reads URLs on stdin and outputs customised HTTP to stdout. There is no "third party dependency" for HTTP. This is a "first party" program. I wrote it.
But this script to fetch YouTube metadata doesn't use yy025. It's just some printf statements.
One could thus argue that there is a TCP client "installed on every system".
For me, what is more important is an installed copy of GCC. I have yet to use a UNIX-like system that did not have the needed networking functions to create a basic TCP client.
Worked at a company where that approach led to a huge unwieldy structure that no one dared to touch for fear of breaking something other teams were working on. The problem was not so much the repo, but the dependency structure (like a single requirements.txt for the whole repo) and build scripts.
In theory it should've worked great: you only need to update a dependency once and can be sure all the code has the most recent security patches. In reality everyone was afraid to touch it, because it would break someone else's code. Monorepos work great only if you have serious NIH syndrome (Google).
I actually started appreciating damned micro-services there, as long as each service just reflects the organization's team structure.
That said, I think service-per-team is a good pattern
I get that it's not the shiny new thing, but I don't understand people hating on it. Is this just junior devs who never learned it, or is there some new language out that I missed? (And please don't tell me Javascript....)
I mostly just don’t like some of the design decisions it made. I don’t like that lambdas can’t span multiple lines, I don’t like how slow loops are, I don’t like some functions seem to mutate lists and others don’t, and I am sure that there are other things I missed.
But it really comes down to the fact that my career hasn’t used it much. I occasionally have had to jump into it because of scripting stuff, and I even did teach it for a programming 101 course at a university, but I haven’t had a lot of exposure otherwise.
For scripting stuff for myself, I usually end up using Clojure with GraalVM (yes I know about babashka), or nowadays even just a static linked compiled language like Rust.
I don’t really understand why people think that compiled languages can’t be used for scripting (or at least task automation), honestly. Yes you have to add a compilation step. This involves maybe one extra step during development, but realistically not even that. With Rust I just do cargo run while developing, I don’t see how that’s harder than typing Python.
for x in [1,2,3]: print(x)
x sticks around! So you'll always have these random variables floating around, and hope you don't use the wrong one.
And to add insult to injury, if you're using mypy for type checking you'll get a nice error if you try to reuse x with a different type:
for x in ['a', 'b', 'c']: print(x) << Incompatible types in assignment (expression has type "str", variable has type "int") [assignment]
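A minimal sketch of the leak itself, for anyone who hasn't run into it:

```python
for x in [1, 2, 3]:
    pass

# The loop variable survives the loop in the enclosing scope.
print(x)  # prints 3
```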
And the types I can never trust. I've used all the tooling and you still get type errors in runtime. It's also ridiculously slow.
I would also like optional chaining (e.g. foo?.bar?.baz) and a million other features that other high level programming languages have.
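Python has no `?.` operator, but it can be emulated. Here's a sketch; `chain_get` is a hypothetical helper name, not anything from the standard library:

```python
from types import SimpleNamespace

def chain_get(obj, *attrs, default=None):
    # Hypothetical helper emulating foo?.bar?.baz: bail out at the first None.
    for name in attrs:
        if obj is None:
            return default
        obj = getattr(obj, name, None)
    return obj if obj is not None else default

foo = SimpleNamespace(bar=SimpleNamespace(baz=42))
chain_get(foo, "bar", "baz")   # 42
chain_get(None, "bar", "baz")  # None
```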
"Insane feature" is a generous way of describing this behavior. I would say it is just a stupid bug that has managed to persist. Probably it is impossible to fix now because of https://xkcd.com/1172/
How typecheckers and linters should deal with this is a tricky question. There is how the language ought to work, and then there is how the language actually does in fact work, and unfortunately they are not the same.
Lucky for you, LLMs are pretty good at that these days.
>And the types I can never trust. I've used all the tooling and you still get type errors in runtime. It's also ridiculously slow.
IDE integrated mypy checking does this in the background as you type. As for errors, it all has to do with how much typing you actually use. You can set the IDE to throw warning based around any types or lack of type annotation.
Again, handholding.
msedit main.py && ./main.py
!!
!!
But indeed, pressing F5 solves that for both Rust and Python.

[1,2,3,4].filter(x => {
let z = x * 2;
let y = x * 3;
let a = x / 2;
return (z + x * a) % 27 == 2;
});
Obviously I know I could name this function and feed it in, but for one-off logic like this I feel a lambda is descriptive enough and I like that it can be done in-place.

You're free to disagree, but I think there's a reason that most languages do allow multi-line lambdas now, even Java.
FWIW, you'd also have the benefit of being able to unit test your logic.
I have done a lot of Haskell and F#, and I'm very familiar with the concept of "lifting", and yeah being able to individually test the components is nice, but even within Haskell it's not too uncommon to use a lambda if the logic doesn't really need to be reused or is only a couple lines.
If you have a huge function, or you think there's any chance of the logic being reused, of course don't use a lambda, use a named function. I'm just saying that sometimes stuff that has 2-4 lines is still not worthy of having a name.
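In Python specifically, a lambda must be a single expression, so the usual workaround for multi-line one-off logic is a throwaway local def. A sketch mirroring the JavaScript filter above:

```python
# Python lambdas can't span multiple statements, so for one-off
# multi-line logic the idiom is a small local def instead.
def keep(x):
    z = x * 2
    a = x / 2
    return (z + x * a) % 27 == 2

matches = list(filter(keep, [1, 2, 3, 4]))
```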
I have seen people do this in JavaScript quite often, but I always assumed there was some kind of underlying performance benefit that I didn't know about.
As I think about it I guess it makes sense if you're passing a function to a function and you just want it to be concise. I could imagine using something like that off the top of my head, but then pulling it apart and giving it a name the moment I had to troubleshoot it. Which is how I currently use nested comprehensions, just blurt them out in the moment but refactor at the first sign of trouble.
I think maybe I just have trouble seeing some of the braces and stuff, and it's easier for me to reason about if it's named. I guess that's why we have 32 flavors.
Thanks for answering me honestly I really do appreciate it, even if my tone came off as dismissive. Sometimes I don't realize how I sound until after I read it back.
> I have seen people do this in JavaScript quite often, but I always assumed there was some kind of underlying performance benefit that I didn't know about.
I don't think so, at least I haven't heard of it if there is.
I tend to have a rule of thumb of "if it's more than 6-7 lines, give it a name". That's not a strict rule, but it's something I try and force myself to do.
Like in Python, most lambdas can be done in one line, but that also kind of gets into a separate bit of gross logic, because you might try and cram as much into an expression as possible.
Like, in my example, it could be written like this:
[1,2,3,4].filter(x =>((x * 2) + x * (x/2)) % 27 == 2);
But now I have one giant-ass expression because I put it all into one line. Now where previously I had two extra names for the variables, I have the ad-hoc logic shoved in there because I wanted to squeeze it into a lambda.

At its core, I think it's fair to say Python is about forcing the user to format their code in a readable way. It's gotten away from it over the years for practicality reasons, and to increase adoption by people who disagree on which ways are more readable.
Sometimes I wish they would take nested comprehensions away from me, I am too lazy to avoid them in the heat of the moment, and I get a thrill out of making it work, even though I know they're disgusting.
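For what it's worth, here's a nested comprehension next to its loop form; the loop version is usually what the "refactor at the first sign of trouble" ends up as:

```python
matrix = [[1, 2], [3, 4]]

# Nested comprehension: terse, but easy to misread in the heat of the moment.
flat = [v for row in matrix for v in row]

# The equivalent explicit loops.
flat2 = []
for row in matrix:
    for v in row:
        flat2.append(v)

assert flat == flat2 == [1, 2, 3, 4]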
Is it "better" than a named function? No, of course, they work mostly the same. But we are not talking about better or not. We are talking about syntax just for the sake for syntax, because some people prefer to write code in a way you don't necessarily care about.
I always thought the appeal of functional programming was more about test-ability and concurrency, it never occurred to me that people actually preferred the syntax.
Different strokes and whatnot, not everyone likes functional programming and of course there are valid enough criticisms against it, but I've always appreciated how terse and simple it feels compared to imperative stuff.
Even with regards to testability, if your function is pure and not mucking with IO or something, then even using a multi-line lambda shouldn't affect that much. You would test the function calling it.
Keep in mind, Haskell doesn't really have "loops" like you'd get in Python; in Python you might not necessarily need the lambda because you might do your one-off logic inside a for loop or something. In Haskell you have map and filter and reduce and recursion, that's basically it.
weeell, you can still do stuff like this in Haskell:
import Data.Vector.Mutable
...
let n = length vec
forM_ [0 .. n-1] $ \i -> do
  next <- if i < n-1 then read vec (i+1) else pure 0
  modify vec (+ next) i  -- increase this value by the next value
it's just that in many cases the functional machinery is so accessible that you don't need to reach for for-loops.

At a previous job I did manage to put a chip in when I demonstrated replacing one of our Java services with a Python implementation. It was a fraction of the code, and achieved much better latency and throughput. Obviously not every Python program is going to do that. But my point isn't that Python is better, it's that these kinds of things are never so cut-and-dried. Many non-trivial Python programs are just thin shells around a large core of very well-optimized and battle-tested C/C++/Rust code. And Java, for its part, can also accumulate a lot of dynamic-language-style performance losses to pointer chasing, run-time type lookups, and GC churn if you're not careful about how you use generics. As always, the devil's in the details. It's also less able to pull the "actually I'm a C++" trick, because using a compacting garbage collector makes it difficult to interop with native code without paying a hefty marshaling tax.
Still I believe Java is a better application language. Python is a better scripting language (replacement for Bash). Small apps tend to be easier on Python, but large apps are way easier on Java, both for syntax (types) and ecosystem (libs).
Seen plenty of coding horrors in both ecosystems...
I wouldn't call it a new language, for me it's just a syntactic sugar over Java, but for any problem you would google "how to do X in Java", not "how to do X in Kotlin".
But there you can do way simpler syntax, like:
0..100 meters with -45..45 deg within 3 seconds
Because "0..100 meters ..." is equivalent to "(0..100).meters(...)".

(0..100) is a built-in IntRange type that you can extend:
data class MyDistanceRange(val meters: ClosedRange<Double>)
val IntRange.meters: MyDistanceRange
    get() = MyDistanceRange(first.toDouble()..last.toDouble())
and:

class IWantToBeABean:
    def __init__(self, arg1: int, arg2: str, arg3: str) -> None:
        self._field1: int = arg1
        self._field2: str = arg2
        self._field3: str = arg3

    def get_field1(self) -> int:
        return self._field1

    def set_field1(self, value: int) -> None:
        self._field1 = value

    def get_field2(self) -> str:
        return self._field2

    def set_field2(self, value: str) -> None:
        self._field2 = value

    def get_field3(self) -> str:
        return self._field3

    def set_field3(self, value: str) -> None:
        self._field3 = value
when it could have just been:

@dataclass
class IDontWantToBeABean:
    field1: int
    field2: str
    field3: str
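For reference, a minimal sketch of what @dataclass generates for free:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    label: str

# __init__, __repr__, and __eq__ all come for free.
p = Point(1, "a")
assert p == Point(1, "a")
assert repr(p) == "Point(x=1, label='a')"
```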
The worst case for Python is when you get people doing the oldschool Python thing of acting like dynamic and duck typing means it's OK to be a type anarchist. Scikit-learn's a good one to put on blast here, with the way that the type and structure of various functions' return values, or even the type and structure of data they can handle, can vary quite a bit depending on the function's arguments. And often in ways that are not clearly documented. Sometimes the rules even change without fanfare on minor version upgrades.

The reason why large Python codebases are particularly scary isn't necessarily the size itself. It's that for a codebase to even get that large in the first place, it's very likely to have been around so long that the probability of it having had at least one major contributor who likes to do cute tricks like this is close to 1. And I'd take overly verbose code like the Java example above over that kind of thing any day.
def get_field1(self) -> int:
    """
    Gets field1.
    @rtype: int
    @return: the field1
    """
    return self._field1
And use double underscores for private fields, because, you know, encapsulation.

Seriously though, in Java you don't need get*() either. Typically you either:
1. use `record` types (previewed in Java 14 in 2020, finalized in Java 16), or
2. use Lombok that auto-generates them [1], or
3. use Kotlin `data class`, or
4. just raw field access. I don't buy the encapsulation thing if my code is not a library for wide internet to use.
The worst thing about Java is the average quality of Java programmer.

The same could probably be said about Python. However I think that there are fewer Python programmers trying to write AbstractFactoryFactory than in Java. Java has a terrible culture of overly verbose, deep inheritance trees that make debugging and development worse, with worse performance.
Java programmers may not blog as much, and Java doesn't show up on Hacker News as much, but not being Extremely Online does not mean that it isn't extremely widely used by real people whose experiences are just as valid.
No, you could not. You could say it about maybe four others: PHP, C, C++, and C#.
No other languages have anywhere near the userbase size while being fairly quiet when it comes to online tech discussion.
I agree there are crusty old Java devs (as well as crusty old C, C++, PHP, etc. devs). In a decade or two, there will be crusty old TypeScript devs. It's just the nature of technology lifecycles and career paths. Some people get relatively tired of learning when they get older and just want to capitalize on what they already know for their remaining earning years.
Most of those rarely make to the top of HN, and other generalized forums. If anything, Java and Python are together at the popular kids table and we forget about the silent heros keeping the ship afloat.
The comment I was replying to seems to believe Java is a tiny backwater, which is anything but true.
Considering that as recently as 4 years ago I was working on a project where we still had a hard requirement to support running in Java 7, and this kind of thing was not considered unusual, I can't really disagree too strongly with that. Yes, that was still inside of Java 7's extended support period, so there was really nothing unusual or surprising about this, from a Java developer perspective. But that's kind of the point.
It's also not really a bad thing, considering what kinds of things run on Java. Mainframe developers have a similar thing going on, for a similar and similarly good reason.
I recall things like updating packages/code on the fly, recompiling fast paths on the fly. Maybe that's not necessary in a borg/kubernetes world where you can just restart things and have your load balancer infra take care, or just run slower code because compute isn't that expensive once you accelerate your CPU-heavy libraries, but cool anyways.
Edit: made this more succinct
The Log4shell incident is the perfect demonstrator of what kind of people are in Java world.
To what? Java is still a very efficient, productive language. Updating a legacy codebase to use newer Java features would probably be good, but migrating to another language is unlikely to significantly move the needle in terms of runtime performance or developer velocity.
When you work with a codebase that doesn't need compilation, the development velocity is quite fast. You don't need to compile, you can prototype features on the fly, you can even write real time updates while the web server is running.
And standard compilation in Java is done in a VERY inefficient manner with Groovy (via Gradle): you have Groovy that gets compiled to bytecode, which then gets JIT-compiled to native, which then actually runs the compilation of the source code to bytecode, and then you have first-startup latency as that gets JIT-compiled to native.
All you really need to write any modern app is Python + C. Anything that needs to go fast, just encapsulate in a small C program, and launch that from Python.
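A minimal sketch of the "launch it from Python" half of that pattern, using subprocess; here `echo` stands in for a real compiled binary built from C:

```python
import subprocess

# Sketch: hand the hot path to a compiled helper and read its stdout.
# "echo" is just a stand-in for a real binary compiled from C.
result = subprocess.run(
    ["echo", "42"],
    capture_output=True,
    text=True,
    check=True,
)
assert result.stdout.strip() == "42"
```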
In fact, a lot of the most interesting plt and compiler r&d going into real world applications is on the jvm (project loom, graal etc), and the features of modern Java (pattern matching, records, etc) make it a great choice for lots of projects that aren’t legacy enterprise apps.
I don’t think Java makes anyone unambitious, I think it’s that Java is taught in schools and unambitious people don’t feel the need to learn anything else, and they get jobs at unambitious corporations. It selection-biases towards unambitious people who don’t want to learn more than they have to.
Compare this to something like Clojure or Haskell or something, which isn’t routinely taught at schools and is not very employable. People who learn these languages generally seek out these things because they’re interested in it. This selection-biases towards intellectually ambitious people.
As a result, Java people can be insufferable for people like me.
The people who make Java have actually made the platform and language pretty ok in the last two decades, but I had to fight at a previous job to use NIO, which I think was introduced in Java 4, but none of my coworkers had really heard of it or used it because the regular Java blocking IO has been “good enough”.
Given the fact that Lombok is still pretty much widely used, with its under-the-hood functionality of essentially hacking the AST, or the fact that annotation processors write out Java code to files, or the fact that you could be using a standard library like Log4j and have a massive vulnerability on your system because someone decided that it would be a good idea if log statements could execute code and nobody said anything otherwise, or the fact that Kotlin and Groovy were literally made to address inefficiencies in Java, amongst other things....
Yeah not really sure how you came to that conclusion.
Kotlin and Groovy did come and address problems with Java, and you should use them if your employer allows it. I'm just saying that Java 21 is actually kind of fun to write.
Yes, some of the libraries have been unsafe, but that's one example of the 30 years of Java.
I just feel like Java has improved in the last twenty years. It's the engineers that haven't.
I think one comment I saw here on HN that Java is better if written in Pythonic way. I agree completely with that stance.
but yeah, within Java you've got "merchants of complexity" people who wanna do things in the most abstract way rather than the simple way.
btw Java can be as simple as Go.
And the slides are available here: https://github.com/Tombert/lambda_days_2025/blob/main/slides...
I'm afraid that my humor isn't really reflected in the slides, but imagine everything here is said kind of snarkily.
Java can be mostly as nice as Go, the BlockingQueues and Virtual Threads can get you pretty far, though they're not quite as nice as Go channels because there's no real way to select across multiple BlockingQueues like you can with Go channels.
Overall though, I think Java 21 is actually not too bad. Shockingly, I even sometimes have fun writing it.
Other than being ageist, it’s wrong; or misattributed to Java. I work with Python every day, and what’s missing is static typing and IDEs that make use of it to greatly reduce the amount of code and context I have to store in my head. Python (a dynamically typed language obviously) is exhausting to maintain. But easy to write. Java/C#/whatever statically typed language with great IDE is easy to write and maintain by comparison.
Of course there are IDE for Python and dynamically typed languages, but everyone I’ve tried has fallen short compared to the best Java/c# IDEs.
Static vs dynamic used to be a huge flame war on the internet, but over the past few years I’ve encountered people who’ve never heard of it. This is it.
- Environment / dependency management
- Type safety
- Performance
As the author points out, these have largely been addressed now by uv, type hints, pydantic, FastAPI, etc.
Not really.
Environment/dependency management is/was never an actual problem. People act like you update a version and stuff breaks. Even then, venv existed for this reason, and you could specify version in the setup.py or requirements.txt.
Type safety should in theory be covered by unit tests. If you assign a variable to another one in a dynamic setting accidentally, like a string to a dict, then your functionality breaks. In reality though, its really not that hard to use different variable names. In my experience, the only time things start getting confusing is if you start using Java concepts with things like Factories and a lot of OOP and inheritance in Python. None of that stuff is really necessary, you can get by with dicts for like 90% of the data transfer. And in very type safe languages, you spend a lot of time designing the actual type system instead of just writing code that does stuff.
Performance is still slow, but language performance doesn't really matter - you can launch natively compiled code easily from Python. Numpy was built around this concept and is also 10 years old. You will never beat C (with explicit processor instructions for things like vector math) or CUDA code, but most of your code doesn't require this level of performance.
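A sketch of calling straight into compiled C from Python via the stdlib's ctypes; it assumes a findable C library, which is true on typical Linux/macOS systems but not Windows:

```python
import ctypes
import ctypes.util

# Load libc (falling back to the current process's symbols on POSIX).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare strlen's C signature so ctypes converts arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"hello") == 5
```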
Backward compatibility, which I suppose is closely related to needing to use env, is also a pain. In my experience you can't go forward or backward in many cases. It's especially painful on projects that don't change very often. I'm sure that an active codebase can probably be kept updated from version to version, but if you've waited a bunch of versions, it seems painful.
But, I'm not sure I've given it a fair shake because I haven't needed to. Its use in AI does make it an attractive option, now more than ever.
- statically typed languages got better, reducing the relative benefits of dynamic typing. people realized they didn't really hate static types, they hated the way [insert 90s-00s enterprise language here] did static types
- the GIL became more and more painful as computers got more cores
- it used to be a language for passionate hackers, like a latter-day lisp (cf that old pg essay). "written in python" used to be a signal of quality and craftsmanship. now it's the most commonly taught beginner language; millions of bootcamp devs put it in their CV. average skill level plunged.
- PSF was asleep at the wheel on the packaging / tooling problems for years. pip/venv are dinosaurs compared to cargo, nix, or even npm.
Seems more like it's fallen out of favor with senior devs who have moved to Go/Rust.
Python compares fairly well to Bash or JavaScript or whatever, right? (Maybe JavaScript is better, I don’t know anything about it).
JavaScript has much more intuitive async syntax, which was actually borrowed from a Python framework.
For whatever reasons, the Python folks decided not to build on what they had, and reinvented things from scratch.
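For comparison, a minimal sketch of Python's own async/await, with asyncio.sleep standing in for real I/O:

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0)  # stand-in for a real network call
    return n * 2

async def main():
    # Run the three "requests" concurrently and collect results in order.
    return await asyncio.gather(*(fetch(i) for i in range(3)))

assert asyncio.run(main()) == [0, 2, 4]
```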
It seems like two of the main entries under “Python stuff” are “working with people who only know Python” and “AI/ML because of available packages.”
What are some others?
- Interpreter version management: uv can handle separate python versions per project automatically. No need for pyenv/asdf/mise.
- Bootstrapping: you only need the uv binary, and it can handle any python installation, so you don't need to install python to install a management program.
- Environment management: uv will transparently create and update the environment. You don't need to source the venv either, you use `uv run...` instead of python directly.
Overall, it makes common Python project management tasks simple and transparent. It also follows standards (like project definition metadata and lock files), which poetry often doesn't.
I did just update the dependencies of some of my projects. I could see how that could be faster. I don't do that often enough for me to care about it, though. `poetry run pytest` is the slowest thing I have, and I'm confident most of that slowness is in my direct control already.
I'm intrigued on the lock file point. I thought that was literally one of the main reasons to use something like poetry, in the first place? Does uv have a better lock file mechanism?
I'm very fortunate that my python projects are all relatively small, so maybe that colors things a bit. Certainly looks like something that would have swayed me to uv at the start, but as things are, I think I mainly just wish there was a more standard/accepted work flow for build and release.
It doesn’t ship with a first party package manager so you got the community trying to fill this gap. Use any other language with good tooling like golang or rust and it is a breath of fresh air.
Python used as an actual PL is a footgun because it’s dynamic scripted. (Don’t tell me about using tools X, Y, Z, mypy, …) You essentially become the compiler checking types, solving basic syntax errors when uncommon paths are executed.
A programming language that’s only good for < 100 line scripts is not a good choice.
I honestly wish python were in a better state. I switched to rust.
What's wrong with using tools that improve on common issues? I don't think I'd use Python without them, but ruff and pyright make Python a very productive and reliable language if you're willing to fully buy into the static analysis.
What a bunch of crap. It's so trivial to show very popular and useful programs written in Python that far exceed this number I'm not even going to do the work.
What a lazy criticism.
As python projects grow and grow, you need to do lots of support work for testing and even syntactic correctness. This is automatic in compiled languages where a class of issues is caught early as compile errors, not runtime errors.
Personally I prefer to move more errors to compile time as much as possible. Dynamic languages are really powerful in what you can do at runtime, but that runtime flexibility trades off with compile time verification.
Of course, every project can be written in any language, with enough effort. The existence of large and successful python projects says nothing about the developer experience, developer efficiency, or fragility of the code.
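A tiny illustration of that trade: annotated Python runs fine until a bad call actually executes, whereas a static checker (or a compiler in a compiled language) flags it before the program ever runs.

```python
def add(a: int, b: int) -> int:
    return a + b

# A static checker like mypy or pyright rejects add("1", 2) at analysis
# time; the interpreter itself only fails when that call actually runs.
assert add(1, 2) == 3
```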
In hindsight, I should have just left it alone and not replied which is what I usually do. But Python's popularity isn't an aberration. It's tradeoffs make sense for a lot of people and projects. The low effort bad faith swipes at it from subsections of the HN community got me a bit riled today and I felt I had to say something. My apologies for a less than constructive critique of your comment.
Best.
> In Comments
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please don't fulminate. Please don't sneer, including at the rest of the community.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Today the IDEs got much better, but I still can’t see the significant whitespace as anything than downside. With brackets I can indent, but also have automatic indenting succeed every single time.
I just have a line in my Justfile that does this. Probably would be better to format on save but I use different editors and haven’t gotten around to it.
Still doesn’t fix the creeping doubts about everything in a language conceived by people who made that whitespace call, but it removes the specific pain point.
If you copy and paste some Python code and it isn't indented properly, it breaks the python program. It's the stupidest thing I've seen in any language, including javascript (which isn't as stupid as many claim it is).
It's still reigning champion of data science, and of course it has a huge number of uses and users still around, but it's not really cool or in vogue outside of those circles.
And what is lightweight scripting? Isn't scripting by definition lightweight?
But Python overall is still very popular outside the Data Science-circles. Not sure where this claim is coming from.
This is a wild take. You're never going to get a fully accurate measurement but every source I've seen[0][1][2] puts Python as the most common programming language by a country mile.
If it doesn't seem "cool" or "in vogue", that's because everyone is already using it.
[0] https://www.tiobe.com/tiobe-index/ [1] https://pypl.github.io/PYPL.html#google_vignette [2] https://www.pluralsight.com/resources/blog/upskilling/top-pr...
The famous answer.... _it depends_.
Plus there has been a rising sentiment against dynamic typing by masochists over the last decade or so.
> by masochists
Hey! The masochism pays dividends. I can't do anything with duck typing that I can't also do with `dyn Trait` abuse :)

But then uv came along. It's not just another poetry or pipenv or whatever. It works well and it has uvx and `uv tool install` and other nice things.
Previously when I saw that something was written in Python I'd curse under my breath because I don't have the mental energy to make a virtual environment and source its shell script and remember where I put it just to try something out. On arch Linux I can't just pip install something without going through some crazy setup that I used to have patience for but as I get older it's been better to just curse that snake language altogether and spend time doing something more chill like writing rust.
But now I can just type "uvx" and it works. I'm probably not going to start writing python again any time soon, but at least now I have less reason to be disappointed with people who themselves choose to write Python.
print "xyz"
now and then. It's not a big deal, but it reminds me of struggling with pip->pip3 and many other things for a long time years ago.
Tbf at that point Django was still pretty shaky with 3 and basically none of the 3rd party Django libraries supported it at all, plus I was using Google AppEngine which at the time was tightly coupled to the 2.7 runtime. But really I was just parroting the Slashdot hivemind which was 100% convinced the transition was the new Perl 6 and would kill Python, and that Python.org was dishonestly teaching newbies who didn't know better a dead language and worthless skill when they changed the default.
Fortunately for him he ignored me and most of the big Django libraries were ported like a year later at which point I had to switch anyway to get updates. Fully agreed that in 2025 it's pretty much irrelevant, and honestly despite some legitimate pain the transition was much more successful than the cynics assumed it would be at the time.
Everyone will mention uv/pyenv/poetry/conda/virtualenvs, so fine, let's pretend it's not a problem that you tried each of those in desperation and they are all household names. Suppose the packaging wars actually ended and everyone uses what you use without you needing to tell them, and suppose further that every pypa problem isn't blaming debian maintainers for obvious regressions. Pypi will still yank[0] packages at the source to perhaps randomly break deterministic behaviour, smashing anything in the blast radius rather than only clients using a --strict flag or something. You can pin your own dependencies but who knows what they will pin (nothing, or the wrong stuff, or the wrong way probably!) or what they will yank. Now for repeatability you need to host a mirror for everything you use- which is fine for corporate but a nonstarter for the novice and annoying for FOSS projects.
If you have zero dependencies or a slowly changing environment that is receiving constant care and feeding, you'll probably never notice how bad things are. If you put down most projects for a month, though, and pick them back up, perhaps with a different environment, machine, or slightly different Python version, they're broken, bitrotted, and need serious attention to be rehabilitated.
People might argue... that's just software dev. Nope. I say it with lots of love and gratitude for the efforts of the community... but most langs/ecosystems would never tolerate this level of instability. One has to eventually just admit that working, portable, and actually repeatable environments with Python basically just require docker. Can we talk about how "--break-system-packages" is hilarious? After you retreat to docker you can type this a few times a day, push the container up, pull it down months/years later, and then realize that literally the only way to get a stable working environment is to request a broken one. QED
Of course then crypto bros happened and the rest is history.
1: I don't like crossing language boundaries with it - it's painful, especially if it's to/from a compiled language - there's just too much friction. It's easy to write but hard to maintain.
2: Debugging python can be...painful. If you have a pure perfect environment dedicated to pure python and all the tooling set up, it can be breezy. But if you have something messier like C++ that calls Python or vice-versa and are using an editor not made for Python like QTCreator then suddenly it becomes hell.
Who owns this data!? Where was this initialized!? What calls this function!? What type is this data!?!?!?!? These questions are so effortless in C++ for me and so very painful in Python. It slows efforts to a crawl.
It feels exhausting to have to go back to print statements, grep, and hand-drawing a graph that shows what functions call what just to figure out WTF went wrong. It's painful to a degree that I avoid it as much as possible.
...and don't even get me started on dealing with dependency problems in deployments...
I've never found myself debugging the underlying C++ code unless developing a C++ extension. But is it really that hard? Just point lldb to the python process and run your script in lldb.
If the C++ is not yours & assuming it's a mature lib (e.g. PyTorch): it's probably an error caused by you in Python land.
The problem is how to step into one from the other, you really can't as far as I know.
The python parts aren't standalone enough that it could just be run on its own, there's so much setup involved prior to that point that it can really only be run from the top-level user controls.
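Short of a true mixed-mode debugger, the stdlib can at least surface the Python-side stacks when native code misbehaves. A minimal sketch (the SIGUSR1 choice is just an example, and the on-demand part is Unix-only):

```python
import faulthandler
import signal

# Dump Python tracebacks for all threads on fatal signals
# (SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL) -- useful when a
# crash originates inside a C++ extension.
faulthandler.enable()

# Also allow on-demand dumps: `kill -USR1 <pid>` prints every
# thread's Python stack without stopping the process.
faulthandler.register(signal.SIGUSR1)
```

This doesn't let you step across the boundary, but it does tell you which Python frames were live when the native side fell over, which narrows down the grep-and-print loop considerably.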
But Python's tooling, particularly with what Astral is doing (uv, ruff, ty), is so good, I'm always using Python and virtually never using Ruby. And yeah, the rich libraries, too.
I thought nodejs/typescript seemed to be the default that most LLMs choose? Or is that just v0/lovable/replit? (although replit seems better about going non-js sometimes)
I have been enjoying Lisp languages since the late 1970s, and today it makes me happy using Common Lisp and Racket in the same way as when I stood in a forest very early this morning drinking coffee makes me happy.
But, Python is also a fun language and phrases like “good enough” and “actually liking it” are valid.
Except it doesn't. It just creates another X that is popular for a while, and doesn't somehow retroactively "fix" all the chaotic projects that are a nightmare to install and upgrade. Yes, I understand people like Python. Yes, I understand the LLM bros love it. But in a real production environment, for real applications, you still want to avoid it because it isn't particularly easy to create robust systems for industrial use. You may survive if you can contain the madness in a datacenter somewhere and have people babysit it.
Here's to hoping it manages to actually solve the Python packaging issue (and lots of people are saying it already has for their use cases)!
That is only true if you never reexamine the universality of your statement. I promise that it is possible to "solve" the mess that was Python's ecosystem, that uv has largely done so, and that your preconceptions are holding you back from taking advantage of it.
Multiple times people have explained why they think whatever they are madly in love with now is the definitive solution. And none of those times, over those couple of decades did it turn out to be true.
I understand that you are enthusiastic about things. I get it. But perhaps you might understand that some people actually need to see things stick before they declare a winner? I'm not big on wishful thinking.
Python isn't the only language that has poor tooling. C/C++ is even bigger than Python in terms of established code base, and its tooling is nothing short of atrocious.
What helps is people realizing where tooling and production readiness should be. They can learn a lot from Rust and Go.
The it's big so therefore it must be right argument is nonsense. Worse yet: it is nonsense that excuses lack of real improvement.
This is silly and seems to discount the massive Python codebases found in "real production environment"s throughout the tech industry and beyond, some of which are singlehandedly the codebases behind $1B+ ventures and, I'd wager, many of which are "robust" and fit for "industrial use" without babysitting just because they're Python.
(I get not liking a given language or its ecosystem, but I suspect I could rewrite the same reply for just about any of the top 10-ish most commonly used languages today.)
There's also projects that can't use `uv` because it doesn't like their current `requirements.txt`[0] and I have no bandwidth to try and figure out how to work around it.
[0] We have an install from `git+https` in there and it objects strongly for some reason. Internet searches have not revealed anything helpful.
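For anyone hitting the same wall: one common cause (a hedged guess, since the exact error isn't quoted) is that pip-era requirements files often spell git dependencies with the legacy `#egg=` suffix, while stricter resolvers want the PEP 508 `name @ url` form. The package and org names below are made up:

```
# legacy pip spelling (often rejected by stricter resolvers):
git+https://github.com/example-org/example-lib.git@v1.2.3#egg=example-lib

# PEP 508 direct reference form:
example-lib @ git+https://github.com/example-org/example-lib.git@v1.2.3
```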
Think JNI or cgo management.
Those native packages can be in any language and require any odd combination of tools to build. Who has truly solved that problem?
If you need to link a C lib, there are ways to set it up to cross-compile for other OSes (and maybe other archs).
And there's ffmpeg....
(Not even kidding, I've seen people do this)
Start with wheels if you just want pre-built binaries, move up to conda if you need arch- and OS-specific C libs that depend on system packages, Flatpak or docker if you don't want those to mess up your host, or unikernel/firecracker/VMs if you need kernel modules or hardware virtualization.
It's just HN users are more likely to be that somebody else. Probably we have to deal with non-python dependencies anyway so we're reaching for bigger hammers like docker or nix. It would be nice if there wasn't a mess of python package managers, but whichever one I use ends up being a somewhat unimportant middle-layer anyway.
There will be a file: uv.lock. You can use uv2nix to get a single nix package for all the project's python dependencies, which you can then add to your devshell like anything else you find in nixpkgs (e.g. right alongside uv itself). It ends up being two or three lines of code to actually get the package, but you can just point an LLM at the uv2nix docs and at your flake.nix and it'll figure them out for you.
Your devshell will then track with changes that other devs make to that project's dependencies. If you want to modify them...
edit pyproject.toml # package names and versions
uv lock # map names and versions to hashes (nix needs hashes, finds them in uv.lock)
nix flake lock # update flake.lock based on uv.lock
nix develop # now your devshell has it too
This way you're not maintaining separate sources of truth for what the project depends on, and the muggles need not think about your nix wizardry at all.

And it seems like the package resolution is finally local by default, although that requires a 'virtualenv', which seems to be a legacy of the global packaging system.
I find this depressing. Not only are LLMs covertly reducing our ability to think and make decisions, they’re now also making people voluntarily conform to some lower common denominator.
It’s like humanity decided to stagnate at this one point in time (and what a bad choice of point it was) and stop exploring other directions. Only what the LLM vomits is valid.
But writing a processing pipeline with Python is frustrating if you have worked with C# concurrency.
I figured the best option is Celery and you cannot do it without an external broker. Celery is a mess. I really hate it.
I'm hoping the existence of free-threading will push for more first-class concurrency primitives. concurrent.futures is nice until you need a concurrency-safe data structure besides a queue.
I also had a lot of problem due to async primitives with sqlalchemy - there's some tricky stuff with asyncio.gather vs TaskGroup and how sqlalchemy session works with it to be able to compose code easily.
There is nothing more annoying than tons of little repos all of which containing tiny projects with a few hundred lines of code but (of course) you need most / all of them to do anything. Use a mono repo until there is some obvious reason to split it up imo.
On the flip side, we have an org with 50+ teams and our operations team is pining for a monorepo. They are just fine with one team's push forcing N teams to have an unexpected deploy and recycling of caches, connections, etc. Not to mention what will happen when team A doesn't have time to deal with team B's merge due to other org pressures.
For personal projects, though, I get the value of an actual small projects monorepo.
e.g. rather than:

> It’s important not to do any heavy data processing steps in the project-ui … we keep the browser application light while delegating the heavy lifting and business logic to the server

Chomp the complexity, serve HTML from the backend directly.

> ty

I'm curious where ty goes, but for a min-complexity stack I couldn't spend complexity tokens on pre-release tools.

> pydantic … dataclasses

One or the other, plus I'll forever confuse myself: is it `__post_init__` (dataclasses) or `model_post_init` (pydantic)? I had to check!

> docker

If we already have uv, could we get away without docker? uv sync can give an experience almost akin to statically compiled binaries with the right setup. It's not going to handle volumes etc., so if you're using docker features, this concept isn't going to fly. If you're not wedded to docker, though, can you get away with just uv in dev and prod? In an enterprise, probably not; I wouldn't expect to be able to download deps in prod. For flying solo, though…

> compose

You've a frontend, a backend and presumably a database. Could you get away with just uv to manage your backend and a local sqlite db?

So a broadly feature-comparable stack for rapid iteration with less complexity, but still all the bells and whistles so you don't need to cook everything yourself, might look like:
- uv
- fastapi + jinja + htmx + surreal + picocss
- sqlite
You could probably sketch a path to hyperscale if you ever needed it:

- v1 = the above stack
- v2 = swap sqlite for postgres, now you unlocked multiple writers and horizontal scaling, maybe py-pglite for test envs so you can defer test-containers adoption for one more scaling iteration. WAL streaming would add some complexity to this step but worth it
- v3 = introduce containers, a container registry and test-containers. I don't think you really unlock much in this step for all that additional complexity, though…
- v4 = rke2 single node cluster, no load balancer needed yet
- v5 = scale to triple node, we need a load balancer too
- v6 = add agent nodes to the rke cluster
- v7 = can you optimise costs, maybe rewrite unchanging parts in a more resource efficient stack
…
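On the `__post_init__` vs `model_post_init` mix-up above: with stdlib dataclasses it's `__post_init__`; pydantic v2's analogous hook is `model_post_init`. A dataclass-only sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    x: float
    y: float
    norm: float = field(init=False)

    # dataclasses: __post_init__ runs right after the generated __init__.
    # pydantic v2's analogous hook is model_post_init(self, __context).
    def __post_init__(self) -> None:
        self.norm = (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3.0, 4.0)
print(p.norm)  # 5.0
```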
"I went from being fully covered in mud to only being half covered in mud, and it's great! I don't understand why people complain about being half covered in mud."
More seriously: it's fascinating and interesting to see how closely this article mirrors my own Python project layout. The tools and practices have come a long way over the last decade, and good developer standards have a way of becoming the de facto standard in organic fashion.
IMO uv has won the race to be the new standard for Python environment management (Poetry gave it a solid try, but lost on speed).
Why is that?
Why Python for AI?
This is false; a lot of non "vibe coders" are using Python for AI because PyTorch and many other AI libraries have first-class Python support.
I am pretty sure some people (maybe this individual, too) may be using Python because their scripts can be executed in a sandbox on one of these websites.
Heck, if it was as good at Factor or Forth as it is at Python, I would be writing more of them, too.
In any case, you cannot claim that it is not one of the reasons. Can you?
Also the vibe coding part gave me the impression that you were implying that people that use/chose Python for AI are all vibe coders which is again false. Sorry if I misunderstood you, but this is what I got from your initial message.
That said, I remember writing myself a note a few years ago to avoid Python projects. I had to clean up code from all over the company and make it ready for production. Everyone had their own Python version, dependencies missing from requirements.txt, three-way conflicts between two dependencies and the Python version, wildly different styles, and a habit of pulling in as many libraries as possible [1]. Even recalling those memories makes my stomach turn.
I believe constraints make a project shine and be maintainable. I'd prefer if you throw at me a real python instead of a python project.
[1] Yes, I'm aware of containers, I was the unlucky guy writing them.
Still could be better, but I think Python's really hit its stride now.
In my personal timeline, people giving up waiting for Perl 6 were a huge source of early Python developers.
I will stick to other languages when I need a better type system.
Made me think this is probably normally a Ruby developer indoctrinated against Python. The article doesn’t seem to say what they have come from.
Ruby devs think about code differently. Like Perl, they embrace TIMTOWTDI.
https://perl.fandom.com/wiki/TIMTOWTDI
Also, there's a pride in writing elegant code as opposed to following "Pythonic" conventions. Excellence is not conformity.
I use Python a lot more often now because it's seen as simpler and more approachable and easier to force compliance. I miss Ruby.
Having said that, in reviews you do get lazy comments like "not pythonic" or "not best practices" which often boil down to the reviewer just not liking something and being too much of an *** to say why. This is supposed to be a total shutdown that you cannot argue with, and it's the kind of thing that might put you off the term "pythonic" for life.
"There should be one-- and preferably only one --obvious way to do it."
This is probably the core thing you might have issue with, but I think it's not really about conforming in how you write your own code but about the thing you make being easy for others to use.
> There should be one-- and preferably only one --obvious way to do it.
But I often don't think the Pythonic way is a very good way to do it. And this leaves you a bit stuck!
Yeah, and it's the wrong approach. Of course, you can have whatever preference you want, but in terms of engineering, it's plain wrong.
> If you know me, you know I used to be mostly a Java/JavaScript/R kind of guy. ↩