#!/usr/bin/env -S uv run --python 3.14 --script
Then you don't even need python installed. uv will install the version of python you specified and run the command.

#!/usr/bin/env -S uv run --script
#
# /// script
# requires-python = ">=3.12"
# dependencies = ["foo"]
# ///
import foo  # with the inline metadata above, uv resolves "foo" into an ephemeral environment before running

^ mostly, though some defs might have StackOverflow copy/pasta
I'm sure the documentation for this featureset highlights what I'm about to say, but if you're attracted to the simplicity of writing Python projects initialized this way: do not use this code in staging/prod.
If you don't see why this is not production friendly, the simple reason is that once a project (or a dependency of a project) uses this method, creating reproducible builds and deployable artifacts becomes impossible.
This will also lead to builds that pass your CI but fail to run in their destination environment, and vice versa, because dependencies are downloaded on the fly.
There may be workarounds and I know nothing of this feature so investigate yourself if you must.
My two cents.
#! /usr/bin/env nix-shell
#! nix-shell -i python3 --packages python3

(this is a joke btw)
What you meant was, "you don't need python pre-installed". This does not solve the problem of not wanting to have (or limited from having) python installed.
At the time, Poetry and Pipenv were the popular tools, but I found they were not sufficient; they did a good job abstracting dependencies, but not venvs and Python versions.
In retrospect, that was what made rye temporarily attractive and popular.
If you've never used Clojure and start a Clojure project, you will almost definitely find advice telling you to use Leiningen.
For Python, if you search online you might find someone saying to use uv, but also potentially venv, poetry or hatch. I definitely think uv is taking over, but it's not yet ubiquitous.
Ironically, I actually had a similar thing installing Go the other day. I'd never used Go before, and installed it using apt only to find that version was too old and I'd done it wrong.
Although in that case, it was a much quicker resolution than I think anyone fighting with virtual environments would have.
Over the years, I've used setup.py, pip, pipenv (which kept crashing, though it was an official recommendation), and manual venv+pip (or virtualenv? I vaguely remember there were two similar tools and neither was part of a minimal Python install). Does uv work in all of these cases? The uv doc pointed out by the GP is vague about legacy projects, though I've just skimmed through the long page.
IIRC, Python tools didn't share their data across projects, so they could build the same heavy dependencies multiple times. I've also seen projects with incomplete dependencies (installed through Conda, IIRC) which were a major pain to get working. For many years, the only simple and sane way to run some Python code was in a Docker image, which has its own drawbacks.
Yes. The goal of uv is to defuck the python ecosystem and they're doing a very good job at it so far.
I only work a little bit with python.
Sometimes it's things like updating to Fedora 43 and every tool you installed with `pipx` breaking because it was doing things that got wiped out by the system upgrade, sometimes it's `poetry update --only dep1` silently updating dep2 in the background without telling you because there was an update available and even though you specified `--only` you were wrong to do that and Poetry knows best.
Did you know that when you call `python -m venv` you should always pass `--upgrade-deps` because otherwise it intentionally installs an out-of-date version of pip and setuptools as a joke? Maybe you're not using `python -m venv` because you ran the pyenv installer and it automatically installed `pyenv-virtualenv` without asking, which overrides a bunch of virtualenv features because the pyenv team think you should develop things the same way they do, regardless of how you want to develop things. I hate pyenv.
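For what it's worth, the stdlib exposes the same knob programmatically; a minimal sketch (the `.venv` path is just an example):

import venv

# equivalent to `python -m venv --upgrade-deps .venv`: create the environment,
# then upgrade pip (and, on older Pythons, setuptools) to the latest releases
# instead of the possibly stale copies bundled with CPython
venv.EnvBuilder(with_pip=True, upgrade_deps=True).create(".venv")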
So far the only problem I've had with `uv` is that if you run `uv venv` it doesn't install pip in the created virtualenv because you're supposed to run `uv pip install` instead of `pip install`. That's annoying but it's not a dealbreaker.
Outside of that, I feel very confident that I could give a link to the uv docs to a junior developer and tell them to run `uv python install 3.13` and `uv tool install ruff` and then run `uv sync` in a project and everything will work out and I'm not going to have to help them recover their hard drive because they made the foolish mistake of assuming that `brew install python` wouldn't wreck their macbook when the next version of Python gets released.
I get that installing to the site-packages is a security vulnerability. Installing to my home directory is not, so why can't that be the happy path by default? Debian used to make this easy with the dist-packages split leaving site-packages as a safe sandbox but they caved.
The brilliant part about venvs is that A and B can have their completely separate mutually incompatible environments.
The thing that makes Python different is that it was never designed with any kind of per-project isolation in mind and this is the best way anyone's come up with to hack that behaviour into the language.
All you have to know is where to import packages from as far as I understand.
I'd imagine even just a script that does

export PYTHONPATH=/python_modules
python x.py

should do the trick.
Transitive dependencies, I suppose?
uv has replaced that for me, and has replaced most other tools that I used with the (tiny amount of) Python that I write for production.
One of the neatest features of uv is that it uses clever symlinking tricks so if you have a dozen different Python environments all with the same dependency there's only one copy of that dependency on disk.
For pip to do this, first it would have to organize its cache in a sensible manner, such that it could work as an actual download cache. Currently it is an HTTP cache (except for locally-built wheels), where it uses a vendored third-party library to simulate the connection to files.pythonhosted.org (in the common PyPI case). But it still needs to connect to pypi.org to figure out the URI that the third-party library will simulate accessing.
Before uv came along I was starting to write stuff in Go that I’d normally write in Python.
Python's always been a pretty nice language to work in, and uv makes it one of the most pleasant to deal with.
It's just so useful: uv is great and there are decent quality packages for everything imaginable.
I think this properly kicked off with RVM, which needed to come into existence because you had this situation where the Ruby interpreter was going through incompatible changes, the versions on popular distributions were lagging, and Rails, the main reason people were turning to Ruby, was relatively militant about which interpreter versions it would support. Also, building the interpreter such that it would successfully run Rails wasn't trivial. Not that hard, but enough that a convenience wrapper mattered. So you had a whole generation of web devs coming up in an environment where the core language wasn't the first touchpoint, and there wasn't an assumption that you could (or should) rely on what you could apt-get install on the base OS.
This is broadly an extremely good thing.
But the critical thing that RVM did was that it broke the circular dependency at the core of the problem: it didn't itself depend on having a working ruby interpreter. Prior to that you could observe a sort of sniffiness about tools for a language which weren't implemented in that language, but RVM solved enough of the pain that it barged straight past that.
Then you had similar tools popping up in other languages - nvm and leiningen are the first that spring to mind, but I'd also throw (for instance) asdf into the mix here - where the executable that you call to set up your environment has a '#!/bin/bash' shebang line.
Go has sidestepped most of this because of three things: 1) rigorous backwards compatibility; 2) the simplest possible installation onramp; 3) being timed with the above timeline so that having a pre-existing `go` binary provided by your OS is unlikely unless you install it yourself. And none of those are true of Python. The backwards compatibility breaks in this period are legendary, you almost always do have a pre-existing Python to confuse things, and installing a new python without breaking that pre-existing Python, which your OS itself depends on, is a risk. Add to that the sniffiness I mentioned (which you can still see today on `uv` threads) and you've got a situation where Python is catching up to what other languages managed a decade ago.
Again.
I thought the current best practice for Clojure was to use the shiny new built-in tooling? deps.edn or something like that?
This is sort of like saying "You might find someone saying to drive a Ford, but also potentially internal combustion engine, Nissan or Hyundai".
But with much more detail: it seems complicated because
* People refuse to learn basic concepts that are readily explained by many sources; e.g. https://chriswarrick.com/blog/2018/09/04/python-virtual-envi... [0].
* People cling to memories of long-obsolete issues. When people point to XKCD 1987 they overlook that Python 2.x has been EOL for almost six years (and 3.6 for over four, but whatever)[1]; only Mac users have to worry about "homebrew" (which I understand was directly interfering with stuff back in the day) or "framework builds" of Python; easy_install is similarly a long-deprecated dinosaur that you also would never need once you have pip set up; and fewer and fewer people actually need Anaconda for anything[2][3].
* There is never just one way to do it, depending on your understanding of "do". Everyone will always imagine that the underlying functionality can be wrapped in a more user-friendly way, and they will have multiple incompatible ideas about what is the most user-friendly.
But there is one obvious "way to do it", which is to set up the virtual environment and then launch the virtual environment's Python executable. Literally everything else is window dressing on top of that. The only thing that "activating" the environment does is configure environment variables so that `python` means the virtual environment's Python executable. All your various alternative tools are just presenting different ways to ensure that you run the correct Python (under the assumption that you don't want to remember a path to it, I guess) and to bundle up the virtual environment creation with some other development task. (See the sketch after the footnotes.)
The Python community did explicitly provide for multiple people to provide such wrappers. This was not by providing the "15th competing standard". It was by providing the standard (really a set of standards designed to work together: the virtual environment support in the standard library, the PEPs describing `pyproject.toml`, and so on), which replaced a Wild West (where Setuptools was the sheriff and pip its deputy).
[0]: By the way, this is by someone who doesn't like virtual environments and was one of the biggest backers of PEP 582.
[1]: Of course, this is not Randall Munroe's fault. The comic dates to 2018, right in the middle of the period where the community was trying to sort things out and figure out how to not require the often problematic `setup.py` configuration for every project including pure-Python ones.
[2]: The SciPy stack has been installable from wheels for almost everyone for quite some time and they were even able to get 3.12 wheels out promptly despite being hamstrung by the standard library `distutils` removal.
[3]: Those who do need it, meanwhile, can generally live within that environment entirely.
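To make the "one obvious way" above concrete, a minimal sketch (the script name and venv path are placeholders):

import subprocess
import venv

# set up the virtual environment, then launch its Python executable directly;
# no activation script involved
venv.EnvBuilder(with_pip=True).create(".venv")
subprocess.run([".venv/bin/python", "myscript.py"], check=True)  # Windows: .venv\Scripts\python.exe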
The way I teach, I would start there; then you always have it as a fallback, and understand the system better.
I generally sort users into aspirants who really should learn those things (and will benefit from it), vs. complete end users who just want the code to run (for whom the developer should be expected to provide, if they expect to gain such a following).
This is more of a pip issue than a uv one though, and `uv pip` is still preferable in my mind, but it seems Python package management will forever be a mess; not even the band-aid that is uv can fix things like these.
in the end i went back to good old virtualenvwrapper.sh and setting PYTHONPATH. full control over what goes into the venv and how. i guess people like writing new tools. i can understand that.
Maybe for more complex projects and use cases it's harder, but it's a lot faster than just pip and pyproject.toml is a lot nicer to manage than `requirements.txt`, so that's two easy enough wins for me to move over.
And regardless of whether you use only uv, or pip-via-uv, or straight-up pip, dependencies you install later step over dependencies you installed earlier, and no tool so far seems to try to solve this, which leads me to conclude it's a Python problem, not a package-manager problem.
First off, in my mind the kinds of things that are "scripts" don't have dependencies outside the standard library, or if they do are highly specific to my own needs on my own system. (It's also notable that one of the advantages the author cites for Go in this niche is a standard library that avoids the need for dependencies in quick scripts! Is this not one of Python's major selling points since day 1?)
Second, even if you have dependencies you don't have to learn differences between these tools. You can pick one and use it.
Third, virtual environments are literally just a place on disk for those dependencies to be installed, that contains a config file and some stubs that are automatically set up by a one-liner provided by the standard library. You don't need to go into them and inspect anything if you don't want to. You don't need to use the activation script; you can just specify the venv's executable instead if you prefer. None of it is conceptually difficult.
Fourth, sharing an environment for these quick scripts actually just works fine an awful lot of the time. I got away with it for years before proper organization became second nature, and I would usually still be fine with it (except that having an isolated environment for the current project is the easiest way to be sure that I've correctly listed its dependencies). In my experience it's just not a thing for your quick throwaway scripts to be dependent on incompatible Numpy versions or whatever.
... And really, to avoid ever having to think about the dependencies you provide dynamically, you're going to switch to a compiled language? If it were such a good idea, nobody would have thought of making languages like Python in the first place.
And uh...
> As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future. Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
The pseudo-shebang trick here isn't going to work on Windows any more than a conventional one is. And no, when I switched from Windows to Linux, getting my Python stuff to work was not a "steep annoying curve" at all. It came more or less automatically with acclimating to Linux in general.
(I guess referring to ".pyproject" instead of the actually-meaningful `pyproject.toml` is just part of the trolling.)
I had a recent conversation with a colleague. I said how nice it is using uv now. They said they were glad because they hated messing with virtualenvs so much that preferred TypeScript now. I asked them what node_modules is, they paused for a moment, and replied “point taken”.
Uv still uses venvs because it's the official way Python stores all of a project's packages in one place. Node/npm, Go/go, and Rust/cargo all do similar things, but I only really hear people grousing about Python's version, which, as you say, you can totally ignore and never ever look at.
The very long discussion (https://discuss.python.org/t/pep-582-python-local-packages-d...) of PEP 582 (https://peps.python.org/pep-0582/ ; the "__pypackages__" folder proposal) seems relevant here.
It'll be interesting to see how this all plays out with __pypackages__ and friends.
Yep. And so does the pyenv approach (which I understand involves permanently adding a relative path to $PATH, wherein the system might place a stub executable that invokes the venv associated with the current working directory).
And so do hand-made subshell-based approaches, etc. etc.
In "development mode" I use my activation-script-based wrappers. When just hacking around I generally just give the path to the venv's python explicitly.
* * * * * /path/to/project/.venv/bin/python /path/to/project/foo.py

It's more typing one time, but avoids a whole lot of fragility later.

The standard recommendation for this is `tomli`, which became the basis of the standard library `tomllib` in 3.11.
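Usage is the same either way; a minimal sketch (the file path and keys are just examples):

import tomllib  # Python 3.11+; on older versions: `import tomli as tomllib`

# tomllib wants a binary file handle, not text
with open("pyproject.toml", "rb") as f:
    data = tomllib.load(f)
print(data["project"]["name"])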
/// 2>/dev/null ; gorun "$0" "$@" ; exit $?

> Good-old posix magic. If you ask an LLM, it'll say it's due to Shebangs.

Well, ChatGPT gives the same explanation as the article, which is unsurprising considering this mechanism has been repeated many times.
>none other fits as well as Go
Nim, Zig, D, all have `-run` argument and can be used in similar way. Swift, OCaml, Haskell can directly execute a file, no need to provide an argument.
[0]: https://groups.google.com/d/msg/golang-nuts/iGHWoUQFHjg/_dbL...
So this is skill issue, the blog post. `uv run` and PEP 723 solved every single issue the author is describing.
I have worked with Python on and off for 20+ years and I _always_ dreaded working with any code base that had external packages or a virtual environment.
`uv run` changed that and I migrated every code base at my last job to it. But it was too late for my personal stuff - I already converted or wrote net new code in Go.
I am on the fence about Python long term. I’ve always preferred typed languages and with the advent of LLM-assisted coding, that’s even more important for consistency.
And even if uv perfectly solved all of our woes, it would still seem worse than languages that solve packaging and deployment with first-party built-in tools.
There’s only so much lipstick and makeup you can put on a pig…
The user
just
wants
to run
the damn program.
> `uv run` and PEP 723 solved every single issue the author is describing.
PEP 723 eh? "Resolution: 08-Jan-2024"
Sure, so long as you somehow magically gain the knowledge to use uv, then you will have been able to have a normal, table-stakes experience for a whole two years now. Yay, go Python ecosystem!
Is uv the default, officially recommended way to run Python? No? Remember to wave goodbye to all the users passing the language by.
Yes, it seems like it is the de facto default, officially recommended way.
I strongly encourage you to read the article to acquire the context for the conversation before commenting, which is what I assume is happening here.
I don't agree; the user wants to run the program in the way the user wants to, and is frustrated when that doesn't work.
If all dependencies were installed on the machine the script would run no problem. I have some scripts with dependencies that are installed on the system.
The author writes:
> The built in tooling within the go ecosystem is another large selling point. We don't need a .pyproject or package.json to configure ad-hoc formatting and linters, backed by pipelines to ensure consistency.
Maybe shebangs are not the solution to that problem? They're a convenience for running scripts as executables, but the user is supposed to set up the environment. Then he continues to explain that Go has a great stdlib, which makes it perfect for scripting. This is the reason I usually reach for Python for complex scripts: the stdlib is big enough to solve most of my problems.
Now that node includes sqlite the choice isn't as easy, but I wouldn't be pissed at node and javascript if I have to setup the environment to make sure the script runs. I understand how it runs, where it gets the dependencies. If I forget to run `npm i` before running the scripts that's my error, I prefer errors that remind me of my stupidity over magic.
However... scripting requires (in my experience), a different ergonomic to shippable software. I can't quite put my finger on it, but bash feels very scriptable, go feels very shippable, python is somewhere in the middle, ruby is closer to bash, rust is up near go on the shippable end.
Good scripting is a mixture of OS-level constructs available to me in the syntax I'm in (bash obviously is just using OS commands with syntactic sugar to create conditionals, loops and variables), and the kinds of problems where I don't feel I need a whole lot of tooling: LSPs, test coverage, whatever. It's languages that encourage quick, dirty, throwaway code that lets me get that one-off job done that the guy in sales needs on a Thursday so we can close the month out.
Go doesn't feel like that. If I'm building something in Go I want to bring tests along for the ride, I want to build a proper build pipeline somewhere, I want a release process.
I don't think I've thought about language ergonomics in this sense quite like this before, I'm curious what others think.
For me, the dividing line is how compact the language representation is, specifically if you can get the job done in one file or not.
I have no doubt that there's a lot of Go jobs that will fit in a 500 line script, no problem. But the language is much more geared towards modules of many files that all work together to design user-defined types, multi-threading, and more. None of that's a concern for BASH, with Python shipping enough native types to do most jobs w/o need for custom ones.
If you need a whole directory of code to make your bang-line-equipped Go script work, you may as well compile that down and install it to /usr/local/bin.
Also the lack of bang-line support in native Go suggests that everyone is kinda "doing it wrong". The fact that `go run` just compiles your code to a temporary binary anyway, points in that direction.
It's really a huge pain point in python. Pure python dependencies are amazingly easy to use, but there's a lot of packages that depend on either c extensions that need to be built or have OS dependencies. It's gotten better with wheels and manylinux builds, but you can still shoot your foot off pretty easily.
No, bash is technically not "more" OS than e.g. Python. It just happens that bash is (often) the default shell in the terminal emulator.
In python, doing math or complex string or collection operations is usually a simple one-liner, but calling shell commands or other OS processes requires fiddling with the subprocess module, writing ad-hoc streaming loops, etc. Don't even get me started on piping several commands together.
Bash is the opposite: As long as your task can be structured as a series of shell commands, it absolutely shines - but as soon as you require custom data manipulation in any form, you'll run into awkward edge cases and arbitrary restrictions - even for things that are absolutely basic in other languages.
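To illustrate the subprocess fiddling for even a tiny pipeline, a sketch (the file name and commands are arbitrary examples):

import subprocess

# the Python spelling of: grep ERROR app.log | wc -l
p1 = subprocess.Popen(["grep", "ERROR", "app.log"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # so p1 gets SIGPIPE if p2 exits early
print(int(p2.communicate()[0]))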
More specifically, for the readability of code written by an LLM.
This phrasing sounds contradictory to me. The whole idea of scripts is that there's nothing to install (besides one standard interpreter). You just run them.
This notion is still strange to me. Just... incompatible with how I understand the term "script", I guess.
One of my biggest problems with python happens to be caused by the fact that a lot of FreeCAD is written in python, and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore files), and the env variable that is supposed to disable that STUPID behavior has no effect, because FreeCAD is an AppImage and my env variable does not propagate to the environment FreeCAD sets up for itself.
That is me "trying to install other people's scripts"; the other people's script is just a little old thing called FreeCAD, no big.
What I don't understand is why you call it a "script".
> and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore)
You're expected to do that anyway; it's part of the standard "Python project" .gitignore files offered by many sources (including GitHub).
But you mean that the repo contains plugins that FreeCAD will import? Because otherwise I can't fathom why it's executing .py files that are within your repo.
Anyway, this seems like a very tangential rant. And this is essentially the same thing as Java producing .class files; I can't say I run into a lot of people who are this bothered by it.
it's not as portable
pipx can run Python scripts with inline script metadata. pipx is implemented in Python and packaged by Linux distributions, Free/Net/OpenBSD, Homebrew, MacPorts, and Scoop (Windows): https://repology.org/project/pipx/versions.
But a script only has one shebang.
I see it has been proposed: https://discuss.python.org/t/standardized-shebang-for-pep-72....
A lot of people seem to describe a PEP 723 use case where the recipient maybe doesn't even know what Python is (or how to check for a compatible version), but could be instructed to install uv and then copy and run the script. This idea would definitely add friction to that use case. But I think in those cases you really want to package a standalone (using PyInstaller, pex, Briefcase or any of countless other options) anyway.
You could also use shell scripting, Python, or another scripting language. While Python is not great at backward compatibility, most scripts will have very few issues. Shell scripts are backward compatible, as are many other scripting languages (e.g. Tcl), and they are more likely to be preinstalled. If you are installing Go anyway, you could just as well install uv and use Python.
The article does say "I started this post out mostly trolling" which is part of it, but mostly the motivation would be that you have a strong preference for Go.
bla bla bla
node bla.js
If you care about anyone but yourself, don't write things in python for other people to distribute, install, integrate, run, live with.
If you don't care about anyone else, enjoy python.
When you know the language well, you don't need to search for this info for basic types, because you remember them.
But that's also true for typed languages.
That said, we can abuse the same trick for any language that treats `//` as a comment.
List of some practical(?) languages: C/C++, Java, JavaScript, Rust, Swift, Kotlin, ObjC, D, F#, GLSL/HLSL, Groovy
Personally, among those languages, GLSL sounds most interesting. A single-GLSL graphics demo is always inspiring. (Something like https://www.shadertoy.com/ )
Also, let’s not forget that we can do something similar using block comment(`/* … */`). An example in C:
/*/../usr/bin/env gcc "$0" "$@"; ./a.out; rm -vf a.out; exit; */
#include <stdio.h>
int main() { printf("Hello World!\n"); return 0; }
For larger projects (the exe), the shebang points to a C build file which, when compiled, knows the root path; that C build script then looks for a manifest, builds, links, and fork()s. A good access/modification-timestamp library with direct ccache support can spin up as fast as a script even on big projects.
Again, this is all a bad idea bc it's hard to control your environment.
I guess we were doing all this in the mid 2000s? When did TCC come out?
I think it’s uv’s equivalent, but for Swift.
(Also Swift specifically supports an actual shebang for Swift scripts.)
The main reasons being it is slow, its type system is significantly harder to use than other languages, and it's hard to distribute. The only reason to use it is inertia. Obviously inertia can be sufficient for many reasons, but I would like to see the industry consider python last, and instead consider typescript, go, or rust (depending on use case) as a best practice. Python would be considered deprecated and only used for existing codebases like pytorch. Why would you write a web app in Python? Types are terrible, it's slow. There are way better alternatives.
With that said... there is a reason why ML went with Python. GPU programming requires C-based libraries. NodeJS does not have a good FFI story, and neither does Rust or Go. Yes, there's support, but Python's FFI support is actually better here. Zig is too immature here.
The world deserves a Python-like language with a better type system, a better distribution system, and not nearly as much dynamism footguns / rope for people to hang themselves with.
https://pragprog.com/titles/smelixir/machine-learning-in-eli...
A Practical Guide to Machine Learning in Elixir - Chris Grainger
Why replace a nice language like python with anything coming out of javascript?
If TypeScript had the awesome python stdlib and the Numpy/ML ecosystem I would use it over Python in a heartbeat.
For IO bound tasks, it also helps that JavaScript has a much simpler threading model. And it ships an event based IO system out of the box.
Has shortcomings like all languages but it brought a lot of advanced programming language concepts to the masses!
> The only reason to use it is inertia
and
> Typescript is ubiquitous in web
:-)
There are some things that aren't as good, e.g. Python's arbitrary precision integers are definitely nicer for scripting. And I'd say Python's list comprehension syntax is often quite nice even if it is weirdly irregular.
But overall Deno is a much better choice for ad-hoc scripting than Python.
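As a quick illustration of the comprehension irregularity mentioned above (toy values):

# the expression comes first, then the loops and filters -- the reverse of statement order
evens = [n * n for n in range(10) if n % 2 == 0]

# nested clauses read left to right, which regularly trips people up
pairs = [(i, j) for i in range(3) for j in range(i)]  # [(1, 0), (2, 0), (2, 1)]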
JavaScript itself supports bigint literals just fine. Just put an ‘n’ after your number literal. Eg 0xffffffffffffffn.
There’s a whole bunch of features I wish we could go in and add to json. Like comments, binary blobs, dates and integers / bigints. It would be so much nicer to work with if it has that stuff.
It absolutely doesn't. It doesn't impose any limits on number precision or magnitude.
Market pressure. Early ML frameworks were in Lisp, then eventually Lua with Torch, but demand dictated the choice of Python because "it's simple" even if the result is cobbled together.
Lisp is arguably still the most suitable language for neural networks for a lot of reasons beyond the scope of this post, but the tooling is missing. I’m developing such a framework right now, though I have no illusions that many will adopt it. Python may not be elegant or efficient, but it's simple, and that's what people want.
The code hasn't reached RC yet, but I'll definitely post a Show HN once it's ready for a preview.
(OCaml is probably what I'm looking for, but I'm having a hard time getting motivated to tackle it, because I dread dealing with the tooling and dependency management of a 20th century language from academia.)
Disclaimer: I work on Flutter at Google.
Yes, OCaml would be a decent language to look into. Or perhaps even OxCaml. The folks over at Jane Street have put a lot of effort into tooling recently.
If you want something with minimal startup times then you need a language that compiles to native binaries, like Zig, Rust or OCaml.
1. You can import by relative file path. (Python can't.)
2. You can specify third party dependencies in a single file script and have that work properly with IDEs.
Deno is the best option I've found that has both of those and is statically typed.
I'm hoping Rust will eventually too but it's going to be at least a year or two.
And part of those who still complain are momentarily stuck with it.
Just like survivorship bias. It's productive to ponder on the issues experienced by those who never returned.
As an example, almost everyone I’ve worked with in my career likes using macOS and Linux. But there are entire software engineering sub communities who stick to windows. For them, macOS is a quaint toy.
If you’ve never met or worked with people who care about typing, I think that says more about your workplace and coworkers than anything. I’ve worked with plenty of engineers who consider dynamic typing to be abhorrent. Especially at places like FAANG.
Long before TypeScript, before Node.js, before even "JavaScript: The Good Parts", Google wrote their own JavaScript compiler called Closure. The compiler is written in Java. It could do many things, but as far as I can tell, its main purpose was to add types to JavaScript. Why? Because Googlers would rather write a compiler from scratch than use a dynamically typed language. I know it was used to make the early versions of Gmail. It may still be in use to this day.
If ML fulfills its promise, it won't matter in the least what language the code is/was written in.
If it doesn't, it won't matter anyway.
Python has multiple excellent options for this: JAX, Pytorch, Tensorflow, autograd, etc. Each of these libraries excels for different use cases.
I also believe these are cases where Python the language is part of the reason these libraries exist (whereas, to your point, for the matrix operations pretty much any language could implement these C wrappers). Python does make it easy to perform meta-programming and is very flexible when you need to manipulate the language itself.
> The main reasons being it is slow, <snip>, and it's hard to distribute.
Don't forget that Python consumes approximately 70x more power when compared to C.
Speed is the least concern because things like numpy are written in C and the overhead you pay for is in the glue code and ffi. The lack of a standard distribution system is a big one. Dynamic typing works well for small programs and teams but does not scale when either dimension is increased.
But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're tackling a subset thereof. No library can fix this.
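To put a rough sketch behind the glue-code point above (toy sizes; timings vary by machine, and `a` is just an example array):

import numpy as np

a = np.arange(1_000_000, dtype=np.float64)

fast = (a * a).sum()          # two trips into NumPy's C loops
slow = sum(x * x for x in a)  # roughly a million boxed-float operations in the interpreter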
A similar point was raised in the other Python thread on CPython the other day, and I'm not sure I agree. For sure, it is far from trivial. However, GraalVM has shown us how it can be done for Java with generics. At a high level: take the app, compile it, and run it. The compilation takes care of any literal use of generics; running the app takes care of initialising classes and memory; instrumentation during runtime can be added to handle runtime invocations of generics otherwise missed. Obviously, a lot of details have to be right for this to work. But it can be done.
Their main criticisms of Python were:
> it is slow, its type system is significantly harder to use than other languages, and it's hard to distribute
Your comment would have been more useful if it had discussed how FastAPI addresses these issues.
It does rely on // which is implementation-defined according to POSIX. In some system //usr could refer to some kind of network path.
Last sentence here:
3.254 Pathname
A string that is used to identify a file. In the context of POSIX.1-2024, a pathname may be limited to {PATH_MAX} bytes, including the terminating null byte. It has optional beginning <slash> characters, followed by zero or more filenames separated by <slash> characters. A pathname can optionally contain one or more trailing <slash> characters. Multiple successive <slash> characters are considered to be the same as one <slash>, except it is implementation-defined whether the case of exactly two leading <slash> characters is treated specially.
[IEEE Std 1003.1, 2024 Edition]
It really is better for a language to either have # comments, or else support #! as a special case in a file that is presented for execution. You're also not launching an extra shell instance. (Too bad this // trick cannot use the "exec" shell command to replace the shell with the go program.)
Specifically the problem here is automated reformatting. Gopls typically does this on save as you are editing, but it is good practice for your CI system to enforce the invariant that all merged *.go files are canonically formatted. This ensures that the user who makes a change formats it (and is blamed for that line), instead of the hapless next person to touch some other spot in that file. It also reduces merge conflicts.
But there's a second big (bigger) problem with this approach: you can't use a go.mod file in a one-off script, and that means you can't specify versions of your dependencies, which undermines the appeal to compatibility that motivated your post:
> The primary benefit of go-scripting is [...] and compatibility guarantees. While most languages aims to be backwards compatible, go has this a core feature. The "go-scripts" you write will not stop working as long as you use go version 1.*, which is perfect for a corporate environment.
> In addition to this, the compatibility guarantees makes it much easier to share "scripts". As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future.
> The price of convenience is difficulties to scale
Of course, they never scale. The moment you start thinking about scaling, you should stop writing code as throwaway scripts but build them properly. That's not an argument to completely get rid of Python or bash. The cost of converting Python code to Go is near zero these days if there is a need to do so. Enough has been said about premature optimization.
> Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
If you need 10 libraries of certain versions to run a few lines of Python code, nobody calls that a script anymore. It becomes a proper project that requires proper package management, just like Go.
"You'd rather drive a compact car than an SUV? Might as well drive a motorcycle then!"
Perl is right there, requires no installation, and is on practically every unix-like under the sun. Sure it's not a good language, or performant, or easy to extend, but neither is python, so who cares. And, if anything, it's a bit more expressive and compact than python, maybe to a fault.
/*?sr/bin/env go run "$0" "$@"; exit $? #*/

That being said... use Go for scripting. It's fantastic. If you don't need any third party libraries this approach seems really clean.
I make computers do things, but I never act like my stuff is the only stuff that makes things happen. There is a huge software stack of which my work is just the final pieces.
The term “full stack” works fine within its usual context, but when viewed more broadly, it becomes misleading and, in my opinion, problematic.
And it's okay. It doesn't mean it should be this way for everyone else.
It is pretty common (and has been for at least two decades) for web devs to differentiate like so: backend, frontend, or both. This "both" part is almost always replaced by "full stack".
When people say this they just mean they do both parts of a web app and have no ill will or neglect towards systems programmers or engineers working on a power plant.
But it is already established in the industry, and fighting it is unlikely to yield any positive outcomes.
I think Java can run uncompiled text scripts now too (since Java 11, `java Hello.java` compiles and runs the file directly).
Additionally, it is even more powerful when used with go modules. Make every script call a single function in the shared “scripts” module and they will all be callable from anywhere symmetrically. This will ensure all scripts build even if they aren’t run all the time. It also means any script can call scripts.DeployService(…) and they don’t care what dir they are in, or who calls it. The arguments make it clear what paths/configuration is needed for each script.
// 2>/dev/null; exec go run "$0" "$@"

#!/usr/bin/env bash
""":"
if command -v uv > /dev/null
then exec uv run --script "$0" "$@"
else
exec python3 "$0" "$@"
fi
":"""

(The `""":"` line collapses to the no-op `:` command in bash, but opens a module docstring in Python, so each interpreter skips the other's half.)
//$HOME/.cargo/bin/rustc "$0" && ${0%.rs} "$@" ; exit
use std::env;
fn main() {
println!("hello, world!");
for arg in env::args() {
println!("arg: {arg}");
}
}
Total hack, and it litters ./ with the generated executable. But cute.

I think arg0 was always useful, especially when developing multifunctional apps like busybox that change behavior depending on the name they were executed as.
=begin
ruby $0; exit
=end
puts "Hello from Ruby"
Not immediately useful, but no doubt this trick will pop up at some random moment in the future and actually be useful. Very basic C99 too, though I'm not sure I'd want to script with it(!):

//usr/bin/cc $0 && ./a.out && exit

no need to name your program foo.go when you could just name it foo
Get started here: https://dev.to/yawaramin/practical-ocaml-314j
Something like //usr/bin/gcc -o main "$0"; ./main "$@"; exit
The main reason was to do all this without any dependencies beyond a C compiler and some POSIX standard library.
Oh come on, it's easy:
Does the project have a setup.py? if so, first run several other commands before you can run it. python -m venv .venv && source .venv/bin/activate && pip install -e .
else does it have a requirements.txt? if so python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run ...
else does it have a pipfile? pipenv install and then prefix all commands with pipenv run ...
else does it have an environment.yml? if so conda env create -f environment.yml and then look inside the file and conda activate <environment_name>
else does it have a uv.lock? then uv sync (or uv pip install -e .) and then prefix commands with uv run.
If you've checked out a repo or unpacked a tarball without documentation, sure.
If you got it from PyPI or the documentation indicates you can do so, then you just use your tooling of choice.
Also, the pip+venv approach works fine with pyproject.toml, which was designed for interoperability. Poetry is oriented towards your own development, not working with someone else's project.
Speaking of which, a project that has a pipfile, environment.yml, uv.lock etc. and doesn't have pyproject.toml is not being seriously distributed. If this is something internal to your team, then you should already know what to do anyway.
Acting as if these projects using whatever custom tool (and its associated config, by which the tool can be inferred), where that tool often isn't even advertised as an end-user package installer, are legitimate distributions is dishonest; and acting as if it reflects poorly on Python that this is possible, far more so. Nothing prevents anyone from creating a competitor to npm or Cargo etc.
1. https://mise.jdx.dev/lang/python.html
via https://gelinjo.hashnode.dev/you-dont-need-nvm-sdkman-pyenv-...
I respectfully disagree with this sentiment. JS is a fantastic Python replacement for scripts. Node.js has added all kinds of utility functions that help you write scripts without needing external dependencies. Bun, Deno, and Node.js can execute TS files (if you want to bring types into the mix). All 3 runtimes are sufficiently performant. If you do end up needing external dependencies, they're only a package.json away. I write all my scripts in JS files these days.
///usr/bin/env go run "$0" "$@"; exit
Note, the exit code isn't passed through due to:
https://github.com/golang/go/issues/13440

> How true this is, is a topic I dare not enter.
augroup fix
  autocmd!
  autocmd BufWritePost *.go
      \ if getline(1) =~# '^// usr/bin/'
      \ |   call setline(1, substitute(getline(1), '^// ', '//', ''))
      \ |   silent! write
      \ | endif
augroup END
I just want to see the full script where I execute it.
Look no further than babashka! It's a Clojure interpreter that has first-class support for scripting: great built-in libs for shelling out to other programs, file management, anything HTTP-related (client and server), parsing, HTML building, etc.
Babashka is my go-to tool for starting all new projects now. It has mostly everything you need. And if it’s missing anything, it has some of the most interesting and flexible dependency management of any runtime I’ve ever seen. Via the “pod protocol” any process (written in go/rust/java whatever) can be exposed as a babashka dependency and bundled straight in. And no separate “install dependencies” command is required, it installs and caches things as needed.
And of course you keep all of the magic of REPL-based development. It's got built-in nREPL support, so just by adding `--nrepl-server 7888` to your command, you can connect to it from your editor and edit the process live. I'm building my personal site this way and it's just SO nice.
Sorry for the rant but when superior scripting solutions come up, I have to spread the love for bb. It’s too good to not talk about!!
Example:
#! /usr/bin/env gorun
//
// go.mod >>>
// module foo
// go 1.22
// require github.com/fatih/color v1.16.0
// require github.com/mattn/go-colorable v0.1.13
// require github.com/mattn/go-isatty v0.0.20
// require golang.org/x/sys v0.14.0
// <<< go.mod
//
// go.sum >>>
// github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
// github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
// github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
// github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
// github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
// github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
// github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
// golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
// golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
// golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
// golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
// <<< go.sum
package main
import "github.com/fatih/color"
func main() {
color.Green("Hello, world!")
}
The shebang line can be replaced for compatibility with standard Go tooling:

/// 2>/dev/null ; gorun "$0" "$@" ; exit $?
//
// go.mod >>>
// ...

Awesome!
Or the venerable https://babashka.org/
> go run does not properly return the script error code back to the operating system and this is important for scripts, because error codes are one of the most common ways multiple scripts interact with each other and the operating system environment.
I struggle to think of how the answers provided here could be clearer or more satisfactory. Why write an article if you're going to half-ass your research? Why even mention this nothingburger sidetrack at all...? (Bait?)
[0] https://stackoverflow.com/questions/24678056/linux-exec-func...
It’s great for “robust” code, not for quick things that you’re okay with exploding in the default way.
So your goal was to waste your reader's time. Thanks.
This is something I generally believe, but I think it's particularly important for things like languages and runtimes: the idea of installing things "on" the OS or the system needs to die.
Per-workspace or per-package environment the way Go, Rust, etc. does it is correct. Installing packages globally is wrong.
There should not be such a thing as "globally." Ideally the global OS should be immutable or nearly so, with the only exception being maybe hardware driver stuff.
(Yes I know there's stuff like conda, but that's yet another thing to fix a fundamentally broken paradigm.)
Python has been trying to kill it for years; or at least, the Linux distros have been seeking Python's help in killing it on Linux for years. https://peps.python.org/pep-0668/ is the latest piece of this.
The use of the system as a workspace goes back to when computers were either very small and always personal only to one user, or when they were very big and administrated by dedicated system administrators who were the only ones with permission to install things. Both these conditions are obsolete.
At least it seems important on NixOS, I had to rewrite a few shebangs on some scripts that used /bin/bash and didn't work on NixOS.
The compilation command returned immediately, and I thought it had failed. So I tried again and same result. WTF? I thought to myself. Till I did an `ls` and saw an `a.out` sitting in the directory. I was blown away by how fast the golang compiler was.
You can abuse the falsiness of None to do things like `var or ""`, but this ground gets quite shaky when real bools get involved.
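A quick illustration of that shaky ground (hypothetical helper, toy values):

def label(value):
    # fine while value is None or a non-empty string...
    return value or "(none)"

print(label(None))   # (none)
print(label("hi"))   # hi
print(label(False))  # (none) -- a genuine False is swallowed too
print(label(""))     # (none) -- and so is an intentional empty string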
I think this points to some shortcomings of the shebang mechanism itself: That it expects the shebang line to be present and adhering a specific structure - but then passes the entire file with the line to the interpreter where the interpreter has to process (and hopefully ignore) the line again.
I know that situations where one piece of text is parsed by multiple different systems are intellectually interesting and give lots of opportunities for cleverness - but I think the straightforward solution would be to avoid such situations.
So maybe the linux devs should consider adding a new form for the shebang where the first line is just stripped before passing the file contents to the interpreter.
The only way to "pass the file contents" would be through the standard input stream, but the script might want to use stdin like normal, so this isn't an option.
Try the following in sh:
////////usr/local/go/bin/go
Well, how about this: I use ruby or python. And not shell.

Somehow I have been doing so for 25+ years. Never regretted it. Never really needed shell either. (OK, that's not entirely true; I refer to shell scripts. I do use bash as my primary shell, largely due to simplicity; I don't use shell scripts though, save for keeping a few legacy ones should I be at a computer that has no support for ruby, python or perl. But this is super-rare nowadays.)