uv python pin <version> will create a .python-version file in the current directory.
uv virtualenv will download the version of Python specified in your .python-version file (like pyenv install) and create a virtualenv in the current directory called .venv using that version of Python (like pyenv exec python -m venv .venv)
uv pip install -r requirements.txt will behave the same as .venv/bin/pip install -r requirements.txt.
uv run <command> will run the command in the virtualenv and will also expose any env vars specified in a .env file (although be careful of precedence issues: https://github.com/astral-sh/uv/issues/9465)
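Putting those together, a minimal sketch of the day-to-day flow described above (the version, requirements file, and module name are placeholders):

    uv python pin 3.12
    uv venv                              # downloads 3.12 if needed and creates .venv
    uv pip install -r requirements.txt   # installs into .venv
    uv run python -m your_app            # runs inside .venv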
Uv works more or less the same as I’m used to with other tooling in Ruby, JS, Rust, etc.
It was just that the near-constant conflicts with Poetry (and the errors about the project being out of sync) with a team developing in parallel were painful enough for me to suggest we try uv instead.
It seemed uniformly better, with a simpler Docker setup too (although I liked how Pants would create executable bundles and you could just ship those).
For some reason uv pip has been very slow, however. Unsure why, might be my org doing weird network stuff.
> uv will respect Python requirements defined in requires-python in the pyproject.toml file during project command invocations. The first Python version that is compatible with the requirement will be used, unless a version is otherwise requested, e.g., via a .python-version file or the --python flag.
— https://docs.astral.sh/uv/concepts/python-versions/#project-...
# Ensure we always have an up to date lock file.
if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then
uv lock
fi
Doesn't this defeat the purpose of having a lock file? If it doesn't exist or if it's invalid something catastrophic happened to the lock file and it should be handled by someone familiar with the project. Otherwise, why have a lock file at all? The CI will silently replace the lock file and cause potential confusion.

If you end up with an invalid lock file, it doesn't silently fail and move on with a generated lock file. The `uv lock` command fails with a helpful message and then errexit from the shell script kicks in.
The reason I redirected the uv lock --check command's errors to /dev/null is because `uv lock` throws the same error and I wanted to avoid outputting it twice.
For example, I made my lock file invalid by manually switching one of the dependencies to a version that doesn't match the expected SHA.
Then I ran the same script you partially quoted and it yields this error which blocks the build and gives a meaningful message that a human can react to:
1.712 Using CPython 3.13.3 interpreter at: /usr/local/bin/python3
1.716 error: Failed to parse `uv.lock`
1.716 Caused by: The entry for package `amqp` v5.3.4 has wheel `amqp-5.3.1-py3-none-any.whl` with inconsistent version: v5.3.1
------
failed to solve: process "/bin/sh -c chmod 0755 bin/* && bin/uv-install" did not complete successfully: exit code: 2
This error is produced by `uv lock` when the if condition evaluates to true.

With that said, this logic would be much clearer as the following, which I just committed and pushed:
if test -f uv.lock; then
uv lock --check
else
uv lock
fi
As for a missing lock file, yep it will generate one but we want that. The expectation there is we have nothing to base things off of, so let's generate a fresh one and use it moving forward. The human expectation in a majority of the cases is to generate one in this spot and then you can commit it so moving forward one exists.

That revised script seems to be correct now. It'll check the lock if it exists, otherwise will generate the lock file. If this is a rule that's in agreement with all the team it's fine!
> If you end up with an invalid lock file, it doesn't silently fail and move on with a generated lock file. The `uv lock` command fails with a helpful message and then errexit from the shell script kicks in.
I just wanted to challenge this, because that might not be how uv behaves, or maybe my tests were wrong.
I created a new test project with uv, added `requests` and manually changed the lock file to produce an error (just changed the last line, where it read `v2.32.0` or similar to `v3`). While `uv lock --check` failed with an error message, `uv lock` happily updated the file.
Therefore, while I think the updated script works, it doesn't seem to be functionally equivalent to the previous revision. Or maybe we are not talking about the same kinds of issues with the lock file. How do you cause the lock file error?
It's just a minor nitpick however. Thanks for taking the time to answer!
if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then uv lock; fi
Your new version no longer has the bug we are talking about. I don't know why you are trying to pretend it was never there though?

I'm not sure I understand what you mean?
1. I posted the article last week on my site
2. I noticed it was on HN today (yay)
3. I looked at the parent's comment
4. The parent's description isn't what happens with the original code
5. I made the comment you're replying to on HN to address their concerns and included a refactored version of the original condition for clarity then said I pushed the updates
6. I pushed the updates to both git and my site so both match up
There's nothing to pretend about and there's no bug, because both versions of the code do the same thing; the 2nd version is just easier to read and requires less `uv` knowledge to know what happens when `uv lock` runs with an invalid lock file. The history is in the HN comment I wrote and the git history.

It doesn't make sense to leave the original code in the blog post and then write a wall of text to explain how it worked fine but here's a modified version for clarity. Both versions of the code have the same outcome, which is ensuring there's a valid lock file before syncing.
What would you have done differently? I saw feedback, saw room for improvement, left an audit trail in the comments and moved on.
Here are the commits https://github.com/nickjj/docker-flask-example/commit/d1b7b9... and https://github.com/nickjj/docker-django-example/commit/a12e2... btw.
Yes, it is: both gchamonlive and myself pointed out that if your lock file exists and is out of date, your (previous) script would silently update it before installing. This would happen because `uv lock --check` would return false, triggering the call to `uv lock`.
Your new version no longer does that, because you removed `! uv lock --check` from the condition.
Check my original comment, it doesn't operate like this. You can try it yourself in the same way I outlined in the comment.
`uv lock` fails if your lock file has a mismatch and will produce a human readable error saying what's wrong.
Now you seem even more confused. Do you mean `uv sync` will fail? `uv lock` is literally the command you run when there's a mismatch between pyproject.toml and uv.lock to update uv.lock. That's why it's called lock.
Here's a full reproducer: https://gist.github.com/remram44/21c98db9a80213b2a3a5cce959d...
Check out branch "previous-blog". Run `docker build . -t uvtest`. You will see that it builds with no error, and if you run `docker run uvtest cat /app/uv.lock`, you will see that the uv.lock in the image is NOT the one in the repo. It has been updated silently, which is what gchamonlive and myself pointed out.
Now check out branch "master". Run `docker build . -t uvtest` again. You will see `error: The lockfile at `uv.lock` needs to be updated` which is what you say always happened.
If you do `uv sync --locked` it will not succeed if the lock file does not exist or is out of date.
Edit: I slightly misread your comment. I strongly agree that having no lock file or a lockfile that does not match your specified dependencies is a case where a human should intervene. That's why I suggest you should always use the --locked option in your build.
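In a Docker build that's roughly this (a sketch; paths are illustrative and it assumes uv is already on the PATH):

    COPY pyproject.toml uv.lock ./
    RUN uv sync --locked --no-dev   # fails the build if uv.lock is missing or out of date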
There are many projects that use pip-compile to lock things down. You couldn’t use python in a regulated environment if you didn’t. I’ve written many Makefiles that explicitly forbid CI from ever creating or updating the actual requirements.txt. It has to be reviewed by a human, or more.
To me, one of the big advantages of UV (and similar tools) is that they make locked dependencies the default, rather than something you need to learn about and opt into. These sorts of better defaults are sorely needed in the Python ecosystem.
For applications, it's recommended (but still optional) to commit lock files so that very specific and consistent dependencies are maintained to prevent arbitrary, unsupervised package upgrades leading to breakage.
When you're developing a library, you still want consistent, reproducible dependency installs. You don't want, for example, a random upgrade to a testing library to break your CI pipelines or cause delays while releasing. So you check in the lock file for the people working on the library.
But when someone installs the library via a package manager, that package manager will ignore the lock file and just use the constraints in the package metadata. This avoids any interoperability issues for downstream users.
I've heard of setups where there are even multiple lock files checked in so different combinations of dependency can be tested in CI, but I've not seen that in practice, and I imagine it's very much dependent on how the ecosystem as a whole operates.
- (fairly) reproducible builds, in that you don't want dependencies blind-updating without knowing about it
- removing "works on my machine" issues caused by different dependency versions
- being able to cache dependency download folders in CI and use the lockfile as the cache key
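To make the cache-key point concrete, a rough sketch of a CI step (the restore_cache/save_cache helpers are hypothetical stand-ins for whatever your CI provides):

    KEY="uv-$(sha256sum uv.lock | cut -d' ' -f1)"
    restore_cache "$KEY" ~/.cache/uv   # hypothetical helper: pull cached downloads if the key matches
    uv sync --frozen
    save_cache "$KEY" ~/.cache/uv      # hypothetical helper: store the cache under the lockfile-derived key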
Should I be committing the lock file?
A lock file ensures all installations resolve the same versions, and the environment doesn’t differ simply because installations were made on different dates. Which is usually what you want for an application running in production.
They say this but do the exact opposite as you point out:
> The --frozen flag ensures the lock file doesn’t get updated. That’s exactly what we want because we expect the lock file to have a complete list of exact versions we want to use for all dependencies that get installed.
I also feel like this handles rare edge cases, but it seems like a pretty straightforward way to do so.
There is never a reason for an automated system to create a lockfile.
Where the lockfile doesn't exist, it creates one from whatever is current, and the lockfile then gets thrown away later. So it's equivalent to what you're saying; it just avoids having two completely separate install paths. I think it's the correct approach.
I think 2 languages are enough, we don't need a 3rd one that nobody asked for.
I have nothing against Rust. If you want a new tool, go for it. If you want a re-write of an existing tool, go for it. I'm against it creeping into an existing eco-system for no reason.
A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.
https://github.com/python-pendulum/pendulum/issues/844
In my ideal world if someone wanted fast datetimes written in Rust (or any other language other than C) they'd write a proper library suitable for any language to consume over FFI.
So far this Rust stuff has left a bad taste in my mouth and I don't blame the Linux community for being resistant.
Having your python management tools also be written in python creates a chicken-and-egg situation. Now you have to have a working python install before you can start your python management tool, which you are presumably using because it's superior to managing python stuff any other way.

Then you get a bunch of extra complex questions like: what python version and specific executable is this management tool using? Is the actual code you're running using the same or a different one? How about the dependency tree? What's managing the required python packages for the installation that the management tool is running in? How do you know that the code you're running is using its own completely independent package environment? What happens if it isn't, and there's a conflict between a package or version your app needs and what the management tool needs? How do you debug and fix it if any of this stuff isn't actually working quite how you expected?
Having the management tool be a compiled binary you can just download and use, regardless of what language it was written in, blows up all of those tricky questions. Now the tool actually does manage everything about python usage on your system and you don't have to worry about using some separate toolchain to manage the tool itself and whether that tool potentially has any conflicts with the tool you actually wanted to use.
Need modern Python on an ancient server running with EOL’d distro that no one will touch for fear of breaking everything? uv.
Need a dependency or two for a small script, and don’t want to hassle with packaging to share it? uv.
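For that small-script case, one way uv handles it is inline script metadata; a minimal sketch (the script name and dependency are arbitrary):

    cat > fetch.py <<'EOF'
    # /// script
    # dependencies = ["requests"]
    # ///
    import requests
    print(requests.get("https://example.com").status_code)
    EOF
    uv run fetch.py   # uv resolves requests into a throwaway environment and runs the script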
That said, I do somewhat agree with your take on extensions. I have a side project I’ve been working on for some years, which started as pure Python. I used it as a way to teach myself Python’s slow spots, and how to work around them. Then I started writing the more intensive parts in C, and used ctypes to interface. Then I rewrote them using the Python API. I eventually wrote so much of it in C that I asked myself why I didn’t just write all of it in C, to which my answer was “because I’m not good enough at C to trust myself to not blow it up,” so now I’m slowly rewriting it in Rust, mostly to learn Rust. That was a long-winded way to say that I think if your external library functions start eclipsing the core Python code, that’s probably a sign you should write the entire thing in the other language.
I will be out enjoying the sunshine while you are waiting for your Pylint execution to finish
I can't help but think uv is fast not because it's written in Rust but because it's a fast reimplementation. Dependency solving in the average Python project is hardly computationally expensive, it's just downloading and unpacking packages with a "global" package cache. I don't see why uv couldn't have been implemented in Python and be 95% as fast.
Edit: Except implementing uv in Python requires shipping a Python interpreter, kinda defeating some of its purpose as a package manager that can install Python as well.
Interpreter startup time is hardly significant once in one invocation to set up your environment.
What makes Rust faster for downloading and unpacking dependencies? Considering how slow pip is and how fast uv is (hundreds of times faster), it seems naive to attribute the difference to the language.
>I think 2 languages are enough, we don't need a 3rd one that nobody asked for.
Enough for what? The uv users don't have to deal with that. Most ecosystems use a mix of languages for tooling. It's not a detail the user of the tool has to worry about.
>I'm against it creeping into an existing eco-system for no reason.
It's much faster, because it's not written in Python.
The tooling is for the user. The language of the tooling is for the developer of the tooling. These don't need to be the same people.
The important thing is whether the tool solves a real problem in the ecosystem (it does). Do people like it?
I do get the sentiment that a user of these tools, being a Python developer, could in theory contribute to them.
But if a tool does its job, I don't care if it's not "in Python". Moreover, I imagine there is a class of problems with the Python environment setup that would break the tool that's supposed to help you fix them, if the tool itself is written in Python.
If there are two versions of X, it becomes possible to use the wrong one.
If a tool to manage X depends on X, some of the changes that we would like the tool to perform are more difficult, imperfect or practically impossible.
Look at the number of stars ruff and uv got on GitHub. That's a meteoric rise. So they were validated with ruff and continued with uv; this we can call "was asked for".
> I'm against it creeping into an existing eco-system for no reason.
It's not for no reason. A lot of other things have been tried. It's for big reasons: good performance, and, secondly, independence from Python is itself a feature. When your Python-managing tool does not depend on Python, it simplifies some things.
From its homepage: https://rye.astral.sh/
> If you're getting started with Rye, consider uv, the successor project from the same maintainers.
> While Rye is actively maintained, uv offers a more stable and feature-complete experience, and is the recommended choice for new projects.
> Having trouble migrating? Let us know what's missing.
"I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself."
Is there anything being done in uv that couldn't be done in Python?
> Is there anything being done in uv that couldn't be done in Python?
Speed, at the very least.
You could just ignore uv and use whatever you want...
In an ecosystem where the primary implementation of the language is in C and nearly all native extensions are written in C do you really not know the answer to that?
They've been teaching C in universities for like 40 years to every Computer Science and Engineering student. The number of professionally trained developers who know C compared to Rust is not even close. (And a lot of us are writing Python because it's easy and productive, not because we don't know other languages.)
PS: the government and others have all recommended moving from C/C++ to Rust... It's irrelevant whether or not that's well-founded; it simply is.
And plenty of other cli tools have been successfully and popularly ported to Rust.
> If Python developers were the inventors of uv - they'd have invented uv
However, in both cases (uv and Rye) it took someone with a Rust background to build something that actually shook up the status quo. With the core PyPA people mostly building on incremental improvements to pip, and Poetry essentially ignoring most PEP efforts, things weren't really going to go anywhere.
I detailed this in another comment but pip (via requirements.txt): 8.1s, poetry: 3.7s, uv: 2.1s.
Not even 10x against pip and certainly not against poetry.
That said, the speed is only one reason I use it. I find its ergonomics are the best of the Python tools I've tried. For example, it has better dependency resolution than Poetry in my estimation, and you can use the `uv run --with` command to try things before adding them to your environment.
I watched the video and he does mention it going from 30s to 3s when switching from a requirements.txt approach to a uv based approach. No comparison was done against poetry.
I am unable to reproduce these results.
I just copied his dependencies from the pyproject.toml file into a new poetry project. I ran `poetry install` from within Docker (to avoid using my local cache), via `docker run --rm -it -v "$(pwd)":/work python:3.13 /bin/bash`, and it took 3.7s.
I did the same with an empty repo and a requirements.txt file and it took 8.1s.
I also did through `uv` and it took 2.1s.
Better performance? Sure. A lot better performance? I can't say that with the numbers I got. 10x performance? Absolutely not.
Also, this isn't a major part of anybody's workflow. Docker builds happen typically on release. Maybe when running tests during CI/CD after the majority of work has been done locally.
https://news.ycombinator.com/item?id=44359183
I agree it would be better if it were in Python, but PyPA did not step up, for decades! On the other hand, it is not PowerShell or Ruby; it is a single deployed executable that works. I find that acceptable, if not perfect.
I updated a Rust-implemented wheel to 3.13 compatibility myself, and literally all that required was bumping pyo3 (which added support back in June) and adding the classifier. AFAIK cryptography had no trouble either; IIRC what they had to wait on was a 3.13-compatible cffi.
> I'm sure some of the changes are going too far. We are open to revert them if there's an interest from maintainers to merge this PR :)
Notably they bumped the bindings (PyO3) for better architecture coverage, and that required some renaming, as 0.23 completed an API migration.
Cool story bro.
I'm totally against Python tooling being in the dismal disarray it has been in for the 30 years I've been using the language, and if it takes some Rust projects to improve upon it, I'm all for it.
I'd also rather not have the chicken-and-egg dependency issue of Python tooling written in Python.
>A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.
Somehow the availability and wide knowledge of C didn't make anyone bother writing a datetime management lib in C and making it as popular. It took those Pendulum Rust coders.
And you could of course use pytz or dateutil or some other library, but no, you wanted to use the Rust-backed Python lib.
Well, when you start the project yourself, you get to decide what language it would be in.
There is a reason: tools that exist today are awful and unusable if you ever wrote anything other than python.
I'm saying it because the only way I can see someone not realizing it is that they have never seen anything better.
Okay, maybe C and C++ have even worse tooling in some areas, but Python is still near the top of the list for worst tooling.
However rust is a thousand times faster than python.
At the end, if you don't like it don't use it.
And thus rust is used to either make tools, or build libraries (de novo or out of rust libraries), which plays to both strengths.
Most programmers I've met were beginners, and they need something easier to work with until they can juggle harder concepts easily.
By default `uv` won't generate `pyc` files, which might make your service much slower to start.
See https://docs.astral.sh/uv/reference/settings/#pip_compile-by...
I stumbled on this by porting something that was previously using pip, and that surprisingly different default has been a footgun.
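If you hit this, you can opt back in explicitly; a minimal sketch, assuming the uv pip interface the port was using (the requirements file is a placeholder):

    uv pip install --compile-bytecode -r requirements.txt
    # or set it once for the whole build:
    export UV_COMPILE_BYTECODE=1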
> In docker you can just raw COPY pyproject.toml uv.lock* . then run uv sync --frozen --no-install-project. this skips your own app so your install layer stays cacheable. real ones know how painful it is to rebuild entire layers just cuz one package changed.
> UV_PROJECT_ENVIRONMENT=/home/python/.local bypasses venv. which means base images can be pre-warmed or shared across builds. saves infra cost silently. just flip UV_COMPILE_BYTECODE=1 and get .pyc at build.
> It kills off mutable environments. forces you to respect reproducibility. if your build is broken, it's your lockfile's fault now. accountability becomes visible
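Stitched together, that layer-caching pattern looks roughly like this (a sketch; the base image, paths, and project layout are illustrative):

    FROM python:3.13-slim
    COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
    ENV UV_COMPILE_BYTECODE=1
    WORKDIR /app
    # dependency layer: only rebuilt when pyproject.toml / uv.lock change
    COPY pyproject.toml uv.lock ./
    RUN uv sync --frozen --no-install-project
    # app layer: code changes no longer invalidate the dependency install
    COPY . .
    RUN uv sync --frozen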
Speed is okay, but security of a package manager is far more important.
https://chaitalks.tech/uv-a-modern-python-package-manager-in...
And while I'm here... how does uv go about mitigating typosquatting risks? I could imagine it issuing warnings if, say, it notices you requesting "dlango", which would work OK for the top 10%, but are you suggesting there's some more general solution built into uv?
I did a quick search but 'typosquatting' is not an easy string to cut through.
Python packages are often just a zip file full of py files, with one of them called 'setup.py'. Running this file installs the package (originally using [distutils](https://docs.python.org/3.9/install/index.html#install-index)). This installation may fail if dependencies are not present, but there’s no method provided for installing those dependencies. You’re supposed to read the error message, go download the source for the missing dependencies, then run their setup.py scripts to install them.
b) pip now has an option _not_ to run arbitrary code by disallowing source distributions, by passing --only-binary :all:
"By default, pip does not perform any checks to protect against remote tampering and involves running arbitrary code from distributions. It is, however, possible to use pip in a manner that changes these behaviours, to provide a more secure installation mechanism." https://pip.pypa.io/en/stable/topics/secure-installs/
Given how often the python community already deals with breaking changes, it shouldn't be much different for pip to adopt saner defaults in a new major version.
In the end, every package manager (so far at least) downloads and runs untrusted (unless you've verified it manually) 3rd party code. Whatever the implementation-level security difference between uv and pip is, it's dwarfed by the risk of not having a way of handling untrusted 3rd party code in the first place.
So every year we get a new “new way” to do it. Like that xkcd… this time this is the standard that will work!
Some of these are uv following the standards while pip is still migrating away from legacy behavior, some of these are design choices that uv has made, because the standard is underdefined, it's a tool specific choice, or uv decided not to follow the standards for whatever reason.
The current Dockerfile with pip is as simple as:
COPY --chown=python:python requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir --compile -r requirements.txt
COPY --chown=python:python . .
RUN python -m compileall -f .
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
https://docs.astral.sh/uv/guides/integration/docker/#using-u... (We'd recommend pinning the version or SHA in production.)
RUN --mount=type=cache,target=/root/.cache/pip pip install ...
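Putting the pieces above together, a roughly equivalent uv version of that pip fragment might look like this (a sketch only, mirroring the flags of the pip snippet; pin the uv image tag or SHA as noted above rather than using :latest):

    COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
    COPY --chown=python:python requirements.txt .
    RUN --mount=type=cache,target=/root/.cache/uv \
        uv pip install --system --compile-bytecode -r requirements.txt
    COPY --chown=python:python . .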
As someone who usually used platform Pythons, despite advice against that, uv is what finally got me to stop doing so.
- Removing requirements.txt makes it harder to track the high-level deps your code requires (and their install options/flags). Typically requirements.txt should be the high level requirements, and you should pass them to another process that produces pinned versions. You regenerate the pinned versions/deps from the requirements.txt, so you have a way to reset all dependencies as your core ones gain or lose nested dependencies.
- +COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/ seems useful, but the upstream Docker tag could be repinned to a different hash, causing conflicts. Use the hash, or use a different way to stage your dependencies and copy them into the image. Whenever possible, confirm your artifacts match known hashes.
- Installing into the container's /home/project/.local may preserve the uv pattern, but it's going to make a container that's harder to debug. Production containers (if not all containers) should install files into normal global paths so that it's easy to find them, reason about them, and use standard tools to troubleshoot. This allows non-uv users to diagnose the running application, and removes extra abstraction layers which create unneeded complexity.
- +RUN chmod 0755 bin/* && bin/uv-install - using scripts makes things easier to edit, but it makes it harder to understand what's going on in a container, because you have to run around the file tree reading files and building a mental map of execution. Whenever possible, just shove all the commands into RUN lines in the Dockerfile. This allows a user to just view the Dockerfile and know the entire execution without extra effort. It also removes some complexity in terms of checking out files, building Docker context, etc.
- Try to avoid docker compose and other platform-constrained tools for the running of your tests, for the freezing of versions, etc. Your SDLC should first be composed of your build tools/steps using just native tools/environments. Then on top of that should go the CI tools. This separation of "dev/test environment" from CI allows you to take your "dev/test environment" and run it on any CI platform - Docker Compose, GitHub Actions, CircleCI, GitLab CI, Jenkins, etc - without modifying the "dev/test environment" tools or workflow. Personally I have a dev.sh that sets up the dev environment, build.sh to run any build steps, test.sh to run all the test stuff, ci.sh to run ci/cd specific stuff (it just calls the CI/CD system's API and waits for status), and release.sh to cut new releases.
This is what pushed me to use Poetry.
A simple "requirements.in" I did over this weekend was a single dependency:
miniboss >=0.4, <0.5
And used pip-compile to pin all transitive dependencies:

pip-compile -o requirements.txt requirements.in
This generated a "requirements.txt" with 14 dependencies with pinned versions: attrs==25.3.0
...13 more dependencies
It's then only a matter of running "pip install -r requirements.txt" in the venv for my "application" (wrapper scripts for Docker).

I've largely settled on this scheme for work and personal projects because it's simple (the only dev dependency is pip-tools or uv), and it doesn't tie me to a particular Python project management tool (pipenv, pdm, poetry, etc.).
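For what it's worth, uv can slot into the same scheme without changing the files; a sketch of that swap using its pip-compatible interface:

    uv pip compile -o requirements.txt requirements.in
    uv pip sync requirements.txt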
I thought it only locks down hashes?
The very first section of the article talks about replacing requirements.txt with pyproject.toml which contains a similar high-level list of deps
Only thing I’m not sure about: why does it matter whether your list of requirements is in requirements.txt vs pyproject.toml? Isn’t it just one file vs another?
Just install it and try running something using the --with flag. That’s where I became intrigued.
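For instance (the package and one-liner are arbitrary):

    uv run --with rich python -c "from rich import print; print('[bold green]hello[/]')"

It pulls the package into a temporary environment for that one invocation, without adding it to your project.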