Under the hood it's effectively running:
docker run --rm -v "$PWD":/workspace -w /workspace \
python:3.11-slim \
sh -c 'pip install -q patchpal && <command>'
Which, cool, great, I sure love "pip install"ing every time instead of just baking a single container image with it already installed.
This isn't any sort of fancy or interesting sandboxing; it's shelling out to "docker run", and not even using Docker as well as it could.
Quoting from the linked page:
> The tradeoff is ~5-10 seconds of container startup overhead
Sure, maybe it's 5-10 seconds if you use containers wrong. Unpacking a root filesystem and spinning up a clean mount namespace on Linux takes a few milliseconds, and taking more than a second means something has gone wrong, like "pip install"ing at runtime instead of at build time for some reason.
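Easy enough to measure on your own machine (assuming the image is already pulled, so you're timing startup rather than the download):
time docker run --rm python:3.11-slim true
# vs. paying for the install on every run:
time docker run --rm python:3.11-slim sh -c 'pip install -q patchpal'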
I can spin up a full Linux VM and run some code in it in less than 5 seconds.
Obviously the correct thing for such a use case would be building their own image with whatever tools are needed and then using that.
Unfortunately, then they’d probably get roasted for not maintaining the image well enough and not having proper enough automation set up to keep it recent in Docker Hub or wherever, which they’d then also have to do. On an individual level, it’s easier to just hold it wrong and do what works. Could also build the image locally once, but again, more work.
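For what it's worth, "build the image locally once" is only a couple of lines; something like this, with the tag name made up for illustration:
# Dockerfile
FROM python:3.11-slim
RUN pip install -q patchpal

# build once, then reuse the tag on every run
docker build -t patchpal-local .
docker run --rm -v "$PWD":/workspace -w /workspace patchpal-local <command>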
I think the ideal DX on Docker’s side would be:
docker run --pre-requisites "pip install …" some-container
Basically, support for a list of setup commands, which would build an intermediate image locally and reuse it whenever the same base image and prerequisite commands show up in a run command. Then you could avoid doing unnecessary init in the container itself and keep using silly little scripts without having to push reusable images and keep them up to date yourself.
Really, the fact that any package gets that many downloads is crazy to me. (I think the main reason the boto3 ecosystem stuff tops the charts is that they apparently publish new wheels daily.) How many devices run Python? How many of those need, say, NumPy? How many of those really care about being on the latest version all the time, and can't use a cached version? (Granted, another problem here is that you can't readily tell pip "prefer a cached version if anything already cached is usable". Pip doesn't even know what's in its own cache unless a wheel was built locally; the cache is really only there to power a caching HTTPS proxy, so it stores artifacts keyed by a hash of the original download URL.)
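On that pip parenthetical, a quick illustration (assuming pip 20.1+ for the cache subcommands; numpy is just the example package):
pip cache dir    # where the cache lives
pip cache list   # only enumerates wheels pip built locally, not downloaded artifacts
# the closest thing to "prefer whatever is already cached" is keeping your own wheel directory:
pip download -d ./wheels numpy
pip install --no-index --find-links=./wheels numpy
And going back to the --pre-requisites idea above: you can approximate that caching behavior today with a few lines of shell, where the tag scheme below is just made up for illustration:
base=python:3.11-slim
prereq="pip install -q patchpal"
tag="prebaked:$(printf '%s %s' "$base" "$prereq" | sha256sum | cut -c1-12)"
# build the intermediate image only if this (base, prereq) pair hasn't been built yet
docker image inspect "$tag" >/dev/null 2>&1 || docker build -t "$tag" - <<EOF
FROM $base
RUN $prereq
EOF
docker run --rm -v "$PWD":/workspace -w /workspace "$tag" <command>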
That doesn’t sound right - the LLM told them it was a fantastic idea!
Works great when you have a clear verification signal (tests passing), but what drives convergence when that signal isn’t well-defined?
Launch an AI agent to operate on production servers/SQL safely using tmux
What makes this shortsighted is that EV development isn't just about the car — it's about building the software and battery supply chain competence that will define the next 20 years of automotive. You can't pause that for a few years and catch up later. The institutional knowledge, supplier relationships, and engineering talent move to whoever is actively building.
This feels like the Kodak pattern: a profitable incumbent deciding the future can wait because the present is still comfortable.