At my work (a university research lab) the Ph.D. students have to publish their thesis as a book to defend their degree. They are free to make the image for the cover, which is a very nice touch and gives you artistic freedom in what is supposed to be one of the most important moments of your career (I went for a picture of the chip I designed during my research).
For the past 3 years or so, all we get are generic AI-generated sciency-looking figures on the covers, and it is depressing.
"Don't judge a book by its cover" is for people who don't actually read books. You can't necessarily tell when a book is good based on its cover, but you can absolutely tell with a high certainty that a book with a lazy, crappy, low effort cover is probably going to have a similar level of care and attention put to the contents. It's easily at least a 95% hit rate. Is not overly critical to see something presented lazily and assume it will also be lazy inside, and in this kind of field, I'd also expect that if the cover is AI generated, most of the content could easily be as well.
This isn't the first tab I've closed at the first sign of lazy AI usage and it probably won't be the last.
In that case, throwing in a generated image without touch-up shows the author's lack of care for a work that's not as fleeting as a podcast. It's not that hard to type in the correct words and/or use a non-wobbly font with Paint.NET / Photopea / GIMP / Affinity Photo / Photoshop / <your favorite pixel editor here>. It also shows a use of AI without supervision, which is kind of a red flag.
I used to listen to Michael Kennedy a lot when my day job was Python, and still occasionally do, so this may get a pass, but it's still a bad signal in my book.
In my mind both are first steps to something more "proper" but one is at least hand-crafted artisan-ish compared to the other.
I have no qualms about using AI-generated images as placeholder stuff or as a first step in an iterative process, but when someone just slaps the image in without the least bit of retouching, it ends up looking kitsch.
https://blobs.talkpython.fm/00-readers-brief-intro.mp3?cache...
Put more effort into respecting other art forms and you might not get this reception next time.
The podcast has also been going for several years, since before it would have been possible to generate audio podcasts with AI.
What you are referring to is the Reader's Brief, and you're taking it out of context. If you actually went through all of them, you would find this as the very first "track":
https://blobs.talkpython.fm/00-readers-brief-intro.mp3?cache...
It clearly explains that this is an extra to the book, that it's created with AI, and that it's meant to be fun. It makes minor mistakes, but enjoy it if you like, or just don't listen and read the book without it.
Here is the transcript from that opening track in case you don't want to listen:
Hello and welcome to the Reader's Brief for the Talk Python in Production book. This is your author, Michael Kennedy. I am thrilled you're interested in the book.
This companion audio series features short, exploratory conversations around each chapter of the book. They typically range from two and a half to four minutes per chapter. You can listen to each chapter's Reader's Brief just before reading a chapter to help you get in the ideal mindset. Or you can listen afterward and let your mind wander and expand on the ideas covered in the chapter. It's all up to you. Once I neared completion of the book, I had been brainstorming how I could offer an audio version. However, there are sufficiently many code listings that are too important to the content to really support a word-for-word audio book. It would be rough to listen to me narrate a Nginx configuration file to you, for example.
Thus, I came up with the reader's brief idea. A conversation around topics of each chapter in a brief two to four minute format that adds to the book rather than a traditional spoken true audio book. I do want to set expectations a bit. The reader's briefs are spoken by AI. Very good ones, by the way. So you will hear an occasional misspoken acronym such as Nginx or SSH. But enjoy these for what they are and don't expect perfection. They're really interesting ideas and background stories and thoughts on each chapter.
I hope you find that they add value to your experience. I know they did to mine. Finally, I have added the necessary metadata to the MP3 files. You should be able to add them all to your music library and they will appear in a single album. I have also created a single MP3 file for all the chapters together and put chapter markers in that MP3 file. Pick which one you want. Thanks for the interest in my book. I'll be with you page by page as you learn from it. Cheers.
Likely safe to assume that everything on this site is AI generated, including the book.
I skipped through it and heard it pronounce nginx as en-gee-onix and only then did I realize it was fake.
I appreciate the idea.
Love the podcast! I'm sure it isn't slop of any kind - I've really enjoyed reading your recent blog posts about Talk Python's web setup, so I'll definitely be giving it a read.
I had the same initial thought, but I was skimming the page and came across this line:
> Then, see how to deploy a Flask+HTMX app via Granian, wire it into NGINX, and ensure automatic startups with systemd.
So I've just discovered that https://github.com/emmett-framework/granian exists...
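For anyone else seeing Granian for the first time: a minimal sketch of the kind of command that line from the book implies, where app:app is a hypothetical Flask module path (NGINX would then proxy to this port):

# serve a Flask (WSGI) app with Granian, bound to localhost for NGINX to proxy to
granian --interface wsgi --host 127.0.0.1 --port 8000 app:app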
You can read the first 1/3 online for free. The rest is available DRM free.
But for the site as you mention, there is no dark mode. Is this some janky extension you use that isn't working? That's not the site's fault.
# create a named volume so umami's data survives container restarts and rebuilds
docker volume create umami-volume
How do you manage data backup and restore? For example, where with regular postgres (which is what they are using, IIRC) you would run `pg_dump --dbname=umami ...`, with docker you run:
docker exec umami pg_dump --dbname=umami ...
Or something almost exactly like this (just from memory here; I wrote that script a year ago).
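A slightly fuller sketch of the pattern, assuming (as in the snippet above) the container, user, and database are all named umami; adjust to your setup:

# dump the database from inside the running container to a file on the host
docker exec umami pg_dump -U umami umami > umami-backup.sql
# restore: pipe the dump back into psql inside the container (-i keeps stdin open)
docker exec -i umami psql -U umami umami < umami-backup.sql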
edit: console command for anyone else struggling to read this `document.documentElement.style.setProperty('--bulma-strong-color', '#000');`
This site doesn't even have two themes; that CSS is just there to break the bold text!
Actually, one of the more interesting parts of the Google SRE book was that they don't try to aim for zero downtime. They consider the background error rate of any network request, and optimising much beyond this is counterproductive.
Even for individual services they make a point of not trying to make them perfectly available, as this means downstream services are less likely to build in adequate provision for failure.
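To put rough numbers on the error-budget idea: a 99.9% availability target permits about 0.1% of 30 x 24 x 60 = 43,200 minutes, i.e. ~43 minutes of downtime per month, while 99.99% permits only ~4.3 minutes. That extra nine buys roughly 39 minutes a month, usually at a hugely disproportionate engineering cost.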
Those tech giants got to where they are by recognising specifically that they don't have "no downtime" requirements.
"Move fast and break things" isn't the mantra of companies with zero downtime requirements.
Was hoping the book would cover data persistence.
I ran code that way for years. But now we have 23 different services: web apps, APIs, and database servers, both my code and other self-hosted services.
I would NOT run 23 projects/servers (3 versions of postgres) this way. Like so much, it depends. FWIW, the book goes into depth about these trade-offs.
uv tool run "httpx[cli] @ git+https://github.com/encode/httpx"
To be clear, in this example I'm not pulling a package published on PyPI; I'm running the HEAD of that git repo (I could use a branch or tag instead). I could use the "uvx" shortcut instead of "uv tool run". I could specify a specific Python version (either one already installed on this OS, or a dist which uv downloads for me). This caches the deps in an isolated virtual env for me, and it only downloads the deps on the first run.
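For instance, a sketch of those variants (the 3.12 pin and the example.com argument are just illustrations; httpx is the command the package installs):

# uvx shortcut with a pinned Python and the same git requirement; trailing args go to the httpx CLI
uvx --python 3.12 "httpx[cli] @ git+https://github.com/encode/httpx" https://example.com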
This forced my browser to reload the page, which defeats the entire purpose of anchoring and fragment-based navigation.
It is as frustrating as when people use "200% faster" to mean exactly the same thing as "twice as fast", and "100% faster" to mean the same thing as well.
It's just a wording annoyance that always gets to me, so it's not a big deal. I always prefer "x% as fast" and "x% the price", because they're largely unambiguous.
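Concretely: if a task took 10 s and now takes 5 s, it is twice as fast, i.e. 100% faster (speed went from x to 2x, a 100% increase). Read strictly, "200% faster" means 3x the speed, i.e. finishing in about 3.3 s.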
An 8 CPU / 16 GB RAM server at Hetzner is $30 or so per month. It's $200+ at AWS / Azure.
Hetzner includes 4 TB of bandwidth for free; at AWS / Azure bandwidth is $92.16 / TB, or $368.64 for those same 4 TB.
That is where the 6x comes from. It's described in detail with that math in the book BTW.
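Spelling it out from those figures: $200 / $30 ≈ 6.7x on compute alone; if you also pay the $368.64 bandwidth overage, a traffic-heavy month is ($200 + $368.64) / $30 ≈ 19x, so 6x is if anything conservative.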
I interviewed the creator of Granian on Talk Python BTW.
It is pretty light reading, name-dropping a lot of software without going into details.
As always with Python: These books do not tell you the downsides, and the future of Python is uncertain because the governance has been taken over by a bunch of mediocre weirdos. Python core has always suffered from the problem that occasionally smart people implement something and then leave, but the majority of core devs are pretty dumb and they can now vote in their own after van Rossum left.