> This stack gives me full control: no subscriptions, no vendor lock-in,
> and no risk of platforms disappearing or changing policies.
I'm not trying to dunk on the author, but this sentiment encapsulates a tremendous paradigm shift in the meaning of "full control", compared to, say:

* Writing in Obsidian
* Syncing files via git
* Publishing via nginx hosted on the cheapest possible VPS (with HTTPS via Let's Encrypt)
Running a static blog is one of the easiest scenarios to self-host, because even if you get slashdotted, nginx should be able to handle all the requests even if you host it on a potato.
It's not free, but you can get a VPS for $20-$30 a year.
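For the curious, "the whole stack" really is small; a minimal sketch, assuming a Debian-ish VPS, Hugo as the generator, and example.com as a placeholder domain:

```bash
# One-time server setup: nginx plus a Let's Encrypt certificate.
sudo apt install nginx certbot python3-certbot-nginx
sudo certbot --nginx -d example.com    # obtains the cert and wires up HTTPS

# Publish loop, run from the machine you write on:
hugo                                   # renders the site into ./public
rsync -avz --delete public/ you@example.com:/var/www/html/
```

certbot also installs an auto-renew timer, so the ongoing maintenance is mostly `apt upgrade`.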
This isn't the best fit for everyone, but it seems weird to talk about "full control" and "no risk of platforms disappearing" when you're relying on the free tier of two external companies
The overhead of switching from Cloudflare to a VPS if they needed to is really not that much larger than switching from one VPS to another, so they likely figured it wasn't worth paying $30/yr to own the whole stack, as long as they architected it such that they could own the whole stack if they needed to.
While I know how to maintain a server, having worked as a SysOp for many years supervising much bigger infrastructures, I just didn't want to also spend my free time maintaining some remote host, installing security updates, checking fail2ban, and so on.
Self hosting is always more than just "installing nginx".
>> but it seems weird to talk about "full control" and "no risk of platforms disappearing" when you're relying on the free tier of two external companies
This ignores the fact that OP is actually storing the content on their own devices. Sure, OP is using some cloud services to sync them around and GitHub/Cloudflare to eventually publish it to the world. But there's no real dependency. OP could always and easily (!) move to another similar setup.
A fun hobby of mine is Googling “how I built this blog with [next.js/Gatsby/etc etc]” and going to page 10 of the Google results.
It’s just hundreds of developer blogs with no blog posts other than the ones announcing how the blog was built.
ooof.. this chart is making my day, not because of its content, but its presentation. Apparently it only works if you have very different scales for the x and y axes. Since both of them use the same metric (number of posts), it only works if, say, the x axis runs from 1 to 10 while the y axis is either log-scale or runs from 1 to 100 or so. Or you choose a different metric for the x axis, like "share of posts about blog setups".
I will of course link to your site. It’s to illustrate a point about my blogspot blog haha…
I’ve been looking to switch over to something else. But I’ve been actively blogging since 2006 and I haven’t found a good enough way and platform to switch over to.
Would you consider participating in a private beta of https://exotext.com/, a simple blogging platform I'm building? (example blog: https://rakhim.exotext.com/)
If so, please send me an email: hello at rakhim.org
(I'll update my website soon and will make the licensing clearer.)
This picture was used here: https://www.janromme.com/2025/04/are-there-any-good-blogging...
Thanks a lot. That graph is useful, and a nice shout-out is always good.
I have a 7,000 word blog post on how and why I switched which I didn't publish yet because I wanted to wait 6+ months with Hugo to make sure I ironed out all of the kinks. I guess I'm the anti-case for this one! Maybe I should post this one next week. It's been quite stable.
Is there a name for this phenomenon where this actually turns out to be true? Closest I can think of is "During a gold rush, sell shovels".
Note that Obsidian's markdown editing experience is _different_ from (but not necessarily better or worse than) what you'd get in a typical IDE. So while the choice seems weird to me, it absolutely makes sense if the author prefers the feature set that Obsidian offers. Being supported by so many different editors is one of markdown's strengths, after all, and this kind of editor-portability fits right in with the other parts of "Fully Owned" from the blog post.
But I honestly despise writing raw markdown in an IDE. If I'm writing (not coding), I need it to be somewhat visual, which means I want WYSIWYG -- and Obsidian is an excellent markdown editor, even if you don't use the other features.
My reasons for not liking writing "raw" markdown:
- Long links take up too much space. I put them behind link text so they'd be hidden, but in raw markdown the URL is still right there
- No syntax highlighting in code blocks
- Text wrapping/font is typically not configured for easy reading of text
- A ton of markdown features are for visual formatting: highlighting, bold, underline, strike-through, inline code, etc. If you stay in a raw IDE view with no preview, you never get the visual benefits.
- When I'm using markdown, I'm often mostly reading, and doing some writing, but even when I'm writing, I'm re-reading what I wrote constantly. It's annoying to switch to preview mode. Writing mode in IDEs isn't a pleasant reading experience unless you do a lot of configuration. (depending on the IDE of course)
I mean, writing raw md is fine for tiny little things. But because reading & writing are so linked, I don't like separate modes for each. I want to write in the visual mode I read in.
<cough> You didn't grow up with WordPerfect 5.1 for DOS with reveal codes on, did you? :)
Young me was like "how can you edit a document if you can't see the codes?!" Still to this day I wish I had it in word processors.
Hugo is a great choice for an SSG, I find it logical and intuitive. As for extending it with a CMS front end, I looked at Decapcms.org - formerly Netlify CMS - it gives you the WYSIWYG editor and you can hook it up to an asset management platform like Cloudinary for images.
BTW just checked Cloudinary pricing - generous free tier looks like plenty for most blogs.
Github has an automated action that then uses Pelican (a python based static site generator) to convert to HTML and deploys it to my VPS where it is served by Caddy.
Makes it very easy to have a WYSIWYG interface, the blog pages look basically identical to Obsidian.
https://mordenstar.com/blog/obsidian-to-pelican-exporter
Pelican static site generator: https://getpelican.com/
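For reference, the local (non-Actions) version of that pipeline is tiny; a sketch, with paths and hostnames as placeholders:

```bash
pip install "pelican[markdown]"                 # Pelican plus Markdown support

pelican content -o output -s publishconf.py    # render ./content to HTML in ./output
rsync -avz --delete output/ you@myvps:/var/www/blog/   # Caddy serves this directory
```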
I really wanted to use animated WebP but the iOS decoding is SUPER unreliable often resulting in choppy inconsistent framerates.
One thing I don't do but I know is more common is using <picture> elements to serve scaled assets depending on whether the user is coming from mobile vs desktop.
Depending on what you use for your blog, you might look and see if the SSG has plugins for media optimization. By the time I figured that out, I had already handrolled my own. :p
No new editors to learn and one gets instant access to copilot etc.
[1] https://simonwillison.net/2023/Apr/2/calculator-for-words/#c...
(Wrote about this: https://stanislas.blog/2025/02/writing-workflow-llm/)
Those risks still exist. GitHub and Cloudflare can do these things at any moment.
I hosted mine in AWS S3 with CloudFront for HTTPS and a custom domain. Using Hugo too. I wrote a CloudFormation template for the whole setup [0]; just create the CFN stack using the template in AWS. Then copy the public html to S3 using "aws s3 sync" and done!
[0] https://github.com/neutor/bite-sized-aws/blob/main/static-s3...
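Once the CFN stack exists, publishing stays a couple of commands; a sketch, with the bucket name and distribution ID as placeholders:

```bash
hugo                                              # build into ./public
aws s3 sync public/ s3://my-blog-bucket --delete  # mirror, pruning deleted pages
# Optional: flush CloudFront so changes show up immediately.
aws cloudfront create-invalidation --distribution-id E2EXAMPLE --paths "/*"
```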
Jekyll is slow for large content (my blog is huge), Hugo is fast. But I want to stay as mobile and lean as possible. I've tried and with a few changes in the template, I can move from Jekyll to Hugo in a weekend. I've also tried to stay as decoupled as possible from any templating engine (Jekyll), and hence rely mostly on pure HTML + CSS, while Jekyll is used more as the task runner. All the posts are separated by "YYYY" as folders and none of the posts have frontmatter.
I can also move between Github Pages, CloudFlare Pages, Netlify, or Vercel within hours if not minutes by just pointing the DNS to the ones that I want. I did this when Github Pages went down quite a few times about 3 years ago.
I almost went Markdown > Pandoc + Make but not worth it right now.
.
├── .devcontainer
│   └── devcontainer.json
├── content
│   └── posts
│       ├── 1718983133-post-1.md
│       ├── 1720321560-post-2.md
│       └── 1740070985-post-3.md
├── go.mod
├── go.sum
├── hugo.toml
└── README.md

3 directories, 8 files

`themes` are small in size, why not copy them into your repository to keep the build hermetic?
https://discourse.gohugo.io/t/how-to-add-a-theme-using-modul...
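If the theme is pulled in as a Hugo module (as that thread describes), vendoring is built in; a minimal sketch:

```bash
hugo mod get -u     # fetch/update the modules declared in hugo.toml
hugo mod vendor     # copy them into _vendor/, which you then commit
```

With `_vendor/` committed, the build no longer needs network access to fetch the theme.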
I don’t have any but I’ve been swapping out to see which I like better and trying to make my own too.
I do almost the same, but instead I use Astro.
I use Obsidian, with a GitHub submodule for the content, and host it all as a static page. I wrote about that here if anyone is interested: https://bryanhogan.com/blog/obsidian-astro-submodule
The only thing I want is to implement a gui for adding and editing posts.
Haven't heard of dev containers like that before, but cool to see that they can be used like that.
> GitHub Pages is not intended for or allowed to be used as a free web-hosting service to run your online business, e-commerce site, or any other website that is primarily directed at either facilitating commercial transactions or providing commercial software as a service (SaaS).
Not finding a similar mention for Cloudflare... commercial sites are fine there?
So a small company could host a static landing page with generic info and "contact us" etc., and that would be fine, I think?
It also mentions that breaking the rules will result in getting a "polite email suggesting ways to reduce load on Github".
Curious though how it handles a surge in requests, like from being on the front-page of HN. But many open source projects host their doc pages with GitHub Pages, and some of those get a lot of traffic, so I'm sure that it's not an issue.
curl -i https://simonw.github.io/
I get this:

HTTP/2 200
server: GitHub.com
content-type: text/html; charset=utf-8
permissions-policy: interest-cohort=()
last-modified: Wed, 16 Nov 2022 21:38:29 GMT
access-control-allow-origin: *
etag: "63755855-299"
expires: Wed, 23 Apr 2025 18:20:50 GMT
cache-control: max-age=600
x-proxy-cache: MISS
x-github-request-id: 3D02:22250F:11BEDCA:123BE7A:68092D2A
accept-ranges: bytes
age: 0
date: Wed, 23 Apr 2025 18:10:50 GMT
via: 1.1 varnish
x-served-by: cache-pao-kpao1770029-PAO
x-cache: MISS
x-cache-hits: 0
x-timer: S1745431851.518299,VS0,VE110
vary: Accept-Encoding
x-fastly-request-id: 0df3187f050552dfde088fae8a6a83e0dde187f5
content-length: 665
The x-fastly-request-id is the giveaway.

Everyone’s brain is wired differently, and for me Kirby CMS works best as I can blog anywhere and can run scripts server side. Throw it up on NearlyFreeSpeech.Net and I pay less than $5 a year for hosting.
I’ve read that SSGs suffer software (really, dependency) rot when left alone for a few years. You’ll never need to worry about that with PHP.
It is just centralizing the web. You can do a lot with a $4 droplet if a single board computer isn't your cup of tea. Not "buying" into iCloud/Cloudflare is alone worth that cost. Also a much more meaningful stack to learn.
Nothing against the post/author, I just feel the creativity spent "exploiting" features that the giants put in place precisely to undermine alternatives is misplaced.
PS: Folks should chill out about wording here and there.
I tried some generators but it was so much more complicated than writing a style sheet and some pages. Maybe for some more complex use-case, okay, I get it, but the author's blog is so minimal and simple.
edit: today I learned people have very strong opinions about static site generators. Good valid reasons, y'all. Maybe my use case really is too simplistic!
- the ability to update every page on my site at-once in a uniform fashion (want to change the page layout or add a link in the footer, either you're manually editing a hundred HTML files or a couple lines in a single template file)
- Syntax highlighting. My blog has code blocks so I would have to manually invoke a syntax highlighter for each code block (and every time I update the code I'd have to do it again).
- auto-invoking graphviz and gnuplot. Similar to the code blocks, I write graphviz dot and gnuplot blocks as code in my blog posts and have my static site generator render them to images. If I was manually writing the HTML then I'd either end up committing the built binaries (rendered images) to git (which, of course, is bad practice but ultimately inconsequential when talking about small SVGs for the few charts/graphs on my blog) or I'd have to have a "build" step where a script invokes graphviz/gnuplot to output the images (a sketch of such a step follows this list), which is the first step on the road to a static site generator.
- Avoiding name collisions with internal anchor links (the kind with the `#` symbol that scrolls the page). I use these for footnotes and code block references. I could manually assign names for these for each blog post, but when combining multiple blog posts into a single list-view for the home page, there is the risk that I might reuse the same `#fn1` anchor across multiple blog posts. Using a static site generator, I don't need to concern myself with naming them at all and I have my static site generator assign unique names so there are no conflicts.
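For comparison, the hand-rolled "build" step mentioned above is roughly this; a sketch, assuming diagram sources live as `.dot` and `.gp` files under `content/`:

```bash
#!/usr/bin/env bash
# Render Graphviz and gnuplot sources to SVG before the site build.
shopt -s globstar nullglob
for f in content/**/*.dot; do
  dot -Tsvg "$f" -o "${f%.dot}.svg"                              # graphviz
done
for f in content/**/*.gp; do
  gnuplot -e "set terminal svg; set output '${f%.gp}.svg'" "$f"  # gnuplot
done
```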
Or at least, you will end up with one as you realize things like you only wanted to do that for certain categories, or what have you.
You can in principle just write HTML with no script support, but that itself becomes an exercise in conspicuous asceticism. It is not unreasonable to want things like "the last 5 posts" on every page of a blog, or an RSS feed, or any number of other very basic, simple things that are impractical with an editor and raw HTML.
Obsidian is a nice middle ground between WYSIWYG and plain text - it doesn't send markup characters into the ether but at the same time does show you something close to the final text in real time.
Closest thing we've had to WordPerfect's Reveal Codes in decades.
It takes me nine characters plus URL and alt text in markdown using Hugo. I would be surprised if you get it right on the third try without consulting MDN, spending time to research, do image conversions, and fiddle with the <picture> tag and its srcset attribute.
The goal of generators is to reduce the friction of taking your notes/articles/etc. and wrapping them in thematically consistent HTML/CSS for your site. Once you've got it tuned/setup for your blog, they're pretty easy to use.
Obviously in your use-case where you find static site generators more complicated, then you can stick with raw html.
compare https://fabiensanglard.net/fastdoom/ to https://fabiensanglard.net/fluide/.
I didn't know about Cloudflare pages, thanks for sharing!
I use Jekyll, Github pages and Cloudflare. I use hackmd for editing but Obsidian will work as well.
I've been doing it for 20+ years (xitami and thttpd before nginx) and it not only has an infinite lifetime (because it's .html and files) but it also has no attack surfaces or mantainence required.
All that static site generator and remote corporate services stuff will break within a year if left untouched. And re: security, running a static nginx server from home is about 10,000x less of a security risk than opening a random page in $browser with JS enabled. nginx RCEs are less than once a decade. And DoS re: home IP? A vastly over-stated risk for most human people. In my 20+ years I've never had a problem, and even if I ever do, who cares? It's not like my personal website needs nine 9s of uptime. It can be down for days, weeks, with no issue. It's just for fun.
I know you have acknowledged the decision to entrust nginx with all of your personal data and tax records and bank statements and legal documents and browser history and GitHub credentials and ssh private keys and so on.
But it's still madness. You are one oversight, accident, or bug away from total pwnage.
Running nginx isn't madness. Thinking nginx is more of a risk, or even comparable to, your normal daily browser behavior certainly is.
Go look up the last nginx RCE. I think you'll be in the 2000s for just bare nginx.
We could go back and forth all day about the likelihood of a v8 sandbox escape vs RCE in a big C program. But another risk to consider is a non-obvious misconfiguration. A default server block with a wildcard server name. A stray symlink inside the docroot. An unexpected mount point. A temporary config change that you forget to revert. So many ways to fail...
Regardless, trusting your entire personal data security to a single layer of protection is madness.
Perhaps only exceeded by the logic of "it hasn't happened for a long time, therefore it will never happen again".
Good luck.
Host your personal data on your local machine. Encrypt it and sync to another physical location for backup.
But serve your blog from somewhere else. If you want to self-host it at home, buy a cheap NUC (or RPi) and hang it off the guest network on your WiFi router. Or, minimally, a VM or a zone/jail/container. I don't like the idea of a compromised host sitting on my home LAN, but it's better than a compromised daemon running on my desktop OS.
Or don't self-host at home, but mirror the data up to GitHub Pages or Cloudflare Pages for free. Or pay for a cheap VPS (people elsewhere in these comments mentioned a $20/yr host). Or OVH, Hetzner, even AWS low-spec instances...all reasonable options.
If you're no longer talking about blog posts, but you want worldwide access to arbitrary personal data on your home desktop, that's a job for a VPN -- preferably one that still does not terminate on your desktop itself, and of course not one that gives a sketchy third party direct access to the desktop.
I completely agree that pushing your personal files and such up to Dropbox (e.g., etc) would also be madness!
You say we're not talking about us, but I'm responding to your specific mention that you serve blog posts to the public Internet from nginx running on your desktop. We may not be able to help to average consumer, but I'm talking about you! :)
I am confident in my digital security for my threat model. Physical security, less so. The only time my data has ever been taken was when the FBI broke into my apartment at 6am in 2010 and held me at gunpoint and stole every computer in my apartment. They never charged me with a crime, never even indicted me. It was all just the feds squashing political dissent back in the Occupy Wall Street days and I was one of hundreds on the mass warrant issued for that morning's cross-country raids meant to intimidate and destroy lives. As was the FBI's style, they stole all the bitcoin I had on those computers, which I discovered when they kindly returned them (in parts) 10+ years later in 2021.
I'd argue the real risk for me living in the USA is not from random hackers finding a unicorn nginx RCE (or me misconfiguring), but from the government. And they're going to come in the front door not through my computer.
Given the state of things I think this applies to far more people than just me. So start up those home static servers. It's a relatively low risk, all things considered. And free communication with other humans, not shaped by corporate policy and opinion shaping, might just mitigate the government problem a bit.
That’s part of why I prefer hosting the static output somewhere external. Not perfect, but it lets me step away from the setup for months and still have it running.
As for IP, when it changes you can just copy the new IP and stop sending links with the old IP to friends. It's not a big deal. But a domain is nice (either some dyndns subdomain or a real tld with free DNS hosting (and dyndns updates) by zoneedit or the like).
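The dyndns side of that is typically a single cron line; a sketch with a made-up provider URL, since every service (ZoneEdit included) has its own endpoint:

```bash
# crontab -e: report the current public IP to the DNS provider every 15 minutes.
*/15 * * * * curl -fsS "https://dyn.example-provider.net/update?host=blog.example.com" >/dev/null
```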
Dropbox and similar services have never been seamless for me with Obsidian. There will inevitably be some conflict when the same file gets edited on two machines, and an ugly conflict file will appear, with no easy way to resolve it without a specialized diff tool. Sometimes this conflict file will go unnoticed and a change will fall through the cracks. Not a deal breaker - but not "seamless" either.
Relay [0] attaches a CRDT to the Obsidian editor. It makes collaboration really smooth and removes conflicts.
Markdown collaboration is free, but we do charge for attachment/media storage. If you dm me (@dtkav) on our discord [1] or email me (in my profile) I'm happy to give out free storage to HN folks.
One other benefit of using Relay is that you can use the new "App Storage" feature on Android to keep your files isolated and private on your device [2]. Using dropbox/gdrive forces you to store your notes in your documents folder and that means that they can be read by other apps.
[0] https://relay.md
[1] https://discord.system3.md
[2] https://obsidian.md/changelog/2025-04-22-mobile-v1.8.10/
Basically, I saw that Nextcloud built their own realtime text editor based on TipTap, so I created an Obsidian extension to connect to it.
Unfortunately work and uni got in the way but it was a very interesting idea/learning experience.
- you haven't set up auto save/auto sync
- multiple people are editing the documents
- you frequently have no Internet and the application was closed when you regain Internet, or similar
If only a single person edits the same document at a time and you always sync changes when they happen, that should be a non-issue.
https://matthewbilyeu.com/blog/2025-04-06/blogging-via-email
The one thing I do differently with Obsidian is that I use a private git repo rather than having it live in iCloud. I sync it across an iPhone, iPad, and Windows desktop.
I shared my communications plan here: https://notes.dsebastien.net/30+Areas/33+Permanent+notes/33....
This enables me to connect everything and to have a single place where I maintain the content.
My end goal is automating publishing and updates to all platforms from within Obsidian.
FWIW the CMS is Decap CMS and I have it configured likewise with Cloudflare Pages (since Pages supports functions that are needed for the auth/callback cycle).
On my end I ended up building an entire custom thing that bastardizes SvelteKit to produce a static website that I then upload to GitHub Pages, but I think over-engineering a personal website is always good fun - and hey, I can tweak lots of silly aspects like how my post names get turned into /nice-and-readable-urls.
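That name-to-URL transformation is a nice example of the silly aspects you get to own. Purely as an illustration (not the poster's actual code), a typical slugify amounts to:

```bash
slugify() {
  # "Nice & Readable URLs!" -> "nice-readable-urls"
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g'
}
```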
Out of curiosity, what's the advantage of using Cloudflare Pages over GitHub Pages? Both seem to require a GitHub repository.
Only, my friend has struggled to get Pelican and git etc. set up on a new laptop. And tbh I am lost - and horrified - at the latest Windows. Not keen on being tech support and working out why the python command hangs etc.
And my custom VPS setup doesn’t do TLS. And I don’t want to try.
So I’m wondering if there are any alternatives? Is there a web frontend for blogging straight to ghpages, for example?
It seems this post talks about blogging from the desktop. But I just installed Obsidian on Android, it allows a filesystem vault. I think pairing it with Syncthing and some automation on my NAS (to do a git push to Github/Gitlab) could make it very streamlined.
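The NAS automation can be a small cron job; a minimal sketch, assuming the Syncthing folder on the NAS is already a git checkout with push access:

```bash
#!/usr/bin/env bash
# Run from cron on the NAS: commit and push whatever Syncthing delivered.
cd /srv/syncthing/blog || exit 1
git add -A
# Commit only if something actually changed, then push.
git diff --cached --quiet || git commit -m "auto-sync $(date -u +%F_%T)"
git push origin main
```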
I stopped uploading to Instagram the day they started using images for AI training.
If any of the file syncing applications work directly on the filesystem, I think you can use Obsidian on these folders and it'll sync automatically. On iOS, Obsidian defaults to iCloud, for example.
No deployment is required, just authorise Github to access your public repo permissions. It will create a repo called "tinymind-blog" under your Github account. Every update you make will be committed to your Github repo, ensuring data synchronisation at all times.
For my work[2] I am using SvelteKit and have written my own blog using mdsvex with pre-rendering enabled. That works well too.
[1]: https://vivekshuk.la
[2]: https://daestro.com/blog
These days for daily blogging I built Pagecord, so I can just type an email and click Send :) https://pagecord.com
What I regret though was using Tailwind. Mainly because I later couldn't find a straightforward way to just use normal CSS and ignore that within the scope of some component/page.
Shameless plug for my AI blog run on Hugo -- https://reticulated.net/
(I would use Obsidian Publish, but it rendered far too slowly on some pages. I do use their excellent sync service though.)
I however use CF workers a lot to deploy single-purpose webapps for my personal use.
If I want to do a post, I log in, draft the post in a simple rich-text editor with image support and keyboard shortcuts for formatting, and click "publish." I don't have to fool with anything, there is no chance of sync breaking, and it's instantly responsive.
The back-end is stored in Github, but the posts are stored, with revision history, in a Postgres database that I have full access to.
It's hard to envision a scenario where I'd prefer digging through a git repository to see a previous version of a post rather than just clicking into the CMS site and clicking on the historical version of the post that I'd like to look at, where it is instantly displayed including images. And even with daily blogging, the number of times I've actually looked at a prior version of a post is very low -- probably less than once a year.
Keeping Hugo installed and up to date as part of the publish process seems like a headache as well. I like the blog to be totally separate from my local machine, so if I change anything or switch laptops, it doesn't interfere with the publication process.
Manually adding the Hugo front matter to each post also strikes me as annoyingly fiddly, although you could use a text expander app to handle most of it. Another issue is that I'm not sure that Markdown would do well for the full scope of formatting, such as aligning images and adding image captions.
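One note on the front-matter point, for anyone weighing the same tradeoff: Hugo's `hugo new` command stamps front matter out of an archetype template, which removes most of the fiddly part. A minimal sketch, assuming the default archetype:

```bash
# archetypes/default.md holds the template; hugo new fills in title and date.
hugo new posts/my-next-post.md
# -> content/posts/my-next-post.md, pre-filled with draft: true and today's date
```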
What hit me (hard) wasn't the blogging setup, it was this:
>And if anything’s unclear, LLMs like ChatGPT or Claude are great for filling in the gaps.
For people like me who grew up before the Internet was a thing: if we didn't understand something, we either had to go to the library to look it up or find someone to help. Then it was Encarta. When the Internet arrived I could look things up faster, or more importantly, if I was stuck anywhere I could ask on IRC or forums for help.
I am sensing a large part of learning on forums and online will be gone. Read the manual becomes ask the LLMs?
And I am sensing (no offence) the generation growing up with LLMs may be far worse. We already have a generation of people who think Googling the answer is 99% of the work when in fact the answer is not important. It is the process of getting to that answer that is largely missing from today's education.
Sorry about the off topic message. But that sentence really hit me differently.
It's easy to fret about, "How will the next generation survive in the world I grew up in, without the skills I developed?"
But the answer is, they won't. Just like you don't need the same skills a caveman had because you live in a thoroughly transformed world, the kids of today won't need the same skills you had because they'll live in a thoroughly transformed world.
Ofc some good or important things will always be lost from one generation to the next. But that's okay. Still, humanity marches onward.
I do see the "I asked ChatGPT" response more and more and initially had a similar feeling, but I think it's still early days for LLMs. Will they be around in 10 years and ubiquitous like search engines? Who knows. But undoubtedly they will get better over time and more accurate. Just like how the early internet had a lot of bad information on it, it got better over time.
This might also be a divide between different types of people. Personally I am very curious and want to really understand how something works so I get tons of information that won't help me solve a problem, but I understand the tool or part better. I would guess you might be in the same basket. But there are also people who just want the answer to solve the problem. They don't care how it works they just want it to work. And that's fine. It takes all kinds. Not everyone needs to have a masters in CS to use a computer or program one. Best we can do is try and nurture curiosity among other people and help them figure out ways to learn more and more.
One thing I don't see the author mention that is part of what I plan to do with Obsidian is use Syncthing (which I already use for other things) so I can work on a post when I'm not at my laptop. Probably just to write down ideas/notes and then fully work it out when I get to my laptop.
If the blog author is here, curious if they commit drafts to their repo or not. I personally don't commit drafts. Besides also using `draft: true` in the front-matter, I gitignore any markdown file whose filename starts with the word "draft". When I'm ready to publish I rename the file.
Yeah, I do commit drafts. My repo’s private, so I don’t mind keeping everything versioned there, including posts still marked as draft: true.
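For reference, the gitignore-then-rename workflow described above takes only a couple of lines; a sketch, assuming posts live under `content/posts/`:

```bash
# .gitignore: keep unpublished drafts out of the repo entirely.
echo 'content/posts/draft-*.md' >> .gitignore

# Publishing is just a rename, which brings the file under version control:
mv content/posts/draft-my-post.md content/posts/my-post.md
git add content/posts/my-post.md
```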
It’s funny because we could ostensibly switch to any git host, but it’s really only GitHub and GitLab, huh? And Cloudflare is hard to beat.
Why???
I actually prefer reading like this, even on desktop. It feels like it causes less eye strain. Though I might prefer if it was closer to 65rem.
It's like having infinite "free" domains (even with the small fee for the base domain.)
But the most important part is the fun of just having an entire namespace at your disposal to create whatever "domain name" in seconds.
Nothing technically groundbreaking (actually Cloudflare does it automatically for me), but it's a nice quality of life trick.
How is Obsidian for correcting this? Years ago I would have used something like Grammarly to solve it, but I'd rather have something built in if possible, and make it as brainless as possible.
How to handle images & video content, when using git to track files? I'll explain my setup...
My vision is a somewhat "lo-fi" and 10+ year durable system - so even my "CI/CD" is local, in the form of shell scripts, rather than remote (like GitHub Actions).
I have a folder, that's basically my-website.com, and it has a folder `docs` for content (that's `mkdocs` default content folder). The top level directory is a git repository, which is pushed to Codeberg (code repository, similar to GitHub).
As a content editor, I currently use VSCodium (open-source VS Code) + Foam (a clone of Roam, similar to Obsidian), which lets you cross-link Markdown files with Obsidian-style links [[MyLink]]. To be specific, on macOS I created a shortcut to the website folder in Finder, and I drag that onto the VSCodium app icon when I want to write. It's a pretty easy workflow on my computer (not practical on mobile).
I use MkDocs to generate the HTML site. I use a simple deploy script to run `mkdocs build` and `aws s3 sync` to copy the files to an AWS S3 bucket.
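For concreteness, a deploy script like the one described can be this small; a sketch with the bucket name as a placeholder:

```bash
#!/usr/bin/env bash
set -euo pipefail
mkdocs build                                     # renders docs/ into site/
aws s3 sync site/ s3://my-website.com --delete   # mirror, pruning deleted pages
```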
This all works pretty well, but I'm now trying to figure out how to handle photos & videos.
To give an example, I would have something like `~/_Websites/my-website.com/docs` and inside of that I would have `journal/2025/04/2025-04-23.md` as a journal entry. Related photos and videos, I use `journal/2025/04/media` - so its sort of a catch-all for all the media files for the month.
Recently, I added some large videos (unrelated: but I'm recording video of my CNC router doing cool stuff), and quickly realized
(1) Git is not the right spot for large media (I knew this, but just hadn't hit the problem yet - seeing how long `git pushes` take).
(1.1) I actually have a broken repo right now, as I committed video files into it, and can't `git push` without the network connection being cut. I think it may be on the Codeberg side, because they have a limit of 1 GB per repo.... So I'm also trying to figure out how to back out the change, get the video file out of there, and arrive to a better solution.
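On (1.1): since those commits presumably never made it to Codeberg, you can rewrite them locally before pushing. A hedged sketch, with the file path made up for illustration; this amend variant only works if the video went in with the most recent commit:

```bash
# Drop the file from the last commit (it stays on disk, just untracked):
git rm --cached docs/journal/2025/04/media/big-video.mp4
echo 'docs/**/media/*.mp4' >> .gitignore
git add .gitignore
git commit --amend --no-edit
git push
# For files buried deeper in history, look at git-filter-repo instead.
```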
(2) After reading `git-lfs` (Git Large File Storage) website several times, I can't quite figure out how to integrate it - or IF I should integrate it.
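On (2), the git-lfs wiring itself is short; a sketch, with the caveat that it only helps if Codeberg's LFS storage quota covers your media:

```bash
git lfs install            # one-time, per machine
git lfs track '*.mp4'      # records the pattern in .gitattributes
git lfs track '*.webm'
git add .gitattributes
# From now on, matching files are pushed to LFS storage; the repo holds pointers.
```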
(3) Now, I'm noodling on having something like a `MEDIA.my-website.com` directory, which is basically a non-git-tracked folder structure of photos/videos, which I would then rsync to a separate S3 bucket - and then website content could reference it. However - I'm fearful that over time, the markdown content and the media site would be out of "sync" - I frequently re-organize content as needed. For example, I might start with a `python` folder, and a `java` folder, but then later create a top-level `programming` folder. I could see doing the same with the media folder too. `padel` and `squash` folder (containing video clips of games, how-to's) might be grouped under a `sports` folder, and so on. Dragging these folders around while the media content is inside the folder usually doesn't cause problems, because of relative links. However when the content & media are in different file structures, broken links would happen - and this reduces the "fluidity" / increases friction in naturally re-organizing the site with time.
To conclude: how do you handle video media alongside git-tracked markdown content?
Anyway! I appreciate your patience in reading this, and hope you get the idea of the setup - curious what folks who have been down this route can recommend.
[submitted title was "How I Blog with Obsidian, Hugo, GitHub, and Cloudflare – Zero Cost, Fully Owned"]
I agree it’s not “self-hosted”. But compared to a closed CMS or paid platform, it feels meaningfully more in my hands.
Also, I like treating my blog as a version-controlled, declarative "codebase" that's just a bunch of plaintext files (no MySQL tables, XML or JSON to crawl through).
What if it's a dedicated machine, colocated? What if it's at home, but I pay an ISP?
edit: Downvoted, care to explain why? I genuinely wrestle with this question. I self-host lots of services at home - and I also self-host services on a cloud VPS, which have better availability and security posture with regards to my home network for things I make public or semi-public. Some have told me this isn't "self-hosting" and I am not sure where the line is drawn.
I think blogs should be built like this to make preservation easier. I'd love to have something that makes content domain agnostic, more like git, allowing people to clone and distribute content without having to guess when to pull and archive if they want to keep track of things.
Anyway, the important thing is being in control of your own data. With proper off-site backups and reproducible setups using containers, migrating between VPS providers should usually take just a few hours.
I fully understand the arguments for (and against) managing your own server. But I've not been convinced by any arguments for that server being in your house/office rather than a climate controlled warehouse somewhere.
Well, unless you enjoy setting up and managing the physical hardware yourself of course. That's fully reason enough.
Personally I feel if you can quickly pull out of a provider and host elsewhere with maybe just a config change - aye the data is fully owned, close enough.
Edit: I guess you mean the content itself is still on your machine if the services go away, and you can choose to host them elsewhere
- Content is just Markdown files in my local Obsidian vault
- Hugo builds the site locally - no dependency on external editors or platforms
- GitHub is just used for version control and deployment
- Cloudflare Pages handles static hosting, but I can move it elsewhere anytime
But, I think using the term "fully-owned" to refer to pushing up to GitHub, then deploying to Cloudflare Pages is definitely not "fully-owned"
You're right that hosting on GitHub and Cloudflare isn't infrastructure ownership. I should’ve been more precise with the wording.
The OP's "fully owned" is analogous to someone else doing the printing for the privilege of spying on your readers.
Would be nice to coin an unambiguous term for this as it's a useful design goal.
Two of which are services operated by a corporate entity and one of which is a closed source piece of software.
The only thing "owned" here is the fact that the entire blog is simple markdown and the domain name. However, that doesn't mean it's very portable. It's not impossible, but it's a lot more work than I would want to do.
It's not really fully-owned, but it's owned in the ways that matter most
The internet as a whole relies on a huge variety of services all working as they should.