I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router for extra safety if the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh).
Also (and maybe that's a me problem) I was using Tailscale, but I'm more "paranoid" about it nowadays. It's a service that's a single point of failure, login is via US-only SSO providers (Microsoft, GitHub, Apple, Google), and what if my Apple account gets locked because I redeemed a gift card and I can't use Tailscale anymore? I still believe in self-hosting, but I probably want something even more "self", taken to the extreme.
This also makes self-hosting more viable, since availability is then constrained by the internet provider rather than by power.
Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.
Cheap VPS servers with 1 GB of RAM and so on can cost around $10-11 per year, and something like Hetzner is cheap as well at roughly $30-ish a year, i.e. about $3 per month, while still posting some great reliability numbers.
If anything, people self-host because they own the servers, so upgrading becomes easier (but there are VPSes that target a niche and are worth a look, such as storage VPS, high-performance VPS, high-memory VPS, etc., which can sometimes be dirt cheap for your specific use case).
The other reason, I feel, is the ownership aspect. I own this server, I can upgrade it without breaking the bank, I can build on my investment over time, and with complete ownership you aren't bound by someone else's T&Cs nearly as much. Want to provide VPS servers to your friends, family, or people on the internet? Set up a Proxmox or Incus server and do it.
Most VPS providers either outright ban reselling or, if they allow it, might ban your whole account for something someone else did, so some things are in jeopardy if you do this, simply because providers have to find automated ways of dealing with abuse at scale. Some cloud providers are more lenient than others about bans (OVH is relaxed in this area, whereas Hetzner, for better or worse, is strict about enforcement).
They would be cheaper than Starlink, FWIW, and most connections are usually fairly robust.
That being said, you can use Tailscale or Cloudflare Tunnels to expose the server even if it's behind NAT. You mention in your original comment that you might be against that for paranoia reasons, and that's completely fine, but there are ways to do it if you want; I've talked about them in depth in another comment here.
I don't know the name of the dongle though; it was similar to those SD-card-to-USB adapters, if you know what I mean. I'd appreciate it if someone could help find this too, if possible.
But yeah, your point is fascinating as well. Y'know, another benefit of doing this is that, at least in my area, 5G (500-700 Mbps) is really cheap ($10-15 per month) with unlimited bandwidth, whereas on the ethernet side I get 10x less bandwidth (40-80 Mbps), so much so that my brother and I genuinely considered this idea,
except that instead of buying a router like this, we thought we'd use an old phone, put the SIM in it, and get access through that.
You can even self-host Tailscale via Headscale, though I don't know how good the experience is, and there is other genuinely open-source software like NetBird, ZeroTier, etc. as well.
You could also, if interested, just go the plain WireGuard route. It really depends on your use case, but in your case SSH seems like the main one.
You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience, though, and for not having to deal with NATs and everything.
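For the plain WireGuard route, here is a minimal sketch of what the server-side config might look like (the keys, addresses, and port are placeholders you would generate and pick yourself):

```bash
# Minimal WireGuard "hub" sketch on the server (placeholder keys/addresses).
# Generate a key pair first:
#   wg genkey | tee server.key | wg pubkey > server.pub
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24            # VPN-internal address of the server
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# Your phone/laptop; only its VPN IP is routed to it
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0                  # bring the tunnel up now
systemctl enable wg-quick@wg0    # and keep it up across reboots
```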
But I feel like your home server might be behind a NAT, and in that case what I recommend is probably A) run it over Tor, or use https://gitlab.com/CGamesPlay/qtm, which uses iroh's public instance (though you can self-host that too), or B (recommended): get a cheap unlimited-traffic VPS (I recommend UpCloud, OVH, or Hetzner), which would cost around $3-4 per month, and then install something like remotemoe https://github.com/fasmide/remotemoe or anything similar that effectively acts as a proxy.
Sorry if I went a little overkill though, lol. I have played with these things too much, so I may be over-architecting, but if you genuinely want self-hosting taken to the extreme, Tor .onions or I2P might benefit you, and even just buying a VPS is a good step up.
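If you don't want to run remotemoe specifically, a plain SSH reverse tunnel through that cheap VPS gets you most of option B; a rough sketch (the hostname and ports are placeholders):

```bash
# On the home server (behind NAT): forward VPS port 2222 back to local SSH.
# For the forwarded port to be reachable from outside the VPS, its sshd_config
# needs "GatewayPorts yes" (or "clientspecified").
ssh -N -R 0.0.0.0:2222:localhost:22 user@your-cheap-vps.example.com

# From anywhere else, reach the home server through the VPS:
ssh -p 2222 homeuser@your-cheap-vps.example.com

# autossh can keep the tunnel alive across connection drops:
autossh -M 0 -N -R 0.0.0.0:2222:localhost:22 user@your-cheap-vps.example.com
```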
> I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router for extra safety if the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh)
Laptops have a built-in UPS and are cheap. Laptops and refurbished servers are a good entry point, IMO. Sure, it's a bottomless pit, but the benefits are well worth it, and at some point you have to look at the trade-offs; for me, laptops and refurbished or resale servers are that point. In fact, I used to run a Git server on an Android tablet for a while, but I've been too lazy to figure out whether I want to keep it permanently on the charger.
I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.
The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.
Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.
Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.
I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.
Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple: you increase your attack surface, and with it the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about how the internet works? Principle of least privilege: that's how security is meant to work.
Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.
There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585
I agree maintaining WireGuard is a good compromise. It may not be "the way the internet was intended to work", but it keeps something that feels very close without relying on a third party or exposing everything directly. On top of that, it's really not any more work to maintain than Tailscale.
This incident precisely shows that containerization worked as intended and protected the host.
Containerizing your publicly exposed service also won't protect your HTTP server from hosting malware or your SMTP server from sending spam; it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down correctly, which is exactly the kind of thing people don't want to have to worry about).
Tailscale hands the protection of the public-facing portion of the story to a company dedicated to keeping that portion secure. WireGuard (or similar) limits the exposure to a single service with low churn and a minimal attack surface. That's a very different discussion from preventing lateral movement alone. And that's without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).
That's where wg/Tailscale come in - it's just a traditional IP network at that point. Also less to do to shut up bad login attempts from spam bots and such. I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.
The other big upside (besides not depending on a third party) of putting in the slightly greater effort to run WireGuard/SSH/another personal VPN is that latency and bandwidth to your home services will be better.
I am sure there must be an iPhone app that allows something like this too. I highly recommend more people take a look at such a workflow; I might look into it more myself.
Tmate is a wonderful service if your home network is behind NAT.
I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.
Once again there's a third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on Hetzner/UpCloud/OVH and route traffic through that by hosting tmate there. YMMV.
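Basic usage is just running tmate and sharing the connection string it prints; if you self-host the server side, you point the client at it via ~/.tmate.conf. A sketch (the hostname, port, and fingerprint are placeholders from your own deployment):

```bash
# On the machine behind NAT: start a shared session and print the SSH string
# (something like "ssh <token>@nyc1.tmate.io" when using the hosted service).
tmate

# Pointing the client at a self-hosted tmate-ssh-server instead of tmate.io:
cat > ~/.tmate.conf <<'EOF'
set -g tmate-server-host "tmate.your-vps.example.com"
set -g tmate-server-port 2200
set -g tmate-server-ed25519-fingerprint "SHA256:<your-server-key-fingerprint>"
EOF
```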
Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
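If you do go the Fail2Ban route, a minimal sketch of a jail.local for sshd (the thresholds are just examples to tune):

```bash
# Minimal Fail2Ban sketch for sshd: ban for an hour after 5 failures in 10 minutes.
apt install fail2ban            # or your distro's equivalent

cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

systemctl restart fail2ban
fail2ban-client status sshd     # check current bans
```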
I prefer to hide my port instead of using F2B for a few reasons.
1. Log spam. Looking through my audit logs for anything suspicious is horrendous when there are just megs of login attempts for days.
2. F2B has banned me in the past due to various oopsies on my part, which is not good when I'm out of town and really need to get into my server.
3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.
Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.
Never again, it takes too much time and is too painful.
Certs from Tailscale are reason enough to switch, in my opinion!
The key with successful self hosting is to make it easy and fast, IMHO.
This is what I do. You can get Tailscale-like access using things like Pangolin[0].
You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.
> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.
This is what I don't do. Anything that needs real internet access, like mail or raw web access, gets its own VPS where an attack will stay isolated, which matters more and more as self-hosted services get implemented with things like React and Next[1].
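For the Tor option mentioned above, hiding SSH behind an onion service is only a few lines of torrc (and onion client authorization can be layered on top so the address alone isn't enough); a minimal sketch:

```bash
# Minimal sketch: expose SSH only as a Tor onion service, no public port.
apt install tor

cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServicePort 22 127.0.0.1:22
EOF

systemctl restart tor
cat /var/lib/tor/ssh_onion/hostname    # the .onion address to connect to

# From a client with a local tor daemon (SOCKS on 9050):
#   ssh -o ProxyCommand='nc -x 127.0.0.1:9050 %h %p' user@<that-address>.onion
```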
If you are running some public service, it might have bugs, and of course we do see RCE issues, or there can be some misconfiguration, and containers by default don't provide enough isolation if an attacker tries to break in. Containers aren't secure in that sense.
Virtual machines are the intended tool for that, but they can be full of friction at times.
If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/
It lets you manage VMs with the same workflow as containers, even provides a web UI, and gives an amount of isolation you can (usually) trust.
I'd say don't take chances with your home server, because that server sits inside your firewall and in a worst-case scenario can infect other devices. Virtualization with things like Incus or Proxmox (another well-respected tool) is the safest option and provides isolation you can trust. I highly recommend taking a look if you deploy public-facing services.
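As a rough sketch of how little ceremony Incus needs to get an isolated VM for a public-facing service (the image name and resource limits are just examples):

```bash
# Rough Incus sketch: a VM instead of a container via --vm, one command per step.
incus launch images:debian/12 public-web --vm -c limits.cpu=2 -c limits.memory=2GiB

incus list                          # see its IP once it has booted
incus exec public-web -- bash       # get a shell inside the VM

# Snapshot before risky changes so you can roll back:
incus snapshot create public-web pre-upgrade
```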
I personally wouldn't trust a machine after a container on it was exploited; you don't know whether there were any successful container escapes, kernel exploits, etc. Even if the attacker only escaped with user permissions, they can fill your box with booby traps if they had container-granted capabilities.
I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.
There are some well-respected compute providers as well, and for a very small amount you can offload this worry to someone else.
That being said, VMs themselves are a good enough security boundary too. I consider running VMs with public-facing services even on your home server usually acceptable.
I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.
Your eventual connection is direct to your device, but all the management before that runs on Tailscale's servers.
But I also think it's worth mentioning that for basic "I want to access my home LAN" use cases you don't need P2P; you just need a single public IP for your LAN and perhaps dynamic DNS.
But some peers are sometimes on the same LAN (e.g. the phone is sometimes on the same LAN as the PC). Is there a way to avoid forwarding traffic through the server peer in that case?
https://github.com/jwhited/wgsd
https://www.jordanwhited.com/posts/wireguard-endpoint-discov...
In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.
You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
Behind a VPN your only attack surface is the VPN which is generally very well secured.
Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through WireGuard. NEVER expose this publicly, even if you don't have admin:admin credentials.
I now know better, but there are still a million other pitfalls to fall into if you are not a full-time sysadmin. So I prefer to just put it all behind a VPN and know that it's safe.
Pro tip: after you configure a new service, review the output of ss -tulpn. This will tell you which ports are open. You should know exactly what each line represents, especially the ones that bind to 0.0.0.0, [::], or other public addresses.
The pitfall that you mentioned (Docker automatically punching a hole in the firewall for the services that it manages when an interface isn't specified) is discoverable this way.
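A quick sketch of that check, plus the usual fix for the Docker pitfall (publish ports against localhost or the VPN address explicitly; the addresses and ports below are examples):

```bash
# List every listening TCP/UDP socket along with the owning process:
ss -tulpn

# Anything bound to 0.0.0.0 or [::] is reachable on every interface.
# For Docker, publish against a specific address instead, e.g. in
# docker-compose.yml:
#   ports:
#     - "127.0.0.1:8080:80"    # only reachable from the host itself
#     - "10.8.0.1:8080:80"     # only reachable over the WireGuard interface
```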
In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.
With Tailscale / ZeroTier / etc., the connection is initiated from the inside to facilitate NAT hole punching and to work over CGNAT.
Plain WireGuard removes a lot of attack surface but wouldn't work behind CGNAT without a relay box.
Now I have Tailscale on an old Kindle downloading EPUBs from a server running Copyparty. It's great!
Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.
Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.
The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.
1. 1-command (or step) to have a new device join your network. Wireguard configs and interfaces managed on your behalf.
2. ACLs that allow you to have fine grained control over connectivity. For example, server A should never be able to talk to server B.
3. NAT is handled completely transparently.
4. SSO and other niceties.
For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
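To give a feel for (1) and (2): joining is a single command, and ACLs are a small policy file in the admin console (or pushed via GitOps). The user, host, IP, and ports below are made-up examples:

```bash
# (1) Join a new device to the tailnet: one command, authenticate once.
tailscale up

# (2) A tiny ACL policy: one user may reach the home server on SSH/HTTPS;
# anything not matched by a rule is denied once a policy is defined.
cat > policy.hujson <<'EOF'
{
  "hosts": { "homeserver": "100.64.0.10" },
  "acls": [
    { "action": "accept", "src": ["you@example.com"], "dst": ["homeserver:22,443"] }
  ]
}
EOF
```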
right, like browsers are just sugar on top of curl
Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebone.
All of these are manageable through other tools, but it's a more complicated stack to keep up with.
I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.
The only thing served on / is a hello-world nginx page. For everything else you need to know the randomly generated subpath.
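A minimal sketch of that nginx layout (the subpath, port, and paths are placeholders, and TLS config is omitted for brevity):

```bash
# Sketch of an nginx server block: / serves a static hello page, the real app
# only answers on an unguessable path.
cat > /etc/nginx/conf.d/home.conf <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/hello;                 # plain "hello world" page
    }

    location /s3cr3t-r4nd0m-path/ {
        proxy_pass http://127.0.0.1:8080/;   # the actual service
    }
}
EOF
nginx -t && systemctl reload nginx
```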
LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.
But Tailscale is just a VPN (and by VPN, I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.
Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.
But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.
Your cloudflare tunnel availability depends on Cloudflare’s mood of the day.
I agree you could use LLMs to learn how it works, but given that they explain and do the actions, I suspect the vast majority aren't learning anything. I've helped students who are learning to code, and very often they just copy/paste back and forth and ignore the actual content.
And I find the stuff that the average self hoster needs is so surface level that LLMs flawlessly provide solutions.
If you're self hosting for other reasons then that's fine. I self host media for various reasons, but I also give all my email/calendar/docs/photos over to a big tech company because I'm not motivated by that aspect.
They also aren't seeing any of your sensitive data being hosted on the server. At least the way I use them is getting suggestions for what software and configs I should go with, and then I do the actual doing. Which means I'm becoming independently more capable than I was before.
On self-hosting: be aware that it is a war zone out there. Your IP address will be probed constantly for vulnerabilities, and even those probes need to be dealt with, as most of them don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.
HAProxy with SNI routing was simple and worked well for me for many years.
Istio installed on a single node Talos VM currently works very well for me.
Both have sophisticated circuit breaking and DDoS protection.
For users, I put admin interfaces behind WireGuard and block TCP by source IP at the 443 listener.
I expose one or two things to the public behind an oauth2-proxy for authnz.
Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
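A minimal sketch of the HAProxy SNI-routing piece: TCP mode, peek at the TLS ClientHello, and route by hostname without terminating TLS here (hostnames and backend addresses are placeholders):

```bash
# HAProxy SNI routing sketch: route raw TLS by SNI, no TLS termination.
cat > /etc/haproxy/haproxy.cfg <<'EOF'
frontend tls_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend nextcloud if { req.ssl_sni -i cloud.example.com }
    use_backend git       if { req.ssl_sni -i git.example.com }

backend nextcloud
    mode tcp
    server nc 10.0.0.10:443

backend git
    mode tcp
    server forge 10.0.0.11:443
EOF
haproxy -c -f /etc/haproxy/haproxy.cfg    # validate before reloading
```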
I had a 30-year-old file on my Mac that I wanted to read the content of. I had created it in some kind of word processing software, but I couldn’t remember which (Nexus? Word? MacWrite? ClarisWorks? EGWORD?) and the file didn’t have an extension. I couldn’t read its content in any of the applications I have on my Mac now.
So I pointed CC at it and asked what it could tell me about the file. It looked inside the file data, identified the file type and the multiple character encodings in it, and went through a couple of conversion steps before outputting as clean plain text what I had written in 1996.
Maybe I could have found a utility on the web to do the same thing, but CC felt much quicker and easier.
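For comparison, the manual version of that hunt with standard tools looks something like this; the encodings here are guesses on my part (figuring out which ones actually applied is exactly the part CC did for me):

```bash
# Rough manual equivalent: identify the format, then try a few encodings.
file old_document            # guess the format from magic bytes
xxd old_document | head      # eyeball the header and any readable strings

# Old Japanese Mac word processors often used Shift-JIS or classic Mac
# encodings; these are assumptions, not what CC actually detected:
iconv -f SHIFT_JIS -t UTF-8 old_document > attempt1.txt
iconv -f MACINTOSH -t UTF-8 old_document > attempt2.txt
```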
Having others run a service for you is a good thing! I'd love to pay a subscription for a service, but ran as a cooperative, where I'm not actually just paying a subscription fee, instead I'm a member and I get to decide what gets done as well.
This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.
But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.
I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.
That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.
Granted, that's rarely enforced, but if you're a stickler for that sort of thing, check your ISP's Acceptable Use Policy.
I wonder if a local model might be enough for sysadmin skills, especially if it were trained specifically for this?
I wonder if iOS has enough hooks available that one could make a very small/simple agentic Siri replacement like this that was able to manage the iPhone at least better than Siri (start and stop apps, control them, install them, configure iPhone, etc) ?
p0wnland. this will have script kiddies rubbing their hands
This is nonsense. You can't self-host services meant to interact with the public (such as email, websites, Matrix servers, etc.) without a public IP, preferably one that is fixed.
I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.
Those servers are now publicly exposed, and only a few ports are open: for mail, HTTP traffic, and SSH (for Git).
I guess my use case also differs in that I don't run things just for me to consume; select others can consume the services I host.
My definition of self-hosting here isn't that I and I alone can access my services; that would just be me having a server at home with some non-critical things on it.
It took a couple of hours, with some hiccups I ran across, but the model had me go through the setup for Debian: how to get through the setup GUI, what to check to make it server-only, then the commands to run so it wouldn't stop when I closed the laptop. It helped with Tailscale and getting the SSH keys set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and removing the old ones afterwards. It also knows about the limitations of 8 GB of RAM and how to make sure the Docker settings for the different services I want to run don't cause issues.
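For flavor, here is a sketch of the kind of thing it walked me through; the paths, database name, and MinIO alias are placeholders, not what the model actually produced:

```bash
# Keep the laptop-server running with the lid closed:
#   set HandleLidSwitch=ignore in /etc/systemd/logind.conf, then:
systemctl restart systemd-logind

# Nightly Postgres dump pushed to MinIO, keeping a week of dumps locally
# (assumes "mc alias set homelab-minio ..." was done once beforehand):
cat > /etc/cron.daily/db-backup <<'EOF'
#!/bin/sh
set -e
dump=/var/backups/app-$(date +%F).sql.gz
pg_dump appdb | gzip > "$dump"
mc cp "$dump" homelab-minio/backups/
find /var/backups -name 'app-*.sql.gz' -mtime +7 -delete
EOF
chmod +x /etc/cron.daily/db-backup
```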
Give me a month, genuinely strong intentions, and the ability to Google, read posts, and find the answers on my own, and I still don't think I would have gotten to this point with the amount of trust I have in the setup.
I very much agree with this topic about self-hosting coming alive because these models can walk you through everything. Self-building and self-hosting can really take off. And in the future, when open models are that much better and hardware costs come down (maybe, just guessing of course), we'll also be able to host our own agents on the machines we have already set up. All of it done ourselves.
CC lets you hack together internal tools quickly, and Tailscale means you can safely deploy them without worrying about hardening the app and server against the outside world. And Tailscale ACLs let you fully control who can access which services.
It also means you can literally host the tools on a server in your office, if you really want to.
Putting CC on the server makes this setup even better. It's extremely good at system administration.
I just wish this post weren't written by an LLM! I miss the days when you could feel the nerdy joy through words across the internet.
If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.
It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.
We've gone a step further, and made this even easier with https://zo.computer
You get a server, and a lot of useful built-in functionality (like the ability to text with your server)
From time to time, test the restore process.
Then someday we self-host the AI itself, and it all comes together.
My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.
I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.
Thus, Renovate keeps me up to date and git keeps everyone honest.
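The Renovate side is just a small config in the repo; a sketch of the auto-merge rule (the preset and matching rules are examples to tune):

```bash
# Sketch of a renovate.json that auto-merges patch-level and digest updates
# while leaving minor/major bumps as PRs to review:
cat > renovate.json <<'EOF'
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true
    }
  ]
}
EOF
```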
Is there a replica implementation that works in the direction I want?
I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio
I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.
I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.
And then you can only use distros which have a raspberry pi specific build. Generic ARM ones won't work.
I build out my server in Docker and I’ve been surprised that every image I’ve ever wanted to download has an ARM image.
Avoid stacking in too many hard drives since each one uses almost as much power as the desktop does at idle.
But I want to host an LLM.
Thanks
related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/
Also the "Why it matters" in the article. I thought it's a jab at AI-generated articles but it starts too look like the article was AI written as well
Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”
[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:
1. Fear of misconfiguring Linux
2. Fear of Docker / Compose complexity
3. Fear of "what if it breaks at 2am?"
CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).
So the tradeoff has changed from:
“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”
I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?
Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.
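The multi-domain part is mostly just name-based virtual hosts on the box behind Cloudflare; a minimal sketch (the domains and upstream ports are placeholders, with Cloudflare pointing every domain at the same public IP):

```bash
# Sketch: one nginx server block per domain on the same machine.
cat > /etc/nginx/conf.d/sites.conf <<'EOF'
server {
    listen 80;
    server_name appone.example.com;
    location / { proxy_pass http://127.0.0.1:3001; }
}
server {
    listen 80;
    server_name apptwo.example.org;
    location / { proxy_pass http://127.0.0.1:3002; }
}
EOF
nginx -t && systemctl reload nginx
```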
Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.
I think it's one of these costs that I kind of brushed under the carpet as "It's an investment." But eventually, this cost became a topic of conversation with my wife and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or complex hidden political scheme; he got it fair and square through a very clever business plan.
Over the 5 years or so that I've been using AWS, the costs have been flat. Meanwhile, the cost of the underlying hardware has dropped to something like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?
Bandwidth inside the same zone is free.