I am reminded of an aphorism about having a problem and deciding to use regex.
> Historical data: I’m not chasing down grand mysteries that require fleet-wide aggregate metrics.
Everyone believes this... until it isn't true, and then you find yourself needing logs from the last two weeks.
For home labs, log aggregation is an easy problem to deal with these days, and a secure sink to send all your logs to has (potentially) more than one benefit.
Anecdote - I've just been tracking some unpleasant FLUSH CACHE EXT errors on my aging pre-owned Xeon box, and having an understanding of the frequency and distribution of those errors on the hypervisor, as well as their correlation with different but related errors presenting in the VMs, was a) very useful, and b) not something I'd have predicted I'd need beforehand.
I recently had to troubleshoot a hanging issue on one of my servers, so I needed something that could ship logs. The modern observability stack is a deep pit of complexity, but OpenTelemetry is a standard, and there are reasonably simple tools in the ecosystem. I knew I didn't want a behemoth like Grafana, and I was aware of SigNoz, though it seems janky. Then I stumbled upon OpenObserve, and it looked promising. Setting it up on a spare mini PC and opentelemetry-collector on the server was pretty straightforward. Getting the collector configuration right took some trial and error, though.
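For anyone curious what that ends up looking like, a minimal log-shipping collector config is roughly this shape (a sketch with placeholders, not my exact setup: the OpenObserve endpoint, organization, and credentials are made up, and the filelog receiver comes from the collector's contrib distribution):

```yaml
receivers:
  filelog:
    include: [ /var/log/syslog ]

exporters:
  otlphttp:
    # Placeholder endpoint and credentials for an OpenObserve instance;
    # check the ingestion settings of your own deployment for the real values.
    endpoint: http://openobserve.lan:5080/api/default
    headers:
      Authorization: "Basic <base64 of user:password>"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```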
I have to say, I'm quite satisfied with this setup. I ended up installing the collector on other machines, so it's almost like a proper observability system now :)
The graphs are nice. I can expand it to monitor anything else I would need. I haven't set up alerts yet, but it's possible.
I'm not really concerned about monitoring the monitor. It's not a big deal for my use case if it goes down. Metrics and logs will be submitted when it's back up, since they're cached on the servers. Besides, I'm only running OpenObserve on the machine, so there aren't many moving parts.
Anyway, all this is to say that sometimes there's more to be gained from using off-the-shelf tooling instead of rolling your own, even if it involves more complexity. Server monitoring is an old tradition, and there are many robust solutions out there. OTLP isn't that bad, especially at smaller scales, and it opens the door to a large ecosystem. It would be foolish not to take advantage of that.
Short of paying for a service (which somewhat goes against the grain of trying to host all your own stuff), the closest I can come up with is relying on a service outside your network that has access to your network (via a tunnel/vpn).
Given that a lot of my own networking setup (DNS/Domains/Tunnels etc.) is already managed via Cloudflare, I'm thinking of using some compute at that layer to provide a monitoring service. Probably something to throw next at my new LLM developer...
If anybody wants to be a clever clogs, combining both this and Uptime Kuma would be genius. What I want is redundancy: e.g., if something can't be reached by one, check it from the other; likewise, if one service takes a crap, continue monitoring via the other and sync up the histories once they're both back online.
This "local or cloud" false dichotomy makes no sense to me—a hybrid approach would be brilliant.
If anyone manages this, email me: me@hammyhavoc.com. I would love to hear about it.
I find Gatus much better thought through.
Not necessarily an API, but a config file would be nice.
Also... there is a big disclaimer at the very top of the page.
Having a feature-rich TSDB backing your alerting minimizes the time spent adding alerts, and the UX of writing a potential alert expression and seeing when it would have fired in the past is amazing.
Just two processes to run, either bare or containerized, and you can throw in a Grafana instance if you want better graphs.
e.g. For 1 machine, hourly checking is ~$0.25/year
I have found that "machine is online" is usually not what I need monitoring for, at all. I'll notice if it's down. It's all the mission-critical-but-silently-breakable things that I bother to monitor (a rough sketch of a couple of these follows the list):
- raid health
- free disk space
- whether backup jobs are running
- ssl certs expiring
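Here's a sketch of what a couple of those checks can look like as a cron-able script; the host, thresholds, and notification command are placeholders, and RAID or backup-job checks would follow the same pattern (e.g. parsing mdadm output or your backup tool's exit status):

```python
#!/usr/bin/env python3
"""Sketch of "silently breakable" checks: free disk space and cert expiry."""
import datetime
import shutil
import socket
import ssl
import subprocess


def check_disk(path="/", min_free_gb=10):
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= min_free_gb, f"{path}: {free_gb:.1f} GB free"


def check_cert(host, port=443, min_days=14):
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port), timeout=5),
                         server_hostname=host) as s:
        not_after = s.getpeercert()["notAfter"]
    expires = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    days_left = (expires - datetime.datetime.utcnow()).days
    return days_left >= min_days, f"{host}: cert expires in {days_left} days"


def notify(message):
    # Placeholder: swap in mail, ntfy, Pushover, whatever you already use.
    subprocess.run(["logger", "-t", "homelab-check", message])


if __name__ == "__main__":
    for ok, detail in [check_disk("/"), check_cert("example.home.arpa")]:
        if not ok:
            notify(f"ALERT: {detail}")
```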
Also, if you have kids aged 0-6, you can't schedule anything reliably.
Because of the wide breadth of what a homelab can mean, it's really hard to make universal statements about what is always good. Judging by the style of the article, the author probably wants backup and failure notifications in some form, if that isn't already covered outside of their custom monitoring.
Hope you might give us a try at https://heiioncall.com/ and let me know if that fits. (Disclosure: our team is building and operating it as a simple monitoring, alerting, on-call rotations solution.) We have the cron job heartbeats, HTTP healthchecks, SSL certificate expiration, etc etc all in one simple package. With mobile app alerts (critical and non-critical), business hours rules, etc. And a free tier for homelabbers / solo projects / etc. :)
Edit: since you mentioned silencing things in your post, we also have very flexible "silence" buttons, which can set silence at various levels of the hierarchy, and can do so with a predefined timer. So if you know you want things to be silenced because you're fixing them, you can click one button and silence that trigger or group of triggers for 24 hours -- and it'll automatically unsilence at that time -- so you don't have to remember to manually manage silence/unsilence status!
Much easier to edit a list in vscode than click around a bunch in an app
Similar to the author, I want to run a minimalist monitoring setup and currently just use Glances. But Grafana Cloud might be my first choice if I need to expand the setup.
I did figure out some ways to reduce the log query usage of the alerts and made the "you need to upgrade to a paid tier!" notices stop. Still, the experience was the straw that broke the camel's back. I'd already been getting somewhat frustrated by the 2 week retention and 10 dashboard limit.
FWIW, it wasn't too difficult to stand up the Docker containers for Grafana, Loki, and Prometheus for my own usage.
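For anyone wanting a starting point, the compose file can be as small as this sketch (image tags, ports, and config mounts are illustrative; in practice you'd pin versions and mount your own prometheus.yml plus Loki and Grafana config):

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  loki:
    image: grafana/loki
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
      - loki
```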
All monitoring would be handled via plugins, which would be extremely easy to write.
It would ship with a few core plugins (ping, http, cert check, maybe snmp), but you could easily write a plugin to monitor anything else — for example, you could use the existing Python Minecraft library and write a plugin to monitor your Minecraft server. Or maybe even the ability to write plugins in any language, not just Python.
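To make the idea concrete, a hypothetical plugin interface (names and structure invented purely for illustration, not any existing project's API) could be as small as:

```python
import urllib.request
from dataclasses import dataclass


@dataclass
class CheckResult:
    ok: bool
    message: str


class Plugin:
    """A plugin just implements check() and returns a CheckResult."""
    name = "base"

    def check(self) -> CheckResult:
        raise NotImplementedError


class HttpPlugin(Plugin):
    """Core http plugin: succeed if the URL answers with HTTP 200."""
    name = "http"

    def __init__(self, url: str):
        self.url = url

    def check(self) -> CheckResult:
        try:
            with urllib.request.urlopen(self.url, timeout=5) as resp:
                return CheckResult(resp.status == 200, f"{self.url} -> {resp.status}")
        except Exception as exc:
            return CheckResult(False, f"{self.url} failed: {exc}")
```

A Minecraft plugin would then just be another subclass that wraps whatever the Python Minecraft library returns into a CheckResult.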
I’m not a developer and I’m opposed to vibe coding, so it’ll be slow going :)
Easy to configure, easy to extend with Go, and it slots into alerting.
I have an enterprise grade NAS, but if there's any kind of disk or RAID issue it beeps the shit out of me; I call that enough for home use.
I have a Unifi router; if there is a connection issue, it fails over to LTE and I get a notification on my phone.
I have a UPS; if there is a power failure, my lights shut off, my NAS and workstation shut down via NUT, and I can restart them remotely by VPNing into my router and sending WOL packets.
Basically everything is already taken care of.
What the hell else do I need for a home? When I'm away I don't exactly have 10 million users trying to access my system, let alone 1.
E.g., you run a service container that also needs Postgres, Redis, a reverse proxy, a Cloudflare Tunnel and perhaps sidecar worker containers too, like Authentik. People want to know where the problem is immediately without fucking around with 80+ containers.
There’s no certificate expiration monitoring just yet, but everything else is there: poll probes (active ICMP or TCP probes), push probes (reporting HTTP API for apps), and local probes (reporting HTTP API for sub-Vigil for firewalled infrastructure parts).
A whole chain of these can end up being the key to overcoming otherwise unsurmountable obstacles.
Which can be extremely hard for anybody else to replicate in the future, especially if they don't even get the first step right; perhaps not even yourself :\
Something like that can be quite a moat for a technology developer ;)
(Full disclosure: I'm the author)
It's shockingly easy to set up. I have the monitoring stack living on a GCP host that I have set up for various things, and have it connected via tailscale.
It actually paid for itself by alerting me to low voltage events via NUT. I probably would have lost some gear to poor electrical conditions.
I just use a simple script that is run every 60 seconds and a list of resources to check.
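Something in this spirit (a sketch; resources.txt and the logger-based alert are placeholders for whatever list format and notification you actually use):

```python
#!/usr/bin/env python3
"""Check every URL in resources.txt; log an alert for anything unreachable."""
import pathlib
import subprocess
import urllib.request

for line in pathlib.Path("resources.txt").read_text().splitlines():
    url = line.strip()
    if not url or url.startswith("#"):
        continue  # skip blank lines and comments
    try:
        urllib.request.urlopen(url, timeout=5)
    except Exception as exc:
        subprocess.run(["logger", "-t", "resource-check", f"{url} down: {exc}"])
```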
NewRelic and Grafana Cloud have pretty good free plan limits, but I'm paying for that in effort because I don't use either at work so it's not what I'm used to.
You also only get system metrics, no integrations - but most metrics and checks can be done remotely with a single dedicated agent
Even a quick Prometheus + alert manager setup with two docker containers is not difficult to manage - mine just works, I seldom have to touch it (mainly when I need to tweak the alert queries).
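For a sense of scale, a typical alert rule in that kind of setup is only a few lines; this one is purely illustrative (the job label, threshold, and timings are placeholders):

```yaml
groups:
  - name: homelab
    rules:
      - alert: HostDown
        expr: up{job="node"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
```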
I use Pushover for easy API-driven notifications to my phone; it's a one-time fee of $7 or so, and it was money well spent.
I used to have sensu, but it was a pain to keep updated (and didn't work that well on old rpis)
But what I did find to be a good alternative was Telegraf -> some sort of time-series DB (I still really like Graphite; InfluxQL is utter horse shit, and Prometheus's fucking pull model is bollocks).
Then I could create alert conditions on grafana. At least that was simple.
However, the alerting in Grafana moved from "move a handle, adjust a threshold, get a configurable alert" to "craft a query, get loads of unfilterable metadata as an alert".
It's still good enough.
Prometheus as a time-series DB is great; I even like its QL. What I don't like is pull. Sure, there is agent mode, or telegraf/grafana agent. But the idea that I need to hold my state and wait for Prometheus to collect it is utterly stupid. The biggest annoyance is that I need to have a webserver somewhere, with a single god instance(s) that can reach out and touch it.
Great if you have just one network, but a bollock ache if you have any kind of network isolation.
This means that we are using InfluxDB and its shitty Flux QL (I know we could upgrade, but that's hard).
We're all Kubernetes these days, so I guess I didn't think about it a lot in recent years.
Especially for home cloud, home ops, home labs: that's great! It's awesome that you did it for yourself, and that you wrote up your experience.
But in general I feel like there's a huge missing middle of operations & sys-admin-ery that creates a weird, distorted narrative. There are few people out there starting their journey with Prometheus and blogging helpfully through it. There are few people midway through their k8s work talking about their challenges and victories. The tales of just muddling through, of perseverance, of looking for information and trying to find signal through the noise, are few.
What we get a lot of is "this was too much for me, so I wrote my own thing instead". Or "we have been doing such and such for years and found such and such to shave 20% compute", or "we needed this capability, so we added Z to our k8s cluster like so". The journey is so often missing; we don't have stories of trying and learning. We have stories, like this one, of making.
There's such a background of "too complex" that I really worry it leads us spiritually astray. I'm happy for articles like this, and it's awesome to see ingenuity on display, but there are so many good, robust tools out there that lots of people seem to be happily (or at least adequately) using. Yet the stories of turning back from the attempt, of eschewing the battle-tested, widely adopted software, drive so much of the narrative and have so much more ink spilled over them.
Very thankful for the Flix language putting Rich Hickey's principle of "simple isn't easy" first, for helping re-orient me by the axis of Hickey's old grand guidance. I feel like there's such a loud clamor generally for easy, for scripts you throw together, for the intimacy of tiny systems. And I admire a lot of these principles! But I also think there's a horrible backwardsness that doesn't help, that drives us away from more comprehensive, capable, integrative systems that can do amazing things, that are scalable both performance-wise (as Prometheus certainly is) and organizationally (in that other people and other experts will also lastingly use and build from them). The preselection for easy is quickly attainable individually, but real simplicity requires vastly more: so much more thought and planning and structure. https://www.infoq.com/presentations/Simple-Made-Easy/
It's so weird to find myself such a Cathedral-but-open-source fan today. Growing up, the Bazaar model made such sense, had such virtue to it. And I still believe in the Bazaar, in the wide world teeming with different software. But I worry what lessons are most visible, worry what we pass along, worry about the proliferation of discontent against the really good open source software that we do collaborate on together en masse. It feels like there's a massive self-sabotage going on, that so many people are radicalized and sold a story of discontent against bigger, more robust, more popular open source software. I'd love to hear that view, but I also want smaller individuals and voices making a chorus of happy noise about how far they get, how magical and powerful it is that we have so many amazing, fantastic, bigger open source projects that so scalably enable so much. https://en.m.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
I love the idea of writing up my ultimately-successful experiences of using open source software. I'm currently working on a big (for me anyway) project for my homelab involving a bunch of stuff I've either never or rarely done before. But... if I were to write about my experiences, a lot of it would be "I'm an idiot and I spent two hours with a valid but bad config because I misunderstood what the documentation was telling me about the syntax and yeah, I learned a bit more about reading the log file for X, but that was fundamentally pointless because it didn't really answer the question." I'd also have to keep track of what I did that didn't work, which adds a lot more work than just keeping track of what did work.
There's also a social aspect there where I don't want to necessarily reveal the precise nature of my idiocy to strangers over the internet. This might be the whole thing here for a lot of people. "Look at this awesome script I made because I'm a rugged and capable individualist" is probably an easier self-sell than "Despite my best efforts, I managed to scrounge together a system that works using pieces made by people smarter than me."
I think I might try. My main concern is whether it will ruin the fun. When I set up Prometheus, I had a lot of fun, even through the mistakes. But, would also trying to write about it make it less fun? Would other people even be interested in a haphazard floundering equivalent to reading about someone's experience with a homework assignment? Would I learn more? Would the frustrating moments be worse or would the process of thinking through things (because I am going to write about it) lead to my mistakes becoming apparent earlier? Will my ego survive people judging my process, conclusions, and writing? I don't know. Maybe it'll be fun to find out.