When choosing software to run in my “homelab” I lean towards community-developed projects first. They may not always match the polish of commercial offerings, but they’re safer for the long term and have no artificial limits (Plex). I used to be a happy Plex customer (I have Plex Pass), but several years ago I’d had enough of their bullshit, switched to Jellyfin and couldn’t be happier!
Over the years I have been streaming all my movies and shows to as many people as I want.
Plex added HDR support for transcoding, live subtitle syncing and more.
The subtitle syncing in particular is fantastic; it completely solved the problem.
You also realistically can't fork things unless multiple people do it, and they all stay interested in the fork.
That's what we say it's about. But it's really about open source devs being our slaves forever. Get to work, Mattermost! (whip crack)
> What This Means for Existing Deployments
> Paid Customers: No action required—your deployments are unaffected.
[1] https://forum.mattermost.com/t/mattermost-v11-changes-in-fre...
I saw "we're happy to pay for it" and thought they were paying for it. They're not, yet.
> Mattermost Entry gives small, forward-leaning teams a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows. Entry has all features of Enterprise Advanced with the following server-wide limitations and omissions:
https://docs.mattermost.com/product-overview/editions-and-of...
Sounds like some kind of parody of enterprise software.
I assumed that they were being forced by the copyright mafia, but they’re perfectly capable of making these decisions on their own.
If you ask these people, you need to buy expensive hardware and build your own datacenter at home.
I have been hosting all my services on a single Intel NUC from 10 years ago, with an RPi 5 as a backup for critical services like DNS.
That's it.
You'll truly be amazed at how much stuff you can actually run on very little hardware if you only have two to five users, like a family.
Also, MinIO was always an enterprise option; it was never meant for home use. Just use SeaweedFS, Garage or the like if you really want S3.
Sidenote: You do not need S3 in your house. Just use the filesystem.
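To make that concrete, here is a minimal illustration of the same write done "the S3 way" (against a self-hosted endpoint such as SeaweedFS or Garage) and "the filesystem way". The endpoint, credentials, bucket and paths are all invented for the example:

```python
# Illustration only: storing one file via an S3 API vs. the plain filesystem.
# The endpoint, credentials, bucket and paths below are made up.
import pathlib
import boto3  # third-party; only needed for the S3 variant

data = b"family-photo bytes"

# S3-style object storage (e.g. a self-hosted SeaweedFS/Garage endpoint):
s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.1.10:9000",
    aws_access_key_id="homelab",        # dummy credentials for the example
    aws_secret_access_key="example",
)
s3.put_object(Bucket="photos", Key="2024/beach.jpg", Body=data)

# Plain filesystem, which is all most home setups need:
target = pathlib.Path("/srv/photos/2024/beach.jpg")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_bytes(data)
```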
For me personally, I built my “data centre” as cheaply as possible, but there are a few requirements for which the computers you’re using would not cut it: the storage server must use ZFS with ECC memory. I started this around a decade ago and only spent ~$300 at the time (reusing an old PSU and case, I think).
There are many requirements of a data centre that can be relaxed in a home lab setting (uptime, performance, etc.), but I would never trade data integrity for a tiny bit of savings. Sadly, this is a criterion that many, including some of those building very sophisticated home clusters, didn’t set as a priority.
What are you putting in the VM, another Linux kernel? Why? Then you need to account for between 4 GB and ~8 GB of extra RAM per VM.
I don't have RAID, though I do back up to my NAS at my parents' place.
But honestly an NVMe drive is basically like a CPU: it's either dead on arrival or it will just run forever.
There are some use cases for a VM over a container: sometimes you want better isolation (my public-facing web server runs in one), or a different OS for some reason (I run an OSX VM because it's the only way to test a site in Safari).
Seems like a waste to me.
Back up your Docker config and your data; that's what you actually need. The rest is available online if you ever need it.
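As a rough sketch of what that can amount to, assuming Compose files live under /opt/compose and bind-mounted app data under /srv/appdata (both paths are made up for illustration):

```python
# Bare-bones "docker config + data" backup: tar up the compose files and the
# bind-mounted volumes. Paths are assumptions, not a prescription of what to include.
import tarfile, datetime, pathlib

SOURCES = ["/opt/compose", "/srv/appdata"]   # assumed: compose files + app data
dest = pathlib.Path("/mnt/backup") / f"homelab-{datetime.date.today()}.tar.gz"

with tarfile.open(dest, "w:gz") as tar:
    for src in SOURCES:
        tar.add(src, arcname=pathlib.Path(src).name)  # keep a short top-level name
print("wrote", dest)
```

Restoring is then just extracting the archive and bringing the containers back up.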
>Besides sometimes you need to run software that is not available on linux.
Really, like what?
You don't need ECC.
You absolutely don't need Proxmox; containers are good enough.
It takes a while before building a proper home server makes sense.
RAID 1 or RAID 6 makes sense, but it's absolutely not a tipping point.
Goals are vastly different too. For some it's about hosting a few services to be free from company slop, for others it's a way to practice devops: clustering, containers, complex networking.
Seeing someone recommend Proxmox or FreeNAS to a beginner who just wants to share family photos from an old laptop is wrong in so many ways...
1. A peer-to-peer model of decentralization like BitTorrent, instead of the client-server model. Local web UIs (like Transmission's web UI) may be served locally (either host-only or LAN-only) as the frontend for these apps. Consider this the 'last-mile connectivity', if you will.
2. Applications are resistant to outages. Obviously, home servers can't be expected to be online all the time; the service may even be running on your regular desktop. But you shouldn't lose the utility of the service just because it goes offline. A great example of this is email: servers can wait up to 2 days for the destination server to show up before declaring a delivery failure, and even transient rejections are handled with retries minutes later.
3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that identity into a connection to the correct end node (a minimal sketch of the identity part follows below). But most of these technologies exist today.
4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.
The Solid and Nostr projects embody many of these ideas already. They just need a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache and web UI for external data.
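As a minimal sketch of the cryptographic-identity idea from point 3: a node derives a stable ID from an Ed25519 keypair and signs its current endpoints, so peers can recognize it no matter which IP or NAT it appears behind. This uses the third-party `cryptography` package; the announcement format and addresses are made up:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
import hashlib

# Generate a long-lived identity keypair for this node.
identity_key = Ed25519PrivateKey.generate()
public_bytes = identity_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# The node ID is just a hash of the public key -- stable across IP/NAT changes.
node_id = hashlib.sha256(public_bytes).hexdigest()

# The node signs its current reachability info ("announcement"); peers or relays
# can verify it came from node_id regardless of which IP it shows up from.
announcement = b'{"endpoints": ["203.0.113.5:4567"], "ts": 1700000000}'
signature = identity_key.sign(announcement)

# A peer that knows the public key verifies before trusting the endpoints.
identity_key.public_key().verify(signature, announcement)  # raises InvalidSignature if forged
print("node id:", node_id)
```

The point is that the node ID stays constant while the endpoints behind it can change freely, which is exactly what dynamic IPs and NATs require.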
FWIW: my Intel NUC on Ubuntu LTS with a cron apt update/upgrade job has worked for years without a hiccup.
I have reliable electricity and internet at home, though.
I apologize if it was confusing. I was suggesting the exact opposite. It's not about how to build a mini enterprise cluster; it's about how to change the service infrastructure to suit the small computers we usually find at home, without any modifications. I'm suggesting a more fundamental change.
> I have reliable electricity and internet at home, though.
It isn't too bad where I'm at, either. But sadly, that isn't the practical situation elsewhere. We need to treat power and connectivity as random and intermittent.
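In practice that mostly means software has to queue and retry instead of failing fast, in the spirit of the SMTP behaviour mentioned earlier. A toy sketch, with the delivery function and the retry schedule left as assumptions:

```python
# Toy store-and-retry delivery loop: keep trying with growing delays instead of
# failing the moment the other end is unreachable. Delays and `send` are assumed.
import time

RETRY_DELAYS = [60, 300, 1800, 7200, 21600]  # seconds; roughly "minutes, then hours"

def deliver_with_retries(send, payload):
    """Try to hand `payload` to `send()`; back off on failure instead of giving up."""
    for attempt, delay in enumerate(RETRY_DELAYS, start=1):
        try:
            send(payload)
            return True
        except (ConnectionError, TimeoutError):
            print(f"attempt {attempt} failed, retrying in {delay}s")
            time.sleep(delay)
    return False  # only now do we report a delivery failure to the user
```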
Enshittification also usually implies that switching to an alternative is difficult (typically because creating a competing service is nearly impossible when you'd have to pull users over to it). That flaw doesn't really apply to self-hosting the way it does to centralized social media. You can just switch to Jellyfin or Garage or Zulip. Migration might be a pain, but it's doable.
You can't as easily stop using LinkedIn or GitHub or Facebook, etc.
That's probably not how you should interpret it. Self hosting as a whole is still a vastly better option. But if there is a significant enough public movement towards it, you can expect it to be targeted for enshittification too. The incidents related to Plex, MinIO and Mattermost should be taken as warning signals about what this may escalate into in the future. Here are the possible problems I foresee.
1. Situations like the ones with Plex, MinIO and Mattermost can be expected to happen more frequently. Past a point, the pain of frequent migration becomes untenable. MinIO is a great example: even the crowd on HN hadn't considered an alternative until then. Some of us learned about Garage, RustFS and Ceph S3 for the first time and were debating their respective pros and cons. It's very telling that that discussion was so lengthy.
2. There is a gradual nudge to move everything to the cloud and then monetize it. The mandatory online account for Win11, monetization of GH self-hosted runners (now suspended after backlash, I think) and the cloudification of MS Office are good examples. You can expect a similar attempt on self-hosted applications. Of course, most of our self-hosted software is currently open source. But if these big companies decide to embrace, extend and extinguish it, I'm not sure the market will be prudent enough to stick with the FOSS options. Half of HN was fighting me a few days back when I suggested that we should strive to push the market towards serviceable, modular hardware.
3. FOSS projects developed under companies are always at a higher risk of being hijacked or going rogue. To be clear, I'm not against that model. For example, I'm happy with Zulip's development and monetization model: ethical, generous and not too pushy. But Mattermost shows where that can go wrong. Sure, they're open source. But there are practical difficulties in easily overriding such issues.
4. At one time, we were expecting small form-factor headless computers (plug computers [1]) like the SheevaPlug and FreedomBox to become ubiquitous. That should still be an option, though I'm not sure where it's headed, given the current RAM situation. But even if they make a comeback, it's very likely that OEMs will lock them down like smartphones today and make it difficult for you to exercise your choice of server software, if not outright restrict it. (If anybody wants to argue that normal people will never consider it, remember what smartphones were like before the iPhone: the BlackBerry was used only by a niche crowd.)
I suspect you don’t. I suspect a couple of Beelinks could run your whole business (minus the GPU needs).
Honestly, the problem they're preparing for isn't any of our fault. This is inflicted upon the world by some very twisted business models, incentives and priorities. It's hard to predict how it will all end up. Perhaps the market will be flooded with tons of RAM that will have to be transplanted onto proper DIMM modules. Or perhaps we might be scavenging e-waste junkyards for every last RAM IC we can find, in which case his choice would be correct.
...today.
If you're self-hosting, do you need 640K of RAM?
You can buy a “lightly used” Dell Optiplex with 8 GB of RAM for like $40, which will cover all your self-hosting needs today.
Yes, no more free DynDNS accounts... but you can still use afraid.org, or maybe Cloudflare tunnels?
And in some cases nowadays you can get away with
docker-compose up
And some of those things, like MinIO and Mattermost: are those complaints about the free tier or complaints about self-hosting? I can't tell.
Indeed, the easiest "self hosting" ever was when ngrok happened... you could get your port listening on the internet without a sign-up, just by running a single binary without a flag...
Oh yes it is. I was already self-hosting stuff back in 2000 and it was very hard. Then came Docker, and it's very simple now.
Sure, "very simple" means different things to different people, but if you self-host you need to know a lot already.
This is somewhat similar to amateur electronics: you used to do 100% of it yourself, from scratch. Now you have ready-made boards and you can start in a much simpler way.
"Plex added a paid license for remote streaming, a feature that was previously free. And then Plex decided to also sell personal data — I sure love self-hosted software spying on me."
How is it "self-hosted" if it's "remote streaming?" And if you're hosting it, you can throttle any outgoing traffic you want. Right?
The only other examples are Mattermost and MinIO... which I don't know much about, but again: Aren't you in control of your own host?
This article is lame. How about focusing on back-ends that pretend to support self-hosting but make it difficult by perpetuating massive gaps in their documentation (looking at you, Supabase)?
You host the Plex service with your media library. Plex allows you to stream without opening up your firewall to others. Not sure how it works exactly, because I never hosted it myself.
It relies on their hosted services/infrastructure. I avoid Plex for that reason. I just host my media with nginx with directory indexing enabled, WireGuard to create the tunnel between server and client, and Kodi as the frontend to view the media (you can add an indexed HTTP server as a media source).
Works great. There's no transcoding like Plex does, but that's less of an issue nowadays when hardware-accelerated decoders are common for H.264 and H.265.
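For anyone who wants to see the shape of that setup without touching nginx, here is a stand-in sketch using Python's built-in http.server (this is not the commenter's actual configuration; the media path and the WireGuard address are assumptions):

```python
# A minimal stand-in for "plain HTTP server with directory indexing" as described
# above (the commenter uses nginx; this just shows the same shape).
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler
from functools import partial

MEDIA_DIR = "/srv/media"   # assumed path to the media library

# SimpleHTTPRequestHandler serves files and auto-generates directory index pages,
# which is what Kodi needs in order to use the server as an HTTP media source.
handler = partial(SimpleHTTPRequestHandler, directory=MEDIA_DIR)

# Bind to the (assumed) WireGuard interface address only, so nothing is exposed publicly.
ThreadingHTTPServer(("10.0.0.1", 8080), handler).serve_forever()
```

Kodi can then be pointed at http://10.0.0.1:8080/ as an HTTP media source over the tunnel.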
Only if you want it to. Your local Plex server is always available on port 32400, which can be opened up for others as well. But using Plex’s authentication is more convenient, of course.
That's one way of enshittifying, but what the article talks about is nonetheless very important.
People rely on projects being open source (or rather: _hosted on GitHub_) as some sort of mark of freedom from shitty features and burdensome monetization.
As the examples illustrate, the pattern of capturing users with a good offering and then squeezing them for money can just as easily be followed by open-source software with free licenses. The reason is that source code being available is not, on its own, enough to keep users from being captured by adversarial interests.
What you ALSO need is people willing to put in the work to maintain a parallel fork that continuously keeps the enshittification at bay. Someone who rolls a distribution with a massive amount of ever-decaying patches, increasingly large piles of workarounds, etc. Or, alternatively, a "final release"-style fork that enters maintenance mode and only ever backports security fixes. Either of those is a huge amount of work, and it's not even certain that people will find the fork on their own rather than just assume "things are like that now".
Given that the code's originating corporation can and will eagerly throw whole teams of people at disabling such efforts, the counter-effort would require the same amount of free labor to succeed, or even more, given that it's easy for the code's originator to wreck things but difficult for the restoration crew to fix them.
This pattern, repeated in many projects over the decades since GPL2 and MIT were written, shows that merely being free and open source is not a complete anti-enshittification measure for the end user. What is actually necessary is a societal measure: a safety net made up of developers dedicated to the conservation of important software, capable of correcting any stupid decisions made by pointy-haired managers. There are some projects like this (e.g. Apache, and many more), but they are not all-encompassing, and many projects that are important to people are without such a safety net.
So, for example, when people are upset that Mattermost limits messages to 10,000, their real quarrel isn't even with the scorpion, which is known to sting; it is with the lack of a social safety net for this particular piece of software. Their efforts would be well spent on rapidly building such a safety net to quickly force the corporation's hand into increasingly desperate measures, accelerating its endgame and self-induced implosion. Then, after the corporation's greed inevitably makes it eat itself in full, the software can enter the normal space of FOSS development rather than forever remain a corporate slave-product pact-bound to a Delaware LLC by a chain of corporate greed.
Only once the free fork's competition, backed by VCs burning their money on a ceremonial heap, has been removed can the free version of the software become the central source for all users and therefore succeed, rather than continuously playing catch-up with a throng of H-2B holders.
But the biggest thing I am worried about is hardware prices.
So I want to ask: is there any hardware (especially RAM) whose price isn't increasing insanely? Perhaps refurbished or auctioned servers?
What is the best way to get bang-for-the-buck hardware right now? Should we even buy hardware now, or wait 3-4 years for factory production to rise and the AI bubble to crash? I definitely think RAM prices will fall very steeply (it's almost a cycle in the RAM business).
I am not sure, but buying up small amounts of compute feels like a decent idea if you are doing anything computationally expensive. Of course, if you have something like Plex, then I suppose you have to expand on the storage side and not so much on the RAM side (perhaps some encoding/decoding could be RAM-intensive, but I don't know).
I had fallen for the rumour that ASUS is ramping up chip production or something to save hardware, but it turned out to be fake, so I'm not sure how to respond. Please, some hardware company should definitely see this opportunity, smh.
A used TinyMiniMicro PC (https://www.servethehome.com/introducing-project-tinyminimic...) is more than adequate for most workloads (except local AI, or if you want a huge amount of storage). Last time I checked, prices were in the ballpark of $100-$150 for a working machine.
New machines with an N-series Intel CPU are in a similar ballpark.
He is really excited about this project. He brought me newspaper clippings the other day showing that my idea has potential, among other things, so that's nice. I have given him the task of using his contacts in our small city for hardware, auctions and rentals, and trying to get more information about some cheaped-out starting specs, as I don't want us to invest in a lot of hardware up front, but rather reinvest the profits and maintain clear transparency.
Do you think we should postpone this idea for 3-4 years? (I am thinking so, honestly.) I would love to build my own software, and I am thinking that within those years I can try out more of the pain points of other providers and build a list of the nice features I like (if you know of any, please let me know, as I am still making the list).
I am not trying to do AI at all, but rather simple compute (even low-end compute starting out).
Power consumption isn't that much of an issue, I think.
Honestly, I am thinking that we should wait out this cycle of rising hardware prices so that we can buy at the start of the next cycle, but I am interested in whether NUCs would be good enough for my workflow, so I can point my father in that direction. I am not that knowledgeable about the hardware side of things, so I would really appreciate it if you could tell me more about it and what the best use cases for them are.
I saw from your article that Chick-fil-A uses Intel NUCs to run their Kubernetes clusters, so I am assuming they can be good enough for my use case as well?
Also, there is no guarantee that I end up doing it; it's still more of an idea than anything, and I would probably do some projections to see if it's worth it, among other things, before we get ourselves some basic cheap equipment to even start. If we do, we would probably start out with homelabbing equipment itself. Just to be clear, storage space isn't that big of a worry starting out, as I think his office is good enough.
Honestly, right now, as I understand it, RAM prices are what really kills the project and makes me want to reconsider the software side of things (building things myself and learning more) for a few years, so that we can then build out the hardware. I think this is the way to go, but having talked to my father, who was super excited about it, I am not exactly sure. Still, it might give him a few years of spare time to get more familiar with the hardware side of auctions and such, so that he can find us better deals too. Please share any advice that you (or anyone) has about it, as I would love to pass it down to my father so that he can make some queries in the local markets and with his contacts as well.
That’s about all I’ll say though, not my article.
But it does almost seem like there is a squeeze on general-purpose computing from all sides, including the homelab. The DRAM and SSD prices are just the latest addition to that. There's also Win 11 requiring a TPM, which is not a bad thing by itself, but which will almost certainly take away the ability to run arbitrary OSes on PCs 5-10 years down the line. Or you'd still be able to boot them, but nothing will run on them without a fully trusted chain from TPM -> secure boot -> browser.
Running an OS with a GUI in 16-32 MB of RAM...
Memory management for programs...
Less cutely, this is an interesting topical site/newsletter => https://selfh.st
> Even old hardware isn't safe: DDR4 prices are also affected, so that tiny ThinkCentre M720 won't save us.
Most of my home infrastructure is DDR2 or DDR3. It’s plenty fast for quite a lot of things. I really don’t care whether some background operation takes five minutes or an hour; I care more about how little energy and heat the machine produces.
Unless you have a heavy-duty pipe to your premises you're just risking all kinds of headaches, and you're going to have to put your stuff behind Cloudflare anyway; and if you're doing that, why not use a VPS?
It's just not practical for someone to run a little blog or app that way.
Take file storage: Some folks find Google Drive and similar services unpalatable because they can and will scan your content. Setting up Nextcloud or even just using file sharing built into a consumer router is pretty easy.
You don't need to rely on Cloudflare, either. Some routers come with VPN functionality or can have it added.
The self-hosting most people talk about when they talk about self-hosting is very practical.
Also, forking is an option; you can always use AI to keep it current.
If the source code is available for you to fork, modify, and maintain as you see fit, what's the complaining really about?
I think co-management is going to be the next paradigm.