And even if one just enables zswap and configures nothing else, the default behavior is to compress pages in RAM and only swap out to disk under extreme pressure.
Full compression would be nicer, but I'd also like to see ECC emulation (or something like it) as a cheaper alternative to real ECC hardware, although with current prices the savings might be smaller.
early intro: https://lwn.net/Articles/537422/
general usage docs: https://wiki.archlinux.org/title/Zswap
code: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
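For reference, a minimal sketch of checking and enabling zswap at runtime; the paths are the standard zswap module parameters, and the pool-size value is just an illustration:

    # is zswap on? (some distros enable it by default, others don't)
    cat /sys/module/zswap/parameters/enabled

    # turn it on for the running kernel
    echo Y | sudo tee /sys/module/zswap/parameters/enabled

    # optionally cap the compressed pool as a percentage of RAM (20 is the kernel default)
    echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent

Note that zswap still wants an ordinary swap device or file behind it: it compresses pages on their way to swap rather than replacing swap entirely.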
It's good that we have better swapping now, but I wish they'd address the above. I'd rather have programs getting OOM-killed or throwing errors before the system grinds to a halt to the point where I can't even ssh in and run 'ps'.
An interactive system that does not interact (terminal unresponsive, can't ssh in, screen doesn't refresh) is broken. I don't understand why this is not considered a kernel bug.
On my system, to add insult to injury, when the system does come back twenty minutes later, I get a "helpful" pop-up from the Linux Kernel saying "Memory Shortage Avoided". Which is just plain wrong. The pop-up should say "sorry, the kernel bricked your system for a solid twenty minutes for no good reason, please file a report".
    echo y >/sys/kernel/mm/lru_gen/enabled

Any combination of cgroups, /proc flags and other forbidden knobs to get such behaviour?
Any thoughts on that?
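One combination that gets part of the way there is cgroup v2 memory limits, which trade whole-system thrashing for reclaim and an OOM kill inside the group. A rough sketch, with the group name and sizes made up for illustration:

    # assumes cgroup v2 is mounted at /sys/fs/cgroup and the memory
    # controller is enabled in the parent's cgroup.subtree_control
    sudo mkdir /sys/fs/cgroup/capped

    # hard ceiling: allocations beyond this trigger the OOM killer inside the group
    echo 6G | sudo tee /sys/fs/cgroup/capped/memory.max

    # soft ceiling: reclaim and throttling kick in before the hard limit is hit
    echo 5G | sudo tee /sys/fs/cgroup/capped/memory.high

    # move the current shell (and its children) into the group
    echo $$ | sudo tee /sys/fs/cgroup/capped/cgroup.procs

That contains the damage to one group rather than the whole machine; userspace killers like earlyoom or systemd-oomd try to automate the same trade-off globally.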
To be honest I don't know why it's such an issue on Linux. Mac and Windows don't have this issue at all. Windows presumably because it doesn't over-commit memory. I'm not sure why Mac is so much better than Linux at memory management.
My eventual solution was to just buy a PC with a ton of RAM (128 GB). Haven't had any hard reboots due to OOM since then!
To be fair, my Windows system grinds to a halt (not really, but it becomes very noticeably less responsive in basically everything) when JetBrains is installing an update (mind you, I only have SSDs, with all the JetBrains stuff on an NVMe drive). I don't know what JetBrains is doing, but it consistently makes itself noticeable when it is updating.
Similarly, on a server where you might expect most of the physical memory to get used, it ends up being very important for stability. Think of VM or container hosts in particular.
Either you're never going to exhaust your system RAM, in which case it doesn't matter; you exhaust it slightly during some peak load and swap a bit, but at least nothing goes down; or you exhaust it entirely and things start getting OOM-killed, which feels bad to me.
Am I out of touch? Surely it's the children who are wrong.
There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.
For instance, if I have 8 GB of RAM, people recommend adding 8 GB of swap. But since I like having plenty of memory, I install 16 GB of RAM instead—and yet, people still tell me to use swap. Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.
Then, if I upgrade to 24 GB of RAM, the advice doesn’t change—they still insist on enabling swap. I could install an absurd amount of RAM, and people would still tell me to set up swap space.
It seems that for some, using swap has become dogma. I just don’t see the reasoning. Memory is limited either way; whether it’s RAM or RAM + swap, the total available space is what really matters. So why insist on swap for its own sake?
> Memory is limited either way; whether it’s RAM or RAM + swap
For two reasons: usage spikes and actually having more usable memory. There are lots of unused pages on a typical system; you get extra usable RAM for the price of cheap storage, so why wouldn't you?
The proper rule of thumb is to make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens.
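A rough way to apply that rule is to look at how much anonymous memory has gone cold once the workload is warmed up; the field names below are the standard /proc/meminfo ones:

    # anonymous pages the kernel considers inactive, i.e. the main candidates for swap
    grep -E 'Inactive\(anon\)|AnonPages|SwapTotal|SwapFree' /proc/meminfo

    # how much swap is actually in use right now
    free -h

If Inactive(anon) stays comfortably below your swap size at steady state, the swap is big enough for this rule; much more headroom than that mostly just delays the OOM kill when something leaks fast.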
Better to over-provision right away and live with the feeling that you're wasting space.
That rule came about when RAM was measured in a couple of MB rather than GB, and it hasn't made sense for a long time in most circumstances (if you are paging out a few GB of stuff on spinning drives, your system is likely to be stalling so hard from disk thrashing that you hit the power switch, and on SSDs you are not-so-slowly killing them with the excess writes).
That doesn't mean it isn't still a good idea to have a little allocated just in case. And as RAM prices soar while IO throughput & latency keep improving, we may see larger swap/RAM ratios being useful again, as RAM sizes are constrained but working sets aren't getting any smaller.
In a theoretical ideal computer, of which our actual designs are leaky-abstraction-laden implementations, things are the other way around: all online storage is your active memory and RAM is just the first level of cache. That ideal hasn't historically been what we ended up with, because the speed & latency disparities between other online storage and RAM have been so large (several orders of magnitude), fast RAM has been volatile, and hardware & software designs are not stable & correct enough, so regular complete state resets are necessary.
> Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.
Because your need for fast immediate storage has increased, so 8-quick-8-slow is no longer sufficient. You are right that this doesn't mean 16-quick-16-slow is sensible, and 128-quick-128-slow would be ridiculous. But no swap at all doesn't make sense either: on your machine imbued with silly amounts of RAM, are you really going to miss a few GB of space allocated just in case? When it could be the difference between slower operation for a short while and some thing(s) getting OOM-killed?
This has far worse degradation behavior than normal swapping of regular data pages. Swapping at least gives you the breathing room to keep scheduling processes under memory pressure, including whichever OOM killer you favor.
The “paging space needs to be X*RAM” and “paging space needs to be RAM+Y” rules predate hibernate being a common thing (or even a thing at all); hibernate is an extra use for that paging space, not the reason it exists in the first place. Some OSs allocate hibernate space separately from paging/swap space.
Also, as has been pointed out by another commenter, 8GB of swap for a system with 8GB of physical memory is overkill.
Genuinely curious where this idea has come from. Is it something being taught currently?
Note that simply buying more RAM than you expect to use is not going to help. Going back to my post from earlier, I had a laptop with 8GB of RAM at a time when I would usually only need about 2-4GB even for relatively heavy usage. However, every once in a while I would run something that would spike memory usage and make the system unresponsive. While I have much more than 8GB nowadays, I'm not convinced it's enough to have completely outrun the risk of this sort of behaviour recurring.
tl;dr: give it 4-8 GB and forget about it.
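If you take that advice, the usual way to add a fixed-size swap file looks roughly like this (the 8 GB size and the /swapfile path are just the common example; on copy-on-write filesystems such as btrfs, preallocating with dd instead of fallocate is the conservative choice):

    # create and protect the file
    sudo fallocate -l 8G /swapfile
    sudo chmod 600 /swapfile

    # format and enable it
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # make it persistent across reboots
    echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab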
"Make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens" is not.