A VM with limited memory fell over today, so I had to learn a bit about Linux memory allocation. Notes for future reference:
The Linux kernel generally responds to memory requests positively. By default it uses a heuristic: blatantly unserviceable requests are denied, but a degree of overcommitment is allowed on the grounds that most processes ask for more than they need most of the time. This usually works reasonably well, but if you're overallocating there's an inevitable risk of an out-of-memory (OOM) error. When that happens, the oom-killer is invoked and will gleefully traipse through your system slaughtering processes according to its own arcane whims*. This can properly mess up your system, so once it's been invoked you probably want to consider a full reboot in case it's knocked out any fundamental processes (hopefully once you've investigated and fixed whatever is causing you to run out of memory).
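If you suspect the oom-killer has been at work, the kernel log records its victims. A quick sketch for digging them out (assuming you have permission to read `dmesg`, and `journalctl` on systemd machines):

```shell
# Look for oom-killer activity in the kernel ring buffer
dmesg -T 2>/dev/null | grep -Ei 'out of memory|oom-killer|killed process' || true

# Or, on systemd machines, search the kernel messages in the journal
journalctl -k 2>/dev/null | grep -i 'oom' || true
```

The `|| true` is just so the pipeline doesn't report failure when there are (happily) no matches.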
In some situations it might be better to be stricter about over-commitment of memory in the first place. The sysctl parameter vm.overcommit_memory controls how the kernel responds to requests for memory and takes one of three values:
0: The default. Use the heuristic thing. Allocate memory unless a process is obviously taking the piss.
1: Just hand out memory, don’t worry about overcommitment (so big risk of oom, but potentially better performance for memory intensive tasks)
2: Strict accounting. Refuse an allocation if it would push total committed memory beyond swap + vm.overcommit_ratio% of RAM
The default vm.overcommit_ratio is 50, i.e. 50% of physical RAM counts toward the commit limit.
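The current policy, ratio, and the resulting commit limit are all visible from userspace; a minimal sketch (the `sysctl -w` line assumes root, so it's left commented out):

```shell
# Current policy (0, 1 or 2) and ratio (default 50)
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio

# Under mode 2 the kernel enforces CommitLimit; /proc/meminfo reports it
# alongside Committed_AS, the amount currently committed
grep -E '^(MemTotal|SwapTotal|CommitLimit|Committed_AS)' /proc/meminfo

# To switch to strict accounting temporarily (needs root):
# sysctl -w vm.overcommit_memory=2
# To persist across reboots, set vm.overcommit_memory=2 in /etc/sysctl.conf
# and run sysctl -p
```

With the default ratio of 50, CommitLimit works out to SwapTotal + MemTotal/2.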
* It scores each process by "badness" (roughly, how much memory killing it would reclaim) and kills the highest scorer on the theory that this is least damaging overall, but it can still take out vital stuff.