top
last pid: 36860;  load averages: 0.44, 0.42, 0.46    up 3+07:50:54  14:46:34
53 processes: 1 running, 52 sleeping
CPU:  0.2% user,  0.0% nice, 13.1% system,  0.0% interrupt, 86.7% idle
Mem: 51M Active, 1041M Inact, 547M Wired, 331M Buf, 2314M Free

last pid: 70854;  load averages: 0.29, 0.46, 0.48    up 3+12:26:21  19:22:01
57 processes: 3 running, 54 sleeping
CPU: 13.4% user,  0.0% nice, 33.3% system,  0.0% interrupt, 53.3% idle
Mem: 65M Active, 1066M Inact, 558M Wired, 340M Buf, 2265M Free
Being curious about your issue... Can you post the output of:

vmstat -z | tr ':' ',' | sort -nk4 -t','
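That pipeline turns the trailing colon on each zone name into a comma, so sort can treat the line as comma-separated fields and order the UMA zones by the USED column (field 4); the zones with the most live allocations end up at the bottom. If you'd rather rank by approximate memory, here is a rough sketch that multiplies each zone's SIZE by its USED count. It assumes FreeBSD's usual vmstat -z column layout (ITEM, SIZE, LIMIT, USED, ...), so treat it as a starting point, not gospel:

vmstat -z | awk -F: 'NR > 1 {
    split($2, f, ",")                       # f[1]=SIZE, f[2]=LIMIT, f[3]=USED
    printf "%12d  %s\n", f[1] * f[3], $1    # approx bytes in use per zone
}' | sort -n | tail -20

The last 20 lines are then the zones holding the most memory, which is usually where a wired-memory leak shows up.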
We see that your wired memory is increasing slowly over time; does it stabilize at some point, or does it really end up consuming both free and inactive memory? Mine is stable around 800M. It varies between 8 and 12% of total memory (8 GB in my case, so 800M is around 10%). Do you know how big the numbers were right before an OOM (including free and inactive)?
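If you're not sure whether it stabilizes, one way to find out is to log the VM counters at an interval and watch the trend. A minimal sketch (these sysctl names are FreeBSD's vm.stats counters; they report page counts, so multiply by hw.pagesize, typically 4096, to get bytes, and the log path is just an example):

while :; do
    date '+%F %T'
    # wired / inactive / free, in pages
    sysctl vm.stats.vm.v_wire_count vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count
    sleep 300   # sample every 5 minutes
done >> /tmp/mem-watch.log

Let it run for a day or two across the period where you'd normally hit the OOM and the shape of the curve should be obvious.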
We also see your free memory being converted into inactive memory. That does not seem abnormal at first glance... inactive memory can be reused if need be.
You should not be running into an OOM with those numbers, unless something else is eating up the memory, or something allocates very rapidly right before the OOM, neither of which would show up in the snapshots here.
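One thing worth checking is whether the kernel actually logged the kill. On FreeBSD the OOM killer usually leaves a trace in the system log and message buffer; a quick check along these lines (the exact message wording can vary between versions, so the patterns are a best guess):

# look for OOM-killer traces
grep -iE 'out of swap|was killed' /var/log/messages
dmesg | grep -iE 'out of swap|was killed'

If a process was killed, the matching line also tells you which one, which often points straight at the culprit.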
Also, there is indeed a shift in memory from free to inactive that was less present in pre-22 versions, but I never ran into an OOM because of it. Then again, I have 8 GB of RAM, not 4 GB, so I'm likely less prone to OOM.

What does your Reporting/Health look like (if you have it)? Attached is mine (System/Memory from Reporting/Health) for the last 77 days (inverse turned on, resolution high); each peak is usually a reboot or an upgrade/reboot.