Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> writes:

> On 2025/6/9 15:35, Michal Hocko wrote:
>> On Mon 09-06-25 10:57:41, Ritesh Harjani wrote:
>>> Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> writes:
>>>
>>>> On some large machines with a high number of CPUs running a 64K pagesize
>>>> kernel, we found that the 'RES' field displayed by the top command is
>>>> always 0 for some processes, which causes a lot of confusion for users.
>>>>
>>>>     PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
>>>>  875525 root      20   0   12480      0      0 R   0.3   0.0   0:00.08 top
>>>>       1 root      20   0  172800      0      0 S   0.0   0.0   0:04.52 systemd
>>>>
>>>> The main reason is that the batch size of the percpu counter is quite large
>>>> on these machines, caching a significant percpu value, since mm's rss stats
>>>> were converted to percpu_counter by commit f1a7941243c1 ("mm: convert mm's
>>>> rss stats into percpu_counter"). Intuitively, the batch number should be
>>>> optimized, but on some paths performance may take precedence over
>>>> statistical accuracy. Therefore, introduce a new interface that sums up the
>>>> percpu counts and displays the accurate value to users, which removes the
>>>> confusion. In addition, this change is not expected to be on a
>>>> performance-critical path, so the modification should be acceptable.
>>>>
>>>> Furthermore, 'mm->rss_stat' is updated via add_mm_counter() and
>>>> dec/inc_mm_counter(), which are all wrappers around
>>>> percpu_counter_add_batch(). In percpu_counter_add_batch(), percpu batch
>>>> caching is used to avoid 'fbc->lock' contention. This patch changes
>>>> task_mem() and task_statm() to get the accurate mm counters under the
>>>> 'fbc->lock', but this should not exacerbate kernel 'mm->rss_stat' lock
>>>> contention thanks to the percpu batch caching of the mm counters. The
>>>> following test also confirms the theoretical analysis.
>>>>
>>>> I ran stress-ng stressing anonymous page faults in 32 threads on my 32-core
>>>> machine, while simultaneously running a script that starts 32 threads to
>>>> busy-loop pread each stress-ng thread's /proc/pid/status interface. From the
>>>> following data, I did not observe any obvious impact of this patch on the
>>>> stress-ng tests.
>>>>
>>>> w/o patch:
>>>> stress-ng: info: [6848] 4,399,219,085,152 CPU Cycles          67.327 B/sec
>>>> stress-ng: info: [6848] 1,616,524,844,832 Instructions        24.740 B/sec (0.367 instr. per cycle)
>>>> stress-ng: info: [6848]        39,529,792 Page Faults Total    0.605 M/sec
>>>> stress-ng: info: [6848]        39,529,792 Page Faults Minor    0.605 M/sec
>>>>
>>>> w/ patch:
>>>> stress-ng: info: [2485] 4,462,440,381,856 CPU Cycles          68.382 B/sec
>>>> stress-ng: info: [2485] 1,615,101,503,296 Instructions        24.750 B/sec (0.362 instr. per cycle)
>>>> stress-ng: info: [2485]        39,439,232 Page Faults Total    0.604 M/sec
>>>> stress-ng: info: [2485]        39,439,232 Page Faults Minor    0.604 M/sec
>>>>
>>>> Tested-by: Donet Tom <donettom@xxxxxxxxxxxxx>
>>>> Reviewed-by: Aboorva Devarajan <aboorvad@xxxxxxxxxxxxx>
>>>> Tested-by: Aboorva Devarajan <aboorvad@xxxxxxxxxxxxx>
>>>> Acked-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
>>>> Acked-by: SeongJae Park <sj@xxxxxxxxxx>
>>>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>>>> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>>>> ---
>>>> Changes from v1:
>>>>  - Update the commit message to add some measurements.
>>>>  - Add acked tag from Michal. Thanks.
>>>>  - Drop the Fixes tag.
>>>
>>> Any reason why we dropped the Fixes tag? I see there was a series of
>>> discussions on v1 and it was concluded that the fix was correct, so why
>>> drop the Fixes tag?
>>
>> This seems more like an improvement than a bug fix.
>
> Yes. I don't have a strong opinion on this, but we (Alibaba) will backport
> it manually, because some user-space monitoring tools depend on these
> statistics.

That sounds like a regression then, doesn't it?

-ritesh
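
[ Editorial note: for readers not following the patch itself, below is a
  minimal sketch of the kind of accurate read the quoted commit message
  describes. The helper name get_mm_counter_sum() is a placeholder of mine,
  not necessarily what the posted patch uses; it only assumes the existing
  percpu_counter_sum_positive() API and the post-f1a7941243c1 layout of
  mm->rss_stat. ]

	#include <linux/mm_types.h>        /* struct mm_struct, NR_MM_COUNTERS */
	#include <linux/percpu_counter.h>  /* percpu_counter_sum_positive() */

	/*
	 * Illustrative only: read an mm counter by summing all percpu deltas
	 * under fbc->lock (done inside __percpu_counter_sum()) instead of the
	 * batched approximation returned by percpu_counter_read_positive().
	 * Intended for slow paths such as task_mem()/task_statm().
	 */
	static inline unsigned long get_mm_counter_sum(struct mm_struct *mm,
						       int member)
	{
		return (unsigned long)percpu_counter_sum_positive(&mm->rss_stat[member]);
	}

The point of such an interface is that the accurate sum is confined to the
infrequent /proc readers, while the fast-path batching in
percpu_counter_add_batch() on the page-fault side stays untouched.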