Re: [PATCH] mm: fix the inaccurate memory statistics issue for users

On 2025/6/5 00:54, Shakeel Butt wrote:
> On Wed, Jun 04, 2025 at 10:16:18PM +0800, Baolin Wang wrote:


>> On 2025/6/4 21:46, Vlastimil Babka wrote:
>>> On 6/4/25 14:46, Baolin Wang wrote:
>>>>> Baolin, please run a stress-ng command that stresses minor anon page
>>>>> faults in multiple threads, and then run multiple bash scripts which cat
>>>>> /proc/pidof(stress-ng)/status. That should show how much the stress-ng
>>>>> process is impacted by the parallel status readers versus without them.

>>>> Sure. Thanks Shakeel. I ran stress-ng with the 'stress-ng --fault 32
>>>> --perf -t 1m' command, while simultaneously running the following
>>>> script to read /proc/pidof(stress-ng)/status for each thread.

>>> How many of those scripts?

>> One script, but it starts 32 readers, one for each stress-ng thread's
>> status interface.
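
As an aside, if one wanted truly per-thread readers, the per-task files
under /proc/<pid>/task/<tid>/status could be used instead; stress-ng
workers are typically separate processes, which is why the script quoted
below reads /proc/<pid>/status per worker. A hypothetical sketch (the
names and loop structure here are illustrative, not from this thread):

=========================
#!/bin/bash
# Hypothetical per-thread variant: walk the per-task entries of one
# process rather than pgrep'ing all worker PIDs.
PID=$(pgrep -o stress-ng)          # oldest stress-ng process

for TASK in /proc/"$PID"/task/*; do
    while true; do
        cat "$TASK"/status > /dev/null
    done &                          # one reader per task (thread)
done
wait
=========================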

>>>> From the following data, I did not observe any obvious impact of this
>>>> patch on the stress-ng tests when repeatedly reading
>>>> /proc/pidof(stress-ng)/status.

>>>> w/o patch:
>>>> stress-ng: info:  [6891]  3,993,235,331,584 CPU Cycles        59.767 B/sec
>>>> stress-ng: info:  [6891]  1,472,101,565,760 Instructions      22.033 B/sec (0.369 instr. per cycle)
>>>> stress-ng: info:  [6891]         36,287,456 Page Faults Total  0.543 M/sec
>>>> stress-ng: info:  [6891]         36,287,456 Page Faults Minor  0.543 M/sec
>>>>
>>>> w/ patch:
>>>> stress-ng: info:  [6872]  4,018,592,975,968 CPU Cycles        60.177 B/sec
>>>> stress-ng: info:  [6872]  1,484,856,150,976 Instructions      22.235 B/sec (0.369 instr. per cycle)
>>>> stress-ng: info:  [6872]         36,547,456 Page Faults Total  0.547 M/sec
>>>> stress-ng: info:  [6872]         36,547,456 Page Faults Minor  0.547 M/sec

>>>> =========================
>>>> #!/bin/bash
>>>>
>>>> # Get the PIDs of stress-ng processes
>>>> PIDS=$(pgrep stress-ng)
>>>>
>>>> # Loop through each PID and monitor /proc/[pid]/status
>>>> for PID in $PIDS; do
>>>>     while true; do
>>>>         cat /proc/$PID/status
>>>>         usleep 100000
>>>>     done &    # background one reader per PID so all run in parallel
>>>> done

>>> Hm, but this limits the reading to 10 per second? If we want to simulate
>>> an adversarial process, it should be without the sleeps, I think?

>> OK. I dropped the usleep, and I still cannot see any obvious impact.

>> w/o patch:
>> stress-ng: info:  [6848]  4,399,219,085,152 CPU Cycles        67.327 B/sec
>> stress-ng: info:  [6848]  1,616,524,844,832 Instructions      24.740 B/sec (0.367 instr. per cycle)
>> stress-ng: info:  [6848]         39,529,792 Page Faults Total  0.605 M/sec
>> stress-ng: info:  [6848]         39,529,792 Page Faults Minor  0.605 M/sec
>>
>> w/ patch:
>> stress-ng: info:  [2485]  4,462,440,381,856 CPU Cycles        68.382 B/sec
>> stress-ng: info:  [2485]  1,615,101,503,296 Instructions      24.750 B/sec (0.362 instr. per cycle)
>> stress-ng: info:  [2485]         39,439,232 Page Faults Total  0.604 M/sec
>> stress-ng: info:  [2485]         39,439,232 Page Faults Minor  0.604 M/sec

> Is the above with 32 non-sleeping parallel reader scripts?

Yes.
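
For reference, a minimal sketch of that non-sleeping variant, assuming
it is simply the quoted script with the usleep removed (only the loop
body changes; the redirect to /dev/null is an addition here to keep the
readers from being throttled by terminal output):

=========================
#!/bin/bash
# Adversarial variant: same structure as the quoted script, but each
# reader hammers /proc/<pid>/status as fast as it can.
PIDS=$(pgrep stress-ng)

for PID in $PIDS; do
    while true; do
        cat /proc/$PID/status > /dev/null
    done &          # one busy reader per stress-ng worker
done
wait
=========================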



