> > Users can obtain the KSM information of a cgroup just by:
> >
> > # cat /sys/fs/cgroup/memory.ksm_stat
> > ksm_rmap_items 76800
> > ksm_zero_pages 0
> > ksm_merging_pages 76800
> > ksm_process_profit 309657600
> >
> > Current implementation supports both cgroup v2 and cgroup v1.
> >
>
> Before adding these stats to memcg, add global stats for them in
> enum node_stat_item and then you can expose them in memcg through
> memory.stat instead of a new interface.

Dear shakeel.butt,

If we added these KSM-related items to enum node_stat_item, the extra counter-updating calls (such as __lruvec_stat_add_folio()) embedded in the KSM code paths would add CPU overhead to every normal KSM operation. Alternatively, we can simply traverse all processes of the memcg and sum their per-process KSM counters, which is what the current patch set does (a rough sketch follows at the end of this mail).

Including only a single "KSM merged pages" entry in memory.stat would be reasonable, I think, since it reflects the memcg's count of KSM-merged pages. However, adding the other three KSM-related metrics is less advisable, since they are strongly coupled with KSM internals and would primarily interest users monitoring KSM-specific behavior.

Last but not least, the rationale for adding a ksm_stat entry to memcg also lies in maintaining structural consistency with the existing /proc/<pid>/ksm_stat interface.
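
For reference, here is a rough, non-authoritative sketch of the "traverse and sum" approach mentioned above; it is not the exact patch code. It assumes the mm_struct fields ksm_rmap_items, ksm_merging_pages and ksm_zero_pages and the ksm_process_profit() helper from mm/ksm.c, whose exact names and types vary between kernel versions:

static int memcg_ksm_stat_show(struct seq_file *m, void *v)
{
	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
	unsigned long rmap_items = 0, merging_pages = 0, zero_pages = 0;
	long process_profit = 0;
	struct css_task_iter it;
	struct task_struct *task;

	/* Walk thread-group leaders only, so each mm is counted once. */
	css_task_iter_start(&memcg->css, CSS_TASK_ITER_PROCS, &it);
	while ((task = css_task_iter_next(&it))) {
		struct mm_struct *mm = get_task_mm(task);

		if (!mm)
			continue;
		rmap_items     += mm->ksm_rmap_items;
		merging_pages  += mm->ksm_merging_pages;
		/* may need atomic_long_read() on newer kernels */
		zero_pages     += mm->ksm_zero_pages;
		process_profit += ksm_process_profit(mm);
		mmput(mm);
	}
	css_task_iter_end(&it);

	seq_printf(m, "ksm_rmap_items %lu\n", rmap_items);
	seq_printf(m, "ksm_zero_pages %lu\n", zero_pages);
	seq_printf(m, "ksm_merging_pages %lu\n", merging_pages);
	seq_printf(m, "ksm_process_profit %ld\n", process_profit);
	return 0;
}

The point of the comparison is that this cost is paid only when memory.ksm_stat is read, rather than on every KSM merge/unmerge in the hot path.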