Hi Dave,

On 2/19/25 07:53, Dave Martin wrote:
> On Wed, Jan 22, 2025 at 02:20:30PM -0600, Babu Moger wrote:
>> Provide the interface to list the assignment states of all the resctrl
>> groups in mbm_cntr_assign mode.
>
> [...]
>
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index 5d305d0ac053..6e29827239e0 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -975,6 +975,81 @@ static ssize_t resctrl_mbm_assign_mode_write(struct kernfs_open_file *of,
>
> [...]
>
>> +static int resctrl_mbm_assign_control_show(struct kernfs_open_file *of,
>> +					   struct seq_file *s, void *v)
>> +{
>
> [...]
>
>> +	list_for_each_entry(rdtg, &rdt_all_groups, rdtgroup_list) {
>> +		seq_printf(s, "%s//", rdtg->kn->name);
>> +
>> +		sep = false;
>> +		list_for_each_entry(dom, &r->mon_domains, hdr.list) {
>> +			if (sep)
>> +				seq_puts(s, ";");
>> +
>> +			seq_printf(s, "%d=%s", dom->hdr.id,
>> +				   rdtgroup_mon_state_to_str(r, dom, rdtg, str));
>> +
>> +			sep = true;
>> +		}
>> +		seq_putc(s, '\n');
>> +
>> +		list_for_each_entry(crg, &rdtg->mon.crdtgrp_list, mon.crdtgrp_list) {
>> +			seq_printf(s, "%s/%s/", rdtg->kn->name, crg->kn->name);
>> +
>> +			sep = false;
>> +			list_for_each_entry(dom, &r->mon_domains, hdr.list) {
>> +				if (sep)
>> +					seq_puts(s, ";");
>> +				seq_printf(s, "%d=%s", dom->hdr.id,
>> +					   rdtgroup_mon_state_to_str(r, dom, crg, str));
>
> Unlike the other resctrl files, it looks like the total size of this
> data will scale up with the number of existing monitoring groups
> and the lengths of the group names (in addition to the number of
> monitoring domains).
>
> So, this can easily be more than a page, overflowing internal limits
> in the seq_file and kernfs code.
>
> Do we need to track some state between read() calls?  This can be done
> by overriding the kernfs .open() and .release() methods and hanging
> some state data (or an rdtgroup_file pointer) on of->priv.
>
> Also, if we allow the data to be read out in chunks, then we would
> either have to snapshot all the data in one go and stash the unread
> tail in the kernel, or we would need to move over to RCU-based
> enumeration or similar -- otherwise releasing rdtgroup_mutex in the
> middle of the enumeration in order to return data to userspace is going
> to be a problem...

Good catch. I see a similar buffer overflow being handled by calling
seq_buf_clear() (look at process_durations() or show_user_instructions()).

How about handling this by calling rdt_last_cmd_clear() before printing
each group?

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 484d6009869f..1828f59eb723 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1026,6 +1026,7 @@ static int resctrl_mbm_assign_control_show(struct kernfs_open_file *of,
 	}
 
 	list_for_each_entry(rdtg, &rdt_all_groups, rdtgroup_list) {
+		rdt_last_cmd_clear();
 		seq_printf(s, "%s//", rdtg->kn->name);
 
 		sep = false;
@@ -1041,6 +1042,7 @@ static int resctrl_mbm_assign_control_show(struct kernfs_open_file *of,
 		seq_putc(s, '\n');
 
 		list_for_each_entry(crg, &rdtg->mon.crdtgrp_list, mon.crdtgrp_list) {
+			rdt_last_cmd_clear();
 			seq_printf(s, "%s/%s/", rdtg->kn->name, crg->kn->name);
 
 			sep = false;

>
>> +				sep = true;
>> +			}
>> +			seq_putc(s, '\n');
>> +		}
>> +	}
>> +
>> +	mutex_unlock(&rdtgroup_mutex);
>> +	cpus_read_unlock();
>> +	return 0;
>> +}
>
> [...]
>
> Cheers
> ---Dave
>

--
Thanks
Babu Moger