Hi Babu,

There seem to be many places referring to user space assigning "counter
IDs". As I understand the interface, the user has no control over the
actual ID of the counter being assigned. Please correct me if I am wrong.

Considering this, how about:

	fs/resctrl: Add resctrl file to display number of assignable counters

If you agree, please check the whole series as this seems to be an often
copy&pasted term.

On 6/13/25 2:04 PM, Babu Moger wrote:
> The "mbm_event" mode allows users to assign a hardware counter ID to an

"a hardware counter ID" -> "a hardware counter"?

> RMID, event pair and monitor bandwidth usage as long as it is assigned.
> The hardware continues to track the assigned counter until it is
> explicitly unassigned by the user.
>
> Create 'num_mbm_cntrs' resctrl file that displays the number of counter
> IDs supported in each domain. 'num_mbm_cntrs' is only visible to user

"number of counter IDs" -> "number of counters"?

> space when the system supports "mbm_event" mode.
>
> Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> ---

...

> ---
>  Documentation/filesystems/resctrl.rst | 11 ++++++++++
>  fs/resctrl/monitor.c                  |  4 ++++
>  fs/resctrl/rdtgroup.c                 | 30 +++++++++++++++++++++++++++
>  3 files changed, 45 insertions(+)
>
> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
> index 4e76e4ac5d3a..801914de0c81 100644
> --- a/Documentation/filesystems/resctrl.rst
> +++ b/Documentation/filesystems/resctrl.rst
> @@ -288,6 +288,17 @@ with the following files:
>  result in misleading values or display "Unavailable" if no counter is assigned
>  to the event.
>
> +"num_mbm_cntrs":
> +	The maximum number of counter IDs (total of available and assigned counters)

"number of counter IDs" -> "number of counters"

> +	in each domain when the system supports mbm_event mode.
> +
> +	For example, on a system with maximum of 32 memory bandwidth monitoring
> +	counters in each of its L3 domains:
> +	::
> +
> +	  # cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs
> +	  0=32;1=32
> +
>  "max_threshold_occupancy":
>  	Read/write file provides the largest value (in
>  	bytes) at which a previously used LLC_occupancy
> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index dcc6c00eb362..92a87aa97b0f 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c
> @@ -924,6 +924,10 @@ int resctrl_mon_resource_init(void)
>  	else if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
>  		mba_mbps_default_event = QOS_L3_MBM_TOTAL_EVENT_ID;
>
> +	if (r->mon.mbm_cntr_assignable)
> +		resctrl_file_fflags_init("num_mbm_cntrs",
> +					 RFTYPE_MON_INFO | RFTYPE_RES_CACHE);
> +
>  	return 0;
>  }
>
> diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
> index ba7a9a68c5a6..967e4df62a19 100644
> --- a/fs/resctrl/rdtgroup.c
> +++ b/fs/resctrl/rdtgroup.c
> @@ -1829,6 +1829,30 @@ static int resctrl_mbm_assign_mode_show(struct kernfs_open_file *of,
>  	return 0;
>  }
>
> +static int resctrl_num_mbm_cntrs_show(struct kernfs_open_file *of,
> +				      struct seq_file *s, void *v)
> +{
> +	struct rdt_resource *r = rdt_kn_parent_priv(of->kn);
> +	struct rdt_mon_domain *dom;
> +	bool sep = false;
> +
> +	cpus_read_lock();
> +	mutex_lock(&rdtgroup_mutex);
> +
> +	list_for_each_entry(dom, &r->mon_domains, hdr.list) {
> +		if (sep)
> +			seq_putc(s, ';');
> +
> +		seq_printf(s, "%d=%d", dom->hdr.id, r->mon.num_mbm_cntrs);
> +		sep = true;
> +	}
> +	seq_putc(s, '\n');
> +
> +	mutex_unlock(&rdtgroup_mutex);
> +	cpus_read_unlock();
> +	return 0;
> +}
> +
>  /* rdtgroup information files for one cache resource. */
>  static struct rftype res_common_files[] = {
>  	{
> @@ -1866,6 +1890,12 @@ static struct rftype res_common_files[] = {
>  		.seq_show = rdt_default_ctrl_show,
>  		.fflags = RFTYPE_CTRL_INFO | RFTYPE_RES_CACHE,
>  	},
> +	{
> +		.name = "num_mbm_cntrs",
> +		.mode = 0444,
> +		.kf_ops = &rdtgroup_kf_single_ops,
> +		.seq_show = resctrl_num_mbm_cntrs_show,
> +	},
>  	{
>  		.name = "min_cbm_bits",
>  		.mode = 0444,

Patch looks good.

Reinette