On 8/22/25 17:53, Moger, Babu wrote:
> Hi Reinette,
>
> On 8/7/2025 8:49 PM, Reinette Chatre wrote:
>> Hi Babu,
>>
>> On 8/5/25 4:30 PM, Babu Moger wrote:
>>> The "io_alloc" feature in resctrl enables direct insertion of data from
>>> I/O devices into the cache.
>>>
>>> On AMD systems, when io_alloc is enabled, the highest CLOSID is reserved
>>> exclusively for I/O allocation traffic and is no longer available for
>>> general CPU cache allocation. Users are encouraged to enable it only when
>>> running workloads that can benefit from this functionality.
>>>
>>> Since CLOSIDs are managed by resctrl fs, it is least invasive to make the
>>> "io_alloc is supported by the maximum supported CLOSID" decision part of
>>> the initial resctrl fs support for io_alloc. Take care not to expose this
>>> use of CLOSID for io_alloc to user space so that this is not required
>>> from other architectures that may support io_alloc differently in the
>>> future.
>>>
>>> Introduce a user interface to enable/disable the io_alloc feature.
>>
>> Please include a high-level overview of what this patch does to enable
>> and disable io_alloc. Doing so will help connect why the changelog
>> contains information about CLOSID management.
>
> Sure.
>
>>> diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
>>> index d495a5d5c9d5..bf982eab7b18 100644
>>> --- a/fs/resctrl/ctrlmondata.c
>>> +++ b/fs/resctrl/ctrlmondata.c
>>> @@ -685,3 +685,140 @@ int resctrl_io_alloc_show(struct kernfs_open_file *of, struct seq_file *seq, void *v)
>>>  	return 0;
>>>  }
>>> +
>>> +/*
>>> + * resctrl_io_alloc_closid_supported() - io_alloc feature utilizes the
>>> + * highest CLOSID value to direct I/O traffic. Ensure that io_alloc_closid
>>> + * is in the supported range.
>>> + */
>>> +static bool resctrl_io_alloc_closid_supported(u32 io_alloc_closid)
>>> +{
>>> +	return io_alloc_closid < closids_supported();
>>> +}
>>> +
>>> +static struct resctrl_schema *resctrl_get_schema(enum resctrl_conf_type type)
>>> +{
>>> +	struct resctrl_schema *schema;
>>> +
>>> +	list_for_each_entry(schema, &resctrl_schema_all, list) {
>>> +		if (schema->conf_type == type)
>>> +			return schema;
>>
>> This does not look right. More than one resource can have the same
>> configuration type, no?
>> Think about L2 and L3 having CDP enabled ...
>> Looks like this is missing a resource type as parameter and a check for
>> the resource ...
>> but is this function even necessary (more below)?
>
> May not be required. Comments below.
>
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +/*
>>> + * Initialize io_alloc CLOSID cache resource CBM with all usable (shared
>>> + * and unused) cache portions.
>>> + */
>>> +static int resctrl_io_alloc_init_cbm(struct resctrl_schema *s, u32 closid)
>>> +{
>>> +	struct rdt_resource *r = s->res;
>>
>> Needs reverse fir.
>
> Sure.
>
>>> +	enum resctrl_conf_type peer_type;
>>> +	struct resctrl_schema *peer_s;
>>> +	int ret;
>>> +
>>> +	rdt_staged_configs_clear();
>>> +
>>> +	ret = rdtgroup_init_cat(s, closid);
>>> +	if (ret < 0)
>>> +		goto out;
>>> +
>>> +	/* Initialize schema for both CDP_DATA and CDP_CODE when CDP is enabled */
>>> +	if (resctrl_arch_get_cdp_enabled(r->rid)) {
>>> +		peer_type = resctrl_peer_type(s->conf_type);
>>> +		peer_s = resctrl_get_schema(peer_type);
>>> +		if (peer_s) {
>>> +			ret = rdtgroup_init_cat(peer_s, closid);
>>
>> This is unexpected. In v7 I suggested that when parsing the CBM of one
>> of the CDP resources it is not necessary to do so again for the peer.
>> The CBM can be parsed *once* and the configuration just copied over. See:
>> https://lore.kernel.org/lkml/82045638-2b26-4682-9374-1c3e400a580a@xxxxxxxxx/
>
> Let me try to understand.
>
> So, rdtgroup_init_cat() sets up the staged_config for the specific CDP
> type for all the domains.
>
> We need to apply those staged_configs to its peer type on all the domains.
>
> Something like this?
>
> 	/* Initialize staged_config of the peer type when CDP is enabled */
> 	if (resctrl_arch_get_cdp_enabled(r->rid)) {
> 		list_for_each_entry(d, &s->res->ctrl_domains, hdr.list) {
> 			cfg = &d->staged_config[s->conf_type];
> 			cfg_peer = &d->staged_config[peer_type];
> 			cfg_peer->new_ctrl = cfg->new_ctrl;
> 			cfg_peer->have_new_ctrl = cfg->have_new_ctrl;
> 		}
> 	}

Replaced with the following snippet:

+	/* Initialize schema for both CDP_DATA and CDP_CODE when CDP is enabled */
+	if (resctrl_arch_get_cdp_enabled(r->rid)) {
+		peer_type = resctrl_peer_type(s->conf_type);
+		list_for_each_entry(d, &s->res->ctrl_domains, hdr.list)
+			memcpy(&d->staged_config[peer_type],
+			       &d->staged_config[s->conf_type],
+			       sizeof(*d->staged_config));
+	}

>> Generally when feedback is provided it is good to check all places in
>> the series where it is relevant. oh ... but looking ahead you ignored
>> the feedback in the patch it was given also :(
>
> My bad.
>
> I will address that.
>
> Thanks
> Babu

-- 
Thanks
Babu Moger