Hi Babu,

On 8/5/25 4:30 PM, Babu Moger wrote:
> "io_alloc" feature enables direct insertion of data from I/O devices into
> the cache. By directly caching data from I/O devices rather than first
> storing the I/O data in DRAM, it reduces the demands on DRAM bandwidth and
> reduces latency to the processor consuming the I/O data.
> 
> When CDP is enabled, io_alloc routes traffic using the highest CLOSID
> associated with the L3CODE resource. To ensure consistent cache allocation
> behavior, the L3CODE and L3DATA resources must remain synchronized.
> rdtgroup_init_cat() function takes both L3CODE and L3DATA into account when
> initializing CBMs for new groups. The io_alloc feature must maintain the
> same behavior, ensuring that the Cache Bit Masks (CBMs) for both L3CODE and
> L3DATA are updated together.

Please rework this copy&pasted text also and make it specific to what this
patch actually does.

> Enable users to modify io_alloc CBMs (Capacity Bit Masks) via the
> io_alloc_cbm resctrl file when io_alloc is enabled.

Here the changelog can provide an overview of what is done by this patch
when a user provides a new CBM. This can include that a CBM written to a
CDP enabled resource will also be copied to the CDP peer.

> 
> Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> ---
> ---
>  Documentation/filesystems/resctrl.rst |  8 +++
>  fs/resctrl/ctrlmondata.c              | 97 +++++++++++++++++++++++++++
>  fs/resctrl/internal.h                 |  3 +
>  fs/resctrl/rdtgroup.c                 |  3 +-
>  4 files changed, 110 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
> index 3002f7fdb2fe..d955e8525af0 100644
> --- a/Documentation/filesystems/resctrl.rst
> +++ b/Documentation/filesystems/resctrl.rst
> @@ -187,6 +187,14 @@ related to allocation:
>  	# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>  	0=ffff;1=ffff
> 
> +	CBMs can be configured by writing to the interface.
> +
> +	Example::
> +
> +	# echo 1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm

It may be useful to demonstrate syntax when more than one CBM is modified.

> +	# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
> +	0=ffff;1=00ff
> +
>  When CDP is enabled "io_alloc_cbm" associated with the DATA and CODE
>  resources may reflect the same values. For example, values read from and
>  written to /sys/fs/resctrl/info/L3DATA/io_alloc_cbm may be reflected by
> diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
> index 641094aac322..1f69117f96f8 100644
> --- a/fs/resctrl/ctrlmondata.c
> +++ b/fs/resctrl/ctrlmondata.c
> @@ -858,3 +858,100 @@ int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq,
>  	cpus_read_unlock();
>  	return ret;
>  }
> +
> +static int resctrl_io_alloc_parse_line(char *line, struct rdt_resource *r,
> +				       struct resctrl_schema *s, u32 closid)
> +{
> +	enum resctrl_conf_type peer_type;
> +	struct resctrl_schema *peer_s;
> +	struct rdt_parse_data data;
> +	struct rdt_ctrl_domain *d;
> +	char *dom = NULL, *id;
> +	unsigned long dom_id;
> +
> +next:
> +	if (!line || line[0] == '\0')
> +		return 0;
> +
> +	dom = strsep(&line, ";");
> +	id = strsep(&dom, "=");
> +	if (!dom || kstrtoul(id, 10, &dom_id)) {
> +		rdt_last_cmd_puts("Missing '=' or non-numeric domain\n");
> +		return -EINVAL;
> +	}
> +
> +	dom = strim(dom);
> +	list_for_each_entry(d, &r->ctrl_domains, hdr.list) {
> +		if (d->hdr.id == dom_id) {
> +			data.buf = dom;
> +			data.mode = RDT_MODE_SHAREABLE;
> +			data.closid = closid;
> +			if (parse_cbm(&data, s, d))
> +				return -EINVAL;
> +			/*
> +			 * When CDP is enabled, update the schema for both CDP_DATA
> +			 * and CDP_CODE.
> +			 */
> +			if (resctrl_arch_get_cdp_enabled(r->rid)) {
> +				peer_type = resctrl_peer_type(s->conf_type);
> +				peer_s = resctrl_get_schema(peer_type);
> +				if (parse_cbm(&data, peer_s, d))
> +					return -EINVAL;

The CBM is still parsed twice.
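To illustrate the "parse once, then copy" idea, a minimal userspace sketch
(not the kernel code: `struct staged_config`, `parse_cbm_once()` and
`copy_to_cdp_peer()` are simplified stand-ins for
`struct resctrl_staged_config`, `parse_cbm()` and the suggested peer copy):

```c
/*
 * Userspace sketch of "parse once, copy to the CDP peer". The type and
 * helper names below are simplified stand-ins, not the kernel's
 * struct resctrl_staged_config / parse_cbm().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

struct staged_config {
	uint32_t new_ctrl;	/* the parsed CBM */
	bool have_new_ctrl;
};

/* Stand-in for parse_cbm(): parse and validate the hex mask string once. */
static int parse_cbm_once(const char *buf, struct staged_config *cfg)
{
	char *end;
	unsigned long val = strtoul(buf, &end, 16);

	if (end == buf || *end != '\0')
		return -1;
	cfg->new_ctrl = (uint32_t)val;
	cfg->have_new_ctrl = true;
	return 0;
}

/*
 * The suggested peer update: copy the already-validated CBM instead of
 * walking the user's string a second time.
 */
static void copy_to_cdp_peer(const struct staged_config *src,
			     struct staged_config *peer)
{
	peer->new_ctrl = src->new_ctrl;
	peer->have_new_ctrl = true;
}
```

With this shape, resctrl_io_alloc_parse_line() would presumably call
parse_cbm() once for the schema being written and then stage the same
new_ctrl into the peer schema's staged config for the domain.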
As I mentioned in v7, the parsing only needs to be done once and the
resulting CBM can then be copied to the CDP peer.
https://lore.kernel.org/lkml/82045638-2b26-4682-9374-1c3e400a580a@xxxxxxxxx/

> +			}
> +			goto next;
> +		}
> +	}
> +
> +	return -EINVAL;
> +}
> +
> +ssize_t resctrl_io_alloc_cbm_write(struct kernfs_open_file *of, char *buf,
> +				   size_t nbytes, loff_t off)
> +{
> +	struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
> +	struct rdt_resource *r = s->res;
> +	u32 io_alloc_closid;
> +	int ret = 0;
> +
> +	/* Valid input requires a trailing newline */
> +	if (nbytes == 0 || buf[nbytes - 1] != '\n')
> +		return -EINVAL;
> +
> +	buf[nbytes - 1] = '\0';
> +
> +	cpus_read_lock();
> +	mutex_lock(&rdtgroup_mutex);
> +
> +	rdt_last_cmd_clear();
> +
> +	if (!r->cache.io_alloc_capable) {
> +		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
> +		ret = -ENODEV;
> +		goto out_unlock;
> +	}
> +
> +	rdt_last_cmd_clear();

Unnecessary rdt_last_cmd_clear(); the buffer was already cleared above.

> +	rdt_staged_configs_clear();

Placement of this can be improved by putting it closer to the code that
populates the staged configs, that is, just before
resctrl_io_alloc_parse_line().

The flow is also not symmetrical in that the out_unlock exit code always
clears the staged configs whether they were used or not. I think it will be
easier to understand if out_unlock *only* unlocks the locks and there is a
new goto label, for example "out_clear_configs", that calls
rdt_staged_configs_clear() and is used after resctrl_io_alloc_parse_line().
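The suggested layout, sketched as a stand-alone toy model (the lock and
clear helpers below are stubs, not the kernel functions; write_flow() is a
hypothetical stand-in for resctrl_io_alloc_cbm_write()):

```c
/*
 * Toy model of the suggested error-path layout: "out_unlock" only drops
 * the locks, while a separate "out_clear_configs" label clears the
 * staged configs only on paths that may have staged something.
 */
#include <stdbool.h>

static bool locked;
static bool configs_staged;

static void lock_all(void)             { locked = true; }
static void unlock_all(void)           { locked = false; }
static void staged_configs_clear(void) { configs_staged = false; }

/* Stand-in for resctrl_io_alloc_cbm_write(): returns 0 on success. */
int write_flow(bool feature_ok, bool parse_ok)
{
	int ret = 0;

	lock_all();

	if (!feature_ok) {
		ret = -1;
		goto out_unlock;	/* nothing staged yet */
	}

	staged_configs_clear();		/* just before populating the configs */
	configs_staged = true;		/* models resctrl_io_alloc_parse_line() */
	if (!parse_ok) {
		ret = -1;
		goto out_clear_configs;
	}

	/* models resctrl_arch_update_domains() */

out_clear_configs:
	staged_configs_clear();
out_unlock:
	unlock_all();
	return ret;
}
```

Either way the configs end up cleared on exit, but the labels now say which
cleanup each error path actually needs.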
> +
> +	if (!resctrl_arch_get_io_alloc_enabled(r)) {
> +		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
> +		ret = -ENODEV;
> +		goto out_unlock;
> +	}
> +
> +	io_alloc_closid = resctrl_io_alloc_closid(r);
> +
> +	ret = resctrl_io_alloc_parse_line(buf, r, s, io_alloc_closid);
> +
> +	if (ret)
> +		goto out_unlock;
> +
> +	ret = resctrl_arch_update_domains(r, io_alloc_closid);
> +
> +out_unlock:
> +	rdt_staged_configs_clear();
> +	mutex_unlock(&rdtgroup_mutex);
> +	cpus_read_unlock();
> +
> +	return ret ?: nbytes;
> +}

Reinette