On 12/06/2025 07.02, Christoph Hellwig wrote:
> On Wed, Jun 11, 2025 at 02:15:10PM +0200, Daniel Gomez wrote:
>>>  #define NVME_MAX_SEGS \
>>> -	min(NVME_CTRL_PAGE_SIZE / sizeof(struct nvme_sgl_desc), \
>>> -	    (PAGE_SIZE / sizeof(struct scatterlist)))
>>> +	(NVME_CTRL_PAGE_SIZE / sizeof(struct nvme_sgl_desc))
>>
>> The 8 MiB max transfer size is only reachable if host segments are at
>> least 32k. But I think this limitation is only on the SGL side, right?
>
> Yes, PRPs don't really have the concept of segments to start with.
>
>> Adding support to multiple SGL segments should allow us to increase
>> this limit 256 -> 2048.
>>
>> Is this correct?
>
> Yes. Note that plenty of hardware doesn't really like chained SGLs too
> much and you might get performance degradation.

I see the driver assumes SGLs perform better than PRPs once an I/O is
larger than 32k (the default sgl_threshold). But what if SGL chaining
is needed, i.e. my host segments are between 4k and 16k? Would PRPs
perform better than chained SGLs?

Also, if host segments are between 4k and 16k, PRPs would be able to
support that, but this limit prevents the use case. I guess the
question is whether you see any blocker to enabling this path.
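
For reference, a minimal, self-contained sketch of the arithmetic
behind the 256 -> 2048 numbers above. The constants are assumptions
for illustration only: a 4 KiB NVME_CTRL_PAGE_SIZE, the 16-byte SGL
descriptor from the spec, and the 8 MiB max transfer size mentioned in
the thread.

#include <assert.h>

#define CTRL_PAGE_SIZE		4096u		/* assumed NVME_CTRL_PAGE_SIZE */
#define SGL_DESC_SIZE		16u		/* 16-byte SGL descriptor (spec) */
#define MAX_TRANSFER_BYTES	(8u << 20)	/* 8 MiB max transfer size */

int main(void)
{
	/* Descriptors that fit in one controller page: 4096 / 16 = 256 */
	unsigned int segs_single = CTRL_PAGE_SIZE / SGL_DESC_SIZE;

	/* Smallest host segment that still reaches 8 MiB with 256 entries */
	unsigned int min_seg_size = MAX_TRANSFER_BYTES / segs_single;

	/* Entries needed to reach 8 MiB with 4 KiB host segments */
	unsigned int segs_for_4k = MAX_TRANSFER_BYTES / 4096u;

	assert(segs_single == 256);		/* current NVME_MAX_SEGS */
	assert(min_seg_size == 32u * 1024);	/* hence the 32k requirement */
	assert(segs_for_4k == 2048);		/* needs chained SGLs */
	return 0;
}

In other words, a single SGL segment (one controller page of
descriptors) caps us at 256 entries, which only reaches 8 MiB when
each host segment is at least 32k; with 4k host segments it would take
2048 entries, hence the need for chained SGLs.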