> +	dma_len = min_t(u32, length, NVME_CTRL_PAGE_SIZE - (dma_addr & (NVME_CTRL_PAGE_SIZE - 1)));

An overly long line slipped in here during one of the rebases.

> +	/*
> +	 * We are in this mode as IOVA path wasn't taken and DMA length
> +	 * is morethan two sectors. In such case, mapping was perfoormed
> +	 * per-NVME_CTRL_PAGE_SIZE, so unmap accordingly.
> +	 */

Where does this comment come from?  Lots of spelling errors, and I also
don't understand what it is talking about, as sectors are entirely
irrelevant here.

> +	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_state, iod->total_len)) {
> +		if (iod->cmd.common.flags & NVME_CMD_SGL_METABUF)
> +			nvme_free_sgls(dev, req);

With the addition of metadata SGL support this also needs to check
NVME_CMD_SGL_METASEG.

The commit message should also really mention that someone significantly
altered the patch for merging with latest upstream, as I, as the nominal
author, can't recognize some of that code.

> +	unsigned int entries = req->nr_integrity_segments;
> 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> 
> +	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_meta_state,
> +			iod->total_meta_len)) {
> +		if (entries == 1) {
> +			dma_unmap_page(dev->dev, iod->meta_dma,
> +				rq_integrity_vec(req).bv_len,
> +				rq_dma_dir(req));
> +			return;
> 		}
> 	}
> 
> +	dma_pool_free(dev->prp_small_pool, iod->meta_list, iod->meta_dma);

This now doesn't unmap for entries > 1 in the non-IOVA case.
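
For the overly long dma_len line at the top, a purely cosmetic sketch of
how it could be wrapped (untested, just to show the intent):

	dma_len = min_t(u32, length,
			NVME_CTRL_PAGE_SIZE -
			(dma_addr & (NVME_CTRL_PAGE_SIZE - 1)));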
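
On the metadata SGL point, a minimal sketch of the check I'd expect,
assuming the existing NVME_CMD_SGL_METABUF/NVME_CMD_SGL_METASEG flag
definitions; whether this ends up as one combined test or a separate
branch depends on how nvme_free_sgls() handles the metadata descriptor:

	/* free SGL state if either the data or metadata SGL flag is set */
	if (iod->cmd.common.flags &
	    (NVME_CMD_SGL_METABUF | NVME_CMD_SGL_METASEG))
		nvme_free_sgls(dev, req);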
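
And for the last point, the non-IOVA metadata path presumably has to walk
all the integrity segments instead of only handling entries == 1; a rough
sketch only, where iod->meta_dmas[] and iod->meta_lens[] are hypothetical
stand-ins, as I haven't checked what the reworked iod actually stores per
segment:

	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_meta_state,
			iod->total_meta_len)) {
		if (entries == 1) {
			dma_unmap_page(dev->dev, iod->meta_dma,
				rq_integrity_vec(req).bv_len,
				rq_dma_dir(req));
			return;
		}

		/* entries > 1: unmap every individually mapped segment */
		for (unsigned int i = 0; i < entries; i++)
			dma_unmap_page(dev->dev, iod->meta_dmas[i],
				iod->meta_lens[i], rq_dma_dir(req));
	}

	dma_pool_free(dev->prp_small_pool, iod->meta_list, iod->meta_dma);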