On 6/24/25 5:52 AM, Christoph Hellwig wrote:
virt_boundary_mask implies an unlimited max_segment_size. Setting both
can lead to data corruption because __blk_rq_map_sg() can split requests
so that the virt_boundary_mask is not respected if max_segment_size is
not UINT_MAX.
Signed-off-by: Christoph Hellwig <hch@xxxxxx>
Reviewed-by: Hannes Reinecke <hare@xxxxxxx>
---
drivers/infiniband/ulp/srp/ib_srp.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 1378651735f6..23ed2fc688f0 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -3705,9 +3705,10 @@ static ssize_t add_target_store(struct device *dev,
 	target_host->max_id = 1;
 	target_host->max_lun = -1LL;
 	target_host->max_cmd_len = sizeof ((struct srp_cmd *) (void *) 0L)->cdb;
-	target_host->max_segment_size = ib_dma_max_seg_size(ibdev);
-	if (!(ibdev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG))
+	if (ibdev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG)
+		target_host->max_segment_size = ib_dma_max_seg_size(ibdev);
+	else
 		target_host->virt_boundary_mask = ~srp_dev->mr_page_mask;
 
 	target = host_to_target(target_host);
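
For anyone curious why the two limits conflict: the block layer treats a
non-zero virt_boundary_mask as implying an unlimited segment size, since
segments may only be split at virt boundaries, never at an arbitrary byte
count. A minimal sketch of that constraint (paraphrased from memory of the
check in blk_validate_limits() in block/blk-settings.c, not a verbatim
quote):

	/*
	 * Sketch only: a queue that sets a virt_boundary_mask must leave
	 * max_segment_size unlimited; any other value is rejected.
	 */
	if (lim->virt_boundary_mask) {
		if (WARN_ON_ONCE(lim->max_segment_size &&
				 lim->max_segment_size != UINT_MAX))
			return -EINVAL;
		lim->max_segment_size = UINT_MAX;
	}

With this change srp sets only one of the two limits, which matches that
expectation.
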
Acked-by: Bart Van Assche <bvanassche@xxxxxxx>