[PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed()

Remove the assumption that bv_len is a multiple of 512 bytes, since the
block layer does not guarantee this. With that assumption, the per-segment
shift bv.bv_len >> SECTOR_SHIFT discards any sub-sector remainder, so
num_sectors may end up smaller than it should be and the bio may be split
earlier than necessary. This is harmless from a correctness point of view
but may result in suboptimal performance.
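
To illustrate the truncation, here is a standalone user-space sketch (not
kernel code; the segment lengths are made up for the example):

	/*
	 * Illustration only: three segments of 300 + 300 + 424 = 1024
	 * bytes hold two 512-byte sectors, but summing sectors per
	 * segment truncates each sub-sector remainder and yields zero.
	 */
	#include <stdio.h>

	int main(void)
	{
		const unsigned int bv_len[] = { 300, 300, 424 };
		unsigned int per_segment = 0, num_bytes = 0;

		for (int i = 0; i < 3; i++) {
			per_segment += bv_len[i] >> 9;	/* old: 0 + 0 + 0 */
			num_bytes += bv_len[i];		/* new: sum bytes */
		}
		printf("old: %u sectors, new: %u sectors\n",
		       per_segment, num_bytes >> 9);	/* old: 0, new: 2 */
		return 0;
	}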

Note: an unsigned int is sufficient for num_bytes since
bio_for_each_segment() yields at most one page per iteration and
PAGE_SIZE * BIO_MAX_VECS fits in an unsigned int.
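
As a worked example: with 4 KiB pages and BIO_MAX_VECS = 256, the worst
case is 4096 * 256 = 1 MiB, and even with 64 KiB pages it is only 16 MiB,
far below UINT_MAX.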

Suggested-by: John Garry <john.g.garry@xxxxxxxxxx>
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 block/blk-crypto-fallback.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 005c9157ffb3..3c914d2c054f 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -213,15 +213,16 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
 {
 	struct bio *bio = *bio_ptr;
 	unsigned int i = 0;
-	unsigned int num_sectors = 0;
+	unsigned int num_bytes = 0, num_sectors;
 	struct bio_vec bv;
 	struct bvec_iter iter;
 
 	bio_for_each_segment(bv, bio, iter) {
-		num_sectors += bv.bv_len >> SECTOR_SHIFT;
+		num_bytes += bv.bv_len;
 		if (++i == BIO_MAX_VECS)
 			break;
 	}
+	num_sectors = num_bytes >> SECTOR_SHIFT;
 	if (num_sectors < bio_sectors(bio)) {
 		struct bio *split_bio;
 
