Inside btrfs we have some call sites using bio_for_each_segment() and
bio_for_each_segment_all().

They are fine for now, as we only support bs <= ps, thus the returned
bv_len is no larger than the block size.

However, for the incoming bs > ps support, a block can cross several
pages (although they are still physically contiguous, as such a block
is backed by a large folio), and in that case the single-page iterators
cannot handle such blocks.

Replace the following call sites with the bio_for_each_bvec*() helpers:

- btrfs_csum_one_bio()
  This one is critical for basic uncompressed writes in the bs > ps
  case. Otherwise it would use the content of a single page to
  calculate the checksum, instead of the correct block (which crosses
  multiple pages).

- set_bio_pages_uptodate()
- verify_bio_data_sectors()
  These are mostly fine even with the old single-page interface, as
  they do not touch bv_len at all. But it is still helpful to replace
  them, as the new multi-page helpers save some bytes of stack memory.

Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
---
 fs/btrfs/file-item.c | 13 +++++++------
 fs/btrfs/raid56.c    |  8 ++++----
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
index 4dd3d8a02519..bb08b27983a7 100644
--- a/fs/btrfs/file-item.c
+++ b/fs/btrfs/file-item.c
@@ -775,6 +775,7 @@ int btrfs_csum_one_bio(struct btrfs_bio *bbio)
         SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
         struct bio *bio = &bbio->bio;
         struct btrfs_ordered_sum *sums;
+        const u32 blocksize = fs_info->sectorsize;
         char *data;
         struct bvec_iter iter;
         struct bio_vec bvec;
@@ -799,16 +800,16 @@ int btrfs_csum_one_bio(struct btrfs_bio *bbio)
 
         shash->tfm = fs_info->csum_shash;
 
-        bio_for_each_segment(bvec, bio, iter) {
-                blockcount = BTRFS_BYTES_TO_BLKS(fs_info,
-                                                 bvec.bv_len + fs_info->sectorsize
-                                                 - 1);
+        bio_for_each_bvec(bvec, bio, iter) {
+                ASSERT(bvec.bv_len >= blocksize);
+                ASSERT(IS_ALIGNED(bvec.bv_len, blocksize));
+                blockcount = BTRFS_BYTES_TO_BLKS(fs_info, bvec.bv_len);
 
                 for (i = 0; i < blockcount; i++) {
                         data = bvec_kmap_local(&bvec);
                         crypto_shash_digest(shash,
-                                            data + (i * fs_info->sectorsize),
-                                            fs_info->sectorsize,
+                                            data + (i << fs_info->sectorsize_bits),
+                                            blocksize,
                                             sums->sums + index);
                         kunmap_local(data);
                         index += fs_info->csum_size;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index df48dd6c3f54..2c810fe96bdf 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1513,11 +1513,11 @@ static void set_bio_pages_uptodate(struct btrfs_raid_bio *rbio, struct bio *bio)
 {
         const u32 sectorsize = rbio->bioc->fs_info->sectorsize;
         struct bio_vec *bvec;
-        struct bvec_iter_all iter_all;
+        int i;
 
         ASSERT(!bio_flagged(bio, BIO_CLONED));
 
-        bio_for_each_segment_all(bvec, bio, iter_all) {
+        bio_for_each_bvec_all(bvec, bio, i) {
                 struct sector_ptr *sector;
                 phys_addr_t paddr = bvec_phys(bvec);
 
@@ -1574,7 +1574,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio,
         struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;
         int total_sector_nr = get_bio_sector_nr(rbio, bio);
         struct bio_vec *bvec;
-        struct bvec_iter_all iter_all;
+        int i;
 
         /* No data csum for the whole stripe, no need to verify. */
         if (!rbio->csum_bitmap || !rbio->csum_buf)
                 return;
@@ -1584,7 +1584,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio,
         if (total_sector_nr >= rbio->nr_data * rbio->stripe_nsectors)
                 return;
 
-        bio_for_each_segment_all(bvec, bio, iter_all) {
+        bio_for_each_bvec_all(bvec, bio, i) {
                 void *kaddr;
 
                 kaddr = bvec_kmap_local(bvec);
-- 
2.50.1