On 20/08/25 1:45 pm, Nirjhar Roy (IBM) wrote:
With large block sizes like 64k, the test failed with the
following logs:
QA output created by 301
basic accounting
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 168165376 vs 138805248 (expected data 138412032 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
fallocate: Disk quota exceeded
The test creates nr_fill files, each of size 8k. With a 64k block
size, these 8k files occupy more space than the expected size (i.e.,
8k each) due to internal fragmentation, since each file occupies at
least one fsblock. Fix this by making the file size 64k, which is
aligned with all the supported block sizes.
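
As a sanity check (not part of the patch), the diff value in the
failure log matches the internal-fragmentation arithmetic. A minimal
shell sketch, with nr_fill=512 and fill_sz=8k taken from the test
below:

    # Each 8k file rounds up to one 64k fsblock, so actual data usage
    # exceeds the expected nr_fill * fill_sz.
    nr_fill=512                          # from tests/btrfs/301
    fill_sz=$((8 * 1024))                # file size written by the test
    blocksize=$((64 * 1024))             # fsblock size on the failing config

    expected=$((nr_fill * fill_sz))      # 4194304, "expected data" in the log
    actual=$((nr_fill * blocksize))      # 33554432, one fsblock per file
    echo "diff: $((actual - expected))"  # prints 29360128, matching the log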
Reviewed-by: Qu Wenruo <wqu@xxxxxxxx>
Reported-by: Disha Goel <disgoel@xxxxxxxxxxxxx>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@xxxxxxxxx>
I tested it on Power, and the btrfs/301 test passes with both 4k & 64k
block sizes.
SECTION -- btrfs_64k
RECREATING -- btrfs on /dev/loop0
FSTYP -- btrfs
PLATFORM -- Linux/ppc64le localhost 6.17.0-rc2-00060-g068a56e56fa8 #3 SMP Thu Aug 21 17:54:04 IST 2025
MKFS_OPTIONS -- -f -s 65536 -n 65536 /dev/loop1
MOUNT_OPTIONS -- /dev/loop1 /mnt/scratch
btrfs/301 67s ... 111s
Ran: btrfs/301
Passed all 1 tests
Tested-by: Disha Goel <disgoel@xxxxxxxxxxxxx>
---
tests/btrfs/301 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/btrfs/301 b/tests/btrfs/301
index 6b59749d..be346f52 100755
--- a/tests/btrfs/301
+++ b/tests/btrfs/301
@@ -23,7 +23,7 @@ subv=$SCRATCH_MNT/subv
nested=$SCRATCH_MNT/subv/nested
snap=$SCRATCH_MNT/snap
nr_fill=512
-fill_sz=$((8 * 1024))
+fill_sz=$((64 * 1024))
total_fill=$(($nr_fill * $fill_sz))
nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
grep nodesize | $AWK_PROG '{print $2}')