With large block sizes like 64k the test failed with the following logs:

    QA output created by 301
     basic accounting
    +subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 168165376 vs 138805248 (expected data 138412032 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
    fallocate: Disk quota exceeded

The test creates nr_fill files, each of size 8k. With a 64k block size,
each 8k file occupies more space than the expected 8k due to internal
fragmentation, since a file occupies at least one fsblock.

Fix this by making the file size 64k, which is aligned with all the
supported block sizes.

Reviewed-by: Qu Wenruo <wqu@xxxxxxxx>
Reported-by: Disha Goel <disgoel@xxxxxxxxxxxxx>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@xxxxxxxxx>
---
 tests/btrfs/301 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/btrfs/301 b/tests/btrfs/301
index 6b59749d..be346f52 100755
--- a/tests/btrfs/301
+++ b/tests/btrfs/301
@@ -23,7 +23,7 @@ subv=$SCRATCH_MNT/subv
 nested=$SCRATCH_MNT/subv/nested
 snap=$SCRATCH_MNT/snap
 nr_fill=512
-fill_sz=$((8 * 1024))
+fill_sz=$((64 * 1024))
 total_fill=$(($nr_fill * $fill_sz))
 nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
 	grep nodesize | $AWK_PROG '{print $2}')
-- 
2.34.1
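
[Editor's note, not part of the patch: the internal-fragmentation arithmetic in the commit message can be sanity-checked with a short shell sketch. Each file's on-disk footprint is its size rounded up to the fsblock size, so with 512 files of 8k on a 64k-block filesystem the overhead works out to exactly the 29360128-byte "diff" reported in the log.]

```shell
# Sketch of the accounting mismatch described above (hypothetical
# variable names; only nr_fill and fill_sz come from the test itself).
fsblock=$((64 * 1024))   # 64k filesystem block size
fill_sz=$((8 * 1024))    # original 8k file size
nr_fill=512              # number of fill files created by the test

# A file occupies at least one fsblock: round its size up to the block size.
per_file=$(( (fill_sz + fsblock - 1) / fsblock * fsblock ))

# Difference between actual on-disk usage and the test's expectation.
overhead=$(( nr_fill * per_file - nr_fill * fill_sz ))
echo "$overhead"   # prints 29360128, matching the "diff" in the log
```

With fill_sz raised to 64k, per_file equals fill_sz and the overhead term vanishes, which is why the one-line change fixes the accounting.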