When tested with block size/node size 64K on btrfs, the test fails with the
following error:

    QA output created by 563
    read/write
    read is in range
    -write is in range
    +write has value of 8855552
    +write is NOT in range 7969177.6 .. 8808038.4
    write -> read/write
    ...

The slight increase in the number of bytes written is caused by the larger
nodesize (metadata), which pushes the written byte count just past the
tolerance limit. Fix this by increasing the iosize: a larger iosize widens
the absolute tolerance range enough to cover btrfs with larger node sizes.

Reported-by: Disha Goel <disgoel@xxxxxxxxxxxxx>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@xxxxxxxxx>
---
 tests/generic/563 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/generic/563 b/tests/generic/563
index 89a71aa4..6cb9ddb0 100755
--- a/tests/generic/563
+++ b/tests/generic/563
@@ -43,7 +43,7 @@ _require_block_device $SCRATCH_DEV
 _require_non_zoned_device ${SCRATCH_DEV}
 
 cgdir=$CGROUP2_PATH
-iosize=$((1024 * 1024 * 8))
+iosize=$((1024 * 1024 * 16))
 
 # Check cgroup read/write charges against expected values. Allow for some
 # tolerance as different filesystems seem to account slightly differently.
-- 
2.34.1
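
[Editor's note: the arithmetic behind the fix can be checked from the numbers
in the failure output alone. The range 7969177.6 .. 8808038.4 is exactly
8 MiB ± 5%, so the tolerance appears to be a relative 5% (an assumption
inferred from the output, not stated in the patch). Doubling iosize doubles
the absolute slack, which absorbs the roughly fixed metadata overhead:]

```python
# Sketch of the tolerance check, assuming a ±5% relative tolerance
# (inferred from the range printed in the failure output).
iosize_old = 8 * 1024 * 1024        # 8388608 bytes
lo = iosize_old * 0.95              # 7969177.6 -- matches the failure output
hi = iosize_old * 1.05              # 8808038.4 -- matches the failure output
observed = 8855552                  # write bytes seen on btrfs, 64K nodesize

print(lo <= observed <= hi)         # False: ~456 KiB of metadata overshoot

# With iosize = 16 MiB the absolute slack doubles. Assuming the metadata
# overhead stays roughly the same in absolute terms, the write now fits:
iosize_new = 16 * 1024 * 1024
extra = observed - iosize_old       # ~466944 bytes of metadata overhead
print(iosize_new + extra <= iosize_new * 1.05)  # True
```

This is only a model of why the one-line change works; the actual comparison
is done by the test's tolerance helper, not by this snippet.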