Re: [PATCH v4 2/3] generic: various atomic write tests with hardware and scsi_debug

> On Jun 12, 2025, at 12:06 AM, Ojaswin Mujoo <ojaswin@xxxxxxxxxxxxx> wrote:
> 
> On Wed, Jun 11, 2025 at 06:57:07PM -0700, Catherine Hoang wrote:
>> Simple tests of various atomic write requests and a (simulated) hardware
>> device.
>> 
>> The first test performs basic multi-block atomic writes on a scsi_debug device
>> with atomic writes enabled. We test all advertised sizes between the atomic
>> write unit min and max. We also ensure that the write fails when expected, such
>> as when attempting buffered I/O or unaligned direct I/O.
>> 
>> The second test is similar to the one above, except that it verifies multi-block
>> atomic writes on actual hardware instead of simulated hardware. The device used
>> in this test is not required to support atomic writes.
>> 
>> The final two tests ensure multi-block atomic writes can be performed on various
>> interleaved mappings, including written, mapped, hole, and unwritten. We also
>> test large atomic writes on a heavily fragmented filesystem. These tests are
>> separated into reflink (shared) and non-reflink tests.
>> 
>> Signed-off-by: "Darrick J. Wong" <djwong@xxxxxxxxxx>
>> Signed-off-by: Catherine Hoang <catherine.hoang@xxxxxxxxxx>
>> ---
> 
> <snip>
> 
> Okay after running some of these tests on my setup, I have a few
> more questions regarding g/1225.
> 
>> diff --git a/tests/generic/1225 b/tests/generic/1225
>> new file mode 100755
>> index 00000000..f2dea804
>> --- /dev/null
>> +++ b/tests/generic/1225
>> @@ -0,0 +1,128 @@
>> +#! /bin/bash
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (c) 2025 Oracle.  All Rights Reserved.
>> +#
>> +# FS QA Test 1225
>> +#
>> +# basic tests for large atomic writes with mixed mappings
>> +#
>> +. ./common/preamble
>> +_begin_fstest auto quick rw atomicwrites
>> +
>> +. ./common/atomicwrites
>> +. ./common/filter
>> +. ./common/reflink
>> +
>> +_require_scratch
>> +_require_atomic_write_test_commands
>> +_require_scratch_write_atomic_multi_fsblock
>> +_require_xfs_io_command pwrite -A
> 
> I think this is already covered in _require_atomic_write_test_commands
> 
>> +
>> +_scratch_mkfs_sized $((500 * 1048576)) >> $seqres.full 2>&1
>> +_scratch_mount
>> +
>> +file1=$SCRATCH_MNT/file1
>> +file2=$SCRATCH_MNT/file2
>> +file3=$SCRATCH_MNT/file3
>> +
>> +touch $file1
>> +
>> +max_awu=$(_get_atomic_write_unit_max $file1)
>> +test $max_awu -ge 262144 || _notrun "test requires atomic writes up to 256k"
> 
> Is it possible to keep the max_awu requirement to maybe 64k? The reason
> I'm asking is that in 4k bs ext4 with bigalloc, having cluster size more
> than 64k is actually experimental so I don't think many people would be
> formatting with 256k cluster size and would miss out on running this
> test. In fact, if I set the cluster size to 256k I run into
> ENOSPC in the last ENOSPC scenario of this test, whereas 64k works
> correctly.
> 
> So just wondering if we can have an awu_max of 64k here so that more
> people are easily able to run this in their setups?

Yes, this can be changed to 64k. I think only one of the tests needs
256k writes, but it looks like that can be changed to 64k as well.
Thanks for the comments!
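
The relaxation agreed on above amounts to a one-line change in the test's gate. A minimal sketch, with the fstests helpers (`_notrun`, `_get_atomic_write_unit_max`) stubbed out so it is self-contained for illustration:

```shell
# Sketch of the relaxed gate: require a 64k atomic write unit max
# instead of 256k, so e.g. ext4 bigalloc with a 64k cluster can run
# the test. _notrun and _get_atomic_write_unit_max are real fstests
# helpers; they are stubbed here so the sketch runs standalone.
_notrun() { echo "notrun: $*"; exit 0; }
_get_atomic_write_unit_max() { echo 65536; }  # stub: device reports 64k

max_awu=$(_get_atomic_write_unit_max file1)
test "$max_awu" -ge 65536 || _notrun "test requires atomic writes up to 64k"
echo "requirement satisfied: awu_max=$max_awu"
```

In the real test only the threshold and the `_notrun` message change; the helper calls stay as in the quoted hunk.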
> 
> 
> 
> Regards,
> ojaswin
> 
> <snip>
> 
>> +
>> +echo "atomic write max size on fragmented fs"
>> +avail=`_get_available_space $SCRATCH_MNT`
>> +filesizemb=$((avail / 1024 / 1024 - 1))
>> +fragmentedfile=$SCRATCH_MNT/fragmentedfile
>> +$XFS_IO_PROG -fc "falloc 0 ${filesizemb}m" $fragmentedfile
>> +$here/src/punch-alternating $fragmentedfile
>> +touch $file3
>> +$XFS_IO_PROG -dc "pwrite -A -D -V1 0 65536" $file3 >>$seqres.full 2>&1
>> +md5sum $file3 | _filter_scratch
>> +
>> +# success, all done
>> +status=0
>> +exit
>> diff --git a/tests/generic/1225.out b/tests/generic/1225.out
>> new file mode 100644
>> index 00000000..92302597
>> --- /dev/null
>> +++ b/tests/generic/1225.out
>> @@ -0,0 +1,21 @@
>> +QA output created by 1225
>> +atomic write hole+mapped+hole
>> +9464b66461bc1d20229e1b71733539d0  SCRATCH_MNT/file1
>> +atomic write adjacent mapped+hole and hole+mapped
>> +9464b66461bc1d20229e1b71733539d0  SCRATCH_MNT/file1
>> +atomic write mapped+hole+mapped
>> +9464b66461bc1d20229e1b71733539d0  SCRATCH_MNT/file1
>> +atomic write unwritten+mapped+unwritten
>> +111ce6bf29d5b1dbfb0e846c42719ece  SCRATCH_MNT/file1
>> +atomic write adjacent mapped+unwritten and unwritten+mapped
>> +111ce6bf29d5b1dbfb0e846c42719ece  SCRATCH_MNT/file1
>> +atomic write mapped+unwritten+mapped
>> +111ce6bf29d5b1dbfb0e846c42719ece  SCRATCH_MNT/file1
>> +atomic write interweaved hole+unwritten+written
>> +5577e46f20631d76bbac73ab1b4ed208  SCRATCH_MNT/file1
>> +atomic write at EOF
>> +75572c4929fde8faf131e84df4c6a764  SCRATCH_MNT/file1
>> +atomic write preallocated region
>> +27a248351cd540bc9ac2c2dc841abca2  SCRATCH_MNT/file1
>> +atomic write max size on fragmented fs
>> +27c9068d1b51da575a53ad34c57ca5cc  SCRATCH_MNT/file3
>> -- 
>> 2.34.1
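
For readers without the fstests tree handy: the fragmentation setup in the quoted hunk (falloc nearly all free space, then run `src/punch-alternating`) works by punching a hole in every other filesystem block, so the space handed back to the allocator is maximally fragmented (one block per free extent). A self-contained sketch of that pattern, which only computes and prints the hole offsets for a small 16-block file with assumed 4k blocks (the real helper issues `FALLOC_FL_PUNCH_HOLE` ioctls):

```shell
# Sketch of the punch-alternating pattern: free every other block.
# blksz and nblocks are illustrative assumptions, not values from the test.
blksz=4096
nblocks=16
offsets=""
i=0
while [ "$i" -lt "$nblocks" ]; do
    # The real helper would punch a hole here via fallocate(2);
    # we only record and print the offset it would use.
    echo "punch offset=$((i * blksz)) len=$blksz"
    offsets="$offsets $((i * blksz))"
    i=$((i + 2))
done
```

The subsequent 64k atomic write in the test then has to be satisfied from this fragmented free space, exercising the allocator's ability to build a contiguous region for the atomic write.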
