On 22/10/28 09:31AM, Darrick J. Wong wrote:
> On Fri, Oct 28, 2022 at 10:00:32AM +0530, Ritesh Harjani (IBM) wrote:
> > This patch renames the struct iomap_page members uptodate and
> > uptodate_lock to state and state_lock, to better reflect their purpose
> > in the upcoming patch.
> >
> > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
> > ---
> > fs/iomap/buffered-io.c | 30 +++++++++++++++---------------
> > 1 file changed, 15 insertions(+), 15 deletions(-)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index ca5c62901541..255f9f92668c 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -25,13 +25,13 @@
> >
> > /*
> > * Structure allocated for each folio when block size < folio size
> > - * to track sub-folio uptodate status and I/O completions.
> > + * to track sub-folio uptodate state and I/O completions.
> > */
> > struct iomap_page {
> > atomic_t read_bytes_pending;
> > atomic_t write_bytes_pending;
> > - spinlock_t uptodate_lock;
> > - unsigned long uptodate[];
> > + spinlock_t state_lock;
> > + unsigned long state[];
> > };
> >
> > static inline struct iomap_page *to_iomap_page(struct folio *folio)
> > @@ -58,12 +58,12 @@ iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
> > else
> > gfp = GFP_NOFS | __GFP_NOFAIL;
> >
> > - iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
> > + iop = kzalloc(struct_size(iop, state, BITS_TO_LONGS(nr_blocks)),
> > gfp);
> > if (iop) {
> > - spin_lock_init(&iop->uptodate_lock);
> > + spin_lock_init(&iop->state_lock);
> > if (folio_test_uptodate(folio))
> > - bitmap_fill(iop->uptodate, nr_blocks);
> > + bitmap_fill(iop->state, nr_blocks);
> > folio_attach_private(folio, iop);
> > }
> > return iop;
> > @@ -79,7 +79,7 @@ static void iomap_page_release(struct folio *folio)
> > return;
> > WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending));
> > WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending));
> > - WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) !=
> > + WARN_ON_ONCE(bitmap_full(iop->state, nr_blocks) !=
> > folio_test_uptodate(folio));
> > kfree(iop);
> > }
> > @@ -110,7 +110,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
> >
> > /* move forward for each leading block marked uptodate */
> > for (i = first; i <= last; i++) {
> > - if (!test_bit(i, iop->uptodate))
> > + if (!test_bit(i, iop->state))
>
> Hmm... time to add a new predicate helper to clarify that it's the
> uptodate state we're checking here.
Yup. Willy suggested something like iop_block_*(), but to keep it short we
could call it iop_test_uptodate(). Something along the lines of the sketch
below.
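
Just an untested sketch against the renamed member; the helper name and
signature are only a suggestion, not necessarily what the final patch will
use:

static inline bool iop_test_uptodate(struct iomap_page *iop,
		unsigned int block)
{
	/* 'state' currently holds only the per-block uptodate bits */
	return test_bit(block, iop->state);
}

With that, the check in iomap_adjust_read_range() would read
!iop_test_uptodate(iop, i) instead of open-coding test_bit() on iop->state.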
-ritesh