On Mon 12-05-25 14:33:15, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@xxxxxxxxxx>
> 
> jbd2_journal_blocks_per_page() returns the number of blocks in a single
> page. Rename it to jbd2_journal_blocks_per_folio() and make it return
> the number of blocks in the largest folio, preparing for the calculation
> of journal credits when allocating blocks within a large folio in the
> writeback path.
> 
> Signed-off-by: Zhang Yi <yi.zhang@xxxxxxxxxx>

...

> @@ -2657,9 +2657,10 @@ void jbd2_journal_ack_err(journal_t *journal)
>  	write_unlock(&journal->j_state_lock);
>  }
>  
> -int jbd2_journal_blocks_per_page(struct inode *inode)
> +int jbd2_journal_blocks_per_folio(struct inode *inode)
>  {
> -	return 1 << (PAGE_SHIFT - inode->i_sb->s_blocksize_bits);
> +	return 1 << (PAGE_SHIFT + mapping_max_folio_order(inode->i_mapping) -
> +		     inode->i_sb->s_blocksize_bits);
>  }

FWIW this will result in us reserving some 10k transaction credits for
1k blocksize with a maximum 2M folio size. That is going to create
serious pressure on the journalling machinery. For now I guess we are
fine but eventually we should rewrite how credits for writing out a
folio are computed to reduce this massive overestimation. It will be a
bit tricky but we could always reserve credits for one or a couple of
extents and try to extend the transaction if we need more. The tricky
part is doing the partial folio writeout in case we cannot extend the
transaction...

								Honza

>  /*
> diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
> index 023e8abdb99a..ebbcdab474d5 100644
> --- a/include/linux/jbd2.h
> +++ b/include/linux/jbd2.h
> @@ -1723,7 +1723,7 @@ static inline int tid_geq(tid_t x, tid_t y)
>  	return (difference >= 0);
>  }
>  
> -extern int jbd2_journal_blocks_per_page(struct inode *inode);
> +extern int jbd2_journal_blocks_per_folio(struct inode *inode);
>  extern size_t journal_tag_bytes(journal_t *journal);
>  
>  static inline int jbd2_journal_has_csum_v2or3(journal_t *journal)
> -- 
> 2.46.1

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
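
To make the arithmetic above concrete: with 4k pages a 2M folio is
order 9, so for a 1k block size the renamed helper returns
1 << (12 + 9 - 10) = 2048 blocks, which ext4's per-block credit
estimate then inflates toward the ~10k figure Jan mentions. A rough,
hypothetical sketch of the "reserve for one extent, extend on demand"
scheme he describes might look like the following. Note that
count_extents_needed(), writeback_one_extent(), writeback_folio_partial()
and EXTENT_ALLOC_CREDITS are made-up placeholders, not existing kernel
symbols; only jbd2_journal_extend() (which returns a positive value when
the running transaction is too full to be extended) and
jbd2_journal_restart() are real jbd2 APIs.

	#include <linux/jbd2.h>	/* handle_t, jbd2_journal_extend() */
	#include <linux/mm.h>	/* struct folio */

	/*
	 * Hypothetical sketch: the caller started the handle with
	 * credits for a single extent; extend it one extent at a time
	 * instead of reserving jbd2_journal_blocks_per_folio() worth of
	 * credits up front.
	 */
	static int writeback_folio_extents(handle_t *handle, struct folio *folio)
	{
		unsigned int done = 0, nr = count_extents_needed(folio);
		int err;

		while (done < nr) {
			err = writeback_one_extent(handle, folio, done);
			if (err)
				return err;
			done++;
			if (done == nr)
				break;

			/* Try to get credits for the next extent. */
			err = jbd2_journal_extend(handle, EXTENT_ALLOC_CREDITS, 0);
			if (err > 0) {
				/*
				 * The transaction is full and cannot be
				 * extended. This is the tricky part: push
				 * out the part of the folio mapped so far
				 * and continue the rest in a fresh
				 * transaction.
				 */
				err = writeback_folio_partial(folio, done);
				if (!err)
					err = jbd2_journal_restart(handle,
							EXTENT_ALLOC_CREDITS);
			}
			if (err)
				return err;
		}
		return 0;
	}

The appeal of this shape is that the common case (one contiguous extent
per folio) pays for exactly one extent's worth of credits, and the
worst case degrades into partial writeouts rather than a huge up-front
reservation.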