Re: [PATCH v11 22/24] mm: zswap: Allocate pool batching resources if the compressor supports batching.

> > > > [...]
> > > > >
> > > > > +       /*
> > > > > +        * Set the unit of compress batching for large folios, for quick
> > > > > +        * retrieval in the zswap_compress() fast path:
> > > > > +        * If the compressor is sequential (@pool->compr_batch_size is 1),
> > > > > +        * large folios will be compressed in batches of
> > > > ZSWAP_MAX_BATCH_SIZE
> > > > > +        * pages, where each page in the batch is compressed sequentially.
> > > > > +        * We see better performance by processing the folio in batches of
> > > > > +        * ZSWAP_MAX_BATCH_SIZE, due to cache locality of working set
> > > > > +        * structures.
> > > > > +        */
> > > > > +       pool->batch_size = (pool->compr_batch_size > 1) ?
> > > > > +                               pool->compr_batch_size : ZSWAP_MAX_BATCH_SIZE;
> > > > > +
> > > > >         zswap_pool_debug("created", pool);
> > > > >
> > > > >         return pool;
> > > > >
> > > >
> > > > It’s hard to follow — you add batch_size and compr_batch_size in this
> > > > patch, but only use them in another. Could we merge the related changes
> > > > into one patch instead of splitting them into several that don’t work
> > > > independently?
> > >
> > > Hi Barry,
> > >
> > > Thanks for reviewing the code and for your comments! Sure, I can merge
> > > this patch with the next one. I was trying to keep the changes modularized
> > > to a) zswap_cpu_comp_prepare(), b) zswap_store() and c)
> > zswap_compress()
> > > so the changes are broken into smaller parts, but I can see how this can
> > > make the changes appear disjointed.
> > >
> > > One thing though: the commit logs for each of the patches will
> > > also probably need to be merged, since I have tried to explain the
> > > changes in detail.
> >
> > It’s fine to merge the changelog and present the story as a whole. Do we
>
> Sure.
>
> > really need both pool->batch_size and pool->compr_batch_size? I assume
> > pool->batch_size = pool->compr_batch_size if HW supports batch; otherwise
> > pool->compr_batch_size = 1.
>
> Actually not exactly. We have found value in compressing in batches of
> ZSWAP_MAX_BATCH_SIZE even for software compressors. Latency benefits
> from cache locality of working-set data. Hence the approach that we have
> settled on is pool->batch_size = ZSWAP_MAX_BATCH_SIZE if
> the compressor does not support batching (i.e., if pool->compr_batch_size is 1).
> If it does, then pool->batch_size = pool->compr_batch_size.

I understand that even without hardware batching, you can still do some
software batching for the steps other than the compression itself.

However, based on the code below, it looks like
pool->compr_batch_size is almost always either equal to
pool->batch_size or 1:

+       pool->compr_batch_size = min(ZSWAP_MAX_BATCH_SIZE,
+                                    crypto_acomp_batch_size(acomp_ctx->acomp));

+       pool->batch_size = (pool->compr_batch_size > 1) ?
+                               pool->compr_batch_size : ZSWAP_MAX_BATCH_SIZE;


It seems one of these two variables may be redundant. For instance,
regardless of whether pool->compr_batch_size > 1, could we always treat
the store batch size as ZSWAP_MAX_BATCH_SIZE? If we removed
pool->batch_size, we could just use pool->compr_batch_size as the step
size for compression in either case.

for example:
       pool->compr_batch_size = min(ZSWAP_MAX_BATCH_SIZE,
                                    crypto_acomp_batch_size(acomp_ctx->acomp));

Then batch the zswap store using ZSWAP_MAX_BATCH_SIZE, but set the
compression step to pool->compr_batch_size.

Thanks
Barry




