On Tue, 09 Sep 2025 12:56:17 -0600 Jonathan Corbet <corbet@xxxxxxx> wrote:

> Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx> writes:
>
> > Basically, what happens is that the number of jobs can be set in
> > different places:
>
> There is a lot of complexity there, and spread out between __init__(),
> run_sphinx(), and handle_pdf(). Is there any way to create a single
> figure_out_how_many_damn_jobs() and coalesce that logic there? That
> would help make that part of the system a bit more comprehensible.

I'll try to organize it better, but run_sphinx() does something
different from handle_pdf():

- run_sphinx(): claims all jobserver tokens;
- handle_pdf(): uses concurrent.futures and handles the parallelism
  inside it.

Perhaps I can move the concurrent.futures parallelism into the
jobserver library to simplify the code a little bit, while offering an
interface somewhat similar to the run_sphinx() logic (see the sketch at
the end of this mail). Let's see if I can find a way to do it while
keeping the code generic (*). I'll take a look at it, probably on
Thursday or Friday.

(*) I made a similar attempt during development, adding a subprocess
call wrapper there, but I didn't like that solution much; that was
before the need to use concurrent.futures, though.

> That said, I've been unable to make this change break in my testing. I
> guess I'm not seeing a lot of impediments to applying the next version
> at this point.

Great! I'll probably respin the next (hopefully final) version by the
end of this week, if I don't get sidetracked by other things.

Thanks,
Mauro
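
PS: just to illustrate the kind of interface I have in mind for the
jobserver library, here is a rough, minimal sketch. The names and the
way the token count reaches it are made up for illustration only; the
real thing would have to hook into the existing token-claiming code.

import concurrent.futures

def run_parallel(claimed_jobs, tasks):
    """Run each callable in tasks with at most claimed_jobs workers.

    claimed_jobs is assumed to be the number of jobserver tokens the
    caller managed to claim, plus the one job it implicitly owns.
    Results are returned in submission order.
    """
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=claimed_jobs) as pool:
        futures = [pool.submit(task) for task in tasks]
        for fut in futures:
            results.append(fut.result())
    return results

With something like that inside the jobserver library, handle_pdf()
would only need to build a list of per-document callables, e.g.
run_parallel(jobs, [partial(build_one_pdf, doc) for doc in pdf_docs]),
instead of driving concurrent.futures itself (build_one_pdf() here is
just a placeholder for whatever currently runs inside the executor).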