On Sat, 2025-08-16 at 10:28 +0530, Nandakumar Edamana wrote:
> On 16/08/25 00:40, Eduard Zingerman wrote:
>
> > Could you please provide a selftest demonstrating a difference in
> > behavior?
> > What technique did you employ to estimate the number of cases when
> > precision is improved vs worsened? If this is some kind of a program
> > doing randomized testing, could you please share a link to it?
>
> Hi Eduard,
>
> Thanks for the quick response! I've made the test program available here:
> https://github.com/nandedamana/improved-tnum-mul

Hi Nandakumar,

Thank you for the link and for the algorithm explanation in the readme.
What tool do you use for code generation (ngg)?

[...]

> $ ./experi --bits 6
> ...
> mine vs kernel (bits = 6):
>   better = 285059
>   same   = 229058
>   worse  =  17324
> myprod optimal cases  = 432406 / 531441
> linprod optimal cases = 202444 / 531441
> ----------------------------------------------------------
> is optimal? (bits = 6): 0

I did a more primitive jab at this in [1], comparing the number of
unknown bits (`__builtin_popcountl(tnum.mask)`) in the results of the
current vs proposed tnum_mul() for all pairs of 8-bit tnums, and got
the following results:

  Tnums  :     6560
  New win: 30086328  70 %
  Old win:  1463809   3 %
  Same   : 11483463  27 %

[1] https://github.com/eddyz87/tnum_mul_compare/blob/master/README.md

[...]

> Regarding adding selftests, I'll look into that as well. It'd be great
> if you have any specific strategy in mind, wrt to this particular
> change. BTW, any set of BPF programs that are known to fail due to the
> imprecision in the current algorithm?

I don't know of any current examples failing because of lacking
multiplication precision. Given that your algorithm does not introduce
any new edge cases, I'd simply add a test case which is more precise
with the new algorithm, just to have selftests that can detect the
change.