Re: [PATCH v2 10/10] powerpc/uaccess: Implement masked user access

Hi Gabriel,

On 25/08/2025 at 11:04, Gabriel Paubert wrote:

Hi Christophe,

On Fri, Aug 22, 2025 at 11:58:06AM +0200, Christophe Leroy wrote:
Masked user access avoids the address/size verification by access_ok().
Although its main purpose is to skip the speculation in the
verification of user address and size, and hence avoid the need for
speculation mitigation, it also has the advantage of reducing the
number of instructions required, so it even benefits platforms that
don't need speculation mitigation, especially when the size of the
copy is not known at build time.

So implement masked user access on powerpc. The only requirement is
to have a memory gap that faults between the top of user space and
the real start of the kernel area.

On 64-bit platforms the address space is divided as follows:

       0xffffffffffffffff      +------------------+
                               |                  |
                               |   kernel space   |
                               |                  |
       0xc000000000000000      +------------------+  <== PAGE_OFFSET
                               |//////////////////|
                               |//////////////////|
       0x8000000000000000      |//////////////////|
                               |//////////////////|
                               |//////////////////|
       0x0010000000000000      +------------------+  <== TASK_SIZE_MAX
                               |                  |
                               |    user space    |
                               |                  |
       0x0000000000000000      +------------------+

The kernel is always above 0x8000000000000000 and user space always
below, with a gap in between. This leads to a 4-instruction sequence:

   80: 7c 69 1b 78     mr      r9,r3
   84: 7c 63 fe 76     sradi   r3,r3,63
   88: 7d 29 18 78     andc    r9,r9,r3
   8c: 79 23 00 4c     rldimi  r3,r9,0,1

This sequence leaves r3 unmodified when it is below 0x8000000000000000
and clamps it to 0x8000000000000000 if it is above.


This comment looks wrong: the second instruction converts r3 to a
replicated sign bit of the address ((addr >= 0) ? 0 : -1) when treating
the address as signed. After that, the code only modifies the MSB of
r3, so I don't see how r3 could be unchanged from the original value...

Unless I'm missing something, the above rldimi leaves the MSB of r3
unmodified and replaces all the other bits with the corresponding bits
of r9.

This is the code generated by GCC for the following:

	/* 0UL for a user address, ~0UL for a kernel address */
	unsigned long mask = (unsigned long)((long)addr >> 63);

	/* keep a user address unchanged, clamp a kernel address to 1UL << 63 */
	addr = ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << 63));
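
For what it's worth, a quick stand-alone check of that expression (plain
user-space C, assuming 64-bit unsigned long, with arbitrary sample
addresses) shows the behaviour described in the commit message, i.e.
user addresses pass through unchanged while kernel addresses collapse
to 0x8000000000000000:

#include <stdio.h>

/* Illustration only: mask_addr() mirrors the expression above;
 * the sample addresses below are arbitrary, not taken from the patch. */
static unsigned long mask_addr(unsigned long addr)
{
	unsigned long mask = (unsigned long)((long)addr >> 63);

	return ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << 63));
}

int main(void)
{
	/* user address: printed unchanged (0000000012345678) */
	printf("%016lx\n", mask_addr(0x0000000012345678UL));
	/* kernel address: clamped (8000000000000000) */
	printf("%016lx\n", mask_addr(0xc000000000001000UL));
	return 0;
}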



OTOH, I believe the following 3-instruction sequence would work, with
the input address (a) in r3 and a scratch value (tmp) in r9, both
intptr_t:

         sradi r9,r3,63  ; tmp = (a >= 0) ? 0L : -1L;
         andc r3,r3,r9   ; a = a & ~tmp; (equivalently a = (a >= 0) ? a : 0)
         rldimi r3,r9,0,1 ; copy MSB of tmp to MSB of a

But maybe I goofed...


From my understanding of rldimi, your proposed code would:
- Keep r3 unmodified when it is above 0x8000000000000000
- Set r3 to 0x7fffffffffffffff when it is below 0x8000000000000000

Extract of ppc64 ABI:

rldimi RA,RS,SH,MB

The contents of register RS are rotated left SH bits (a 64-bit
rotation). A mask is generated having 1-bits from bit MB through
bit 63 − SH and 0-bits elsewhere. The rotated data are inserted
into register RA under control of the generated mask.
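
To make that definition concrete, here is a rough C model of rldimi
written directly from the wording above (bit 0 is the MSB; the
wrap-around case where MB > 63 - SH is ignored as it doesn't occur
here), applied to the 4-instruction sequence from the patch on two
arbitrary sample addresses:

#include <stdint.h>
#include <stdio.h>

/* Rough C model of "rldimi RA,RS,SH,MB" per the definition quoted
 * above; only the non-wrapping case MB <= 63 - SH is handled. */
static uint64_t rldimi(uint64_t ra, uint64_t rs, unsigned sh, unsigned mb)
{
	uint64_t rot = sh ? (rs << sh) | (rs >> (64 - sh)) : rs;
	uint64_t mask = (~(uint64_t)0 >> mb)	/* 1-bits from bit MB ...  */
		      & (~(uint64_t)0 << sh);	/* ... through bit 63-SH   */

	return (rot & mask) | (ra & ~mask);
}

int main(void)
{
	uint64_t addrs[] = { 0x0000000012345678ULL, 0xc000000000001000ULL };

	for (int i = 0; i < 2; i++) {
		uint64_t r3 = addrs[i], r9;

		r9 = r3;				/* mr     r9,r3      */
		r3 = (uint64_t)((int64_t)r3 >> 63);	/* sradi  r3,r3,63   */
		r9 = r9 & ~r3;				/* andc   r9,r9,r3   */
		r3 = rldimi(r3, r9, 0, 1);		/* rldimi r3,r9,0,1  */

		printf("%016llx -> %016llx\n",
		       (unsigned long long)addrs[i], (unsigned long long)r3);
	}
	return 0;
}

With this model, the user address comes out unchanged and the kernel
address comes out as 8000000000000000, consistent with what I wrote
about the GCC-generated sequence above.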





