Re: [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation

On 2025-08-26 at 20:35:49 +0100, Catalin Marinas wrote:
>On Mon, Aug 25, 2025 at 10:24:26PM +0200, Maciej Wieczor-Retman wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index e9bbfacc35a6..82cbfc7d1233 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
>>  	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
>>  	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
>>  	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
>> -	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> -	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> -	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> -	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> -	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>> +	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> +	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> +	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> +	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> +	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>>  	default 0xffffffffffffffff
>>  
>>  config UNWIND_TABLES
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 5213248e081b..277d56ceeb01 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -89,7 +89,15 @@
>>   *
>>   * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
>>   * the upper bound of possible virtual kernel memory addresses UL(1) << 64
>> - * according to the mapping formula.
>> + * according to the mapping formula. For Generic KASAN, the address in the
>> + * mapping formula is treated as unsigned (part of the compiler's ABI), so the
>> + * end of the shadow memory region is at a large positive offset from
>> + * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
>> + * formula is treated as signed. Since all kernel addresses are negative, they
>> + * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
>> + * itself the end of the shadow memory region. (User pointers are positive and
>> + * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
>> + * not allocated for them.)
>>   *
>>   * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
>>   * memory start must map to the lowest possible kernel virtual memory address
>> @@ -100,7 +108,11 @@
>>   */
>>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>>  #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>> +#ifdef CONFIG_KASAN_GENERIC
>>  #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
>> +#else
>> +#define KASAN_SHADOW_END	KASAN_SHADOW_OFFSET
>> +#endif
>>  #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
>>  #define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
>>  #define PAGE_END		KASAN_SHADOW_START
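
To make the signed-shift arithmetic above concrete, here is a worked sketch (not part of the patch; it assumes 48-bit VAs, the sw-tags scale shift of 4, and the 0xffff800000000000 offset from the Kconfig hunk above):

	/* sw-tags mapping; the s64 cast is what makes the shift arithmetic: */
	static inline u64 sw_tags_shadow(u64 addr)
	{
		return ((s64)addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
	}

	/*
	 * sw_tags_shadow(0xffffffffffffffff) == 0xffff7fffffffffff
	 *	(KASAN_SHADOW_OFFSET - 1, i.e. the shadow region ends exactly
	 *	 at the offset, so KASAN_SHADOW_END == KASAN_SHADOW_OFFSET)
	 * sw_tags_shadow(0xffff000000000000) == 0xffff700000000000
	 *	(== _KASAN_SHADOW_START(48)
	 *	 == KASAN_SHADOW_END - (1UL << (48 - 4)))
	 */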
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index d541ce45daeb..dc2de12c4f26 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
>>  /* The early shadow maps everything to a single page of zeroes */
>>  asmlinkage void __init kasan_early_init(void)
>>  {
>> -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> -		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> +			KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> +	else
>> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
>>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
>>  	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
>
>For the arm64 parts:
>
>Acked-by: Catalin Marinas <catalin.marinas@xxxxxxx>

Thanks :)

>
>I wonder whether it's worth keeping the generic KASAN mode for arm64.
>We've had the hardware TBI from the start, so the architecture version
>is not an issue. The compiler support may differ though.
>
>Anyway, that would be more suitable for a separate cleanup patch.
>
>-- 
>Catalin

I want to test it at some point, but I was always under the impression that, at
least in theory, the different modes can catch slightly different classes of
errors. Not a big set, but one example is an access through a wrong address
that still lands in allocated memory: Generic KASAN stays silent, since its
shadow memory only records whether (and how much of) a region is allocated,
while sw-tags reports it because the randomized tags mismatch. I can't think of
an example the other way around right now, but I assume there are a few.
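A rough sketch of that case (hypothetical, not from the patch; it assumes the
second allocation happens to sit at the bad offset):

	char *p = kmalloc(128, GFP_KERNEL);
	char *q = kmalloc(128, GFP_KERNEL);

	/*
	 * A miscomputed offset that jumps past p's redzone and lands
	 * inside q's live allocation. Generic KASAN sees "allocated"
	 * shadow and stays silent; sw-tags compares p's pointer tag
	 * against q's randomized memory tag and reports the mismatch
	 * with high probability.
	 */
	p[512] = 0xff;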

-- 
Kind regards
Maciej Wieczór-Retman



