Re: [PATCH bpf-next] bpf: Add range tracking for BPF_NEG

On Tue, 2025-06-24 at 10:23 -0700, Song Liu wrote:
> Add range tracking for instruction BPF_NEG. Without this logic, a trivial
> program like the following will fail
>
>     volatile bool found_value_b;
>     SEC("lsm.s/socket_connect")
>     int BPF_PROG(test_socket_connect)
>     {
>         if (!found_value_b)
>                 return -1;
>         return 0;
>     }
>
> with verifier log:
>
> "At program exit the register R0 has smin=0 smax=4294967295 should have
> been in [-4095, 0]".
>
> This is because range information is lost in BPF_NEG:
>
> 0: R1=ctx() R10=fp0
> ; if (!found_value_b) @ xxxx.c:24
> 0: (18) r1 = 0xffa00000011e7048       ; R1_w=map_value(...)
> 2: (71) r0 = *(u8 *)(r1 +0)           ; R0_w=scalar(smin32=0,smax=255)
> 3: (a4) w0 ^= 1                       ; R0_w=scalar(smin32=0,smax=255)
> 4: (84) w0 = -w0                      ; R0_w=scalar(range info lost)
>
> Note that the log above is manually modified to highlight relevant bits.
>
> Fix this by maintaining proper range information with BPF_NEG, so that
> the verifier will know:
>
> 4: (84) w0 = -w0                      ; R0_w=scalar(smin32=-255,smax=0)
>
> Also add selftests to make sure the logic works as expected.
>
> Signed-off-by: Song Liu <song@xxxxxxxxxx>
> ---

I double-checked, and backtrack_insn() indeed operates as expected.

Note: bpf_reg_state->id has to be reset on BPF_NEG, otherwise the
following is possible:

  4: (bf) r2 = r1                       ; R1_w=scalar(id=2,...) R2_w=scalar(id=2,...)
  5: (87) r1 = -r1                      ; R1_w=scalar(id=2,...)

On master the id is reset by mark_reg_unknown(). The id is used to
transfer range knowledge across all scalars sharing it, which would be
wrong for r1 here, since it now holds the negated value.
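
To make the hazard concrete (hand-written continuation, not actual
verifier output): if a bound is later learned for r2, e.g.

  6: (25) if r2 > 0x10 goto pc+1

then on the fall-through path the r2 <= 0x10 knowledge would also be
copied to r1 via the shared id=2, even though r1 now holds -r2.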

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 279a64933262..93512596a590 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[...]

> @@ -15214,6 +15215,14 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
>  		scalar_min_max_sub(dst_reg, &src_reg);
>  		dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
>  		break;
> +	case BPF_NEG:
> +		struct bpf_reg_state dst_reg_copy = *dst_reg;

Nit: there might be a concern regarding stack usage.
     struct bpf_verifier_env has a few 'fake_reg' scratch registers
     defined for a similar purpose.
     Also, clangd warns me that a declaration directly after a label is
     a C23 extension; wrapping the case body in braces avoids that.
     (A rough sketch addressing these points follows the quoted hunk
     below.)

> +
> +		___mark_reg_known(dst_reg, 0);
> +		scalar32_min_max_sub(dst_reg, &dst_reg_copy);
> +		scalar_min_max_sub(dst_reg, &dst_reg_copy);
> +		dst_reg->var_off = tnum_neg(dst_reg_copy.var_off);
> +		break;
>  	case BPF_MUL:
>  		dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
>  		scalar32_min_max_mul(dst_reg, &src_reg);

[...]
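
Combining the points above (explicit id reset, a scratch register
instead of an on-stack copy, braces so there is no declaration directly
after the label), the new case could look roughly like the sketch
below. This is only a sketch: whether env->fake_reg[0] is actually free
at this point in adjust_scalar_min_max_vals() would need to be
double-checked, and tnum_neg() is kept as in the patch.

	case BPF_NEG: {
		/* Scratch copy of the pre-negation state; reusing one of
		 * the env->fake_reg registers avoids growing the stack frame.
		 */
		struct bpf_reg_state *orig = &env->fake_reg[0];

		*orig = *dst_reg;
		___mark_reg_known(dst_reg, 0);
		/* Unlike mark_reg_unknown() on master, ___mark_reg_known()
		 * keeps the id, so drop the link to other scalars explicitly.
		 */
		dst_reg->id = 0;
		scalar32_min_max_sub(dst_reg, orig);
		scalar_min_max_sub(dst_reg, orig);
		dst_reg->var_off = tnum_neg(orig->var_off);
		break;
	}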

> diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools/testing/selftests/bpf/progs/verifier_precision.c
> index 9fe5d255ee37..bcff70f8cebb 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_precision.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_precision.c

Nit: selftests usually go in a separate patch.

> @@ -231,4 +231,34 @@ __naked void bpf_cond_op_not_r10(void)
>  	::: __clobber_all);
>  }
>
> +SEC("lsm.s/socket_connect")
> +__success
> +__naked int bpf_neg_2(void)

Nit: I'd match __log_level(2) output to check the actual range
     inferred by the verifier.

Maybe also add a test that operates on 64-bit registers?
(A rough sketch combining both suggestions is below.)

[...]
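
For reference, a rough sketch of such a test, meant to slot into
verifier_precision.c next to the existing BPF_NEG precision test. The
second __msg() pattern is a placeholder: the exact scalar() state
string would have to be copied from a real __log_level(2) run.

	SEC("?raw_tp")
	__success __log_level(2)
	__msg("r0 = -r0")
	/* Placeholder pattern; replace with the real register state printed
	 * on the neg instruction line, something along the lines of
	 * "R0_w=scalar(smin=-255,smax=0".
	 */
	__msg("smin=-255")
	__naked int bpf_neg_64bit_range(void)
	{
		asm volatile (
		"call %[bpf_get_prandom_u32];"
		"r0 &= 0xff;"			/* r0 in [0, 255] */
		"r0 = -r0;"			/* with the patch: r0 in [-255, 0] */
		"r0 = 0;"
		"exit;"
		:
		: __imm(bpf_get_prandom_u32)
		: __clobber_all);
	}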
